Theme: | Theme - Culture and Society, Activity - Collaboratory |
Status: | Active |
Start Date: | 2021-03-10 |
End Date: | 2022-08-31 |
Website: | |
Leads |
Frishkopf, Michael; Smallwood, Scott
Project Overview
How can we generate an adaptive sonic environment that enhances wellbeing?
This problem, sometimes subsumed under the general notion of a “smart” or “intelligent” environment, has been addressed in several literatures, from acoustic ecology and music therapy to health science and mechanical and civil engineering. But while AI has developed by leaps and bounds in recent years, its applications to customizing sonic environments have lagged.
Our collaboratory’s focus is AI-driven systems for adaptive soundscapes: linking people, environments, and algorithms in communications loops, using feedback to customize an auditory environment that enhances inhabitants’ social wellbeing. Potential components of such systems include computers and algorithms, biosensors (tracking physiological data or spatial position), environmental sensors (microphones, thermometers, barometers), big datasets (including historical profile data on system users), and sound generators (speakers or headphones).
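The feedback loop linking sensors to sound generation can be sketched minimally. Everything below is illustrative rather than part of any deployed system: the simulated heart-rate reading stands in for a real biosensor, and `volume` stands in for whatever soundscape parameter the system adapts.

```python
import random

def read_heart_rate():
    """Stand-in for a real biosensor; returns a simulated beats-per-minute value."""
    return random.gauss(72, 8)

def adapt_volume(volume, heart_rate, target_bpm=65, gain=0.005):
    """Simple proportional feedback: lower the soundscape volume as
    arousal (heart rate) rises above the target, raise it as arousal falls."""
    error = heart_rate - target_bpm
    volume = volume - gain * error
    return min(1.0, max(0.0, volume))  # clamp to the valid range [0, 1]

# One communications loop: sense, adapt, (in a real system) render sound.
volume = 0.5
for _ in range(100):
    volume = adapt_volume(volume, read_heart_rate())
```

A real system would replace the proportional rule with a learned policy and drive an actual audio engine, but the sense-adapt-render cycle has this shape.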
The roles of AI include recognizing and interpreting bio- and environmental signals (e.g. using supervised learning), and developing effective responses (e.g. using reinforcement learning), whether by selecting, tuning, and mixing pre-recorded sounds, or by generating new ones (e.g. using Generative Adversarial Networks). Biosignal interpretation includes AI-mediated sonification, enabling users to learn to control their own autonomic systems. Such systems may draw upon, as well as generate, big datasets, which are important for supervised learning.
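As a sketch of the reinforcement-learning role described above, a simple epsilon-greedy bandit can learn which of several pre-recorded soundscapes best improves a wellbeing signal. The soundscape names and the simulated reward function here are entirely hypothetical; in a real system the reward would be derived from biosensor data.

```python
import random

# Hypothetical library of pre-recorded soundscapes (names are illustrative).
SOUNDSCAPES = ["forest", "rain", "ocean", "cafe"]

def wellbeing_reward(soundscape):
    """Stand-in for a wellbeing signal; a real system would derive this
    from physiological data rather than a fixed table plus noise."""
    base = {"forest": 0.8, "rain": 0.6, "ocean": 0.7, "cafe": 0.4}
    return base[soundscape] + random.gauss(0, 0.05)

def epsilon_greedy(trials=500, epsilon=0.1):
    """Epsilon-greedy bandit: mostly play the best-known soundscape,
    occasionally explore, and track a running mean reward per arm."""
    counts = {s: 0 for s in SOUNDSCAPES}
    values = {s: 0.0 for s in SOUNDSCAPES}
    for _ in range(trials):
        if random.random() < epsilon:
            choice = random.choice(SOUNDSCAPES)   # explore
        else:
            choice = max(values, key=values.get)  # exploit
        r = wellbeing_reward(choice)
        counts[choice] += 1
        values[choice] += (r - values[choice]) / counts[choice]  # running mean
    return max(values, key=values.get)

best = epsilon_greedy()
```

A full system would use richer state (time of day, occupancy, user profiles) and continuous actions (mixing and tuning rather than selecting), but the explore/exploit structure is the same.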