A blog post by Andrew Hall PhD, CW+ Music and Sound Research Consultant
Ever since I began working on projects with CW+ at Chelsea and Westminster Hospital, I have been fascinated by the potential applications of music and sound technology within the acute healthcare environment. Software such as MaxMSP is increasingly enabling artists to create bespoke audio-visual installations that can be uniquely tailored to specific clinical areas, and can even be designed to contribute towards clinical goals. My first project to this end was a set of touchscreen rhythm exercises, developed with input from physiotherapists, intended to aid upper-limb rehabilitation following stroke: you can read about this project here.
At the end of 2016 I was asked to think about a new project which might be relevant for the new intensive care unit which was to be built at the hospital. The patient areas of the new unit were to be ‘sensor rich’, recording not only patient data but also environmental statistics, such as temperature and light levels. Immediately I began to wonder how the software described above could create new connections between patient data and the patient environment. CW+ already do a great deal of work researching and installing improvements to patients’ environments within the hospital, including the use of digital artworks by artists such as Brian Eno. What I envisaged in the new ICU was the opportunity to connect digital art to patient data, creating a responsive artwork which could react to a patient’s physiological changes in such a way as to enhance its therapeutic qualities.
Music listening has been shown to have benefits for patients in several different medical situations (see Staricoff et al 2011, and systematic reviews by Bradt et al 2010, 2013, 2014, 2016), but the differing impact of specific musical characteristics, such as notes, tunings, harmonies, rhythms and timbres, is still relatively unexplored – as a musician, it is these that are of most interest to me. One reason this area remains unexplored is perhaps because it is so difficult to isolate and examine the effect of specific musical traits, but the increasing power of new music technologies allows us to do just that.
I decided to focus my interests on tempo, perhaps the most readily quantified of all musical properties (being easily expressed in BPM, beats per minute) and one in which variations have been shown to cause differing effects in listeners’ heart rates. Bernardi et al (2006) found that ‘passive listening to music accelerates breathing rate and increases blood pressure, heart rate, and the LF:HF ratio (thus suggesting sympathetic activation) proportional to the tempo and perhaps to the complexity of the rhythm’. Similarly, Van Dyke et al (2017) found that ‘slowing down music could regulate the arousal effect of listening to music’.
Using the software tools mentioned above, I was able to create ‘Pulse Music’: a system which can automatically adapt the tempo of a piece of music in response to a listener’s heart rate. By connecting the data from a heart rate monitor to the Ableton Live software via MaxMSP, I have created a system in which the reported relationship between music and heart rate can be both tested and manipulated in real time. For example, if the listener’s heart rate is higher than clinically desirable, the music can be slowed down accordingly, and any resulting change in heart rate can be monitored, recorded, and reinterpreted by the system.
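The feedback principle at the heart of ‘Pulse Music’ can be sketched in a few lines of code. The following Python snippet is purely illustrative: the function name, the proportional mapping rule, the gain value, and the tempo limits are all my assumptions for the sake of example, not the actual MaxMSP/Ableton Live implementation.

```python
def adapt_tempo(current_tempo_bpm, heart_rate_bpm, target_hr_bpm,
                gain=0.5, min_tempo=40.0, max_tempo=180.0):
    """Illustrative sketch: nudge the music's tempo in response to heart rate.

    If the heart rate is above the target, slow the music proportionally;
    if below, speed it up. The result is clamped to a plausible tempo range.
    (The gain and limits here are arbitrary assumptions for demonstration.)
    """
    error = heart_rate_bpm - target_hr_bpm        # positive => HR too high
    new_tempo = current_tempo_bpm - gain * error  # slow the music when HR is high
    return max(min_tempo, min(max_tempo, new_tempo))

# Simulated control loop: each 'tick' reads the monitor and updates the tempo.
tempo = 100.0
for hr in [95, 92, 88, 84, 80]:  # heart rate drifting downward over time
    tempo = adapt_tempo(tempo, hr, target_hr_bpm=70)
    # In the real system, this tempo value would be sent on to Ableton Live
    # via MaxMSP, while the resulting heart-rate change is recorded.
```

In practice the loop would be driven by live sensor readings rather than a fixed list, and the mapping between heart rate and tempo could itself be adjusted as data about its effect accumulates.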
Putting this concept into practice has been a significant technical challenge, not least because it involved the hacking and recoding of various heart rate and ECG sensors in order to acquire their data (the brilliant BITalino sensor kit, from Portuguese company Plux, has proven enormously useful for this – see picture right). However, having worked through these technical challenges the system is now fully functional, and I am now working with a team of cardiologists at West Middlesex University Hospital, led by consultant cardiologist Dr Sadia Khan, to investigate its potential effects on a cohort of volunteer participants. If successful, the applications of this system could be many: one area of interest for cardiology is in the field of cardiac CT scanning, in which slower heart rates enable clearer imagery to be captured, thus reducing the need for retests and reducing patients’ exposure to radiation.
This system represents a new approach to ‘arts in health’ interventions, one in which new technology can create a real-time feedback loop between patient and environment (as shown in the diagram on the right). My work for CW+ is not alone in exploring this approach within the field of music in healthcare: for example, BCMI-MIdAS (‘Brain-Computer Music Interface for Monitoring and Inducing Affective States’) is a joint project between Plymouth and Reading Universities, which aims ‘to develop technology for building innovative intelligent systems that can monitor our affective state, and induce specific affective states through music, automatically and adaptively’ (BCMI-MIdAS, 2018). Further to this, the continuing advance of machine learning and artificial intelligence may pave the way to a flourishing of ‘smart’ arts interventions, simultaneously influencing the patient environment, measuring the impact of these changes, and learning how optimum conditions can be achieved: in this way, the arts may become more deeply integrated into healthcare environments than ever before.