Measurement Music: ‘Sonification’ as a tool for uncovering patterns in data
Computers and music have been mingling their intimate secrets for over 50 years. How music is made, performed and distributed has involved painstaking computer-based simulation, artificial intelligence research and, nowadays, the tracking of personal preferences and the means for delivering massive music access. The challenge of keeping up with music's demands has deeply affected computer technologies, which must meet expectations of quality, diverse forms of consumption and quirky turns of taste and style. It shouldn't be a surprise that where these two worlds develop in tandem, they spawn entirely novel practices. One of these is listening to data. The intersection of data science with composition has produced new ways of interacting with information and some profoundly interesting new music.
I gave a workshop to 30 scientists at the University of British Columbia's Peter Wall Institute for Advanced Studies in March 2016.
This "bring your own data" event was entitled "Measurement Music: 'Sonification' as a tool for uncovering patterns in data". The afternoon included hands-on sound creation by the participants. Using examples from a variety of datasets, I showed how sonification can lead to a deeper understanding of natural and human-influenced phenomena and to the creation of novel musical material and even whole compositions.
My own take on this begins with my computer-based composition work. Generating music from equations and exploring numerical models that simulate musical behavior were early interests. Rather than specify every detail of every note so that computer-generated sounds would do my bidding, I automated much of the detail algorithmically. In much the same way, sonification applies sequences of numbers to the generation of sound, with the numbers coming from datasets imported from outside the music itself. Listening to the results with our "musical ears" leads to a deeper understanding of the system behaviors we're translating into music. Whether it be global economic trends, atmospheric CO2 changes over glacial-interglacial cycles, or seemingly mundane events such as the ripening of fruit, sonification provides a means of representing patterns and processes in the natural world and within human societies. A key difference from visualizing data in graphs is that we hear trends and details simultaneously at multiple time scales. Think of long, melodic guitar lines superimposed over fast, precise drum rhythms, or all the layers present in a classical symphony.
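At its simplest, the mapping described above is just a linear rescaling of a data series onto a range of pitches. The sketch below illustrates the idea; it is not the software used in the workshop, and the data values and frequency range are hypothetical, chosen only to make the mapping audible in principle.

```python
def sonify(values, low_hz=220.0, high_hz=880.0):
    """Map a sequence of numbers onto audible frequencies (Hz).

    Each value is rescaled linearly from the data range [min, max]
    onto the pitch range [low_hz, high_hz] -- the most basic
    parameter mapping used in sonification.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against a flat series
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

# A rising data series becomes a rising glissando
# (hypothetical yearly readings, for illustration only):
co2_like = [280, 285, 300, 330, 380, 410]
print([round(f, 1) for f in sonify(co2_like)])
# → [220.0, 245.4, 321.5, 473.8, 727.7, 880.0]
```

In practice the resulting frequencies would be fed to a synthesizer or rendered as audio samples; the mapping itself is the core of the technique.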
In the workshop, I led a discussion of the practice and application of sonification across a wide array of disciplines. Those who participated in the tutorial portion brought their own laptops and downloaded some software tools from here.
As an example, one of the participants brought his measurements of tides from Washington State. Chris Harley is a Professor in the Department of Zoology and Institute for the Oceans and Fisheries at UBC who studies the impacts of climate change on rocky coasts. His research group, Harley Lab, is interested in how factors like warming and ocean acidification affect the ways that species interact with each other to create ecological patterns in time and space.
His intent was "to play around with some tide data", and he brought along a dataset covering one year's worth of hourly readings (from Neah Bay, Washington). "You can hear the rising and then falling chirp-chirp-chirp of the major high tides, which get highest at the new and full moons, and then the slightly lower trill of two roughly equal high tides per day, which occurs during the quarter moons." Successful and quite listenable, Chris' sonification conveys the phases of the tidal year. His next idea is to see whether different sites sound different.
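The pattern Chris describes can be mimicked with a toy model: a semidiurnal oscillation whose amplitude swells at spring tides and shrinks at neap tides, quantized to MIDI note numbers. This is a sketch with synthetic data, not his Neah Bay dataset or his actual mapping; the periods and pitch range are illustrative assumptions.

```python
import math

def synthetic_tide(hour):
    """Toy tide height: a semidiurnal cycle (~12.42 h) whose amplitude
    is modulated over a ~14.77-day spring/neap cycle."""
    semidiurnal = math.cos(2 * math.pi * hour / 12.42)
    spring_neap = 1.0 + 0.5 * math.cos(2 * math.pi * hour / (14.77 * 24))
    return semidiurnal * spring_neap

def to_midi(heights, low=48, high=84):
    """Quantize heights to MIDI note numbers over an assumed 3-octave range."""
    lo, hi = min(heights), max(heights)
    span = (hi - lo) or 1.0
    return [round(low + (h - lo) / span * (high - low)) for h in heights]

heights = [synthetic_tide(h) for h in range(24 * 30)]  # one month, hourly
notes = to_midi(heights)

# Spring-tide highs (near hour 0) reach higher notes than
# neap-tide highs (around day 7 of the cycle):
print(max(notes[:40]), max(notes[160:200]))
```

Played back, the spring-tide peaks produce the rising "chirps" and the neap-tide days the lower, more even trill that Chris heard in his data.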
For related activities of mine, please see: