Music from Charts: 100 Days of Sonification

Jump to: Week 01 / Week 02 / Week 03 / Week 04 / Week 05 / Week 06 / Week 07 / Week 08 / Week 09 / Week 10 / Week 11 / Week 12 / Week 13 / Week 14 / The Procedure / Sources + Inspiration / Tracks on YouTube / Contact


Week 01: 001–007



Week 02: 008–014



Week 03: 015–021


Week 04: 022–028


Week 05: 029–035



Week 06: 036–042

Includes a sonification of ozone air quality trend data from the EPA, 1980–2019.


Week 07: 043–049



Week 08: 050–056



Week 09: 057–063



Week 10: 064–070



Week 11: 071–077


Week 12: 078–084


Week 13: 085–091


Week 14: 092–100


The Procedure

In 2020, my partner Mary introduced me to data sonification as a way of making Internet content accessible beyond screen readers. After tinkering with some of the available tools, I realized that additional meanings could be ‘heard’ in a data visualization.

Over 100 consecutive days in 2021, I explored the overlap between data sonification and ambient music in as many different ways as I could, following the rough guidelines of the 100 Day Project. Every day, I created a sonification of a chart using the following workflow:

1. Find an interesting data table that can be displayed as a chart or graph. In some cases, I’ve needed to get the raw data as a .CSV from the web page, pull it into Google Sheets to normalize it, and then output it as a web page (a sketch of this cleanup appears after the list).

2. Load the data table into the SAS Graphics Accelerator plugin for Chrome. This plugin lets you capture data tables, translate them into charts and graphs, and then create audio representations of the data when viewing the visualizations. It also has a map sonification function that I’ll try out. (A few biases I’ve found so far: the plugin makes all the decisions about how to map the data onto five octaves of a chromatic scale, an effect illustrated after the list. This is a point of contention in the sonification community, as you’re immediately locked into a specific Western musical interpretation of the data.)

3. Create a data visualization from the table that can be output by the plugin. In most cases, this is a bar chart or linear graph.

4. Prepare to ‘perform’ the visualization. There are a few settings to select: chord or melody for the sound itself, and explore mode (manually ‘playing’ the chart) versus scan mode, which plays the chart automatically. If I’m using explore mode, I pick a BPM that feels on theme with the data and run a metronome to stay on time.

5. Screen/audio record the performance of the data visualization via QuickTime Pro. If there are multiple charts or graphs, I will record each as its own video with timing to match. (I'm using Soundflower to capture the audio out of Chrome so it's incorporated into the video.)

6. Pull the screen recordings from QuickTime Pro into a project in Ableton Live. I’ll set the BPM to match the timing of the performance.

7. Convert the audio recordings into MIDI tracks using Ableton’s audio-to-MIDI detection. (Bias: While I wish this step were foolproof, Ableton can mangle the audio data that was in the original sonification. In a few cases I have left in stray data [notes], but I have also had to manually clean up the MIDI data so it’s accurate, move it up or down an octave to fit an instrument’s register, quantize it to 1/16 + 1/16T to get it onto a basic grid [the arithmetic is sketched after the list], or give up on whole data sets because a clean conversion was impossible.)

8. Assign instruments to the MIDI tracks based on the theme of the initial data. This is usually the first point in the process at which I hear what the data sounds like and start responding to it with musical decisions.

9. Set levels, panning, and effects for the track. Every data set has its own story, and while I want the source data to stay true to what it’s communicating, I’ve made some creative decisions to emphasize the theme of the data’s subject or to acknowledge biases that may be inherent in the data set itself. I have also made choices regarding instrument and timbre to push the sonification into ambient, electronic, or classical music genres.

10. Bounce and master a mix of the audio. For this step, I use iZotope’s Ozone and the Cloudbounce desktop app, which I’ve also leveraged for my other musical project.

11. Post the mix to my YouTube channel for this project. You can go here to hear all the tracks and get links to the source data for each sonification. For the second half of the project, I’ve only posted the final iteration of a particular data set’s sonification (to save you from boring and incremental repetition).
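A few of the steps above are concrete enough to sketch in code. For step 1, here’s a minimal sketch of the spreadsheet cleanup in Python with pandas; the file name and the “year”/“value” column names are placeholders, so substitute whatever the source table actually uses.

```python
import pandas as pd

# Load the raw table saved from the source web page.
df = pd.read_csv("source_data.csv")

# Keep only the columns the chart needs and drop incomplete rows.
df = df[["year", "value"]].dropna()

# Rescale the measurement column to a 0-1 range so the eventual
# pitch mapping isn't skewed by the data's original units.
vmin, vmax = df["value"].min(), df["value"].max()
df["value"] = (df["value"] - vmin) / (vmax - vmin)

# Output the cleaned table as a web page, which is what the
# Graphics Accelerator plugin captures.
df.to_html("normalized_table.html", index=False)
```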

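For the bias noted in step 2: I don’t know the plugin’s exact mapping algorithm, but the effect is roughly what this sketch shows, with every data value snapped to one of the 61 semitones in a five-octave chromatic range. The base note and the example readings here are assumptions for illustration only.

```python
def value_to_midi(value, vmin, vmax, low_note=36):
    """Map a data value onto five chromatic octaves (61 MIDI notes).

    low_note=36 is C2; the plugin's actual range may differ.
    """
    span = 5 * 12  # five octaves, twelve semitones each
    fraction = (value - vmin) / (vmax - vmin)
    return low_note + round(fraction * span)

# Hypothetical ozone readings: nearby values can collapse onto the
# same semitone, and every output lands on a Western chromatic pitch.
for reading in [0.05, 0.0501, 0.075, 0.1]:
    print(reading, "->", value_to_midi(reading, 0.05, 0.1))
```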

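And for the quantization in step 7: Ableton handles this internally, but the arithmetic is simple. In beats, a sixteenth note is 1/4 of a beat and a sixteenth-note triplet is 1/6 of a beat, so each note start snaps to whichever gridpoint across the two grids is closest. The note times below are made up.

```python
def quantize(beat, steps=(0.25, 1 / 6)):
    """Snap a note start (in beats) to the nearest point on the
    combined 1/16 (0.25-beat) and 1/16T (1/6-beat) grids."""
    candidates = [round(beat / step) * step for step in steps]
    return min(candidates, key=lambda point: abs(point - beat))

for start in [1.26, 1.30]:
    print(start, "->", round(quantize(start), 4))
# 1.26 -> 1.25    (snaps to the straight-sixteenth grid)
# 1.3  -> 1.3333  (snaps to the sixteenth-triplet grid)
```
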
Sources + Inspiration

I used this project as a way to learn more about sonification, and do not claim to be an expert in any of this. There are folks who have spent decades of their careers mastering this set of skills and creating plenty of extraordinary sonifications.

If you're seeking more inspiration in the realm of data sonification, here are some good places to start:

This is a good article by Carolyn Beans about scientists and musicians collaborating, highlighting the work of Margaret Schedel and Carla Scaletti.

Check out the work of Brian Foo, who is the Data-Driven DJ and curator of Citizen DJ. This episode of 20 Thousand Hertz goes into his process for the data sonification of an epilepsy patient's seizures.

I liked this high-level overview of the sonification practice by Miriam Quick and Duncan Geere. They’re launching a new podcast called Loud Numbers focused on the practice, with many examples they’ve created by hand. The newsletter associated with the podcast is great.

If you want to go deep on the practice of sonification, The Sonification Handbook is available as a free PDF download. This is a huge resource I’m slowly working through.

This tool, used by the team at Reveal to assign time-series data to MIDI notes, looks super-useful; more details on it here. If you wanted to keep the output in code, using it together with Sonic Pi could yield some interesting results.


Contact

Email: dksherwin (at) msn (dot) com
Twitter: https://www.twitter.com/changeorder
Instagram: https://www.instagram.com/dksherwin
