Last time I talked about our Sensory Sketching Workshop and focused primarily on smell, a sense that I am fascinated and puzzled by. I won’t say that I came to a dead end with smell, but I did become frustrated with the logistics of representing data through smell. The fact that smell can’t be shared digitally is one obstacle; the other is the transient nature of smell: it’s difficult to capture and sustain. All the DIY methods of collecting smell either only work for a handful of sources (like flowers) or fail to produce faithful replicas of the actual scent (I’m looking at you, Scratch and Sniff). Until I have my own smell-atory, or more realistically, access to a lab, I will be focusing more energy on sound. As the only sense besides sight that can be distributed digitally, sound reaches a wider audience and doesn’t necessarily require physical materials. To top it off, sound is well within Jordan’s expertise.
Sound is already being used to encode data, a practice known as sonification. Apple and Highcharts have integrated out-of-the-box audio implementations for their charts. For a more creative and musical approach, you can listen to the sonorous and fabulous sonification podcast, Loud Numbers, or check out this animation-sonification from the BBC that visualizes and sonifies COVID cases. While a wide variety of tools can be used, from physical synthesizers to programming languages, you don’t need fancy tools or knowledge to get started. If you want to learn more about sonification, make sure to check out the Sonification Archive.
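To give a flavor of how little machinery you need, here’s a minimal sketch in Python of the most common sonification technique: mapping each data value to a pitch and rendering it as a short tone. This is my own illustrative example using only the standard library, not code from any of the tools mentioned above; the pitch range and note length are arbitrary choices.

```python
import math
import struct
import wave

def value_to_freq(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Linearly map a data value into a pitch range (A3 to A5 here)."""
    t = (value - vmin) / (vmax - vmin)
    return fmin + t * (fmax - fmin)

def sonify(values, path="sonification.wav", note_sec=0.3, rate=44100):
    """Render each value as a short sine tone and write a mono 16-bit WAV."""
    vmin, vmax = min(values), max(values)
    frames = bytearray()
    for v in values:
        freq = value_to_freq(v, vmin, vmax)
        for n in range(int(note_sec * rate)):
            sample = math.sin(2 * math.pi * freq * n / rate)
            frames += struct.pack("<h", int(sample * 0.5 * 32767))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# A hypothetical week of readings: higher values become higher pitches.
sonify([30, 55, 80, 120, 95, 60])
```

Rising values become rising pitches, which is the same basic mapping the out-of-the-box chart tools use under the hood.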
Jordan and I tend to focus on ways to create participatory data experiences that are accessible to people across a wide spectrum of data knowledge. Our next project is no different: we aim to assemble a one-off human data chorale, which we have dubbed the Human Synthesizer. And some of Jordan’s previous work — her project on data recipes (including a Heartbeat Duet she performed with her mom) and the participatory sonification she conducted at the 2021 Loud Numbers festival — employs some of the key techniques we’ll be using in the Human Synthesizer: engaging the voice and audience participation.
The Human Synthesizer will be a unique, facilitated experience where we will embody AQI data through our voices (tip: warm up with your do-re-mis or some gargling). You can think of it as live sonification improv.
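One way a group could embody AQI data with do-re-mis is to assign each air-quality band a solfège syllable to sing. The band cutoffs below follow the standard US EPA AQI categories, but the syllable assignments are purely my own hypothetical sketch, not the actual score we’ll use in the workshop.

```python
# Hypothetical mapping: worse air quality climbs the solfège scale.
# Cutoffs are the upper bounds of the standard US EPA AQI categories.
AQI_BANDS = [
    (50, "do"),    # Good
    (100, "re"),   # Moderate
    (150, "mi"),   # Unhealthy for Sensitive Groups
    (200, "fa"),   # Unhealthy
    (300, "sol"),  # Very Unhealthy
    (500, "la"),   # Hazardous
]

def aqi_to_syllable(aqi):
    """Return the sung syllable for an AQI reading."""
    for cutoff, syllable in AQI_BANDS:
        if aqi <= cutoff:
            return syllable
    return "la"  # off-the-chart readings stay at the top note

# A hypothetical week of AQI readings becomes a short score:
week = [42, 88, 130, 210, 95]
print([aqi_to_syllable(a) for a in week])  # → ['do', 're', 'mi', 'sol', 're']
```

In a live session, a facilitator could call out readings while the chorus sings the corresponding syllables, which is the improv part.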
Why are we assembling a human data chorale, other than the fact that it sounds awesome? As researchers, we naturally have burning questions:
Is your interest piqued? Your voice warmed up? Fill in the interest form here.
Until next time!