Why scientists are making space data into sounds

The Eagle Nebula (also known as M16 or the Pillars of Creation) was one of the three cosmic objects sonified and used in the study. Credit: X-ray: NASA/CXC/INAF/M.Guarcello et al.; Optical: NASA/STScI

by Kim Arcand and Megan Watzke

Images from telescopes like the James Webb Space Telescope have expanded the way we see space. But what if you can’t see? Can stars be turned into sounds instead? In this guest editorial, NASA scientists and science communicators Dr Kimberly Arcand and Megan Watzke explain how and why they and their colleagues transformed telescope data into soundscapes to share space science with the whole world. To learn more, read their new research published in Frontiers in Communication.

When you travel somewhere where they speak a language you can’t understand, it’s usually important to find a way to translate what’s being communicated to you. In some ways, the same can be said about scientific data collected from cosmic objects. A telescope like NASA's Chandra X-ray Observatory captures X-rays, which are invisible to the human eye, from sources across the cosmos. Similarly, the James Webb Space Telescope captures infrared light, also invisible to the human eye. These different kinds of light are transmitted down to Earth packed up in the form of ones and zeroes. From there, the data are transformed into a variety of formats — from plots to spectra to images.

This last category — images — is arguably what telescopes are best known for. For most of astronomy's long history, however, most people who are blind or low vision (BLV) have not been able to fully experience the data that these telescopes have captured. NASA’s Universe of Sound data sonification program, a collaboration with NASA’s Chandra X-ray Observatory and NASA’s Universe of Learning, translates visual data of objects in space into sound. All telescopes in space — including Chandra, Webb, the Hubble Space Telescope, and dozens of others — need to send the data they collect back to Earth as binary code, or digital signals. Typically, astronomers and others turn these digital data into images, which are often spectacular and make their way into everything from websites to pillowcases.

The music of the spheres

By taking these data through another step, however, experts on this project mathematically map the information into sound. This data-driven process is not a reimagining of what the telescopes have observed; it is yet another kind of translation. Instead of a translation from French to Mandarin, it’s a translation from the visual to the auditory. Releases from the Universe of Sound sonification project have been immensely popular with non-experts, from viral news stories that potentially reached over two billion people according to press metrics, to a tripling of the usual Chandra.si.edu website traffic.
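To give a flavor of what "mathematically mapping" image data to sound can mean, here is a minimal sketch in Python. This is an illustrative assumption, not the Universe of Sound team's actual pipeline: it uses one common sonification convention in which a virtual cursor scans an image column by column, a pixel's vertical position sets pitch, and its brightness sets volume.

```python
# A toy pixel-to-sound mapping (illustrative assumption, not the actual
# Universe of Sound pipeline): vertical position -> pitch, brightness -> volume.

def sonify_column(column, f_min=220.0, f_max=880.0):
    """Map one image column of pixel brightnesses (0-255, top row first)
    to (frequency_hz, amplitude) pairs. Higher rows map to higher pitch."""
    n = len(column)
    notes = []
    for row, brightness in enumerate(column):
        # Row 0 is the top of the image, so invert it: top = highest pitch.
        position = 1.0 - row / max(n - 1, 1)
        freq = f_min * (f_max / f_min) ** position   # logarithmic pitch scale
        amp = brightness / 255.0                     # brightness -> loudness
        notes.append((round(freq, 1), round(amp, 2)))
    return notes

# Scan a tiny 3x3 "image" column by column, as a sonification cursor would.
image = [
    [  0, 128, 255],   # top row
    [ 64,   0, 192],
    [255,  32,   0],   # bottom row
]
for x in range(3):
    column = [image[y][x] for y in range(3)]
    print(sonify_column(column))
```

The logarithmic frequency scale mirrors how musical pitch is perceived: each equal step up the image multiplies frequency by the same factor, so the top of the frame sits exactly two octaves above the bottom with these defaults.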

But how are such data sonifications perceived by people, particularly members of the BLV community? How do data sonifications affect participant learning, enjoyment, and exploration of astronomy? Can translating scientific data into sound help enable trust or investment, emotionally or intellectually, in scientific data? Can such sonifications help improve awareness of accessibility needs that others might have?


Listening closely

This study used our sonified NASA data of three astronomical objects. We surveyed BLV and sighted individuals to better understand how participants experienced the sonifications, in terms of their enjoyment, understanding, and trust of the scientific data. Analyses of responses from 3,184 sighted and BLV participants yielded significant self-reported learning gains and positive experiential responses.

The results showed that presenting astrophysical data through multiple senses, as these sonifications do, could establish additional avenues of trust, increase access, and promote awareness of accessibility in both sighted and BLV communities. In short, sonifications helped people access and engage with the Universe.

Sonification is an evolving and collaborative field. It is a project not only done for the BLV community, but with BLV partnerships. A new documentary available on NASA’s free streaming platform NASA+ explores how these sonifications are made and the team behind them. The hope is that sonifications can help communicate the scientific discoveries from our Universe with more audiences, and open the door to the cosmos just a little wider for everyone.

An image supplied by Dr Kimberly Arcand which illustrates her team's sonification research.

About the authors

Dr. Kimberly Arcand is an expert in astronomy visualization and has been a pioneer in 3D imaging, printing, and extended reality applications with astrophysics data. She has worked for NASA's Chandra X-ray Observatory at the Smithsonian Astrophysical Observatory since 1998. Her current research includes sonification of spatial data, machine learning as applied to image processing, and other intersections of emerging technology and astrophysics. She can be found on Instagram and X at @kimberlykowal.

Megan Watzke has spent her career sharing the wonders of science with the widest range of audiences possible. From her role as the press officer for the Chandra X-ray Observatory to her ‘public science’ projects that inject science into daily experiences, she seeks to de-stigmatize science for everyone, but especially for underrepresented groups including women.

REPUBLISHING GUIDELINES: Open access and sharing research is part of Frontiers’ mission. Unless otherwise noted, you can republish articles posted in the Frontiers news site — as long as you include a link back to the original research. Selling the articles is not allowed.