Description
Thomas Watts¹, Ahlam Lee², Roberto Myers³, Jinwoo Hwang³

¹Cornell University, Ithaca, New York, United States
²Xavier University, Cincinnati, Ohio, United States
³The Ohio State University, Columbus, Ohio, United States

We present a novel interactive sonification interface that allows people with visual impairments to perceive multi-dimensional scientific data through their auditory sense. People with disabilities are traditionally underrepresented in science, technology, engineering, and math (STEM) fields. Developing new accommodation technology is therefore important for motivating their participation in scientific research and education, which could cultivate a diverse STEM workforce and ultimately meet the nation's STEM workforce needs. Participation in STEM fields, which typically offer higher-paying and more secure jobs, will also enable people with disabilities to join mainstream society and serve as role models for other underrepresented groups. In this regard, we focus on people with visual impairments, whose participation in STEM research and education has been especially low because most scientific data acquisition and analysis processes rely heavily on visual perception.

We have developed a new digital interface that converts multi-dimensional scientific data (e.g., electron microscopy images) to sound waves, a process called sonification, which allows individuals to perceive and understand the data through their auditory sense. We opted to develop our sonification software for the sixth-generation Apple iPad. The first prototype of our iPad application is built to sonify a high-angle annular dark-field (HAADF) image of a β-Ga2O3 lattice. The image is converted to an intensity matrix whose entries are the pixel intensities of the image in 16-bit grayscale. A portion of the iPad's screen is mapped to points on the image (i.e., entries of the intensity matrix). Based on the location of the user's touch on the screen, the associated point on the image is converted to sound through one of two modes of functionality.

Our first mode of sonification takes advantage of the human capacity for acute pitch discrimination, using variations in pitch to convey the vertical location of the sound source. We elected to map points of higher pixel intensity to higher frequencies and points of lower intensity to lower frequencies. We conjecture that blind individuals will be able to trace a finger along the iPad's screen and identify irregularities in the β-Ga2O3 lattice (e.g., crystallographic defects), which would appear as a distinct, underrepresented range of frequencies in the sonic space generated by the image.

Our second mode of sonification uses perceived loudness, through variations in amplitude, to give the user the impression that the sound produced by their touch originates from a point in 3D space. This approach leverages the fact that blind users are more sensitive to "binaural sound-location cues." We further conjecture that, through this second mode, blind users will be able to gain a spatial understanding of the β-Ga2O3 lattice and detect variations in the number of gallium atoms per atomic column. Our hope is that this application will serve as a framework upon which more advanced sonification techniques may be built.
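To make the touch-to-sound mapping of the first mode concrete, the sketch below shows one plausible Swift implementation: a touch location inside the mapped screen region is converted to an entry of the 16-bit intensity matrix, and that pixel intensity is mapped linearly onto a frequency range. The function and parameter names and the 220–1760 Hz pitch bounds are our own illustrative assumptions; the abstract does not specify them.

```swift
import CoreGraphics

// Illustrative sketch only: names and the default frequency range are
// assumptions, not values taken from the authors' implementation.

/// Map a touch inside `viewBounds` to an entry of the 16-bit grayscale
/// intensity matrix, then map that intensity linearly onto a frequency.
/// Higher pixel intensity -> higher pitch (the first sonification mode).
func frequency(forTouchAt point: CGPoint,
               in viewBounds: CGRect,
               intensityMatrix: [[UInt16]],
               minFreq: Double = 220.0,
               maxFreq: Double = 1760.0) -> Double {
    let rows = intensityMatrix.count
    let cols = intensityMatrix[0].count
    // Normalize the touch location to [0, 1) within the mapped region;
    // clamping just below 1 keeps the index in bounds at the trailing edge.
    let u = min(max((point.x - viewBounds.minX) / viewBounds.width, 0), 0.999)
    let v = min(max((point.y - viewBounds.minY) / viewBounds.height, 0), 0.999)
    // Index into the intensity matrix (row = vertical, column = horizontal).
    let row = Int(v * CGFloat(rows))
    let col = Int(u * CGFloat(cols))
    let intensity = Double(intensityMatrix[row][col]) / 65535.0 // 16-bit range
    return minFreq + intensity * (maxFreq - minFreq)
}
```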
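The second mode could be realized as a continuously running tone whose stereo pan and volume follow the user's touch, as in this sketch built on AVAudioEngine's AVAudioSourceNode. The sine-wave generator, the linear pan law, and the volume floor are all assumptions made for illustration; the abstract does not describe the authors' audio pipeline.

```swift
import AVFoundation

// Minimal sketch, assuming an AVAudioEngine-based tone generator.
// The pan/volume mapping below is illustrative, not the authors' design.
final class SpatialTone {
    private let engine = AVAudioEngine()
    private var phase = 0.0
    private var phaseIncrement = 0.0
    private var sampleRate = 44_100.0
    var frequency: Double = 440 { didSet { updateIncrement() } }

    // Render block fills each output buffer with a sine wave sample-by-sample.
    private lazy var source = AVAudioSourceNode { [weak self] _, _, frameCount, audioBufferList -> OSStatus in
        guard let self = self else { return noErr }
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        for frame in 0..<Int(frameCount) {
            let sample = Float(sin(self.phase))
            self.phase += self.phaseIncrement
            if self.phase > 2 * .pi { self.phase -= 2 * .pi }
            for buffer in buffers {
                UnsafeMutableBufferPointer<Float>(buffer)[frame] = sample
            }
        }
        return noErr
    }

    private func updateIncrement() {
        phaseIncrement = 2 * .pi * frequency / sampleRate
    }

    func start() throws {
        sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
        updateIncrement()
        engine.attach(source)
        engine.connect(source, to: engine.mainMixerNode, format: nil)
        try engine.start()
    }

    /// `x` in [0, 1] pans the tone left-to-right across the stereo field;
    /// `intensity` in [0, 1] scales loudness so brighter (denser) atomic
    /// columns sound nearer, per the second sonification mode.
    func update(x: Double, intensity: Double) {
        source.pan = Float(2 * x - 1)               // AVAudioMixing pan, -1...1
        source.volume = Float(0.1 + 0.9 * intensity)
    }
}
```

In a touch handler, one would call `update(x:intensity:)` with the normalized horizontal touch position and the looked-up pixel intensity, so the tone appears to move with the finger across the lattice image.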
