2019 Conference on Implantable Auditory Prostheses
14-19 July 2019
Granlibakken, Lake Tahoe
W41: A BINAURAL ADVANCED COMBINATION ENCODER STRATEGY FOR SOUND
LOCALIZATION WITH THE CCI-MOBILE RESEARCH PLATFORM
Stephen R. Dennison, Alan Kan, Ruth Y. Litovsky
Electrical & Computer Engineering, University of Wisconsin-Madison, Madison, WI, USA
Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
Bilateral cochlear implants (biCIs) restore speech understanding and some spatial hearing
abilities. However, there is still a gap in performance between listening with biCIs and normal
hearing, especially in real-life noisy environments. In particular, recent data from our lab showed
that biCI listeners had difficulty distinguishing between stationary and moving sounds, and the
direction of movement of non-stationary sounds. While some of the limitations in performance
can be attributed to pathology and the surgical challenge of placing two implants at the same
insertion depth, a part of the gap may instead be caused by a lack of synchrony between the
two sound processors. For clinical CIs, each sound processor is controlled by an independent
time clock, and there is no communication across processors. Furthermore, each CI processes
sounds independently in each ear. For n-of-m sound processing strategies, such as the
Advanced Combination Encoder (ACE) strategy, the processors in each ear could be selecting
different electrodes to stimulate, thereby reducing the fidelity of binaural cues such as interaural
time and level differences (ITD and ILD, respectively).
The CCi-Mobile research processor developed by UT-Dallas aims to solve both of these problems:
a single system clock drives a central processor that controls both implants. This allows
for precise control of the interaural cues presented via electrical stimulation. The central
processor allows for real-time binaural signal processing which enables the development of truly
binaural processing strategies, and the testing of these strategies in real-time, realistic
conditions. Hence, we developed a binaural ACE (bACE) strategy to test the hypothesis that
synchronized processing and stimulation will improve sound localization performance. In
traditional ACE, the n highest peaks in each time frame are chosen for stimulation. With
independent processing in each ear, the n channels chosen may not always be the same.
Hence, binaural cues, which are compared in the brain on a frequency-by-frequency basis, may
not be transmitted on the same electrodes across the ears, resulting in disrupted binaural
processing. In bACE, channel selection is conducted on inputs from both ears together. The n/2
highest peaks in each ear are selected and the corresponding electrode in the opposite ear is
also chosen for stimulation.
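The channel-selection rules described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each ear's input for one time frame is reduced to a vector of per-channel envelope magnitudes, and the function names (ace_select, bace_select) are hypothetical.

```python
import numpy as np

def ace_select(env, n):
    """Conventional ACE (sketch): pick the n channels with the
    largest envelope magnitudes in ONE ear's time frame."""
    return np.sort(np.argsort(env)[-n:])

def bace_select(left_env, right_env, n):
    """Binaural ACE (bACE) sketch: each ear contributes its n/2
    largest-envelope channels, and every selected channel is then
    stimulated in BOTH ears, so interaural cues are carried on
    matching electrodes."""
    half = n // 2
    left_peaks = np.argsort(left_env)[-half:]    # n/2 peaks, left ear
    right_peaks = np.argsort(right_env)[-half:]  # n/2 peaks, right ear
    # Union of both ears' peaks: the same electrode set is used in
    # each ear (up to n channels total, fewer if the peaks overlap).
    return np.union1d(left_peaks, right_peaks)
```

With independent ACE processing, mismatched left/right envelopes yield different electrode sets per ear; bace_select instead returns one shared set, which is the property hypothesized to preserve interaural time and level difference fidelity.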
We conducted a localization experiment with bilateral Cochlear implant users in which the stimuli
consisted of stationary and moving sounds spanning trajectories of 10°, 20°, or 40°. Three
conditions were compared: ACE and bACE implemented on the CCi-Mobile, and participants
listening with their own clinical processors. Our results will help determine whether synchronized
processing improves discrimination of stationary and moving sounds, and the direction of
moving sounds.
This work was supported by NIH-NIDCD R01DC016839 and NIH-NIDCD R01DC03083 to RYL,
NIH-NIDCD R03DC015321 to AK, and NIH-NICHD U54HD090256 to the Waisman Center.