Research
Research Topics
Listening to someone talk typically feels effortless and automatic. Without our having to consciously command our mind to comply, an interlocutor can convey information, evoke emotion, or exchange social pleasantries. Despite the ease with which hearing humans communicate verbally, the process of transforming human-articulated sounds into an open-ended space of novel and complex meanings -- i.e., speech comprehension -- is a computationally intricate, and currently unsolved, challenge.
What computational solution does the brain implement to overcome this challenge? This is the big question that guides the lab's research. Our goal is to delineate the processing architecture supporting speech comprehension: what representations the brain generates from the auditory signal, and what computations are applied to those representations over the time course of processing. Describing these components of the human processing architecture is key to understanding auditory, speech, and language processing -- an endeavor that, we believe, requires complementary insights from linguistics, machine learning, and neuroscience to succeed.
Investigating how the brain processes the rapid sequences of speech sounds in continuous speech
Rather than applying a fixed, static filter to the input signal, the brain passes information between neural populations as a function of time. This allows for joint encoding of both speech content (e.g., /b/ vs. /p/) and relative order (e.g., 1st vs. 2nd vs. 3rd). It is precisely this process that allows you to tell the difference between "melons" and "lemons", or "pets" and "pests" -- as the sketch below illustrates.
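To make the idea concrete, here is a minimal toy sketch in Python -- not the lab's actual model -- using letters as stand-ins for phonemes. A content-only "bag of phonemes" code cannot tell anagrams apart, whereas a code that jointly tags each phoneme with its position can:

```python
import numpy as np

# Toy inventory of "phonemes" (letters here, purely for illustration).
PHONEMES = sorted(set("melons"))              # ['e', 'l', 'm', 'n', 'o', 's']
IDX = {p: i for i, p in enumerate(PHONEMES)}

def bag_of_phonemes(word):
    """Content-only code: counts of each phoneme; order is discarded."""
    v = np.zeros(len(PHONEMES))
    for p in word:
        v[IDX[p]] += 1
    return v

def joint_code(word):
    """Content-by-order code: each phoneme is tagged with its position
    (a one-hot identity-by-position matrix, flattened)."""
    v = np.zeros((len(PHONEMES), len(word)))
    for t, p in enumerate(word):
        v[IDX[p], t] = 1
    return v.ravel()

# Anagrams are indistinguishable under the content-only code...
assert np.array_equal(bag_of_phonemes("melons"), bag_of_phonemes("lemons"))
# ...but distinct once order is encoded jointly with content.
assert not np.array_equal(joint_code("melons"), joint_code("lemons"))
```

The joint code here uses explicit position slots only for simplicity; the hypothesis sketched above is that the brain achieves the analogous content-and-order tagging through time-varying dynamics across neural populations, not literal position labels.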
To learn more about the lab's research and our general scientific principles, read this short paper!