The first year of life and the first years of unsupervised speech recognition: How we are using big corpora to understand infant language development

Wednesday, February 24, 2016 - 15:00 - 16:30
Annexe, Building R - Stadscampus, University of Antwerp (Rodestraat 14)
Ewan Dunbar

ABSTRACT - This talk is a briefing on the state of the art in modelling the early development of speech perception and lexical acquisition from large unannotated speech corpora, a problem that has brought engineers and computational psycholinguists together under the banner of "unsupervised speech recognition." I'll summarize what we think we know today about how infants start to learn the sounds and words of their native language, what that tells us about building a reasonable computational model, and the recent history of joint applied/cognitive research on unsupervised ASR and infant speech development. Then I'll zoom in on some of the best results from the 2015 Zero Resource Speech Challenge (ZeroSpeech) at Interspeech and, in particular, a model in which we learn proto-words using spoken term discovery in order to bootstrap the learning of proto-phonemes. Finally, I'll briefly discuss some new research in which we evaluate which dimensions/features are coded in speech representations, which we hope will allow us to better tie empirical psycholinguistics together with computational modelling.
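To make the bootstrapping idea concrete, here is a minimal toy sketch (my own illustration, not the system presented in the talk): spoken term discovery yields pairs of repeated word-like fragments; aligning each pair with dynamic time warping (DTW) produces frame-to-frame links, and frames linked across repetitions can be merged into shared proto-phoneme classes. Real systems operate on multidimensional acoustic features; scalar "frames" are used here purely for readability.

```python
# Hypothetical sketch: DTW-align two discovered repetitions of the same
# word, then merge aligned frames into proto-phoneme classes (union-find).

def dtw_align(a, b):
    """Align two feature sequences (scalars here, for simplicity) and
    return the warping path as a list of (i, j) index pairs."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    # Backtrace from the end of both sequences to the start.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j],     i - 1, j),
                      (cost[i][j - 1],     i,     j - 1))
    return list(reversed(path))

# Union-find: frames linked by a DTW path fall into the same proto-class.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# Two "repetitions" of the same word, spoken at different speeds.
token_a = [1.0, 1.1, 5.0, 5.1, 9.0]
token_b = [1.0, 5.0, 5.1, 5.0, 9.1, 9.0]

pairs = dtw_align(token_a, token_b)
for i, j in pairs:
    union(("a", i), ("b", j))
```

In a full system the merged frame classes would seed iterative re-clustering of the whole corpus, and the improved frame representations would in turn sharpen the next round of term discovery.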


BIO - Ewan Dunbar is currently a postdoctoral fellow at the Laboratoire de Sciences Cognitives et Psycholinguistique, a highly interdisciplinary lab involving the École des Hautes Études en Sciences Sociales (EHESS), the Centre National de la Recherche Scientifique (CNRS) and the École Normale Supérieure (ENS), hosted at the Département d'Études Cognitives of the ENS in Paris. He started off studying Linguistics and Computing at the University of Toronto and then earned an MA in Linguistics from the same university, with a thesis on the acquisition of morphophonology. In 2008, he moved to the University of Maryland, where he did a PhD in Linguistics on statistical knowledge and learning in phonology under the supervision of William Idsardi and Naomi Feldman. His interest in language has always proceeded hand in hand with his interest in computational modeling, and his research efforts have found a home in the Synthetic Language Learner Project, which brought together researchers with diverse backgrounds to implement a computational model of early language acquisition and test its predictions with behavioural experiments and brain imaging techniques. You can find more information on his personal webpage.

Signups closed for this CLiPS Colloquium