Our research is multidisciplinary and spans several application domains. The common denominator of our team is the use and development of advanced machine learning and data acquisition methods applied to the experimental sciences. In particular, we focus on deep learning and representation learning to answer questions about ethology, human cognition and the environment.


Halting the rapid loss of marine biodiversity in the face of climate change and pollution is one of the global sustainability challenges of the decade. The importance of environmental monitoring in enabling effective conservation measures is increasingly recognized. Although a variety of methods exist to survey local species richness, they tend to be costly, invasive and limited in space and time, highlighting the need for better environmental data acquisition techniques. To that end, recent research points to the effectiveness of (bio)acoustic signals for studying population dynamics and modelling variations in species richness.

The DYNI team conducts research on the detection, clustering, classification and indexing of bioacoustic big data across various ecosystems (primarily marine) and spatio-temporal scales. The aim is to reveal information about the complex sensorimotor loop and the health of an ecosystem, shedding light on anthropogenic impacts and yielding new biodiversity insights.
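As an illustrative sketch only (not DYNI's actual pipeline), the detection step can be as simple as flagging short frames whose energy stands out against the recording's background level; the function name, frame length and threshold below are all hypothetical choices:

```python
import numpy as np

def detect_events(signal, sr, frame_len=0.05, threshold_db=-20.0):
    """Flag frames whose RMS energy exceeds a threshold relative to the loudest frame."""
    hop = int(frame_len * sr)                      # samples per frame
    n_frames = len(signal) // hop
    frames = signal[: n_frames * hop].reshape(n_frames, hop)
    rms = np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)
    level_db = 20 * np.log10(rms / rms.max())      # 0 dB = loudest frame
    return level_db > threshold_db                 # boolean mask, one entry per frame

# Synthetic example: one second of faint noise with a 50 ms tone burst at 0.5 s.
sr = 16000
t = np.arange(sr) / sr
audio = 0.001 * np.random.default_rng(0).standard_normal(sr)
audio[8000:8800] += 0.5 * np.sin(2 * np.pi * 3000 * t[:800])
mask = detect_events(audio, sr)                    # only the burst frame is flagged
```

Real bioacoustic detectors work on time-frequency representations and learned models rather than raw frame energy, but the thresholding idea above is the usual baseline.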

DYNI continues to deploy its bioacoustic monitoring equipment throughout the world, including in France, Canada, the Caribbean, Russia and Madagascar. Its current projects address questions such as the influence of marine traffic on marine mammals.

Speech and hearing

We focus on several aspects of human vocal interactive behaviour, and more particularly on speech. Some of the research problems that we tackle are:

  • modelling speech perception,
  • robust automatic speech recognition,
  • multimodal speech (e.g. audio-visual),
  • multichannel speech processing, and
  • speech enhancement.

We are also interested in clinical applications of speech technology and research on hearing aid devices.

Currently, we are focusing on two specific problems: microscopic intelligibility modelling and unsupervised speech learning.

Some of our research questions in microscopic intelligibility modelling include: (i) can we employ data-driven techniques to predict individual listener behaviour at a sub-lexical level? and (ii) can data-driven models help us better understand and validate our knowledge about the mechanisms involved in human speech perception and production? To this end, we try to make machines listen more like humans in speech recognition tasks, and interpret the resulting models to compare them with existing models of human hearing.
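As a toy illustration of comparing a model's sub-lexical confusions against listener responses, one can tally (presented, perceived) phone pairs and measure how much confusion mass the two distributions share. The response data and overlap measure below are invented for the example:

```python
from collections import Counter

def confusion_counts(pairs):
    """Count (presented, perceived) phone pairs from an identification task."""
    return Counter(pairs)

def overlap(a, b):
    """Fraction of confusion mass shared between two confusion distributions."""
    total = sum(a.values())
    return sum(min(n, b.get(pair, 0)) for pair, n in a.items()) / total

# Invented listener and model responses to the same stimuli:
human = confusion_counts([("p", "p"), ("p", "b"), ("t", "t"), ("t", "d"), ("t", "t")])
model = confusion_counts([("p", "p"), ("p", "p"), ("t", "t"), ("t", "d"), ("t", "t")])
score = overlap(human, model)  # 0.8: the model misses the human p->b confusion
```

Published intelligibility studies use full confusion matrices and statistical tests rather than this simple overlap, but the bookkeeping is the same.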

We are also interested in the difficult task of unsupervised speech learning. Can we learn to transcribe speech in a language for which we have no textual representation? Acoustic unit discovery (AUD) and linguistic unit discovery (LUD) are challenging tasks that have not received the same attention as their supervised counterparts, yet they are interesting from both an engineering and a scientific point of view. Unsupervised speech learning closely relates to research on language acquisition modelling and language evolution, with ties to categorical perception and grammar induction. From an application point of view, work on this problem benefits not only speech processing for under-represented languages, but also the generic problem of pattern and structure discovery in time-series data.
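At its simplest, the clustering step behind acoustic unit discovery groups frame-level features into a small inventory of pseudo units. The sketch below runs naive k-means on toy two-dimensional "frame features"; real AUD systems operate on learned speech representations with far more sophisticated models, so everything here is illustrative:

```python
import numpy as np

def kmeans_units(features, k, n_iter=10):
    """Naive k-means: group frame-level features into k pseudo acoustic units."""
    # Deterministic init for the example: evenly spaced frames as starting centers.
    centers = features[:: max(1, len(features) // k)][:k].copy()
    for _ in range(n_iter):
        # Distance of every frame to every center, then hard assignment.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Toy "features": two well-separated blobs standing in for two acoustic units.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                   rng.normal(5.0, 0.1, (50, 2))])
units = kmeans_units(feats, k=2)  # each blob ends up in its own unit
```

The hard part of AUD is not the clustering itself but choosing the representation and the number of units without supervision, which is where most of the research effort goes.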