Much of the study of behaviors that enhance an animal's survival and reproduction focuses on their neural control. The generation of a behavior, however, involves strong interactions among the nervous system, the morphology of the body, and the environment. The functional morphology and biomechanics of a peripheral system impose constraints on neural control, but they also provide opportunities for the emergence of complexity in behavior. A wonderful example of this rich interplay is birdsong, where neural instructions drive a highly nonlinear physical system, the syrinx, capable of generating acoustic signals that range from simple whistles to highly complex sounds. By complex sounds we mean irregular vocalizations, mostly perceived as rough. They are found not only in birdsong but also in humans: in newborn cries, in some vocalizations of infants, and as the result of various voice disorders. The goal of our research is to understand the mechanisms involved in the generation of the complex sounds commonly found in birdsong, and to characterize the role of the peripheral system in this process. Existing physiological data, combined with data we collect in our lab and in the field, allow us to dissect the respective roles of peripheral mechanisms and neural instructions in the generation of complex sounds.
Schematized view of a dynamical systems model describing syringeal labial dynamics and tracheal vocal-tract filtering
Building low-dimensional dynamical models allows us to explore the hierarchy of importance of different biomechanical features in the bird’s perception of its own song.
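A minimal sketch of what such a low-dimensional source-plus-filter model can look like: a van der Pol-style labial oscillator whose self-sustained term is driven by a pressure-like parameter and whose restoring term is set by a tension-like parameter, followed by a one-pole low-pass filter as a crude stand-in for tracheal filtering. The equations, parameter names, and scales here are illustrative assumptions, not the lab's actual model.

```python
import numpy as np

def synthesize(pressure, tension, fs=44100, mu=1000.0):
    """Integrate a van der Pol-style labial oscillator (an illustrative
    stand-in for a low-dimensional source model) and low-pass the output
    as a crude placeholder for tracheal/vocal-tract filtering.

    pressure, tension: per-sample control parameters (the 'motor gesture').
    Tension sets the squared angular frequency of the oscillation; pressure
    sets the strength of the self-sustaining (negative-damping) term.
    """
    dt = 1.0 / fs
    x, y = 0.01, 0.0                      # labial displacement and velocity
    out = np.empty(len(pressure))
    for i, (p, k) in enumerate(zip(pressure, tension)):
        # semi-implicit Euler keeps the oscillator numerically stable
        y += dt * (mu * (p - x * x) * y - k * x)
        x += dt * y
        out[i] = x
    # one-pole low-pass filter standing in for the tracheal vocal tract
    alpha, acc = 0.3, 0.0
    filtered = np.empty_like(out)
    for i, s in enumerate(out):
        acc += alpha * (s - acc)
        filtered[i] = acc
    return filtered

# example gesture: a 0.1 s note with constant pressure and rising tension
n = 4410
p_gesture = np.full(n, 1.0)
k_gesture = np.linspace((2 * np.pi * 800) ** 2, (2 * np.pi * 1500) ** 2, n)
song = synthesize(p_gesture, k_gesture)
```

Sweeping the tension parameter upward produces an upward frequency sweep without any explicit frequency control, which is the sense in which acoustic features emerge from the biomechanics rather than being dictated point by point.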
Dynamical analysis of the sound source
Many acoustic features of song are not independently controlled but are determined by the biomechanics of the vocal organ. Moreover, many of these features depend not on the details of the models but on the dynamical mechanisms involved.
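The point that features reflect mechanisms rather than model details can be illustrated with a textbook example: the period-doubling route to irregular dynamics appears in essentially any nonlinear system as a control parameter is swept, regardless of the specific equations. The sketch below uses the logistic map, which has nothing to do with syringeal anatomy, precisely to make that model-independence concrete; the parameter values are the standard textbook ones.

```python
def attractor_period(r, n_transient=2000, n_sample=64, tol=1e-6):
    """Iterate the logistic map x -> r*x*(1-x), discard a transient, and
    estimate the period of the attractor. Returns 0 if no short period is
    found, i.e. the motion is irregular at this resolution."""
    x = 0.5
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    orbit = [x]
    for _ in range(n_sample - 1):
        x = r * x * (1.0 - x)
        orbit.append(x)
    for p in range(1, n_sample // 2 + 1):
        if all(abs(orbit[i] - orbit[i + p]) < tol for i in range(n_sample - p)):
            return p
    return 0

# as the control parameter increases, the period doubles (1 -> 2 -> 4 ...)
# before the dynamics become irregular: a mechanism, not a model detail
periods = {r: attractor_period(r) for r in (2.8, 3.2, 3.5, 3.9)}
```

A pressure sweep driving a labial oscillator through the same cascade would produce subharmonics and then rough, irregular sound, which is one dynamical route to the complex vocalizations described above.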
Testing the low-dimensional model
We have shown that low-dimensional dynamical models can synthesize songs realistic enough to elicit responses in the brain and in the syringeal muscles of sleeping songbirds.
Integrating neural models with motor gestures
To study the interplay between brain and body during the generation of behavior (birdsong), we develop neural models compatible with the neural architecture of songbirds, which allow us to synthesize the motor gestures that songbirds use to generate song. We also integrate these models with the vocal production model to generate synthetic songs. We validate these models against experimental data obtained in our lab: neural recordings, muscle activity, respiratory activity, and sound recordings.
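As a minimal sketch of the gesture side of this pipeline, motor gestures can be parameterized as smooth time series built from discrete activation events, a deliberately simple stand-in for the output of a neural model. The Gaussian-bump parameterization, the event times, and the amplitudes below are all illustrative assumptions; in the full pipeline these pressure and tension traces would drive the vocal production model.

```python
import numpy as np

def gesture(t, events, width=0.02):
    """Build a smooth motor-gesture time series as a sum of Gaussian bumps,
    one per activation event given as (center_time, amplitude). This is an
    illustrative parameterization, not the lab's actual neural model."""
    g = np.zeros_like(t)
    for t0, amp in events:
        g += amp * np.exp(-0.5 * ((t - t0) / width) ** 2)
    return g

fs = 44100
t = np.arange(0, 0.5, 1.0 / fs)

# hypothetical syllable pattern: three pressure pulses, two tension peaks
pressure = gesture(t, [(0.10, 1.0), (0.25, 1.2), (0.40, 0.9)])
tension = gesture(t, [(0.15, 1.0), (0.35, 1.5)], width=0.05)
```

Comparing such synthetic gestures against recorded air-sac pressure and muscle activity is one way the models can be confronted with the experimental data listed above.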
Much of the work on birdsong focuses on birds that learn their vocalizations, i.e., the approximately four thousand species that require some level of exposure to a tutor in order to acquire the characteristic songs of their species. In recent years, evidence has accumulated supporting the idea that non-learners find their niche in the acoustic landscape by means of a wide variety of anatomical and dynamical adaptations. We are studying different suboscine families (widely represented in South America) in order to unveil some of these mechanisms.
Artificial Intelligence & Big Data Neuroethology
We are also interested in using dynamical models to train networks that identify individual birds. This line of research aims at developing hardware and software capable of following learning processes in the wild.
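One way a dynamical model can feed an identification system is by generating labeled synthetic training examples: each individual is characterized by its own model parameters, and the model produces as many example feature vectors as needed. The sketch below is a toy version of that idea; the individuals, parameters, and features are invented, and a nearest-centroid classifier stands in for a trained network to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical individuals, each defined by its own (mean tension, gesture
# duration) model parameters -- values here are purely illustrative
individuals = {"bird_a": (1.0, 0.10), "bird_b": (1.6, 0.14), "bird_c": (2.3, 0.08)}

def synth_features(mean_tension, duration, n=50, noise=0.03):
    """Draw noisy feature vectors around an individual's model parameters,
    standing in for features extracted from model-synthesized songs."""
    base = np.array([mean_tension, duration])
    return base + noise * rng.standard_normal((n, 2))

X, labels = [], []
for name, params in individuals.items():
    X.append(synth_features(*params))
    labels += [name] * 50
X = np.vstack(X)

# nearest-centroid classifier trained purely on the synthetic examples
centroids = {name: X[[i for i, l in enumerate(labels) if l == name]].mean(axis=0)
             for name in individuals}

def identify(sample):
    return min(centroids, key=lambda name: np.linalg.norm(sample - centroids[name]))
```

In a field deployment, the same idea would let a recognizer be bootstrapped from model-generated data before any large corpus of labeled recordings exists.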