Skilled behavior, like speech, is actively maintained over years and adapts to new conditions.
Birdsong consists of highly stereotyped gestures executed in sequence. In this and many other ways, birdsong resembles speech: song is learned by juveniles from a tutor, like babies learning to talk, and once adult-level performance is achieved, both the motor elements and the syntax rules of song are actively maintained. However, existing animal models and song annotation tools allow investigation of only simple syntax rules in limited datasets, leaving a gap in our understanding of compositional vocal communication.
Working with canaries, we investigate how syllable-to-syllable transition points, which obey syntax rules with various memory dependencies, change within and between days; how they adapt to external perturbation; and which features of these syntax rules are most salient to the females that evaluate the males' songs.
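To make the notion of "memory dependency" concrete, a minimal sketch follows: estimating syllable-transition probabilities under different Markov orders from annotated song sequences. The toy labels and the Markov-chain framing here are purely illustrative assumptions, not the actual models used in our analyses:

```python
from collections import Counter, defaultdict

def transition_probs(sequences, order=1):
    """Estimate P(next syllable | preceding `order` syllables) from
    annotated songs. `sequences` is a list of syllable-label lists,
    e.g. [["A", "A", "B", "C"], ...]. Illustrative sketch only."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for i in range(order, len(seq)):
            context = tuple(seq[i - order:i])  # preceding `order` labels
            counts[context][seq[i]] += 1
    return {
        ctx: {syl: n / sum(c.values()) for syl, n in c.items()}
        for ctx, c in counts.items()
    }

# Toy data: what follows "B" depends on what preceded "B", so a
# first-order model blurs together contexts that a second-order
# (longer-memory) model keeps distinct.
songs = [["A", "B", "C"], ["A", "B", "C"], ["D", "B", "E"]]
print(transition_probs(songs, order=1)[("B",)])      # {'C': 0.67, 'E': 0.33}
print(transition_probs(songs, order=2)[("A", "B")])  # {'C': 1.0}
```

Transition points whose outcome statistics change when the conditioning context is lengthened, as in this toy example, are exactly the points where longer memory dependencies reveal themselves.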
To address these questions, we recently developed machine learning algorithms for annotating complex song, making it possible to work with datasets of canary vocalizations too large for processing by existing methods. We leverage these algorithms for *realtime* behavior manipulation via targeted perturbations, and combine them with detailed video tagging and motion analysis to characterize the rules governing song syntax adaptation and the perception of song by its target audience.
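The closed-loop logic behind such realtime perturbation can be sketched as below. Every name here is a hypothetical placeholder: the pass-through classifier stands in for the trained annotation model, and the print call stands in for the actual perturbation hardware; neither reflects our real audio pipeline:

```python
TARGET = "B"  # hypothetical syllable targeted for perturbation

def classify_frame(frame):
    """Stand-in for a trained syllable classifier; in a real pipeline
    this would run the annotation model on incoming spectrogram
    frames. Here it simply passes labels through (assumption)."""
    return frame

def deliver_perturbation():
    """Placeholder for the perturbation output, e.g. a brief sound
    played through a speaker (hypothetical)."""
    print("perturbation delivered")

def closed_loop(stream):
    """Monitor an incoming frame stream and perturb on each detected
    TARGET syllable; illustrates the closed-loop control flow only,
    not actual low-latency audio I/O."""
    for frame in stream:
        if classify_frame(frame) == TARGET:
            deliver_perturbation()  # fire within the ongoing song

closed_loop(iter(["A", "A", "B", "C", "B"]))  # perturbs twice
```

The essential constraint in any such system is latency: the detect-and-perturb cycle must complete within a fraction of a syllable's duration for the perturbation to land at the intended transition point.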