Music Perception and Cognition

  • 12.2: Andrew Goldman, “Optimize This! Why Do We Care If an AI Can Write Songs?” – compares a song written by the author with one generated by the AI system Suno, and offers three perspectives for comparing them (product-based, process-based, and practice-based); suitable for students interested in the general challenges posed by AI-generated music as well as more specific topics such as music ontology and types of AI architecture. A supplemental document for teaching with this video is available here.

  • 11.1: Anabel Maler, “Music as Movement in Signed Song: Analyzing Rosa Lee Timm’s ‘River Song’” – explores how musical parameters such as vocal quality, melody, and rhythm can emerge in a visual-kinesthetic medium, without sound.

  • 1.1: Elizabeth Hellmuth Margulis, “Repetition and Musicality” – explains how cognitive science can illuminate the relationship between musicality and one of its essential yet often neglected features: repetition. Drawing on her own research and the work of others (including Deutsch et al. 2011), Margulis shares several key discoveries: that arbitrary audio excerpts begin to sound like music when they are looped; that exact, or verbatim, repetitions encourage tapping, moving, and singing (motions listeners associate with musicality); and that the temporal scope of listeners’ attention can shift when passages repeat. This video is suitable for students at all levels who want to learn more about the relationship between cognitive science and music, and about the role repetition plays in making audio phenomena sound musical. Further reading: Margulis, On Repeat: How Music Plays the Mind (Oxford University Press, 2013).
