Discussion review – Reading Group 1: Introduction to music and predictive processing

Thanks to all who participated! The first EMPRes session brought together musicians, psychologists, and philosophers for a discussion of prediction, expectation, and anticipation in music. We read Rohrmeier & Koelsch’s (2012) review of predictive information processing in music cognition and Michael and Wolf’s (2014) take on the possible impact of hierarchical predictive processing on the study of music perception and performance (both summarized here). As a group, we were generally most excited about how predictive processing could illuminate the cognitive phenomena associated with joint action in music.

Requirement of a discrete, finite set of elements for predictability and combinatoriality

  • This is reminiscent of Hockett’s (1960) features of spoken language, which include duality of patterning: the ability to form infinitely many utterances from a finite set of discrete (meaningless) units according to a series of rules. If music is a hierarchical system (in the manner of language) and has these features, then it seems plausible that the brain (or a computer) might apply similar models of hierarchical processing to each. From this arose a series of questions…
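The combinatorial point above can be made concrete with a toy sketch: a small, finite set of discrete units plus one combination rule already yields a combinatorially large (and, with unbounded length, infinite) set of well-formed sequences. The unit names and the "no immediate repetition" rule here are invented purely for illustration, not drawn from Hockett or the papers.

```python
# Illustrative sketch of duality of patterning: finitely many discrete,
# individually meaningless units + a rule -> very many well-formed phrases.
from itertools import product

units = ["do", "re", "mi", "fa", "sol"]  # finite, discrete set

def well_formed(seq):
    # Toy rule (hypothetical): no unit may immediately repeat itself.
    return all(a != b for a, b in zip(seq, seq[1:]))

phrases = [seq for seq in product(units, repeat=4) if well_formed(seq)]
print(len(phrases))  # 5 * 4**3 = 320 distinct four-unit phrases
```

Allowing longer and longer sequences makes the set unbounded, which is the sense in which a finite inventory supports open-ended productivity.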

What determines the discrete units? In Western music, sound signals are divided into 12 chromatic pitches, and this particular separation into discrete pitches is continually reinforced by the 5-line staff. But there are microtonal musics with even smaller subdivisions of pitch, and musics in which a single ‘unit’ is actually a continuous sound event spanning multiple Western pitches (perhaps a unit that begins high and falls to a lower register). Exposure to a different corpus of music (or enculturation within a particular musical environment) yields different unit-categorizations of the input and different rules, and thus imposes different expectations regarding musical events.
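One concrete instance of this unit-categorization is how equal temperament carves the continuous frequency axis into 12 chromatic pitches per octave. The frequency-to-MIDI-note formula below is standard; the sample frequencies are illustrative. The point is that quantizing to the nearest chromatic category erases exactly the microtonal detail that another musical culture might treat as distinct units.

```python
# Quantizing continuous frequency into Western chromatic pitch categories.
import math

def freq_to_midi(f_hz, a4=440.0):
    """Map a frequency to the nearest MIDI note number (standard formula:
    69 is A4; 12 steps per octave), discarding microtonal detail."""
    return round(69 + 12 * math.log2(f_hz / a4))

print(freq_to_midi(440.0))   # 69 (A4)
print(freq_to_midi(446.0))   # also 69: the microtonal difference is erased
print(freq_to_midi(261.63))  # 60 (middle C)
```

A listener enculturated to a microtonal system would, in effect, be applying a finer-grained quantizer to the same signal.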

What determines the rules? Explicit reference sources or explicitly learned schemata may determine some rules, but most are learned implicitly, much like linguistic grammar, from observing regularities in the surrounding musical environment. Rules within musical interaction may be agreed on explicitly (‘we’re going to improvise over a 12-bar blues’) or implicitly (the number of times a performer repeats a chorus before trading off or continuing to the end of the piece).

  • In improvisation, each performer makes decisions such as same vs new: whether to complement or contrast, imitate or not, embellish or not, return to a particular rhythm/melody or not. These choices depend on the performers’ knowledge of each other: ‘I’ve played with her loads of times before, and I know how she plays, what she expects me to do, and how she expects us to interact.’
  • Awareness of identity and personal style in performance provides a basis for forming predictions about what will happen next. Even in free improvisation, which is ostensibly music-making without rules, performers will undoubtedly display certain styles or tendencies.
  • Kate proposed a ‘game theory’ dynamic between performer and audience: if the performer thinks the audience knows where she’s going next, then ‘change!’ The same holds between composer and audience, except that the composer must predict the audience’s expectations on a more distant time scale, and must also account for the music’s mediation by the performer.

Prediction in perception vs production – presumably all four of Rohrmeier and Koelsch’s sources of prediction could be acquired through either (or both) perception and production of music. Perhaps producing music gives you more veridical knowledge concerning, for example, the physical constraints of a certain instrument, or experiential knowledge of a certain piece in performance.

Computational Models

Information Dynamics of Music (IDyOM) vs Hierarchical Predictive Processing (HPP)

We spent a lot of time going through the various computational models mentioned with regard to music, and we still remain unsure whether, and how, the IDyOM model differs from the HPP model reviewed by Michael and Wolf. The HPP model was put forth in Clark (2013) and taken up by Schaefer (2014) as a possible avenue for further research on prediction in music perception and cognition. It seems that IDyOM has no expectation-feedback component (Pearce & Wiggins, fig. 1), whereas HPP is updated on the basis of prediction errors: when a top-down prediction is mismatched with the incoming sensory signal, the error feeds back to revise the model. And what does this all mean…? Here’s where we defaulted to the experts…
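The contrast we were groping toward can be caricatured in a few lines of code. This is a deliberately minimal sketch, not the actual IDyOM or HPP machinery: the "IDyOM-like" function estimates next-event probabilities from a corpus once and never revises them online, while the "HPP-like" function corrects its prediction by a fraction of the prediction error at every observation. All names, the learning rate, and the toy melody are invented for illustration.

```python
# Toy contrast: static corpus-based expectation vs error-driven updating.
from collections import Counter

def ngram_probs(sequence, context):
    """IDyOM-like caricature: P(next event | previous event), estimated
    from a corpus with no online feedback loop."""
    follows = Counter(b for a, b in zip(sequence, sequence[1:]) if a == context)
    total = sum(follows.values())
    return {event: n / total for event, n in follows.items()}

def hpp_update(prediction, observation, learning_rate=0.5):
    """HPP-like caricature: the top-down prediction is corrected by a
    fraction of the prediction error (observation minus prediction)."""
    error = observation - prediction
    return prediction + learning_rate * error, error

corpus = [1, 2, 3, 2, 1, 2, 3, 4, 3, 2]  # toy melody as scale degrees
print(ngram_probs(corpus, 2))            # fixed expectations after a '2'

belief = 1.0
for note in [3, 3, 3]:                   # repeated surprising input
    belief, err = hpp_update(belief, note)
print(round(belief, 3))                  # 2.75: belief has moved toward 3
```

In the second loop the prediction error shrinks on each step (2, then 1, then 0.5), which is the feedback behaviour the n-gram table lacks.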

HPP: Clark (2013) Whatever next? Predictive brains, situated agents, and the future of cognitive science

HPP in Music: Schaefer, R (2014) Mental Representations in Musical Processing and their Role in Action-Perception Loops

IDyOM: Pearce & Wiggins (2011). Auditory Expectation: The Information Dynamics of Music Perception and Cognition

See you next time!

Next time we look into emotion in music: how our evolutionary history built us as predictive creatures, and how prediction underlies our experience of emotion in music. Setting the stage is the first chapter of Sweet Anticipation, followed by a bit of neurobiological theory of human emotions, and some insights from predictive coding in music.

Thursday 2 March, 1pm. Room 1.21, 7 Bristo Square. Reading: Huron (2006) Sweet Anticipation, Chapter 1 + Koelsch et al (2015) + Gebauer, Kringelbach & Vuust (2015)

This entry was posted in Spring 2017.
