Discussion review – Reading Group 1 – Introduction to music and predictive processing

Thanks to all who participated! The first EMPRes session brought together musicians, psychologists, and philosophers for discussion of prediction, expectation, and anticipation in music. We read Rohrmeier & Koelsch’s (2012) review of predictive information processing in music cognition and Michael and Wolf’s (2014) take on the possible impact of hierarchical predictive processing on the study of music perception and performance (both summarized here). As a group, we were generally most excited about how predictive processing could illuminate the cognitive phenomena associated with joint action in music.

Requirement of a discrete, finite set of elements for predictability and combinatoriality

  • This is reminiscent of Hockett’s (1960) features of spoken language, which include duality of patterning: the ability to form infinitely many utterances from a finite set of discrete (meaningless) units according to a series of rules. If music is a hierarchical system (in the manner of language) and has these features, then it seems plausible that the brain (or a computer) might apply similar models of hierarchical processing to each. From this arose a series of questions…

What determines the discrete units? In Western music, sound signals are divided into 12 chromatic pitches, and this particular separation of music into discrete pitches is continually reinforced by the 5-line staff. But there are microtonal musics that contain even smaller subdivisions of pitch, and musics where a single ‘unit’ is actually a continuous sound event spanning multiple Western pitches (perhaps one unit starts in a high register and falls to a lower one). Exposure to a different corpus of music (or enculturation within a certain musical environment) yields different unit-categorizations of the input and different rules, and thus imposes different expectations regarding musical events.

What determines the rules? Explicit reference sources or explicitly learned schemas may determine some rules. But most rules are learned implicitly, just like the implicit learning of linguistic grammar, from observing patterns of regularities in the surrounding musical environment. Rules within musical interaction may be explicitly agreed on (we’re going to improvise over 12-bar blues) or implicitly agreed on (the number of times a performer repeats a chorus before trading off or continuing to the end of the piece).

  • In improvisation, each performer makes decisions such as same vs new: whether to complement or contrast, imitate or not, embellish or not, to return to a particular rhythm/melody or not. These choices depend on the performers’ knowledge of each other: ‘I’ve played with her loads of times before, and I know how she plays, what she expects me to do, and how she expects us to interact.’
  • Awareness of identity and personal style in performance provides a basis for forming predictions of what will happen next. Even in free improvisation, which is supposedly music-making without rules, performers will undoubtedly display a certain style or tendency.
  • Kate proposed a ‘game theory’ dynamic between performer and audience: if the performer thinks the audience knows where she’s going next, then ‘change!’ Something similar holds between composer and audience, except that the composer must predict the audience’s expectations on a more distant time scale, as well as account for the music’s mediation by the performer.

Prediction in perception vs production – presumably all four of Rohrmeier and Koelsch’s sources of prediction could be acquired through either (or both) perception and production of music. Perhaps producing music gives you more veridical knowledge concerning, for example, the physical constraints of a certain instrument, or experiential knowledge of a certain piece in performance.

Computational Models

Information Dynamics of Music (IDyOM) vs Hierarchical Predictive Processing (HPP): We spent a lot of time going through the various computational models mentioned in regard to music, and still remain confused as to whether, and how, the IDyOM model differs from the HPP model reviewed by Michael and Wolf. The HPP model was put forth in Clark (2013), and taken up by Schaefer (2014) as a possible avenue for further research on prediction in music perception and cognition. It seems that IDyOM does not have an expectation feedback component (Pearce & Wiggins, fig. 1), whereas HPP is updated based on prediction errors and expectation feedback (when the top-down prediction is mismatched with the incoming sensory signals). And what does this all mean…? Here’s where we defaulted to the experts…

HPP: Clark (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science

HPP in Music: Schaefer (2014). Mental Representations in Musical Processing and their Role in Action-Perception Loops

IDyOM: Pearce & Wiggins (2011). Auditory Expectation: The Information Dynamics of Music Perception and Cognition

See you next time!

Next time we look into emotion in music: how our evolutionary history built us as predictive creatures, and how prediction underlies our experience of emotion in music. Setting the stage is the first chapter of Sweet Anticipation, followed by a bit of neurobiological theory of human emotions, and some insights from predictive coding in music.

Thursday 2 March, 1pm. Room 1.21, 7 Bristo Square. Reading: Huron (2006) Sweet Anticipation, Chapter 1 + Koelsch et al (2015) + Gebauer, Kringelbach & Vuust (2015)


Reading Group 1 – Introduction to music and predictive processing

Thanks to both Nikki and Lauren for the warm welcome and for helping set up these sessions!

The first EMPRes session of 2017 provided an introduction to predictive processing in music perception and cognition. We began with Rohrmeier & Koelsch’s (2012) detailed review of existing work in predictive information processing in music cognition, including converging theoretical, behavioral, computational, and brain imaging approaches. Then we looked at a commentary by Michael and Wolf (2014) on the impact on music research of a specific predictive processing framework, namely Hierarchical Predictive Processing (HPP) as put forward by Schaefer. These papers were a bit denser than an ‘introduction’ meeting calls for, so I’ll lay out a summary of them here, attempting to explain some of the computational bits as well. Please feel free to comment if you have any questions, or especially if you have any answers or better explanations! A review of our discussion points is in this subsequent blog post.

Rohrmeier & Koelsch laid out the predictable qualities of music and how our brains may be utilizing those qualities (e.g. through perceptual Gestalts and structural knowledge), followed by behavioral evidence of prediction, various computational models, and neural evidence for predictive processes.

Predictable information within the music

  1. Predictability and combinatoriality require a discrete, finite set of elements
  2. Prediction in music occurs in both lower-level processes (predicting the next note) and higher-level processes (predicting a development section in a sonata)
  3. Four sources of prediction, which may work together or be in ‘mutual conflict’:
    • Acquired style-specific syntactic/schematic knowledge
    • Sensory & low level predictions (Gestalts; ‘data-driven’ perception)
    • Veridical Knowledge (from prior knowledge of/exposure to the piece)
    • Non-sensory structures acquired through online learning (knowledge gained from current listening, e.g. motifs, statistical structures, probability profiles)
  4. Prediction in music is messy: constant parallel predictions are made with respect to not only single melodic lines, but also complex harmonies, overall key structure, and polyphonic and polyrhythmic sound streams, at phrase-, movement-, or whole-composition levels. It becomes even messier when adding in texture/timbre changes, or when considering the more polyrhythmic, polymetric, or complexly polyphonic music of non-Western traditions

Behavioral Findings

  1. Prediction effects are found in behavioral responses to (identification of) unexpected musical events, in the case of unexpected tones, intervals, and chords
  2. Musical priming studies (adapted from semantic priming studies in language) give evidence of implicit local knowledge, and perhaps even of higher-level tonal-key and temporal aspects

Computational Models – not as scary as they sound!

Why do we like them? “Predictive computational models provide a link between theoretical, behavioural, and neural accounts of music prediction”

  1. Hand-crafted models, such as Piston’s table of usual root progressions, show the general harmonic (root) progression expectancies based on tendencies in Western music. Your theory courses teach you to explicitly recognize these tendencies, which are implicitly learned and recognized by people enculturated in Western music (a toy sketch follows below).
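To make ‘hand-crafted model’ concrete, here is a toy Python sketch of a lookup table in the spirit of Piston’s table. The entries are illustrative from memory rather than a faithful reproduction, and the `usual_progressions` name is made up:

```python
# Toy, hand-crafted model in the spirit of Piston's table of usual root
# progressions (entries are illustrative, not a faithful reproduction):
# each Roman-numeral root maps to the roots it usually progresses to.
usual_progressions = {
    "I":   ["IV", "V"],
    "ii":  ["V"],
    "iii": ["vi"],
    "IV":  ["V"],
    "V":   ["I"],
    "vi":  ["ii", "V"],
}

# Prediction with a hand-crafted model is just a lookup:
print(usual_progressions["V"])  # ['I'] -- the dominant tends to resolve to the tonic
```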

  2. Probabilistic models

N-gram models chop long sequences into shorter bits and analyze those bits for statistical probabilities, in order to predict the likelihood of the next unit.

  • Example of a 3-gram model of a sequence of pitches {A C E G C E A C E G}: the sequence is chopped into overlapping bits of three pitches, and the number of times each bit occurs is counted [ACE: 2 (occurs twice); CEG: 2; EGC: 1; GCE: 1; CEA: 1; EAC: 1]
  • We can use these counts to infer that the pair ‘CE’ is preceded by A 2/3 of the time and by G 1/3 of the time (for this example, it’s easy to just count every instance of [_CE], 3 instances, and see that 2 of those are A+CE and 1 is G+CE). A code sketch of this counting follows below.
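Here is a minimal Python sketch of the counting above, plus the standard forward use of an n-gram model (predicting the next pitch from the two most recent ones). A toy illustration only; the `predict_next` function is made up, not the implementation behind any published model:

```python
from collections import Counter

sequence = ["A", "C", "E", "G", "C", "E", "A", "C", "E", "G"]

# Count every window of three consecutive pitches (the 3-grams).
trigrams = Counter(tuple(sequence[i:i + 3]) for i in range(len(sequence) - 2))
# Counter({('A','C','E'): 2, ('C','E','G'): 2, ('E','G','C'): 1,
#          ('G','C','E'): 1, ('C','E','A'): 1, ('E','A','C'): 1})

def predict_next(context, trigrams):
    """P(next pitch | two-pitch context), estimated from the 3-gram counts."""
    continuations = {g[2]: n for g, n in trigrams.items() if g[:2] == tuple(context)}
    total = sum(continuations.values())
    return {pitch: n / total for pitch, n in continuations.items()}

print(predict_next(["C", "E"], trigrams))  # {'G': 2/3, 'A': 1/3}
```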

Multiple Viewpoint Idea

  • Using information from multiple different features (viewpoints) to aid the prediction of a target feature
  • In music, using “duration or metrical position to improve the prediction of pitch class”; for example, an 8th-note anacrusis at the start of a piece will likely be 5 leading to 1 (Sol, Do). This particular prediction, though, necessitates prediction based on previous exposure to a larger corpus of music, since it would be improbable to infer the statistical correlations of the current piece from only two notes (see the sketch after this list).
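A hypothetical sketch of the combination step, assuming we already have per-viewpoint predictions. The simple averaging and the made-up distributions below are placeholders for the weighted combination schemes real multiple-viewpoint systems use; the same idea applies when merging the short-term and long-term models described next:

```python
def combine(*distributions):
    """Average several probability distributions over the same outcomes."""
    outcomes = set().union(*distributions)
    return {o: sum(d.get(o, 0.0) for d in distributions) / len(distributions)
            for o in outcomes}

# Imagined outputs of two single-viewpoint predictors for the next pitch:
from_pitch_context = {"C": 0.5, "G": 0.3, "E": 0.2}  # pitch viewpoint
from_metrical_pos = {"C": 0.8, "G": 0.1, "E": 0.1}   # metrical-position viewpoint
print(combine(from_pitch_context, from_metrical_pos))  # {'C': 0.65, 'G': 0.2, 'E': 0.15}
```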

Short-term models and Long-term models

  • Short-term: knowledge from the current listening; “specific repetitive and frequent patterns that are particular to the current piece and picked up during implicit online-learning”
  • Long-term: knowledge from an entire corpus; “long-term acquired (melodic) patterns”

IDyOM – Information Dynamics of Music

Hidden Markov Models

  • A Markov transition matrix is the same as a 2-gram model. So our earlier set of pitches {A C E G C E A C E G} would be split into [AC: 2; CE: 3; EG: 2; GC: 1; EA: 1], and the probabilities are modeled between single events.
  • A Hidden Markov Model (HMM) generates probabilities not from the observed events alone, but from hidden deeper-structure states that generate probability distributions over them. The probability of each subsequent state depends only on the previous state (not on future states), reflecting the temporality of musical processing. A toy transition-matrix sketch follows below.
  •  An introduction to HMM
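A quick toy sketch in Python of the 2-gram (Markov transition) counts above. This shows only the observable transition matrix; the hidden-state machinery that makes a full HMM is omitted:

```python
from collections import Counter, defaultdict

sequence = ["A", "C", "E", "G", "C", "E", "A", "C", "E", "G"]

# Count transitions between consecutive pitches (the 2-grams).
bigrams = Counter(zip(sequence, sequence[1:]))
# Counter({('C','E'): 3, ('A','C'): 2, ('E','G'): 2, ('G','C'): 1, ('E','A'): 1})

# Total outgoing transitions from each pitch.
totals = Counter()
for (x, _), n in bigrams.items():
    totals[x] += n

# Normalise into a transition matrix: transition[x][y] = P(y | x).
transition = defaultdict(dict)
for (x, y), n in bigrams.items():
    transition[x][y] = n / totals[x]

print(dict(transition))  # e.g. transition['E'] == {'G': 2/3, 'A': 1/3}
```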

Dynamic Bayesian Networks (DBN)

  • A DBN extends an HMM in the same way that the multiple-viewpoint model extends n-gram models: DBNs analyze “dependencies and redundancies between different musical structures, such as chord, duration, or mode” to generate predictions (toy sketch below)
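A toy illustration of that factored-dependency idea: the next pitch is conditioned on both the previous pitch and the previous duration. All probabilities are invented, and a real DBN would also involve hidden state and proper inference:

```python
# Next pitch conditioned on BOTH previous pitch and previous duration,
# exploiting dependencies between musical features (made-up numbers).
p_next_pitch = {
    ("C", "quarter"): {"E": 0.6, "G": 0.4},
    ("C", "eighth"):  {"D": 0.7, "E": 0.3},
    ("E", "quarter"): {"G": 0.5, "C": 0.5},
}

def predict_pitch(prev_pitch, prev_duration):
    return p_next_pitch.get((prev_pitch, prev_duration), {})

print(predict_pitch("C", "eighth"))  # {'D': 0.7, 'E': 0.3}
```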

Connectionist Networks – neural networks are designed to represent how actual biological neurons work, combining probabilistic models like those listed above with practical models of neural connections, firing, and growth dynamics

  • MUSACT – an early musical neural net pre-programmed with Western features (12 chromatic pitches, 24 diatonic major and minor chords, 24 keys); does very well at predicting features of tone perception and expectation
  • Self-Organizing Map – unsupervised learning of the features of tonal music (unlike MUSACT, it was not pre-programmed with any training data); matched some experimental data for predicting chord relations, key relations, and tone relations
  • Simple Recurrent Networks – unsupervised learning of transition probabilities and structures; not as efficient as n-gram models
  • Dynamic, cognitively motivated oscillator models – unsupervised learning to adapt to metrical structure; however, adaptation to tempo changes is slow

Neuroscientific Evidence

Increased brain responses to incongruent (unexpected) stimuli within a sequence: in music, this may be hearing a normal chord progression followed by an unusually placed chord

  • ERAN – early right anterior negativity
  • MMN – mismatch negativity

It’s not clear from the neuroscientific evidence whether these responses are the result of local vs hierarchical violations

Brain areas involved

  • Ventral premotor cortex, BA44 (Brodmann’s area 44, the right-hemisphere analogue of Broca’s area for language in the left hemisphere; perhaps both perform hierarchical processing)

Michael & Wolf laid out a perhaps more accessible overview of areas where a particular predictive processing framework, hierarchical predictive processing (HPP), might lend a novel contribution to the study of music cognition, and of human social cognition more generally.

HPP is a predictive framework which describes the brain as having a combination of lower-level and higher-level models arranged, of course, in a hierarchy. Each higher-level model generates and sends predictions downstream to the model immediately below it, while each lower-level model sends sensory input upstream to the model immediately above it. The goal is to minimize prediction error between the higher-level predictions and the lower-level sensory representations. Every time a higher-level prediction comes into contact with a lower-level sensory input that *does not match* the prediction, a prediction error is sent upstream. The higher-level model then takes that prediction error and changes its prediction, repeating until the incoming signal and the downward prediction are sufficiently matched (a toy sketch of this loop follows below). Higher-level models are thought to represent changes occurring over longer time scales, such as the more abstract, structural, schematic, or style-specific aspects of music. Lower-level models represent changes in sensory input over shorter time scales, as in the immediate local events of the next note or rhythm.
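As a rough caricature of that loop at a single level, assuming scalar signals (real HPP models pass probability distributions between many levels; the learning rate and tolerance here are made up):

```python
def settle(prediction: float, sensory_input: float,
           learning_rate: float = 0.3, tolerance: float = 0.01) -> float:
    """One level of a predictive hierarchy revising its prediction."""
    error = sensory_input - prediction       # mismatch = prediction error
    while abs(error) > tolerance:
        prediction += learning_rate * error  # revise the prediction...
        error = sensory_input - prediction   # ...and re-check the mismatch
    return prediction

# Expecting one sound feature (encoded arbitrarily as 0.0) but receiving
# an unexpected one (encoded as 1.0): the prediction settles near 1.0.
print(round(settle(prediction=0.0, sensory_input=1.0), 3))  # 0.99
```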

Musical Preferences

  • HPP seems of little use in the understanding of musical preferences. It can’t be assumed that the preferred balance between ‘optimally predictable’ and ‘a bit of uncertainty’ is the same across individuals. The authors dub this search for the ‘sweet spot of predictability’ the “Goldilocks fallacy”, since even the right amount of predictability in a novice trumpet player’s crude sounds is likely still unpleasant.

Embodying Music

  • HPP might help in furthering our understanding of embodied music cognition by providing a clear link between perception and action, where perception simply is reflected by “a graded hierarchy of models functioning according to the same basic principles” separated only by time scales, and action is “in a sense equivalent to updating of higher-level cognitive models… through active inference”

Joint Action

  • In joint music making, agents engage in recursive higher-order modeling: “agents are not only modeling the music, but they are modeling the other agent’s actions and intentions, as well as the other agent’s model of her actions and intentions”. If joint music making is construed by the brain as a coordination problem, then HPP may be the perfect model to step in and try to minimize prediction (coordination) error in these complex, recursive social interactions.

Spring 2017 – Reading group dates – Music in the predictive brain

1-2pm, every fourth Thursday, starting 2 Feb.

Theme for our Spring 2017 reading groups: Music in the predictive brain


Shannon on the reading group theme, ‘Music in the predictive brain’

Post by Shannon Proksch:

“Any introduction to music theory class will tell you that music is a complex interchange of moments of tension juxtaposed with moments of release. Through practised study, or passive listening, we learn to anticipate, expect, and predict these patterns of tension and release in the music that surrounds us—framing our emotional and social engagement with music and others in our musical world.

 “Expectation and prediction constitute central mechanisms in the perception and cognition of music” Rohrmeier & Koelsch, 2011

How does a bunch of sound in a messy sensory environment become a musical perception in our mind? How does emotion serve to regulate, or emerge from, our musical experience? How can musical rhythm provide insight into human perception?

 “Minds are ‘wired’ for expectation” Huron, 2006

“Brains…are essentially prediction machines” Clark, 2013

“A mind is fundamentally an anticipator, an expectation generator” Dennett, 1996

Our reading group sessions this semester are going to cover a broad introduction to the interdisciplinary study of predictive processing in music, to help us grasp how expectation shapes our perception in music and beyond. I’ve suggested a range of papers that examine the musical brain through a look at work in empirical musicology, music psychology, neuroscience, cognitive science, and philosophy.”


January 2017 – reading group is back!

Welcome to Edinburgh, Shannon Proksch! Shannon is a current MSc student on the Mind, Language, and Embodied Cognition programme at Edinburgh, with a research interest in the relationship between music and language as situated communicative acts.

Shannon suggested a reading group theme for the semester, on music in the predictive brain… We’ll meet every fourth Thursday, 1-2pm, starting February 2nd, in Room 1.21, 7 Bristo Square.  Let’s go!


IMHSD seminar by Prof. Tuomas Eerola – 17 Nov 2016

Visit the Institute for Music in Human and Social Development (IMHSD) site to read about Tuomas’ seminar, and keep up to date with ongoing IMHSD talks and events!


Autumn 2016 – Nikki’s sabbatical

The Thursday reading group will be back on in the new year – meanwhile, Nikki has been on sabbatical. Here’s what she has been up to:

“I’ve spent some time in Durham as part of my Visiting Fellowship for the new AHRC-funded Interpersonal Entrainment in Music Performance project, working with Kelly Jakubowski (in a new post-doc role after her work on ear-worms…), Martin Clayton, Tuomas Eerola and Simone Tarsitani. One early output from this work is a jointly-authored conference paper, accepted for presentation in Ghent, Belgium later this year at ESCOM: ‘Measuring Visual Aspects of Interpersonal Interactions in Jazz Duos: A Comparison of Computational vs. Manual Annotation Methods’.

The Fellowship at Durham came out of an earlier project, for which I created a database of audio and motion-captured recordings featuring pairs of musicians improvising together. Following the original experiments that we carried out using these recordings, other people have taken an interest in this database. So I’ve continued to explore and process the original recordings to make them as useful as possible for other people’s research projects. It’s hard to describe exactly what this entails, but if you have spent any time editing or working with digital media and data in different formats, and if you’ve ever played a locked-room game, then you will have an idea of how about 100 hours of my sabbatical were spent…

Another large portion of my time went on preparing a new research project proposal. I am interested in the impact that scientific discourse around music – coming from music psychology, music neuroscience, music cognition – has on wider understandings of music within scholarship, education and the public sphere of arts and culture. Still working on this. There’s no single way to carry out a research project. I still have decisions to make about the best methodology for the job.

Alongside these tasks, I did the things that I would normally do (alongside the teaching and admin roles that a research sabbatical relieves) — I peer-reviewed other people’s journal articles, I completed the revisions on a book chapter for a Routledge text book, I drafted the first version of a new article, and I carried out my external examiner roles for programmes at Sheffield and Newcastle Universities, plus a PhD viva at Cambridge.

And I made my first ever trip to Hull, to give one of the Music department’s Newland Lectures. What a city – I mean it! It’s not somewhere that always gets a good press, but I loved it! That place has character and I thought it was beautiful.

So there you go, what I did on my sabbatical – in case you were interested.”


Summer 2016

This year has been quiet for music psychology reading group meetings, but busy with many other things!

Our one-day Musician movement: capture and analysis symposium – read a short report here – led to various new links and possible collaborations locally and with our colleague, Donald Glowinski, at the University of Geneva. Nikki also began her two-year Visiting Fellowship as part of a new AHRC-funded project based at Durham University, led by Prof. Martin Clayton.

Of the regular EMPRes attendees, Dr Lauren Hadley has now graduated, as has Dr Ana Almeida; alongside writing up his PhD, Alec Cooper has been teaching half of Edinburgh to play the sitar and is about to return to India for another visit to his guru.  Yu Fen is in the process of analysing the fascinating kinematic data of ensemble conductors which she collected down the road, in the motion capture facility at the Institute for Sport, Physical Education and Health Sciences.

Some interesting new, regular groups around ECA have taken off in the past year, including the Disability Research Edinburgh group, and the Monday lunchtime Music and Philosophy reading group, led by Dr Benedict Taylor.  The Philosophy, Psychology, and Informatics Reading Group (PPIG) also continues to meet fortnightly on Wednesdays to discuss new research in the cognitive sciences.

Looking forward to 2016-17!


Conference season!

Congratulations to PhD student Yu Fen for her SEMPRE award to travel to SysMus and the satellite MoCap workshop in Jyväskylä, Finland – and also for her well-received conference paper presentation at ICMPC14 in San Francisco! Read the report on the ECA website here.


Update, Summer 2015

Looking forward to catching up with Edinburgh music psychology research activity this summer! New PhD students, news from existing students, new plans, converging topics, and a few outputs to report!

– This review article by Nikki, on why she’s pleased that more music cognition research now focuses on how musicians play together in groups, plus why she thinks there are still some misconceptions afoot in psychological research on music performance. Published in the open access journal Frontiers in Psychology, in the research topic Performance Science.

– An article currently in press for the journal, Psychomusicology: Music, Mind and Brain, based on (newly-)Dr Kirsteen Davidson-Kelly’s doctoral research, on pianists’ use of multimodal imagery to learn new repertoire and prepare for performance. Keep an eye on the Psychomusicology RSS feed (in the side bar over there –> ) for Davidson-Kelly, Schaeffer, Moran & Overy, ‘“Total Inner Memory”: Deliberate Uses of Multimodal Musical Imagery During Performance Preparation.’

– A multimedia dataset from the Improvising Duos project (Moran & Keller, 2015) is now accessible through the Edinburgh DataShare service – follow this link to watch our 10s animations of motion-captured musician duos!

– … and here are our findings from the original Improvising Duos study, published in the open access journal, PLoS ONE: Moran, Hadley, Bader & Keller (2015).
