Reading Group 3 – Rhythm in music and predictive processing

Greetings everyone! Remember, today we are meeting in a new room, *50 George Square, room 2.30, at 1pm*

This week we are covering rhythm and predictive coding (PC), and reviewing some empirical evidence suggesting that PC really is what the brain is doing when it processes rhythm (what is heard) in relation to meter (what the brain predicts) in music. This may have some interesting consequences for understanding the effect of culture/expertise on musical processing, as well as joint interaction in music.

Below is a quick review of the 2009 article to aid our discussion today. A review of our discussion points, including the 2014 article, will follow in a subsequent blog post.

Vuust et al. (2009) lay out the predictive coding view, which states that the brain continually tries to infer the causes of its sensory input by reducing the discrepancy (prediction error) between top-down anticipatory models of the world and the actual input it receives. This occurs recursively across multiple levels of hierarchically organized units: information from higher levels provides prior expectations (predictions) for lower-level units, which feed forward prediction error. Any input that does not sufficiently match the prediction at a lower level is passed up as prediction error, so the brain only needs to process the discrepancies between prediction and input (a form of information-theoretic compression, not unlike the encoding your computer performs).
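As a very rough sketch of that loop (not the authors' model, just an illustrative toy in Python, with made-up values), a higher level can hold a prediction, receive only the error from below, and revise its prediction to shrink that error:

```python
import numpy as np

# Toy two-level predictive coding loop (illustrative only).
# The higher level holds a prediction about a hidden cause; the lower level
# passes up only the prediction error, and the higher level nudges its
# prediction to reduce that error.

rng = np.random.default_rng(0)

true_signal = 1.0      # the "cause" out in the world
prediction = 0.0       # the higher level's current model of that cause
learning_rate = 0.1    # how strongly error updates the prediction

for t in range(50):
    sensory_input = true_signal + rng.normal(scale=0.05)  # noisy observation
    prediction_error = sensory_input - prediction          # only this is fed forward
    prediction += learning_rate * prediction_error         # top-down model is revised

print(f"final prediction: {prediction:.3f} (true cause: {true_signal})")
```

Over iterations the prediction converges on the cause, and the error being fed forward shrinks accordingly.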

Their 2009 paper hypothesized that if PC is true, there should be neural markers that respond to violations of expectation in rhythm (rhythmic incongruities), and that expertise should modulate these markers.

The anticipatory model in musical timing is found in meter, which “provides a… temporal, hierarchical expectancy structure” and is compared against the actually heard input of rhythm, or patterns of beats. They hypothesize that participants’ neural responses will be larger for larger violations of expectation, and also that expert (in this case jazz) musicians will show a larger response than non-musicians for smaller violations (as a result of greater precision in their anticipatory models, and thus greater sensitivity to small violations).
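To make the graded-violation idea concrete, here is a toy illustration (again, not the study's stimuli or analysis; the template weights and onset patterns below are invented): if meter is treated as a template of expected accent strengths, an onset displaced to, or added at, an unexpected position produces a larger mismatch, and hence more prediction error, the more it departs from the template. The three invented patterns loosely mirror the stimulus categories listed next.

```python
import numpy as np

# Toy illustration: meter as an expectancy template over eight metrical positions.
# Higher weights = stronger expectation of an onset at that position.
meter_expectation = np.array([1.0, 0.2, 0.5, 0.2, 0.8, 0.2, 0.5, 0.2])

# Invented rhythm patterns (1 = onset, 0 = rest), loosely standing in for the
# congruent, syncopated, and altered stimulus categories.
rhythms = {
    "congruent":  np.array([1, 0, 1, 0, 1, 0, 1, 0]),
    "syncopated": np.array([1, 0, 0, 1, 1, 0, 1, 0]),  # one onset shifted to a weak position
    "altered":    np.array([1, 0, 0, 1, 0, 1, 0, 1]),  # several onsets in unexpected places
}

def prediction_error(rhythm, expectation):
    """Sum of absolute mismatch between the heard pattern and the metrical template."""
    return float(np.abs(rhythm - expectation).sum())

for name, pattern in rhythms.items():
    print(f"{name:>10}: error = {prediction_error(pattern, meter_expectation):.1f}")
```

With these made-up numbers the error comes out graded (congruent < syncopated < altered), which is the shape of the response the authors predict for the MMN.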

Stimuli– Three types: congruent, acceptable incongruence (syncopation), and alteration (unexpected, out-of-place incongruence)

Task– Identify whether the final snare hit was tuned up or down (this ensured that the participants’ focus was not on the earlier presented rhythmic incongruity)

Measurement– MMN and P3am data. MMN responses are associated with pre-attentive acoustic models, while P3am responses are associated with general and musical expectancy: the MMN is interpreted as representing the error processed by the brain, and the P3am response as its subsequent evaluation.

Results– Across tasks, both rhythmically skilled jazz musicians and non-musicians showed larger MMN responses to increasingly incongruent stimuli (as expected if error signals are what the brain feeds forward), and musicians showed larger MMN responses to incongruities than non-musicians (as expected if expertise refines anticipatory models). The P3am was only measured in musicians for the third stimulus type, and in contrast to the MMN it was not left-lateralized, indicating that the activity was processed in a larger network (consistent with evaluation by higher-level models).


One Response to Reading Group 3 – Rhythm in music and predictive processing

  1. Nikki says:

    Thanks Shannon – and others – for that focused burst of lunchtime discussion! In the ongoing relationship between music and the cognitive sciences, I think it’s business as usual in that music provides an extraordinary topic to explore the mind. Predictive coding seems to promise great insights for music cognition research.

Alongside the application and testing of cognitive frameworks/theories, there’s got to be some close scrutiny of real-life musical experience. PC feels like it may really stretch further and take us a bit closer to a more ‘authentic’ account of music cognition. But the necessary reductions and assumptions – these (to me at least) are endlessly fascinating. They reveal limitations to do with the conceptual apparatus shared – miscommunicated, sometimes – by science communities who address the topic of music cognition. Syncopation, pulse, beat, eh? Tricky to define without circularity! Or by default reference back to the ‘bar’, as though the perception of musical structure is conceived in literacy (rather than being shaped by it, sometimes).
