Educational practice, musicianship and dyslexia – Dr Laurel Parsons, today! 4pm

This IMHSD seminar will take place today at 4pm in the Atrium, Alison House – a fascinating and important topic for music educationalists in Higher Education.

In HE we have more control over our curricula – and the potential to respond more quickly to current research evidence – than mainstream school education. How well do we use that when it comes to teaching (and reinforcing) the rudiments of musicianship? My own experiences in devising and delivering core musicianship training for first-year undergraduate students have shown me both the challenges and rewards of implementing adjustments for students with disabilities. This can only happen where those students have made arrangements to undergo screening/interview with the University Student Disability Service. The resulting schedule of adjustments (e.g. more time to complete work and to sit exams) needs to be sensibly and sometimes creatively applied to be of value – for example, in a course that uses musical examples to gauge a student's acquisition of aural discrimination skills, it is no use simply tacking an extra 15 minutes onto the end of the exam. Even where a student's adjustments have been thoughtfully implemented, the whole process undoubtedly reveals a tension between the presumed requirements and expectations of our aural and literacy training, and the implications for all of our students' identities and self-perceptions as musicians. I am looking forward to hearing Dr Parsons' talk!

Speaker: Dr Laurel Parsons
Date/Time: Wednesday 3rd May 2017, 4pm
Location: Atrium, Alison House
Title: Do Our Teaching Practices Enable or Disable Musicianship? Learning from the Experiences of Post-Secondary Music Students with Dyslexia
Abstract
Post-secondary music students with dyslexia or other learning disabilities are what some special education researchers call “twice-exceptional”: gifted in one respect, but impaired in another. For these students, musicianship tests such as sight-singing or melodic transcription that demand rapid processing of music notation may pose an overwhelming challenge—one that can have a profound impact on their sense of identity as musicians. For instructors, the experiences of these students provide an opportunity to reflect on whether our pedagogical practices are enabling or disabling their skills development. More fundamentally, what messages do we send through these practices about what “musicianship” is, and what it means to “be a musician,” not just inside our institutional bubbles, but in the world? Participants are invited to bring a small mirror.
Biography
Dr. Laurel Parsons is a music theorist and award-winning instructor based in Vancouver, British Columbia. Her teaching appointments have included the University of Victoria, the University of British Columbia, Queen’s University, and the University of Oregon. She began tutoring university opera majors with dyslexia and other learning differences in 2008, and collaborated on an interdisciplinary research project at the University of British Columbia exploring the experiences of opera students with learning disabilities. Her article “Dyslexia and Post-Secondary Aural Skills Instruction” is published in Music Theory Online (2015). Dr. Parsons is also co-editor, with Brenda Ravenscroft, of Analytical Essays on Music by Women Composers (Oxford University Press, 2016), a four-volume multi-author collection providing detailed studies of compositions by women from Hildegard to the present.

Discussion Review – Reading Group 3 – Rhythm in music and predictive processing

Thanks for a great final reading group on predictive processing and music, everyone! This week we had our usual gang of musicians and philosophers, plus a visitor from linguistics. For this session we read an experimental paper by Vuust et al. (2009) measuring neural responses to varying levels of rhythmic incongruity in expert (jazz) musicians and non-musicians, as well as Vuust & Witek's (2014) review of predictive coding explanations of the perception of syncopation, polyrhythm, and groove. Below is a review of our discussion, and a summary of the 2009 text can be found here.

Discussing Vuust et al. (2009), we weren't entirely comfortable with what still felt like fuzzy distinctions between rhythm, meter, beat, pulse, syncopation, etc. In particular, syncopation was conceived as metric displacement which doesn't interfere with the metric pulse, but which "breaks the metric expectancy by replacing a weak beat with a strong beat" (p. 82). This sounded contrary to our experience (in, say, jazz, but presumably any syncopated music tradition will do), in which a syncopation sometimes serves to validate our perception of the meter (or pulse, or metric pulse?), and feels as if it enhances or confirms our metric expectancy. The interpretation of the MMN and P3am responses seemed quite in line with similar results in studies of responses to syntactic incongruities in language, and fit the PC hypothesis that incongruent (unpredicted) stimuli require more neural processing in order to be explained away than do predicted stimuli. If expert musicians have more precise predictive models, then it also makes intuitive sense that they'd be more responsive to incongruities than non-musicians.

Vuust & Witek (2014) provide a conceptual review of the predictive coding (PC) model as it applies to the perception of rhythm, including syncopation, polyrhythm, and groove. Here we get a definition of rhythm as "a pattern of discrete durations…depend[ing] on the underlying perceptual mechanisms of grouping", and meter "as the temporal framework according to which rhythm is perceived" (p. 1). The equal importance of top-down (metrical framework) and bottom-up (rhythmic input) information is a feature of both PC and Dynamic Attending Theory (DAT); however, DAT is meant to be much more flexible and dynamic than PC. In PC, meter is a generative, anticipatory model in the brain, whereas in DAT meter is an emergent property of "the reciprocal relationship between external periodicities [rhythm] and internal attending processes" (p. 2). Unfortunately, we didn't talk much about DAT, but perhaps this dynamic/entrainment/enactive account may help soothe some of our tension with the PC take on metrical/rhythmic vocabulary. (For some more insight into entrainment and dynamical theories in music, see Clayton et al. (2004), Clayton (2012), Phillips-Silver et al. (2010).)

Syncopation

We had some good discussion of the perception of syncopation in expert musicians and non-musicians in response to the 2009 text, and our interest was next piqued by the PC accounts of polyrhythm and groove.

Polyrhythm

The authors analogize polyrhythms with the bistable percepts of binocular rivalry. The standard binocular rivalry experiment involves presenting a face and a house, independently, to each of a participant's eyes. One might expect the percept (what the person sees) to be some morphed face-house object, averaging the sensory input given to each eye. However, what happens in these experiments is that the participant's perception switches back and forth between seeing a face and seeing a house, never settling on a merged face-house object (see Tong et al. 1998). Presumably, this is the result of a hyper-prior concerning the spatial and temporal relationships between houses and faces: you don't expect them to occupy precisely the same space at the same time.


Polyrhythms, the authors argue, are similar to visual bistable percepts in that we can hear either a three-beat meter or a four-beat meter (in the case of a 3-against-4, or 4-against-3, polyrhythm), but we can't hear both at the same time. However, some of us felt that when you listen to a polyrhythm you actually do hear the entire stimulus at once – similar to looking at Rubin's vase, where you see both the vase and the faces at once, but what is more salient depends on what you are actively attending to. The authors note that polyrhythm as a bistable percept differs importantly from binocular rivalry in that musical training can have an effect on which meter you hear or attend to. Our intuition was that, perhaps to the untrained ear, the examples in A and B below might 'sound' the same, and might just sound like a particular (complex) rhythm pattern rather than a superimposition of two different patterns.

In the two polyrhythm studies mentioned (Vuust et al. 2006, 2011), when participants were asked to tap along to the counter-meter rather than the original meter, increased activation was found in Brodmann's area (BA) 40, which is known to deal with the processing of bistable percepts, as well as in BA 47, which deals with semantic processing in language. Musicians showed less activation than non-musicians, which is consistent with the idea that their predictive model is more precise and thus they need less processing power to interpret the predicted stimuli. Listening to the excerpt which provided the polymetric stimuli for these studies (the soprano saxophone solo in "Lazarus Heart" by Sting – listen at 1m40s), it seemed to us that this could be interpreted as an instance of hemiola rather than polymeter. Which led us to the question: is there a difference (neurally speaking) between the perception of a hemiola and the perception of a sustained polyrhythm/polymeter?

Groove, the authors say, is accounted for in PC as a sort of 'Goldilocks', just-right phenomenon, where "the balance between the rhythm and the meter is such that the tension is sufficient to produce prediction error, but not so complex as to cause the metric model to break down" (p. 9). This is a good example of action-oriented perception, where the propensity to move the body in time with 'groove' music serves as active inference of the causes of the sensory input (the rhythm) in comparison with the anticipatory model (meter). Witek et al. (2014) supported this notion through a web-based study in which participants rated their pleasure and how much they felt like moving after hearing a collection of differently-syncopated drum breaks. The participants' responses showed an 'inverted U-shaped relationship' (roughly a Gaussian curve), suggesting a sweet spot of syncopation somewhere between low-complexity degrees of syncopation (where there is little incongruence between predicted and actual input) and high-complexity degrees of syncopation (where there is so much incongruity that 'the predicted model breaks down').

So these readings conclude our introduction to a few of the ways predictive processing may shed light on our perception and cognition of music. I get the feeling that we've only just scratched the surface, and I certainly look forward to seeing where this line of investigation leads!


Reading Group 3 – Rhythm in music and predictive processing

Greetings everyone! Remember, today we are meeting in a new room, *50 George Square room 2.30 at 1pm*

This week we are covering rhythm and predictive coding, and reviewing some empirical evidence that suggests that PC really is what the brain is doing when processing rhythm (what is heard) in relation to meter (what is predicted by the brain) in music. This may have some interesting consequences for understanding the effect of culture/expertise in musical processing, as well as joint interaction in music.

Below is a bit of a quick review of the 2009 article to aid our discussion today. A review of our discussion points, including the 2014 article, will be in this subsequent blog post. 

Vuust et al. (2009) lay out the predictive coding view, which states that the brain is continually trying to infer the causes of its sensory input by reducing the discrepancy (prediction error) between top-down anticipatory models of the world and the actual input it receives. This occurs recursively throughout multiple levels of hierarchically organized units, where information from higher levels provides prior expectations (predictions) for lower-level units, which feed forward prediction error. Any input that does not sufficiently match the prediction at a lower level is fed forward as prediction error, such that the brain only needs to process the discrepancies between prediction and input (a form of information-theoretic compression, not unlike the predictive/delta encoding used in data compression).
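As a rough illustration of that compression analogy – this is just our own minimal sketch with a trivial "expect the previous value" model, not anything from the paper – transmitting only the prediction errors is enough to reconstruct the whole signal:

```python
# Minimal sketch of predictive/delta encoding: only the residuals (prediction
# errors) are fed forward; a matching model on the receiving end reconstructs
# the full signal from its own predictions plus those residuals.

def predict(history):
    """Trivial anticipatory model: expect the previous value (0 if none yet)."""
    return history[-1] if history else 0

def encode(signal):
    """Feed forward only the discrepancies between prediction and input."""
    history, residuals = [], []
    for x in signal:
        residuals.append(x - predict(history))
        history.append(x)
    return residuals

def decode(residuals):
    """Rebuild the signal by adding each residual to the running prediction."""
    history = []
    for e in residuals:
        history.append(predict(history) + e)
    return history

signal = [60, 60, 60, 62, 64, 64, 65]   # e.g. a stream of MIDI pitches
errors = encode(signal)                 # [60, 0, 0, 2, 2, 0, 1]
assert decode(errors) == signal         # nothing is lost
```

Well-predicted samples produce zero (or tiny) errors, which is the sense in which a predictive brain would only need to "process the discrepancies".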

Their 2009 paper hypothesized that if PC is true, there should be neural markers which respond to violations of expectation in rhythm (rhythmic incongruities), and that there should be an effect of expertise on these markers.

The anticipatory model in musical timing is found in meter, which "provides a… temporal, hierarchical expectancy structure" against which the actual heard input of rhythm, or patterns of beats, is compared. They hypothesize that participants' neural responses will be larger for larger violations of expectation, and also that expert (in this case jazz) musicians will have a larger response than non-musicians for smaller violations (as a result of greater precision in their anticipatory models, and thus greater sensitivity to small violations).

Stimuli – three types: congruent, acceptable incongruence (syncopation), and alteration (unexpected, out-of-place incongruence)

Task – Identify whether the final snare hit was tuned up or down (this ensured that the participants' focus was not on the earlier rhythmic incongruity)

Measurement – MMN data and P3am data. MMN responses are associated with pre-attentive acoustic models; P3am responses are associated with general and musical expectancy. The MMN is interpreted as a representation of the error processed by the brain, while the P3am response represents its subsequent evaluation.

Results – Across tasks, both rhythmically skilled jazz musicians and non-musicians had higher MMN responses for increasingly incongruent stimuli (as expected if error signals are what is fed forward in the brain), and musicians had higher MMN responses to incongruities than non-musicians (as expected if expertise modulates or refines anticipatory models). A P3am response was found only in musicians, and only for the third stimulus type; in contrast to the MMN, it was not left-lateralized, indicating that activity was being processed in a larger network (consistent with evaluation by higher-level models).


Discussion Review – Reading Group 2 – Emotion in music and predictive processing

Thanks to all who came along for our second EMPRes session of 2017! It's always nice to see more faces the second time around – that must mean we're doing something right 😉 This week we heard insights from musicians, psychologists, philosophers, and a visiting researcher in cognitive neuroscience who, luckily for us, chimed in with some ideas regarding this week's neuroscience paper. For this session, we read the introductory chapter of Huron's book Sweet Anticipation (2006), Koelsch et al.'s (2015) neuroscientific overview of emotion with some musical examples, and Gebauer, Kringelbach and Vuust's (2015) review of Koelsch et al.'s proposed Quartet Theory of Emotion through the lens of predictive processing in music. Below is a review of our discussion, and a summary of the texts can be found here.

*Reminder* Our next meeting is on Thursday March 30th at 1pm in 50 George Square, room 2.30 *note the room change*

ITPRA, the Western art music tradition, and cognitivism

In evaluating the first chapter of Huron’s book, we realized that the groundwork of his ITPRA model was laid out in the context of many expectation-laden events other than music. But that’s okay, since we know where the rest of his text is headed as an explanation of musical expectation.

The ITPRA model aims to explain expectation for events both over very general, long time frames involving conscious imaginings (such as the expectation of receiving a raise at work), and over specific, short time frames (such as the expectation of the next note or chord in a sequence). With the evolutionary story provided, the sense is that emotions play (or played) an integral role in forming accurate expectations, and thus in enhancing chances of survival. However, the first paragraph of the introduction gives a rather offhand account of emotions as a potentially frivolous coloring of experience:

 “… emotions add subtle nuances that color our perceptions of the world. Emotions add depth to existence; they give meaning and value to life.” -Huron, 2006

Perhaps, while emotions may be beneficial in informing and evaluating expectations/predictions, they may not be altogether necessary. We thought about this a bit more when we discussed emotion’s potential role in predictive coding.

The reference to Meyer's account of the "emotional content of music aris[ing] through the composer's choreographing of expectation" was also problematic. As pointed out by Nikki, it is important to consider that, contrary to the Western art tradition of pre-composed music mediated by a performer, the composer is not always separate from the performer in either body or time. Not all music is scripted: improvised music is composed by the performer at the same time that it is performed. Russ noted that this consideration has been left out in most (all?) of the literature we've reviewed so far in this reading group, and it would add clarity if authors inserted a simple statement along the lines of 'for the purposes of the current project, we are looking at pre-composed music', or a claim that their project applies equally well to both pre-composed and improvisatory contexts. As our resident improviser, Russ was asked whether it feels like he is choreographing expectations during an improvised performance. Perhaps not in free improvisation, but at, say, a wedding, he would be more likely to engage with melodic 'performance norms – things that you would expect [at a wedding]', such as a certain genre, form, or style which 'fits within the norm'. In other words, context matters.

Coming back to the ITPRA model itself, it does seem quite rooted in the cognitivist paradigm, especially given that an entire letter of the acronym is devoted to 'Appraisal', which grants heavy influence to cognitive evaluations.

The Quartet Theory of Human Emotions

The main conclusion to draw from this model concerns the reciprocal interactions between effector systems, affect systems, and conscious appraisal systems (such as language), as well as the reciprocal interactions among the elements of each system. Koelsch et al. provide neurological evidence for the emotional involvement of each of the brain's four affect systems (orbitofrontal-centered, hippocampus-centered, diencephalon-centered, and brainstem-centered), as informed by the brain's four effector systems (peripheral arousal, action tendencies, motor expression, and memory & attention). The interaction of these systems leads to an 'emotion percept', which consists of four components: an affective component, a sensory-interoceptive component, a motor component, and a cognitive component. The emotion percept is a feeling sensation evoked before any bias from translation into propositional content (words). This conception of a pre-verbal 'emotion percept' seems similar to what Ian Cross terms 'floating intentionality', referring to music's general 'aboutness' – the agreed meaningfulness attributed to music without mutual agreement on specifically what that meaning is.

“music’s inexplicitness, its ambiguity or floating intentionality may thus be regarded as a highly advantageous characteristic of its function for groups; music, then, might serve as a medium for the maintenance of human social flexibility” –Cross 2004

Predictive coding accounts of music: is emotion necessary?

In their review of the Quartet Theory of Human Emotions through the lens of predictive coding, Gebauer, Kringelbach, and Vuust claim that emotion could be the weight/modulator of prediction error itself, guiding behavior, action, and learning. Perhaps rather than (or in addition to) a modular influence of a global emotion (such as joy or anger), it may be useful to consider a hierarchy of neurons each having a particular valence. Perhaps at the lowest level there would be a binary Y/N (good/bad) valence receptive field, and the next level might consist of neurons which respond to populations of lower-level neurons (in the same manner as visual neurons responding preferentially to certain orientations of lines, or even incorporating something similar to center-surround activation to either facilitate or inhibit a response). More complex emotions (what we may be cognitively aware of) would arise as a result of a hierarchy of these valence responses at differing levels of complexity.

However, Lauren brought up a good point: why is emotion even necessary in this prediction and prediction-error model? It may seem like a useful and exciting move to account for emotions as modulations of prediction error, and as a motivational factor for making accurate predictions; however, the predictive coding model seems to work just fine without attempting to squeeze emotions into the mix. And indeed, seeking to define emotions because 'we know we have emotions' is a lot like examining the pineal gland in search of the soul because 'we know we have souls'.

*Reminder* Our next meeting is on Thursday March 30th at 1pm in 50 George Square, room 2.30 *note the room change*

Our topic will be rhythm in music and predictive processing, reading Vuust et al. (2009) and Vuust & Witek (2014). See you next time!


The Psychology of Musical Development – 30 years on, Prof. David Hargreaves (Wed 8 Mar)

Looking at the well-thumbed copy of the 1986 edition of The Developmental Psychology of Music on my bookshelf, and looking forward to this talk tomorrow!

Prof. David Hargreaves (Roehampton University) – Atrium, Alison House. 5.15pm

“I shall reflect upon some of the changes that have taken place since the publication of my book The Developmental Psychology of Music (CUP, 1986), which has recently been completely rethought and reworked by Alexandra Lamont and I (Hargreaves and Lamont, 2017). I will review some of the changes that have taken place in music itself, and in the ways in which people engage with it; in developmental psychology and education more generally; and in music psychology. I will then go on to identify 10 theoretical models of musical development, and outline 5 key theoretical issues on which they might be assessed. Three approaches seem to have particular potential for success in the future, namely social cognitive models which focus on the self and identity; approaches based on music theory; and neuroscientific research. What might this field look like 30 more years on in 2047, the year of my 99th birthday?”

David Hargreaves is Professor of Education and Froebel Research Fellow, and has previously held posts in the Schools of Psychology and Education at the Universities of Leicester, Durham and the Open University. He is also Visiting Professor of Research in Music Education at the University of Gothenburg, Sweden, and Adjunct Professor at Curtin University, Perth, Australia. He is a Chartered Psychologist and Fellow of the British Psychological Society. He was Editor of Psychology of Music 1989-96, Chair of the Research Commission of the International Society for Music Education (ISME) 1994-6, and is currently on the editorial boards of 10 journals in psychology, music and education. In recent years he has spoken about his research at conferences and meetings in various countries on all 5 continents. He has been keynote speaker at the Annual Conference of the BPS, and gave a TEDx talk at Warwick in 2011. He has appeared on BBC TV and radio as a jazz pianist and composer, and is organist in the East Cambridgeshire Methodist church circuit. In 2004 he was awarded an honorary D.Phil, Doctor Honoris Causa, by the Faculty of Fine and Applied Arts in the University of Gothenburg, Sweden in recognition of his ‘most important contribution towards the creation of a research department of music education’ in the School of Music and Music Education in that University.


Reading Group 2 – Emotion in music and predictive processing

Greetings everyone! The second reading group gives us an overview of emotion and predictive processing in music. The stage is set with a general introduction to the ITPRA theory of expectation, and the evolutionary adaptation of expectation and emotion, in chapter one of David Huron's Sweet Anticipation (2006). Koelsch et al. (2015) provide an in-depth, comprehensive account of the neurobiological foundations of human emotions, dubbed "The quartet theory of human emotions", with a generous sprinkling of evidence from the neuroscience of music. This is followed by a concise review from Gebauer, Kringelbach and Vuust (2015) which links the quartet theory and predictive coding as a plausible account of emotion in music. Below is a summary of the articles. I'd recommend reading the first and third texts first, then tackling the neuroscience article with concepts of expectation, prediction, and emotion from those texts in mind. A review of our discussion points will be in this subsequent blog post.

Huron notes that we have a pretty well agreed-upon understanding of musical emotion and expectation – for example, in Western music minor chords are sad, and a diminished seventh can signify suspense – but the principles behind these folk-psychological generalizations are by no means cross-culturally universal. If we want a clear picture of how we process emotion and expectation in music, then we need to appeal to "psychology proper", including neuroscience, biology, evolution, and culture. Expectation is a biological adaptation making use of specific physiological structures (such as the endocrine (hormone) system), and our culture is the environment in which we learn, apply, and refine our expectations. There are obvious biological advantages to forming accurate expectations (predictions), and emotions can be thought of as 'motivational amplifiers' either enhancing or diminishing those expectations in a given situation. Referencing Meyer's Emotion and Meaning in Music, Huron goes on to introduce the ITPRA theory of expectation.

“The principal emotional content of music arises through the composer’s choreographing of expectation.” – Meyer 1956

At least at a superficial level, the ITPRA model resembles the hierarchical predictive processing framework we reviewed last time; however, it encompasses a broader range than the seemingly brain-centered prediction units. ITPRA consists of five response systems: Imagination and Tension (pre-outcome responses) occur before an (un)expected event's onset, and the Prediction response, Reaction response, and Appraisal (post-outcome responses) occur after the event onset.

  1. Imagination– Thinking and feeling about future possibilities. Simulating the future event as if it had already happened.
  2. Tension– Preparation for an anticipated event, including motor preparation (arousal) and perceptual preparation (attention) with the goal of matching the appropriate levels of arousal and attention just in time for the expected event.
    ——Event Onset——
  3. Prediction Response– Generates expectation-related emotion: if stimulus is expected, the emotional response is positively valenced; if the stimulus is unexpected, the emotional response is negatively valenced.
  4. Reaction Response– Immediate, non-conscious somatic response. Similar to reflex, but importantly, can be learned/trained over time (i.e. through schemas). Can be associated with system 1 in dual process theories of emotion.
  5. Appraisal– Slower, context-dependent, cognitive response. Takes into account complex social/biological factors.  Can be associated with system 2.

Koelsch et al. provide a very detailed look at the ‘brainy bits’ of expectation and emotion. If you’re new to neuroscience or haven’t memorized the various parts of the brain and their functions, don’t worry. We’ll just start with a basic overview.

The Quartet Theory of Human Emotions

First, Koelsch et al. differentiate between effector systems and affect systems. The effector systems are the biological systems which act in combination with the affect systems to generate four different categories of neurological affect. Their diagram shows how the effector systems and affect systems have reciprocal interactions among themselves, with each other, and with the conscious appraisal system (here defined as the language system). Music, interestingly, can perhaps play a part in the conscious appraisal system (think prosody), or can instead hold a place in a nonconscious system prior to any bias which arises from translating an emotion into words.

A Quartet of Affect systems

  1. Brainstem-centered affect system – this controls ascending activation, or the subjective feeling of being energized. The brainstem controls the autonomic nervous system (ANS) as well as expression of emotions, modulation of pain, mating behavior, and coordination of behavioral ANS activity (as in freeze, flight, or fight). The brainstem also generates both somatomotor and neuroendocrine activity in response to stimuli with emotional valence.
    • Music can already evoke autonomic and motor responses at the level of the brainstem. This may be one avenue through which music can (subconsciously or consciously) modulate our general level of excitement, say, when we want to make doing chores or working out more enjoyable (I'm thinking of Jymmin).
  2. Diencephalon-centered affect system – this processes pain/pleasure regarding urges and homeostatic emotions. Koelsch et al. note that 'the thalamus imbues sensory information with affective valence', even before that information is consciously perceived.
    • The dopaminergic reward system can be activated by listening to music, which helps explain why music can make us feel happy through the release of dopamine in the brain. This happens especially during the anticipation and experience of peak emotion in music, aka when music gives you the chills.
  3. Hippocampus-centered affect system – this system regulates survival behaviors, is associated with attachment-related affects such as love, and is involved with the human social motivation for group inclusion.
  4. Orbitofrontal-centered affect system – this system handles a lot of activity. This is where 'primary appraisal' – fast, automatic, nonconscious appraisal of sensory information – occurs, as well as automatic shifts of attention and 'stimulus evaluation checks', where sensory information is imbued with emotional valence and takes part in decision making and the motivation or inhibition of behavior. The orbitofrontal cortex (OFC) also generates expectancies and controls emotional behavior. This control is shaped by social and cultural norms.
    • Music is of course mediated by social and cultural norms, so perhaps the OFC's control of emotional behavior can be modulated by musical activity, not just the emotions themselves.

Koelsch et al. describe how the integration of affect and effector systems forms an emotion percept, or nonverbal (pre-verbal) subjective feeling, via four components: an affective component; a sensory-interoceptive component, which integrates the physiological condition of the body; a motor component, consisting of action tendencies; and a cognitive component, covering cognitive (though not necessarily conscious) appraisal.

In section four, the authors describe how their quartet theory interacts with the language system. Interestingly, they note that while emotion percepts need to be reconfigured into propositional linguistic expressions,

“Music…has the advantage of evoking a feeling sensation (i.e., an emotion percept) without this sensation being biased by the use of words…although music seems semantically less specific than language, music can be more specific when it conveys information about sensations that are problematic to express with words because music can operate prior to the reconfiguration of emotion percepts into words. “

Gebauer, Kringelbach, and Vuust now give us a short and concise commentary on how we can bring all of this emotion and expectation talk together under the framework of predictive coding. In Bayesian models of predictive coding, perception, action, emotion, and learning have the following definitions:

  • Perception – the process of minimizing prediction errors between higher-level ‘prediction units’ and lower level ‘error units’
  • Action– active engagement of the motor system to resample the environment in order to reduce prediction error
  • Emotion– weight/modulator of prediction error itself, guiding behavior, action and learning
  • Learning– long term influence on the prediction units
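As a toy illustration of the 'emotion as weight/modulator of prediction error' idea – our own sketch, not Gebauer et al.'s model – the same prediction error can drive a small or a large update of the predictive model depending on the weight attached to it:

```python
def track(observations, weight, prediction=0.0):
    """Update a running prediction by a weighted prediction error."""
    trajectory = []
    for x in observations:
        error = x - prediction        # perception: mismatch between model and input
        prediction += weight * error  # learning: the weighted error updates the model
        trajectory.append(round(prediction, 2))
    return trajectory

events = [1, 1, 1, 4]                 # a surprising final event
print(track(events, weight=0.1))      # low weight: the model barely budges
print(track(events, weight=0.9))      # high weight: the surprise reshapes the model
```

On this reading, an 'emotionally salient' prediction error simply counts for more when the model is updated – one way of cashing out emotion as guiding behavior, action, and learning.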

Through conscious and/or subconscious processing of statistical regularities in music, we form expectations (predictions) that are fulfilled or violated to varying degrees. Unexpected (unanticipated, unpredicted) events are met with heightened physiological arousal and attention, and can be further modulated by (conscious or unconscious) cognitive appraisal. Pleasure in music arises from the interplay between our expectations and the actual musical event – "the predictive motion between tension and release". It seems that Meyer had the right idea all along; the science just needed to catch up.

The Quartet Theory of Human Emotions seems to provide the neurophysiological grounding by which both Huron's ITPRA theory and predictive coding accounts could actually be realized in the brain. What do you think? See you Thursday at 1pm! 7 Bristo Square, 1.210


Discussion Review – Reading Group 1 – Introduction to music and predictive processing

Thanks to all who participated! The first EMPRes session brought together musicians, psychologists, and philosophers for a discussion of prediction, expectation, and anticipation in music. We read Rohrmeier & Koelsch's (2012) review of predictive information processing in music cognition and Michael and Wolf's (2014) take on the possible impact of hierarchical predictive processing on the study of music perception and performance (both summarized here). As a group, we were generally most excited about how predictive processing could illuminate the cognitive phenomena associated with joint action in music.

Requirement of a discrete, finite set of elements for predictability and combinatoriality

  • This is reminiscent of Hockett's (1960) features of spoken language, which include duality of patterning, or the ability to form infinitely many utterances from a finite set of discrete (meaningless) units according to a series of rules. If music is a hierarchical system (in the manner of language) and has these features, then it seems plausible that the brain (or a computer) might apply similar models of hierarchical processing to each. From this arose a series of questions…

What determines the discrete units? – In Western music, sound signals are divided into 12 chromatic pitches, and this particular separation of music into discrete pitches is continually reinforced by the 5-line staff. But there are microtonal musics which contain even smaller subdivisions of pitch, and musics in which a single 'unit' is actually a continuous sound event spanning multiple Western pitches (perhaps one unit starts in a high region and falls to a lower one). Exposure to a different corpus of music (or enculturation within a certain musical environment) yields different unit-categorizations of input and different rules, and thus imposes different expectations regarding musical events.

What determines the rules? – Explicit reference sources or explicitly learned schemata may determine some rules. But most rules are learned implicitly, just like the implicit learning of linguistic grammar, from observing patterns of regularities in the surrounding musical environment. Rules within musical interaction may be explicitly agreed on (we're going to improvise over a 12-bar blues) or implicitly agreed on (the number of times a performer repeats a chorus before trading off or continuing to the end of the piece).

  • In improvisation, each performer makes decisions such as same vs new: whether to complement or contrast, imitate or not, embellish or not, return to a particular rhythm or melody or not. These choices depend on the performers' knowledge of each other: 'I've played with her loads of times before, and I know how she plays, what she expects me to do, and how she expects us to interact'.
  • Awareness of identity and personal style in performance provides a basis to form predictions of what will happen next. Even in free improvisation which is supposed to be making music without rules, performers will undoubtedly display a certain style or tendency.
  • Kate proposed a 'game theory' idea between performer and audience: if the performer thinks the audience knows where she's going next, then 'change!' The same holds between composer and audience, except that the composer must predict the audience's expectations on a more distant time scale, and must also account for the music's mediation by the performer.

Prediction in perception vs production – presumably all of Rohrmeier and Koelsch's four sources of prediction could be acquired through either (or both) perception and production of music. Perhaps producing music gives you more veridical knowledge concerning, for example, the physical constraints of a certain instrument, or experiential knowledge of a certain piece in performance.

Computational Models

Information Dynamics of Music (IDyOM) vs Hierarchical Predictive Processing (HPP)

We spent a lot of time going through the various computational models mentioned in regard to music, and we remain confused as to what the difference is (and whether there is one) between the IDyOM model and the HPP model reviewed by Michael and Wolf. The HPP model was put forth in Clark (2013), and taken up by Schaefer (2014) as a possible avenue for further research on prediction in music perception and cognition. It seems that IDyOM does not have an expectation feedback component (Pearce & Wiggins, fig. 1), whereas HPP is updated on the basis of prediction errors and expectation feedback (when the top-down prediction is mismatched with the incoming sensory signals). And what does this all mean…? Here's where we defaulted to the experts…

HPP: Clark (2013) Whatever next? Predictive brains, situated agents, and the future of cognitive science

HPP in Music: Schaefer, R (2014) Mental Representations in Musical Processing and their Role in Action-Perception Loops

IDyOM: Pearce & Wiggins (2011). Auditory Expectation: The Information Dynamics of Music Perception and Cognition

See you next time!

Next time we look into emotion in music: how our evolutionary history built us as predictive creatures, and how prediction underlies our experience of emotion in music. Setting the stage is the first chapter of Sweet Anticipation, followed by a bit of neurobiological theory of human emotions, and some insights from predictive coding in music.

Thursday 2 March, 1pm. Room 1.21, 7 Bristo Square. Reading: Huron (2006) Sweet Anticipation, Chapter 1 + Koelsch et al (2015) + Gebauer, Kringelbach & Vuust (2015)


Reading Group 1 – Introduction to music and predictive processing

Thanks to both Nikki and Lauren for the warm welcome and for helping set up these sessions!

The first EMPRes session of 2017 provided an introduction to predictive processing in music perception and cognition. We began with Rohrmeier & Koelsch’s (2012) detailed review of existing work in predictive information processing in music cognition, including converging theoretical, behavioral, computational, and brain imaging approaches. Then we looked at a commentary by Michael and Wolf (2014) regarding the impact on music research of a specific framework of predictive processing, namely Hierarchical Predictive Processing (HPP) as put forward by Schaefer. These papers were a bit more dense than the ‘introduction’ meeting was intended to be, so I’ll lay out a summary of them here, attempting to explain some of the computational bits as well. Please feel free to comment if you have any questions, or especially if you have any answers or better explanations! A review of our discussion points will be in this subsequent blog post.

Rohrmeier & Koelsch laid out the predictable qualities of music, how our brains may be utilizing those qualities (e.g. through perceptual Gestalts, structural knowledge), behavioral evidence of prediction, followed by various computational models and neural evidence for predictive processes.

Predictable information within the music

  1. Predictability and combinatoriality require a discrete, finite set of elements
  2. Prediction in music occurs at both lower levels of processing (predicting the next note) and higher levels (predicting a development section in a sonata)
  3. Four sources of prediction, which may work together or be in ‘mutual conflict’:
    • Acquired style-specific syntactic/schematic knowledge
    • Sensory & low level predictions (Gestalts; ‘data-driven’ perception)
    • Veridical Knowledge (from prior knowledge of/exposure to the piece)
    • Non-sensory structures acquired through online learning (knowledge gained from current listening, e.g. motifs, statistical structures, probability profiles)
  4. Prediction in music is messy: constant parallel predictions are made with respect to not only single melodic lines, but also complex harmonies, overall key structure, polyphonic and polyrhythmic sound streams, and phrase-, movement-, or whole-composition levels. It becomes even messier when adding in texture/timbre changes, or when considering the more polyrhythmic, polymetrical, or complex polyphonic music of non-Western traditions

Behavioral Findings

  1. Prediction effects are found in behavioral responses to (identification of) unexpected musical events, in the case of unexpected tones, intervals, and chords
  2. Musical priming studies (adapted from semantic priming studies in language) give evidence of implicit local knowledge, and perhaps even higher-level tonal key and temporal aspects

Computational Models – not as scary as they sound!

Why do we like them? “Predictive computational models provide a link between theoretical, behavioural, and neural accounts of music prediction”

Hand-crafted models

Hand-crafted models, such as Piston's table of usual root progressions, show general harmonic (root) progression expectancies based on tendencies in Western music. Your theory courses teach you to explicitly recognize these tendencies, which are implicitly learned and recognized by listeners enculturated in Western music.
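A hand-crafted model can be as simple as a lookup table. The fragment below is purely illustrative – a toy in the spirit of such a table, not a transcription of Piston's actual one – but it shows the idea of writing the 'usual' continuations down explicitly rather than learning them from data:

```python
# Illustrative only: a toy hand-crafted expectancy table, not Piston's actual table.
USUAL_PROGRESSIONS = {
    "ii": ["V"],
    "IV": ["V", "ii"],
    "V":  ["I", "vi"],
    "vi": ["ii", "IV"],
}

def expected_next(root):
    """Return the 'usual' continuations for a root, if the table has an opinion."""
    return USUAL_PROGRESSIONS.get(root, ["(no strong expectancy encoded)"])

print(expected_next("V"))   # ['I', 'vi'] – a dominant sets up a strong expectation
print(expected_next("I"))   # the tonic can go almost anywhere, so nothing is encoded
```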

Probabilistic models

N-gram models chop long segments up into shorter bits, and analyze those bits for statistical probabilities to predict the likelihood of the next unit.

  • Example: a 3-gram model of the pitch sequence {A C E G C E A C E G} chops the sequence into overlapping bits of three pitches and counts the number of times each bit occurs [ACE: 2; CEG: 2; EGC: 1; GCE: 1; CEA: 1; EAC: 1]
  • We can use this model to predict that the notes 'C E' will occur after A 2/3 of the time, or after G 1/3 of the time (for this example, it's easy to just count every instance of [_ C E] – 3 instances – and see that 2 of those are A+CE and 1 is G+CE); a short sketch of this counting follows below
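Here is a quick sketch (our own illustration) of that counting, using the same toy pitch sequence:

```python
from collections import Counter

pitches = list("ACEGCEACEG")   # the sequence {A C E G C E A C E G}

# Count every overlapping 3-gram.
trigrams = Counter(tuple(pitches[i:i + 3]) for i in range(len(pitches) - 2))
print(trigrams)                # ACE:2, CEG:2, EGC:1, GCE:1, CEA:1, EAC:1

# How often does each pitch precede 'C E'?
preceding = Counter(a for (a, b, c) in trigrams.elements() if (b, c) == ("C", "E"))
total = sum(preceding.values())
print({p: n / total for p, n in preceding.items()})   # A: 2/3, G: 1/3
```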

Multiple Viewpoint Idea

  • Using information from multiple different features (viewpoints) to aid the prediction of a target feature
  • In music, this means using "duration or metrical position to improve the prediction of pitch class" – for example, an 8th-note anacrusis at the start of a piece will likely be 5 leading to 1 (Sol, Do). This particular prediction, though, requires previous exposure to a larger corpus of music, since it would be improbable to infer statistical regularities within the current piece from only two notes

Short-term models and Long-term models

  • Short-term – knowledge from the current listening; "specific repetitive and frequent patterns that are particular to the current piece and picked up during implicit online-learning"
  • Long-term – knowledge from an entire corpus; "long-term acquired (melodic) patterns"

IDyOM – Information Dynamics of Music

Hidden Markov Models

  • A Markov transition matrix is the same as a 2-gram model. So our earlier set of pitches {A C E G C E A C E G} would be split into [AC: 2; CE: 3; EG: 2; GC: 1; EA: 1], and the probabilities are modeled between single events (see the sketch after this list).
  • A Hidden Markov Model (HMM) generates probabilities not from single events, but instead generates probability distributions from hidden deep structure states. The probability of each subsequent state depends only on the previous state (not future states), reflecting the temporality of musical processing
  •  An introduction to HMM
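And here is a sketch of the first-order (2-gram) transition counts for the same toy sequence – note that this only illustrates the visible transition matrix, not the hidden states that make an HMM 'hidden':

```python
from collections import Counter, defaultdict

pitches = list("ACEGCEACEG")                  # {A C E G C E A C E G}
bigrams = Counter(zip(pitches, pitches[1:]))  # AC:2, CE:3, EG:2, GC:1, EA:1

# Normalise the counts into transition probabilities P(next | current).
transitions = defaultdict(dict)
for (current, nxt), count in bigrams.items():
    row_total = sum(c for (a, _), c in bigrams.items() if a == current)
    transitions[current][nxt] = count / row_total

print(dict(transitions))   # e.g. from E the model expects G 2/3 of the time, A 1/3
```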

Dynamic Bayesian Networks (DBN)

  • A DBN is an extension of an HMM in the same way that the multiple viewpoint model was an extension of n-gram models. DBNs analyze “dependencies and redundancies between different musical structures, such as chord, duration, or mode” to generate predictions

Connectionist Networks – Neural networks are designed to represent how actual biological neurons work, combining probabilistic models like those listed above with practical models of neural connections, firing, and growth dynamics

  • MUSACT – an early musical neural net pre-programmed with Western features (12 chromatic pitches, 24 diatonic major and minor chords, 24 keys) – does very well at predicting features of tone perception and prediction
  • Self-Organizing Map – unsupervised learning of features of tonal music (this differs from MUSACT in that it was not pre-programmed with Western features) – matched some experimental data for predicting chord relations, key relations, and tone relations
  • Simple Recurrent Networks – unsupervised learning of transition probabilities and structures – not as efficient as n-gram models
  • Dynamic, cognitively motivated oscillator models – unsupervised learning to adapt to metrical structure – however, slow to adapt to tempo changes

Neuroscientific Evidence

Increased brain response to incongruent (unexpected) stimuli within a sequence – in music this may be hearing a normal chord progression followed by an unusually placed chord

  • ERAN – early right anterior negativity
  • MMN – mismatch negativity

It is not clear from the neuroscientific evidence whether these responses are the result of local or hierarchical violations.

Brain areas involved

  • Ventral pre-motor cortex and BA 44 (Brodmann's area 44; the right-hemisphere analogue of Broca's language area in the left hemisphere – perhaps both perform hierarchical processing)

Michael & Wolf laid out perhaps a more accessible overview of areas where a particular predictive processing framework, hierarchical predictive processing (HPP), might lend a novel contribution to the study of music cognition and human social cognition more generally.

HPP is a predictive framework which describes the brain as having a combination of lower-level and higher-level models arranged, of course, in a hierarchy. Each higher-level model generates and sends predictions downstream to the model immediately below it, while each lower-level model sends sensory input upstream to the model immediately above it. The goal is to minimize prediction error between the higher-level predictions and the lower-level sensory representations. Every time a higher-level prediction meets a lower-level sensory input that *does not match* the prediction, a prediction error is sent. The higher-level model then takes that prediction error and changes its prediction, repeating until the incoming signal and the downward prediction are sufficiently matched. Higher-level models are thought to represent changes occurring over longer time scales, such as more abstract, structural, schematic, or style-specific aspects of music. Lower-level models represent change in sensory input over shorter time scales, as in the immediate local events of the next note or rhythm.
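As a cartoon of that exchange between two adjacent levels – our gloss, not Michael & Wolf's (or Clark's) implementation – the loop is just repeated revision of the prediction until the residual error is small enough:

```python
def settle(prediction, sensory_input, gain=0.5, tolerance=0.01):
    """Let a higher-level model revise its prediction until it matches the input."""
    error = sensory_input - prediction        # lower level reports the mismatch
    steps = 0
    while abs(error) > tolerance:
        prediction += gain * error            # higher level revises its prediction
        error = sensory_input - prediction    # the residual is fed forward again
        steps += 1
    return prediction, steps

print(settle(prediction=60.0, sensory_input=62.0))   # converges on the input in a few steps
```

In the real framework each 'level' would be a generative model predicting over its own time scale, and many such levels would settle simultaneously; the sketch only shows the error-minimization loop itself.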

Musical Preferences

  • HPP seems of little use in understanding musical preferences. It can't be assumed that the preferred balance between 'optimally predictable' and 'a bit of uncertainty' is the same across individuals. The authors dub this search for the 'sweet spot of predictability' the "Goldilocks fallacy", since even the right amount of predictability in a novice trumpet player's crude sounds is likely still unpleasant.

Embodying Music

  • HPP might help in furthering our understanding of embodied music cognition by providing a clear link between perception and action, where perception simply is reflected by “a graded hierarchy of models functioning according to the same basic principles” separated only by time scales, and action is “in a sense equivalent to updating of higher-level cognitive models… through active inference”

Joint Action

  • In joint music making, agents engage in recursive higher-order modeling: "agents are not only modeling the music, but they are modeling the other agent's actions and intentions, as well as the other agent's model of her actions and intentions". If joint music making is construed by the brain as a coordination problem, then HPP may be the perfect model to step in and try to minimize prediction (coordination) error in these complex, recursive social interactions.

Spring 2017 – Reading group dates – Music in the predictive brain

1-2pm, every fourth Thursday, starting 2 Feb.

Theme for our Spring 2017 reading groups: Music in the predictive brain


Shannon on the reading group theme, ‘Music in the predictive brain’

Post by Shannon Proksch:

“Any introduction to music theory class will tell you that music is a complex interchange of moments of tension juxtaposed with moments of release. Through practised study, or passive listening, we learn to anticipate, expect, and predict these patterns of tension and release in the music that surrounds us—framing our emotional and social engagement with music and others in our musical world.

 “Expectation and prediction constitute central mechanisms in the perception and cognition of music” Rohrmeier & Koelsch, 2011

How does a bunch of sound in a messy sensory environment become a musical perception in our mind? How does emotion serve to regulate, or emerge from, our musical experience? How can musical rhythm provide insight into human perception?

 “Minds are ‘wired’ for expectation” Huron, 2006

“Brains…are essentially prediction machines” Clark, 2013

“A mind is fundamentally an anticipator, an expectation generator” Dennett 1996

Our reading group sessions this semester are going to cover a broad introduction to the interdisciplinary study of predictive processing in music, to help us grasp how expectation shapes our perception in music and beyond. I’ve suggested a range of papers that examine the musical brain through a look at work in empirical musicology, music psychology, neuroscience, cognitive science, and philosophy.”
