Discussion Review – Reading Group 2 – Emotion in music and predictive processing

Thanks to all who came along for our second EMPRes session of 2017! It’s always nice to see more faces the second time around; that means we must be doing something right 😉 This week we heard insights from musicians, psychologists, philosophers, and a visiting researcher in cognitive neuroscience who, luckily for us, chimed in with some ideas regarding this week’s neuroscience paper. For this session, we read the introductory chapter of Huron’s book Sweet Anticipation (2006), Koelsch et al.’s (2015) neuroscientific overview of emotion with some musical examples, and Gebauer, Kringelbach and Vuust’s (2015) review of Koelsch et al.’s proposed Quartet Theory of Emotion through the lens of predictive processing in music. Below is a review of our discussion, and a summary of the texts can be found here.

*Reminder* Our next meeting is on Thursday March 30th at 1pm in 50 George Square, room 2.30 *note the room change*

ITPRA, the Western art music tradition, and cognitivism

In evaluating the first chapter of Huron’s book, we realized that the groundwork of his ITPRA model was laid out in the context of many expectation-laden events other than music. But that’s okay, since we know where the rest of his text is headed as an explanation of musical expectation.

The ITPRA model aims to explain expectation both over very general, long time frames involving conscious imaginings (such as the expectation of receiving a raise at work), and over specific, short time frames (such as the expectation of the next note or chord in a sequence). The evolutionary story suggests that emotions play (and played) an integral role in forming accurate expectations, and thus in enhancing chances of survival. However, the first paragraph of the introduction provides a rather offhand account of emotions as potentially frivolous coloring of experience:

 “… emotions add subtle nuances that color our perceptions of the world. Emotions add depth to existence; they give meaning and value to life.” -Huron, 2006

Perhaps, while emotions may be beneficial in informing and evaluating expectations/predictions, they may not be altogether necessary. We thought about this a bit more when we discussed emotion’s potential role in predictive coding.

The reference to Meyer’s account of the “emotional content of music aris[ing] through the composer’s choreographing of expectation” was also problematic. As Nikki pointed out, contrary to the Western art tradition of pre-composed music mediated by a performer, the composer is not always separate from the performer, either in body or in time. Not all music is scripted: improvisatory music is composed by the performer at the same time that it is performed. Russ noted that this consideration has been left out of most (all?) of the literature we’ve reviewed so far in this reading group, and that it would add clarity if authors inserted a simple statement along the lines of ‘for the purposes of the current project, we are looking at pre-composed music’, or a claim that their project applies equally well to both pre-composed and improvisatory contexts. As our resident improviser, Russ was asked whether it feels like he is choreographing expectations during an improvised performance. Perhaps not in free improvisation, but at, say, a wedding, he would be more likely to engage melodic ‘performance norms—things that you would expect [at a wedding]’, such as a certain genre, form, or style which ‘fits within the norm’. In other words, context matters.

Coming back to the ITPRA model itself, it does seem quite rooted in the cognitivist paradigm, especially given that an entire letter of the acronym is devoted to ‘appraisal’, which grants heavy influence to cognitive evaluations.

The Quartet Theory of Human Emotions

The main conclusion to draw from this model concerns the reciprocal interactions between effector systems, affect systems, and conscious appraisal systems (such as language), as well as the reciprocal interactions among the elements of each system. Koelsch et al. provide neurological evidence for the emotional involvement of each of the brain’s four affect systems (orbitofrontal-centered, hippocampus-centered, diencephalon-centered, and brainstem-centered), as informed by the brain’s four effector systems (peripheral arousal, action tendencies, motor expression, and memory & attention). The interaction of these systems leads to an ‘emotion percept’, which consists of four components: an affective component, a sensory-interoceptive component, a motor component, and a cognitive component. The emotion percept is a feeling sensation evoked before any bias from translation into propositional content (words). This conception of a pre-verbal ‘emotion percept’ seems similar to what Ian Cross terms ‘floating intentionality’: music’s general ‘aboutness’, the agreed meaningfulness attributed to music without mutual agreement on specifically what that meaning is.

“music’s inexpliciteness, its ambiguity or floating intentionality may thus be regarded as a highly advantageous characteristic of its function for groups; music, then, might serve as a medium for the maintenance of human social flexibility” –Cross 2004

Predictive coding accounts of music: is emotion necessary?

Gebauer, Kringelbach, and Vuust review the Quartet Theory of Human Emotions through the lens of predictive coding, claiming that emotion could be the weight/modulator of prediction error itself, guiding behavior, action, and learning. Perhaps rather than (or in addition to) a modular influence of a global emotion (such as joy or anger), it may be useful to consider a hierarchy of neurons each carrying a particular valence. At the lowest level there might be a binary yes/no (good/bad) valence receptive field, and the next level might consist of neurons which respond to populations of lower-level neurons (in the same manner that V1 simple cells respond preferentially to certain orientations of lines, built up from the center-surround responses of retinal ganglion cells, which either facilitate or inhibit a response). More complex emotions (those we may be cognitively aware of) would then arise from a hierarchy of these valence responses at differing levels of complexity.

However, Lauren raised a good point: why is emotion even necessary in this prediction and prediction-error model? It may seem a useful and exciting move to account for emotions as modulations of prediction error, and as a motivational factor for making accurate predictions; however, the predictive coding model seems to work just fine without attempting to squeeze emotions into the mix. Indeed, seeking to define emotions because ‘we know we have emotions’ is a lot like examining the pineal gland in search of the soul because ‘we know we have souls’.

*Reminder* Our next meeting is on Thursday March 30th at 1pm in 50 George Square, room 2.30 *note the room change*

Our topic will be Rhythm in music and predictive processing, reading Vuust et al. (2009) + Vuust & Witek (2014). See you next time!

 

Posted in Uncategorized

The Psychology of Musical Development – 30 years on, Prof. David Hargreaves (Wed 8 Mar)

Looking at the well-thumbed copy of the 1986 edition of The Developmental Psychology of Music on my bookshelf, and looking forward to this talk tomorrow!

Prof. David Hargreaves (Roehampton University) – Atrium, Alison House. 5.15pm

“I shall reflect upon some of the changes that have taken place since the publication of my book The Developmental Psychology of Music (CUP, 1986), which has recently been completely rethought and reworked by Alexandra Lamont and I (Hargreaves and Lamont, 2017). I will review some of the changes that have taken place in music itself, and in the ways in which people engage with it; in developmental psychology and education more generally; and in music psychology. I will then go on to identify 10 theoretical models of musical development, and outline 5 key theoretical issues on which they might be assessed. Three approaches seem to have particular potential for success in the future, namely social cognitive models which focus on the self and identity; approaches based on music theory; and neuroscientific research. What might this field look like 30 more years on in 2047, the year of my 99th birthday?”

David Hargreaves is Professor of Education and Froebel Research Fellow, and has previously held posts in the Schools of Psychology and Education at the Universities of Leicester, Durham and the Open University. He is also Visiting Professor of Research in Music Education at the University of Gothenburg, Sweden, and Adjunct Professor at Curtin University, Perth, Australia. He is a Chartered Psychologist and Fellow of the British Psychological Society. He was Editor of Psychology of Music 1989-96, Chair of the Research Commission of the International Society for Music Education (ISME) 1994-6, and is currently on the editorial boards of 10 journals in psychology, music and education. In recent years he has spoken about his research at conferences and meetings in various countries on all 5 continents. He has been a keynote speaker at the Annual Conference of the BPS, and gave a TEDx talk at Warwick in 2011. He has appeared on BBC TV and radio as a jazz pianist and composer, and is organist in the East Cambridgeshire Methodist church circuit. In 2004 he was awarded an honorary D.Phil, Doctor Honoris Causa, by the Faculty of Fine and Applied Arts at the University of Gothenburg, Sweden, in recognition of his ‘most important contribution towards the creation of a research department of music education’ in the School of Music and Music Education in that University.

Posted in Spring 2017

Reading Group 2 – Emotion in music and predictive processing

Greetings everyone! The second reading group brings an overview of emotion and predictive processing in music. The stage is set with a general introduction to the ITPRA theory of expectation, and the evolutionary adaptation of expectation and emotion, in chapter one of David Huron’s Sweet Anticipation (2006). Koelsch et al. (2015) provide an in-depth, comprehensive account of the neurobiological foundations of human emotions, dubbed “The quartet theory of human emotions”, with a generous sprinkling of evidence from the neuroscience of music. This is followed by a concise review from Gebauer, Kringelbach and Vuust (2015) which links the quartet theory and predictive coding as a plausible account of emotion in music. Below is a summary of the articles. I’d recommend reading the first and third texts first, then tackling the neurological article with concepts of expectation, prediction, and emotion from those texts in mind. A review of our discussion points will be in this subsequent blog post.

Huron notes that we have a pretty well agreed-upon understanding of musical emotion and expectation– for example, in Western music minor chords are sad, a diminished seventh can signify suspense– but the principles of these folk-psychological generalizations are by no means cross-culturally universal. If we want a clear picture of how we process emotion and expectation in music, then we need to appeal to “psychology proper”, including neuroscience, biology, evolution, and culture. Expectation is a biological adaptation making use of specific physiological structures (such as the endocrine (hormone) system), and our culture is the environment in which we learn, apply, and refine our expectations. There are obvious biological advantages to forming accurate expectations (predictions), and emotions can be thought of as the ‘motivational amplifiers’ either enhancing or diminishing those expectations in a given situation. Referencing Meyer’s Emotion & Meaning in Music, Huron goes on to introduce the ITPRA theory of expectation.

“The principle emotional content of music arises through the composer’s choreographing of expectation.” –Meyer 1956

At least at a superficial level, the ITPRA model resembles the hierarchical predictive processing framework we reviewed last time; however, it encompasses a broader range than seemingly brain-centered prediction units. ITPRA consists of five response systems: Imagination and Tension (pre-outcome responses) occur before an (un)expected event’s onset, and the Prediction, Reaction, and Appraisal responses (post-outcome responses) occur after the event onset.

  1. Imagination– Thinking and feeling about future possibilities. Simulating the future event as if it had already happened.
  2. Tension– Preparation for an anticipated event, including motor preparation (arousal) and perceptual preparation (attention) with the goal of matching the appropriate levels of arousal and attention just in time for the expected event.
    ——Event Onset——
  3. Prediction Response– Generates expectation-related emotion: if stimulus is expected, the emotional response is positively valenced; if the stimulus is unexpected, the emotional response is negatively valenced.
  4. Reaction Response– Immediate, non-conscious somatic response. Similar to reflex, but importantly, can be learned/trained over time (i.e. through schemas). Can be associated with system 1 in dual process theories of emotion.
  5. Appraisal– Slower, context-dependent, cognitive response. Takes into account complex social/biological factors.  Can be associated with system 2.
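The prediction response's valence rule lends itself to a toy sketch (illustrative only: the linear probability-to-valence mapping is my assumption, not part of Huron's model):

```python
def prediction_response(p_expected: float) -> float:
    """Toy version of the ITPRA prediction response: map the subjective
    probability that the event which just occurred was the expected one
    onto an emotional valence in [-1, 1].

    The linear mapping is purely illustrative, not Huron's formulation.
    """
    # Expected events (p near 1) -> positive valence;
    # unexpected events (p near 0) -> negative valence.
    return 2.0 * p_expected - 1.0

# A highly expected resolution is positively valenced...
assert prediction_response(0.9) > 0
# ...while a surprising event is negatively valenced.
assert prediction_response(0.1) < 0
```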

Koelsch et al. provide a very detailed look at the ‘brainy bits’ of expectation and emotion. If you’re new to neuroscience or haven’t memorized the various parts of the brain and their functions, don’t worry. We’ll just start with a basic overview.

The Quartet Theory of Human Emotions

First, Koelsch et al. differentiate between effector systems and affect systems. The effector systems are the biological systems which act in combination with the affect systems to generate four different categories of neurological affect. In their diagram, you can see how the effector systems and affect systems have reciprocal interactions among themselves, with each other, and with the conscious appraisal system (here defined as the language system). Music, interestingly, can perhaps play a part in the conscious appraisal system (think prosody), or can instead hold a place in a nonconscious system prior to any bias which arises from translating an emotion into words.

A Quartet of Affect systems

  1. Brainstem-centered affect system- this controls ascending activation, or the subjective feeling of being energized. The brainstem controls the autonomic nervous system (ANS) as well as expression of emotions, modulation of pain, mating behavior, and coordination of behavioral ANS activity (as in freeze, flight, or fight). The brainstem also generates both somatomotor and neuroendocrine activity in response to stimuli with emotional valence.
    • Music can already evoke autonomic and motor responses at the level of the brainstem. This may be one avenue through which music can (subconsciously or consciously) modulate our general level of excitement, say, when we want to make doing chores or working out more enjoyable (I’m thinking of jymmin).
  2. Diencephalon-centered affect system- this processes pain/pleasure regarding urges and homeostatic emotions. Koelsch et al. note that ‘the thalamus imbues sensory information with affective valence’, even before that information is consciously perceived.
    • The dopaminergic reward system can be activated by listening to music, of course explaining why music can make us feel happy through release of dopamine in the brain. This happens especially during anticipation and experience of peak emotion in music, aka when music gives you the chills.
  3. Hippocampus-centered affect system- this system regulates survival behaviors, and is associated with attachment-related affects such as love, and is involved with the social human motivation for group-inclusion.
  4. Orbitofrontal-centered affect system- this system handles a lot of activity. This is where ‘primary appraisal’—fast, automatic, nonconscious appraisal of sensory information—occurs, as well as automatic shifts of attention and ‘stimulus evaluation checks’, where sensory information is imbued with emotional valence and takes part in decision making and the motivation or inhibition of behavior. The orbitofrontal cortex (OFC) also generates expectancies and controls emotional behavior. This control is shaped by social and cultural norms.
    • Music is of course mediated by social and cultural norms, so perhaps the OFC control of emotional behavior can be modulated by musical activity, not just the emotions themselves.

Koelsch et al. describe how the integration of affect and effector systems forms an emotion percept, or nonverbal (pre-verbal) subjective feeling, via four components: an affective component; a sensory-interoceptive component, which integrates the physiological condition of the body; a motor component, consisting of action tendencies; and a cognitive component, detailing cognitive (though not necessarily conscious) appraisal.

In section four, the authors describe how their quartet theory interacts with the language system. Interestingly, they note that while emotion percepts need to be reconfigured into propositional linguistic expressions,

“Music…has the advantage of evoking a feeling sensation (i.e., an emotion percept) without this sensation being biased by the use of words…although music seems semantically less specific than language, music can be more specific when it conveys information about sensations that are problematic to express with words because music can operate prior to the reconfiguration of emotion percepts into words. “

Gebauer, Kringelbach, and Vuust now give us a concise commentary on how we can bring all of this emotion and expectation talk together under the framework of predictive coding. In Bayesian models of predictive coding, perception, action, emotion, and learning have the following definitions:

  • Perception – the process of minimizing prediction errors between higher-level ‘prediction units’ and lower level ‘error units’
  • Action– active engagement of the motor system to resample the environment in order to reduce prediction error
  • Emotion– weight/modulator of prediction error itself, guiding behavior, action and learning
  • Learning– long term influence on the prediction units
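These four definitions can be strung together in a deliberately minimal sketch of one predictive-coding update; treating 'emotion' as a single scalar gain on the prediction error is my simplification of the commentary's proposal, not code from the paper:

```python
def predictive_coding_step(prediction, sensory_input, emotion_weight, learning_rate=0.1):
    """One toy predictive-coding update.

    prediction:     the higher-level 'prediction unit' state
    sensory_input:  the signal arriving at the lower-level 'error unit'
    emotion_weight: scalar gain on the error, standing in for the idea of
                    emotion as a weight/modulator of prediction error
    learning_rate:  how strongly each error revises the prediction ('learning')
    """
    error = sensory_input - prediction            # prediction error
    weighted_error = emotion_weight * error       # 'emotion' modulates the error
    prediction += learning_rate * weighted_error  # error minimization ('perception'/'learning')
    return prediction, weighted_error

# Repeated updates drive the prediction toward the input; a higher
# emotion weight makes the same error more consequential.
p = 0.0
for _ in range(50):
    p, _ = predictive_coding_step(p, sensory_input=1.0, emotion_weight=2.0)
assert abs(p - 1.0) < 0.01
```

Setting emotion_weight to 0 stops all updating, which makes vivid the sense in which this gain is said to guide behavior, action, and learning.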

Through conscious and/or subconscious processing of statistical regularities in music, we form expectations (predictions) that are fulfilled or violated to varying degrees. Unexpected (unanticipated, unpredicted) events are met with heightened physiological arousal and attention, and can be further modulated by (conscious or unconscious) cognitive appraisal. Pleasure in music arises from the interplay between our expectations and the actual musical event– “the predictive motion between tension and release”. It seems that Meyer had the right idea all along; the science just needed to catch up.

The Quartet Theory of Human Emotions seems to provide the neurophysiological grounding for the mechanisms by which both Huron’s ITPRA theory and predictive coding accounts could actually occur in the brain. What do you think? See you Thursday at 1pm! 7 Bristo Square, 1.210

Posted in Uncategorized

Discussion Review – Reading Group 1 – Introduction to music and predictive processing

Thanks to all who participated, the first EMPRes session brought about discussion from musicians, psychologists, and philosophers regarding prediction, expectation, and anticipation in music. We read Rohrmeier & Koelsch’s (2012) review of predictive information processing in music cognition and Michael and Wolf’s (2014) take on the possible impact of hierarchical predictive processing in studying music perception and performance (both summarized here). As a group, we were generally most excited about how predictive processing could illuminate the cognitive phenomena associated with joint action in music.

Requirement of a discrete, finite set of elements for predictability and combinatoriality

  • This is reminiscent of Hockett’s (1960) features of spoken language which includes duality of patterning, or the ability to form infinitely many utterances from a finite set of discrete (meaningless) units according to a series of rules. If music is a hierarchical system (in the manner of language) and has these features, then it seems plausible that the brain (or a computer) might apply similar models of hierarchical processing to each. From this arose a series of questions…

What determines the discrete units?– in Western music, sound signals are divided into 12 chromatic pitches. And this particular separation of music into discrete pitches is continually reinforced by the 5-line staff. But there are microtonal musics which contain even smaller subdivisions of pitch, or musics where a single ‘unit’ is actually a continuous sound event spanning multiple Western pitches (perhaps one unit starts in this high area and falls to a lower area). Exposure to a different corpus of music (or enculturation within a certain musical environment) yields different unit-categorizations of input, different rules, and thus imposes different expectations regarding musical events.

What determines the rules?– Explicit reference sources or explicitly learned schema may determine some rules. But most rules are learned implicitly, just like the implicit learning of linguistic grammar, from observing patterns of regularities in the surrounding musical environment. Rules within musical interaction may be explicitly agreed on (we’re going to improvise over 12-bar blues) or implicitly agreed on (the number of times a performer repeats a chorus before trading off or continuing to the end of the piece).

  • In improvisation, each performer makes decisions such as same vs new: whether to complement or contrast, imitate or not, embellish or not, to go back to a particular rhythm/melody or not. These choices depend on the performers’ knowledge of each other: ‘I’ve played with her loads of times before, and I know how she plays, what she expects me to do, and how she expects us to interact’.
  • Awareness of identity and personal style in performance provides a basis for forming predictions of what will happen next. Even in free improvisation, which is supposedly music-making without rules, performers will undoubtedly display a certain style or tendency.
  • Kate proposed a ‘game theory’ idea between the performer and audience: if the performer thinks the audience knows where she’s going next, then ‘change!’ The same holds between composer and audience, except that the composer must predict the audience’s expectations on a more distant time scale, as well as account for the music’s mediation by the performer.

Prediction in perception vs production– presumably all four of Rohrmeier and Koelsch’s sources of prediction could be acquired through either (or both) perception and production of music. Perhaps producing music gives you more veridical knowledge concerning, for example, the physical constraints of a certain instrument, or experiential knowledge of a certain piece in performance.

Computational Models

Information Dynamics of Music (IDyOM) vs Hierarchical Predictive Processing (HPP) We spent a lot of time going through the various computational models mentioned in relation to music, and remain confused as to whether, and how, the IDyOM model differs from the HPP model reviewed by Michael and Wolf. The HPP model was put forth in Clark (2013), and taken up by Schaefer (2014) as a possible avenue for further research on prediction in music perception and cognition. It seems that IDyOM does not have an expectation-feedback component (Pearce & Wiggins, fig. 1), whereas HPP is updated based on prediction errors and expectation feedback (when the top-down prediction is mismatched with the incoming sensory signals). And what does this all mean…? Here’s where we defaulted to the experts…

HPP: Clark (2013) Whatever next? Predictive brains, situated agents, and the future of cognitive science

HPP in Music: Schaefer, R (2014) Mental Representations in Musical Processing and their Role in Action-Perception Loops

IDyOM: Pearce & Wiggins (2011). Auditory Expectation: The Information Dynamics of Music Perception and Cognition

See you next time!

Next time we look into emotion in music: how our evolutionary history built us as predictive creatures, and how prediction underlies our experience of emotion in music. Setting the stage is the first chapter of Sweet Anticipation, followed by a bit of neurobiological theory of human emotions, and some insights from predictive coding in music.

Thursday 2 March, 1pm. Room 1.21, 7 Bristo Square. Reading: Huron (2006) Sweet Anticipation, Chapter 1 + Koelsch et al (2015) + Gebauer, Kringelbach & Vuust (2015)

Posted in Spring 2017

Reading Group 1 – Introduction to music and predictive processing

Thanks to both Nikki and Lauren for the warm welcome and for helping set up these sessions!

The first EMPRes session of 2017 provided an introduction to predictive processing in music perception and cognition. We began with Rohrmeier & Koelsch’s (2012) detailed review of existing work on predictive information processing in music cognition, including converging theoretical, behavioral, computational, and brain imaging approaches. Then we looked at a commentary by Michael and Wolf (2014) on the impact on music research of a specific framework of predictive processing, namely Hierarchical Predictive Processing (HPP) as put forward by Schaefer. These papers were a bit denser than an ‘introduction’ meeting was intended to be, so I’ll lay out a summary of them here, attempting to explain some of the computational bits as well. Please feel free to comment if you have any questions, or especially if you have any answers or better explanations! A review of our discussion points will be in this subsequent blog post.

Rohrmeier & Koelsch laid out the predictable qualities of music and how our brains may utilize those qualities (e.g. through perceptual Gestalts and structural knowledge), presented behavioral evidence of prediction, and followed with various computational models and neural evidence for predictive processes.

Predictable information within the music

  1. Predictability and combinatoriality require a discrete, finite set of elements
  2. Prediction in music occurs on both lower-level processes (predicting the next note) and higher-level processes (predicting a development section in a sonata)
  3. Four sources of prediction, which may work together or be in ‘mutual conflict’:
    • Acquired style-specific syntactic/schematic knowledge
    • Sensory & low level predictions (Gestalts; ‘data-driven’ perception)
    • Veridical Knowledge (from prior knowledge of/exposure to the piece)
    • Non-sensory structures acquired through online learning (knowledge gained from current listening, e.g. motifs, statistical structures, probability profiles)
  4. Prediction in music is messy: constant parallel predictions are made with respect not only to single melodic lines, but also to complex harmonies, overall key structure, polyphonic and polyrhythmic sound streams, and phrase-, movement-, or whole-composition levels. It becomes even messier when adding texture/timbre changes, or when considering the more polyrhythmic, polymetric, or complexly polyphonic non-Western musics

Behavioral Findings

  1. Prediction effects are found in behavioral responses to (identification of) unexpected musical events, in the case of unexpected tones, intervals, and chords
  2. Musical priming studies (adapted from semantic priming studies in language) give evidence of implicit local knowledge, and perhaps even higher-level tonal key and temporal aspects

Computational Models- not as scary as they sound!

Why do we like them? “Predictive computational models provide a link between theoretical, behavioural, and neural accounts of music prediction”

  1. Hand-crafted models, such as Piston’s table of usual root progressions, show the general harmonic (root) progression expectancies based on tendencies in Western music. Your theory courses teach you to explicitly recognize these tendencies, which are implicitly learned and recognized by listeners enculturated in Western music.

  2. Probabilistic models

N-gram models chop long segments up into shorter bits, and analyze those bits for statistical probabilities to predict the likelihood of the next unit.

  • Example of a 3-gram model of the sequence of pitches {A C E G C E A C E G}: the sequence is chopped into shorter bits of three pitches, and the number of times each bit occurs is counted [ACE: 2 (occurs two times); CEG: 2; EGC: 1; GCE: 1; CEA: 1; EAC: 1]
  • We can use this model to predict that the notes ‘CE’ will occur after A 2/3 of the time, or after G 1/3 of the time (for this example, it’s easy to just count every instance of [_CE], 3 instances, and see 2 of those are A+CE, and 1 is G+CE)
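The counts and probabilities above are easy to verify with a few lines of Python (a generic n-gram counter, not the actual IDyOM implementation):

```python
from collections import Counter

def ngram_counts(sequence, n):
    """Count every contiguous n-gram in the sequence."""
    return Counter(tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1))

pitches = ["A", "C", "E", "G", "C", "E", "A", "C", "E", "G"]
trigrams = ngram_counts(pitches, 3)

assert trigrams[("A", "C", "E")] == 2
assert trigrams[("C", "E", "G")] == 2

# Predicting what precedes 'C E': of the 3 trigrams ending in (C, E),
# 2 start with A and 1 starts with G, so P(A before CE) = 2/3.
ce_contexts = {k: v for k, v in trigrams.items() if k[1:] == ("C", "E")}
assert sum(ce_contexts.values()) == 3 and ce_contexts[("A", "C", "E")] == 2
```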

Multiple Viewpoint Idea

  • Using information from multiple different features (viewpoints) to aid the prediction of a target feature
  • In music, using “duration or metrical position to improve the prediction of pitch class”, for example, an 8th note anacrusis at the start of a piece will likely be 5 leading to 1 (Sol, Do) — this particular prediction though, necessitates prediction based on previous exposure to a larger corpus of music, since it would be improbable to infer statistical correlations of a current piece based on only two notes
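A minimal sketch of the multiple-viewpoint idea, with an invented toy corpus (the (pitch, beat) pairs and the helper name are mine, purely for illustration):

```python
from collections import Counter

def viewpoint_prediction(events, target_index):
    """Predict the pitch at target_index using the metrical-position
    viewpoint: estimate P(pitch | metrical position) from all other
    events sharing that position."""
    position = events[target_index][1]
    counts = Counter(pitch for i, (pitch, pos) in enumerate(events)
                     if pos == position and i != target_index)
    total = sum(counts.values())
    return {pitch: c / total for pitch, c in counts.items()}

# (pitch, metrical position within a 4-beat bar): an invented toy corpus
corpus = [("C", 1), ("E", 2), ("G", 3), ("C", 4),
          ("C", 1), ("F", 2), ("G", 3), ("B", 4),
          ("C", 1), ("E", 2), ("G", 3), ("C", 4)]

# What pitch do we expect on beat 1 of the third bar?
dist = viewpoint_prediction(corpus, 8)
assert dist == {"C": 1.0}   # downbeats in this corpus are always C
```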

Short-term models and Long-term models

  • Short term- knowledge from the current listening; “specific repetitive and frequent patterns that are particular to the current piece and picked up during implicit online-learning”
  • Long term-knowledge from an entire corpus; “long-term acquired (melodic) patterns”

IDyOM- Information Dynamics of Music

Hidden Markov Models

  • A Markov transition matrix is the same as a 2-gram model. So our earlier set of pitches {A C E G C E A C E G} would be split into [AC: 2; CE: 3; EG: 2; GC: 1; EA: 1], and the probabilities are modeled between single events.
  • A Hidden Markov Model (HMM) generates probabilities not from single events, but instead generates probability distributions from hidden deep structure states. The probability of each subsequent state depends only on the previous state (not future states), reflecting the temporality of musical processing
  •  An introduction to HMM
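The 2-gram/transition-matrix equivalence can be checked directly; counting every adjacent pair of the sequence gives one G→C transition alongside the others:

```python
from collections import Counter, defaultdict

pitches = ["A", "C", "E", "G", "C", "E", "A", "C", "E", "G"]

# First-order Markov (2-gram) counts: every adjacent pair.
pairs = Counter(zip(pitches, pitches[1:]))
assert pairs == {("A", "C"): 2, ("C", "E"): 3, ("E", "G"): 2,
                 ("G", "C"): 1, ("E", "A"): 1}

# Normalize each row into a transition matrix P(next | current).
transitions = defaultdict(dict)
for (a, b), count in pairs.items():
    row_total = sum(c for (x, _), c in pairs.items() if x == a)
    transitions[a][b] = count / row_total

assert transitions["C"]["E"] == 1.0                 # C is always followed by E
assert transitions["E"] == {"G": 2/3, "A": 1/3}     # E goes to G twice as often as to A
```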

Dynamic Bayesian Networks (DBN)

  • A DBN is an extension of an HMM in the same way that the multiple viewpoint model was an extension of n-gram models. DBNs analyze “dependencies and redundancies between different musical structures, such as chord, duration, or mode” to generate predictions

Connectionist Networks- Neural networks are loosely modeled on how biological neurons work, combining probabilistic models like those listed above with simplified models of neural connections, firing, and growth dynamics

  • MUSCAT- an early musical neural net pre-programmed with Western features (12 chromatic pitches, 24 diatonic Major and minor chords, 24 keys)- does very well at predicting features of tone perception and prediction
  • Self-Organizing Map- unsupervised learning of features of tonal music (this is different from MUSCAT in that it was not pre-programmed with any training data)- matched some experimental data for predicting chord relations, key relations, and tone relations
  • Simple Recurrent Networks- unsupervised learning of transition probabilities and structures—not as efficient as n-gram models
  • Dynamic cognitively motivated oscillator models- unsupervised learning to adapt to metrical structure—however, adaptation to tempo changes is slow

Neuroscientific Evidence

Increased brain response to incongruent (unexpected) stimuli within a sequence- in music this may be hearing a normal chord progression followed by an unusually placed chord

  • ERAN- early right anterior negativity
  • MMN- mismatch negativity

It’s not clear from the neuroscientific evidence whether these responses result from local or hierarchical violations

Brain areas involved

  • Ventral Pre-motor Cortex, BA44 (Brodmann’s area 44, the right-hemisphere analogue to Broca’s area for language in the left hemisphere- perhaps both perform hierarchical processing)

Michael & Wolf laid out a perhaps more accessible overview of areas where a particular predictive processing framework, hierarchical predictive processing (HPP), might lend a novel contribution to the study of music cognition and human social cognition more generally.

HPP is a predictive framework which describes the brain as having a combination of lower-level and higher-level models arranged, of course, in a hierarchy. Each higher-level model generates and sends predictions downstream to the model immediately below it, while each lower-level model sends sensory input upstream to the model immediately above it. The goal is to minimize prediction error between the higher-level predictions and the lower-level sensory representations. Every time a higher-level prediction meets a lower-level sensory input that *does not match* the prediction, a prediction error is sent upstream. The higher-level model then takes that prediction error and revises its prediction, repeating until the incoming signal and the downward prediction are sufficiently matched. Higher-level models are thought to represent changes occurring over longer time scales, such as the more abstract, structural, schematic, or style-specific aspects of music. Lower-level models represent change in sensory input over shorter time scales, as in immediate local events like the next note or rhythm.
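This error-minimisation loop can be caricatured with a single scalar level (real HPP architectures stack many levels and pass full probability distributions; the numbers and update rule here are purely illustrative):

```python
# Toy prediction-error minimisation: one higher-level prediction chasing
# one lower-level sensory signal. Illustrative only.

def minimise_prediction_error(prediction, sensory_input,
                              update_rate=0.5, tolerance=1e-3):
    """Revise `prediction` until it sufficiently matches `sensory_input`."""
    steps = 0
    error = sensory_input - prediction      # mismatch sent up as prediction error
    while abs(error) > tolerance:
        prediction += update_rate * error   # higher level revises its prediction
        error = sensory_input - prediction  # ...and receives the new error
        steps += 1
    return prediction, steps

prediction, steps = minimise_prediction_error(prediction=0.0, sensory_input=1.0)
```

Each pass of the loop stands in for one exchange between adjacent levels: prediction down, error up, revision, repeat.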

Musical Preferences

  • HPP seems of little use in the understanding of musical preferences. It can’t be assumed that individuals share the same preferred balance between ‘optimally predictable’ and ‘a bit of uncertainty’. The authors dub this search for the ‘sweet spot of predictability’ the “Goldilocks fallacy”, since even the right amount of predictability in a novice trumpet player’s crude sounds is likely still unpleasant.

Embodying Music

  • HPP might help in furthering our understanding of embodied music cognition by providing a clear link between perception and action, where perception simply is reflected by “a graded hierarchy of models functioning according to the same basic principles” separated only by time scales, and action is “in a sense equivalent to updating of higher-level cognitive models… through active inference”

Joint Action

  • In joint music making, agents engage in recursive higher-order modeling: “agents are not only modeling the music, but they are modeling the other agent’s actions and intentions, as well as the other agent’s model of her actions and intentions”. If joint music making is construed by the brain as a coordination problem, then HPP may be the perfect model to step in and try to minimize prediction (coordination) error in these complex, recursive social interactions
Posted in Spring 2017

Spring 2017 – Reading group dates – Music in the predictive brain

1-2pm, every fourth Thursday, starting 2 Feb.

Theme for our Spring 2017 reading groups: Music in the predictive brain

Posted in Spring 2017

Shannon on the reading group theme, ‘Music in the predictive brain’

Post by Shannon Proksch:

“Any introduction to music theory class will tell you that music is a complex interchange of moments of tension juxtaposed with moments of release. Through practised study, or passive listening, we learn to anticipate, expect, and predict these patterns of tension and release in the music that surrounds us—framing our emotional and social engagement with music and others in our musical world.

 “Expectation and prediction constitute central mechanisms in the perception and cognition of music” Rohrmeier & Koelsch, 2011

How does a bunch of sound in a messy sensory environment become a musical perception in our mind? How does emotion serve to regulate, or emerge from, our musical experience? How can musical rhythm provide insight into human perception?

 “Minds are ‘wired’ for expectation” Huron, 2006

“Brains…are essentially prediction machines” Clark, 2013

“A mind is fundamentally an anticipator, an expectation generator” Dennett, 1996

Our reading group sessions this semester are going to cover a broad introduction to the interdisciplinary study of predictive processing in music, to help us grasp how expectation shapes our perception in music and beyond. I’ve suggested a range of papers that examine the musical brain through a look at work in empirical musicology, music psychology, neuroscience, cognitive science, and philosophy.”

Posted in Spring 2017

January 2017 – reading group is back!

Welcome to Edinburgh, Shannon Proksch! Shannon is a current MSc student on the Mind, Language, and Embodied Cognition programme at Edinburgh, with a research interest in the relationship between music and language as situated communicative acts.

Shannon suggested a reading group theme for the semester, on music in the predictive brain… We’ll meet every fourth Thursday, 1-2pm, starting February 2nd, in Room 1.21, 7 Bristo Square.  Let’s go!

Posted in Spring 2017

IMHSD seminar by Prof. Tuomas Eerola – 17 Nov 2016

Visit the Institute for Music in Human and Social Development (IMHSD) site to read about Tuomas’ seminar, and keep up to date with ongoing IMHSD talks and events!

Posted in Autumn 2016

Autumn 2016 – Nikki’s sabbatical

The Thursday reading group will be back on in the new year – meanwhile, Nikki has been on sabbatical. Here’s what she has been up to:

“I’ve spent some time in Durham as part of my Visiting Fellowship for the new AHRC-funded Interpersonal Entrainment in Music Performance project, working with Kelly Jakubowski (in a new post-doc role after her work on ear-worms…), Martin Clayton, Tuomas Eerola and Simone Tarsitani. One early output from this work is a jointly-authored conference paper, accepted for presentation in Ghent, Belgium later this year at ESCOM: ‘Measuring Visual Aspects of Interpersonal Interactions in Jazz Duos: A Comparison of Computational vs. Manual Annotation Methods’.

The Fellowship at Durham came out of an earlier project, for which I created a database of audio and motion-captured recordings. These featured pairs of musicians improvising together. Following the original experiments that we carried out using these recordings, other people have taken an interest in this database. So I’ve continued to explore and process the original recordings to make them as useful as possible for other people’s research projects. It’s hard to describe exactly what this entails, but if you have spent any time editing or working with digital media and data in different formats, and if you’ve ever played a locked-room game, then you will have an idea of how about 100 hours of my sabbatical were spent…

Another large portion of my time went on preparing a new research project proposal. I am interested in the impact that scientific discourse around music – coming from music psychology, music neuroscience, music cognition – has on wider understandings of music within scholarship, education and the public sphere of arts and culture. Still working on this. There’s no single way to carry out a research project. I still have decisions to make about the best methodology for the job.

Alongside these tasks, I did the things that I would normally do (alongside the teaching and admin roles that a research sabbatical relieves) — I peer-reviewed other people’s journal articles, I completed the revisions on a book chapter for a Routledge text book, I drafted the first version of a new article, and I carried out my external examiner roles for programmes at Sheffield and Newcastle Universities, plus a PhD viva at Cambridge.

And I made my first ever trip to Hull, to give one of the Music department’s Newland Lectures. What a city – I mean it! It’s not somewhere that always gets a good press, but I loved it! That place has character and I thought it was beautiful.

So there you go, what I did on my sabbatical – in case you were interested.”

Posted in Autumn 2016