The fourth official meeting of EMPRes focused on a topic close to my own research: Music and Language. This was explored through the influential Steinbeis and Koelsch paper from 2008. Attendees included musicologists, psychologists, neuroscientists and linguists, leading to a fruitful discussion of the broader issues of the music-language debate.
Several themes came up during this session, ranging from the specific analogies to be drawn between music and language ERPs, through the division between syntax and semantics, to the value of such cross-domain comparison in a general sense. These issues arose from considering the music-language debate from a range of different perspectives and provided valuable insight into the field as a whole.
The demonstration of an interaction between music and language processing is a robust indicator of shared neural resources. In this sense, the study is one of the first of its kind and approaches the music-language comparison from a particularly strong basis. However, the rationale for examining the linguistic LAN and N400 alongside the musical ERAN and N5 is not entirely transparent. For example, despite a review of the similarity between the ELAN and the ERAN (it being stated that the ELAN ‘resembles the… ERAN’), it is the LAN rather than the ELAN that is explored. Furthermore, the role of the P600 is somewhat overlooked, despite being evident in one experimental condition. Additionally, although the second experiment of the paper ruled out the possibility that the ERAN–LAN interaction was simply the effect of an early negativity (i.e. the MMN), it should be noted that in this case the deviants were in intensity, timbre, or pitch (in the oddball sense of A A A A *B*). The experiment therefore did not attempt to differentiate the effect of harmonic incongruence from any other hierarchical form of musical processing, such as melody.
More generally, we discussed music psychologists’ willingness to draw comparisons between music and language, with findings commonly being interpreted as comparable despite significant differences. This can be seen in neuroscience, where the ERAN is interpreted as similar to the ELAN despite the fundamental difference in lateralization, and in cognitive experiment design, where musical incongruence is compared to linguistic incongruence. What can be learnt from such comparisons is valuable, but the significant differences that are evident between domains must be borne in mind for appropriate analysis and evaluation. As one psycholinguistics lecturer succinctly put it, devising a musical version of a linguistic phenomenon is like asking: ‘what would my car be like if it were a plane?’
Syntax vs Semantics
The relevance of applying the syntactic/semantic distinction from language to music was another topic of discussion. It was suggested that the controversial designation of particular musical ERPs as syntactic and others as semantic illustrates a case of ‘existence by definition’: the N5, for example, is defined as a semantic-type ERP because of its similarity to the linguistic N400, which in turn leads to the assertion that music is processed semantically. Whether musical semantics can really be defined as ‘the thing that causes the N5’ is, however, problematic. We then delved more deeply into the definition of semantic processing in music. Given the nature of music, semantic meaning can be understood to depend on a form of structural understanding, which makes syntactic and semantic processing difficult to disentangle. However, in our meeting it was proposed that although syntactic and semantic understanding in music arise from similar cognitive analyses (i.e. of structural organisation), syntactic incongruities may in fact underlie the moments of maximum semantic information, and could therefore be understood as the root of meaning in music.
Finally, discussion turned to the question of ‘normal’ musical processing. What’s ‘normal’ with regard to musical exposure, music training, and music appreciation? To what extent can these music-language comparisons be drawn simply because we’re taught the fundamentals of language first, and then taught music within these linguistic frameworks? Are we simply taught music in a language-like way? An interesting development of this line of thought is whether we process music differently when our understanding comes from subjective experience rather than formal training, with comparative studies being a valuable contributor to the debate. One last thought that was put forth was the relevance of hemispheric differences between music and language processing. The frequent localization of musical responses to the right-hand side of the brain was linked to the local/global division between hemispheres; with the right hemisphere relating to global processing and the left to local, music was suggested to convey the gist of emotions or meanings without any of the specifics. This perspective implies an element of futility in drawing too many specific comparisons between music and language, at least at the local level.
Bringing ideas from a number of fields together, this session gave rise to comprehensive engagement with a wide-ranging topic. Nonetheless, we were once more left with as many questions as answers.