
Dr Carla Thackrah
doctoral research & thesis
music, sound & video portraits
2004
External (extra-musical) meaning and Music
Koelsch, S., Kasper, E., Sammler, D., Schulze, K., Gunter, T., & Friederici, A. D. (2004). Music, language and meaning: brain signatures of semantic processing. Nature Neuroscience, 7(3), 302–307. https://doi.org/10.1038/nn1197
This was the first evidence that musical information can elicit N400 responses, i.e., can convey extra-musical meaning. The musical excerpts were composed for the experiment, and thus not known by the participants. Therefore, musical meaning was not due to symbolic meaning, but to indexical (e.g., “happy”) and iconic (e.g., “light”) meaning.
"Semantics is a key feature of language, but whether or not music can activate brain mechanisms related to the processing of semantic meaning is not known. We compared processing of semantic meaning in language and music, investigating the semantic priming effect as indexed by behavioral measures and by the N400 component of the event-related brain potential (ERP) measured by electroencephalography (EEG)."
Their results indicated that both music and language can prime the meaning of a word, and that music can, as language, determine physiological indices of semantic processing.
Whereas meaning is clearly a key feature of language, music theorists posit that semantic information is also an important aspect of music:
(Jones, M.R. & Holleran, S., eds. Cognitive Bases of Musical Communication (American Psychological Association, Washington, D.C., 1992); Swain, J. Musical Languages (Norton, New York, 1997); Raffman, D. Language, Music, and Mind (MIT Press, Cambridge, Massachusetts, 1993); Meyer, L.B. Emotion and Meaning in Music (Univ. of Chicago Press, Chicago, 1956); Hevner, K. The affective value of pitch and tempo in music. Am. J. Psych. 49, 621–630 (1937); Peirce, C. The Collected Papers of C.S. Peirce (Harvard Univ. Press, Cambridge, Massachusetts, 1958); Zbikowski, L. Conceptualizing Music: Cognitive Structure, Theory, and Analysis (Oxford Univ. Press, New York, 2002).)
Most theorists distinguish at least four different aspects of musical meaning:
(i) meaning that emerges from a connection across different frames of reference suggested by common patterns or forms (e.g., sound patterns in terms of pitch, dynamics, tempo, timbre, etc. that resemble features of objects),
(ii) meaning that arises from the suggestion of a particular mood,
(iii) meaning that results from extramusical associations (e.g., any national anthem), and
(iv) meaning that can be attributed to the interplay of formal structures in creating patterns of tension and resolution.
Empirically, emotional responses to music and patterns of perceived tension and relaxation during listening to music have been described, both of which may be regarded as aspects of musical meaning
(Krumhansl, C.L. Perceptual analysis of Mozart’s piano sonata KV 282: segmentation, tension, and musical ideas. Mus. Percept. 13, 401–432 (1996); Krumhansl, C.L. An exploratory study of musical emotions and psychophysiology. Can. J. Exp. Psychol. 51, 336–352 (1997).)
Most linguists, however, would reject the notion that music can transfer specific semantic concepts (Pinker, S. How the Mind Works (Norton, New York, 1997)).
Human subjects were presented visually with target words after hearing either a spoken sentence or a musical excerpt. Target words that were semantically unrelated to prime sentences elicited a larger N400 than did target words that were preceded by semantically related sentences.
Thus, their data showed that music can not only influence the processing of words, but can also prime representations of meaningful concepts, be they abstract or concrete, independent of the emotional content of these concepts. Our findings do not imply that music and language have the same semantics. Clearly, people do not typically have the repertoire to articulate thoughts and intentions musically as well as they do linguistically. However, there is ample evidence that the N400 elicited by words reflects processing of meaning information. (Koelsch et al., 2004)
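To make the priming measure concrete, the sketch below shows how an N400 effect is typically quantified: average the epochs per condition and take the unrelated-minus-related difference in mean amplitude around 400 ms. This is a minimal sketch on simulated single-channel data; the sampling rate, trial counts, Gaussian ERP shape, and 300–500 ms scoring window are illustrative assumptions, not the parameters of Koelsch et al. (2004).

```python
# Minimal sketch of N400 quantification on simulated data (not the authors' pipeline).
import numpy as np

rng = np.random.default_rng(0)
fs = 500                          # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)  # epoch from -200 ms to 800 ms around target onset

def simulate_epochs(n400_gain, n_trials=40):
    """Simulate single-channel ERP epochs with an N400-like negativity (~400 ms)."""
    n400 = -n400_gain * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    noise = rng.normal(0, 2.0, size=(n_trials, t.size))  # trial-by-trial noise (µV)
    return n400 + noise

related = simulate_epochs(n400_gain=1.0)    # targets primed by a related context
unrelated = simulate_epochs(n400_gain=3.0)  # unrelated targets -> larger negativity

# ERP = average over trials; the N400 effect is the unrelated-minus-related
# difference in mean amplitude over the classic 300-500 ms window.
window = (t >= 0.3) & (t <= 0.5)
effect = unrelated.mean(axis=0)[window].mean() - related.mean(axis=0)[window].mean()
print(f"N400 effect (unrelated - related): {effect:.2f} µV")  # more negative = priming
```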
2005
Emotion and Music
Koelsch, S. (2005). Investigating emotion with music: neuroscientific approaches. Annals of the New York Academy of Sciences, 1060, 412–418. https://doi.org/10.1196/annals.1360.034
Prior to this, there was little argument that music could affect the listener emotionally.
Koelsch carried out experiments to extend the experimental findings on the relationship between emotion and music. (Koelsch, 2005)
During the past years, the neurosciences have discovered that music is a valuable tool to investigate emotion. Important advantages of music are:
(1) that music is capable of inducing emotions with a fairly strong intensity,
(2) that such emotions can usually be induced quite consistently across subjects, and
(3) that music can induce not only unpleasant, but also pleasant emotions (which are rather difficult to induce by static images).
Meyer, in his 1956 book Emotion and Meaning in Music, proposed a theory of musical emotions on the basis of fulfilled or suspended musical expectations: the confirmation and violation of musical expectations produces emotions in the listener. Consistent with this proposal, Sloboda (1991) later found that specific musical structures lead to specific psychophysiological reactions, and showed that new or unexpected harmonies can evoke shivers.
These researchers also showed that unexpected musical events often elicit emotional responses, and not only responses related to the processing of the structure of the music (or of other stimulus features that may systematically be perceived as more or less expected).
Brown et al., Blood and Zatorre, and Koelsch all found that listening to music can elicit activity changes in limbic and paralimbic structures that have previously been implicated in emotion (amygdala, hippocampus, parahippocampal gyrus, insula, temporal poles, cingulate cortex, orbitofrontal cortex, and ventral striatum).
2005
How much time is needed for accurate emotional responses to music
Bigand, E., Filipic, S., & Lalitte, P. (2005). The time course of emotional responses to music. Annals of the New York Academy of Sciences, 1060, 429–437. https://doi.org/10.1196/annals.1360.036
This study investigated, across two experiments, the time course of emotional responses to music. The main purpose of the analysis was to identify the point in time at which excerpts from the two emotion categories, presented at different lengths, started to be differentiated by participants. Both experiments provide consistent findings that less than 1 s of music is enough to instill elaborated emotional responses in listeners.
The present data lead to several conclusions. First, they demonstrated that refined emotional responses to music occur from the very beginning of music listening. This finding is consistent with a number of others in the domain of emotion: it has been shown that responses to emotional stimuli such as human faces, human body gestures, or other stimuli of biological importance occur extremely fast.
Preliminary empirical investigations have demonstrated that basic emotions, such as happiness, anger, fear, and sadness, can be recognized in, and induced by, musical stimuli in adults and in young children. These studies converge to demonstrate a strong consistency among participants, as long as musical excerpts are chosen to convey very basic emotions.
The conclusion that music induces three or four basic emotions is, however, far from compelling for music theorists, composers, and music lovers. Indeed, such a conclusion is likely to underestimate the richness of the emotional reactions to music that may be experienced in real life. An alternative approach is to stipulate that musical emotions evolve in a continuous way along two or three major psychological dimensions.
The present findings suggest that emotional responses to very short musical stimuli presumably involve cortical mediation. An analysis of the musical and psychoacoustical structures of the very short excerpts (in both experiments) suggests that emotions are likely to be governed by features of both compositional structure (harmony) and performance, all of which are highly cultural. Musical emotions induced by very short excerpts are too refined to be simply derived from basic emotional properties of sound. We argue that these responses required a cognitive appraisal.
Taken in combination with other findings on music cognition showing that cognitive processing of subtle musical structure occurs extremely fast, the present findings provide evidence that both cognitive and emotional processes are very fast-acting and seem to occur automatically in acculturated listeners. The fact that these findings were obtained for the musically trained as well as for the musically untrained provides further evidence that music is a highly relevant sound structure of the environment, and that processing it does not require intensive explicit training. Finally, our data suggest that emotional responses are quasi-immediate as soon as music is played. Of course, this does not mean that musical emotion does not change as the music goes by. On the contrary, it is likely that emotional experiences accumulated from the very beginning of the piece contribute to colouring and intensifying the emotions. (Bigand, Filipic, & Lalitte, 2005)
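The “point of differentiation” analysis can be sketched as a per-duration comparison: at each excerpt length, test whether listeners’ ratings for the two emotion categories already diverge, and report the shortest length at which they do. The sketch below uses simulated data; the durations, rating scale, group size, and the assumed growth of category separation with duration are illustrative assumptions, not Bigand et al.’s materials.

```python
# Toy sketch of finding the shortest excerpt duration at which two emotion
# categories are rated differently. All numbers are simulated assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
durations_ms = [250, 500, 1000, 2000]
n_listeners = 24

for dur in durations_ms:
    separation = 0.0008 * dur            # assumed: separation grows with duration
    happy = rng.normal(5 + separation, 1.0, n_listeners)  # ratings, arbitrary scale
    sad = rng.normal(5 - separation, 1.0, n_listeners)
    t_val, p_val = stats.ttest_ind(happy, sad)
    flag = "differentiated" if p_val < 0.05 else "not yet"
    print(f"{dur:>5} ms: t = {t_val:5.2f}, p = {p_val:.4f} -> {flag}")
```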
2006
Environmental sounds elicit meaning
Orgs, G., Lange, K., Dombrowski, J.-H., & Heil, M. (2006). Conceptual priming for environmental sounds and words: An ERP study. Brain and Cognition, 62(3), 267–272. https://doi.org/10.1016/j.bandc.2006.05.003
In sum, the present findings add more evidence to the notion that the N400 is sensitive not only to linguistic stimuli, and that conceptual processing of environmental sounds may be similar to conceptual processing of words, even if the words are presented in the visual modality.
As with other studies using non-verbal material, the question arises whether the actual non-verbal stimulus or its subsequent vocalization led to a context effect. However, at least in the word/sound condition, in which the N400 effect starts as early as 200 ms post-stimulus, vocalization of an environmental sound seems rather unlikely, as vocalization would have to occur within these 200 ms after presentation of the sound. (Orgs, Lange, Dombrowski, & Heil, 2006)
2009
Meaning and music with short (1 sec) music samples
Daltrozzo, J., & Schön, D. (2009). Conceptual processing in music as revealed by N400 effects on words and musical targets. Journal of Cognitive Neuroscience, 21(10), 1882–1892. https://doi.org/10.1162/jocn.2009.21113
The Koelsch et al. (2004) study used 10-second excerpts of music; this later study instead used 1-second excerpts.
They attempted to replicate the Koelsch et al. (2004) N400 effect on word targets using a very similar two-alternative relatedness judgment task with shorter musical contexts, and found very similar accuracies. How much time does one need to extract concepts from music? The N400 effect observed indicates that a musical context of 1 sec is able to influence the processing of a following word. It was concluded that the brain was relying on the timbre of the sound.
It is noteworthy that Orgs et al. (2006) succeeded in finding an N400 effect for environmental sounds lasting 300 msec. Therefore, our results call for further experiments to replicate the present findings with even shorter excerpts, with the goal of estimating the minimal duration of a musical excerpt needed for the communication of a concept. The search for this minimal piece of musical information has long been a subject of interest, and it was earlier referred to as a “museme.” For instance, the fact that a museme could be as short as a few hundred milliseconds suggests that the grammatical structure of music might not be necessary in order to convey some concepts. With this, we do not claim that musical grammar (harmony) cannot convey concepts in music, rather that there must be other musical aspects that convey concepts in a very short lapse of time.
It is difficult to know what precisely may explain our results and those in the previously cited studies (Bigand et al., 2005; Peretz et al., 1998). A probable candidate is the timbre, or, from a more psychological and global perspective, the energy, tension, and arousal carried by a sound, or series of sounds (Sloboda, 2005). The processing of timbre is particularly fast.
The present study confirms that the processing of a word is influenced by its conceptual relatedness to a musical context, even if that context lasts only 1 sec. Furthermore, the data show, for the first time, that concepts carried by words can influence the processing of a following musical excerpt, and suggest that 250 msec might be enough to communicate musical concepts.
(Daltrozzo & Schön, 2009)
2009
Emotions in music across cultures
Fritz, T., Jentschke, S., Gosselin, N., Sammler, D., Peretz, I., Turner, R., … Koelsch, S. (2009). Universal Recognition of Three Basic Emotions in Music. Current Biology, 19(7), 573–576. https://doi.org/10.1016/j.cub.2009.02.058
It has long been debated which aspects of music perception are universal and which are developed only after exposure to a specific musical culture.
Results show that the Mafas recognized happy, sad, and scared/fearful Western music excerpts above chance, indicating that the expression of these basic emotions in Western music can be recognized universally.
It is likely that the sensory dissonance produced by the spectral manipulation was at least partly responsible for this effect, suggesting that consonance and permanent sensory dissonance universally influence the perceived pleasantness of music.
For the tempo, both Westerners and Mafas were more likely to classify pieces with higher tempo as happy and pieces with lower tempo as scared/fearful, whereas for sad pieces, no correlation with tempo was observed. The categorization of pieces was also significantly influenced by the mode of the piece, in both groups. Both Westerners and Mafas classified the majority of major pieces as happy, the majority of pieces with indefinite mode as sad, and most of the pieces in minor as scared/fearful. (T. Fritz et al., 2009)
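The tempo finding amounts to a correlation between a continuous feature and a binary classification. A toy sketch of that kind of analysis follows; the piece count, tempo range, and the logistic link between tempo and the probability of a “happy” classification are assumptions for illustration, not the study’s data or coding scheme.

```python
# Toy sketch: does tempo predict whether a piece is classified as "happy"?
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_pieces = 42
tempo_bpm = rng.uniform(60, 180, n_pieces)            # simulated tempi

# Assumed for illustration: faster pieces are more often classified "happy".
p_happy = 1 / (1 + np.exp(-(tempo_bpm - 120) / 15))
classified_happy = rng.random(n_pieces) < p_happy

# Point-biserial correlation between the binary classification and tempo.
r, p = stats.pointbiserialr(classified_happy, tempo_bpm)
print(f"tempo vs 'happy' classification: r = {r:.2f}, p = {p:.4f}")
```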
2010
Meaning conveyed by sounds that were impossible to identify
Schön, D., Ystad, S., Kronland-Martinet, R., & Besson, M. (2010). The evocative power of sounds: conceptual priming between words and nonverbal sounds. Journal of Cognitive Neuroscience, 22(5), 1026–1035. https://doi.org/10.1162/jocn.2009.21302
Two experiments were conducted to examine the conceptual relation between words and non-meaningful sounds. The originality of their approach was to create sounds whose sources were, in most cases, impossible to identify (i.e., “acousmatic sounds”). To reduce the role of linguistic mediation (i.e., conscious labelling of the sounds, which would give them a linguistic basis), sounds were recorded in such a way that it was highly unlikely that the source that produced them could be identified, and therefore the participants could not attach a linguistic label to them.
Results showed that, in both experiments, participants were sensitive to the conceptual relation between the two items. They were able to correctly categorize items as related or unrelated with good accuracy.
Several studies have been published on conceptual processing with music (Daltrozzo & Schön, 2009; Frey et al., 2009; Koelsch et al., 2004).
Indeed, the fact that conceptual processing can take place for a single sound, independently of its source, is also of interest for the understanding of the meaning of music.
However, if a single sound, out of a musical context, can generate meaning, we should question the possibility that, in music, elementary units, much shorter than motifs or themes, may also convey part of the musical meaning, via the property of the “sound matter” they carry at each single lapse of time. With respect to this hypothesis, and extending the work of Koelsch et al. (2004), we recently used a similar design to show that 1 sec of music can communicate concepts and influence the processing of a following target word (Daltrozzo & Schön, 2009).
The meaning of music will, therefore, be the result of a rather complex process, taking into account the structural properties of music, the personal and cultural background of the listener, the aesthetic and emotional experience, and also the structure or matter of the sounds whereof a given excerpt is composed. (Schön, Ystad, Kronland-Martinet, & Besson, 2010)
2011
Effect of emotion in conveying meaning in music
Steinbeis, N., & Koelsch, S. (2011). Affective priming effects of musical sounds on the processing of word meaning. Journal of Cognitive Neuroscience, 23(3), 604–621. https://doi.org/10.1162/jocn.2009.21383
This study constitutes an extension of previous work on the meaning of music (Koelsch et al., 2004) by showing that emotion is a specific route to meaning in music.
Under certain circumstances, music appears to be capable of conveying semantically meaningful concepts (Koelsch et al., 2004). However, to date, more rigorous empirical demonstrations of the mechanisms underlying the communication of meaning have been lacking. The present study investigates the previously proposed role of emotional expression in music in communicating meaning.
They drew a distinction between the kind of processes that lead to the recognition of emotions expressed in music and emotions elicited in the listener in response to the music.
Recent evidence even suggests that certain emotions portrayed in music may be universally recognized, as Westerners and people totally unfamiliar with Western music show a statistically significant degree of agreement when classifying Western pieces as happy, sad, or scary (Fritz et al., 2009). Discussions on how music can give rise to meaning have outlined several pathways for this to occur, such as by means of extra-musical associations, the mimicry of real-life features or occurrences, as well as tension-resolution patterns and emotional expression (Koelsch et al., 2004; Meyer, 1956). Whereas there is evidence for the first three (Steinbeis & Koelsch, 2008a; Koelsch et al., 2004), a direct link between emotional features and meaning has not been established.
Recent models of music processing and its links to emotion perception and meaning have advanced the notion that each and every musical feature is capable of expressing an emotion, which is recognized as such and which in turn can activate associated meaningful concepts (Koelsch & Siebel, 2005). The present study explored three such musical features to test this hypothesis: consonance/dissonance, mode (major/minor), and timbre. Is a specific musical feature capable of expressing affect, which is perceived as such by the listener and which influences subsequent word processing at the semantic level? Primes were chords manipulated either in their consonance/dissonance, their mode (major/minor), or their timbre.
The three experiments showed that the emotion expressed by various musical features (e.g., acoustic roughness, pitch intervals, and timbre) is capable of interfering with subsequent processing of affective word meaning, and therefore suggest that individual musical features communicate signals which are processed as affectively meaningful. This appears to be understood regardless of musical training, and its recognition may be the result of an acoustic analysis of affect that can be applied to speech and acoustic signals generally.
These data provide the first evidence that several individual features of the musical input are capable of communicating meaning, albeit on a basic affective level. It is very likely that this ability extends to other musical features, such as melody and rhythm. This evidence constitutes an extension of previous work on the meaning of music (Koelsch et al., 2004) by showing that emotion is a specific route to meaning in music, although the current conclusion rests on very low-level musical features rather than whole pieces of music. The experiments therefore represent one of the first systematic analyses of which single aspects constituting music can communicate meaning. Although this is presently restricted to basic emotional categories, future work may focus on extending it to a wider and more subtle range of emotional shades and semantic connotations, taking the psychoacoustic information contained in the musical signal into account.
(Steinbeis & Koelsch, 2011)
2011
Meaning conveyed by sounds of different timbres
Painter, J. G., & Koelsch, S. (2011). Can out-of-context musical sounds convey meaning? An ERP study on the processing of meaning in music. Psychophysiology, 48(5), 645–655. https://doi.org/10.1111/j.1469-8986.2010.01134.x
There has been much debate over whether music can convey extra-musical meaning. The experiments presented here investigated whether low level musical features, specifically the timbre of a sound, have a direct access route to meaningful representations. Short musical sounds with varying timbres were investigated with regard to their ability to elicit meaningful associations, and the neural mechanisms underlying the meaningful processing of sounds were compared to those underlying the semantic processing of words. The results show that even short musical sounds outside of a musical context are capable of conveying meaning information, but that sounds require more elaborate processing than other kinds of meaningful stimuli.
Whether meaning in music exists, and how it may be defined as compared to linguistic semantics, has been a matter of much debate (see, e.g., Meyer, 1956).
In the extensive theoretical discussion of this topic, two kinds of musical meaning are distinguished, sometimes referred to as intra-musical meaning and extra-musical meaning.
There are empirical studies that support the claim that music can activate meaningful representations by evoking emotions (e.g., Steinbeis & Koelsch, 2008).
Others have investigated meaningful representations activated by music independent of emotion (e.g., Koelsch et al., 2004).
One way of communicating extra-musical concepts through music is emotion, and meaning in music has thus far mostly been investigated through the pathway of emotional expression (Koelsch et al., 2004; Steinbeis & Koelsch, 2008)
Koelsch et al. (2004) suggest that music can activate representations of meaningful concepts, and that musical information can systematically influence the semantic processing of words as indicated by the N400. This confirms the hypothesis that musical information can convey extra-musical meaning information. Music can thus activate extra-musical concepts and access semantic memory in a similar fashion to other domains, such as language. One question that has yet to be answered is which elements of music (rhythm, harmony, melody, timbre, etc.) are most important in conveying meaning, and whether a single element alone, or only the combination of all of these elements in a piece of music, can elicit associations. Timbre seems to be an ideal tool to investigate this question, owing to its multi-dimensional nature.
The findings of Experiment 1 demonstrate that the perception of a sound, even when presented outside of a musical context, can significantly influence the meaningful processing of a subsequent word or sound (as indexed by the N400). Similarly, the perception of a word can influence the subsequent processing of another word or sound.
This shows that single sounds, even when presented outside of a musical context, can activate representations of meaningful concepts. Basic-level features of musical sounds, such as pitch height, pitch chroma, loudness, roughness, and timbre, are extracted in early processing stages. All of these low-level features of music are thought to have a direct link to meaningful representations.
In conclusion, the present study shows that single sounds can activate representations of meaningful concepts in a similar fashion to chords and musical excerpts. No musical context is necessary to activate these representations. However, the task was found to have a great influence on the presence of an N400 effect. (Painter & Koelsch, 2011)
2011
Overview of studies re music sound and meaning
Koelsch, S. (2011). Towards a neural basis of processing musical semantics. Physics of Life Reviews, 8(2), 89–105. https://doi.org/10.1016/j.plrev.2011.04.004
The data presented in this article show that music can communicate meaning: not only meaning related to emotion or affect, but iconic, indexical, and symbolic meaning (with regard to extra-musical meaning), as well as intra-musical meaning. The data also show that musical meaning is at least partly processed with the same mechanisms as meaning in language. Therefore, the notion that language and music are strictly separate domains with regard to the processing of meaning is no longer tenable.
The electrophysiological data show that, while listening to music, the human mind constantly and automatically attributes meaning to musical information, and that the processing of musical meaning appears to be reflected electrically in the N400 (a classical index of semantic information processing during language perception) and the N5 (which specifically interacts with the N400).
Koelsch uses two indices: the N400 to confirm that musical stimuli can elicit extra-musical meaning, whereas the N5 can be elicited by the processing of intra-musical meaning. Notably, whereas the N400 can be elicited by both linguistic and musical stimuli, the N5 has so far only been observed for the processing of meaning in music.
Extra-musical meaning emerges from the interpretation of musical information with reference to the extra-musical world; Leonard Meyer [48] referred to this class of musical meanings as designative meaning. Extra-musical meaning comprises three dimensions: musical meaning due to
(1) iconic,
(2) indexical, and
(3) symbolic sign qualities of music.
Iconic musical meaning emerges from (the interpretation of) musical patterns or forms (e.g., musical sound patterns) that resemble sounds of objects, qualities of objects, or even qualities of abstract concepts. For example, a musical passage may sound “like a bird”, “like a thunderstorm”, “like wideness”, etc., and acoustic events may sound “warm”, “round”, “sharp”, “colourful”, etc.
Indexical musical meaning emerges from (action-related) sound patterns that index the presence of a psychological state of an individual, for example the presence of an emotion, or the presence of an intention (Susanne Langer used the term “iconic” for what is referred to here as “indexical musical meaning” [44,45]).
Symbolic musical meaning emerges from explicit (or conventional) extra-musical associations (e.g., any national anthem). This dimension of musical meaning is culturally enactive, emphasizing that symbolic qualities of musical practice are shaped by (and shape) culture.
Further studies investigated the processing of musical meaning using only single chords or single tones. One study [68] used an affective priming paradigm with single chords (presented auditorily) and words (presented visually). This study by Steinbeis and Koelsch [68] revealed that a single musical stimulus (a chord that is more or less pleasant) can influence the semantic processing of a word (in this study presumably due to the chord’s indexical qualities). (Steinbeis, N., & Koelsch, S. Comparing the processing of music and language meaning using EEG and fMRI provides evidence for similar and distinct neural representations. PLoS One 2008;3(5))
In summary, the studies mentioned show that musical information (musical excerpts, single chords, and single tones) can systematically prime representations of meaningful concepts (as indicated by modulatory effects on the N400 elicited by words). Moreover, the studies show that musical excerpts, single chords, and single tones can elicit N400 effects that are modulated by the semantic fit with a preceding word. The N400 effects are due to extra-musical meaning, that is, meaning emerging from musical information referring to the extra-musical world of concepts.
In addition, listeners not only interpret musical information expressed by another individual, but also the effects evoked by the music in themselves. Koelsch calls this musicogenic meaning in this article.
Physical - Individuals tend to move to music (singing, dancing)
Emotional - Leonard Meyer stated that emotional responses due to tension-resolution patterns emerging from the fulfilment or violation of expectancies based on the structure of musical information have a quality of meaning for the listener [48]. Emotional responses to irregular (unexpected) chord functions have also been shown empirically: Steinbeis et al. [72] showed that music-syntactically irregular chords induce tension (as indicated by behavioural data) and evoke increased sweat production on the palms of the hands, as indicated by electrodermal activity (such increased sweat production is due to increased autonomic activity of the sympathetic branch of the vegetative nervous system).
Self-related musicogenic meaning - Musical information can also be related to one’s self.
So, the reported N400 studies indicate that brain processes related to meaning can be activated by musical information with regard to extra-musical meaning information, and the reported N5 studies suggest that semantic processes also emerge from (intra-musical) harmonic integration.
How does intra-musical meaning work?
Musical meaning can emerge from large-scale structural relations (such as relations between phrases, parts, movements, etc.); intra-musical meaning can emerge from the logic of musical structures, e.g., “musical ideas fit together – as complementary, or as variations, or as repetitions – so that there is a development or progress of ideas, and the work comes to a close.” (Koelsch, 2011)
2011
Review of above article
Slevc, L. R., & Patel, A. D. (2011). Meaning in music and language: Three key differences. Comment on “Towards a neural basis of processing musical semantics” by Stefan Koelsch. Physics of Life Reviews, 8(2), 110–111. https://doi.org/10.1016/j.plrev.2011.05.003
A critique of the above article notes that music has limitations in the meaning it can convey - it can't be as specific as language:
- it can't be propositional, i.e., can't combine to create several layers of meaning
- it can't communicate specifically as language can
However, while musical meaning lacks the specificity, the compositionality, and the communicative motivation of linguistic semantics, these limitations may be the very things that give music much of its power. The ambiguity and flexibility of musical meaning allow music to mean different things to different people, different things at different times, or even many things at once (cf. [2]). This semantic flexibility and fluidity creates a form of meaning that is part of the uniqueness and importance of music. (Slevc & Patel, 2011)
2013
Meaning not conveyed by music cross-culturally, but understanding is strongly intra-cultural
Fritz, T. H., Schmude, P., Jentschke, S., Friederici, A. D., & Koelsch, S. (2013). From Understanding to Appreciating Music Cross-Culturally. PLoS ONE, 8(9). https://doi.org/10.1371/journal.pone.0072500
Previous research with the Mafa has indicated that listeners who were naive to Western music could recognize emotional expressions in Western music (“indexical” meaning) (Fritz et al., 2009).
And evidence from semantic priming studies indicates that music can prime representations of meaningful concepts: it was shown that the N400 event-related potential, which is considered to be an electrophysiological index of semantic information processing, was modulated by musical information preceding a target word (Koelsch et al., 2004).
But it has long been debated which aspects of music perception are universal and which are developed only after exposure to a specific musical culture. Here we investigated whether “iconic” meaning in Western music, emerging from musical information resembling qualities of objects or qualities of abstract concepts, can be recognized cross-culturally. To this end we acquired a profile of semantic associations (such as, for example, fight, river, etc.) to Western musical pieces from each participant, and then compared these profiles across cultural groups. Results show that the association profiles of the Mafa, an ethnic group from northern Cameroon, and of Western listeners are different, but that the Mafa have a consistent association profile, indicating that their associations are strongly informed by their enculturation.
Their results showed that the pattern of meaning association in Mafa listeners is different from that of Western listeners. However, it displayed a pattern of association that was consistent within the Mafa group. This demonstrates that the types of associations evoked by Western music are systematically shaped by enculturation and, unlike associations evoked by basic emotional expressions in music, vary strongly between cultures. (T. H. Fritz, Schmude, Jentschke, Friederici, & Koelsch, 2013)
2015
How extra information (text or story) affects the intensity of emotions when listening to music
Vuoskoski, J. K., & Eerola, T. (2015). Extramusical information contributes to emotions induced by music. Psychology of Music, 43(2), 262–274. https://doi.org/10.1177/0305735613502373
This study tested how extramusical information can affect the emotions induced when listening to a piece of music. The results suggested that contextual information about a musical piece may indeed influence the emotional effects of that piece, as the sad narrative description appeared to intensify the sadness induced by the sad-sounding piece. The narrative descriptions may have enhanced emotion induction via the visual imagery mechanism (suggested by Juslin & Västfjäll, 2008), as 80% of participants in both groups reported thinking about imagery related to the narrative descriptions provided.
Although instrumental music is able to effectively communicate affective (e.g., Gabrielsson, 2009) and semantic (e.g., Janata, 2004; Koelsch et al., 2004) meaning without any explicit, extramusical information, one could argue that contextual information might significantly alter the interpretation of this meaning (see, e.g., Thompson, Russo, & Quinto, 2008) and thus influence the intensity and type of emotions induced. Simultaneously presented visual material has been found to influence the interpretation of emotions communicated by music (Thompson, Graham, & Russo, 2005; Thompson et al., 2008), but it is still unclear how prior, contextual information about a musical piece might contribute to the emotions induced by that piece. We propose that music’s ability to convey semantic meaning (see, e.g., Koelsch et al., 2004) and the human tendency to make sense of our experiences through the construction of narratives (see, e.g., Polkinghorne, 1988) are what give rise to visual imagery in the context of music listening.
The findings of the present study suggest that music can induce significant levels of sadness via the visual imagery mechanism, and that the content of imagery can be manipulated through extramusical information. It demonstrated that emotionally congruent contextual information about a musical piece has the potential to intensify the emotions induced by that piece, possibly via the visual imagery mechanism. This study also showed that narrative descriptions about the original context of a musical piece promoted music-related visual (or narrative) imagery related to those descriptions. This finding could be interpreted as supporting the notion that music-induced imagery emerges from a narrative mode of listening. It can be concluded that contextual information appears to be a salient and inextricable part of the music listening experience, integrated with musical cues to form a coherent whole. (Vuoskoski & Eerola, 2015)