Three Studies of Emotional Cues in Instrumental Music Inspired by Acoustical Cues in Vocal Affect

Trevor, Caitlyn M.

Abstract Details

2016, Master of Arts, Ohio State University, Music.
Musicians commonly regard the human voice as a model for emotional expressiveness. Similarly, modern psychological research suggests that the human voice offers a useful model for understanding how sounds represent or convey emotions (e.g., Juslin & Sloboda, 2011). This thesis reports on three studies, each inspired by a different feature of vocal emotion; the studies investigate whether instrumental music exhibits or emulates these features.

Study 1 was motivated by the observation that a darker timbre is an acoustical characteristic of the sad voice (Scherer, Johnstone & Klasmeyer, 2003). Given that open strings generate a brighter timbre than stopped strings (Schelleng, 1973), composers writing nominally sad music might choose keys and notes that preclude the use of open strings. Specifically, the proportion of potentially open to stopped strings was compared between a sample of slow minor-mode movements and matched major-mode movements.

Study 2 was inspired by certain acoustical characteristics of laughter. First, to verify the possibility of hearing laughter from an instrument other than the voice, participants adjusted the speed and duty cycle of looped tones to produce the most laughter-like sound. Next, the study examined whether the acoustical characteristics of laughter appear in real music by comparing the amounts of staccato and rhythmically isochronous passages found in compositions of comedic genres (humoresques, badineries, and scherzos) and in similar-tempo works by the same composers.

Study 3 was motivated by the observation that high emotionality (e.g., fear, rage, excitement) often results in speaking in the upper pitch register (Scherer et al., 2003). To identify pitch register, a listener must know the range of the speaking voice, something that research indicates humans can perceive accurately (Honorof & Whalen, 2005). The study first tested whether range information is similarly perceptible for instruments, specifically the cello: participants were asked to identify which tone of each pair was played in a high playing position. Next, the study tested whether melodies played in high playing positions convey greater emotionality. In a two-alternative forced-choice (2AFC) paradigm, listeners chose which of two recordings of a melody, one played in a high and one in a low pitch register, they perceived as more emotionally expressive.

Contrary to the hypothesis of Study 1, examination of a sample of quartet movements by Haydn, Mozart, and Beethoven failed to exhibit the conjectured relationship between darker timbre and the use of stopped versus open strings. Study 2 results were mixed: the adjusted tempos and articulations were consistent, but slower and longer than those of actual human laughter. Additionally, the nominally humorous works were found to contain more staccato passages; however, these passages were not more likely to be isochronous. Study 3 also produced mixed results. Participants were reasonably able to identify which note was played in a high playing position, but they selected melodies played in a high register as more expressive only for recordings by Cellist A; the opposite result occurred for recordings by Cellist B.
David Huron (Advisor)
Marc Ainger (Committee Member)
Anna Gawboy (Committee Member)
101 p.

Recommended Citations

  • Trevor, C. M. (2016). Three Studies of Emotional Cues in Instrumental Music Inspired by Acoustical Cues in Vocal Affect [Master's thesis, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center. http://rave.ohiolink.edu/etdc/view?acc_num=osu1460991596

    APA Style (7th edition)

  • Trevor, Caitlyn. Three Studies of Emotional Cues in Instrumental Music Inspired by Acoustical Cues in Vocal Affect. 2016. Ohio State University, Master's thesis. OhioLINK Electronic Theses and Dissertations Center, http://rave.ohiolink.edu/etdc/view?acc_num=osu1460991596.

    MLA Style (8th edition)

  • Trevor, Caitlyn. "Three Studies of Emotional Cues in Instrumental Music Inspired by Acoustical Cues in Vocal Affect." Master's thesis, Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1460991596

    Chicago Manual of Style (17th edition)