    Is there a way you can quantify consonance and dissonance?

    Hello there,

    I was wondering if any of you could help me with an issue I'm having. I have been using pitch-class sums to represent how dark or light a scale feels. For example, C major, when you add together all of its values in pitch-class space, has a sum of 38; C Phrygian, on the other hand, has a sum of 34. I thought this might be because the smaller intervallic distances relative to the root of the scale cause the sum to be lower in the case of C Phrygian, and so the scale would appear darker. But the whole process seems rather dubious to me.
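
    A minimal Python sketch of this calculation, just for concreteness (the spellings below are the standard pitch classes; nothing more is assumed):

        # Pitch-class sums for the two scales mentioned above.
        C_MAJOR    = [0, 2, 4, 5, 7, 9, 11]   # C D E F G A B
        C_PHRYGIAN = [0, 1, 3, 5, 7, 8, 10]   # C Db Eb F G Ab Bb

        print(sum(C_MAJOR))     # 38
        print(sum(C_PHRYGIAN))  # 34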

    So I tried other ways of representing a scale's darkness or lightness by mapping them onto the circle of fifths. C major had most of its intervals on the 'brighter' fifths side whereas C Phrygian had most of its intervals mapped on the 'darker' fourths side. But again it doesn't feel like a strong way to represent it academically.
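
    For concreteness, a rough sketch of the fifths mapping, counting how many steps up or down the line of fifths each scale tone sits from the tonic (positive values are the 'brighter' fifths side, negative the 'darker' fourths side); the function name is just illustrative:

        # Signed position of each scale tone on the line of fifths, relative to the tonic.
        def fifths_positions(pcs, tonic=0):
            positions = []
            for pc in pcs:
                steps = ((pc - tonic) * 7) % 12   # convert pitch-class distance to fifths
                positions.append(steps if steps <= 6 else steps - 12)
            return positions

        print(fifths_positions([0, 2, 4, 5, 7, 9, 11]))  # C major:    [0, 2, 4, -1, 1, 3, 5]
        print(fifths_positions([0, 1, 3, 5, 7, 8, 10]))  # C Phrygian: [0, -5, -3, -1, 1, -4, -2]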

    I also thought of trying a third way: cross-referencing with the harmonic series. The method here was to take a 12-tet-approximated harmonic series (sorry, JI fans) and assume that intervals further away from a fundamental of C have a weaker pull to that fundamental, making them more dissonant/dark. For example, the b9 would be the ninth distinct interval to appear in this 12-tet-approximated harmonic series, whereas a natural 2 would be the fifth to appear, making it more consonant than the aforementioned b9. This would also make C Phrygian darker than C major, as many of its intervals lie further from the C fundamental than those of the C major scale. But again I'm unsure of this method, mainly because it uses an approximated harmonic series.
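
    A quick way to check this ordering is to round each harmonic to the nearest 12-tet pitch class and record when each one first appears; a small sketch (the cutoff at harmonic 27 is arbitrary, chosen so that every pitch class turns up):

        import math

        # Order in which each 12-tet pitch class (relative to the fundamental)
        # first appears in a rounded harmonic series.
        first_seen = {}
        for harmonic in range(1, 28):
            pc = round(12 * math.log2(harmonic)) % 12
            first_seen.setdefault(pc, harmonic)

        print(sorted(first_seen, key=first_seen.get))
        # [0, 7, 4, 10, 2, 6, 8, 11, 1, 3, 5, 9]
        # i.e. the natural 2 (pc 2) is the 5th to appear, the b9 (pc 1) the 9th.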

    So basically what I'm asking is: is there a way you can represent consonance and dissonance/brightness and darkness in an objective and quantifiable way?

     


    Comments

    • I made a harmonic-series-based consonance/dissonance measure very much like what you're describing in my dissertation (http://www.robertkelleyphd.com/dissertation.htm); the relevant section is 4.2. I hope that this helps!

      Robert.
    • Have you considered looking at the interval vectors of pitch class sets?
    • Paul Sherrill presented a theory of modal brightness at the SMT conference in 2019. You can find his powerpoint slides here, and you should contact him.

    • Thanks for such excellent information!

      Hi Carson, I haven't actually considered this yet as I'm quite new to set theory. Are there any sources you can point me towards that could help me apply interval vectors this way?

      Having read your question and the answers provided here, I still don't quite understand what all this is about. Consonance and dissonance are mainly acoustical phenomena. 'Quantifying' them seems to me to mean numerically defining a level of consonance or dissonance. Isn't this what Helmholtz tries in his famous curves of figs. 60A and B in Sensations of Tone (p. 193 in the 3rd edition, 1895)? Helmholtz himself recognizes that these curves rest on several arbitrary hypotheses and attempts have been made since to produce better results, but the overall idea is basically correct, I think.

      But then you speak of 'brightness' (or 'lightness') and 'darkness', without clearly indicating how you relate them to consonance and dissonance. And your own suggestions (pitch-class sums, circle of fifths, harmonics approximated in 12-tet) do not deal with sounds anymore, but with 'notes'. The difference that I'd make between sounds and notes is that the first are acoustic phenomena, while the second are but abstract – semiotic – images of the first. Consonance and dissonance are heavily dependent on tuning (and on timbre), and considering this is avoided when speaking of notes.

      Paul Sherrill presents the diatonic scales in an order following the cycle of fifths, from Lydian to Locrian (slide 2 of his handout), indicating them as being of decreasing brightness or increasing darkness in that order. This reminds me of a classification of the same scales in the same order by Yizhak Sadaï, from 'surmajor' (Lydian) to 'subminor' (Locrian). But would you think that the level of 'majority' or 'minority' is the same thing as 'brightness' or 'darkness'? Isn't that a trifle too reminiscent of major=joyful and minor=sad (or even worse, major=masculine, minor=feminine)?

      As to 'sum brightness' (Sherrill's slide 5), isn't it the way of thinking that leads to considering Locrian as the prime form of the diatonic scale? Each of the subsequent sums is obtained by moving one degree away from the fundamental, in a way that maintains the diatonicity of the scales. And the result follows the cycle of fifths: Locrian 33, Phrygian 34, Aeolian 35, Dorian 36, Mixolydian 37, Ionian 38 and Lydian 39. The result is the same as before; only the way of explaining it changes.
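
      This can be checked mechanically: starting from Lydian and lowering one degree at a time in cycle-of-fifths order reproduces exactly these sums, each flattening lowering the sum by 1. A small Python sketch, purely illustrative:

          # Start from Lydian and flatten one degree per step, following the cycle of fifths.
          scale = [0, 2, 4, 6, 7, 9, 11]                      # Lydian
          names = ["Lydian", "Ionian", "Mixolydian", "Dorian", "Aeolian", "Phrygian", "Locrian"]
          flatten_order = [3, 6, 2, 5, 1, 4]                  # degrees 4, 7, 3, 6, 2, 5 (as list indices)

          print(names[0], sum(scale))                         # Lydian 39
          for name, idx in zip(names[1:], flatten_order):
              scale[idx] -= 1                                 # each flattening lowers the sum by exactly 1
              print(name, sum(scale))                         # ... down to Locrian 33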

      Taking new harmonics as they appear in the series boils down to the succession of odd whole numbers, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21 and 23 (if you decide to limit the succession to 12 numbers). Reducing them to one octave produces 1 (prime), 3/2 (fifth), 5/4 (major third), 7/4 ('natural' seventh), 9/8 (tone), 11/8 (~fourth), 13/8 (minor sixth), 15/8 (major seventh), 17/16 (semitone), 19/16 (minor third), 21/16 (?) and 23/16 (tritone). One might argue that 6/5 would be better than 19/16 as approximation of the minor third. And 4/3, the 'true' fourth, of course is way more consonant than 11/8 (the 12-tet 'fourth'). I think that one could find 18th-century descriptions of decreasing consonance along the same line (Euler?) – without any reference to 12-tet. 7/4 in particular, the 'natural' seventh, has been described as more consonant, even if less useful musically, than the 'true' minor seventh (9/5).
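
      The octave reduction is easy to reproduce with exact fractions; a short sketch:

          from fractions import Fraction

          # Reduce each odd harmonic into the octave [1, 2) to get the ratios listed above.
          for n in range(1, 24, 2):
              ratio = Fraction(n)
              while ratio >= 2:
                  ratio /= 2
              print(n, ratio)   # 1 -> 1, 3 -> 3/2, 5 -> 5/4, 7 -> 7/4, ..., 23 -> 23/16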

      I think, in short, that if you want a simple way to approximately quantify the level of consonance/dissonance, the best way is to turn back to numeric ratios and to say that simple ratios produce a higher level of consonance than complex ones. This has been said perhaps since Antiquity, and even though we may all be conscious of how approximate it is, it remains a good (or at least a simple) approximation.
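
      One 18th-century formalization along exactly these lines is Euler's gradus suavitatis, in which simpler ratios receive lower (more consonant) values. A minimal sketch (choosing Euler's measure is my own illustration; other simplicity measures would serve as well):

          from math import gcd

          def gradus(n):
              """Euler's gradus suavitatis: 1 + sum of k*(p - 1) over the prime factors p^k of n."""
              g, p = 1, 2
              while n > 1:
                  while n % p == 0:
                      g += p - 1
                      n //= p
                  p += 1
              return g

          def ratio_gradus(num, den):
              num, den = num // gcd(num, den), den // gcd(num, den)
              return gradus(num * den)

          for num, den in [(2, 1), (3, 2), (4, 3), (5, 4), (7, 4), (9, 8), (11, 8), (16, 15)]:
              print(f"{num}/{den}: {ratio_gradus(num, den)}")
          # 2/1: 2, 3/2: 4, 4/3: 5, 5/4: 7, 7/4: 9, 9/8: 8, 11/8: 14, 16/15: 11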


    • > So basically what I'm asking is, is there a way you can represent consonance and dissonance/brightness and darkness in an objective and quantifiable way?

      Of course, characterizing consonance/dissonance (henceforth C&D) is a long-standing problem in both music theory and music perception. My background is in music perception and cognition, so be aware that my comments here reflect that perspective.

      There is a long history of perceptual experiments where listeners are asked to judge variously the consonance, dissonance, pleasantness, smoothness, roughness, or ugliness of different sonorities.

      First, you get slightly different results for musician and non-musician listeners. For example, non-musicians tend to judge perfect fifths as less consonant than musicians do, and major thirds as more consonant than musicians do.

      Van de Geer, Levelt, and Plomp (1962) also showed that you get different results depending on what question you ask. Their research suggests that at least three different phenomena are going on at the same time.

      The history of the pertinent experimental research suggests that C&D is influenced by at least 11 different factors, ranging from low-level sensory phenomena to high-level cultural phenomena.

      At the auditory periphery, classic work was conducted by Plomp & Levelt (1965), which linked dissonance (a component now referred to as "sensory dissonance") to spectral interference within the critical band.

      At the other end, there is a phenomenon known as "processing fluency" which is a generalization of an earlier phenomenon called the "mere exposure effect." Basically, people prefer familiar sounds. (We also prefer familiar smells, foods, even familiar fonts.) So if you are brought up in an equally-tempered sonic environment, you'll tend to prefer equally-tempered intervals over (say) just intonation intervals. It's a matter of exposure.

      The brain rewards laziness: we prefer familiar stimuli because they literally require fewer metabolic resources to comprehend or process. It's also the reason why listeners tend to judge sounds played with familiar instrumental timbres as less dissonant than the same sonority played with unfamiliar timbres.

      Sonorities employing richer timbres (more spectral components) are also perceived as more dissonant.

      C&D is influenced by pitch register, so a major third played at the bottom of the keyboard sounds like a clangorous mess compared with the same interval played in the middle or upper registers.

      Absolute interval size also plays a role, so compound intervals aren't judged the same as their simple counterparts. For example, a perfect fifth and a perfect twelfth do not have the same perceived consonance.

      As one might suppose, loudness plays an important role: the same sonority will be judged as much more dissonant when played forte versus piano.

      Also, dissonance judgments are related to density. A major 7th dyad will sound less dissonant if you add more tones to form a major-major-seventh chord. (This observation poses a challenge for spectral theories of dissonance since adding spectral components should always increase dissonance.)

      Voice leading/streaming also plays a role. Bregman (1990) noted that if two lines are perceived as strongly independent, then the intervals formed between the two lines will be heard as less dissonant. (This means, for example, that two instrumentalists playing a duet will produce a slightly more dissonant effect the closer they are located in space. Similarly, recordings will sound slightly less dissonant when reproduced in stereo compared with mono.)

      For a long time it was thought that C&D could be explained by tonal fusion (the tendency for sounds to be heard as one). Hence, unisons, octaves, and fifths were presumed to be more consonant than intervals less prone to perceptual fusion. However, the experimental evidence isn't consistent with this claim.  Carl Stumpf proposed this idea way back at the end of the 19th century, but he himself recognized later in his career that this account must be wrong.

      If one regards C&D as a formal concept rather than a perceptual phenomenon, then there is certainly historical precedent for using simple integer frequency ratios. But the experimental evidence has long ago discarded that idea. Simple integer frequency ratios are critical in the phenomenon of tonal fusion, and tonal fusion does influence consonance perceptions, but in the world of auditory perception, tonal fusion is a distinct percept from C&D.

      The long & short of it is that C&D remains a mess. A simple bibliography of just the empirical research on C&D would include several hundred journal articles. It is such a mess, and the existing research is so voluminous and complicated that most researchers are understandably intimidated by the topic. Consequently, little pertinent research is conducted these days.

      Incidentally, C&D is a mess, not because of poor quality research, but because the phenomenon has turned out to be extraordinarily complicated.

      MEASUREMENT

      At this point we might turn to the issue of measuring C&D. First, a couple of points regarding measurement.

      Now just because we don't understand a phenomenon doesn't mean that we can't try to measure it. Science regularly endeavors to quantify phenomena that aren't fully understood--like pollution, nutrition, cancer, etc. The key is to avoid reifying quantities. When we measure someone's "empathy" (say by using Davis's Interpersonal Reactivity Index), we can't conclude that we have captured someone's true level of "empathy" (whatever that is). These are crude estimates of some concept that we don't fully understand.

      Most people think of "measurement" as about "precision." However, statisticians and scientists don't think of measurement that way. They think of measurement as a way of reducing uncertainty. Even crude or imperfect measures can be useful.  The key is not that they are precise, but that they manage to reduce uncertainty.

      Let me give you an example. Few measures have been criticized more than IQ -- and rightly so. But consider this. In the past, a common ingredient in paint was lead. However, several decades ago, lead was banned from paint because the lead was found to be detrimental to health, especially the health of children. Animal studies implicated lead as bad for brain development. But how could researchers be sure that the lead in household paints could be having a detrimental effect? After all, paint just sits on walls; it is rare that a child will eat a paint chip. The quantities of lead that are released to the air are minuscule. Surely, lead paint is not a significant health concern.

      It was discovered that the IQs of children who lived in houses with lead paint were significantly lower than the IQs of children from matched socio-economic backgrounds who lived in houses without lead paint.

      Similarly, differences in IQ also proved to be essential in discovering the terrible effects of methyl mercury on children's mental development. IQ may be only a "crude" estimate of a person's mental functioning, but measuring IQ has proved to be invaluable in improving the quality of health for millions of people. How would researchers have ever discovered these toxic effects without measuring something like IQ?

      Once again, the problems arise when we reify measurements--where we mistakenly believe that we are truly capturing what is meant by "intelligence." In empirical work, one discovers that even seemingly simple concepts turn out to be fuzzy. How many notes are there in some notated score? Is a notated trill one note? If many notes, how many? If a note is contained within repeat signs, does that make it two notes? Forget about trying to define "music." We run into problems just trying to define something as simple as a "note."

      The key is to recognize that measurements (ALL measurements) are imperfect operational estimates that ultimately fail to fully capture the concept of interest.  But that doesn't mean crude measurements are useless.

      MEASURING CONSONANCE & DISSONANCE

      Having said that, we might now consider some of the existing measurement methods for C&D. The first good effort was made by Akio Kameoka and Mamoru Kuriyagawa (1969a/b). In effect, they first replicated Plomp and Levelt's results with Japanese listeners, and then they translated Plomp & Levelt's model into a quantifiable measurement method. The input is a collection of frequencies and amplitudes representing a given sonorous moment, and the output is an estimated dissonance value.
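
      Purely to illustrate the general shape of such models (input: frequencies and amplitudes; output: a dissonance estimate), here is a minimal sketch of a Plomp-Levelt-style pairwise roughness sum. It uses the parameterization popularized by Sethares (1993), chosen here only for brevity; it is not Kameoka & Kuriyagawa's published algorithm.

          import math
          from itertools import combinations

          # Minimal Plomp-Levelt-style sensory dissonance sketch (Sethares' 1993 constants).
          # Input: spectral components as (frequency_Hz, amplitude) pairs.
          def sensory_dissonance(partials):
              B1, B2, DSTAR, S1, S2 = 3.5, 5.75, 0.24, 0.021, 19.0
              total = 0.0
              for (f1, a1), (f2, a2) in combinations(sorted(partials), 2):
                  s = DSTAR / (S1 * f1 + S2)            # critical-band-dependent scaling
                  x = s * (f2 - f1)
                  total += min(a1, a2) * (math.exp(-B1 * x) - math.exp(-B2 * x))
              return total

          # A toy example: two six-partial complex tones a major third apart.
          def tone(f0, n_partials=6):
              return [(f0 * k, 1.0 / k) for k in range(1, n_partials + 1)]

          print(sensory_dissonance(tone(261.6) + tone(329.6)))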

      Another worthy effort was that of Hutchinson and Knopoff (1978, 1979).  There are others as well.

      These are complicated models, not easy to implement or use. Moreover, my student, Keith Mashinter (2006), discovered some worrying discrepancies in both the Kameoka & Kuriyagawa and Hutchinson & Knopoff calculation algorithms. I do not recommend using them.

      For those of us interested in doing score-based music analysis, the extant acoustical models are not practical. I expect for many applications, theorists would be interested in some sort of estimate based on simple pitch collections. Perhaps even pitch-classes rather than pitches.

      If you think that listener judgments are appropriate for establishing some sort of measurement basis, then folks might be interested in my interval-class consonance index (Huron, 1994). Here it is:

      Interval Class    Consonance
      m2/M7             -1.428
      M2/m7             -0.582
      m3/M6             +0.594
      M3/m6             +0.386
      P4/P5             +1.240
      A4/d5             -0.453

      This index is simply based on normalized data from three classic consonance and dissonance experiments (Malmberg, 1918; Kameoka & Kuriyagawa, 1969a; and Hutchinson & Knopoff, 1979). From each experiment I pooled the data for complementary intervals. I then averaged the results across the three experiments and normalized the data so that the average value is zero and the standard deviation is 1.

      My consonance index ignores known effects such as pitch register, timbre, loudness, voice leading, etc. Of course the index is based on perceptual data collected from western-enculturated listeners from a particular period in history.

      For theorists, it nevertheless provides a convenient tool since the relative consonance can be estimated for a given sonority by multiplying the interval vector by the above values and summing the results (see Huron, 1994).
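
      In code, that estimate is just a dot product of the interval-class vector with the index. A minimal sketch (the helper names are mine):

          from itertools import combinations

          # Huron (1994) interval-class consonance values for ic1..ic6 (the table above).
          CONSONANCE = [-1.428, -0.582, 0.594, 0.386, 1.240, -0.453]

          def interval_class_vector(pcs):
              vector = [0] * 6
              for a, b in combinations(sorted(set(pcs)), 2):
                  ic = min((b - a) % 12, (a - b) % 12)
                  vector[ic - 1] += 1
              return vector

          def aggregate_consonance(pcs):
              # Multiply the interval vector by the index values and sum the results.
              return sum(n * w for n, w in zip(interval_class_vector(pcs), CONSONANCE))

          print(aggregate_consonance([0, 4, 7]))   # major triad:      2.220
          print(aggregate_consonance([0, 3, 6]))   # diminished triad: 0.735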

      Incidentally, using this tool, pitch collections like major, minor, augmented and diminished triads are ordered in ways that theorists would consider intuitively sensible. Moreover, of the thousands of possible pitch collections, scale collections like the common pentatonic scale, the major scale, the blues scale, and the Japanese Ritsu mode turn out to exhibit notably high or even optimum consonance values.

      For a readable account of modern consonance and dissonance research, perhaps readers might want to look at chapter 5 from my book, Voice Leading: The Science Behind a Musical Art. (MIT Press, 2016).

      BRIGHTNESS

      Finally, it's appropriate to address Ollie's question regarding brightness/darkness. It makes intuitive sense that dissonant sounds might be regarded as sounding "darker," but I'm doubtful of the effect size. Much more important for judgments of brightness/darkness is simply the pitch register. Listeners do indeed judge lower-frequency sounds as darker. If you have access to sound recordings and some sound-analysis software like Praat, a useful measure is the so-called spectral centroid. That's simply the average frequency of all spectral components weighted by their intensity. But if you're working from scores, a perfectly reasonable operational measure would be the average pitch for a sonority.
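
      Both operational measures can be sketched in a few lines; the numbers below are purely illustrative:

          # Spectral centroid: intensity-weighted average frequency of the spectral components.
          def spectral_centroid(frequencies, intensities):
              return sum(f * a for f, a in zip(frequencies, intensities)) / sum(intensities)

          # Score-based stand-in: the average pitch of a sonority (here as MIDI note numbers).
          def average_pitch(midi_notes):
              return sum(midi_notes) / len(midi_notes)

          print(spectral_centroid([220, 440, 660, 880], [1.0, 0.5, 0.33, 0.25]))  # ~422 Hz
          print(average_pitch([48, 55, 64, 67]))                                  # 58.5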

      All the best with your project.

      David Huron

       

      Bregman, A. B. (1990). Auditory scene analysis: the perceptual organization of sound. Cambridge: MIT Press.

      Cazden, N. (1945). Musical consonance and dissonance: A cultural criterion. Journal of Aesthetics and Art Criticism, 6(1), 3-11.

      Cazden, N. (1980). The definition of consonance and dissonance. International Review of the Aesthetics and Sociology of Music. 2, 123–168.

      Geer, J. P. Van de, Levelt, W. J. M., & Plomp, R. (1962). The connotation of musical consonance. Acta Psychologica, 20, 308-319.

      Glasberg, B. R., & Moore, B. C. J. (1990). Derivation of auditory filter shapes from notched-noise data. Hearing Research. 47, 103–138.

      Greenwood, D. D. (1961a). Auditory masking and the critical band. Journal of the Acoustical Society of America. 33, 484–501.

      Greenwood, D. D. (1961b). Critical bandwidth and the frequency coordinates of the basilar membrane. Journal of the Acoustical Society of America. 33, 1344–1356.

      Greenwood, D. D. (1991). Critical bandwidth and consonance in relation to cochlear frequency-position coordinates. Hearing Research. 54, 164–208.

      Hubbard, D.W. (2010). How to Measure Anything. 2nd edition. Hoboken, NJ: John Wiley & Sons.

      Huron, D. (1991). Tonal consonance versus tonal fusion in polyphonic sonorities. Music Perception. 9(2), 135–154.

      Huron, D. (1994). Interval-class content in equally tempered pitch-class sets: Common scales exhibit optimum tonal consonance. Music Perception, 11(3), 289-305.

      Huron, D. (2016). Voice Leading: The Science Behind a Musical Art. Cambridge, Massachusetts: MIT Press.

      Huron, D., & Sellmer, P. (1992). Critical bands and the spelling of vertical sonorities.  Music Perception, 10(2), 129-149.

      Hutchinson, W. and Knopoff, L. (1978). The acoustic component of Western consonance. Interface. 7(1), 1–29.

      Hutchinson, W. and Knopoff, L. (1979). The significance of the acoustic component of consonance in Western triads. Journal of Musicological Research. 3, 5–22.

      Kameoka, A. and Kuriyagawa, M. (1969a). Consonance theory, part I: Consonance of dyads. Journal of the Acoustical Society of America. 45(6), 1451–1459.

      Kameoka, A. and Kuriyagawa, M. (1969b). Consonance theory, part II: Consonance of complex tones and its computation method. Journal of the Acoustical Society of America. 45(6), 1460–1469.

      Malmberg, C.F. (1918). The perception of consonance and dissonance. Psychological Monographs, 25(2), 93-133.

      Mashinter, K. (2006). Calculating sensory dissonance: Some discrepancies arising from the models of Kameoka & Kuriyagawa, and Hutchinson & Knopoff. Empirical Musicology Review, 1(2), 65-84.

      Parncutt, R. (1989). Harmony: A Psychoacoustical Approach. Springer-Verlag.

      Plomp, R. and Levelt, W. J. M. (1965). Tonal consonance and critical bandwidth. Journal of the Acoustical Society of America. 38, 548–560.

      Simpson, J. (1994). Cochlear modeling of sensory dissonance and chord roots. Master’s Thesis, Systems Design Engineering. University of Waterloo.

      Tenney, J. (1988). A History of Consonance and Dissonance. New York: Excelsior.

      Terhardt, E. (1974). Pitch, consonance and harmony. Journal of the Acoustical Society of America, 55, 1061-1069.

      Vos, J. (1986). Purity ratings of tempered fifths and major thirds. Music Perception. 3(3), 221–258.

      Zwicker, E., Flottorp, G., and Stevens, S. S. (1957). Critical bandwidth in loudness summation. Journal of the Acoustical Society of America. 29, 548–557.

      Zwicker, E. (1974). On a psychoacoustical equivalent of tuning curves. In E. Zwicker & E. Terhardt (Eds.), Facts and Models in Hearing. Berlin: Springer-Verlag.