
Thread: why can humans intuitively know when a musical tone frequency is doubled/halved?

  1. #31
    Join Date
    Jun 2005
    Posts
    13,860
    Quote Originally Posted by Logix128 View Post
    it's just a vibration that makes its way inside of our ear. correct me if i'm wrong but there's no such thing as "sound" unless there's a brain to interpret the sound waves. machines can't decipher what something sounds like, they can only analyze the patterns of vibrations and output vibrations of specific patterns... "sound" is an absolute phenomenon of life
    The experience of anything, sounds or images or smells or emotions, I think is a phenomenon of life. So in a way you’re right. We are also doing something like creating frequency patterns and then interpreting them. A machine can listen to a person speaking and output it as text, but of course, at the moment it does not experience it as a sound or voice, since it doesn’t experience anything in the sense that we do.


    Sent from my iPhone using Tapatalk
    As above, so below

  2. #32
    Join Date
    Jun 2009
    Posts
    1,875
    Quote Originally Posted by grant hutchison View Post
    It's all down to the medial geniculate nucleus, which sits in the thalamus on the auditory pathway, and which "collapses" incoming signals separated by octave multiples into similar output. It's not unique to humans - trained rhesus monkeys match pitch chroma in the same way.
    We should include the hair cells in the picture. https://en.wikipedia.org/wiki/Hair_cell

    The evolutionary Just So story is that complex harmonic sounds, like animal calls and human vowel sounds, easily become ambiguous in pitch, by an octave, if some of the odd-number harmonics are attenuated. So collapsing even-numbered harmonics so that they're perceived as "the same tone" undoes that potential signalling problem.
    To elaborate on that answer, most physical structures that produce sound as a result of some applied force (an air flow, being hit by something, etc.) produce a wave that is a superposition of waves of different frequencies. To detect the sound from a given physical structure, it is advantageous to be able to detect several of the frequencies and recognize frequencies that often occur together as being similar. Usually there is a wave of the largest amplitude that is the "fundamental" pitch. The frequencies of the next highest amplitude are often an octave above and/or an octave below the fundamental frequency.

    The above remarks deal with the origin of the sound. The reception of the sound also involves physical structures such as the hair cells, so there is also the consideration of how the hair bundles can vibrate just considering them as physical structures. If a wave of a given shape passes through the fluid surrounding them, they cannot necessarily vibrate in a way that exactly reproduces that wave. I don't know if there have been detailed studies about how the hair bundles vibrate. It is known that the hair cells contribute to the phenomenon of "critical bands", as described in section 4.1.6.2 of http://digitalsoundandmusic.com/chapters/ch4/.
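If it helps to make the superposition idea concrete, here is a minimal Python sketch of a tone built as a sum of harmonics. The particular frequency and amplitudes are made up for illustration, not taken from any real instrument:

```python
import math

def harmonic_tone(fundamental_hz, amplitudes, t):
    """Sample, at time t (seconds), a tone that is a superposition of harmonics.

    amplitudes[k] is the amplitude of the (k+1)-th harmonic, i.e. the partial
    at frequency (k+1) * fundamental_hz.
    """
    return sum(a * math.sin(2 * math.pi * (k + 1) * fundamental_hz * t)
               for k, a in enumerate(amplitudes))

# A 110 Hz tone whose strongest partials are the fundamental and its second
# harmonic (the octave above), as in the discussion above.
samples = [harmonic_tone(110.0, [1.0, 0.6, 0.3], n / 44100.0) for n in range(441)]
```

The resulting waveform is a single pressure wave; the "harmonics" exist only as terms in the sum, which is exactly the point being debated later in the thread.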

  3. #33
    Join Date
    Jul 2005
    Posts
    18,099
    Quote Originally Posted by tashirosgt View Post
    We should include the hair cells in the picture. https://en.wikipedia.org/wiki/Hair_cell
    Hair cells are of course right at the start of the auditory pathway, but they're not the neurological location of the "octave collapse" that makes notes an octave apart feel similar to our brains.

    Grant Hutchison

  4. #34
    Join Date
    Jun 2009
    Posts
    1,875
    Quote Originally Posted by grant hutchison View Post
    Hair cells are of course right at the start of the auditory pathway, but they're not the neurological location of the "octave collapse" that makes notes an octave apart feel similar to our brains.
    The final step in a calculation is, by definition, where the final calculation is made. So my interpretation of your statement about the medial geniculate nucleus is that experiments presenting sounds to a subject show a great variety in neural activity "upstream" of the medial geniculate nucleus when two sounds an octave apart are presented. In the medial geniculate nucleus we observe neural activity that is "similar" (in some respect and by some algorithm for measuring similarity) when two sounds an octave apart are presented. Is that correct? Or is the function of the medial geniculate nucleus deduced in a more indirect way? - perhaps people with damage to that part of the brain don't sense the similarity of octaves.

  5. #35
    Join Date
    Jul 2005
    Posts
    18,099
    The earliest neurophysiological studies, using electrodes stuck into animal nerves and brains, showed there was a primary tonotopic organization to the auditory pathway - high-frequency hair cells in the cochlea connected to a different bit of the auditory cortex from low-frequency hair cells, and you could generate a map of the auditory cortex in terms of the sound frequencies it reacted to. But back in the '60s it became evident, from simple microanatomical studies, that the medial geniculate nucleus was oddly laminated. Applying neurophysiology to those laminations using microelectrodes revealed that they were ordered by octave - chroma was spatially mapped within each lamina, with octave jumps between laminae - so neurons that respond specifically to C4 are stacked between (and connected to) neurons that respond to C3 and C5. That's pretty striking stuff - the purely tonotopically sorted input from the cochlea is suddenly layered up into octaves when it arrives in the medial geniculate, before being relayed onwards to the auditory cortex. (IIRC, there's newer evidence that the auditory cortex itself actually contains links between octaves, too - but the medial geniculate is where the first organization takes place.)

    Newer evidence involves high-resolution fMRI of intact humans, and I haven't been following that. But I find it disproportionately pleasing that there's this very obvious neurological architecture in our brain stems which reflects something fundamental that we perceive about the world.

    Grant Hutchison
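The chroma/octave split described in that post has a simple mathematical form: chroma is the fractional part of the base-2 logarithm of frequency. A small Python sketch (the 16.352 Hz reference, roughly C0, is just an illustrative choice):

```python
import math

def chroma_octave(freq_hz, ref_hz=16.352):
    """Split a frequency into (octave index, chroma), where chroma is the
    fractional part of log2(freq / ref). Tones an octave apart share a chroma,
    which is the "octave collapse" described above. The 16.352 Hz reference
    (roughly C0) is an illustrative choice, not anything physiological."""
    x = math.log2(freq_hz / ref_hz)
    octave = math.floor(x)
    return octave, x - octave
```

Doubling the frequency bumps the octave index by one and leaves the chroma unchanged - the numerical analogue of stacking C4 between C3 and C5.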

  6. #36
    Join Date
    Jun 2009
    Posts
    1,875
    Quote Originally Posted by grant hutchison View Post
    Newer evidence involves high-resolution fMRI of intact humans, and I haven't been following that. But I find it disproportionately pleasing that there's this very obvious neurological architecture in our brain stems which reflects something fundamental that we perceive about the world.
    It's remarkable that the brain analyzes sound in terms of frequencies. Expressing a sound as a superposition of frequencies is a purely mathematical idea. Most sounds aren't produced by a set of objects, each vibrating at one frequency.

  7. #37
    Join Date
    Mar 2007
    Location
    Falls Church, VA (near Washington, DC)
    Posts
    8,816
    Quote Originally Posted by tashirosgt View Post
    It's remarkable that the brain analyzes sound in terms of frequencies. Expressing a sound as a superposition of frequencies is a purely mathematical idea. Most sounds aren't produced by a set of objects, each vibrating at one frequency.
    But many are produced by objects that vibrate at a fundamental frequency combined with overtones.

  8. #38
    Join Date
    Jun 2009
    Posts
    1,875
    Quote Originally Posted by Hornblower View Post
    But many are produced by objects that vibrate at a fundamental frequency combined with overtones.
    Yes, that's the argument for why evolution should produce a detection system that looks for both the fundamental frequency and the stronger overtones. My point is that "fundamental frequency" and "overtones" are essentially mathematical ideas. Usually they are not physically implemented by distinct objects. One string vibrates with a wave form that is mathematically the sum of several pure frequencies, but the meaning of "sum" in this situation is an arithmetical sum. "Sum" doesn't have the general meaning of "combined effect" - as implied in statements like "Discontent with new road construction added to the burden of new taxes and caused Mayor Grumbo to lose the election."

  9. #39
    Join Date
    Jul 2005
    Posts
    18,099
    Quote Originally Posted by tashirosgt View Post
    It's remarkable that the brain analyzes sound in terms of frequencies. Expressing a sound as a superposition of frequencies is a purely mathematical idea.
    Well, it's how sounds are transduced by the cochlea - the incoming complex sound wave is parsed into vibrations of the basilar membrane with maxima at different distances along the cochlear curve. If you laid the cochlea flat and plotted oscillation amplitude against distance, you'd have a frequency spectrum of the incoming sound. The hair cells at each point along the membrane are firing in proportion to the oscillations they experience, so what passes centrally down the auditory nerve is a depiction of the frequency spectrum of the received sound waves.

    Grant Hutchison
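The "cochlea as spectrum analyzer" picture can be mimicked in software with a plain discrete Fourier transform. This is a naive stdlib-only sketch (the bin numbers and amplitudes are invented for the demonstration), not a model of the actual membrane mechanics:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform, returning the magnitude of each
    positive-frequency bin - a software stand-in for the spectrum the basilar
    membrane lays out mechanically along its length."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

# A test signal with partials at bins 5 and 10 - an octave apart, in bin terms.
n = 64
signal = [math.sin(2 * math.pi * 5 * t / n) + 0.5 * math.sin(2 * math.pi * 10 * t / n)
          for t in range(n)]
mags = dft_magnitudes(signal)
```

The two largest magnitudes land at bins 5 and 10, just as the hair cells at two points along the membrane would fire most strongly for this input.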

  10. #40
    Join Date
    Apr 2011
    Location
    Norfolk UK and some of me is in Northern France
    Posts
    8,743
    The way we see and hear, we can presume, were advantageous during evolution. In light we interpret the three colour cones as a spectrum of colours, while in hearing we separate out sounds into notes and we can hear chords and intervals. In principle it could have been the other way around, but these interpretations served us well. That is as near to the why question as I can get. Some animals have more cones, but we do not know how they choose to form them in their minds. In light we settled on the visible range we see, while in some modern scenarios it would be useful to have IR and UV cones too, especially if we could choose them sometimes. In hearing, many animals seem to hear well below and above our limits; again we could sometimes use those abilities. The octave and other pleasant intervals are our interpretations of pitch.
    sicut vis videre esto
    When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense.
    Originally Posted by Ken G

  11. #41
    Join Date
    Jun 2005
    Posts
    13,860
    Quote Originally Posted by tashirosgt View Post
    It's remarkable that the brain analyzes sound in terms of frequencies. Expressing a sound as a superposition of frequencies is a purely mathematical idea. Most sounds aren't produced by a set of objects, each vibrating at one frequency.
    It’s not purely mathematical, because as I wrote earlier, when you listen to a very low note, you can very clearly hear the frequency of the air vibrating. If you play the lowest sounds on an organ you can distinctly hear the fundamental vibration.


    Sent from my iPhone using Tapatalk
    As above, so below

  12. #42
    Join Date
    Jun 2009
    Posts
    1,875
    Quote Originally Posted by Jens View Post
    It’s not purely mathematical, because as I wrote earlier, when you listen to a very low note, you can very clearly hear the frequency of the air vibrating. If you play the lowest sounds on an organ you can distinctly hear the fundamental vibration.


    Sent from my iPhone using Tapatalk
    It's been pointed out that the hearing of different frequencies is due to the construction of your ear and brain, not some division in the construction of the things making the sound. So I regard evolution as a process that created the approach of analyzing a wave into a sum of pure frequencies. Yes, the procedure of this mathematical decomposition is physically implemented, if that's what you mean. It's physically implemented on the detection end of the process, not in the physical processes that create sound. Generally speaking, when you hear a fundamental together with overtones, you are not hearing the result of different objects, each vibrating at a (pure) frequency.

  13. #43
    Join Date
    Dec 2005
    Posts
    1,358
    Colour vision in humans is based on the combination of three narrow-band-pass filters within the visible range, namely red, green and blue. In this sense one can say that colour vision (i.e. frequency or wavelength detection) is discrete and non-continuous. Hearing, on the other hand, is strictly an analogue variation of cycles per second.

    As an analogy, it is easier for humans to imagine number 16 when they know numbers 1, 2, 3, 4, 5, 6, 7 & 8, but difficult for them to imagine number 16 when they only know numbers 1, 4 & 7.

    clop

  14. #44
    Join Date
    Oct 2009
    Location
    a long way away
    Posts
    10,790
    Quote Originally Posted by clop View Post
    Colour vision in humans is based on the combination of three narrow-band-pass filters within the visible range, namely red, green and blue. In this sense one can say that colour vision (i.e. frequency or wavelength detection) is discrete and non-continuous.
    I don't think that is accurate. The three receptors have a pretty wide range and overlap to a large extent.


    From: https://en.wikipedia.org/wiki/Cone_cell

  15. #45
    Join Date
    Dec 2005
    Posts
    1,358
    Quote Originally Posted by Strange View Post
    I don't think that is accurate. The three receptors have a pretty wide range and overlap to a large extent.


    From: https://en.wikipedia.org/wiki/Cone_cell
    Thank you. I like that graph.

    The S and M peaks seem pretty far apart to me?

  16. #46
    Join Date
    Jun 2009
    Posts
    1,875
    Quote Originally Posted by clop View Post
    Hearing, on the other hand, is strictly an analogue variation of cycles per second.
    See section 4.1.6.2 of http://digitalsoundandmusic.com/chapters/ch4/
    When a complex sound arrives at the basilar membrane, each critical band acts as a kind of bandpass filter, responding only to vibrations within its frequency spectrum. In this way, the sound is divided into frequency components.
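The bandpass behaviour of a critical band can be illustrated with a toy resonator model: a damped oscillator tuned to a centre frequency responds most strongly when driven near that frequency. The Q factor, time step, and frequencies below are illustrative choices, not physiological values:

```python
import math

def resonator_response(center_hz, drive_hz, q=20.0, dt=1e-5, seconds=0.5):
    """Peak displacement of a damped oscillator tuned to center_hz when driven
    sinusoidally at drive_hz - a toy, non-physiological model of one "critical
    band" resonating on the basilar membrane."""
    w0 = 2 * math.pi * center_hz
    damping = w0 / q
    x = v = peak = 0.0
    for i in range(int(seconds / dt)):
        t = i * dt
        accel = math.sin(2 * math.pi * drive_hz * t) - damping * v - w0 * w0 * x
        v += accel * dt          # semi-implicit Euler keeps the oscillator stable
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# A tiny "filter bank": the resonator tuned nearest the 440 Hz drive responds most.
bank = {f: resonator_response(f, 440.0) for f in (220.0, 440.0, 880.0)}
```

Each resonator acts as a crude bandpass filter, so a bank of them divides an incoming sound into frequency components, which is the mechanism the quoted section describes.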

  17. #47
    Join Date
    Dec 2006
    Location
    Canberra
    Posts
    2,174
    Quote Originally Posted by tashirosgt View Post
    hearing of different frequencies is due to the construction of your ear and brain, not some division in the construction of the things making the sound.
    I don't think that is true. https://en.wikipedia.org/wiki/Overtone says "when a resonant system such as a blown pipe or plucked string is excited, a number of overtones may be produced along with the fundamental tone... It will oscillate at several of its modal frequencies at the same time. So when a note is played, this gives the sensation of hearing other frequencies (overtones) above the lowest frequency (the fundamental). Timbre is the quality that gives the listener the ability to distinguish between the sound of different instruments. The timbre of an instrument is determined by which overtones it emphasizes. That is to say, the relative volumes of these overtones to each other determines the specific "flavor", "color" or "tone" of sound of that family of instruments."

    The octave is the main overtone. As explained above, and in the rest of that very interesting article including its reference to Fourier analysis, overtones are a mechanical resonant harmonic product of the sound itself, not just a function of our hearing system.

    https://en.wikipedia.org/wiki/Harmonic_series_(music) further explains the nature of musical resonance.

    The importance of whole fractions in music provides harmonic consonance rather than dissonance, with music being in tune when different sound frequencies are in fraction relation. The major scale is given by the following fractions: 1; 9/8; 5/4; 4/3; 3/2; 5/3; 15/8; 2. The lowest whole number equivalents are: 24, 27, 30, 32, 36, 40, 45, 48. The equal tempered scale, designed to enable all keys to be used, replaces these whole fractions with intervals based on the semitone calculated at the twelfth root of two.
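The arithmetic in that last paragraph is easy to check. Here is a short Python sketch that derives the whole-number series from the quoted fractions and builds the equal-tempered scale from the twelfth root of two:

```python
import math
from fractions import Fraction

# The just-intonation major scale quoted above, as exact fractions, and the
# equal-tempered scale built from the semitone ratio 2 ** (1/12).
just = [Fraction(n, d) for n, d in
        [(1, 1), (9, 8), (5, 4), (4, 3), (3, 2), (5, 3), (15, 8), (2, 1)]]
semitone_steps = [0, 2, 4, 5, 7, 9, 11, 12]      # major-scale degrees in semitones
tempered = [2 ** (s / 12) for s in semitone_steps]

# Lowest whole-number equivalents: scale every fraction by the lcm of denominators.
scale = math.lcm(*(f.denominator for f in just))
whole = [int(f * scale) for f in just]           # the 24, 27, 30, ... series above
```

The tempered fifth (2^(7/12) ≈ 1.498) sits close to, but not exactly at, the just 3/2 - the familiar compromise that lets all twelve keys be usable.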

  18. #48
    Join Date
    Jun 2009
    Posts
    1,875
    Quote Originally Posted by Robert Tulip View Post
    I don't think that is true. https://en.wikipedia.org/wiki/Overtone says "when a resonant system such as a blown pipe or plucked string is excited, a number of overtones may be produced along with the fundamental tone...
    That has to do with how we may analyze a complex waveform as a sum of simpler waves - and how our hearing system is constructed to perform such an analysis. My point is that a resonant system usually does not consist of separate pieces, each vibrating at pure frequency. So analyzing a waveform as a sum of other waves is just one mathematical option.

    By analogy, if we are given a sequence of numbers 2, 4, 4, 4, 2 we may imagine it to be constructed by summing the sequence 1, 2, 3, 2, 1 with the sequence 1, 2, 1, 2, 1 and, objectively speaking, it is indeed such a sum. However, this mathematical analysis does not demonstrate that a natural phenomenon producing the sequence 2, 4, 4, 4, 2 generated it by generating those two sequences and adding them together. For example, 2, 4, 4, 4, 2 is also the sum of 1, 0, 0, 0, 1 and 1, 4, 4, 4, 1.
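The non-uniqueness in that analogy can be verified in a couple of lines of Python:

```python
# The sequence 2, 4, 4, 4, 2 from the post decomposes as a sum in more than
# one way, so the decomposition by itself says nothing about how the sequence
# was actually generated.
target = [2, 4, 4, 4, 2]

def elementwise_sum(a, b):
    return [x + y for x, y in zip(a, b)]

decompositions = [
    ([1, 2, 3, 2, 1], [1, 2, 1, 2, 1]),
    ([1, 0, 0, 0, 1], [1, 4, 4, 4, 1]),
]
```

Both pairs sum to the same target, which is the whole point: a valid decomposition is not evidence about the generating process.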

  19. #49
    Join Date
    Jun 2005
    Posts
    13,860
    Quote Originally Posted by tashirosgt View Post
    That has to do with how we may analyze a complex waveform as a sum of simpler waves - and how our hearing system is constructed to perform such an analysis. My point is that a resonant system usually does not consist of separate pieces, each vibrating at pure frequency. So analyzing a waveform as a sum of other waves is just one mathematical option.
    I’m not sure exactly what you are trying to point out. I realize and think others realize that a sound is just a single waveform that can be mathematically analyzed (our ears essentially perform a Fourier transform). But I’m not sure why that is important to this discussion.


    Sent from my iPhone using Tapatalk
    As above, so below

  20. #50
    Join Date
    Mar 2007
    Location
    Falls Church, VA (near Washington, DC)
    Posts
    8,816
    Quote Originally Posted by tashirosgt View Post
    That has to do with how we may analyze a complex waveform as a sum of simpler waves - and how our hearing system is constructed to perform such an analysis. My point is that a resonant system usually does not consist of separate pieces, each vibrating at pure frequency. So analyzing a waveform as a sum of other waves is just one mathematical option.

    By analogy, if we are given a sequence of numbers 2, 4, 4, 4, 2 we may imagine it to be constructed by summing the sequence 1, 2, 3, 2, 1 with the sequence 1, 2, 1, 2, 1 and, objectively speaking, it is indeed such a sum. However, this mathematical analysis does not demonstrate that a natural phenomenon producing the sequence 2, 4, 4, 4, 2 generated it by generating those two sequences and adding them together. For example, 2, 4, 4, 4, 2 is also the sum of 1, 0, 0, 0, 1 and 1, 4, 4, 4, 1.
    My bold. Has anyone suggested otherwise in this thread?

  21. #51
    Join Date
    Jun 2009
    Posts
    1,875
    Quote Originally Posted by Hornblower View Post
    My bold. Has anyone suggested otherwise in this thread?
    I don't know. When people object to the idea that analysis of a sound into individual frequencies is NOT a purely mathematical concept, what do they have in mind as an alternative?

  22. #52
    Join Date
    Jun 2009
    Posts
    1,875
    Quote Originally Posted by Jens View Post
    I’m not sure exactly what you are trying to point out. I realize and think others realize that a sound is just a single waveform that can be mathematically analyzed (our ears essentially perform a Fourier transform). But I’m not sure why that is important to this discussion.
    My original observation was that it is remarkable that the hearing system evolved to use Fourier analysis as its method of analyzing sound - seeing as Fourier analysis is not the only way to represent a sound wave. Since (I think) participants generally agree that the hearing system does analyse sound by representing it as the superposition of frequencies, it seems an important question to ask WHY that method was selected by evolution.

  23. #53
    Join Date
    Jun 2005
    Posts
    13,860
    Quote Originally Posted by tashirosgt View Post
    I don't know. When people object to the idea that analysis of a sound into individual frequencies is NOT a purely mathematical concept, what do they have in mind as an alternative?
    Before I make any comments about this, could I ask if the grammar is correct? It seems that you are saying that it is not a mathematical concept, and you are wondering how someone could think it is just a mathematical concept? Or do you mean the opposite? I’m just worried you might have accidentally added an extra “not.”


    Sent from my iPhone using Tapatalk
    As above, so below

  24. #54
    Join Date
    Jun 2005
    Posts
    13,860
    Quote Originally Posted by tashirosgt View Post
    My original observation was that it is remarkable that the hearing system evolved to use Fourier analysis as its method of analyzing sound - seeing as Fourier analysis is not the only way to represent a sound wave.
    To be honest, I don’t think Fourier analysis is a way to represent sound. It is simply a way of breaking a complex waveform into its individual components. A sound is simply frequency and amplitude and time. So it is a waveform. It’s not that you can represent it as a waveform, it is a waveform.


    Sent from my iPhone using Tapatalk
    As above, so below

  25. #55
    Join Date
    Apr 2011
    Location
    Norfolk UK and some of me is in Northern France
    Posts
    8,743
    Also the ear mechanism, as explained above by Grant, is a conical horn with discrete hairs which individually resonate at specific frequencies. The brain gets direct signals from those; it does not need to do Fourier analysis of a wave as does a microphone.
    sicut vis videre esto
    When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense.
    Originally Posted by Ken G

  26. #56
    Join Date
    Oct 2009
    Location
    a long way away
    Posts
    10,790
    Quote Originally Posted by tashirosgt View Post
    My point is that a resonant system usually does not consist of separate pieces, each vibrating at pure frequency.
    It may do. But it may consist of one piece that resonates at multiple frequencies. In the simplest case, the fundamental frequency corresponds to a standing wave whose half-wavelength matches the length of the string/pipe/whatever. Harmonics correspond to standing waves where multiple half-wavelengths fit in that length. Other frequencies correspond to other resonant modes. The frequencies don't come from nowhere, so I don't think it is a completely arbitrary choice to analyse the signal in terms of those frequencies. And, of course, it is not the only method used.

    (But a lot of this discussion depends on what one means by what is "really" happening or "just a model"; and then you are into the area of the "reality" thread.)
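The standing-wave picture in that post reduces to the textbook formula f_n = n·v/(2L) for an ideal string fixed at both ends. A short Python sketch (the length and wave speed are invented, chosen only to give a 110 Hz fundamental):

```python
def string_mode_frequencies(length_m, wave_speed_mps, modes=5):
    """Resonant frequencies of an ideal string fixed at both ends:
    f_n = n * v / (2 * L), i.e. mode n fits n half-wavelengths into the string."""
    return [n * wave_speed_mps / (2 * length_m) for n in range(1, modes + 1)]

# Illustrative numbers (chosen to give a 110 Hz fundamental, not a real string):
freqs = string_mode_frequencies(0.65, 143.0)
```

The second mode comes out at exactly twice the fundamental - the octave - which is why those particular frequencies are a natural basis for analysing the signal.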

  27. #57
    Join Date
    Mar 2007
    Location
    Falls Church, VA (near Washington, DC)
    Posts
    8,816
    Quote Originally Posted by profloater View Post
    Also the ear mechanism, as explained above by Grant, is a conical horn with discrete hairs which individually resonate at specific frequencies. The brain gets direct signals from those; it does not need to do Fourier analysis of a wave as does a microphone.
    A microphone does not do Fourier analysis. It just repeats what it hears. We the people do Fourier analysis (or whatever mathematical technique is useful for the task at hand) and verify the similarity between the incoming acoustic oscillation and the outgoing electrical oscillation.

  28. #58
    Join Date
    Mar 2007
    Location
    Falls Church, VA (near Washington, DC)
    Posts
    8,816
    Quote Originally Posted by plant View Post
    It is not obvious to me why a musical note - let’s say “C” - sounds ‘the same’ as a frequency double or half.
    This doesn’t occur with vision (except that a ‘color wheel’ seems continuous, with violet seeming to nicely shift back into red).
    Presumably this encoding is ‘accidental’ from an evolutionary perspective?
    Are there people who don’t perceive this relationship between sound frequencies?
    Is there any evidence that other mammals e.g. dolphins/whales/apes perceive intervals differently?
    One could imagine an experiment where a ‘reward’ is given when 2 tones are played with multiples of the same frequency etc.

    It seems the notion of ‘harmony’ might be a cultural phenomenon rather than a biological one.. but the perception of ‘an octave’ seems biological?

    Does anyone have any thoughts about this?
    To answer your title question, I don't think we know instinctively that an octave is produced by doubled frequency. As a young child I instinctively recognized a sensory similarity, but it was only years later in reading textbook articles that I learned about the frequency relationship, as inferred by Pythagoras from his experiments with oscillators that were slow enough for him to time with some certainty, and then extrapolate to musical instrument strings.

  29. #59
    Join Date
    Apr 2011
    Location
    Norfolk UK and some of me is in Northern France
    Posts
    8,743
    Quote Originally Posted by Hornblower View Post
    A microphone does not do Fourier analysis. It just repeats what it hears. We the people do Fourier analysis (or whatever mathematical technique is useful for the task at hand) and verify the similarity between the incoming acoustic oscillation and the outgoing electrical oscillation.
    Yes. What I meant was that if you have a microphone giving you a sound wave and you want to extract frequencies from it, you need a Fourier analysis, but the construction of the ear gives direct frequency outputs.
    sicut vis videre esto
    When we realize that patterns don't exist in the universe, they are a template that we hold to the universe to make sense of it, it all makes a lot more sense.
    Originally Posted by Ken G

  30. #60
    Join Date
    Jun 2009
    Posts
    1,875
    Quote Originally Posted by grant hutchison View Post
    Well, it's how sounds are transduced by the cochlea -
    I don't detect that you disagree with my statement. Presumably the cochlea evolved to perform a frequency analysis. Evolution selected that type of mathematical analysis as opposed to a different type of mathematical analysis.
