Philological sciences / 9. Ethno-, socio- and psycholinguistics.

Cand. Sc. (Philology) Vandysheva A.V., Ivanchenko T.Yu.

Academy of Marketing and Social Information Technologies, Krasnodar

The Effect of Sound upon the Brain

The present study examines neurological processes in the brain caused by words and sounds. In this paper we survey research results that have identified certain mechanisms by which the human brain interprets sounds and speech.

There are certain words which seem to attract a certain blessing in life. Some attract power, some bring release from difficulties, and some give courage and strength.

What makes a word powerful? Is it the meaning, the vibration, the way it is used, or the knowledge of the teacher who teaches the pupil to repeat it? The answer to such questions is that some words have power because of their meaning, others because of the vibration they produce, others for their influence upon the various centers.

Words have power to vibrate through different parts of man's body. There are words that echo in the heart, and there are others that do so in the head, and again others that have power over the body. By certain words definite emotions can be quickened or calmed. There is also a science of syllables, which has its own particular effect.

So the question arises: how does our brain perceive words? Studies of the neurological effect of sound have shown that the human brain reacts to clear sounds in a particular way. Positron emission tomography (PET), which measures the level of glucose absorption, revealed that such sounds trigger cellular hyperactivity in the right hemisphere.

Psychoacoustics, a newly emerging field of human potential technology, promises to radically affect human behavior through its study of sound, language, and music and their effects on the brain/mind. Only recently have we begun to understand the physiological effects of sound and music on the brain.

Tom Kenyon claims that acoustic stimulation of the brain is accomplished via the auditory pathways which are routed into the auditory cortex. The Reticular Activating System (RAS) is also activated through the spinoreticular fibers located in laminae of the spinal gray matter. While the RAS is not equipped to deal with specific sensory information, it is well suited for controlling arousal. Any strong stimulation, such as sound, activates the RAS, thereby diffusely activating the entire cerebral cortex, the seat of “higher” thought.

The human brain is hard-wired to find combinations of integer harmonic frequencies pleasing. Combined integer harmonics are two or more separate tones, heard at the same time, whose frequencies are related by a simple integer ratio. For instance, the frequencies 1000 Hz and 2000 Hz heard together form a combined integer harmonic, because 2000 Hz is exactly twice 1000 Hz. Human speech uses such simple harmonic tones to construct the sounds in words; in speech the harmonic ratios are typically numbers like 2/5, 1/2, 1/3, etc. These tones heard together are called formants (M. Townsend). Formants are discrete sounds within a word, equating to phonemes in phonetics. Instead of hearing the two tones combined as a single musical note, our brain interprets the sound as a discrete sound within a word. So, for instance, the 'O' sound might typically consist of a 500 Hz and 1000 Hz frequency combination.
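As a rough illustration of the combined-integer-harmonic idea above, the minimal Python sketch below mixes two sine tones whose frequencies stand in a simple 1:2 ratio, as in the 500 Hz / 1000 Hz 'O' example; the sample rate, duration, and function name are illustrative assumptions, not values taken from the cited research.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (assumed, CD-quality rate)


def harmonic_pair(f_low, ratio=2, duration=1.0):
    """Mix two sine tones whose frequencies form a simple integer ratio."""
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    low = np.sin(2 * np.pi * f_low * t)
    high = np.sin(2 * np.pi * f_low * ratio * t)
    return (low + high) / 2.0  # averaging keeps the amplitude within [-1, 1]


# The 'O'-like example from the text: 500 Hz combined with 1000 Hz (ratio 1:2).
o_like = harmonic_pair(500, ratio=2)
```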

The only information we get from our ears is the amplitude, frequency and time of arrival of sounds. It is left entirely to our brains to interpret what the sounds are, relying mainly on experience, context and expectation. Townsend insists that the brain generally interprets sounds in one of three 'modes'. In one mode it interprets a sound as random noise. In another mode the same sound appears to be music. In the third mode, the same sound becomes speech.

Interpretation of a sound by the brain is largely a matter of expectation. If we hear tones from a musical scale, particularly set to a fixed rhythm, we are likely to hear them as music. If we hear sounds with the typical frequency range and rhythms of speech, we will probably try to interpret them as words. If we do not hear a sound as music or speech, we will hear it in its raw state, as a mixture of frequencies.

If we are listening to someone in a noisy situation we may not hear all the words. Our brains will 'fill in' the gaps with likely words, based on expectation, and these guesses are sometimes wrong. We will actually hear and remember the 'filled in' words even when they are wrong. The words we hear are produced in our brains, not our ears.

In the phoneme restoration effect, someone is played a recording of a spoken sentence where one word is replaced by white noise of the same duration. And yet, people still 'hear' the missing word. Their brain has inserted it using context and expectation. In the verbal transformation effect, someone is played a word repeatedly. After many repeats, the word turns into another with a similar sound structure ('truce' may transform to 'truth', for instance). These effects, together with other scientific evidence, demonstrate that the brain decides what it hears based on experience, context and expectation.
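To make the stimulus construction used in such phoneme-restoration experiments concrete, here is a small hypothetical Python sketch that replaces a chosen span of an audio signal (a NumPy array of float samples) with white noise of the same duration; the function name and the loudness-matching choice are my own assumptions, not details of the cited studies.

```python
import numpy as np


def replace_with_noise(signal, start, end, rng=None):
    """Return a copy of `signal` with the span [start:end) replaced by
    white noise of equal duration, roughly matched in loudness (RMS)."""
    rng = rng or np.random.default_rng()
    out = signal.astype(float).copy()
    segment = out[start:end]
    rms = float(np.sqrt(np.mean(segment ** 2))) or 1e-6  # avoid zero scale on silence
    noise = rng.normal(0.0, 1.0, end - start)
    noise *= rms / (np.sqrt(np.mean(noise ** 2)) + 1e-12)
    out[start:end] = noise
    return out
```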

Experiments have revealed that almost any simple noise, like white noise, can sound like speech if the person listening to it is in 'speech mode'. The more voice-like features the noise has (such as frequencies and rhythm), the more readily people will interpret it as words. If there are peaks in the frequency spectrum of the noise that happen, by chance, to form a harmonic ratio, as in formants, there is a much higher chance it will sound like speech. If there are variations in the overall amplitude of the sound that give it a rhythm similar to words in human speech, that will also greatly increase the chances of its being interpreted as a voice. Also, if the spectrum envelope of the sound (the overall frequency range) is restricted to that typical of a human voice, the illusion of speech is increased. The actual frequencies of the harmonics and the spectrum envelope do not have to be identical to those of normal human speech: research has shown that people still understand speech even when it has been frequency-shifted.

Townsend calls noise with these characteristics 'formant noise'. Though the apparent formants may make no sense (as they are noise, not words), our brains will work hard to turn the result into recognizable words. That is because the brain uses a 'top-down' approach to processing speech, trying to fit likely words to the apparent formants present. This explains why, with formant noise, you never 'hear' partial words. The words come from your brain, not the sound, and are made to fit the noise. In the same way, whole phrases can emerge. You may need to listen to formant noise several times to fix the phrase as your brain tries various likely alternatives. If someone tells you beforehand what the 'words' are meant to be, you will often hear them straight away.
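As a loose sketch of how such 'formant noise' could be simulated, the Python snippet below shapes white noise with two narrow spectral peaks in a 1:2 ratio, restricts the overall envelope to a speech-like band, and imposes a slow amplitude rhythm; the band limits, peak frequencies, and rhythm rate are assumptions chosen for illustration, not Townsend's parameters.

```python
import numpy as np
from scipy.signal import butter, lfilter

SAMPLE_RATE = 16000  # Hz; a typical speech-bandwidth sample rate (assumed)


def bandpass(x, low, high):
    """4th-order Butterworth band-pass filter between `low` and `high` Hz."""
    b, a = butter(4, [low, high], btype="band", fs=SAMPLE_RATE)
    return lfilter(b, a, x)


def formant_like_noise(duration=2.0, rhythm_hz=4.0):
    """White noise with two harmonically related peaks (500 and 1000 Hz),
    a voice-like overall band (300-3400 Hz), and a syllable-like rhythm."""
    n = int(SAMPLE_RATE * duration)
    noise = np.random.default_rng().normal(0.0, 1.0, n)
    voiced = bandpass(noise, 300, 3400)                              # spectrum envelope
    peaks = bandpass(noise, 450, 550) + bandpass(noise, 950, 1050)   # 1:2 'formant' peaks
    shaped = voiced + 3.0 * peaks
    t = np.arange(n) / SAMPLE_RATE
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * rhythm_hz * t))       # slow amplitude rhythm
    return shaped * envelope
```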

In conversation, humans recognize words primarily from the sounds they hear. However, scientists have long known that what humans perceive goes beyond the sounds and even the sights of speech. The brain actually constructs its own unique interpretation, factoring in both the sights and sounds of speech.

For example, when combining the acoustic patterns of speech with the visual images of the speaker's mouth moving, humans sometimes reconstruct a syllable that is not physically present in either sight or sound. Although this illusion suggests spoken syllables are represented in the brain in a way that is more abstract than the physical patterns of speech, scientists haven't understood how the brain generates abstractions of this sort.

Researchers at the University of Chicago have identified brain areas responsible for this perception. One of these areas, known as Broca's region, is typically thought of as an area of the brain used for talking rather than listening.

Uri Hasson, lead author of the study and a post-doctoral scholar at the university's Human Neuroscience Laboratory, explains that when the speech sounds do not correspond exactly to the words that are mouthed, the brain often conjures a third sound as an experience, and this experience may differ from what was actually spoken. He gives an example in which a voice pronounces the syllable 'pa' while the speaker's lips mouth 'ka'. One would expect to hear 'pa', because that is what was said. In fact, with the conflicting auditory and visual signals, the brain is far more likely to hear 'ta', an entirely new sound.

This demonstration is called the McGurk effect (named after Harry McGurk, a developmental psychologist from England who first noticed this phenomenon in the 1970s). In the current study, scientists used functional magnetic resonance imaging (graphic depiction of brain activity) to demonstrate that Broca's region is responsible for the type of abstract speech processing that underlies this effect.

Although we experience speech as a series of words like print on a page, the speech signal is not as clear as print, and must be interpreted rather than simply recognized, Hasson explains.

He says this paper provides a glimpse into how such interpretations are carried out in the brain. These types of interpretations might be particularly important when the speech sounds are unclear, such as when conversing in a crowded bar, listening to an unfamiliar accent, or coping with hearing loss.

In all these cases, understanding what is said requires interpreting the physical speech signal, and scientists now know that Broca's region plays a major role in this process.

R. Näätänen from the University of Helsinki describes the contribution of the mismatch negativity (MMN), and of its magnetic equivalent (MMNm), to our understanding of the perception of speech sounds in the human brain. MMN data indicate that each sound, speech or nonspeech, develops a neural representation corresponding to the percept of that sound in the neurophysiological substrate of auditory sensory memory. The accuracy of this representation, which determines the accuracy of discrimination between different sounds, can be probed with MMN separately for any auditory feature (e.g., frequency or duration) or stimulus type such as phonemes. Furthermore, MMN data show that the perception of phonemes, and probably also of larger linguistic units (syllables and words), is based on language-specific phonetic traces developed in the posterior part of the left-hemisphere auditory cortex. These traces serve as recognition models for the corresponding speech sounds in listening to speech. MMN studies further suggest that these language-specific traces for the mother tongue develop during the first few months of life. Moreover, MMN can also index the development of such traces for a foreign language learned later in life. MMN data have also revealed the existence of neuronal populations in the human brain that can encode acoustic invariances specific to each speech sound, which could explain correct speech perception irrespective of the acoustic variation between different speakers and word contexts.

Scientists at the University of Rochester have discovered that the hormone estrogen plays a pivotal role in how the brain processes sounds.

Raphael Pinaud, assistant professor of brain and cognitive sciences at the University of Rochester and lead author of the study, said they had discovered estrogen “doing something totally unexpected”. The findings of this study indicate that estrogen plays a central role in how the brain extracts and interprets auditory information. It does this on a scale of milliseconds in neurons, as opposed to the days, months or even years over which estrogen is more commonly known to affect an organism. Pinaud, along with Lisa Tremere, a research assistant professor of brain and cognitive sciences, and Jin Jeong, a postdoctoral fellow in Pinaud's laboratory, demonstrated that increasing estrogen levels in brain regions that process auditory information caused heightened sensitivity of sound-processing neurons, which encoded more complex and subtle features of the sound stimulus. Pinaud's team also showed that estrogen is required to activate genes that instruct the brain to lay down memories of those sounds.

Pinaud’s research revealed a dual role played by estrogen. It was discovered that estrogen modulates the gain of auditory neurons instantaneously, and it initiates cellular processes that activate genes that are involved in learning and memory formation.

Pinaud’s theory opens prospects for investigating how neurons adapt their functionality when encountering new sensory information and how these changes may ultimately enable the formation of memories; and for exploring the specific mechanisms by which estrogen might impact these processes.

References:

1. Easton, J. New Brain Mechanism Identified For Interpreting Speech. The University of Chicago Medical Center. – Dec. 19, 2007. [Electronic resource]. – Mode of access: http://www.uchospitals.edu/news/2007/20071219-brain.html

2. Estrogen controls how the brain processes sound. – May 5, 2009. [Electronic resource]. – Mode of access: http://www.physorg.com/news160765483.html

3. Kenyon, T. Theoretical Constructs of ABR Technology. [Electronic resource]. – Mode of access: http://tomkenyon.com/theoretical-constructs-of-abr-technology

4. Näätänen, R. The perception of speech sounds by the human brain as reflected by the mismatch negativity and its magnetic equivalent [Text] / R. Näätänen // Psychophysiology. – 2001. – № 38. – Pp. 1–21. – Cambridge University Press; Society for Psychophysiological Research.

5. Townsend, M. EVP formant noise theory (electronic voice phenomena). [Electronic resource]. – Mode of access: http://www.assap.org/newsite/htmlfiles/Articles.html