Neuroscience and Karma ► 11. Hearing, Speaking and Writing

Posted: 07.07.2015

1. Sense of Hearing

Not all living organisms possess the sense of hearing. Only the five-sensed (pañcendriya) organisms - humans and subhumans - possess it. The human ear consists of three main parts. Sound waves directed by the external ear strike the ear-drum (tympanic membrane) in the middle ear and cause it to vibrate. The vibrations are transmitted onward and set up waves of motion in the fluid in the labyrinth of the internal ear. The motion of the fluid excites sensory cells in the labyrinth, which transmit impulses to the acoustic centre in the brain. The receptors for hearing are more than 20,000 stiff hair-like fibres which can vibrate like the reeds of a harmonica and relay their impulses along the auditory nerve to the brain. Like the messages of the other sense-organs, the sounds detected by the ears are not meaningful until they are analysed and interpreted in the brain.

The basic arrangement of the auditory pathways in our brain is similar to that for vision. Information from the ear is relayed through various lower stages and then to the cortex through a set of cells in the thalamus. The primary cortical receiving area for hearing lies along the upper border of the temporal lobe. The receptors for sound in the cochlear organ of the ear are tuned to respond to particular frequencies. Sounds that do not seem significant enough to merit attention by the higher brain are filtered out. For instance, a parent may wake instantly to the sound of a baby's crying, yet sleep soundly through the rumble of passing trucks. Relevant features include constant frequency, amplitude-modulation, frequency-modulation and noise-bursts. All human beings are born with some set of feature detectors suitable for language, and the capacity to recognize the variants of speech-sounds is altered in the course of learning a language. The English vowel sounds that are so easily distinguished in 'red' and 'raid' cannot be separated by many Indian speakers. Conversely, Indian languages employ six different 't' sounds, which are mostly indistinguishable to English speakers.
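The claim that receptors along the cochlea are tuned to particular frequencies is often summarized by Greenwood's place-frequency function. The sketch below uses the commonly quoted human parameters; it is an illustrative model, not a measurement:

```python
import math

# Greenwood's place-frequency function maps a position x along the
# cochlea (0 = apex, 1 = base) to the sound frequency that best
# excites the hair cells there. A, a and k are the standard human fit.
def greenwood(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at fractional cochlear position x."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.5, 1.0):
    print(f"position {x:.1f} -> {greenwood(x):7.0f} Hz")
```

The endpoints come out near 20 Hz and 20 kHz, roughly the range of human hearing, which is why this simple exponential map is a convenient summary of cochlear tuning.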

2. Speech

The production of speech is finely organised. The main parts of the mechanism for the production of speech are the larynx (sound-box) and the supralaryngeal apparatus - the pharynx, tongue, palate and lips. The energy for speech comes from the puffs of air expelled from the lungs. The air we breathe out is converted by the valve action of the larynx into the sound-waves of speech. The vibrating vocal cords break the airstream into minute oscillating puffs with a regular pitch, which flow through the vocal tract of nose and mouth. Brain-controlled muscles alter the shape of the tract walls, cause the soft palate to lift, shutting off air to the nose, prompt the tongue to change shape and position, and make the lips purse or spread, channelling the air to crash against, roar over or hiss between the teeth. The frequency (pitch) of the sound is varied by the speaker (between about 60 and 350 Hz) by changing the pressure of the air and varying the tension on the vocal cords by muscles in the larynx. Variations in this frequency are an important signal in many languages. Nearly all the detailed information, however, is encoded by the supralaryngeal tract. This acts as an acoustic filter, and by varying its shape the speaker alters the sound produced, just as the length and openings of an organ pipe do. The actual contractions that alter the shape of the vocal tract are due to about 15 muscles. Speaking involves selecting those movements of the muscles that produce the conventional sounds of a language in certain conventional patterns of words and phrases.
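The source-filter picture in this paragraph - the larynx supplying pitched puffs of air, the supralaryngeal tract acting as an acoustic filter - can be sketched in a few lines. The pitch, resonance frequencies and filter form below are illustrative choices (the resonances roughly match the first two formants of an 'ah' vowel), not measured values:

```python
import math

def glottal_pulses(f0, fs, n):
    """Impulse train at pitch f0 - a crude stand-in for vocal-cord puffs."""
    period = int(fs / f0)
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

def resonator(signal, freq, fs, r=0.98):
    """Two-pole resonant filter: one 'formant' of the vocal tract."""
    theta = 2 * math.pi * freq / fs
    b1, b2 = 2 * r * math.cos(theta), -r * r
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = x + b1 * y1 + b2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

fs = 8000
source = glottal_pulses(120, fs, fs)          # one second of ~120 Hz voicing
vowel = resonator(resonator(source, 730, fs), 1090, fs)  # rough /a/ formants
```

The source alone has equal energy at every harmonic of the pitch; passing it through the two resonators concentrates energy near 730 and 1090 Hz, which is exactly the sense in which "varying the shape" of the tract, and hence the filter, changes the vowel while the pitch stays the same.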

3. Brain and Speech

No one can doubt that the brain is involved in the act of speaking - that speech is a result of the activities of the brain. Speech is essentially the product of a person, and the concept of a person must include his brain, for the continuity of the personality depends upon the store of programs and records in the brain. If the dictionary is not in the brain, where is it? And with the dictionary must surely also be the grammar, and the system that uses both to produce meaningful speech. One of the areas specialized for speech, called Wernicke's area, is in the left temporal lobe; it enables us to comprehend speech. Another area, known as Broca's area, in the folds of the frontal lobe, lies next to the area that coordinates the movements of the tongue, lips, palate and vocal cords; it controls the flow of words from brain to mouth. Every minute, two hundred syllables are exquisitely synchronised - "the most brilliant technical achievement of the human brain".

When words are heard, the sounds pass in neurological codes from the auditory area of the cortex to the adjacent Wernicke's area, where they are unscrambled into understandable patterns of words. If the words are to be repeated, they shift forward to Broca's area. Once there, they rouse the nearby motor area, which controls the movement of the speech-muscles. A third area, the angular gyrus, bridges the gap between the speech we hear and the language we read and write. It transforms speech-sounds into the visual messages needed to write what we hear, and converts visual messages from reading into the sound-patterns required to recite poetry.

The process of understanding and decoding speech largely depends upon a set of anticipations and expectancies. The analysis of the process of speech-decoding has allowed the production of blueprints for machines that could recognize speech. Recognizing speech, like seeing and other perceptual acts, is an active process of reconstruction, not a mere passive reception. There may be some expectancies that are common to all mankind, especially if we include gesture as a part of speech: we all recognize the meaning of loud aggressive speech or the soft words and smiles of love. However, most of the decoding of speech depends on a store of a priori knowledge about the language. Every speaker or listener carries in his cortex a vast store of information about any language he uses. This includes the complete inventory of phonemes and words, and the rules for forming syllables from phonemes and sentences from words.
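The role of expectancy in decoding can be caricatured in a toy model: acoustic evidence for each candidate word is weighted by the listener's stored prior for that word, and the best-weighted candidate wins. The lexicon and all the numbers below are invented purely for illustration:

```python
# Hypothetical lexicon with prior probabilities - the listener's stored
# expectancies about which word is likely in this context.
lexicon = {"red": 0.6, "raid": 0.4}

def decode(acoustic_scores):
    """Combine acoustic evidence with stored expectations (Bayes-style)."""
    posterior = {w: acoustic_scores.get(w, 0.0) * p for w, p in lexicon.items()}
    return max(posterior, key=posterior.get)

# An ambiguous vowel: the acoustics slightly favour 'raid', but the
# expectancy tips the decision the other way.
print(decode({"red": 0.45, "raid": 0.55}))  # -> red (0.27 vs 0.22)
```

This is the sense in which recognition is "an active process of reconstruction": the same acoustic input can be heard as different words depending on what the listener expects.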

4. Language - Definition and Structure

We might define language as "a species-specific system of intentional communication between individuals". It involves the encoding of some desired message by selecting appropriate items from a mutually known set of signs, the transmission of these - by sound, gesture or scent - and their decoding by the recipient, as evidenced by some response. The transmitter intends either to produce some action by the listener or to have some effect upon him by the provision of information. To do this, he uses the equipment with which he is provided, whether he is a baby crying or a professor lecturing. Even when a person is 'thinking to himself', he still has at least some trace of intention, either to solve a problem or to fulfil some desire in his day-dreaming.

This definition of language is broad enough to cover all species. Human language differs from all other systems of communication in that it allows the recombination of symbols to provide for the effective transmission of a range of messages so large that many call it infinite: we can talk about (almost) anything. All our means of communication, from crying onwards, probably follow certain rules of structure. By these principles and rules, the brain selects one sound and rejects another for transmission, or recognizes the intended meaning when it hears them.

Why should the organism recognize certain objects or communicate about them? The answer is: we recognize and speak about those situations that are relevant to ourselves. The brain operations that do this must be those that compute the appropriate relationships. The clue to the operations may be that the information from the senses is laid out on the surface of the brain as a series of maps. Brain activity is a process with an aim. The relations that the brain computes, and the rules by which it does so, are likely to have a large inherited component, and there is strong evidence that human beings are genetically programmed for speech.

Where do sentences come from? Neuroscientists believe vocabulary is stored in many parts of the brain, each connected to the language centre, because wherever there is brain-damage there is usually a naming disorder. When the connection between Broca's and Wernicke's areas is damaged, the patient may understand other people and produce meaningful thoughts, but the thoughts are expressed in meaningless language. If the angular gyrus is damaged, a person may be able to repeat the words he hears but not those he reads.

Neuroscientists think that children under the age of ten, or until puberty, have the capacity to develop language in both hemispheres of the brain. If the language-centres in the left hemisphere are injured, the right hemisphere takes over, compensating for the loss.

5. Language Universals

Psycholinguists theorize that very deep and restrictive principles that determine the nature of human language are rooted in the human mind. These principles account for the creative aspect of language, enabling human beings to compose new sentences continually instead of repeating a fixed number of phrases. The human brain is genetically programmed for language development; learning a language means that maturing parts of the brain enable children to recognize basic regularities in the speech they hear around them. These regularities are language universals. But expecting a child to learn a language without the experience of talking to others is like trying to start a car without switching on the ignition.

The most important universal feature of all is the creativity or productivity of language. The fact that we can construct and understand an indefinitely large number of messages is the basis of the freedom of the individual to be different from others. This freedom is in turn the basis of the great adaptability of humans and of their cultures.

6. The Origin of Language

If there are universal features in human language, it seems likely that it arose once only, within a single population, or at least that one system has outlived all the others. Some people suggest that human language first became possible as a result of adopting the upright posture, perhaps as much as 10 million years ago.

Since it has now been shown that part of the basis of human speech is inherited in the DNA, it must have evolved by gradual natural selection.
