The Language Instinct: How the Mind Creates Language
Child: Mamma isn’t boy, he a girl.
Mother: That’s right.
Child: And Walt Disney comes on Tuesday.
Mother: No, he does not.
Brown also checked whether children might learn about the state of their grammars by noticing whether they are being understood. He looked at children’s well-formed and badly formed questions and whether their parents seemed to have answered them appropriately (that is, as if they understood them) or with non sequiturs. Again, there was no correlation; What you can do? may not be English, but it is perfectly understandable.
Indeed, when fussy parents or meddling experimenters do provide children with feedback, the children tune it out. The psycholinguist Martin Braine once tried for several weeks to stamp out one of his daughter’s grammatical errors. Here is the result:
Child: Want other one spoon, Daddy.
Father: You mean, you want THE OTHER SPOON.
Child: Yes, I want other one spoon, please, Daddy.
Father: Can you say “the other spoon”?
Child: Other…one…spoon.
Father: Say…“other.”
Child: Other.
Father: “Spoon.”
Child: Spoon.
Father: “Other…Spoon.”
Child: Other…spoon. Now give me other one spoon?
Braine wrote, “Further tuition is ruled out by her protest, vigorously supported by my wife.”
As far as grammar learning goes, the child must be a naturalist, passively observing the speech of others, rather than an experimentalist, manipulating stimuli and recording the results. The implications are profound. Languages are infinite, childhoods finite. To become speakers, children cannot just memorize; they must leap into the linguistic unknown and generalize to an infinite world of as-yet-unspoken sentences. But there are untold numbers of seductive false leaps:
mind → minded; but not find → finded
The ice melted → He melted the ice; but not David died → He died David
She seems to be asleep → She seems asleep; but not She seems to be sleeping → She seems sleeping
Sheila saw Mary with her best friend’s husband → Who did Sheila see Mary with? but not Sheila saw Mary and her best friend’s husband → Who did Sheila see Mary and?
If children could count on being corrected for making such errors, they could take their chances. But in a world of grammatically oblivious parents, they must be more cautious—if they ever went too far and produced ungrammatical sentences together with the grammatical ones, the world would never tell them they were wrong. They would speak ungrammatically all their lives—though a better way of putting it is that that part of the language, the prohibition against the sentence types that the child was using, would not last beyond a single generation. Thus any no-feedback situation presents a difficult challenge to the design of a learning system, and it is of considerable interest to mathematicians, psychologists, and engineers studying learning in general.
How is the child designed to cope with the problem? A good start would be to build in the basic organization of grammar, so the child would try out only the kinds of generalizations that are possible in the world’s languages. Dead ends like Who did Sheila see Mary and?, not grammatical in any language, should not even occur to a child, and indeed, no child (or adult) we know of has ever tried it. But this is not enough, because the child also has to figure out how far to leap in the particular language being acquired, and languages vary: some allow many word orders, some only a few; some allow the causative rule to apply freely, others to only a few kinds of verb. Therefore a well-designed child, when faced with several choices in how far to generalize, should, in general, be conservative: start with the smallest hypothesis about the language that is consistent with what parents say, then expand it outward as the evidence requires. Studies of children’s language show that by and large that is how they work. For example, children learning English never leap to the conclusion that it is a free-word-order language and speak in all orders like give doggie paper; give paper doggie; paper doggie give; doggie paper give, and so on. Logically speaking, though, that would be consistent with what they hear if they were willing to entertain the possibility that their parents were just taciturn speakers of Korean, Russian, or Swedish, where several orders are possible. And children learning Korean, Russian, and Swedish do sometimes err on the side of caution and use only one of the orders allowed in the language, pending further evidence.
Furthermore, in cases where children do make errors and recover, their grammars must have some internal checks and balances, so that hearing one kind of sentence can catapult another out of the grammar. For example, if the word-building system is organized so that an irregular form listed in the mental dictionary blocks the application of the corresponding rule, hearing held enough times will eventually drive out holded.
These general conclusions about language learning are interesting, but we would understand them better if we could trace out what actually happens from moment to moment in children’s minds as sentences come in and they try to distill rules from them. Viewed up close, the problem of learning rules is even harder than it appears from a distance. Imagine a hypothetical child trying to extract patterns from the following sentences, without any innate guidance as to how human grammar works:
Jane eats chicken.
Jane eats fish.
Jane likes fish.
At first glance, patterns jump out. Sentences, the child might conclude, consist of three words: the first must be Jane, the second either eats or likes, the third chicken or fish. With these micro-rules, the child can already generalize beyond the input, to the brand-new sentence Jane likes chicken. So far, so good. But let’s say the next two sentences are
Jane eats slowly.
Jane might fish.
The word might gets added to the list of words that can appear in second position, and the word slowly is added to the list that can appear in third position. But look at the generalizations this would allow:
Jane might slowly.
Jane likes slowly.
Jane might chicken.
Bad start. The same ambiguity that bedevils language parsing in the adult bedevils language acquisition in the child. The moral is that the child must couch rules in grammatical categories like noun, verb, and auxiliary, not in actual words. That way, fish as a noun and fish as a verb would be kept separate, and the child would not adulterate the noun rule with instances of verbs and vice versa.
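The overgeneralization can be made concrete with a minimal sketch (a hypothetical illustration, not a model from the book): a learner that induces word-position "micro-rules" by collecting the words seen in each slot and then generating every slot-by-slot combination.

```python
# Hypothetical sketch of a child inducing word-position "micro-rules":
# record which words appear in each position, then license every
# combination of one word per position.
from itertools import product

heard = [
    "Jane eats chicken",
    "Jane eats fish",
    "Jane likes fish",
    "Jane eats slowly",
    "Jane might fish",
]

# One set of candidate words per position.
slots = [set(), set(), set()]
for sentence in heard:
    for position, word in enumerate(sentence.split()):
        slots[position].add(word)

generated = {" ".join(combo) for combo in product(*slots)}

# The word-based slots overgeneralize: alongside the sentences heard,
# the rules license strings no English speaker would produce.
print("Jane might slowly" in generated)   # True
print("Jane might chicken" in generated)  # True
```

The five input sentences yield nine licensed strings, several of them ungrammatical, which is exactly the trap the text describes: word-based slots cannot tell the noun fish from the verb fish.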
How might a child assign words into categories like noun and verb? Clearly, their meanings help. In all languages, words for objects and people are nouns or noun phrases, and words for actions and changes of state are verbs. (As we saw in Chapter 4, the converse is not true—many nouns, like destruction, do not refer to objects and people, and many verbs, like interest, do not refer to actions or changes of state.) Similarly, words for kinds of paths and places are prepositions, and words for qualities tend to be adjectives. Recall that children’s first words refer to objects, actions, directions, and qualities. This is convenient. If children are willing to guess that words for objects are nouns, words for actions are verbs, and so on, they have a leg up on the rule-learning problem.
But words are not enough; they must be ordered. Imagine the child trying to figure out what kind of word can occur before the verb bother. It can’t be done:
That dog bothers me. [dog, a noun]
What she wears bothers me. [wears, a verb]
Music that is too loud bothers me. [loud, an adjective]
Cheering too loudly bothers me. [loudly, an adverb]
The guy she hangs out with bothers me. [with, a preposition]
The problem is obvious. There is a certain something that must come before the verb bother, but that something is not a kind of word; it is a kind of phrase, a noun phrase. A noun phrase always contains a head noun, but that noun can be followed by all kinds of stuff. So it is hopeless to try to learn a language by analyzing sentences word by word. The child must look for phrases.
What does it mean to look for phrases? A phrase is a group of words. For a sentence of four words, there are eight possible ways to group the words into phrases: {That} {dog bothers me}; {That dog} {bothers me}; {That} {dog bothers} {me}, and so on. For a sentence of five words, there are sixteen possible ways; for a sentence of six words, thirty-two ways; for a sentence of n words, 2^(n-1)—a big number for long sentences. Most of these partitionings would give the child groups of words that would be useless in constructing new sentences, such as wears bothers and cheering too, but the child, unable to rely on parental feedback, has no way of knowing this. Once again, children cannot attack the language-learning task like a logician free of preconceptions; they need guidance.
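The count 2^(n-1) follows because each of the n-1 gaps between adjacent words either is or is not a phrase boundary. A short sketch (hypothetical illustration, not from the book) enumerates the groupings directly:

```python
# Enumerate every way to split an n-word sentence into contiguous
# phrases. Each of the n-1 gaps between words is independently a
# boundary or not, so there are 2**(n-1) groupings.
def phrase_groupings(words):
    n = len(words)
    groupings = []
    for mask in range(2 ** (n - 1)):
        phrase, grouping = [words[0]], []
        for i in range(1, n):
            if mask >> (i - 1) & 1:   # gap before word i is a boundary
                grouping.append(phrase)
                phrase = []
            phrase.append(words[i])
        grouping.append(phrase)
        groupings.append(grouping)
    return groupings

print(len(phrase_groupings("That dog bothers me".split())))   # 8
print(len(phrase_groupings("a b c d e f".split())))           # 32
```

Most of the eight groupings of That dog bothers me cut across the real phrases, which is the point: without guidance, the child cannot tell {That dog} {bothers me} from {That} {dog bothers} {me}.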
This guidance could come from two sources. First, the child could assume that parents’ speech respects the basic design of human phrase structure: phrases contain heads; role-players are grouped with heads in the mini-phrases called X-bars; X-bars are grouped with their modifiers inside X-phrases (noun phrase, verb phrase, and so on); X-phrases can have subjects. To put it crudely, the X-bar theory of phrase structure could be innate. Second, since the meanings of parents’ sentences are usually guessable in context, the child could use the meanings to help set up the right phrase structure. Imagine that a parent says The big dog ate ice cream. If the child has previously learned the individual words big, dog, ate, and ice cream, he or she can guess their categories and grow the first twigs of a tree:
In turn, nouns and verbs must belong to noun phrases and verb phrases, so the child can posit one for each of these words. And if there is a big dog around, the child can guess that the and big modify dog, and connect them properly inside the noun phrase:
If the child knows that the dog just ate ice cream, he or she can also guess that ice cream and dog are role-players for the verb eat. Dog is a special kind of role-player, because it is the causal agent of the action and the topic of the sentence; hence it is likely to be the subject of the sentence and therefore attaches to the “S.” A tree for the sentence has been completed:
The rules and dictionary entries can be peeled off the tree:
S → NP VP
NP → (det) (A) N
VP → V (NP)
dog: N
ice cream: N
ate: V; eater = subject, thing eaten = object
the: det
big: A
This hypothetical time-lapse photography of the mind of a child at work shows how a child, if suitably equipped, could learn three rules and five words from a single sentence in context.
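Those three rules and five dictionary entries already form a tiny generative grammar. A sketch (my illustration, under the assumption that optional constituents may be freely included or omitted) expands the rules exhaustively and shows that sentences the child never heard fall out automatically:

```python
# The rules peeled off the tree, treated as a tiny context-free grammar:
#   S -> NP VP;  NP -> (det) (A) N;  VP -> V (NP)
# Optional constituents are spelled out as alternative right-hand sides.
from itertools import product

rules = {
    "S":   [["NP", "VP"]],
    "NP":  [["N"], ["det", "N"], ["det", "A", "N"]],  # (det) (A) N
    "VP":  [["V"], ["V", "NP"]],                      # V (NP)
    "det": [["the"]],
    "A":   [["big"]],
    "N":   [["dog"], ["ice cream"]],
    "V":   [["ate"]],
}

def expand(symbol):
    """Return every string the symbol can expand to."""
    if symbol not in rules:
        return [symbol]
    expansions = []
    for rhs in rules[symbol]:
        for combo in product(*(expand(s) for s in rhs)):
            expansions.append(" ".join(combo))
    return expansions

sentences = expand("S")
print("the big dog ate ice cream" in sentences)   # True: the input
print("dog ate the big ice cream" in sentences)   # True: never heard
```

From one parsed sentence the toy grammar licenses forty-two sentences, most of them novel, which is the generalization payoff the next paragraph quantifies.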
The use of part-of-speech categories, X-bar phrase structure, and meaning guessed from context is amazingly powerful, but amazing power is what a real-life child needs to learn grammar so quickly, especially without parental feedback. There are many benefits to using a small number of innate categories like N and V to organize incoming speech. By calling both the subject and object phrases “NP,” rather than, say, Phrase #1 and Phrase #2, the child automatically can apply hard-won knowledge about nouns in subject position to nouns in object position, and vice versa. For example, our model child can already generalize and use dog as an object without having heard an adult do so, and the child tacitly knows that adjectives precede nouns not just in subjects but in objects, again without direct evidence. The child knows that if more than one dog is dogs in subject position, more than one dog is dogs in object position. I conservatively estimate that English allows about eight possible phrasemates of a head noun inside a noun phrase, such as John’s dog; dogs in the park; big dogs; dogs that I like, and so on. In turn, there are about eight places in a sentence where the whole noun phrase can go, such as Dog bites man; Man bites dog; A dog’s life; Give the boy a dog; Talk to the dog; and so on. There are three ways to inflect a noun: dog, dogs, dog’s. And a typical child by the time he or she is in high school has learned something like twenty thousand nouns. If children had to learn all the combinations separately, they would need to listen to about 140 million different sentences. At a rate of a sentence every ten seconds, ten hours a day, it would take over a century. But by unconsciously labeling all nouns as “N” and all noun phrases as “NP,” the child has only to hear about twenty-five different kinds of noun phrase and learn the nouns one by one, and the millions of possible combinations become available automatically.
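The "over a century" figure can be checked with the arithmetic the passage states (the sentence count is taken from the text, not recomputed):

```python
# Arithmetic behind the estimate in the text: hearing 140 million
# distinct sentences at one per ten seconds, ten hours a day.
sentences_needed = 140_000_000            # figure given in the text
per_day = 10 * 60 * 60 // 10              # 3600 sentences per day
years = sentences_needed / per_day / 365
print(round(years))                       # about 107 years
```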
Indeed, if children are blinkered to look for only a small number of phrase types, they automatically gain the ability to produce an infinite number of sentences, one of the quintessential properties of human grammar. Take the phrase the tree in the park. If the child mentally labels the park as an NP and also labels the tree in the park as an NP, the resulting rules generate an NP inside a PP inside an NP—a loop that can be iterated indefinitely, as in the tree near the ledge by the lake in the park in the city in the east of the state…In contrast, a child who was free to label in the park as one kind of phrase and the tree in the park as another kind would be deprived of the insight that the phrase contains an example of itself. The child would be limited to reproducing that phrase structure alone. Mental flexibility confines children; innate constraints set them free.
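The loop can be made explicit with a short sketch (a hypothetical illustration of the NP-inside-PP-inside-NP cycle, not code from the book): because the park and the tree in the park get the same label, the same rule re-enters itself.

```python
# Because "the park" and "the tree in the park" are both NPs, the rules
# NP -> det N (PP) and PP -> P NP form a loop: each NP may contain a PP
# that contains another NP, to any depth.
def noun_phrase(nouns):
    """Nest a list of nouns into one ever-deeper noun phrase."""
    if len(nouns) == 1:
        return f"the {nouns[0]}"
    # NP -> det N PP, where PP -> "in" NP (recursive step)
    return f"the {nouns[0]} in {noun_phrase(nouns[1:])}"

print(noun_phrase(["tree", "park"]))
# the tree in the park
print(noun_phrase(["tree", "park", "city", "east of the state"]))
# the tree in the park in the city in the east of the state
```

A learner that gave the inner and outer phrases different labels would have no recursive rule, and so no way to build the arbitrarily deep phrases the text describes.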
Once a rudimentary but roughly accurate analysis of sentence structure has been set up, the rest of the language can fall into place. Abstract words—nouns that do not refer to objects and people, for example—can be learned by paying attention to where they sit inside a sentence. Since situation in The situation justifies drastic measures occurs inside a phrase in NP position, it must be a noun. If the language allows phrases to be scrambled around the sentence, like Latin or Warlpiri, the child can discover this feature upon coming across a word that cannot be connected to a tree in the expected place without crossing branches. The child, constrained by Universal Grammar, knows what to focus on in decoding case and agreement inflections: a noun’s inflection might depend on whether it is in subject or object position; a verb’s might depend on tense, aspect, and the number, person, and gender of its subject and object. If the hypotheses were not confined to this small set, the task of learning inflections would be intractable—logically speaking, an inflection could depend on whether the third word in the sentence referred to a reddish or bluish object, whether the last word was long or short, whether the sentence was being uttered indoors or outdoors, and billions of other fruitless possibilities that a grammatically unfettered child would have to test for.
We can now return to the puzzle that opened the chapter: Why aren’t babies born talking? We know that part of the answer is that babies have to listen to themselves to learn how to work their articulators, and have to listen to their elders to learn communal phonemes, words, and phrase orders. Some of these acquisitions depend on other ones, forcing development to proceed in a sequence: phonemes before words, words before sentences. But any mental mechanism powerful enough to learn these things could probably do so with a few weeks or months of input. Why does the sequence have to take three years? Could it be any faster?
Perhaps not. Complicated machines take time to assemble, and human infants may be expelled from the womb before their brains are complete. A human, after all, is an animal with a ludicrously large head, and a woman’s pelvis, through which it must pass, can be only so big. If human beings stayed in the womb for the proportion of their life cycle that we would expect based on extrapolation from other primates, they would be born at the age of eighteen months. That is the age at which babies in fact begin to put words together. In one sense, then, babies are born talking!
And we know that babies’ brains do change considerably after birth. Before birth, virtually all the neurons (nerve cells) are formed, and they migrate into their proper locations in the brain. But head size, brain weight, and thickness of the cerebral cortex (gray matter), where the synapses (junctions) subserving mental computation are found, continue to increase rapidly in the year after birth. Long-distance connections (white matter) are not complete until nine months, and they continue to grow their speed-inducing myelin insulation throughout childhood. Synapses continue to develop, peaking in number between nine months and two years (depending on the brain region), at which point the child has fifty percent more synapses than the adult! Metabolic activity in the brain reaches adult levels by nine to ten months, and soon exceeds them, peaking around the age of four. The brain is sculpted not only by adding neural material but by chipping it away. Massive numbers of neurons die in utero, and the dying continues during the first two years before leveling off at age seven. Synapses wither from the age of two through the rest of childhood and into adolescence, when the brain’s metabolic rate falls back to adult levels. Language development, then, could be on a maturational timetable, like teeth. Perhaps linguistic accomplishments like babbling, first words, and grammar require minimum levels of brain size, long-distance connections, and extra synapses, particularly in the language centers of the brain (which we will explore in the next chapter).
So language seems to develop about as quickly as the growing brain can handle it. What’s the rush? Why is language installed so quickly, while the rest of the child’s mental development seems to proceed at a more leisurely pace? In a book on evolutionary theory often considered to be one of the most important since Darwin’s, the biologist George Williams speculates: