Seeing Voices
If SignFont, or some other form of written Sign, were adopted by the deaf, it might lead them to a written literature of their own, and serve to deepen their sense of community and culture. This prospect, interestingly, was perceived by Alexander Graham Bell: “Another method of consolidating the deaf and dumb into a distinct class would be to reduce the sign-language to writing, so that the deaf-mutes would have a common literature distinct from the rest of the world.” But this was seen by him in an entirely negative light, as predisposing towards “the formation of a deaf variety of the human race.”
33. This was equally the case with Bernard Tervoort’s remarkable thesis on Dutch Sign Language, published in Amsterdam in 1952. This important early work was totally ignored at the time.
34. Besides the immense number of grammatical modulations that signs can undergo (there are literally hundreds of these, for example, for the root sign LOOK), the actual vocabulary of Sign is far larger and richer than any existing dictionary represents. Sign languages are evolving almost explosively at this time (this is especially true of the newest ones, like Israeli Sign). There is a continual proliferation of neologisms: some of these represent borrowings from English (or whatever the surrounding spoken language may be), some are mimetic depictions, some are ad hoc inventions, but most are created by the remarkable range of formal devices available within the language itself. These have been especially studied by Ursula Bellugi and Don Newkirk.
35. Visual images are not mechanical, or passive, like photographic ones; they are, rather, analytical constructions. Elementary feature-detectors—for vertical lines, horizontal lines, angles, etc.—were first described by David Hubel and Torsten Wiesel. And at a higher level the image must be composed and structured with the aid of what Richard Gregory has called a “visual grammar” (see “The Grammar of Vision,” in Gregory, 1974).
A question which has been raised by Bellugi and others is whether sign language has the same generative grammar as speech, the same deep neural and grammatical basis. Since the “deep structure” of language, as envisioned by Chomsky, has an essentially abstract or mathematical nature, it could, in principle, be mapped equally well onto the surface structure of a sign language, a touch language, a smell language, whatever. The modality of the language, as such, would not (necessarily) present any problem.
A more fundamental question, raised above all by Edelman, is whether any innate or rule-bound basis is needed for language development at all; whether the brain/mind might not proceed in a quite different fashion, creating the linguistic categories and relationships it needs, as (in Edelman’s terms) it creates perceptual categories, without prior knowledge, in an “unlabelled” world (Edelman, 1990).
36. The question of whether some nonhuman species have language, languages that make “infinite use of finite means,” remains a confused and contentious one. As a neurologist I have been intrigued by descriptions of aphasia in monkeys, which suggest that the neural primordia of language, at least, evolved before man (see Heffner and Heffner, 1988).
Chimpanzees are unable to speak (their vocal apparatus is geared only for relatively crude sounds), but are able to make signs quite well, to acquire a vocabulary of several hundred signs. In the case of pygmy chimpanzees, indeed, such signs (or “symbols”) may be used spontaneously and passed on to other chimps. There is no doubt that these primates can acquire and use and transmit a gestural code. They may also make simple metaphors or creative couplings of signs (this has been observed in many chimps, including Washoe and Nim Chimpsky). But does this, properly speaking, constitute a language? In terms of syntactic competence and generative grammar, it seems doubtful if chimpanzees can be said to have genuine language capacity. (Although Savage-Rumbaugh feels there may be a proto-grammar; see Savage-Rumbaugh, 1986.)
37. (See Chomsky, 1968, p. 26.) The intellectual history of such a generative, or “philosophical” grammar, and of the concept of “innate ideas” in general, has been fascinatingly discussed by Chomsky—one feels that he needed to discover his own precursors in order to discover himself, his own place in an intellectual tradition; see especially his Cartesian Linguistics and his Beckman lectures, published as Language and Mind. The great era of “philosophical grammar” was in the seventeenth century, and its high point was the Port-Royal Grammar in 1660. Our present linguistics, Chomsky feels, might have arisen then, but its development was aborted by the rise of a shallow empiricism. If the notion of an underlying native propensity is extended from language to thought in general, then the doctrine of “innate ideas” (that is, structures of mind which, when activated, organize the form of experience) may be traced back to Plato, thence to Leibniz and Kant. Some biologists have felt this concept of innateness essential to explain the forms of organic life, most notably the ethologist Konrad Lorenz, whom Chomsky quotes in this context (Chomsky, 1968, p. 81):
Adaptation of the a priori to the real world has no more originated from “experience” than adaptation of the fin of the fish to the properties of water. Just as the form of the fin is given a priori, prior to any individual negotiation of the young fish with the water, and just as it is this form that makes possible this negotiation, so it is also the case with our forms of perception and categories in their relationship to our negotiation with the real external world through experience.
Others see experience not merely as kindling but as creating the forms of perception and categories.
38. Chomsky, 1968, p. 76.
39. The notion of a “critical age” for acquiring language was introduced by Lenneberg: the hypothesis that if language were not acquired by puberty it would never be acquired thereafter, at least not with real, native-like proficiency. Questions of critical age hardly arise with the hearing population, for virtually all the hearing (even the retarded) acquire competent speech in the first five years of life. It is a major problem for the deaf, who may be unable to hear, or at least make any sense out of, their parents’ voices, and who may also be denied any exposure to Sign. There is evidence, indeed, that those who learn Sign late (that is, after the age of five) never acquire the effortless fluency and flawless grammar of those who learn it from the start (especially those who acquire it earliest, from their deaf parents).
There may be exceptions to this, but they are exceptions. It may be accepted, in general, that the preschool years are crucial for the acquisition of good language, and that indeed, first exposure to language should come as early as possible—and that those born deaf should go to nursery schools where Sign is taught. It might be said that Massieu, at the age of thirteen years and nine months, was still within this critical age, but clearly Ildefonso was far beyond it. Their very late acquisition of language could be explained simply by an unusual retention of neuronal plasticity; but a more interesting hypothesis is that the gestural systems (or “home signs”) set up by Ildefonso and his brother, or by Massieu and his deaf siblings, could have functioned as a “proto-language,” inaugurating, so to speak, a linguistic competence in the brain, which was only fired to full activity with exposure to genuine sign language many years later. (Itard, the physician-teacher of Victor, the Wild Boy, also postulated a critical period for language acquisition in order to explain his failure to teach Victor speech production and perception.)
40. See Corina, 1989.
41. See Lévy-Bruhl, 1966.
42. Since most research on Sign at present takes place in the United States, most of the findings relate to American Sign Language, although others (Danish, Chinese, Russian, British) are also being investigated. But there is no reason to suppose these are peculiar to ASL—they probably apply to the entire class of visuospatial languages.
43. As one learns Sign, or as the eye becomes attuned to it, it is seen to be fundamentally different in character from gesture, and is no longer to be confused with it for a moment. I found the distinction particularly striking on a recent visit to Italy, for Italian gesture (as everyone knows) is large and exuberant and operatic, whereas Italian Sign is strictly constrained within a conventional signing space, and strictly constrained by all the lexical and grammatical rules of a signed language, and not in the least “Italianate” in quality: the difference between the para-language of gesture and the actual language of Sign is evident here, instantly, to the untutored eye.
44. See Liddell and Johnson, 1989, and Liddell and Johnson, 1986.
45. Stokoe, 1979.
46. Again, Stokoe describes some of this complexity:
When three or four signers are standing in a natural arrangement for sign conversation … the space transforms are by no means 180-degree rotations of the three-dimensional visual world but involve orientations that non-signers seldom if ever understand. When all the transforms of this and other kinds are made between the signer’s visual three-dimensional field and that of each watcher, the signer has transmitted the content of his or her world of thought to the watcher. If all the trajectories of all the sign actions—direction and direction-change of all upper arms, forearm, wrist, hand and finger movement, all the nuances of all the eye and face and head action—could be described, we would have a description of the phenomena into which thought is transformed by a sign language.… These superimpositions of semantics onto the space-time manifold need to be separated out if we are to understand how language and thought and the body interact.
47. “We currently analyze three dimensional movement using a modified Op-Eye system, a monitoring apparatus permitting rapid high-resolution digitalization of hand and arm movements.… Optoelectronic cameras track the positions of light-emitting diodes attached to the hands and arms and provide a digital output directly to a computer, which calculates three-dimensional trajectories” (Poizner, Klima, and Bellugi, 1987, p. 27). See fig. 2.
48. Though unconscious, learning language is a prodigious task—but despite the differences in modality, the acquisition of ASL by deaf children bears remarkable similarities to the acquisition of spoken language by a hearing child. Specifically, the acquisition of grammar seems identical, and this occurs relatively suddenly, as a reorganization, a discontinuity in thought and development, as the child moves from gesture to language, from prelinguistic pointing or gesture to a fully grammaticized linguistic system: this occurs at the same age (roughly twenty-one to twenty-four months) and in the same way, whether the child is speaking or signing.
49. It has been shown by Elissa Newport and Ted Supalla (see Rymer, 1988) that late learners of Sign—which means anyone who learns Sign after the age of five—though competent enough, never master its full subtleties and intricacies, are not able to “see” some of its grammatical complexities. It is as if the development of special linguistic-spatial ability, of a special left hemisphere function, is only fully possible in the first years of life. This is also true for speech. It is true for language in general. If Sign is not acquired in the first five years of life, but is acquired later, it never has the fluency and grammatical correctness of native Sign: some essential grammatical aptitude has been lost. Conversely, if a young child is exposed to less-than-perfect Sign (because the parents, for example, only learned Sign late), the child will nonetheless develop grammatically correct Sign—another piece of evidence for an innate grammatical aptitude in childhood.
50. The prescient Hughlings-Jackson wrote a century ago: “No doubt, by disease of some part of the brain the deaf-mute might lose his natural system of signs which are of some speech-value to him,” and thought this would have to affect the left hemisphere.
51. The kinship of speech aphasia and sign aphasia is illustrated in a recent case reported by Damasio et al. in which a Wada test (an injection of sodium amytal into the left carotid artery—to determine whether or not the left hemisphere was dominant) given to a young, hearing Sign interpreter with epilepsy brought about a temporary aphasia of both speech and Sign. Her ability to speak English started to recover after four minutes; the sign aphasia lasted a minute or so longer. Serial PET scans were done throughout the procedure and showed that roughly similar portions of the left hemisphere were involved in speech and signing, although the latter seemed to require larger brain areas, in particular the left parietal lobe, as well.
52. There is considerable evidence that signing may be useful with some autistic children who are unable or unwilling to speak; Sign may allow such children a degree of communication which had seemed unimaginable (Bonvillian and Nelson, 1976). This may be in part, so Rapin feels, because some autistic children may have specific neurological difficulties in the auditory sphere, but much greater intactness in the visual sphere.
Though Sign cannot be of help with the aphasic, it may help the retarded and senile with very limited or eroded capacities for spoken language. This may be due in part to the graphic and iconic expressiveness of Sign, and in part to the relative motor simplicity of its movements, compared with the extreme complexity and vulnerability of the mechanism for speech.
53. There may be other ways of establishing such a formal space, as well as a great enhancement of visual-cognitive function generally. Thus with the spread of personal computers in the past decade, it has become possible to organize and move logical information in (computer) “space,” to make (and rotate, or otherwise transform) the most complex three-dimensional models or figures. This has led to the development of a new sort of expertise—a power of visual imagery (especially imagery of topological transforms) and visual-logical thinking which was, in the precomputer age, distinctly rare. Virtually anyone can become a visual “adept” in this way—at least, anyone under the age of fourteen. It is much more difficult to achieve visual-computational fluency after this age, as it is much more difficult to achieve fluent language. Parents find again and again that their children can become computer whizzes where they cannot—another example, perhaps, of “critical age.” It seems probable that such enhancements of visual-cognitive and visual-logical functions require an early shift to a left hemisphere predominance.
54. Novel—yet potentially universal. For as in Martha’s Vineyard, entire populations, hearing and deaf alike, can become fluent native signers. Thus the capacity—the neuronal apparatus—to acquire spatial language (and all the nonlinguistic spatial capacities that go with this) is clearly present, potentially, in everyone.
There must be countless neuronal potentials that we are born with which can develop or deteriorate according to demand. The development of the nervous system, and especially of the cerebral cortex, is, within its genetic constraints, guided and molded, sculpted, by early experience. Thus the capacity to discriminate phonemes has a huge range in the first six months of life, but then becomes restricted by the actual speech to which infants are exposed, so that Japanese infants become unable, for example, to discriminate anymore between an “l” and an “r,” and American infants, similarly, between various Japanese phonemes. Nor are we short on neurons; there is no danger that developing one potential will “use up” a limited supply of neurons and prevent the development of other potentials. There is every reason to have the richest possible environment, linguistically as well as in every other way, during the critical early period of brain plasticity and growth.
55. This linguistic use of the face is peculiar to signers, is quite different from the normal, affective use of the face, and, indeed, has a different neural basis. This has been shown very recently in experimental studies by David Corina. Pictures of faces, with expressions that could be interpreted as “affective” or “linguistic,” were presented, tachistoscopically, to the right and left visual fields of deaf and hearing subjects. Hearing subjects, it was apparent, processed these in the right hemisphere, but deaf subjects showed predominance of the left hemisphere in “decoding” linguistic facial expressions.
The few cases studied of the effects of brain lesions in deaf signers upon facial recognition show a similar dissociation between the perception of affective and linguistic facial expressions. Thus, with left hemisphere lesions in signing subjects, the linguistic “propositions” of the face may become unintelligible (as part and parcel of an overall Sign aphasia), but its expressiveness, in the ordinary sense, is fully preserved. With right hemisphere lesions, conversely, there may be an inability to recognize faces or their ordinary expressions (a so-called prosopagnosia), even though they are still perceived as “propositionizing,” fluently, in Sign.
This dissociation between affective and linguistic facial expressions may also extend to their production: thus one patient with a right hemisphere lesion studied by Bellugi’s group was able to produce linguistic facial expressions where required, but lacked ordinary affective facial expressions.
56. The ancient insight that the loss of hearing may cause a “compensation” of sight cannot be ascribed simply to the use of Sign. All deaf—even the postlingually deaf, who stay in the world of speech—achieve some heightening of visual sensibility, and a move toward a more visual orientation in the world, as David Wright describes:
I do not notice more but notice differently. What I do notice, and notice acutely because I have to, because for me it makes up almost the whole of the data necessary for the interpretation and diagnosis of events, is movement where objects are concerned; and in the case of animals and human beings, stance, expression, walk, and gesture.… For example, as somebody waiting impatiently for a friend to finish a telephone conversation with another knows when it is about to end by the words said and the intonation of the voice, so does a deaf man—like a person queuing outside a glass-panelled call-box—judge the moment when the good-byes are being said or the intention formed to replace the receiver. He notices a shift of the hand cradling the instrument, a change of stance, the head drawing a fraction of a millimetre from the earphone, a slight shuffling of the feet, and that alteration of expression which signals a decision taken. Cut off from auditory clues he learns to read the faintest visual evidence.