Introductory Linguistics Articles (in English)
Chapter 1


The History of Linguistics

by Frederick J. Newmeyer of the University of Washington

Historical Linguistics

The modern field of linguistics dates from the beginning of the 19th century. While ancient India and Greece had a remarkable grammatical tradition, throughout most of history linguistics had been the province of philosophy, rhetoric, and literary analysis in their attempts to figure out how human language works. But in 1786, an amazing discovery was made: There are regular sound correspondences among many of the languages spoken in Europe, India, and Persia. For example, the English 'f' sound often corresponds to a 'p' sound in, among others, Latin and Sanskrit, an important ancient language of India:

ENGLISH LATIN SANSKRIT

father pater pitar

full plenus purnas

for per pari

Scholars realized that these correspondences--found in thousands of words--could not be due to chance or to mutual influence. The only reliable conclusion was that these languages are related to one another because they come from a common ancestor. Much of 19th century linguistics was devoted to working out the nature of this parent language, spoken about 6,000 years ago, as well as the changes by which 'Proto-Indo-European', as we now call it, developed into English, Russian, Hindi, and its other modern descendants.
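To make the idea of a regular correspondence concrete, here is a minimal sketch, purely for illustration, that checks the word-initial consonants in the small cognate table above. The three cognate sets are just the ones listed there; a real comparison would of course draw on many more words and on whole sound systems.

```python
# Minimal sketch: gather the word-initial consonants from the cognate table above.
# A recurring triple across many word sets is evidence of a regular correspondence
# rather than chance resemblance or borrowing.
cognates = [
    ("father", "pater",  "pitar"),
    ("full",   "plenus", "purnas"),
    ("for",    "per",    "pari"),
]

initial_correspondences = {(en[0], la[0], sa[0]) for en, la, sa in cognates}

print(initial_correspondences)            # {('f', 'p', 'p')}
print(len(initial_correspondences) == 1)  # True: English 'f' lines up with 'p'
```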

This program of historical linguistics continues today. Linguists have succeeded in grouping the 5,000 or so languages of the world into a number of language families sharing a common ancestor.

The Study of Language Structure

At the beginning of the 20th century, attention shifted to the fact that not only language change, but language structure as well, is systematic and governed by regular rules and principles. The attention of the world's linguists turned more and more to the study of grammar--in the technical sense of the term, the organization of the sound system of a language and the internal structure of its words and sentences. By the 1920s, the program of 'structural linguistics', inspired in large part by the ideas of the Swiss linguist Ferdinand de Saussure, was developing sophisticated methods of grammatical analysis. This period also saw an intensified scholarly study of languages that had never been written down. It had by then become commonplace, for example, for an American linguist to spend several years working out the intricacies of the grammars of Chippewa, Ojibwa, Apache, Mohawk, or some other indigenous language of North America.

The last half-century has seen a deepening of understanding of these rules and principles and the growth of a widespread conviction that despite their seeming diversity, all the languages of the world are basically cut from the same cloth. As grammatical analysis has become deeper, we have found more fundamental commonalities among the languages of the world. The program initiated by the linguist Noam Chomsky in 1957 sees this fact as a consequence of the human brain being 'prewired' for particular properties of grammar, thereby drastically limiting the number of possible human languages. The claims of this program have been the basis for a great deal of recent linguistic research, and have been one of the most important centers of controversy in the field. Books and journal articles routinely present evidence for or against the idea that central properties of language are innate.

Language Use: Studies of Meaning

There is also a long tradition in the study of what it means to say that a word or sentence 'means' a particular thing and how these meanings are conveyed when we communicate with each other. Two popular ideas about what meanings are go back to the ancient Greeks: One is that meanings are mental representations of some sort; another is that the meaning of an expression is purely a function of how it is used. Both ideas have launched research programs that are active today. They have been joined by a third approach, building on work by philosophers such as Gottlob Frege and Bertrand Russell, which applies formal methods derived from logic and attempts to equate the meaning of an expression with reference and the conditions under which it might be judged to be true or false. Other linguists have been looking at the cognitive principles underlying the organization of meaning, including the basic metaphoric processes that some claim to see at the heart of grammar. And still others have been examining the ways that sentences are tied together to form coherent discourse.

Language Use: The Social Side of Language

In the past 50 years, there has been increasing attention to the social side of language as well as the mental. The subfield of sociolinguistics has come of age in part as a consequence of post-World War II social movements. The national liberation movements active in third world countries after the war posed the question of what would be their official language(s) after independence, a pressing question, since almost all of them are multilingual. This led to scholarly study of the language situation in the countries of the world. In addition, the movements for minority rights in the United States and other Western countries have led to a close examination of social variation that complements earlier work in geographical variation. Scholars have turned the analytical tools of linguistics to the study of nonstandard varieties like African American Vernacular English and Chicano Spanish. And the women's movement has led many linguists to investigate gender differences in speech and whether our language has to perpetuate sexual inequality.

Suggested Readings

Harris, Randy A. 1993. The linguistics wars. Oxford: Oxford University Press.

Lepschy, Giulio C. 1972. A survey of structural linguistics. London: Faber and Faber.

Newmeyer, Frederick J. 1986. The politics of linguistics. Chicago: University of Chicago Press.

Robins, R. H. 1979. A short history of linguistics. London: Longman. 2nd edn.

Chapter 2

An Overview

Geoff Nunberg of Xerox PARC, Palo Alto, CA, and Stanford University

Tom Wasow of Stanford University


An Example of Language Use

Pat: Why did the chicken cross the road?

Chris: I give up.

Pat: To get to the other side.

Most of us heard this joke when we were small children and find nothing remarkable in the ability to engage in such exchanges. But a bit of reflection reveals that even such a mundane use of language involves an amazing combination of abilities.

Think about it: Pat makes some vocal noises, with the effect that Chris entertains thoughts of a scenario involving a fowl and a thoroughfare. This leads to an exchange of utterances, possibly laughter, and the conviction by both parties that Pat has 'told a joke'.


Prerequisites for Language Use

What does it take to make communication through language succeed? Here are just a few of the many things that are necessary for the exchange above:

Pat's first two words 'why did' sound exactly the same as 'wide id'. Breaking the stream of sounds into words requires that Chris pay attention to the wider context and know what makes sense and what doesn't. (A minimal sketch of this segmentation step appears after this list of prerequisites.)

Words like 'chicken' and 'cross' have lots of meanings (consider, for example, one gangster saying to another, 'You won't cross me because you're chicken'). To conjure up the image of a bird and a highway, Chris must identify the right choices for these.

Pat has to know to say 'cross', not 'crossed' or 'crossing' in this context.

The order of words could not be 'Why the chicken did cross the road?' or any of lots of other conceivable orders.

Chris's utterance ('I give up') is entirely conventional, signalling recognition that Pat is posing a riddle, and that Chris is ready to hear the punchline. The recognition that the first sentence was a riddle again depends on its relation to the wider context and cultural knowledge.

The punchline is not a complete sentence; Chris must recognize that it means that the chicken crossed the road in order to get to the other side.

In order to get the joke, Chris must know that answers to such 'why' questions normally involve some longer-term objective.
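The first prerequisite above--carving a continuous stream of sound into words--can be pictured as a dictionary-driven search. The sketch below is a toy illustration, not a model of real speech perception: the 'sound-spellings' wai, did, waid, and id are rough, invented transcriptions chosen only to mirror the 'why did' / 'wide id' pun.

```python
# Toy lexicon mapping rough, invented sound-spellings to English words.
LEXICON = {"wai": "why", "did": "did", "waid": "wide", "id": "id"}

def segmentations(sounds):
    """Return every way to carve the sound string into known words."""
    if not sounds:
        return [[]]
    results = []
    for i in range(1, len(sounds) + 1):
        chunk = sounds[:i]
        if chunk in LEXICON:
            results += [[LEXICON[chunk]] + rest for rest in segmentations(sounds[i:])]
    return results

print(segmentations("waidid"))
# [['why', 'did'], ['wide', 'id']] -- only context tells the hearer which one to keep
```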

The Domain of Linguistics

Linguistics, the study of language, concerns itself with all aspects of how people use language and what they must know in order to do so. As a universal characteristic of the species, language has always held a special fascination for human beings, and the history of linguistics as a systematic field of study goes back almost three thousand years.

Modern linguists concern themselves with many different facets of language, from the physical properties of the sound waves in utterances to the intentions of speakers towards others in conversations and the social contexts in which conversations are embedded. The branches of linguistics are concerned with how languages are structured, how languages are used, and how they change.


Language as a Formal System

Linguistic structure can be studied at many different levels. The sounds of language can be investigated by looking at the physics of the speech stream and by studying the physiology of the vocal tract and auditory system. A more psychological approach is also possible, namely considering what physical properties of the vocal tract or musculature are used to make linguistic distinctions, and how the sounds of languages pattern.

Words, phrases, and sentences have internal structure. Many words are made up of smaller meaningful units, such as stems and suffixes; for example, stem 'happy' + suffix '-ly'. Linguists investigate the different ways such pieces can be put together to form words, a study called morphology. Likewise, words cluster together into phrases, which combine to make sentences, and linguists explore the rules governing such combinations. The scientific study of word structure and sentence structure is what modern linguists mean by the term grammar; this is quite different from the sort of 'normative' grammar instruction, common in primary and secondary schools, that aims to teach 'proper usage', which linguists call prescriptivism. Words and sentences are used to convey meanings.

Linguists study this too, seeking to specify precisely what words mean, how they combine into sentence meanings, and how these combine with contextual information to convey the speaker's thoughts. The first two of these areas of investigation are called semantics, and the third is called pragmatics.
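Returning to the morphology example above ('happy' + '-ly'), the decomposition of a word into a stem and suffixes can be sketched as follows. The suffix list, the stem set, and the single spelling adjustment are toy data, not a real analysis of English.

```python
# Toy morphological analyzer: peel known suffixes off a word until a known stem remains.
SUFFIXES = ["ly", "ness", "able", "er"]
STEMS = {"happy", "quick", "kind", "read"}

def decompose(word):
    """Return (stem, [suffixes]) if the word can be built from a known stem, else None."""
    if word in STEMS:
        return word, []
    for suffix in SUFFIXES:
        if word.endswith(suffix):
            base = word[: -len(suffix)]
            # crude spelling adjustment: 'happi' + '-ly' goes back to the stem 'happy'
            if base.endswith("i") and base[:-1] + "y" in STEMS:
                base = base[:-1] + "y"
            analysis = decompose(base)
            if analysis is not None:
                stem, suffixes = analysis
                return stem, suffixes + [suffix]
    return None

print(decompose("happily"))   # ('happy', ['ly'])
print(decompose("readable"))  # ('read', ['able'])
```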


Language as a Human Phenomenon

Even the most formal and abstract work on linguistic structure is colored by the awareness that language is a uniquely human phenomenon. It is lodged in human brains; it is passed on from one generation to the next; it is intimately bound up with the forms of human thought. Unlike a specialized system like arithmetic, it serves a vast range of communicative needs--from getting your neighbor to keep the weeds down, to reporting simple facts, telling jokes, making declarations of love, or praying to a deity. And of course it functions in the midst of complex societies, not just as a means of communication, but as a marker of social identity--a sign of membership in a social class, ethnic group, or nation. It isn't surprising, then, that linguistic research shares some concerns with just about every one of the human sciences, from psychology and neurology to literary study, anthropology, sociology, and political science.

All languages change. In other words, languages have histories, and a complete understanding of a linguistic structure often involves examining variation and change in the language under investigation. This is extremely difficult in most cases, because the vast majority of languages have had no writing systems until very recently.

Important as historical explanations and evidence are in linguistics, they are not necessary for competent language use--and most speakers don't know anything about them. Hence, most linguistic explanations focus on what speakers must know in order to speak and understand language the way all normal humans do. There are many facets to the study of language and brain. It encompasses both child language acquisition and how adults produce and process language.

One particularly fascinating question is whether our language shapes the way we perceive the world and, if so, how. In particular, can there be thinking without language? Such questions have fascinated people for thousands of years, but only in recent times have researchers been in a position to examine them scientifically and to investigate how languages can reflect or reinforce particular ways of looking at the world and the world-views of particular cultures.

Linguists document the remarkable diversity of means of expression employed in the languages of the world. At the same time, though, researchers have come to understand that many of the features of language are universal, both because there are universal aspects to human experience and because language has a built-in biological basis. This latter subject belongs to the subfield called neurolinguistics, which studies how language is realized in the human brain. The connection can be revealed through experiment or by studying the way brain damage can lead to disruptions of language function in disorders like dyslexia or aphasia. Or it can be revealed in more subtle ways, like the slips of the tongue that people make, which can shed light on the mental circuitry of language in something like the way a computer malfunction can shed light on how it is programmed or how its hardware was designed. It can also be revealed by the changes that can take place in language and by the limitations which make some changes impossible.


Language as a Social Phenomenon

The social life of language begins with the smallest and most informal interactions. Every conversation is a social transaction, governed by rules that determine how sentences are put together into larger discourses--stories, jokes, or whatever--and how participants take turns speaking and let each other know that they are attending to what is being said. The organization of these interactions is the subject of the subfield called discourse analysis.

Another, related, area of study concerns the literary uses of language, which involve the particular rules that shape poetic structure or the organization of forms like the sonnet or the novel, and which often make special use of devices like metaphor--though to be sure, linguists have discovered too that metaphor and figurative language are essential elements of everyday forms of speech.

At a larger level, the field of sociolinguistics is also concerned with the way the divisions of societies into social classes and ethnic, religious, and racial groups are often mirrored by linguistic differences. Of particular interest here, too, is the way language is used differently by men and women.

In most parts of the world, communities use more than one language, and the phenomenon of bilingualism or multilingualism has a special interest for linguists. Multilingualism raises particular psychological questions: How do two or more languages coexist within an individual mind? How do bilingual individuals decide when to switch from one language to another? It also raises questions at the level of the community, where the question of which language to use is determined by tacit understandings, and sometimes by official rules and regulations that may invoke difficult questions about the relation of language to nationality. In many nations, including the US, there are currently important debates about establishing an official language.

Multilingual communities are interesting to linguists for another reason: Languages that come into contact can influence each other in various ways, sometimes converging in grammar or other features. Under certain social conditions, a mix of languages can give rise to 'new' languages called pidgins and creoles, which have a particular interest for linguists because of the way they shed light on language structure and function. Often, though, the result when languages come into contact is that one becomes dominant at the expense of the other, especially when the contact pits a widely used language of a powerful community against a local or minority language. Modern communications have accelerated this process, to the point where the majority of the languages currently used in the world are endangered, and may disappear within a few generations--a situation that causes linguists concerns that go beyond the purely academic.


Applications of Linguistics

Linguistics can have applications wherever language itself becomes a matter of practical concern. Strictly speaking, then, the domain of applied linguistics is not a single field or subfield, but can range from research on multilingualism and the teaching and learning of foreign languages to studies of neurolinguistic disorders like aphasia and of various speech and hearing defects. It includes work in the area of language planning, like the efforts to devise writing systems for languages in the post-colonial world, and the efforts to standardize terminologies for various technical domains, or to revitalize endangered languages.

Examples of the applications of linguistics can be multiplied indefinitely. The techniques of discourse analysis have been applied to the problem of avoiding air accidents due to miscommunication and to the problems of communication between members of different ethnic groups. And linguists are increasingly called on in legal proceedings that turn on questions of precise interpretation, a development that has given rise to a new field of study of language and law.

Probably the oldest forms of applied linguistics are the preparation of dictionaries and the field of interpretation and translation, all of which have been greatly influenced by the advent of the computer. The applications of computers to language have not been limited to these areas, though; they extend to the development of interfaces that enable people to interact with computers using ordinary language, of systems capable of understanding speech and writing, and of techniques that allow people to retrieve information more effectively from text databases or from the Web. Not surprisingly, then, an increasing number of linguists are working in high-tech industries.

Chapter 3

Computers and Language

by Jerry R. Hobbs of SRI International, Menlo Park, CA

Perspectives in Computational Linguistics

Computational linguists study natural languages, such as English and Japanese, rather than computer languages, such as Fortran, Snobol, C++, or Java. The field of computational linguistics has two aims:

The technological. To enable computers to be used as aids in analyzing and processing natural language.

The psychological. To understand, by analogy with computers, more about how people process natural language.

From the technological perspective, there are, broadly speaking, three uses for natural language in computer applications:

Natural language interfaces to software. For example, demonstration systems have been built that let a user with a microphone ask for information about commercial airline flights--a kind of automated travel agent.

Document retrieval and information extraction from written text. For example, a computer system could scan newspaper articles or some other class of texts, looking for information about events of a particular type and enter into a database who did what to whom, and when and where.

Machine translation. Computer systems today can produce rough translations of texts from one language, say, Japanese, to another language, such as English.

Computational linguists adopting the psychological perspective hypothesize that at some abstract level, the brain is a kind of biological computer, and that an adequate answer to how people understand and generate language must be in terms formal and precise enough to be modeled by a computer.

Problems in Computational Linguistics

From both perspectives, a computational linguist will try to develop a set of rules and procedures, e.g. to recognize the syntactic structure of sentences or to resolve the references of pronouns.

One of the most significant problems in processing natural language is the problem of ambiguity. In

(1) I saw the man in the park with the telescope.

it is unclear whether I, the man, or the park has the telescope. If you are told by a fire inspector,

(2) There's a pile of inflammable trash next to your car. You are going to have to get rid of it.

whether you interpret the word 'it' as referring to the pile of trash or to the car will result in dramatic differences in the action you take. Ambiguities like these are pervasive in spoken utterances and written texts. Most ambiguities escape our notice because we are very good at resolving them using our knowledge of the world and of the context. But computer systems do not have much knowledge of the world and do not do a good job of making use of the context.

Approaches to Ambiguity

Efforts to solve the problem of ambiguity have focused on two potential solutions: knowledge-based and statistical.

In the knowledge-based approach, the system developers must encode a great deal of knowledge about the world and develop procedures to use it in determining the sense of texts. For the second example above, they would have to encode facts about the relative value of trash and cars, about the close connection between the concepts of 'trash' and 'getting rid of', about the concern of fire inspectors for things that are inflammable, and so on. The advantage of this approach is that it is more like the way people process language and thus more likely to be successful in the long run. The disadvantages are that the effort required to encode the necessary world knowledge is enormous, and that known procedures for using the knowledge are very inefficient.
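As a cartoon of what such encoding might look like, the sketch below resolves the pronoun in example (2) with a couple of hand-coded 'facts'. Everything in it--the facts, the rule, the fallback--is invented for illustration and falls far short of the real knowledge-engineering task described above.

```python
# Toy world knowledge: which things are normally disposable and which are valuable.
WORLD_KNOWLEDGE = {
    "trash": {"disposable": True,  "valuable": False},
    "car":   {"disposable": False, "valuable": True},
}

def resolve_it(candidates, verb_phrase):
    """Pick the antecedent of 'it' in 'get rid of it' using the encoded facts."""
    if verb_phrase == "get rid of":
        # Getting rid of something presupposes that it is the disposable candidate.
        for noun in candidates:
            if WORLD_KNOWLEDGE[noun]["disposable"]:
                return noun
    # Otherwise fall back to the most recently mentioned candidate.
    return candidates[-1]

print(resolve_it(["trash", "car"], "get rid of"))  # trash
```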

In the statistical approach, a large corpus of annotated data is required. The system developers then write procedures that compute the most likely resolutions of the ambiguities, given the words or word classes and other easily determined conditions. For example, one might collect Word-Preposition-Noun triples and learn that the triple 'saw-with-telescope' is more frequent in the corpus than the triples 'man-with-telescope' and 'park-with-telescope'. The advantages of this approach are that, once an annotated corpus is available, it can be done automatically, and it is reasonably efficient. The disadvantages are that the required annotated corpora are often very expensive to create and that the methods will yield the wrong analyses where the correct interpretation requires awareness of subtle contextual factors.
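A correspondingly small sketch of the statistical approach, applied to the attachment ambiguity in example (1): the corpus counts below are made-up numbers standing in for frequencies that would really be collected from a large annotated corpus.

```python
from collections import Counter

# Hypothetical counts of head-preposition-noun triples from an annotated corpus.
triple_counts = Counter({
    ("saw",  "with", "telescope"): 120,  # the seeing was done with the telescope
    ("man",  "with", "telescope"): 15,   # the man had the telescope
    ("park", "with", "telescope"): 3,    # the park had the telescope
})

def best_attachment(heads, preposition, noun):
    """Attach the prepositional phrase to whichever head yields the most frequent triple."""
    return max(heads, key=lambda head: triple_counts[(head, preposition, noun)])

print(best_attachment(["saw", "man", "park"], "with", "telescope"))  # saw
```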

Suggested Readings

Allen, James F. 1995. Natural language understanding. Benjamin Cummings Pub. Co. 2nd edn. - The most comprehensive textbook on computational linguistics.

Computational Linguistics, Vol. 19, No. 1, March 1993: Special issue on using large corpora: I. - A good recent collection on statistical approaches to natural language processing.

Grosz, Barbara J., Karen Sparck Jones, and Bonnie Lynn Webber. 1986. Readings in natural language processing. Santa Monica, CA: Morgan Kaufmann. - A good collection of early papers in the field.

Pereira, Fernando C. N., and Barbara J. Grosz (eds). Artificial intelligence, Vol. 63, Nos 1-2: Special volume on natural language processing. - A good recent collection of papers primarily in the knowledge-based approach to natural language processing.

The principal organization for computational linguistics is the Association for Computational Linguistics.

Chapter 4

Language Diversity

by Bernard Comrie of the University of Southern California

Differences among Languages

We are all aware that different languages have different words for the same concept, as when English 'dog' shows up in Spanish as perro or in Japanese as inu. And we are all aware that different languages are pronounced in different ways, so that the strongly trilled 'r' of Spanish perro is alien to most varieties of English. But equally important is the fact that languages differ from one another in grammar.

A straightforward illustration of this can be seen by comparing the way in which different languages order the various parts of a sentence. In an English sentence, the usual order is for the subject to come first, then the verb, then the object. In Japanese, by contrast, the usual order is first subject, then object, then verb:

gakusei ga hon o katta.

student subj. book obj. bought

The student bought the book.

In Welsh, the usual order is for the verb to come first, followed by the subject, followed in turn by the object:

prynodd y myfyriwr y llyfr

bought the student the book

The student bought the book.

A few languages have the order object-verb-subject, exactly the opposite to English, e.g. Hixkaryana, spoken by some 400 people on a tributary of the Amazon River in Brazil:

toto yahosIye kamara

man grabbed jaguar

The jaguar grabbed the man.
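The pattern in these examples can be summed up in a small sketch that rearranges the same subject, verb, and object according to a language's basic word order. The table of orders simply restates the examples above, and the output is word-by-word English, not real Japanese, Welsh, or Hixkaryana.

```python
# Basic constituent orders, restating the examples above.
WORD_ORDER = {
    "English":    ("S", "V", "O"),
    "Japanese":   ("S", "O", "V"),
    "Welsh":      ("V", "S", "O"),
    "Hixkaryana": ("O", "V", "S"),
}

def linearize(language, subject, verb, obj):
    """Arrange subject, verb, and object according to the language's basic order."""
    parts = {"S": subject, "V": verb, "O": obj}
    return " ".join(parts[slot] for slot in WORD_ORDER[language])

for language in WORD_ORDER:
    print(f"{language}: {linearize(language, 'the student', 'bought', 'the book')}")
# English: the student bought the book
# Japanese: the student the book bought
# Welsh: bought the student the book
# Hixkaryana: the book bought the student
```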

These differences would surely be soon noticed by anyone involved to any depth with the languages in question. But there are other differences in grammar between languages that are much more subtle. Let us take the English sentence I saw you, and came here. The first part of the sentence (before the 'and') is a complete sentence in its own right--the subject of the verb 'saw' is overt, appearing as the word 'I'. But the second part of the sentence is not complete in itself; its subject is missing. However, as speakers of English we have no hesitation in interpreting the second part to mean 'I came here', and not to mean 'you came here', although there is no logical reason, other than the requirements of English grammar, that this second interpretation should be excluded.

Do all languages behave like English in this respect? No. In Dyirbal, an almost extinct Australian Aboriginal language of northeast Queensland, the sentence Ngadya nginuna buran, baninyu looks very much like the English sentence. Indeed, the first part of the sentence, before the comma, does mean 'I saw you'. However, the second part is interpreted to mean 'you came here', not 'I came here':

ngadya nginuna buran, baninyu

I you saw came here

I saw you and you came here.

Dyirbal is just as strict in insisting on this interpretation as English is in insisting on the other interpretation: Both languages have strict conventions that are followed by speakers of the language; it just happens that the conventions are different in each of these two languages.

Language Universals

If languages can differ from one another in these ways, one might ask: Are there any restrictions on the ways in which they can differ from one another? Are there some general properties that are common to all human languages? There are. For instance, many languages use differences in the order of elements to carry differences in meaning. In English, one difference between the statement The green parrot can speak Hixkaryana and the question Can the green parrot speak Hixkaryana? is a difference in the order of elements of this kind, more specifically inversion of the subject and the auxiliary verb. But no language is known to relate sentences by reversing the order of all their words, however long the sentence (so that the question would appear as Hixkaryana speak can parrot green the?). Linguists believe that such a relation would violate constraints on humans' linguistic ability. In other words, while languages can be astonishingly different from one another--and this is why it is important for linguists to study languages of as many different types as possible--there are nonetheless features that unite all languages as different manifestations of the human language ability.

Chapter 5

Grammar

by Sandy Chung and Geoff Pullum of the University of California, Santa Cruz

What Is Grammar?

People often think of grammar as a matter of arbitrary pronouncements (defining 'good' and 'bad' language), usually negative ones like There is no such word as ain't or Never end a sentence with a preposition. Linguists are not very interested in this sort of bossiness (sometimes called prescriptivism). For linguists, grammar is simply the collection of principles defining how to put together a sentence.

One sometimes hears people say that such-and-such a language 'has no grammar', but that is not true of any language. Every language has restrictions on how words must be arranged to construct a sentence. Such restrictions are principles of syntax. Every language has about as much syntax as any other language. For example, all languages have principles for constructing sentences that ask questions needing a yes or no answer, e.g. Can you hear me?, questions inviting some other kind of answer, e.g. What did you see?, sentences that express commands, e.g. Eat your potatoes!, and sentences that make assertions, e.g. Whales eat plankton.

Word Order

The syntactic principles of a language may insist on some order of words or may allow several options. For instance, English sentences normally must have words in the order Subject-Verb-Object: in Whales eat plankton, 'whales' is the subject, 'eat' is the verb, and 'plankton' is the object. Japanese sentences allow the words to occur in several possible orders, but the normal arrangement (when no special emphasis is intended) is Subject-Object-Verb. Irish sentences standardly have words in the order Verb-Subject-Object.

Even when a language allows several orders of phrases in the sentence, the choice among them is systematically regulated. For example, there might be a requirement that the first phrase refer to the thing you're talking about, or that whatever the first phrase is, the second must be the main clause verb.

Not only does every language have syntax, but similar syntactic principles are found over and over again in languages. Word order is strikingly similar in English, Swahili, and Thai (which are utterly unrelated); sentences in Irish are remarkably parallel to those in Maori, Maasai, and ancient Egyptian (also unrelated); and so on.

Word Structure

However, there is another aspect of grammar in which languages differ more radically, namely in morphology, the principles governing the structure of words. Languages do not all employ morphology to a similar extent. In fact they differ dramatically in the extent to which they allow words to be built out of other words or smaller elements. The English word 'undeniability' is a complex noun formed from the adjective 'undeniable', which is formed from the adjective 'deniable', which is formed from the verb 'deny'. Some languages (like German, Nootka, and Eskimo) permit much more complex word-building than English; others (like Chinese, Ewe, and Vietnamese) permit considerably less.
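The derivation of 'undeniability' can be laid out step by step as successive affixation, each affix demanding a base of one category and producing a word of another. The rule list and the two spelling adjustments below are a toy fragment, not a full account of English word formation.

```python
# Each rule: (affix, category it attaches to, category it produces).
AFFIX_RULES = [
    ("-able", "V", "A"),   # deny (V)       -> deniable (A)
    ("un-",   "A", "A"),   # deniable (A)   -> undeniable (A)
    ("-ity",  "A", "N"),   # undeniable (A) -> undeniability (N)
]

def attach(affix, word):
    """Attach one affix, with two crude spelling adjustments."""
    if affix.startswith("-"):                       # suffix
        if word.endswith("y"):
            word = word[:-1] + "i"                  # deny + -able -> deniable
        if affix == "-ity" and word.endswith("le"):
            word = word[:-2] + "il"                 # -able + -ity -> -ability
        return word + affix[1:]
    return affix[:-1] + word                        # prefix, e.g. un-

def derive(base, category):
    """Run the toy derivation, checking that each affix finds the category it needs."""
    word, cat = base, category
    for affix, needs, gives in AFFIX_RULES:
        assert cat == needs, f"{affix} attaches to {needs}, not {cat}"
        word, cat = attach(affix, word), gives
        print(word, cat)
    return word

derive("deny", "V")
# deniable A
# undeniable A
# undeniability N
```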

Languages also differ greatly in the extent to which words vary their shape according to their function in the sentence. In English you have to choose different pronouns ('they' versus 'them') for Subject and Object (though there is no choice to be made with nouns, as in Whales eat plankton). In Latin, the shapes of both pronouns and nouns vary when they are used as subjects or objects; but in Chinese, no words vary in shape like this.

Although we have identified some differences between syntax and morphology, to some extent it is a matter for ongoing research to decide what counts as morphology and what counts as syntax. The answer can change as discoveries are made and theories improved. For instance, most people--in fact, most grammarians--probably say that 'wouldn't' is two words: 'would' followed by an informal pronunciation of 'not'. But if we treat 'wouldn't' as one word, then we can explain why it behaves as a single unit in the yes/no question Wouldn't it hurt? Notice that we don't say Would not it hurt? for Would it not hurt?, or Would have he cared? for Would he have cared? In each case, the bad versions have two words before the subject. The syntactic principle for English yes/no questions is that the auxiliary verb occurs before the subject.
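That principle can be stated as a one-line rule: put the single auxiliary word, contracted or not, in front of the subject. The sketch below is a toy of that one construction, not a grammar of English questions.

```python
def yes_no_question(subject, auxiliary, rest):
    """Form an English yes/no question by putting the auxiliary before the subject."""
    return f"{auxiliary.capitalize()} {subject} {rest}?"

print(yes_no_question("it", "wouldn't", "hurt"))     # Wouldn't it hurt?
print(yes_no_question("he", "would", "have cared"))  # Would he have cared?

# Treating "wouldn't" as one word is what keeps exactly one word in front of the
# subject; "Would not it hurt?" would wrongly put two words there.
```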

If this is correct, by the way, then 'ain't' certainly is a word in English, and we know what kind: It's an auxiliary verb (the evidence: We hear questions like Ain't that right?). English teachers disapprove of 'ain't' (naturally enough, since it is found almost entirely in casual conversation, never in formal written English, which is what English teachers are mostly concerned to teach). But linguists are generally not interested in issuing pronouncements about what should be permitted or what should be called what. Their aim is simply to find out what language (including spoken language) is like.

Even if you learned all the words of Navajo, and how they are pronounced, you would not be able to speak Navajo until you also learned the principles of Navajo grammar. There must be principles of Navajo grammar that are different from those of other languages (because speakers of other languages cannot understand Navajo), but there may also be principles of universal grammar, the same for all languages. Linguists cannot at present give a full statement of all the principles of grammar for any particular language, or a statement of all the principles of universal grammar. Finding out what they are is a central aim of modern linguistics.

Chapter 6

Language and Brain

by Stephen Crain of the University of Maryland, College Park

The Domain of Study

Many linguistics departments offer a course entitled 'Language and Brain' or 'Language and Mind.' Such a course examines the relationship between linguistic theories and actual language use by children and adults. Findings are presented from research on a variety of topics, including the course of language development, language production and understanding, and the nature of language breakdown due to brain injury. These topics provide examples of what is currently known about language and the mind, and they offer insights into the central issues in this area of linguistic research.

Language is a significant part of what makes us human, along with other cognitive skills such as mathematical and spatial reasoning, musical and drawing ability, the capacity to form social relationships, and the like. As with these other cognitive skills, linguistic behavior is open to investigation using the familiar tools of observation and experimentation.

It is wrong, however, to exaggerate the similarity between language and other cognitive skills, because language stands apart in several ways. For one thing, the use of language is universal--all normally developing children learn to speak at least one language, and many learn more than one. By contrast, not everyone becomes proficient at complex mathematical reasoning, few people learn to paint well, and many people cannot carry a tune. Because everyone is capable of learning to speak and understand language, it may seem to be simple. But just the opposite is true--language is one of the most complex of all human cognitive abilities.

The Language Instinct

Even outside the laboratory, one can make many interesting observations about the course of language development. Many of the most complex aspects of language are mastered by three- and four-year-old children. It is astonishing for most parents to watch the process unfold. What many parents don't realize is that all children follow roughly the same path in language development. And all children reach essentially the same conclusions about language, despite differences in experience. All preschool children, for example, have mastered several complex aspects of the syntax and semantics of the language they are learning. This suggests that certain aspects of syntax and semantics are not taught to children. Further underscoring this conclusion is the finding, from experimental studies with children, that knowledge about some aspects of syntax and semantics sometimes develops in the absence of corresponding evidence from the environment.

To explain this remarkable collection of facts about language development, linguists have attempted to formulate a theory of linguistic principles that apply to all natural languages (as opposed to artificial languages, such as programming languages). These principles, known as linguistic universals, offer insight into the acquisition scenario set out before us: why language is universal, why it is mastered so rapidly, why there are often only loose or incomplete connections between linguistic knowledge and experience. These features of development follow from a single premise--that linguistic universals are part of a human 'instinct' to learn language, i.e., part of a biological blueprint for language development.

There is another way in which knowledge of language and real-world experience are kept apart in the minds of children; they do not always base their understanding of language on what they have come to know from experience. For example, children do not combine the words of the sentence 'Mice chase cats' in a way that conforms with their experience; if they did, they would understand it to mean that cats chase mice, not the reverse. In other words, children are able to tell when sentences are false, as well as when they are true. This means that children use their knowledge of language structure in comprehending sentences, even if it means ignoring their wishes and the beliefs they have formed about the world around them.

Modularity

Research on adult language understanding is also concerned with the architecture of the mind and with the possibility that linguistic knowledge and belief-systems reside in separate 'modules'. To investigate the issue of modularity, studies of adult language understanding ask when different sources of information are used in processing sentences that have more than one possible interpretation. It is in the nature of language that many sentences are ambiguous. Yet, ordinarily, by the time a person reaches the end of an ambiguous sentence, only a single interpretation remains, the one that is consistent with the conversational context. In the absence of any context, e.g. in a laboratory setting, the interpretation that survives is often the one that best conforms with a person's general knowledge of the world.

Adopting a modular conception of the mind, some researchers contend that the preference for one interpretation over its competitors is initially decided on linguistic grounds (syntactic and semantic structure); real-world knowledge comes into play only later, on this view. The availability of different sources of information is difficult to determine, however, because the resolution of ambiguity takes place as a sentence is being read or heard, rather than after all the words have been taken in. In order to establish the time-course of various linguistic and nonlinguistic operations involved in language understanding, sentence processing is often measured in real time, by recording the movements of the eyes in reading, for example. The jury is still out on the question of the modularity of mind in language processing, but there are some suggestive research findings, and few researchers in the area would deny the contribution of linguistic knowledge in the process.

Another source of evidence bearing on the modularity hypothesis comes from studies of language breakdown. Language loss, or aphasia, is not an all-or-nothing affair; when a particular area of the brain is affected, the result is a complex pattern of retention and loss, often involving both language production and comprehension. The complex of symptoms can be strikingly similar for different people with the same affected area of the brain. Research in aphasia asks: Which aspects of linguistic knowledge are lost and which are spared? The fact that language loss is not always associated with a corresponding loss of pragmatic knowledge supports the modularity hypothesis, bringing the findings of research on aphasia in line with those from the study of child and adult language understanding.

Chapter 7

The Sounds of Speech

by Morris Halle of MIT

Identifying Words

When we speak we say words and when spoken to we hear words. In normal discourse, however, we do not separate---the---words---by---short---pauses, but rather run one word into the next. Yet in spite of this we still hear utterances as composed of discrete words. Why should that be so?

A clue is provided by the fact that in order for us to hear the words, the utterance must be in a language we know; in utterances in a language we do not know we do not hear the words. Similarly, when we hear a string of nonsense syllables, we cannot tell whether it is composed of one or of several words. Knowledge of language is therefore crucial.

In a way this is not surprising. Everybody who has studied a foreign language knows that learning the words is a major part of mastering the language. Knowing the words is not sufficient, but it surely is necessary. When we learn a word we store in our memory information that allows us both to say the word and to recognize it when said by someone else. And the reason we do not hear words when spoken to in a foreign language is that we have not learned them, we do not have them in our linguistic memory, i.e., in the part of our memory dedicated to language.

Speaking

A plausible account of an act of speaking might run as follows. Speakers select from their memories the words they wish to say. They then perform a special kind of gymnastics with their speech organs or articulators, i.e., with the tongue, lips, velum, and larynx. The gymnastics results in an acoustic signal that both the speaker and the interlocutors hear. Since in performing the gymnastics speakers do not pause at the end of each word, the words in the utterance run into one another. This model of speaking is represented graphically below:

Words in memory >>> Articulatory action >>> Acoustic signal

There is some evidence that when we hear speech the same process is activated but in reverse: An acoustic signal strikes our ears; we interpret the signal in terms of the articulatory actions that gave rise to it, and we use this interpretation--rather than the acoustic signal itself--to access our memory.

Consider now the gymnastics that we execute as we pronounce the English words 'meet' and 'Mott'. In both words we begin with an action closing the oral cavity with the lips and end with an action by the tongue blade closing the oral cavity at a point in the anterior region of the hard palate. Between these two actions is an action of the tongue body: The tongue body is raised and advanced in 'meet', and lowered and retracted in 'Mott' without, however, closing the cavity. The production of these words is, thus, made up of distinct actions by three distinct articulators. The actions must, moreover, proceed in the order indicated: If the order of the three actions is reversed, different words are produced, viz., team, Tom. Facts of this kind motivate the hypothesis that the words we say are composed of discrete sounds or phonemes.
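The hypothesis that words are sequences of discrete sounds can be pictured with a toy table of phoneme strings. The transcriptions below are rough stand-ins, not IPA; the point is only that reversing the order of the same three sounds picks out a different word.

```python
# Rough (non-IPA) phoneme sequences for the four words discussed above.
WORDS = {
    ("m", "ee", "t"): "meet",
    ("m", "o",  "t"): "Mott",
    ("t", "ee", "m"): "team",
    ("t", "o",  "m"): "Tom",
}

for phonemes in [("m", "ee", "t"), ("m", "o", "t")]:
    reversed_word = WORDS[phonemes[::-1]]
    print(f"{WORDS[phonemes]} with its sounds reversed -> {reversed_word}")
# meet with its sounds reversed -> team
# Mott with its sounds reversed -> Tom
```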

Words in Memory

As noted above, words are learned and are stored in our linguistic memory. If the words we utter are composed of discrete sounds, then it is reasonable to suppose that words in memory also consist of sequences of discrete sounds. Scientific study of language strongly supports this supposition although the evidence and argumentation are too complex to be given here.

In uttering a word we actualize the sequence of discrete sounds stored in memory as a sequence of actions of our articulators. Because, like other human actions, speaking is subject to limitations on accuracy, it is to be expected that there will be some slippage and that the discreteness of the sounds will be compromised to some extent in the utterance. In fact, X-ray motion pictures of speaking show that the actions of the articulators in producing a given sound do not begin and end at exactly the same time. This slippage, however, does not interfere with the hearer's ability to identify the words--i.e., to access them in memory. Inertia of the articulators is, of course, not the only factor in the failure of the speech signal to reproduce accurately various aspects of the word as represented in memory. Other factors are rapid speech rate and a variety of memory lapses.

In spite of the fact that burps, yawns, coughs, the sound made in blowing out a candle, and many other noises are produced by actions of the articulators, they are not perceived as sequences of phonemes, even though they may be indistinguishable acoustically and articulatorily from utterances of phoneme sequences. Not being words, these noises are not stored in the part of our memory that is dedicated to words. By hypothesizing that only items stored in the linguistic memory are composed of phonemes, we explain why burps, yawns, etc. are not perceived as phoneme sequences.

In sum, speech sounds are the constituents of words, and words are special in that only words are sequences of speech sounds.

Suggested Readings

Halle, Morris. 1992. "Phonological features". International encyclopedia of linguistics, Vol. III, ed. by William Bright, 207-11. Oxford: Oxford University Press.

Ladefoged, Peter, and Ian Maddieson. 1996. The sounds of the world's languages. Oxford, UK and Cambridge, MA: Blackwell.

McCarthy, John J. 1988. "Feature geometry and dependency: A review". Phonetica 45. 84-108.

Chapter 8

Machine Translation

by Martin Kay of Xerox-PARC, Palo Alto, CA, and Stanford University

History of Machine Translation

At the end of the 1950s, researchers in the United States, Russia, and Western Europe were confident that high-quality machine translation (MT) of scientific and technical documents would be possible within a very few years. After the promise had remained unrealized for a decade, the National Academy of Sciences of the United States published the much cited but little read report of its Automatic Language Processing Advisory Committee. The ALPAC Report recommended that the resources that were being expended on MT as a solution to immediate practical problems should be redirected towards more fundamental questions of language processing that would have to be answered before any translation machine could be built. The number of laboratories working in the field was sharply reduced all over the world, and few of them were able to obtain funding for more long-range research programs in what then came to be known as computational linguistics.

There was a resurgence of interest in machine translation in the 1980s and, although the approaches adopted differed little from those of the 1960s, many of the efforts, notably in Japan, were rapidly deemed successful. This seems to have had less to do with advances in linguistics and software technology or with the greater size and speed of computers than with a better appreciation of special situations where ingenuity might make a limited success of rudimentary MT. The most conspicuous example was the METEO system, developed at the University of Montreal, which has long provided the French translations of the weather reports used by airlines, shipping companies, and others. Some manufacturers of machinery have found it possible to translate maintenance manuals used within their organizations (not by their customers) largely automatically by having the technical writers use only certain words and only in carefully prescribed ways.

Why Machine Translation Is Hard

Many factors contribute to the difficulty of machine translation, including words with multiple meanings, sentences with multiple grammatical structures, uncertainty about what a pronoun refers to, and other problems of grammar. But two common misunderstandings make translation seem altogether simpler than it is. First, translation is not primarily a linguistic operation, and second, translation is not an operation that preserves meaning.

There is a famous old example that makes the first point well. Consider the sentence:

The police refused the students a permit because they feared violence.

Suppose that it is to be translated into a language like French in which the word for 'police' is feminine. Presumably the pronoun that translates 'they' will also have to be feminine. Now replace the word 'feared' with 'advocated'. Now, suddenly, it seems that 'they' refers to the students and not to the police and, if the word for students is masculine, it will therefore require a different translation. The knowledge required to reach these conclusions has nothing linguistic about it. It has to do with everyday facts about students, police, violence, and the kinds of relationships we have seen these things enter into.

The second point is, of course, closely related. Consider the following question, stated in French: Où voulez-vous que je me mette? It means literally, "Where do you want me to put myself?" but it is a very natural translation for a whole family of English questions of the form "Where do you want me to sit/stand/sign my name/park/tie up my boat?" In most situations, the English "Where do you want me?" would be acceptable, but it is natural and routine to add or delete information in order to produce a fluent translation. Sometimes it cannot be avoided because there are languages like French in which pronouns must show number and gender, Japanese where pronouns are often omitted altogether, Russian where there are no articles, Chinese where nouns do not differentiate singular and plural nor verbs present and past, and German where flexibility of the word order can leave uncertainties about what is the subject and what is the object.

The Structure of Machine Translation Systems

While there have been many variants, most MT systems, and certainly those that have found practical application, have parts that can be named for the chapters in a linguistic text book. They have lexical, morphological, syntactic, and possibly semantic components, one for each of the two languages, for treating basic words, complex words, sentences and meanings. Each feeds into the next until a very abstract representation of the sentence is produced by the last one in the chain.

There is also a 'transfer' component, the only one that is specialized for a particular pair of languages, which converts the most abstract source representation that can be achieved into a corresponding abstract target representation. The target sentence is produced from this essentially by reversing the analysis process. Some systems make use of a so-called 'interlingua' or intermediate language, in which case the transfer stage is divided into two steps, one translating a source sentence into the interlingua and the other translating the result of this into an abstract representation in the target language.
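The overall shape of such a system can be sketched as a chain of functions: language-specific analysis, a language-pair-specific transfer step, and generation that reverses the analysis. Every function below is a placeholder--a toy word-for-word substitution standing in for the lexical, morphological, syntactic, and semantic components described above.

```python
def analyze(sentence, source_language):
    """Placeholder for lexical, morphological, syntactic (and semantic) analysis."""
    return {"language": source_language, "representation": sentence.lower().split()}

def transfer(abstract, target_language):
    """Placeholder for the only language-pair-specific step."""
    toy_bilingual_lexicon = {"whales": "baleines", "eat": "mangent", "plankton": "du plancton"}
    return [toy_bilingual_lexicon.get(word, word) for word in abstract["representation"]]

def generate(target_representation):
    """Placeholder for generation: reverse the analysis to produce a target sentence."""
    return " ".join(target_representation).capitalize() + "."

abstract = analyze("Whales eat plankton", "English")
print(generate(transfer(abstract, "French")))
# Baleines mangent du plancton.  (a crude word-by-word rendering; a real system's
# morphological and syntactic components would have to do much more)
```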

Chapter 9

Meaning

by William Ladusaw of the University of California, Santa Cruz

Why Study Semantics and Pragmatics?

Meaning seems at once the most obvious feature of language and the most obscure aspect to study. It is obvious because it is what we use language for--to communicate with each other, to convey 'what we mean' effectively. But the steps in understanding something said to us in a language in which we are fluent are so rapid, so transparent, that we have little conscious feel for the principles and knowledge which underlie this communicative ability.

Questions of 'semantics' are an important part of the study of linguistic structure. They encompass several different investigations: how each language provides words and idioms for fundamental concepts and ideas (lexical semantics), how the parts of a sentence are integrated into the basis for understanding its meaning (compositional semantics), and how our assessment of what someone means on a particular occasion depends not only on what is actually said but also on aspects of the context of its saying and an assessment of the information and beliefs we share with the speaker.
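As a tiny illustration of the compositional side, the sketch below assigns toy denotations to three words and computes a truth value for 'Whales eat plankton' from them. The miniature 'world' and the universal-plus-existential reading are invented purely for illustration.

```python
# A miniature model: what each word picks out in a toy world.
WORLD = {
    "whales":   {"whale1", "whale2"},
    "plankton": {"plankton1"},
    "eat":      {("whale1", "plankton1"), ("whale2", "plankton1")},
}

def sentence_is_true(subject, verb, obj):
    """'Every SUBJECT VERBs some OBJECT', computed from the word denotations above."""
    return all(
        any((s, o) in WORLD[verb] for o in WORLD[obj])
        for s in WORLD[subject]
    )

print(sentence_is_true("whales", "eat", "plankton"))  # True
```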

Applications

Research in these areas reveals principles and systems which have many applications. The study of lexical (word) semantics and the conceptual distinctions implicit in the vocabulary of a language improves dictionaries which enable speakers of a language to extend their knowledge of its stock of words. It also improves materials which help those acquiring a second language through instruction. Studying the rules governing the composition of word meanings into sentence meanings and larger discourses allows us to build computer systems which can interact with their users in more naturalistic language. Investigating how our understanding of what is said is influenced by our individual and cultural assumptions and experience, which are much less visible than what is explicitly said, can help make us more aware and effective communicators. The result of all of these (sometimes very abstract) investigations is a deeper understanding and appreciation of the complexity and expressive elegance of particular languages and the uniquely human system of linguistic communication.

The Importance of Context

We can appreciate how someone can mean more than they 'strictly speaking' say by considering the same thing said in two different contexts. Consider two people, Pat and Chris, who are getting to know each other on a first date. If Chris says to Pat at the end of the evening, "I like you a lot.

If you speak two languages, are they stored in two different parts of your brain? Is the left side of your brain really the language side? If you lose the ability to talk because of a stroke, can you learn to talk again?

Do people who read languages written from left to right (like English) think differently from people who read languages written from right to left (like Hebrew and Arabic)? What about if you read a language that is written using some other kind of symbols, like Chinese or Japanese? If you're dyslexic, is your brain different from the brain of someone who has no trouble reading?

All of these questions and more are what neurolinguistics is about. Techniques like Functional Magnetic Resonance Imaging (FMRI) and event-related potential (ERP) are used to study language in the brain, and they are constantly being improved. We can see finer and finer details of the brain's constantly changing blood flow--where the blood flows fastest, the brain is most active. We can see more and more accurate traces of our electrical brain waves and understand more about how they reflect our responses to statements that are true or false, ungrammatical or nonsense, and how the brain's electrical activity varies depending on whether we are listening to nouns or verbs, words about colors, or words about numbers. New information about neurolinguistics is regularly covered in national news sources.

Where Is Language in the Brain?

Brain activity is like the activity of a huge city. A city is organized so that people who live in it can get what they need to live on, but you can't say that a complex activity, like manufacturing a product, is 'in' one place. Raw materials have to arrive, subcontractors are needed, the product must be shipped out in various directions. It's the same with our brains. We can't say that all of language is 'in' a particular part of the brain; it's not even true that a particular word is 'in' just one spot in a person's brain. But we can say that listening, understanding, talking, and reading each involve activities in certain parts of the brain much more than other parts.

Most of these parts are in the left side of your brain, the left hemisphere, regardless of what language you read and how it is written. We know this because aphasia (language loss due to brain damage) is almost always due to left hemisphere injury in people who speak and read Hebrew, English, Chinese, or Japanese, and also in people who are illiterate. But areas in the right side are essential for communicating effectively and for understanding the point of what people are saying. If you are bilingual, your right hemisphere may be somewhat more involved in your second language than it is in your first language.

Are All Human Brains Organized in the Same Way?

The organization of your brain is similar to other people's because we almost all move, hear, see, and so on in essentially the same way. But our individual experiences and training also affect the organization of our brains--for example, deaf people understand sign language using just about the same parts of their brains that hearing people do for spoken language.

Aphasia and Dyslexia

What is aphasia like? Is losing language the reverse of learning it? People who have lost some or most of their language because of brain damage are not like children. Using language involves many kinds of knowledge and skill; some can be badly damaged while others remain in fair condition. People with aphasia have different combinations of things they can still do in an adult-like way plus things that they now do clumsily or not at all. Therapy can help them to regain lost skills and make the best use of remaining abilities. Adults who have had brain damage and become aphasic recover more slowly than children who have had the same kind of damage, but they continue to improve over decades if they have good language stimulation.

What about dyslexia, and children who have trouble learning to talk even though they can hear normally? There probably are brain differences that account for their difficulties, and research in this area is moving rapidly. Since brains can change with training much more than we used to think, there is renewed hope for effective therapy for people with disorders of reading and language.

Suggested Reading

Menn, Lise, et al. 1995. Non-fluent aphasia in a multilingual world. Amsterdam: John Benjamins.
