Basic issues in perception for language
For successful language comprehension, language users must recognise the signals that reach the brain from the eye or the ear (or even from the fingers, in the case of Braille) as being language rather than non-language; they must recognise them as being in a language that they understand; and they must interpret them as meaningful. In the comprehension of written and spoken language, these tasks involve knowledge about how letters and sounds are used, but also knowledge about writers and speakers, about the processes of writing and speaking, and about the structures and units of language. In this section we focus on some of the general issues that arise in perception for language.
Hemispherical specialisation 
It is clear that as a species, humans have become specially adapted for language. Our upright posture, the position of our larynx (voice box) in the throat, and the shape and dimensions of our vocal tract all contribute to our ability to produce a rich and well-controlled range of speech sounds. Our hearing for language is helped by the fact that these speech sounds have sound frequencies and amplitudes to which our auditory system is especially sensitive. There is also neurophysiological evidence that humans have perceptual specialisation for language. This includes hemispherical specialisation, where the two halves of the brain have different specialisations. It is typically, though not always, the case that language faculties are located predominantly in the left hemisphere of the brain. Interestingly, and following the general pattern that the left hemisphere is responsive to and responsible for the right-hand side of the body, this is linked to a right ear advantage (REA) for speech for most people. This was demonstrated in the 1960s and 1970s in dichotic listening experiments (e.g. Kimura, 1961; Studdert-Kennedy, Shankweiler & Pisoni, 1972; Studdert-Kennedy & Shankweiler, 1970). In these experiments, participants hear competing sequences of words presented over headphones to each ear. More accurate identification occurs for words presented to the right ear, as long as participants have no basic hearing imbalance between the ears.
An interesting question is whether the REA arises because we hear better with the right ear, or because we hear speech sounds better with the right ear, or because we process language better when we receive it through the right ear. That is, at what processing level does the left hemisphere get its advantage? To test this, dichotic listening experiments have been carried out with a number of different kinds of stimuli. Musical stimuli fail to show the REA, and indeed have been found to give a left ear advantage (Bryden, 1988). This shows that the REA is not a reflection of auditory processing per se, as this would predict an REA for any kind of auditory input. In addition, neurophysiological studies using speech and equivalently complex non-speech sounds found differences in left hemisphere activation for the speech and non-speech sounds, but equal activation levels for the two types in the right hemisphere (Parviainen, Helenius & Salmelin, 2005).
The REA is also clearly not phonetic, i.e. not an advantage specifically for speech sounds, because Morse code signals (sequences of short and long tones acting as a code for letters of the alphabet) also show the REA (Papcun, Krashen, Terbeek, Remington & Harshman, 1974). It is most likely, therefore, that the REA reflects the linguistic processing that takes place in the left hemisphere, which applies to both speech and Morse code input. It also turns out that speech-like but unintelligible stimuli show an REA (i.e. more can be remembered about these stimuli when they are presented to the right ear). Even though these stimuli are unintelligible, our linguistic processing system attempts to make sense of them, and so we can recall more about them as a consequence.
If participants are instructed to pay greater attention to one ear than the other, this can enhance or decrease the REA, suggesting again that the advantage is not an automatic peripheral effect (Hugdahl & Andersson, 1986). In addition, the REA for simple syllables in dichotic listening tasks is affected by the nature of a preceding prime stimulus presented to both ears simultaneously (Saetrevik & Hugdahl, 2007). If the prime differs from both of the two test items, one presented to each ear, then the REA persists, i.e. the item presented to the right ear is recalled more accurately. If the prime is identical to the left-channel test item, then the REA increases, and if the prime is identical to the right-channel test item, the REA decreases, i.e. there is inhibition of the previously presented prime item. Inhibition has been shown independently for primes that have to be ignored (i.e. where no response is expected for the prime item, as in this task). Saetrevik & Hugdahl argue that after the prime item is presented, cognitive control inhibits it because it is a potential interfering factor, and this leads to a recognition advantage for the novel item. Interestingly, this effect is found in this dichotic listening task even when the primes are displayed visually on a computer screen (e.g. <ga> before dichotic auditory stimuli consisting of /ga/ to one ear and /ba/ to the other).
Mapping from the input to the linguistic system
An important part of perceptual processing for language is how the listener or reader gets from the input signal to the linguistic system. In Chapters 8 and 9 we discuss this process more closely in the context of spoken and visual word recognition respectively, building on the assumptions that words provide a significant linguistic building block and that recognising words in the input stream is therefore an important objective of the perceptual system.
An issue that is common to both visual and auditory processing, though different in its specific workings, is the nature of pre-lexical processing, i.e. what kinds of units need to be identified before words can be accessed. We are so used to a particular way of thinking about how written words are made up that it might seem obvious that word recognition would involve the recognition of a word’s component letters. However, there is evidence that practised readers recognise individual letters only in the case of relatively uncommon words, and that a lot of word recognition is based on overall word shape. While this suggests recognition units larger than the letter, other approaches to visual word recognition argue that there are smaller recognition units than letters, and that letter features such as horizontal, vertical or diagonal lines at various heights on a text line form an important part of the recognition process.
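As a purely illustrative aside (this is not a model proposed in this book), the idea of recognition units smaller than the letter can be sketched in a few lines of code: each letter is represented as a bundle of hypothetical visual features (vertical bar, horizontal strokes, diagonals, and so on), and an input is identified as the letter whose feature bundle it matches most closely. The feature inventory and the scoring rule below are invented for the example and are far cruder than any real model of reading.

    # Toy sketch of feature-based letter identification (Python).
    # The feature inventory is hypothetical and illustrative only.
    LETTER_FEATURES = {
        "E": {"vertical_bar", "top_horizontal", "mid_horizontal", "bottom_horizontal"},
        "F": {"vertical_bar", "top_horizontal", "mid_horizontal"},
        "L": {"vertical_bar", "bottom_horizontal"},
        "A": {"left_diagonal", "right_diagonal", "mid_horizontal"},
    }

    def identify(observed):
        """Return the letter whose feature bundle best matches the observed features."""
        def score(letter):
            target = LETTER_FEATURES[letter]
            # shared features count in favour; missing or spurious features count against
            return len(target & observed) - len(target ^ observed)
        return max(LETTER_FEATURES, key=score)

    print(identify({"vertical_bar", "top_horizontal", "mid_horizontal", "bottom_horizontal"}))  # -> E

The point of the sketch is simply that identification can proceed from evidence about sub-letter features, so that a partly degraded or ambiguous input can still be mapped onto the letter with which it is most consistent.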
Similarly, models of spoken word recognition argue for different types of intervening representation between the input and the word. The most obvious is the phoneme, or distinct speech sound, as the nearest equivalent in speech to the letter in visually presented words. But smaller units, e.g. phonetic features, have also been claimed to have perceptual validity. In addition, pre-lexical units larger than the phoneme have also been proposed, such as the syllable or the diphone.
In both the visual and auditory domains there are also peripheral perceptual processes that must take place before linguistically relevant information can be extracted from the input. These more automatic processes do not generally form part of the subject matter of psycholinguistics, except insofar as they may be relevant to the extraction of linguistically relevant features.
Variability 
A significant issue in perception for language is variability. That is, the input that we receive can be highly variable in its detail. This variability adds to the difficulty of identifying the units of writing or of speech. Writing styles and legibility vary from one person to the next. The care taken over writing depends on the nature of the writing task and the intended reader of the material: notes written by a student during a lecture will differ from a scholarship application letter from the same student. The choice of font in a typed document will affect letter shape, as Figure 7.1 illustrates.

 
Variability is very obviously present in speech too. Speakers have different vocal tract shapes and sizes and different chest cavity sizes. These and other physical factors will contribute to variation in the sounds produced by different speakers. Even the same speaker will produce qualitatively different versions of the same sound on separate occasions, depending on a range of factors such as health, emotional state, the situation of speaking, the phonetic and linguistic context in which a sound is found, and so on.
Variability is a potential problem for perception, as too much variability will result in difficulty identifying the intended letter or sound or word or message. Researchers in speech perception and automatic speech recognition have often struggled to identify invariant cues to the identity of individual speech sounds. Of course, some variability is predictable and therefore potentially useful. Predictable variation in speech can indicate differences between speakers in terms of their age, sex, size, social class, place of origin, and many other demographic, social and personal factors. Some variation, both in writing and in speech, results from the effects of the context in which a letter or sound is found, and can therefore provide information about that context which can actually help perception. For instance, nasalisation on an English vowel may make that vowel different from other instances of that vowel, and therefore increase the vowel’s variability, but at the same time this nasalisation may be informative, because it may tell you that the following consonant is nasal.
Exemplars
Rather than assume that the input is matched against a single template for a phoneme, word or other recognition unit, recent approaches to language perception and comprehension have argued that our memory systems allow us to store multiple representations for a given unit. These are known as exemplars, and are assumed to be rich in information that relates to the actual utterances on which they are based. For example, it has been argued that we have exemplar representations for words which include information about the speaker who uttered the word, such as their age, sex, social grouping, dialect, etc., as well as possibly about the time and place of the utterance, and so on (Hay, Warren & Drager, 2006; Johnson, 1997; Pierrehumbert, 2001; Strand, 1999). Exemplars provide a possible mechanism for coping with variation, because the variation becomes part of the richness of the set of representations rather than a problem to be overcome. Since social information is also associated with the exemplars, speech perception has a built-in mechanism for recognising social variation and for normalising for it. As new exemplars are encountered, they are added to our exemplar sets, and older exemplars that are not re-activated fade over time.
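As an illustration only (again, not a model from this chapter), the logic of an exemplar store can be sketched in code: every token that is heard is stored along with speaker information, recognition is a similarity-weighted match of the input against the stored tokens, re-activated exemplars are strengthened, and exemplars that are not re-activated decay. The feature vectors, similarity function and decay constant below are all hypothetical placeholders.

    # Minimal sketch of an exemplar-based word store (Python); all parameters are hypothetical.
    import math
    from dataclasses import dataclass

    @dataclass
    class Exemplar:
        word: str            # the lexical label this token was heard as
        features: list       # hypothetical acoustic feature vector
        speaker: dict        # e.g. {"sex": "F", "dialect": "NZ English"}
        strength: float = 1.0

    class ExemplarStore:
        def __init__(self, decay=0.95):
            self.exemplars = []
            self.decay = decay      # decay applied to exemplars that are not re-activated

        def add(self, exemplar):
            # every new token is stored, variability and all
            self.exemplars.append(exemplar)

        def similarity(self, a, b):
            # toy similarity: falls off exponentially with Euclidean distance
            return math.exp(-math.dist(a, b))

        def recognise(self, features):
            # similarity-weighted vote over all stored exemplars
            scores = {}
            for ex in self.exemplars:
                scores[ex.word] = scores.get(ex.word, 0.0) + ex.strength * self.similarity(features, ex.features)
            winner = max(scores, key=scores.get)
            for ex in self.exemplars:
                if ex.word == winner:
                    ex.strength += self.similarity(features, ex.features)   # re-activation strengthens
                else:
                    ex.strength *= self.decay                               # unused exemplars fade
            return winner

On this view, speaker-related variation is not noise to be stripped away before recognition takes place; it is part of what is stored, and the same match against stored tokens that identifies the word can also make the associated social information available.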
Segmentation
If language comprehension involves the recognition of basic units of writing or of speech, then these units need to be separable from adjacent units. But segmentation of the input is not always straightforward.

Take for instance the example in 7.1. Although this is a highly regularised version of connected letters, using a computer font rather than actual handwriting, there are areas of ambiguity and uncertainty concerning where one letter finishes and the next begins. For instance, the beginning of the final word could be read as <u>, and the second letter in the first word might be read as <a>.
Likewise, speech sounds run into one another as the articulators move from the position for one sound to that for the next. Figure 7.2 gives a visual representation – a spectrogram (see sidebar) – of speech, for the utterance Pete is keen to lead the team. An approximate segmentation into words is shown below the spectrogram. Note that there are seldom any clear 'boundaries' between the words in the spectrogram, i.e. segmentation of the speech into words is difficult, let alone segmentation into individual sounds within those words. In this respect, speech is different from most instances of writing, in that writing – even joined-up writing as in 7.1 – usually places spaces between words. Note also that Figure 7.2 includes a good example of variability resulting from the context in which a sound is uttered: there are four instances of the /i/ sound in this utterance, and the portions of the spectrogram corresponding to those instances are not identical, as shown in Figure 7.3. They differ both in their duration and in the shape of the darker bands showing how the sound energy is distributed, though there are also some common features.
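As a practical aside (not part of the chapter's argument), a spectrogram of the kind described here can be produced with standard signal-processing tools. The sketch below, which assumes a mono WAV recording of the utterance under a hypothetical file name, uses scipy and matplotlib to plot how the sound energy is distributed over time and frequency.

    # Minimal sketch: compute and plot a spectrogram of a recorded utterance (Python).
    # "pete_is_keen.wav" is a hypothetical mono recording of the example sentence.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    fs, samples = wavfile.read("pete_is_keen.wav")          # sampling rate and waveform
    f, t, Sxx = spectrogram(samples, fs=fs, nperseg=512, noverlap=384)

    # Plot energy in decibels; with a grey colour map, darker bands show where energy is concentrated
    plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), cmap="Greys", shading="gouraud")
    plt.xlabel("Time (s)")
    plt.ylabel("Frequency (Hz)")
    plt.show()

Plots like this make the segmentation problem visible: the energy runs continuously through the utterance, with few places where one word obviously ends and the next begins.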
This section has highlighted some of the common issues for the perception of written and spoken language. These issues relate to the fact that the perceiver has to extract linguistic information from the input signal, and that this can be made difficult by two major problems. One is the lack of invariance in how the 'same' letter or sound is produced on different occasions or by different people. The other is how to segment a piece of text or an utterance into its constituent parts. We have noted above that the segmentation of speech into words is more problematic than the segmentation of written or printed text. In addition, the transitory nature of speech means that the initial re-coding of the input into linguistic units is likely to be more critical for speech than for writing, where the reader can go back and look again at the input. In the next section we will look at further issues for speech perception.


				
				
					
					