Problems for underspecification
Date: 2024-12-25
Author: APRIL McMAHON
Book or Source: LEXICAL PHONOLOGY AND THE HISTORY OF ENGLISH
Page and Part: P217-C5



Underspecification - and the more radical the better - seemed an indispensable part of 1980s derivational phonology, but there has been a typical 1990s backlash against it. Several phonological frameworks have ruled it out altogether: for instance, Prince and Smolensky (1993: 188) note that the unmarkedness of coronals and their diversity and frequency in segment inventories are irreconcilable within underspecification, and conclude that Optimality Theory should `abandon underspecification in favor of markedness theory'. Similarly, underspecification is rejected in Government Phonology because it conflicts with the autonomous interpretation hypothesis (Harris and Lindsey 1995), which holds that phonological elements should be directly interpretable at all levels. Even within rule-based derivational phonology, underspecification is increasingly challenged (Mohanan 1991, Steriade 1995), for reasons of unlearnability, psychological implausibility, and theory-internal contradiction.


We have already seen that underspecification is predicated on simplicity, and on an alleged though rarely defended preference for computation over storage. Harris and Lindsey (1995: 48) argue, however, that this implies an inefficient model of lexical access: `Just as an archived computer text file has to be de-archived before it can be accessed, so would a speaker-hearer have first to ``unpack'' the condensed, underspecified form of a lexical entry before submitting it to articulation or recognition.'


This problem also cuts the other way, in terms of learnability. Archangeli (1988: 192) notes that the learnability of a contrastively specified system depends on the learner's knowledge of both distinctive and non-distinctive features; thus, the child must initially internalize a fully specified representation, then strip out non-contrastive specifications algorithmically. This assumption guarantees the existence of a single contrastively specified underlying representation for any system, but we must ask what would motivate a child, having internalized the fully specified representation which is necessarily prior to a contrastively specified level, to identify and remove the redundant information, only to reintroduce it in time for the phonetics.


Radical underspecification is not so learner-friendly. In particular, there is considerable indeterminacy over which feature value should be marked at the underlying level, and indeed which feature is to be selected in the case of `balanced mutual dependencies' (Harris and Lindsey 1995: 47, and see further below). While contrastive specification guarantees a single set of underlying forms per system, radical underspecification thus permits a variable, theoretically unlimited number of underlying systems for any set of surface forms. Archangeli (1988: 193) admits that `the learnability of [such] a system becomes quite a challenge', and argues that, although decisions may sometimes be made on language-specific grounds, frequently guidance from Universal Grammar will be needed. Radical underspecification therefore requires a directive theory of Universal Grammar: underlying forms are decided on universal grounds, and universal principles constrain the ordering of redundancy rules. It is interesting in this context that Optimality Theory, with its particularly strong conception of UG, nonetheless rejects underspecification.


There have been attempts to justify underspecification in psycholinguistic terms: for instance, Stemberger (1992) argues that radical underspecification is supported by speech error evidence, claiming that in tasks involving pairs of phonemes, `if one of the phonemes is underspecified relative to the other, there are more errors on the underspecified phoneme' (1992: 496). However, Stemberger's argument relies on his characterization of /ε/ as the maximally underspecified vowel phoneme of English: since [ε] appears rather late in child language, this conflicts with the usual hypothesis that underspecified vowels are acquired early. Similarly, Lahiri and Marslen-Wilson (1991) propose underspecified entries in the mental lexicon as a solution to the notorious problem of matching highly variable perceived forms to the appropriate underliers in speech recognition systems. The Cohort Model is a parallel information processing system, which assumes activation of all words in the mental lexicon beginning with the same sound sequence as the sensory input. As more input is heard, this cohort of eligible forms is continuously assessed, and mismatches trigger a fall in activation level for the affected candidates, until the best candidate is recognized. However, this model rules out late entry of candidates into the cohort; yet since onsets vary considerably in connected speech, the right candidate might be excluded initially, and only recognized relatively late in the procedure. Lahiri and Marslen-Wilson's Underspecified Cohort Model attempts a resolution by invoking underspecification, which would allow initial matching over a wider range of forms.
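The winnowing procedure of the Cohort Model described above can be sketched as follows. This is a minimal illustration using a tiny hypothetical lexicon and made-up penalty values, none of which come from the source; it also shows why late entry of candidates is ruled out: the initial cohort is fixed by the first heard segment.

```python
# A minimal sketch of the Cohort Model's winnowing procedure.
# Lexicon and penalty values are illustrative assumptions only.

LEXICON = ["hand", "ham", "hat", "handle", "cat"]

def recognise(heard):
    """Activate every word sharing the input's initial segment, then
    lower activation for each mismatch as more input is heard."""
    # Initial cohort is fixed by the first segment: late entry of
    # candidates (e.g. for a misheard onset) is ruled out.
    cohort = {w: 1.0 for w in LEXICON if w[0] == heard[0]}
    for i, seg in enumerate(heard):
        for word in cohort:
            if i >= len(word) or word[i] != seg:
                cohort[word] -= 0.5   # mismatch lowers activation
    for word in cohort:
        # Unheard residual material also counts against a candidate.
        cohort[word] -= 0.25 * max(0, len(word) - len(heard))
    # The best-matching surviving candidate is recognized.
    return max(cohort, key=cohort.get)

print(recognise("hand"))   # -> hand
```

Note that if the onset were misheard, `cat` could never join the cohort later, however well its remaining segments matched; this is precisely the problem the Underspecified Cohort Model is meant to solve.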


Lahiri and Marslen-Wilson (1991) hypothesise that a value specified underlyingly will match only the same value in the input; the opposite value will be a mismatch. An underspecified value will provide a better match for the unmarked surface value, but will also be a partial match for the marked surface value. Lahiri and Marslen-Wilson report a series of gating experiments, where subjects were asked to give word choices for heard stimuli, based on Bengali, where vowel nasalization is contrastive, with [+ nasal] the marked value, and English, where vowel nasalization is redundant. In both languages, oral vowels become nasalized by assimilation to a following nasal consonant. The results seem to support the underspecification hypothesis. In Bengali, almost no words with underlyingly nasal vowels were given in response to stimuli containing oral vowels. Responses with nasal vowels were initially given to inputs with surface nasal vowels, regardless of the presence or absence of a nasal consonant; however, subjects progressively became aware of the nasal consonant condition, and began matching only oral vowel responses in cases of assimilation. However, in English, subjects consistently interpreted vowel nasalization as signalling a following nasal consonant.
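The three-way matching hypothesis can be made concrete as a small scoring function. The numeric scores here are illustrative assumptions, not values from Lahiri and Marslen-Wilson; only the ordering (match > partial match > mismatch) reflects the hypothesis.

```python
# A rough scoring sketch of the matching hypothesis for one binary
# feature such as [nasal]. Score values are illustrative assumptions.

MATCH, PARTIAL, MISMATCH = 1.0, 0.5, 0.0

def match_score(lexical_value, surface_value):
    """lexical_value: '+', '-' or None (underspecified);
    surface_value: '+' (marked) or '-' (unmarked)."""
    if lexical_value is None:
        # Underspecified: full match with the unmarked surface value,
        # only a partial match with the marked one.
        return MATCH if surface_value == "-" else PARTIAL
    # A specified value matches only the same value in the input.
    return MATCH if lexical_value == surface_value else MISMATCH

# Bengali-style contrast: underlyingly [+nasal] vs. underspecified vowel.
print(match_score("+", "+"))   # 1.0: same value in the input
print(match_score("+", "-"))   # 0.0: opposite value is a mismatch
print(match_score(None, "+"))  # 0.5: underspecified partially matches
```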


Ehala (1992) provides a detailed critique of the Underspecified Cohort Model, focusing on three main problems. First, he argues that the degree of underspecification will either be incompatible with the spread of phonologically possible variants for each lexical item, or will not allow underlying forms to be kept distinct. For instance, Lahiri and Marslen-Wilson (1991) consider English hand in hand you [nʤ], hand me [mm] and hand care [ŋk]. Since both /h/ and /d/ are potentially deletable, they should be totally unspecified; and in view of its assimilatory behavior, /n/ also cannot be specified for place. The resulting underlier /æN/ will not be unique; but with a less radical version of underspecification, the underlying form will not be compatible with its full range of attested surface realizations. Secondly, Lahiri and Marslen-Wilson argue that marked information cannot be altered by phonological rule, since this would produce surface forms not matching their underliers. However, Ehala (1992) notes that neutralization processes potentially delink and hence effectively erase marked feature values, which are then substituted by later redundancy rules axiomatically supplying unmarked values. Thus, Lahiri and Marslen-Wilson predict that English disbar, disguise should not be recoverable since the underlyingly voiced consonant after [s] surfaces as voiceless; but listeners can understand these forms. This might be resolved by disregarding mismatches of only a single feature value; but since Lahiri and Marslen-Wilson introduce their Underspecified Cohort Model to deal with precisely such mismatches, underspecification would then lose its value.
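Ehala's first objection, that radical underspecification makes underliers non-distinct, can be sketched mechanically. The segment classes and the archiphoneme notation below are illustrative assumptions built on the hand example, not Ehala's own formalism.

```python
# A sketch of Ehala's non-distinctness worry: if deletable segments
# are left wholly unspecified and assimilating nasals placeless,
# different words collapse onto the same underlier. The segment
# classes and archiphoneme notation are illustrative assumptions.

DELETABLE = {"h", "d"}      # potentially absent on the surface
PLACELESS = {"n", "m"}      # nasals unspecified for place

def underspecify(word):
    """Build a radically underspecified underlier: drop deletable
    segments and collapse placeless nasals into an archiphoneme N."""
    out = []
    for seg in word:
        if seg in DELETABLE:
            continue
        out.append("N" if seg in PLACELESS else seg)
    return "".join(out)

# 'hand' and 'ham' receive the same underlier, cf. /aeN/ in the text:
print(underspecify("hand"), underspecify("ham"))   # aN aN
```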
In general, then, the incorporation of underspecification into word recognition is not particularly successful.


Perhaps most importantly, some of the main phonological predictions of underspecification theory seem difficult to maintain. For instance, Hualde (1991) notes that in radical underspecification, unmarked vowels, defined as those behaving asymmetrically, will be unspecified underlyingly, with the proposed empty vowel slot being filled by default feature values. However, Hualde argues that in the Arbizu dialect of Basque, suffixes beginning with an empty vowel slot, subsequently specified as [e], must be distinguished from those beginning with underlying /e/. Radical underspecification will enforce identity between these two classes, both starting out with an empty vowel, and will therefore lose this distinction. Similarly, McCarthy and Taub (1992) contend that even coronal underspecification, surely the best-known and apparently most robust example, is contentious: although many papers have claimed that coronal underspecification in English extends throughout the phonology, `It is ... remarkable that there is also a considerable body of evidence that coronals, even plain alveolars like t or n, must actually be specified for [coronal] in English phonology' (1992: 364). McCarthy and Taub provide nine such cases, many of which involve the conflict that plain alveolars must be seen as unspecified for [coronal] to explain their special phonological behavior, but also form a natural class with marked coronals like [ʃ θ], which can be unified only using [+ coronal]. For instance, American English prohibits initial coronal plus [ju], while the diphthong [au] can only be followed by a coronal, as in mouth(e), mouse, lout, gouge: but both restrictions hold regardless of whether the coronal is marked or unmarked. In short, `although [coronal] underspecification explains much about English phonology, it also encounters significant difficulties' (McCarthy and Taub 1992: 366).

Moving away from the language-specific, although radical underspecification is avowedly based on cross-linguistic considerations, notably relating to markedness and Universal Grammar, it may inhibit cross-system comparison. One of the major problems of early structuralist linguistics was the theoretical impossibility of equating or even comparing a given phoneme in one language with the `equivalent' phoneme in another: since members of a system are definable only in terms of the elements with which they contrast, and since two languages will have different systems of phonological oppositions, comparison between systems is strictly invalid. It seems likely that the adoption of radical underspecification will reintroduce or even exacerbate this difficulty, as the same surface segments will be underspecified in potentially very different ways according to the other elements in the system. Even within a single system, it is often unclear exactly what shape underspecified forms should take. Very frequently, issues of mutual dependency arise (Harris and Lindsey 1995, Mohanan 1991, Steriade 1995): for instance, if segment structure depends on syllable structure and vice versa, which should we regard as derived? Why is there general agreement that sonorants should be underlyingly unspecified for [voice], but not that voiceless segments should be unspecified for [sonorant]? And how do we decide the best way of distinguishing an underlyingly placeless vowel from no segment at all? As Steriade (1995: 135) notes, `the choice between marking an underlying null segment by using a stricture feature like [+ sonorant] or ... a place feature like [+ high] remains arbitrary. No credible principle will lead us to the desired conclusion'.


It is all too easy to bandy about apparent justifications like naturalness, simplicity and predictability without exploring them in depth. As Mohanan (1991: 300) comments, `For more than three decades, the assumption that underlying representations may not contain predictable information ... has been accepted as an unquestioned dogma in generative phonology'; but if we follow Mohanan and address this dogma directly, we find two perhaps surprising facts. First, `underspecification does not directly follow from predictability. It follows only if we subscribe to some further principle such as Lexical Minimality' (Steriade 1995: 121). As Goldsmith (1995b: 17) remarks, underspecification is not the only, or even the obvious way of encoding simplicity either. And secondly, the definition of predictability underlying underspecification theory is not the one usually found in other sciences, where it straightforwardly means the opposite of unpredictable (Mohanan 1991: 288):
When one tosses a coin, the result is random or unpredictable because we cannot tell whether the outcome will be heads or tails. Suppose we use the following convention: if it is heads, we write [+ head], and if it is tails, we write nothing. Since there is now a `rule' that interprets the absence of any specification as tails, Archangeli's notion of predictability would imply that tails is predictable, but heads is not! Clearly, we must not confuse rules that interpret linguistic notation with rules that predict what can be observed in linguistic phenomena.
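Mohanan's coin-toss point can be restated as a tiny simulation: a notational convention that writes nothing for tails round-trips perfectly, yet the next toss remains unpredictable. The labels come from the quoted passage; the code itself is an illustrative sketch.

```python
# Mohanan's coin-toss example as code: interpreting absent notation
# as 'tails' does not make tails predictable. Illustrative sketch only.
import random

def notate(outcome):
    # Convention: heads is written [+head]; tails is written as nothing.
    return "[+head]" if outcome == "heads" else ""

def interpret(notation):
    # The 'rule' that reads the absence of any specification as tails.
    return "heads" if notation == "[+head]" else "tails"

# The notation round-trips perfectly for both outcomes...
for outcome in ("heads", "tails"):
    assert interpret(notate(outcome)) == outcome

# ...but the outcome of the next toss is still unpredictable:
next_toss = random.choice(["heads", "tails"])
```

The convention only interprets the notation; it predicts nothing about the world, which is exactly the confusion Mohanan warns against.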

We might hesitate over adopting a further principle like Lexical Minimality in view of the fact that many `predictable' features are in fact required in the phonology, as pointed out by McCarthy and Taub (1992) for [coronal] in English. Others, which would be supplied routinely by phonological or redundancy rules in underspecification theory, seem not to need specification at all: Keating (1988: 275) argues that `underspecification may persist into phonetic representations' in cases of phonetic transparency, for instance, where a segment like /h/ may incorporate purely transitional values for certain features, and may also allow neighboring segments to interact freely, notably in vowel-to-vowel coarticulation. Harris and Lindsey (1995) argue that these cases are compatible with their element theory, which assumes monovalent features and purely privative oppositions: indeed, a privative system will be significantly less powerful than an equipollent one, but will nonetheless predict strong asymmetries of the type originally used to motivate underspecification. For example, if [round] is a single-valued feature, we would expect roundness to participate in phonological operations like spreading, but `there is no way of expressing a complementary system in which ``absence-of-round'' is harmonically active' (Harris 1994: 93). Underspecification here is `trivial and permanent' (Steriade 1995: 157).


Of course, if underspecification is inherent and monovalent, many redundancy rules and structure-building operations will simply disappear. Mohanan (1991: 301) sees this as the right approach in any case: since he regards underspecification and default rules; structure-changing linking rules; and constraints and structure-changing rules as three implementational variants, and argues that we require constraints and structure-changing processes independently anyway, it follows that `structure building rules should be eliminated from segmental phonology'. This proposal is seconded by Steriade (1995). One might argue that structure-building rules are still necessary for prosodic purposes, and Mohanan's statement leaves the door open for this; but recent developments in LP may make the situation clearer here. Most notably, Giegerich's (in press) model of base-driven stratification rules out the pre-morphology cycle, on which structure-building applications of stress and syllabification have hitherto been located. In that case, underspecification might provide the only motivation for maintaining such structure-building operations, making the whole argument irreducibly circular.