Ways of measuring productivity
Author: Rochelle Lieber
Book or Source: Introducing Morphology
Page and Part: 66-4

We have seen that the productivity of lexeme formation processes depends on a variety of factors, including restrictions on possible bases, usefulness of the words formed, and the transparency of the process. Looking at these factors can give us some sense of how productive a process might be, but can we do better and actually measure productivity? Is it possible to compare the productivity of different processes? If so, how might we go about making such measurements?

An obvious first strategy would be simply to count the number of words formed with a given process that are listed in a dictionary. It’s not hard, though, to think of reasons why counting items in a dictionary wouldn’t be an accurate way of estimating productivity. For one thing, counting items that are already in the dictionary doesn’t really tell us anything about how many new words might be created with a lexeme formation process, and it’s the possibility of creating new forms that’s most important in making processes productive. Further, the most productive lexeme formation processes are ones that are phonologically and semantically transparent. If the words resulting from these processes are perfectly transparent in meaning, then it’s unlikely that dictionaries will need to record them! On the other hand, less productive processes, as we’ve seen, frequently have outputs that are less transparent (more lexicalized), and therefore have more need to be listed in the dictionary. So simple counting might give a paradoxical result: less productive processes would be represented by more entries in the dictionary than more productive processes!

Morphologists have therefore tried hard to come up with other ways of measuring productivity. One suggestion (Aronoff 1976) was to make a ratio of the number of actual words formed with an affix to the number of bases to which that affix could potentially attach.
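
To make the ratio concrete, Aronoff's index can be stated as a simple formula (the notation here is ours, not Aronoff's): P = V / S, where V is the number of actually attested words formed with the affix and S is the number of bases to which the affix could in principle attach. For example, if -esque were attested on 50 names out of 1,000 eligible ones, its index would be 50/1000 = 0.05; a fully productive affix would score closer to 1.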

Most morphologists see several problems with Aronoff’s way of measuring productivity. First, all of the problems we mentioned above with counting items in a dictionary (or corpus) apply to this measure as well. In addition, it’s not clear that we can ever know for sure how many potential bases there are for a given lexeme formation process. If -esque can attach to any name (or at least to any name with two or more syllables), how would we ever know that we’d amassed all possible names?

A somewhat more sophisticated – but still not perfect – measure of productivity proposed by Baayen (1989) capitalizes on what we know about the token frequency of derived words. Remember from chapter 1 the distinction between types and tokens: if we’re counting types in a corpus or language sample we look for each different word and count it once, no matter how many times it appears, but if we’re looking at tokens we count up all the separate occurrences of that word in a particular corpus. The number of separate occurrences of a word in the corpus is the token frequency of that word.
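
As a concrete illustration, here is a minimal Python sketch of the type/token distinction (the toy corpus is invented for the example):

from collections import Counter

# Toy corpus: "happiness" occurs twice, so it contributes
# two tokens but only one type.
corpus = ["happiness", "sadness", "happiness", "curiosity", "goodness"]

token_count = len(corpus)       # tokens: every occurrence counts -> 5
type_counts = Counter(corpus)   # one entry per distinct word form
type_count = len(type_counts)   # types: distinct words -> 4

print(token_count)               # 5
print(type_count)                # 4
print(type_counts["happiness"])  # token frequency of "happiness" -> 2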

An important observation that has been made about lexeme formation processes is that the less productive they are, the less transparent the words formed by those processes, and the less transparent the words, the higher their mean token frequency in a corpus. In other words, words formed with less productive suffixes are often more lexicalized in meaning and will often display many tokens in a corpus. The more productive a process is, the more new words it will give rise to and the more chance that these items will occur in a corpus with a very low token frequency, sometimes only once. One way of measuring the productivity of specific lexeme formation processes is to capitalize on this observation. To do so, we take a corpus, count up all tokens of all words formed with a particular affix, and then see how many of those words occur only once in the corpus (a type with a token frequency of one in a corpus is called a hapax legomenon, or sometimes just a hapax). The ratio of hapaxes to all tokens with that affix tells us something about productivity. Using this measure confirms, for example, our intuition that -ness is more productive than -ity (Baayen and Lieber 1991).
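
To show how the hapax-based measure might be computed in practice, here is a small Python sketch (the corpus is a toy example, and the suffix test is a crude string match standing in for real morphological analysis, so this is only a sketch of the idea, not Baayen's actual procedure):

from collections import Counter

def hapax_productivity(tokens, suffix):
    # Restrict attention to tokens formed with the given suffix.
    hits = [w for w in tokens if w.endswith(suffix)]
    freqs = Counter(hits)
    n_tokens = len(hits)                                  # N: all tokens with the affix
    n_hapaxes = sum(1 for c in freqs.values() if c == 1)  # n1: types occurring once
    return n_hapaxes / n_tokens if n_tokens else 0.0      # P = n1 / N

corpus = ["happiness", "sadness", "happiness", "sparseness",
          "sanity", "sanity", "sanity", "purity", "scarcity"]

print(hapax_productivity(corpus, "ness"))  # 2 hapaxes / 4 tokens = 0.5
print(hapax_productivity(corpus, "ity"))   # 2 hapaxes / 5 tokens = 0.4

On this toy corpus -ness scores higher than -ity, mirroring the corpus-based finding of Baayen and Lieber (1991) that -ness is the more productive suffix.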