

2.4.2 General assumptions about lexicalization

As pointed out earlier, non-terminal spellout is compatible only with post-syntactic vocabulary insertion, and Nanosyntax does indeed operate with late insertion. It is assumed that syntax manipulates abstract features that have no phonological information whatsoever. Insertion of vocabulary items operates on already existing structures.

Nanosyntax is also a non-lexicalist model. It assumes that language has only one generative system, which takes care of both syntax and morphology (a view also known as ‘syntax all the way down’). As there is no generative lexicon, words are assembled in syntax, in the same module as phrases and clauses.

Neither of the assumptions discussed so far is original or radical; both are also shared by Distributed Morphology. Nanosyntax, however, breaks with DM in how syntax-morphology mismatches should be handled. DM has a suite of post-syntactic operations: a morphology module on the PF-branch. Morphology adapts the output of syntax to the vocabulary of the lexicon: if syntax yields a tree that cannot be spelled out, morphology comes into the picture to fix the situation. Morphology covers a variety of PF-operations such as Fusion, Fission, Impoverishment/Obliteration, readjustment rules, insertion of dissociated features/morphemes and PF-movement/Morphological merger (of two kinds: Lowering happens before, while Local Dislocation happens after vocabulary insertion). I will not be concerned with the technicalities of these operations here, though later on I will make reference to some of them at appropriate points for comparison with Nanosyntax. For discussion of DM’s morphological operations, I refer the reader to Halle and Marantz (1993, 1994); Embick (1998); Harley and Noyer (1999); Embick and Noyer (2001, 2007); Arregi and Nevins (2008) and Siddiqi (2009).

Nanosyntax has no morphological component, or post-syntactic operations of any kind (apart from vocabulary insertion). If syntax yields a tree that cannot be lexicalized then that particular structure simply doesn’t have a well-formed spellout. DM’s Fusion and Fission fall out as a direct consequence of lexical insertion (to be explained later); the other morphological operations have no equivalents and cannot be rendered in Nanosyntax. This kind of architecture is more restricted than DM because it dispenses with a range of operations and has fewer modules. Consequently, the viability of this theory is well worth pursuing.

One of the basic assumptions of Nanosyntax is that every syntactic feature of a tree-representation must receive a spellout. This principle, called the Exhaustive Lexicalization Principle, is detailed in Fábregas (2007).

(18) Exhaustive Lexicalization Principle:

Every syntactic feature must be lexicalized. (Fábregas, 2007, p. 167.)

This principle requires every feature in the output of syntax to be associated to some phonology, but crucially, it does not require that phonology to be overt. That is, features are allowed to remain inaudible as long as they are realized by a phonologically null element. Note that the principle does not regulate the distribution of empty categories; that must be taken care of independently.
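
For concreteness, the effect of (18) can also be rendered as a small, purely illustrative sketch; the function and item names below are my own assumptions and are not part of any existing Nanosyntax formalization. The check succeeds as long as every feature in the syntactic output is covered by some vocabulary item, even a phonologically null one.

    # Illustrative sketch only: checks whether a given spellout satisfies the
    # Exhaustive Lexicalization Principle in (18). Phonologically null items
    # (phonology == "") still count as realizing their features.
    def is_exhaustively_lexicalized(syntactic_features, spellout):
        realized = set()
        for item in spellout:
            realized |= set(item["features"])
        return set(syntactic_features) <= realized

    items = [{"phonology": "dog", "features": {"N"}},
             {"phonology": "", "features": {"Num"}}]   # silent number morpheme
    print(is_exhaustively_lexicalized({"N", "Num"}, items))       # True
    print(is_exhaustively_lexicalized({"N", "Num", "D"}, items))  # False: D has no spellout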

The Exhaustive Lexicalization Principle has a number of parallels in other frameworks. Bonet (1991), for instance, argues that the role of impoverishment is to delete the features that would receive no spellout if they remained in the representation. In essence, its role is to create representations that can be exhaustively lexicalized. This idea has been taken up in the subsequent DM literature. The crucial difference between (18) and Bonet’s proposal is that the Exhaustive Lexicalization Principle places a requirement on the output of syntax, while Bonet places the same requirement on the output of morphology.

In Borer’s (2005) exo-skeletal theory, functional heads are open values that can be assigned range in various ways. Every open value introduced into syntax must be assigned range in one way or another, or else the derivation does not converge. This obligatory range assignment is reminiscent of the Exhaustive Lexicalization Principle, but of course cannot be equated with it: range assignment can be brought about by insertion of a head, by specifier-head agreement, or even by a long-distance relationship, and thus does not call for phonological material in the head, whereas the Exhaustive Lexicalization Principle requires precisely this.

Newson (2010) explicitly claims that every conceptual unit (basic building block of syntax) in the syntactic output must be lexicalized. This is practically the Exhaustive Lexicalization Principle.12

12There is a caveat, though. If the optimal output violates some parse constraint, there will be input elements missing from the output. That is, an equivalent of a deletion process can take place between the input and the optimal order. However, once the optimal order is determined, all conceptual units contained in it must be lexicalized.

Once adopted, the Exhaustive Lexicalization Principle has far-reaching consequences. It automatically entails that Nanosyntax does not have impoverishment or obliteration rules. In fact, Nanosyntax could not have such rules even if it had a morphological component. Impoverishment and obliteration rules are morphological rules in DM that delete syntactic features and nodes after syntax and before vocabulary insertion. As a result of these rules, a number of features/terminals are removed from the syntactic representation before they could get a chance to be spelled out.

This means that some features are present in the output of syntax but they are not realized by any phonology. This is incompatible with the Exhaustive Lexicalization Principle.

Halle and Marantz (1994) identify the three key features of DM as late insertion, syntax all the way down and underspecification of lexical items. We have seen that Nanosyntax shares the first two assumptions. The reader has no doubt inferred by now that this is not the case with underspecification. Underspecified lexical items are widely used in DM. The idea is that if there is no lexical item that would be a perfect match to the feature content of a terminal, then the lexical item that can realize the largest subset of the relevant features is chosen for insertion. This is known as the Subset Principle.

(19) The Subset Principle

The phonological exponent of a Vocabulary item is inserted into a morpheme . . . if the item matches all or a subset of the grammatical features specified in the terminal. Insertion does not take place if the Vocabulary item contains features not present in the morpheme.

Where several Vocabulary items meet the conditions for insertion, the item matching the greatest number of features specified in the terminal morpheme must be chosen.

(Halle, 1997)

The Subset Principle allows lexical items to spell out terminals even if they are underspecified for some features of that terminal. Underspecification is the standard treatment of syncretisms in DM. If a lexical item is compatible with different values of a feature, say, singular vs. plural for number, nominative vs. accusative for case, then the assumption is that it is underspecified for that feature.
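
The selection procedure in (19) can be sketched as follows; this is a simplified, hypothetical rendering of DM-style insertion, not code from any actual DM implementation, and the vocabulary items are invented for illustration.

    # Hypothetical sketch of insertion under the Subset Principle (19):
    # only items with no extra features compete, and the item matching
    # the greatest number of the terminal's features is chosen.
    def subset_insert(terminal_features, vocabulary):
        candidates = [v for v in vocabulary
                      if set(v["features"]) <= set(terminal_features)]
        if not candidates:
            return None
        return max(candidates,
                   key=lambda v: len(set(v["features"]) & set(terminal_features)))

    vocabulary = [
        {"phon": "-s",  "features": {"pl"}},          # underspecified for case
        {"phon": "-en", "features": {"pl", "acc"}},
    ]
    print(subset_insert({"pl", "nom"}, vocabulary)["phon"])  # "-s": "-en" would add acc
    print(subset_insert({"pl", "acc"}, vocabulary)["phon"])  # "-en": matches more features

In this invented paradigm the underspecified item -s realizes the plural regardless of case unless a more specific competitor is available, which is how underspecification captures syncretism in DM.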

From the Exhaustive Lexicalization Principle, it follows that Nanosyntax does not share the assumption of underspecified lexical items and does not adopt the Subset Principle. Underspecification and the application of the Subset Principle leave certain features of the syntactic representation without a spellout, and (18) does not allow this.

But if Nanosyntax does not use the Subset Principle and underspecification, then what does it do with nodes for which there is no perfectly matching vocabulary item? There is only one logically possible answer here. If every syntactic feature must be spelled out and there is no perfectly matching lexical item, then an overspecified lexical item is chosen for insertion. This principle, called the Superset Principle, was first proposed by Michal Starke in unpublished work, and was formulated in Caha (2007).13

(20) The Superset Principle (informal)

A lexical item can spell out syntactic structures which are smaller than that lexical item.

(from the Nanosyntax glossary at http://nanosyntax.auf.net/glossary.html)

Basically, the difference between the Subset and the Superset Principles boils down to this. The Superset Principle allows the features of lexical entries to be ignored but does not allow features in the syntax to be ignored. The Subset Principle, on the other hand, allows features in syntax to be ignored, but it does not allow the features of lexical entries to be ignored. Intuitively, the features in syntax are more important than the features in lexical items, as lexical items merely serve as ‘clothing’ on the syntactic structure. As pointed out by Michal Starke, this intuition can be straightforwardly cashed out only with the Superset Principle. For a detailed comparison of the Subset and the Superset Principles, I refer the reader to Caha (2007). In the next section, where I discuss competition between lexical items in Nanosyntax in general, I will come back to the Superset Principle and show how it favours overspecification over underspecification and captures patterns of syncretism.
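
The contrast just described can be stated very compactly. The following sketch abstracts away from the ordering of features (taken up in (21)-(23) below) and treats a node as a bare feature set; all identifiers are invented for illustration.

    # Hypothetical sketch of the two matching conditions over bare feature sets.
    def subset_ok(item_features, node_features):
        # Subset Principle: features of syntax may go unrealized,
        # but the item may not carry features absent from the node.
        return set(item_features) <= set(node_features)

    def superset_ok(item_features, node_features):
        # Superset Principle: the item's surplus features may be ignored,
        # but every feature of the node must be realized.
        return set(item_features) >= set(node_features)

    node = {"A", "B"}
    item = {"A", "B", "C"}              # overspecified lexical entry
    print(subset_ok(item, node))        # False: C is not present in the node
    print(superset_ok(item, node))      # True: the item's C is simply ignored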


13See Newson (2010) for the use of overspecified lexical items and the Superset Principle in Alignment Syntax.

The Superset Principle thus allows a lexical item to spell out fewer features in a particular tree than it possibly could. Unused features, i.e. features that a lexical entry could spell out but does not in a particular syntactic structure, are said to be ‘Underassociated’ (term from Ramchand, 2008b). Recent work in Nanosyntax has shown that while Underassociation is widely available, some lexical items have idiosyncratic specifications that prevent them from Underassociating particular features of theirs (Svenonius, 2009; Starke, 2011). In other words, these lexical items must spell out these particular features under all circumstances. Finding out which lexical items have such restrictions and for which feature(s) is an empirical task. In Chapters 4 and 9 I will argue that Hungarian pronouns have this kind of restriction: they spell out both low and high features in xNP, and none of them can be Underassociated.

Some clarification about Underassociation is in order before we proceed. In the present framework, lexical items are associated to three kinds of information in the lexicon: i) syntactic-categorial (the category features or chunk of structure they spell out), ii) phonological (the phonological shape of the spellout) and iii) lexical-conceptual (semantic information independent of category features, real-world knowledge). Syntactic-categorial information is what distinguishes the verb sleep from the noun sleep, for instance. The lexical-conceptual information associated with these lexical items is the same, but they are used to spell out different syntactic categories. Lexical-conceptual information is what distinguishes the noun sleep from the nouns dream or table. Underassociation applies only to syntactic-categorial information. That is, a lexical item can spell out all or only a subset of the syntactic features it could, but the lexical-conceptual meaning associated to the lexical item is constant.

Suppose that a lexical item may spell out three syntactic features, A, B and C, and the lexical-conceptual specification of this item can be characterized as xyz. In a structure where this item spells out all of A, B and C, its semantic contribution is A+B+C+xyz. If this item Underassociates its A feature, then its semantic contribution is B+C+xyz. If this item Underassociates its A and B features, then its semantic contribution is C+xyz. The meaning contribution of a lexical item in a particular structure thus depends on both the amount of structure it spells out and the lexical-conceptual information associated to that item. This way of deriving a lexical item’s semantic contribution is a perfect fit for the syntax-semantics mapping I assume (cf. Chapter 1), because it is fully compositional.
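
The calculation just described is simple enough to be spelled out schematically; the representation of meanings as strings in the sketch below is of course only a convenient stand-in, and the names are my own.

    # Illustrative only: meaning = category features actually spelled out
    # plus the constant lexical-conceptual content of the item.
    def semantic_contribution(spelled_out_features, lexical_conceptual):
        return "+".join(list(spelled_out_features) + [lexical_conceptual])

    ITEM_FEATURES = ["A", "B", "C"]   # features the item could spell out
    CONCEPT = "xyz"                   # its lexical-conceptual specification

    print(semantic_contribution(ITEM_FEATURES, CONCEPT))      # A+B+C+xyz
    print(semantic_contribution(ITEM_FEATURES[1:], CONCEPT))  # B+C+xyz (A Underassociated)
    print(semantic_contribution(ITEM_FEATURES[2:], CONCEPT))  # C+xyz (A and B Underassociated)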

It is obvious at this point that the Superset Principle and Underassociation yield a restrictive theory of polysemy. A lexical item that can spell out multiple terminals and is able to Underassociate some of its category features will always have polysemous uses. The example I have just discussed shows how: polysemy that arises via the Superset Principle yields multiple related meanings that stand in a superset-subset relationship to one another. The Superset Principle also predicts that morphemes polysemous in this way appear at different positions in the functional sequence. If f-seq defines the order of terminals as A > B > C, and the previously mentioned lexical item spells out A+B+C, then it linearizes in the A position. When it spells out B+C, it is not associated to the A position any more. Instead, it linearizes in B.
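
Under the simplifying assumption that an item surfaces in the position of the highest feature it actually spells out, this prediction can also be sketched schematically; the function below is hypothetical and only restates the point made in prose.

    # Illustrative sketch: the item linearizes at the highest f-seq position
    # among the features it actually realizes (f-seq: A > B > C).
    FSEQ = ["A", "B", "C"]

    def linearization_position(spelled_out_features):
        for position in FSEQ:
            if position in spelled_out_features:
                return position
        return None

    print(linearization_position({"A", "B", "C"}))  # "A": item sits in the A position
    print(linearization_position({"B", "C"}))       # "B": A Underassociated, item sits lower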

The last general assumption about lexicalization in Nanosyntax is that features in the lexical entries are arranged in a hierarchical relationship. In other words, the features in the lexical entries are ordered. Spellout is a procedure that matches not only the featural information, but also the hierarchical (i.e. ordering) information between lexical entries and syntactic structures. For a successful spellout, both need to match. That is, features A and B in a syntactic representation like (21) can only be spelled out by a lexical entry like (22). (23) is not a good match because the features are not in the correct order.

(21) structure to be spelled out:
A > B > C

(22) lexical entry 1:
{A > B}

(23) lexical entry 2:
{B > A}

Spellout of syntactic structures goes bottom-up, and it consists in matching the hierarchically ordered features in the lexicon to the hierarchy of features built by narrow syntax.
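
A final illustrative sketch, abstracting away from the cyclic bottom-up procedure itself, shows the ordered matching at work in (21)-(23); the identifiers are invented and the sketch only illustrates that both the features and their order must line up.

    # Hypothetical sketch: a lexical entry matches only if its features AND
    # their hierarchical order line up with the structure built by syntax.
    def matches(entry_order, structure_order):
        return list(structure_order[:len(entry_order)]) == list(entry_order)

    structure = ["A", "B", "C"]   # (21): A > B > C
    entry1 = ["A", "B"]           # (22): {A > B}
    entry2 = ["B", "A"]           # (23): {B > A}

    print(matches(entry1, structure))  # True: same features, same order
    print(matches(entry2, structure))  # False: same features, wrong order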
