
A comparison of LFG, MP, GASG and HPSG

In this section, I briefly compare some salient properties of four generative models: LFG (the framework of this dissertation), mainstream MP, GASG and HPSG. This general comparison is based to a great extent, but not exclusively, on Sections 1.2.1-1.2.6. I concentrate only on aspects that are relevant from the perspective of this dissertation. I keep the comparative discussion at a general level and defer the comparison of analytical details to the various stages of presenting my LFG account of the phenomena under investigation.

(A) Degree of modularity
a. LFG: very high
b. MP: moderate (GB: very high)
c. GASG: very moderate
d. HPSG: very moderate

I think these characterizations are straightforward. LFG is highly modular with all its representational levels. GB was similarly highly modular (in its own way), and MP has reduced this considerably. GASG and HPSG package different types of information into a considerably reduced number of modules.

(B) The basic architectural organizing principle of the theory
a. LFG: representational
b. MP: derivational
c. GASG: representational
d. HPSG: representational

MP still follows the original deep structure → surface structure derivational pattern. The other three theories are representational, and one of the basic differences among them is the degree of representational modularity, see (A).

(C) The locus of the treatment of morphological phenomena
a. LFG: lexicon
b. MP: syntax
c. GASG: syntax (strongly lexically driven)
d. HPSG: lexicon

LFG fundamentally subscribes to the Strong Lexicalist Hypothesis: it assumes that all morphological processes (whether inflectional or word-formational) take place in the lexical component. In the Chomskyan tradition (i) the Standard Theory handled both inflectional and derivational morphology in the syntax (and the phonology); (ii) GB accepted the Weak Lexicalist Hypothesis (inflection in the syntax, word formation in the lexicon); and (iii) in MP, morphology is, again, fully back in the syntax. Interestingly, although GASG claims that it has a Totally Lexicalist Morphology, I pointed out in the previous section (1.2.5) that, as far as I can see, the real locus of combining morphemes (whether inflectional or word-formational) is the syntax. However, the blueprint itself for handling morpheme combination in the syntax belongs to the lexicon. Thus, this is a kind of lexically driven morphology in the syntax. HPSG is much closer in spirit to LFG in this respect.

(D) Importance of phrase structure
a. LFG: high
b. MP: very high
c. GASG: n/a
d. HPSG: high

Functionally annotated phrase structures are at the heart of LFG’s syntax. In MP, phrase structure is even more important, because it has been designed to encode semantic types of information by means of specific functional projections, for instance aspect (AspP).

GASG strongly argues against phrase structure representation. As I pointed out in Section 1.2.6, phrase-structural representation has an important role in HPSG, just like in LFG.
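To illustrate what a functionally annotated phrase structure rule of the standard LFG type looks like, consider the following schematic sketch of my own (the rule format follows the general conventions of Kaplan & Bresnan 1982 and Falk 2001; it is not taken from a specific analysis in those works):

```latex
% A schematic LFG annotated c-structure rule: S rewrites as NP and VP.
% The annotation (up SUBJ) = down maps the NP to the SUBJ function in the
% f-structure of the mother node; up = down identifies the VP's f-structure
% with that of the S node (the VP is the functional head).
S \;\rightarrow\;
  \underset{(\uparrow\,\mathrm{SUBJ})\,=\,\downarrow}{\mathrm{NP}}
  \qquad
  \underset{\uparrow\,=\,\downarrow}{\mathrm{VP}}
```

Such annotations are what make LFG's phrase structures the input to f-structure, i.e. the mapping device between the two levels of representation compared in this section.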

(E) Nature of functional categories
a. LFG: highly constrained
b. MP: a wide variety
c. GASG: n/a
d. HPSG: highly constrained

LFG basically constrains the number of functional categories to three: D(P), I(P) and C(P), and even these need to be empirically justified by the existence of at least one word (free morpheme) unquestionably belonging to that category. HPSG is similar in spirit. By contrast, in MP bound morphemes, or even morphologically never realized features, can also head a variety of functional projections. In GASG there is no phrase structure; hence, there are no functional categories.

(F) Strict endocentricity
a. LFG: no
b. MP: yes
c. GASG: n/a
d. HPSG: no

LFG assumes that at the level of sentence structure exocentricity (S) and endocentricity (CP/IP/…) are part and parcel of the space for parametric variation across (and even within) languages. GASG declares that there is no need for phrase structure in the syntax: sentences are simply strings of words. From this it follows that this criterion is not applicable to GASG. However, I think that even GASG needs a symbol for sentences (although I have not seen any in the representations I am aware of), and, naturally, the most likely candidate is the S symbol. If this is the case, then the answer in (Fc) is no.

(G) Empty categories in syntax
a. LFG: no
b. MP: yes
c. GASG: no (?)
d. HPSG: no

Recent versions of LFG strongly reject the use of empty categories of any kind, although earlier versions of the theory did postulate empty categories in c-structure for the treatment of long-distance dependencies like WH-question formation; see Kaplan & Bresnan (1982), for instance. It is important to note, however, that even in these analyses no syntactic movement was assumed. Instead, unbounded metavariables were employed to encode the necessary filler–gap relation. Even so, this was a way of adapting the original transformational treatment. Later, Kaplan and Zaenen (1989) proposed to dispense with such an empty category approach by applying LFG’s functional uncertainty device. For a discussion and (further) arguments against empty categories in LFG, see Dalrymple et al. (2007). By contrast, empty categories are among the hallmarks of the GB/MP tradition. My understanding is that GASG is also “realistic” in this sense. However, as I pointed out in the previous section (1.2.5), some practitioners of GASG also employ the notion of a phonetically null morpheme. Consequently, I raised the question of how such elements are formally treated in this approach, given that morphemes are directly represented in syntax; hence the question mark in parentheses in (Gc).63 In some (earlier) versions of HPSG an empty category was assumed in the lexical representation for the treatment of long-distance dependencies, and it was involved in an HPSG-style filler–gap relation; see Pollard & Sag (1994), for instance. By contrast, Sag (2005), among others, proposes a traceless treatment. For a discussion, see Szécsényi (2009).
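To give a schematic illustration of the functional uncertainty device (a simplified sketch of my own, following the general regular-expression format introduced by Kaplan & Zaenen 1989, not a specific analysis from that paper): a fronted discourse function can be equated with a grammatical function embedded arbitrarily deeply inside complement clauses, so no empty category is needed in c-structure:

```latex
% Functional uncertainty (schematic): the TOPIC function is identified
% with some grammatical function (GF) reachable through any number of
% COMP/XCOMP embeddings; the Kleene star expresses the unbounded depth
% of the long-distance dependency.
(\uparrow\,\mathrm{TOPIC}) \;=\;
  (\uparrow\,\{\mathrm{COMP} \mid \mathrm{XCOMP}\}^{*}\;\mathrm{GF})
```

The filler–gap relation is thus stated entirely at the level of f-structure, which is what makes the traceless c-structure treatment possible.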

(H) Implementability
a. LFG: very strong
b. MP: moderate
c. GASG: strong
d. HPSG: very strong

The characterizations in (Ha-c) are my current understanding of the implementability potentials of the three frameworks in general. As regards the implementation of analyses of the relevant Hungarian phenomena, there have been remarkable results in LFG, and in this dissertation I plan to contribute to these results considerably. There has been some (limited) implementation in GASG, and no implementation in MP that I am aware of. As I fully agree with the conviction of a great number of linguists that the proof of the generative theoretical pudding is in the implementational eating, in the future I would be very interested in comparing my LFG-theoretical and LFG-implementational results with similar results in other generative frameworks. The attested implementability of HPSG is roughly at the same level as that of LFG.

(I) Autonomy of Syntax and related issues

Nőthig & Alberti (2014) also elaborate on issues pertaining to one of the central principles of the Chomskyan mainstream from the very beginning: the Autonomy of Syntax. They admit its positive and seminal influence on generative syntactic research in the first period. They continue by quoting Surányi (2010), who points out several theory-internal problems, at both descriptive and explanatory levels, with GB and the “cartographic” version of MP pertaining to this autonomy principle. Then they write this.

63 Gábor Alberti (p.c., February 2016) made the following comments on this issue. “A eALIS valóban realisztikus kíván lenni a tekintetben, hogy nem feltételez üres szavakat/morfémákat. Ha ilyen felbukkan egy implementációban, az átmeneti megoldás. A fókuszosság például realizálódhat egyes nyelvekben testes morféma alakjában, míg más nyelvekben nem üres morfémát keresünk (ami "valahol" ott lapulna láthatatlanul és átrendezné maga körül a szórendet), hanem egy erős kívánalom domináns érvényesülését kell kimutatni, illetve sajátos intonációs relációt. NB: az én, téged stb. névmások a magyarban operátorviszony jelenlétére utalnak, nem személyre, mert az a ragozásból adódik.” [Indeed, eALIS intends to be realistic in the sense of not assuming empty words/morphemes. If such a thing emerges in an implementation, this is a temporary solution. For instance, in certain languages focusing can be expressed by overt morphemes, while in other languages we are not after an empty morpheme (which should invisibly hide “somewhere” and rearrange word order around itself). Instead, we need to discern a dominant manifestation of a strong requirement and/or a special intonational relation. NB: pronouns like én ‘I.NOM’, téged ‘you.SG.ACC’, etc. signal the presence of operator relationships in Hungarian and not persons, as person encoding is provided by inflection – my translation, TL.]

The 2000’s is the age of the “anticartographic” Minimalist Program (see e.g. van Craenenbroeck 2009), whose decisive property is that many features/positions are held to be “methodologically unsound” (Surányi 2010) and hence to be avoided in the name of some Semantic Economy,64 which can be captured in the form of a generalization of Last Resort: reference to some semantic function is sufficient for move. The earlier syntactic disguise is not required any more, because it counts as a superfluous cost, which is against the minimalist spirit. What is to be regarded as the most economical grammar in the minimalist spirit is practically the approach of GASG, in which both Merge and Move are dispensed with.

The intuition behind Semantic Economy, thus, already almost coincides with that behind our semantics-based approach. We consider it obvious that Generalized Last Resort is in explicit conflict with the Autonomous Syntax Principle, or at least makes it vacuous. We think that the only reason why ASP has not been abandoned is the anxiety about loss of prestige, the loss of what seems to be the cornerstone of the enormous building of generative linguistics.

What we have intended to prove is that there is no danger of this loss. We can declare that semantics is prior to syntax in the sense that it is syntax that should be based on semantics, and not vice versa. All descriptively or explanatorily adequate results due the generative paradigm, however, can be quite easily reformulated in our semantics-based approach (2014: 121).

Next, Nőthig & Alberti (2014) discuss Bobaljik & Wurmbrand’s (2002) architectural proposal for MP. Their most important quotes from this proposal are these:

… different PF representations (word orders, in the general case) compete for the realization of a fixed LF, and not the other way around […] this model stands in conflict with common proposals that inherit (sometimes tacitly) the GB ordering of covert operations after Spell-Out (2014: 122).

They compare the two models in the following way, claiming that the two are identical in spirit, but theirs is the simplest and the most radical realization of Bobaljik & Wurmbrand’s (2002) model.65

64 This is their quote from Surányi (2010). “Accordingly, pairs of agreeing uninterpretable and interpretable abstract morphosyntactic features like [top(ic)] and [foc(us)] have been posited by analysts of pertinent constructions in different languages, conforming in this manner to the working hypothesis of the MP that all (displacement) operations are triggered. But such an implementation of the notion of trigger is methodologically unsound, since, while it applies the same mechanism of trigger throughout, it substantially weakens the predictive power of the hypothesis itself (the general prediction being that all movements are triggered), to the degree that makes the argumentation almost circular” (2010: 17).

65 It is interesting to note that Brody (1994) outlined a very similar conception of the architecture of generative grammar.

“We are proposing then a theory where there is only a single syntactic interface level, a level that both the lexicon and the conceptual systems have access to. We shall call this level of representation LF, keeping in mind that a different status is now attributed to this level. D-structure can now be thought of as a level properly included in LF, or abstracted from LF in a particular way. LF still needs to be related to the interface of sensory and motor systems, PF. Let us call the theory incorporating these assumptions the Lexico-Logical Form (LLF) theory. […] S-structure […] cannot be an intermediate point on the D-structure–LF derivation since such a derivation is not part of the grammar. Thus if S-structure is a noninterface level of the grammar, it can now only be an intermediate level on the LF–PF mapping, the only derivation that UG contains. Schematically then we have a theory like (i)” (1994: 31-32).

(i) Lexico-Logical Form Theory

    LF *_ _ _ _ _* PF
            |
       S-structure

(50) Bobaljik & Wurmbrand (2002): … → LF → … → PF
     Nőthig & Alberti (2014): LF → PF

Finally, Nőthig & Alberti (2014) address the variation issue from the perspective of Optimality Theory (OT). First they cite Newson (1994), who points out what fundamental problems free word order permutations in Hungarian (referred to as optionality) pose for OT.

They make the following comparison.

In the OT framework, thus, capturing the so frequent optionality in languages requires the double cost of introducing a meta-level of ordering of conditions and weakening an axiom of OT concerning UG, according to which all constraints are of relevance to all languages, and it is their ranking that is the only source of linguistic variation. In our approach, however, optionality is due to a straightforward fact: if ranks belonging to the demands are characterized by a very small set of numbers, these numbers will often coincide. Lack of optionality, thus, would require a thoroughgoing explanation (2014: 125).

Nőthig & Alberti (2014) also cite Heck et al. (2002) about the status of input in OT syntax.

“All this amounts to the same conclusion: Inputs can be dispensed with in syntax but not in phonology because syntax is information preserving and phonology is not” (2002: 394).

Nőthig & Alberti (2014) make the following comment. “We agree with the authors that there is an inevitable inherent redundancy between the input and the LF of the output in standard OT syntax, which can be ceased, among others, by deleting the input” (2014: 126). Then they compare the two models in the following way.

(51) a. Heck et al. (2002): INPUT = ? → CANDIDATE SET → OPTIMAL CANDIDATE → LF
     b. Nőthig & Alberti (2014): INPUT = LF → CANDIDATE SET → OPTIMAL CANDIDATE

They conclude that the solution in (51a) “[…] ‘masks a simpler LF-to-PF account’, because this mechanism involves backtracking: the set of competing candidates are determined on the basis of their ‘would-be’ interpretations, encoded in some (quite obscure) way in the competing candidates themselves, which are ‘information preserving’ syntactic trees” (2014: 126).

Alberti et al. (2015) also discuss the issue of motivation and the autonomy of syntax. In addition to mentioning Surányi’s (2010) criticism of GB and “cartographic” MP, and Bobaljik & Wurmbrand’s (2012) LF-First proposal (see above), they elaborate on a further alternative direction: Survive Minimalism; see Stroik (1999) and Putnam & Stroik (2009, 2010, 2011, 2013).

They welcome the following goals of Survive Minimalism: it aims at eliminating the feature-driven movement mechanism universally accepted in mainstream MP, at maintaining empirical coverage, at dispensing with the redundancy of “mixed” (multi)representational/derivational models, and at constraining excessive generative power by developing a purely derivational syntactic model. It does away with the principle of the Autonomy of Syntax as well as with interface-incompatible features like EPP and EDGE in narrow syntax.66 It posits the lexicon and narrow syntax at the cross-section of the interfaces.

From this it straightforwardly follows that every feature must be inherently compatible with these components, and thus it must directly contribute to the interpretation of a sentence by dint of its semantic or phonetic/articulatory content. For further details, see Alberti et al. (2015).

66 “[…] a syntactic representation is a semantic representation and, therefore, […] syntax cannot be seen as independent of the semantics. But this does not mean that the C-I system can be reduced to syntax” (Putnam & Stroik 2013: 13).

From my perspective in this section, the most important point is that there are several fundamental parallels between the “philosophies” of Survive Minimalism and GASG: no autonomy of syntax, motivation, the centrality of the lexicon, and the rejection of uninterpretable features. Obviously, one of the main differences is that GASG is much more radically (i.e. “totally”) lexicalist.67

Before I spell out my LFG-specific view of these issues of semantics, syntax and input, let me very schematically show how Alberti et al. (2015) envisage what they call the (“motivated”) directionality of communication.

(52) a. a thought emerges →
b. the speaker decides to use language (pragmatics) →
c. they appropriately combine parts of the relevant meaning content (semantics) →
d. they realize the formal (syntactic and morphological) properties of the lexical items →
e. they turn (52d) into vibration in the air or a picture of a series of letters (phonology) →
d′. the hearer identifies a structured list of morphemes and words →
c′. on the basis of this, they infer the combined meaning content →
b′. they find out what the speaker’s purpose has been by sending that complex linguistic sign →
a′. they reconstruct the speaker’s thought

In the light of this view, it seems obvious to me that Nőthig & Alberti’s (2014) rejection of the autonomy of syntax and their claim that syntax should be based on semantics are to be posited in the following (simplified) larger picture of the process of communication.

SPEAKER: semantics → syntax & morphology
HEARER: syntax & morphology → semantics

Figure 11. Simplified model of communication

Essentially, if we want to model communication from the speaker’s perspective, we need the semantics → syntax (and morphology) sequence; whereas from the hearer’s perspective the order is just the opposite: syntax (and morphology) → semantics.

Let me now show how I think the architecture of LFG I presented in Section 1.2.1 on the basis of Falk (2001) can accommodate and model these crucial aspects of communication. It is to be emphasized at this point already that the implemented version of LFG, the XLE68 grammar, to be introduced in the next section (1.3), straightforwardly models the fundamental processes in Figure 11. It has a parser and a generator component. The parser fundamentally proceeds from syntax/morphology to semantics, and the generator follows the opposite path.69

67 To the best of my knowledge, in these two frameworks there are no theoretically developed and/or implemented analyses of the Hungarian phenomena I am investigating in this dissertation. When such analyses are proposed, I will consider it an important task to compare them with my LFG approach developed here.

68 Xerox Linguistic Environment.