
On the realism-conceptualism debate about the ontology of linguistic objects: theoretical and epistemological consequences.

2.2 The Realist Enterprise

The most recent summary of the realist enterprise, and at the same time a review of the arguments Chomsky has systematically ignored over the decades, is to be found in Postal (2012). Postal (2008) points out that the "biolinguistic ontology" is incoherent (see also Behme, 2015), and Postal (2012) extends this claim beyond the limits of the label to argue against the ontology of language Chomsky proposes. It would not be possible (although it would certainly be desirable) to speak of a "debate," since it has been a one-sided discussion: Chomsky rarely addresses criticism from outside the orthodoxy. However, Katz & Postal (apparently in discomfort with the Chomskyan enterprise) present their position as the only alternative to conceptualism, which makes it a two-sided problem, independently of the (re)actions of the actors involved. Chomsky's "conceptualism," which claims without any demonstration or argumentation that grammars are "(...) real objects, part of the physical world (...)" (Chomsky, 1983: 156), is explicitly rejected both by Postal and by ourselves, insofar as there is no definition of "real" or, more importantly, of "the physical world." There is an irresponsible use of the word "physical," which has led to great misunderstandings and shortcomings, incidentally also affecting Postal's stance. Postal's position, which he has made explicit in a number of works with J. Katz (particularly their 1991 article) and in some recent solo pieces, is that

"(...) Katz's work not only rejected NCs psychological/biological conception of NL but developed the distinct platonist view that the elements NLs are composed of, sentences, are abstract not biological objects. Moreover, NLs, taken as certain classes of sentences, are clearly abstract objects and hence not biological entities. NL sentences share, under Katz's view, the ontology of mathematical objects, e.g. numbers, logical objects, e.g. propositions, musical objects, e.g. songs, etc. (...)" Postal (2012: 4). Our highlighting.

Notice that the fundamental thesis is that sentences (in this fragment, clearly an intra-theoretical term, as we do not know whether it is referring to mental entities or mind-external entities) are not biological but abstract objects, without making it explicit what this means and implies: what exactly is an "abstract object"? We could picture "biological objects" from common sense assumptions (but with no theoretical foundations, since they are not provided in Chomsky's writings, and there are only vague references in most state-of-the-art books, like Di Sciullo & Boeckx, 2011) like, say, a cell. A species, for example, is not an object of our metatheoretical set (a), but, in any case, an abstraction of intensional characteristics belonging to the set (b): the species of the tigers, for example, can be described as the set of intensional characteristics any entity X must fulfill in order to belong to the set of "tigers." The ontology of so-called "abstract objects" in platonistic linguistics is far from clear, particularly as references to mathematics and physics enter the scene. In the excerpt above, three kinds of objects are invoked as examples of "abstract objects": numbers, propositions and songs. They are of little use, since those objects greatly differ from each other. The nature of numbers is still far from clear (see Frege, 1884), and, as we said in the previous section, it is not clear at all that mathematical relations like "be the square root of" are actually as independent from subjects as Postal claims, without providing any argument for it. Moreover, computer science has provided both upper and lower bounds for the computability of certain expressions given, say, different memory capabilities: a simple Push-Down Automaton does not have the same computational properties as a Turing Machine.
Arguably, the differences stem from the fact that the software used to generate and process symbolic representations depends on the hardware: consider Hameroff & Penrose's Orch-OR model of quantum consciousness, based on quantum vibrations in neural microtubules, which are in turn linked to processes in physics and cosmology (see Hameroff & Penrose, 2014 for a very recent presentation of the theory). If the software is a function of the hardware, the alleged gap between the two levels might not be such (see also Wilson and Golonka, 2013 for a more general view on embodied cognition).
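The point that memory capabilities bound computability can be made concrete. The sketch below is our own illustration (the function names are ours, not drawn from any of the works discussed): a single-stack, PDA-style memory suffices to recognize the language a^n b^n, while a^n b^n c^n, which is not context-free, requires strictly more memory, here simulated with unrestricted counting.

```python
# Sketch: the memory architecture (hardware) bounds what the recognizer
# (software) can compute. A single stack handles a^n b^n; a^n b^n c^n
# is not context-free and exceeds single-stack memory.
import re


def pda_accepts_anbn(s: str) -> bool:
    """Single-stack (PDA-style) recognizer for a^n b^n, n >= 1."""
    stack = []
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:            # an 'a' after a 'b' is out of order
                return False
            stack.append('a')     # push one symbol per 'a'
        elif ch == 'b':
            seen_b = True
            if not stack:         # more b's than a's
                return False
            stack.pop()           # match one 'a' per 'b'
        else:
            return False
    return seen_b and not stack


def accepts_anbncn(s: str) -> bool:
    """Recognizer for a^n b^n c^n, n >= 1. This needs more than one
    stack; here we simulate the extra power with unrestricted counting."""
    m = re.fullmatch(r'(a+)(b+)(c+)', s)
    return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))
```

The single stack, once popped during the b-run, retains no record of n, which is why a third matched block is beyond a PDA: the "same" symbolic task becomes computable or not depending on the memory the hardware provides.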

Still, there is a more important problem in Postal's criticism of the Chomskyan version of linguistic conceptualism. While it is true that there is a gap in Chomskyan linguistics between formal tools and biological instantiations, in such a way that the biological content in the arguments (e.g., most articles in Di Sciullo & Boeckx, 2011) seems alien to the linguistic argumentation and vice versa, Postal (2012: 5) takes the incompatibility to another level. He claims that:

"The reason for the incoherence of NC's foundational position is that (...) the nature of NL sentences has always forced NC to describe them in a way incompatible with their being biological (...). Anything biological would exist in time and space, would have a cause, could cause things, would be destructible, would have mass or energy, etc. (...). But NL sentences have no physical properties at all" (highlighted in the original)

This paragraph shows a very naive conception of what "physical" is. Notice that Postal is confusing "material" with "physical," which is only acceptable if one interprets "physical" in the everyday sense; but in a technical discussion about the foundations of the biolinguistic enterprise, such a slip must not be overlooked. Let us clarify the position we will take on this issue, following current theoretical physics. While it is true that anything biological exists in time and space, that is also true of anything physical. In fact, to clarify the idea, time is (bent) space (as relativistic physics has shown, at least beyond the Planck scale), so that all we have is "anything biological exists in space," quite a trivial claim for the ends pursued by Postal. But the informative part comes afterwards: anything biological would also "have a cause." This is completely unclear to us, since "cause" is very different from "origin" (clear evidence can be obtained from lexical semantics crosslinguistically; see Kosta, 2011 for a thorough study). We can say a cell has an origin, has evolved, has changed and undergone several processes, but we can hardly say it has a "cause." On the one hand, because the argument would have a theological flavor we want to avoid (recall Thomas Aquinas' arguments for the existence of God based on the "uncaused cause"); on the other, because the implications of the notion of "causation" go way beyond the scope of biology: causation can entail volition or not, but the possibility is always there. We can hardly say that there is "volition" in biological change in the light of modern evolutionary theory. The third characteristic, "could cause things," falls apart under this very same criticism. Consider the following sentence:

20) The wind opened the door.

It is quite clear that we have a caused event, and the external force which caused it is [the wind]. However, it is not a biological "thing" in any relevant sense we can think of. Conversely, let us see what would happen if we tried to make a causative sentence with a universally accepted "biological thing:"

21) The gene caused her disease.

Most biologists would say the verb choice is at least inaccurate. A consequential relation between two events does not entail causation, as can be seen in the following example:

22) It is cold because it is winter.

While we could say that the cold is a consequence of the winter, it would be at the very least odd to assert that the winter is the cause of the cold in a strict sense. Coming back to (21), while it is possible that a disease or an impairment (e.g., in language) may be related to a mutation in a specific gene, no reference we know of refers to Specific Language Impairment in terms of "causation."

The fourth and fifth characteristics are perhaps the most important for the purposes of the present discussion. Consider "be destructible" and "have mass or energy." This is actually true of biological "things," though it is not of species or relations, which also fall within biological studies. But let us go deeper into the final claim: "(...) but NL sentences have no physical properties at all." Either Postal is equating the biological with the physical, which would be a gross methodological and substantive mistake (physical magnitudes are not biological entities in any sense, and physical objects are not always biological "things": there are particles with no mass), or he tries to make Chomsky claim two apparently contradictory things: that language is at the same time biological and physical (perhaps, a straw-man fallacy). In any case, the only condition that would hold for physical objects would be to "have energy," and this would be valid (as far as we know) only to a certain extent (even mass is a property that physical objects might or might not have). Consider 1-dimensional strings, for example. It is not clear whether they have mass, let alone energy. They do vibrate, but the source of the vibration is yet unknown. Physical magnitudes have neither mass nor energy, and vectors, for instance, represent forces in n-dimensional spaces, but certainly not mass. What is more, mass = force / acceleration, and we can express force and acceleration effects (the latter subsuming gravitational force in Einstein's conception of gravity) independently of mass. In the light of the preceding discussion, Postal's claims against conceptualism are no better founded, particularly the notion of "abstract" as opposed to "physical."
While it is true that Chomsky has obscured the ontology of his linguistic program and the set-theoretical notions on which he bases the syntactic representations he uses, it is no less true that Postal does the same with the vague opposition "physical" / "abstract," grounded in dubious notions as far as "physical" objects are concerned, as we have seen. Postal finds several "incoherences" in Chomsky's program, one of which is his use of set-theoretical notions, particularly regarding Merge and the model outlined in The Logical Structure of Linguistic Theory to describe sentences, if these are actually biological objects. Postal claims:

"(...) physicists use abstract formal structures to characterize physical things, not abstract ones. The objects of description have temporal, spatial, causal, etc. properties" (Postal, 2012: 7).

We have already argued that Postal's stance on physical objects does not take into account many powerful counterarguments from theoretical physics (the proposal of 1-D strings and the Higgs boson, among others), arguments that a piece which mentions "physical things" cannot overlook, apart from the crucial fact that (physical) models and (physical) objects are well defined within physics (thus, for instance, no one would think the Lorenz attractor is a chaotic system: it is a model of a chaotic system, whereas the climate is a chaotic system), and we find no such clear distinction in linguistics. (11) Notice that Postal takes natural languages as "certain classes of sentences," without further clarification about what those classes are (not actually NLs) or what the nature of those sentences is (which would actually be NL). Postal's argument against Chomsky collapses under its own weight: he talks about description, not explanation. And there is no principled reason why a biological object cannot be described or modeled set-theoretically, using natural numbers or whatever model the theoretician wants. Take an atomic model, for instance: it is a trace of pencil on a piece of paper used to describe the structure of the atom in an approximate way. Can we object to that? Certainly not, to the extent that the nature of the description is not equated with that of the represented object: X-bar theoretical trees, for instance, are not the structure of a syntactic object, or a syntactic object itself, but representations of the structure of a syntactic object. Even if we use a physical object to describe a physical object, or a formal structure to describe a formal structure (as in metamathematics), the distinction between our sets (a) and (b) shields the model from criticisms like Postal's. In Chomskyan linguistics, set theory has the status of a metalanguage (thus, our (b) set), as Chomsky himself acknowledges:

In the work that I've done since The Logical Structure of Linguistic Theory--which just assumes set theory--I would think that in a biolinguistic framework you have to explain what that means. We don't have sets in our heads (Chomsky, 2012a: 91).

Chomsky continues by assuming that set theory is not neurologically realizable and thus not a candidate for the actual structure of language within the mind, which is a controversial claim since no further argument is provided: the claim that there is a gap between mathematical structure and physical reality, which both Postal and Chomsky share, is not unavoidable (the notion of emergence in a complex system could provide a useful bridge, but the possibility has been systematically overlooked), nor is the ambiguity with which the concepts of "physical" and "real" are used. Setting aside for the time being computational models of the human mind like the Turing Machine (a favorite of Chomskyan linguistics, see Watumull, 2013), consider modern models of the Universe within mathematical cosmology, like Tegmark's (2003, 2007):

"External Reality Hypothesis (ERH): There exists an external physical reality completely independent of us humans. Mathematical Universe Hypothesis (MUH): Our external physical reality is a mathematical structure" (Tegmark, 2007: 1).

Notice that ERH is independent of MUH: it could very well be that there is an external reality (an anti-solipsistic statement) but that it is not a mathematical structure. But the crucial point here is that, contra both Postal and Chomsky, there is no internal contradiction between ERH and MUH; therefore, there is no "gap" to be accounted for if we claim that biological structures are in fact mathematical structures: the reader should acknowledge that while this claim might be false, it is not inconsistent (and it is inconsistency we are discussing in this piece). It is to be noticed that HPSG, CG, and LFG make no explicit claim with respect to the mental / biological / neurological status of linguistic objects (constructions, sentences, lexical items...), and there is no, say, "HPSG manifesto" regarding the ontological nature of the researched objects.

Another argument in favor of purely mathematical models of linguistic knowledge (to which we will restrict ourselves in the present paper) comes from the relation that exists between biology, physics and mathematics and their respective objects of study. (12) Chomsky's claim that Merge forms sets, but that this is "metaphorical" and "the metaphor has to be spelled out someday" (2012a: 91), suggests that the only way to think about Merge is metaphorically, and thus that the phenomenon of structural complexity in the Universe is yet to be explained. Needless to say, if MUH is considered (again, a fully independent and internally coherent proposal on its own), there is no metaphor involved at all: we have atomic (i.e., indivisible) elements at all levels, be they strings, conceptual roots, numbers, nucleotides or whatever object at whatever level, and some concatenation algorithm, which may or may not be sensitive to the characteristics of the objects it manipulates (e.g., nucleotides cannot be freely merged because of their molecular structure). To the best of our knowledge, this falls within the kind of propositions that are true by virtue of Chomsky's frequently invoked "virtual conceptual necessity," in the following sense: for the Universe to be as it is, with different layers of complexity, both atomic elements and a combinatory operation are necessary conditions. This does not mean that this is the only possible universe (particularly taking into account recent Multiverse proposals), but that any theory that aims at descriptive and explanatory adequacy in this Universe must address the issue of complexity. If not with a mathematical algorithm like pure concatenation (in which there is no metaphor whatsoever if the Universe is itself a mathematical structure), we find it difficult to think of something else, but this does not mean there is in fact nothing else.
Let us make explicit what is meant by concatenation in our own alternative theory, Radical Minimalism, since it is quite different from what is meant by Merge in orthodox Chomskyan linguistics:

23) Concatenation defines a chain of coordinates in n-dimensional generative workspaces W of the form {(x, y, z, ..., n) ⊂ W_X, ..., (x, y, z, ..., n) ⊂ W_Y, ..., (x, y, z, ..., n) ⊂ W_n} [where each object has arbitrary computational complexity].

Our generator engine, it is to be noticed, is not sensitive to the substance of the manipulated objects, but to their format: not their inner structure, but their ontology. The only condition for applying concatenation is that the objects share format, being defined as n-tuples of coordinates in n-dimensional conceptual spaces, following the lines of Krivochen (2012a) and current advances in cognitive linguistics. In this framework, geometrical "figures," sentences, and other "observable" objects which constitute the phenomenological world are epiphenomenal results of concatenation in one or several W, read off by an interpretative system, in the event that there is one: arguably, mathematical derivations have no interface conditions, being therefore examples of pure syntax. This definition, which follows clearly from what we have been saying, can be formulated in a stronger way: there is no physical reality beyond the interpretation of the concatenation function applied to an n-number of objects sharing format. This is another way to express Tegmark's MUH: the so-called physical reality is a mathematical structure. We accept MUH without necessarily accepting ERH (the External Reality Hypothesis) as it is formulated: in any case, the notions of "external" and "reality" should be redefined. This must not be interpreted as a plea for solipsism: the concatenation function applies to objects that are external to the human mind, and perception plays no role in generation. Moreover, there is no need to resort to a human mind: an automaton with the algorithm incorporated and interpretative routines (based on Relevance principles, see Sperber & Wilson, 1995) could serve as well. At this point, it is crucial to say that, if our view of syntax is that of a generative component that manipulates objects regardless of their nature, then it can be applied to any so-called "complex object" insofar as complexity can be decomposed into layers of simple, atomic elements somehow concatenated for interpretation purposes.
If this view is correct, all physical systems would have "derivations," in the sense of "successive applications of concatenation and subsequent interpretation."
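As a purely illustrative sketch (the function and type names below are ours, and nothing here belongs to Radical Minimalism's official formalism), the format-sensitivity of concatenation in (23) can be mimicked in code: the operation inspects only whether its arguments share a format (here, being tuples of numeric coordinates), never what those coordinates stand for.

```python
# Hypothetical sketch (our names, our assumptions): a concatenation
# operation that is blind to the substance of its arguments but
# sensitive to their format -- "format" here means being an n-tuple
# of numeric coordinates, of any dimensionality.
from typing import Tuple

Coord = Tuple[float, ...]


def shares_format(*objects: object) -> bool:
    """All arguments must be tuples of numbers: same ontology, any content."""
    return all(
        isinstance(o, tuple) and len(o) > 0
        and all(isinstance(x, (int, float)) for x in o)
        for o in objects
    )


def concatenate(*objects: Coord) -> Tuple[Coord, ...]:
    """Substance-blind concatenation: chains the input objects into a
    sequence, regardless of what the coordinates 'mean'."""
    if not shares_format(*objects):
        raise TypeError("concatenation applies only to format-sharing objects")
    return tuple(objects)


# The same operation applies to 'points', 'conceptual roots', or any
# tuple-formatted object; only a format mismatch blocks it:
chain = concatenate((1.0, 2.0), (3.0, 4.0, 5.0))
```

The design choice mirrors the prose: the type check is the only gate, so the generative engine sees ontology (tuple-of-coordinates) rather than inner structure.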

It must be noticed that many criticisms of the biolinguistic position often stem both from a misunderstanding of the concepts on the part of the critic and from a frank lack of clarity and explicit definitions on the part of the biolinguist. Let us see an example:

QUESTION: Infinite use of finite means; doesn't it entail an inconsistency? Isn't the model of an infinite potential in a finite organ inherently inconsistent?

CHOMSKY: That was the problem until about a century ago. It did look like an inconsistency. One of the important discoveries of modern mathematics is that it isn't an inconsistency. There is a perfectly coherent sense to the notion of infinite use of finite means.

That is what ended up being the theory of computability, recursive function theory and so on. It is a big discovery of modern mathematics which clarified traditional ideas. There have been sort of intuitive ideas like this around but they really became clarified quite recently--not really until almost mid-century. So, yes, it looks like an inconsistency but it simply isn't. There's a very simple account of it that is not inconsistent. I can't go into it any further here (Chomsky, 2000: 62-63).

The "infinite use of finite media" issue was raised in language by Humboldt, but it was already a topic in mathematics and, decades later, in computer science. Let us see an example and then discuss Chomsky's remark, together with Postal's criticism. Consider the following finite set of natural numbers: 24) (2, 4, 6, 8).

How many combinations can we make with them? A quick response would be "either n! or n^n, depending on whether repetition is allowed or not." This answer would be, at best, incomplete: there is an implicit assumption that we can formulate in the following way: given a certain amount of numbers, the possibilities of combination are restricted to that amount. For example, among the possibilities that the reader may have in mind, these are surely included:

25) 2468 - 2648 - 2846 - 2684.

But the ones in (26) are most likely out:

26) 24648 - 26242864.

And so on and so forth. Are we cheating here? Certainly not, since combination is only restricted by stipulation, and we have added none. The "infiniteness" is a property of the operation (i.e., Merge is capable, as a formal procedure, of generating infinite representations, or a single representation of infinite length; where infinite does not mean non-denumerable or not computable in either polynomial or non-polynomial time, considerations Postal does not take into account, as the very notion of "finite" is not well defined, in our opinion), not of the objects that it manipulates. Let us see Postal's reasoning in this respect:
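The counting claims above can be checked mechanically. The following lines (our illustration) verify that fixed-length arrangements of the set in (24) number n! without repetition and n^n with repetition, and that once the length restriction is dropped, as in (26), the number of combinations grows without bound:

```python
# Counting combinations of the four numbers in (24). Fixed-length
# arrangements give n! (no repetition) or n^n (with repetition), but
# nothing bounds the LENGTH of a combination: the unboundedness is a
# property of the operation, not of the four objects.
from itertools import permutations, product
from math import factorial

digits = (2, 4, 6, 8)
n = len(digits)

# Length-n arrangements without repetition: n! = 24
count_no_repetition = len(list(permutations(digits)))
assert count_no_repetition == factorial(n)

# Length-n arrangements with repetition: n^n = 256
count_with_repetition = len(list(product(digits, repeat=n)))
assert count_with_repetition == n ** n

# Longer strings like 24648 in (26) are just length-5 arrangements:
# with repetition there are n^k strings of length k, unbounded in k.
count_length_5 = len(list(product(digits, repeat=5)))
assert count_length_5 == n ** 5  # 1024
```

The last assertion makes the text's point concrete: the only thing that restricts combination to length n is a stipulation, and we have added none.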

"A real organ, e.g. a lung, is finite along every dimension including the temporal one and everything it does or produces is finite. So if NL were an aspect of brains, it too would be finite and the output of sentences (granting counterfactually that it makes sense to take organ outputs as sentences) would be as well." (Postal, 2012: 14. Our highlighting)

At the very least, this is a non sequitur; at the very worst, a manipulation of some basic claims of Chomskyan linguistics. Natural languages (NL) are not aspects of brains, but of minds, which have a neurological substratum (the equation brain = mind is not innocent here, as the hardware-software dynamics are at the very core of the discussion: the brain is material, therefore finite, but there is no proof, either formal or empirical, that the computational emergent properties of matter are necessarily finite; see Watumull, 2013: 207 for a recent discussion at the level of uniform Turing computation). We doubt that the issue of the finiteness or infiniteness of the mind even makes sense, at least without a formal definition of finiteness on the one hand, and a strong argument about the nature and ontology of the mind on the other (neither of which is presented by Postal alongside his criticisms). Moreover, consider Langendoen & Postal's (1985: 227) claim that:

"The collection of sentences comprising each individual NL is so vast that its magnitude is given by no number, finite or transfinite. This means that NLs cannot, as is currently almost universally assumed, be considered recursively enumerable [...]. It then follows that there can be no procedure, algorithm, Turing machine, or grammar that constructs or generates the all members of an NL [...]."

However, there are unarguably infinite sets, like the natural numbers, that can be generated (in the "structural description" sense used by Chomsky, 1965) by finite procedures applying sequentially; consider Peano's axioms (see Leung, 2010 for a language-centered discussion of Peano's axioms): if every natural number has a successor, and that successor is itself a natural number, a [Σ, F] grammar (omitting the transformational component, as there is no need to map representations) can be modeled upon the axioms (e.g., Leung, 2010: 231), insofar as there is no limit for F, as would be the case with unlimited-memory Turing machines (and even more so with PDA+ automata, see Uriagereka, 2012).
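The point that a finite procedure can generate an infinite set, in the Peano style just described, can be illustrated with a sketch (ours, not Leung's formalization): the successor rule is a finite object, yet its sequential application has no bound.

```python
# "Infinite use of finite means": a finite rule (the successor function
# of Peano's axioms) generates an unbounded set when applied sequentially.
from itertools import islice
from typing import Iterator


def naturals() -> Iterator[int]:
    """Finite procedure: start at 0, apply the successor rule forever.
    The procedure itself is a few lines; its output has no finite bound."""
    n = 0
    while True:
        yield n       # each natural number is eventually generated
        n += 1        # successor: S(n) = n + 1


# The generator is a finite object; we can draw from it any finite prefix
# of an infinite (but recursively enumerable) set.
first_ten = list(islice(naturals(), 10))
```

Note that the set so generated is infinite yet recursively enumerable, which is precisely the property Langendoen & Postal deny of NLs; the disagreement is thus about the nature of NLs, not about whether finite means can yield infinite sets.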

Even without accepting this fundamental point regarding the nature and properties of the generative component, there are aspects of Postal's claim that are unclear, if not plainly wrong: in which sense do we say that a lung "produces" something? Sure, there is O₂ coming in and CO₂ going out, but can we say that the lung "produces" CO₂, at least in the same sense that language is produced? There is not a single proposal that lungs (or any other "real organ," to use Postal's terms) are in any sense computational; thus, they are not generative in the relevant sense of complexity creators via combination. Language exists in the mind; while the mind may not be the only "place" in which it exists, language surely has mental existence at some point (there is an intention, which is embodied in a sentence that exists before it is externalized, all this without even entering the realm of Fodor's "language of thought"). Mental objects have biological substrata, or, in other words, it is the configuration of the biological substrata that licenses the computational properties of the "mind," quite a common claim in psycholinguistics. If there were no neurological (therefore, biological) dimension, brain injuries should not affect language. This is quite straightforward. There is a link, even if the specific kind of link (cause, mere concomitance, etc.) we are talking about is yet unclear. Please notice that in the whole of the preceding discussion we have not made use of the concept of UG, which we reject (Krivochen, 2012b; Krivochen & Kosta, 2013), because we simply have not needed it, and there is, as we said, no compelling evidence in its favor. A deep revision of Postal's arguments would have to acknowledge the fact that he sometimes uses NL and UG as synonyms: for example, "NL is both biological and infinite."
UG (as the initial state of the Faculty of Language) is apparently biological (in Chomskyan linguistics), whereas NLs are formally infinite by nature, regardless of the theory we consider (neither formalists nor functionalists would say that a natural language is a finite set). This confusion, which, like others, serves a certain rhetoric, is to be eliminated as an instance of methodological mala fides.

Considering the Chomskyan response now, it is amazing how a very simple issue is made so complicated with irrelevant historical details and no argumentation, just a finishing line "I can't go into it any further here." Nor, apparently, anywhere else.

2.2.1 The evolution of realism

It is quite surprising for the follower of Postal's arguments that they have been turning from serious linguistic-philosophical objections into ad hominem claims, displaying more rhetorical manipulation than convincing argument (much the same could be said of Chomsky's 2013 paper, which is an impressive set of stipulations under the form of a formal discussion of labeling and projection). Perhaps the most interesting substantial exposition of the realist proposal (summarized in Postal, 2012, along with the arguments against Chomsky's view from many previous works) is to be found in Katz & Postal (1991). While still very much influenced by the GB tradition, the article is a concise presentation of the so-called "realist" position and its fundamental differences with Chomsky's ontology. The same confusion of the three sets (a), (b) and (c) we claimed exists in "conceptualism" is to be found here, in a stronger version than in Postal (2012). Consider, for example, the following quote:

"(...) acceptance of an overlap between the senses of NL sentences and logical objects involves linguists in foundational issues at least to the extent of committing them to a common ontological position for linguistics and logic. This overlap assumption confronts one with the following paradox. If senses are parts of the grammatical structure of sentences and if linguistics both deals with the grammatical structure of sentences and is psychological, then senses are psychological. But if senses are psychological, then the laws of logic are also psychological, since the ontological status of a law is determined by the nature of the objects to which it refers. Consequently, logic is psychological, contradicting the accepted view that logic is nonpsychological." (Katz & Postal, 1991: 520)

Notice that the argument is only valid if set (a) and logical principles overlap. However, logical laws do not belong to set (a), since they are essentially meta-statements, thus belonging to set (b). Logical objects are, in any case, part of the formal apparatus used to analyze NL, but crucially not NL sentences. In fact, no claim is made about the status of NL sentences, just a presupposition, generated by the conditional "if senses are parts of the grammatical structure of sentences...," which is incompatible with the formal, mentalist approach that has prevailed in Chomskyan linguistics since Syntactic Structures. Externalization (i.e., the materialization of syntactic structure via phonological matrices) has always been rendered parasitic, an "exaptation" (see, for example, Uriagereka, 1998, 2000): we cannot think of another sense in which there is a "sensorial" aspect of NL sentences. In any case, externalized sentences are part of E-language, explicitly excluded from the scientific study of language (an arguable claim, but it is not fair to disqualify a theory because of the methodological boundaries it has set for itself). Katz & Postal (1991: 521) argue that the mistake of conceptualism is the failure to distinguish between knowledge of language and the object of this knowledge, language itself. They further elaborate this claim in three arguments they provide against the conceptualist enterprise. In the next section, we will analyze those arguments.

2.2.2 Three arguments against conceptualism

Katz & Postal (1991: 522 ff.) develop three arguments supporting the realist position, and arguing against what they understand as Chomsky's position. Before entering the arguments themselves, it is useful to address a commentary in a footnote, which seems to point at an inconsistency within the so-called "realist" enterprise. Katz & Postal (1991: 522, fn. 4) claim:

"Richard Montague advocated a realist approach to universal grammar, claiming that it should be pursued as part of mathematics."

In this context, it is to be noticed that the notion of "realist" within linguistics is still left undefined (despite the meaning it has in philosophy, the relevant sense must be a linguistic sense, insofar as the realist approach is being confronted with a linguistic, not a philosophical, theory). The confusion is even greater when one considers that one of the objections to the conceptualist approach is the use of set-theoretical apparatus to explain the process of Merge, either in Chomsky's or in Kitahara's approach. If UG/FL is to be studied as a part of mathematics (which obviously licenses set theory, as well as other formalisms), either Montague's approach is not realist or Chomsky's is. In any case, in the absence of an appropriate formalism for biological systems, both proposals are equally invalid as explanatory frameworks, achieving at most descriptive adequacy. It is curious that the mainstream biolinguistic approach takes biological systems for granted (with the exception of Jenkins' article, mentioned above) and does not attempt to find a physical or mathematical model for, say, genotype-phenotype dynamics or the cognitive reality of derivations within the mind-brain. This apparent gap (which is not such if, as we said, we consider that biological systems are particular instantiations of physical structures following limitations over mathematical systems) can be overcome if biology is modeled following, for instance, chaos theory (see, for example, Author, 2013a). In the absence of a definition of "real," the claim that a realist enterprise can model UG as part of mathematics is an empty claim, just as it is to adjudicate UG's contents to "virtual conceptual necessity" (VCN) (see for example Chomsky, 1995: 169; 2005: 10; 2007: 8, 12), particularly Merge, Copy and Move, whose specific biological status is at best mysterious. This theoretical move has two direct consequences:

27) a. It eliminates the possibility of asking for formal demonstrations of theorems involving any of those operations, or of the operations themselves.

b. It eliminates the possibility of building alternative theoretical approaches, since whatever is not "virtually conceptually necessary" is to be eliminated in favor of allegedly principled "unavoidable" (sine qua non) elements.

Postal (2003) has argued against the widespread use of the notion of VCN, but his own methodological proposal is equivalent to (27b). We will come back to this in the conclusion. Let us now focus on the three arguments against conceptualism.

Argument 1: The Type Argument

This argument derives from an apparent "type-token ambiguity," which is a long-standing problem in philosophy (but not in linguistics: consider, for instance, that Chomsky has clearly stated that the Numeration is a set of tokens; see Uriagereka, 2008: 16 for discussion of this point). In Katz & Postal's terms, NL grammars and grammatical theory are about type-sentences. Apparently, for some unclear reason, this is inconsistent with the view that NL has psychological reality. The argument, as we see it, is a non sequitur (see also Watumull, 2013 for the same conclusion, reached via different assumptions): no proof is given that there are no types in the human mind (nor is there any reference to linguistic work in which the type-token problem is discussed; notice that, problematic as it may be for philosophers, the type-token dynamics has been worked out in linguistics in unambiguous formal terms; consider for example Martin & Uriagereka, 2014; Krivochen, 2015b). Moreover, even if individual sentences were tokens, there must be a sense in which NLs are mental objects, since sentences belonging to NLs are "externalized" (technically, Spelled-Out, given a phonological form), and what is not "inside" cannot be "made external." It would be possible to claim that individual sentences are tokens even inside the mind, but this would be equivalent to claiming that NLs are sets of tokens, which implies that any speaker must have some representation in his mind of each sentence he uses as a token. Katz & Postal also seem to assume that sentences have some sort of primitive status, without considering their derivation / formation, either formally or neurocognitively (the implementational level of Marr, 1981).
In Krivochen (2015b), Krivochen & Kosta (2013), and Kosta & Krivochen (2014) we have put forth arguments that a type-token dynamics enhances the explanatory power of a syntactic theory, particularly when it comes to the property of displacement and the establishment of referential dependencies (including Binding Theory). In this framework, there is no Merge-Move asymmetry (as in Chomsky, 1995, and much subsequent work, particularly the so-called Merge-over-Move principle): structure building is always token-Merge from the Lexicon, defined as the whole set of types for a NL. Admittedly, this approach (which we have developed in Krivochen, 2015b; see also Martin & Uriagereka, 2014 for a different though related approach to the distinction) requires the introduction of the type/token distinction into syntactic-semantic theory, along with algorithms to link different tokens of the same type (which we have provided in Krivochen, 2015b), but it has proven useful in both the theoretical and the empirical domain (see also Stroik & Putnam, 2013): we can unify phrase structure and displacement in a single theoretical framework, without added notions like features or copy operations, and account for the data without ad hoc stipulations. If an element can be inserted in a structure (which would include constructions in the sense of CG as well) as many times as interface conditions (IC) require, then we eliminate stipulations regarding intermediate landing sites for movement (see Abels, 2003 on this topic). Moreover, since the relevant elements are tokens of the same type, the establishment of referential dependencies at the C-I system becomes easier, as it requires no added elements like indexes and diacritics (these can be used for expository purposes, but they would be theoretically superfluous and have no biological or physical reality).
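The token-Merge idea can be illustrated with a minimal sketch (ours, not the authors' formalism; all names and data structures are illustrative assumptions): tokens are distinct objects that share a type, and referential dependencies can be read off type identity without added indexes.

```python
from dataclasses import dataclass
from itertools import count

@dataclass(frozen=True)
class Type:
    """A lexical type in the Lexicon (illustrative)."""
    name: str

@dataclass(frozen=True)
class Token:
    """An instantiation of a type within a derivation."""
    type: Type
    index: int  # distinguishes co-typed tokens in a derivation

_fresh = count()

def merge_token(t: Type) -> Token:
    """Token-Merge: each insertion into the structure draws a fresh token."""
    return Token(t, next(_fresh))

john = Type("John")
t1, t2 = merge_token(john), merge_token(john)
# t1 != t2 (two tokens), yet t1.type == t2.type (one type): a linking
# algorithm can establish the dependency from type identity alone.
```

The point of the sketch is only that "same type, different tokens" is a perfectly well-defined formal notion, which is what the linking algorithms mentioned above exploit.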

In simpler terms, the whole argument against Katz & Postal's objection can be summarized as follows: if types have no mental reality, neither do tokens, insofar as tokens are instantiations of types. If tokens have no mental reality, we could not derive a sentence (assuming the generative algorithm manipulates tokens following interface conditions and thus creates interpretable structures). If we cannot derive (i.e., produce in real time (13)) sentences, there is no NL. But, mind you, there is NL.

Argument 2: The Necessity Argument

This argument is based on relations of necessary implication between sentences, or so it may seem. Consider the following pair:

28) a. John killed Bill.

b. Bill is dead.

According to Katz & Postal (1991: 523), (28a) could not be true and (28b) false. In their approach, a "proper account of NL" must explain this necessary entailment, arising from the "semantic structures of the entailing and the entailed sentences." This argument is vulnerable on several flanks. On the one hand, Katz & Postal are assuming both a theory of lexical semantics and a theory of truth conditions. In their account, we must explain the fact that [kill X] entails [X die], against which many voices have been raised (e.g., Fodor, 1970 and much subsequent work in lexical semantics and the syntax-semantics interface). Our own argument against the "Necessity Argument" rests on the idea that language and truth do not belong to the same domain except where analyticity is concerned (which is based on lexical semantics within a structure, not on a relation between structures or propositions). Modern accounts treat such inferences as a matter of pragmatics, that is, of language in use, and not a matter of analyticity in any way. Katz & Postal base their argument on an inference that arises from "the semantic structure" of the sentences involved, but there is no clue as to what such a structure may look like. What is more, the very concept of entailment, which is the word used by Katz & Postal, includes as one of its main characteristics being cancellable in assertive contexts (sometimes, this is used to differentiate entailments from presuppositions, which are only cancellable under the scope of negation). We will not get into this, but just point out that truth conditions, since they were relativized in formal semantics by Davidson (1967), cannot be determined in isolation (even less so if one subscribes to pragmatic accounts which reject the concept of truth values as primitives of a linguistic theory) but only in relation to (minimally) a speaker, a time and a place. This said, following Fodor (1970), there is no contradiction in the following sentence:

29) Mary knew that Bill died, but she didn't know someone/John had killed him.

That is, there is no contradiction in asserting that (28b) can be true for a speaker while (28a) may not be. The speaker can even deny the truth of one or the other based on his/her knowledge of the state of affairs described in the asserted sentence. The second argument Katz & Postal use is riddled with stipulations about the nature of natural language semantics, for which they neither provide a theory nor cite independent references (only Katz's works, which are part of the "realist" enterprise, not independent theories that could be used to reinforce that enterprise).

Argument 3: The Veil of Ignorance Argument

This argument is based on the assumption that language and knowledge of language are two different things. This may sound trivial insofar as "know" is a dyadic predicate, requiring a "knower" and a "knowee," as we pointed out above. However, the conceptualist claim that there is no scientific object of study for linguistics outside the knowledge of a language L represented in a speaker's mind (so-called i-language, with its alleged mathematical and biological aspects) is not an internal contradiction (although it might very well be wrong, as we think it is). Nor does a contradiction arise when the notion of e-language (i.e., external and extensional linguistic samples; in Postal's terms, sentence tokens) is included in the equation. It is true that the notion of a "Faculty of Language" is uncritically accepted as an axiom within orthodox Chomskyan studies rather than problematized or demonstrated (with due, but minority, exceptions), but that is an independent argument, which we have made in the previous section. The conceptualist thesis that there is nothing more to language than a state of a mind-brain in a speaker may very well be false, but it is certainly not inconsistent with the axioms of the conceptualist enterprise.

3.3 A note on "progress in Linguistics"

The considerations we have been making with respect to the self-appointed exclusive tendencies in formal linguistics cast doubt on the possibility of scientific progress within the discipline: there are studies of the use of the language faculty ("faculty" in a wide, non-technical sense) from sociology, ethnography, philosophy and cognitive psychology, to mention but a few. However, the most basic problems, the foundational abstractions, the grounding concepts are still far from clear: there is not even consensus as to what exactly language is, as we have seen in the previous sections. Is it a mental entity? A formal entity? A social entity? Something else? The concept of "fact" in linguistics is to be challenged if some progress is to be made. For example, Chomskyan theories base most claims about "language design" on movement, which is interpreted as literal displacement (GB tradition) or copying (MP tradition; see Nunes, 2004 and Corver & Nunes, 2007 for an overview) of constituents following well-established rules: phase impenetrability (Chomsky, 1998), barriers (Chomsky, 1986), Minimality (Rizzi, 1990), the Head Movement Constraint (Travis, 1984), the Condition on Extraction Domains (Huang, 1982), among many others. However, the alleged "facts" that are accounted for only follow from an architecture that actually claims that elements "move." Take Systemic Functional Grammar, for instance (for a recent review, see Fontaine, 2012, and, for a more technical introduction, Fawcett, 2010). It is unlikely that problems of displacement-as-literal (feature-driven) movement arise in such a theory, where issues of locality are interpreted in a more cognitive-discursive way, in close relation to the metafunctions (ideational, interpersonal, textual) that configure the core of the theory. Our point is, simply, that what is taken as a "fact" in one theory is something that may not even arise in another.
In order to claim that an element is interpreted in a place different from where it appears phonologically, we would have to resort to some kind of mechanism dissociating interpretation from phonology (already a strong claim) and then make that procedure explicit in logical, cognitive, biological or mathematical terms. What can be expected as "progress," then? Little, if the polarization explicitly proposed by Katz & Postal (1991: 1 fn. 1) is held as a "fact," dismissing as "non-credible proposals" alternatives which are equally consistent formal systems but depart from different axioms. This historical situation allows us to say that little progress has been made in linguistics, because of its essentially atomized character. Something like "progress" in Lakatos' terms could be said to have been made within this or that theory, in terms of empirical adequacy. However, contrary to Chomsky's repeated claims, the fact that new questions are being asked does not entail that we know more about "language," simply that there has been a shift in focus. Chomsky (2013: 33) says that

"There has been remarkable progress in understanding language in the post-World War II period, over a very broad range, including the general principles that shape this highly special cognitive faculty, dissociated from others in many ways and unique to humans in essentials. One indication is that it was routine and reasonable for prominent linguists in earlier years to write books entitled Language. No longer. The task would be radically different today; far too much has been learned. "

He seems to attribute the absence of books dealing with foundational issues to the fact that "far too much has been learned." However, outside Chomskyan linguistics, many object to that claim, ourselves included. The same can be said of any theory, however: it is not the (putative) fact that we know more that "prevents" linguists from undertaking the enterprise of writing programmatic pieces, but mainly dogmatism. Within mainstream generative grammar (in both its conceptualist and realist versions), the foundational issues (like the existence of a "faculty of language" with its mental and physical aspects, or the very definition of "language," a very delicate issue Chomsky has refused to address beyond stipulative statements) are considered to be solved and understood, and are thus taken for granted. The same happens in SFL, Cognitive Linguistics, and their respective sub-theories (HPSG, LFG, etc.). To undertake research at "peripheral" levels of the theory (e.g., testing empirical adequacy) does not mean or entail leaving "core" issues aside (e.g., the very definition of "language" we are working with): a permanent revision of both is, in our perspective, the only way to avoid self-centeredness and guarantee integration with the latest developments in different disciplines (particularly those more closely related to technological advances, like neurology or molecular genetics).

3.1 What is to be "discovered"?

Conceptualist generative linguistics has a curious concept of "discovery," and of the ontology of the objects that qualify as "discoveries." The official vision is to be found, for example, in Pesetsky (2013), who lists alleged "discoveries" of generative syntax. An undeniable merit of Pesetsky's exposition is that it provides an answer to a question often asked of Chomsky but that he seldom answers (see Behme, 2012 for discussion and examples), but his methods are at best questionable. The methodological assumptions behind Pesetsky's exposition are the topic of the present section.

To begin with, we consider that a crucial distinction has to be drawn between discovery and invention. To discover something is to notice and provide an account (descriptive, explanatory) of a pre-existing phenomenon, which is then subjected to further research. For example, electrons existed even before they were first noticed by scientists in the late nineteenth century. The existence of the electron is, and was, independent of scientists' awareness of it. Invention, on the other hand, creates an object (either material or formal) which did not exist before the act of invention.

Now, let us take a look at the alleged "discoveries" of (orthodox Chomskyan) generative syntax Pesetsky lists:

* Hierarchical structure

* Case Theory

* Locality of Syntactic Relations

* Support for the central conjecture of generative syntax.

Then, Pesetsky proceeds to "discuss" articles that contradict the aforementioned "discoveries." To the best of our knowledge, Pesetsky (and many others) have made two mistakes:

a) A historical mistake.

b) A methodological mistake.

The historical mistake is simple enough: the effects observed have been studied for quite a long time now, before generativism and independently of its particular assumptions. Panini, for instance, presents a distributional theory of Case, based on morphology and semantic roles (or karaka), quite similar to the generativist "Theta theory" but without its added stipulations (government, subcategorization frames, etc.). Varro, in his De Lingua Latina, also studied nominal and pronominal declension in detail, as well as the basic property of natural language from the MGG perspective: recursion (from which phrase structure and hierarchy derive). Langendoen (1966) puts it the following way:

"(...) First he [Varro] viewed the phenomenon of syntactic derivation in Latin as following a universal feature of human language: the ability to form an unlimited number of expressions (in fact, words) from a limited number of elements in a systematic fashion. Second, he justified this position on the grounds that if it were not true, then language acquisition would be impossible" (1966: 34).

The arguments in favor of locality (i.e., the claim that the relations between constituents are limited to certain domains, outside which relations result in ill-formedness: binding theory is a good and well-known example) within recent Chomskyan generative grammar have unfortunately been reduced to self-reference (something Boeckx & Grohmann, 2007a acknowledge) and to forcing the data to fit the theory (e.g., Chomskyan phases), in a Procrustean way. There are some notable exceptions, which try to link linguistic phenomena to properties of other systems, thus deriving locality effects from more general principles (e.g., Uriagereka, 1998, 2012), but these alternative approaches are overwhelmed by orthodox assumptions.

We think the historical mistake is sufficiently illustrated, but we can also mention the Port Royal Grammar and Logic (1660 and 1662 respectively), which argued for the mental reality of grammar and the fundamentally logical structure of natural language, which was to be argued for again in Chomsky's early Logical Structure of Linguistic Theory. (4)

The methodological mistake is somewhat more serious: it implies that, if Pesetsky is aware of the references we have cited (and many others which also tackle these issues in the Greek-Latin tradition, as well as in medieval studies with Aristotelian bases), he actually believes that generative Chomskyan syntax has provided evidence proving that:

a) The invoked principles and rules are necessary conditions for a certain phenomenon to appear.

b) The invoked principles and rules are sufficient conditions for a certain phenomenon to appear.

Why is this relevant? Because scientific proof requires not only an account of why the relevant portion of the Universe is the way it is, but also of why it could not be otherwise given certain parameters (mathematics tends to offer fine examples of such proofs). In this particular domain, the linguist (if concerned with methodological issues and willing to make a strong claim) must formally demonstrate that his formal procedure can productively generate the phenomenon in question (not only describe it, which would be a mere reaffirmation of its existence) and that either no other procedure can, or, if there is an alternative way, that such a way requires extra assumptions or is in a specific and well-defined sense less economical. Needless to say, this is not achieved in Pesetsky's talk; more worryingly, it does not seem to be a concern of leading linguists: the methodology is more inclined towards assuming something (e.g., the existence of FL/UG) and providing "evidence" in favor of that alleged "fact," which in turn results in the conclusion that the assumption is actually the case (by all means circular reasoning). Counterevidence, or alternative but equally consistent proposals, is seldom discussed, and the basic properties of formal axiomatic systems (particularly, consistency and incompleteness, both intimately related to overgeneration (15)) are most frequently overlooked. This, we argue, goes against both theoretical and empirical development, since a theory must be restrictive enough to give interesting insight into natural language (or any other object) and explicit enough to be subjected to the strictest formal scrutiny. As the reader may have noticed, the arguments we have been reviewing in this paper do not fulfill these criteria, or do so only to a very limited extent. The "realist" and "conceptualist" positions thus limit themselves by their own rhetoric.

5. Conclusion and Methodological Warnings

In the preceding discussion we have analyzed two approaches to the foundations and ontology of linguistics. In this conclusion, we would like to offer some methodological considerations regarding the apparent validity of the conceptualist-realist "discussion" (which underlies more than just Chomsky's and Postal's personal stances, as we have seen), which has deep consequences for the ontology of theoretical linguistics and its future as a field.

Katz & Postal (1991: 515, fn. 1) claim:

"We are aware that some philosophers and linguists think there are foundational positions distinct from nominalism, conceptualism, and realism. Although we cannot deal with this issue here, every such putative alternative with which we are familiar reduces to one of the three standard ontological positions."

Crucially, their discussion is centered on conceptualism and realism, leaving nominalism aside. This polarization has the following logical consequence: if there is no other position (and their insistent disregard of nominalism leads to this thought), then for every axiom or theorem p in theory A, theory B has a ~p axiom or theorem by necessity; otherwise, the polarization thesis they need for their arguments to be valid (and to convince the reader that rejecting their presentation of Chomskyan conceptualism inevitably leads to accepting their realism) would fall apart. If there are only two truth values, let us call them true and false (or 1 and 0; after all, it is the same as long as we have two discrete possibilities), then for every proposition that belongs to sets A or B, we could determine its truth value unambiguously. This is, of course, false: it is unlikely that one of A or B would contain only true propositions and the other only false ones; a third alternative (C), composed of only the true propositions of A and B, is logically necessary (that is, C is the subset of the union of A and B containing only true propositions). The definition of a method to determine the truth of the propositions that compose A and B is, of course, not provided by either Chomsky and his advocates or Katz & Postal and their supporters; moreover, the logical way out (i.e., building theory C) is a fallacy, insofar as there are really more than two theories (or stances) about the object, its nature, origin, and use; some internally consistent, some inconsistent; some logically complete, some incomplete; but all describing / explaining a different aspect of the object of inquiry, and thus all epistemologically valid (with the demarcation criterion being internal consistency).
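The set-theoretic core of this point can be sketched in a toy illustration (ours; the proposition names and truth assignments are hypothetical): if each theory contains a mix of true and false propositions, a third collection C, gathering only the true ones, is immediately definable, so a strict two-way polarization cannot be logically exhaustive.

```python
def third_theory(a: dict, b: dict) -> set:
    """C: the set of propositions assigned True in either theory.
    Truth values here are stipulated purely for illustration."""
    merged = {**a, **b}
    return {p for p, v in merged.items() if v}

A = {"p1": True, "p2": False}  # theory A: partly true, partly false
B = {"q1": True, "q2": False}  # theory B: likewise
C = third_theory(A, B)
# C contains propositions of both A and B, yet coincides with neither:
# a third alternative exists whenever both theories are partly right.
```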
An excellent summary of the allegedly exclusively binary debate we have been analyzing, and of a fundamental flaw in its logical conception, is given by Ross (1983: 3), in a quotation that summarizes the main points of the present contribution:

" What is the matter with a pluralistic situation in which there are many approaches to the truth?

My answer to this question would be: nothing is wrong. But that is an answer which seems to go against the mythology of science in which I was trained. I was taught to believe that for any two theories of the same domain [in our case, Realism and Conceptualism], A and B, there are only two possible logical situations that can obtain:

1) One of these theories is correct and the other incorrect.

2) These theories only appear to be different--really they are the same theory, wearing different terminological clothing. They are notational variants.

I was not prepared to deal with a third situation:

3) Each of the theories captures a fundamental part of the truth, but they are incompatible with each other, and neither can be reduced to the other. Both are necessary" (highlighted in the original).

A further note is in order at this point (applicable to inter-theoretical criticism in general): notice that the alleged "incoherences" of the conceptualist position have been pointed out from a "realist" position, not from within conceptualism. In that case, we are not facing inconsistencies, but merely incompatibility (incommensurability, in Kuhnian terms) between two different frameworks (Ross' situation (3)). A criticism of a framework is to be made in the very terms of that framework, as that is the only way of finding internal logical inconsistencies and proving that the theory in question is logically untenable. There is no point in finding apparent contradictions in a theory from the perspective of another: it is at best a trivial enterprise, if not a directly misguided one.

What is more, we have provided some examples of propositions that belong to neither theory A nor theory B (nor C!), but to alternatives which are not made up of propositions of one or the other but constitute whole new systems: Simpler Syntax, Survive Minimalism, Radical Minimalism, and the CLASH model, among many others, are viable alternatives both to the binary system proposed by Katz & Postal and to the unary system Chomsky proposes, which dismisses all other alternatives as unreasonable or as departing from the undefined concept of "virtual conceptual necessity," without further discussion. The proof that there exist at least a third and a fourth alternative opens the door for more alternatives, all equally valid and consistent (and we are limiting ourselves to theories that make a statement as to the ontology of language, not to mention those theories that present an architecture of the grammar without asking about its place in the natural world, as most non-transformational models do). Katz & Postal's and Chomsky's positions are equally, and dangerously, close to what Austin (1962) calls "the descriptive fallacy," in this case also involving a polarization of the market in terms of venues for positions outside a narrow scientific horizon. We hope this work, rather than making a case for any particular theory, helps depolarize the field of formal linguistics and create some awareness of the necessity of alternative frameworks and of interdisciplinary bridges with mathematics, physics and biology (or the fields the reader feels closer to), crucially without limiting the collaboration to the adoption of terminology or the forcing of concepts (like "features," or even "UG") on shaky grounds.

Received 16 December 2014 * Received in revised form 9 June 2015 * Accepted 10 June 2015 * Available online 1 April 2016


This study was partially supported by the project Linguistic and lexicostatistic analysis in cooperation of linguistics, mathematics, biology and psychology, grant no. Cz.1.07/2.3.00/20.0161, which is financed by the European Social Fund and the National Budget of the Czech Republic.


(1.) We beg the reader to notice how the notions of "theory" and "program," which should be kept apart in hard science, permanently overlap in the literature and are sometimes used as synonyms. Minimalism, as it apparently has "ineliminable elements" (Epstein & Seely, 2002) such as features (thus, substantive elements), is more a theory than a program. Its purely programmatic side, the elimination of superfluous elements, is nothing new, as it is the mere application of Occam's razor, a metatheoretical desideratum which has been applied in science and philosophy for many centuries now.

(2.) For instance, Boeckx (2010: xiv) claims: "It stands to reason that in selecting material I am offering a rather personal view of what cognitive science is and what cognition may be." However, there is no explicit formulation of that alleged view. Therefore, his views (like those of many others, frequently including Chomsky himself) turn out to be outside the domain of falsification.

(3.) Strong regularity, when applied to phrase structure, is what Culicover & Jackendoff (2005) refer to as strong uniformity. See Krivochen (2015a) for discussion about the theoretical and empirical consequences of such a uniformity thesis.

(4.) Tegmark (2007: 19) graphs the interrelations between Formal Systems, Mathematical Structures and Computations, together with the potential problems for each of those aspects of the unified theory of the Multiverse. With respect to Computations (defined as special cases of mathematical structures which produce theorems of formal systems), the problem Tegmark finds is that there might be no halting algorithm built into the system. In Krivochen (2012b) we made two proposals:

a) Not all computations need halting: only those that have to be interpreted, because of memory limitations.

b) In an architecture with invasive interfaces, or where computations take place within the interpretative components (Stroik and Putnam, 2013), there is no need to formulate independent halting algorithms, apart from a general definition of legibility conditions:

∀(x), Transfer(x) applies iff:

∃(IL) | x ⊆ IL (in the case of language, IL = Phon / Sem)

¬∃(p) | p ∈ x ∧ p ∉ IL
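The legibility condition just stated can be rendered as a small sketch (our rendering; the level names and their contents are illustrative assumptions, not part of the proposal): Transfer applies to x iff some interpretive level IL contains every element of x, i.e., no element of x is illegible at IL.

```python
def transfer_applies(x: set, levels: dict) -> bool:
    """Transfer(x) applies iff there exists an interpretive level IL
    such that x is a subset of IL (no p in x falls outside IL)."""
    return any(x <= il for il in levels.values())

# Hypothetical level contents; for language, IL = Phon / Sem.
levels = {"Phon": {"stress", "syllable"}, "Sem": {"agent", "event"}}
transfer_applies({"agent", "event"}, levels)   # every element legible at Sem
transfer_applies({"agent", "stress"}, levels)  # no single IL contains both
```

On this rendering, no separate halting algorithm is needed: the derivation transfers exactly when its current object is fully legible at some interface.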

(5.) We use Sinn and Bedeutung in their Fregean sense. We maintain the original terms to avoid confusion and misinterpretations.

(6.) The fact that frameworks like Goldberg's Construction Grammar (see 2006: 4-5) and HPSG work without assuming UG should be revealing in this respect. This problem is not polarized either (that is, it is not necessarily a matter of "FL or not FL"); it is most likely a matter of when we need to resort to a domain-specific workspace. If we do so all the time, we have a Chomskyan FL, assuming UG and so on. If we do so only when deriving an object in real time, then alternative models arise. Or it can be claimed that there is no FL at all, and that all processes are shared between all domains. No option should be dismissed a priori.

(7.) In Krivochen (2012a) and Krivochen & Kosta (2013) we have argued against the existence of a fixed FL on computational grounds following the criteria exposed in Laka (2009), affecting Merge and the form-function binarism. The following argumentation will focus on the logical structure of the argument, while assuming previous discussion.

(8.) The very concept of recursion in linguistics, and the misunderstandings it has generated, would deserve a full book. However, let us point out that the architectural theses focusing on the recursive engine frequently overlook data in favor of theoretical stipulations. The recent article by Watumull et al. (2014) loquaciously expresses the underlying assumptions with respect to empirical threats (like the one posed, e.g., by Everett's (2005) claims about Piraha lacking phrasal recursion, or the non-recursive nature of some portions of the English language, like adjuncts; see, e.g., Uriagereka, 2005): "To the extent that embedding is a sufficient, though not necessary, diagnostic of recursion, it has not been established that the apparent restriction on embedding in some languages is of any theoretical import" (Watumull et al., 2014: 1).

(9.) For a very recent take on the issue, see Putnam & Fabregas (2014). Their argument can be summarized as follows: there are certain phenomena (categorization, labeling, clitic positioning) which "have to be present in the narrow syntax." However, the article presupposes the existence of features, without discussing the possible implications of a feature-less model.

(10.) Perhaps the best example of such a theoretical complication is the famous "EPP", which is now understood in at least three senses (see, for instance, Gallego, 2010: 62):

a) Move a DP to Spec-TP (the traditional definition of the EPP)

b) Merge α with β, with β containing EPP (sometimes called "edge feature")

c) Move an XP to an outer specifier of a phase head (sometimes called "occurrence feature").

To this day, the EPP itself remains unmotivated, which makes the whole proposal vacuous. In general, the assignment of features to heads (EPP, Wh-, phi-features, Edge Features, features triggering scrambling, topicalization, etc.) is one of the most controversial issues in MGG, since such features more often than not are mere artefacts to accommodate a particular analysis. As early as 1971, Paul Postal already raised an argument against the use of features like these (referring, specifically, to the assignment of a [Wh-] feature to a PP or just to its dominated NP in a [PP [NP]] structure to selectively "account" for Pied Piping effects):

(...) the whole feature-marking proposal has no independent justification. The point is not that descriptive adequacy is unachievable in this way, but rather that it is achievable under the assumption of successive cyclicity only at the cost of having available the overly powerful device of marking arbitrarily selected nodes with arbitrary rule behavior coding features. It is strange that this powerful device should be appealed to by authors who are often at pains to stress the need for restricting the power of syntactic theory, and who have often objected to other approaches on just this ground. [...] A theory which bans arbitrary syntactic features is stronger than one which allows them, hence to be preferred in the absence of concrete evidence showing the need for weakening the theory, following the principles which Chomsky has long stressed (Postal, 1971: 215).

Little, if any, attention was paid to this sensible fragment in subsequent developments of MGG.

(11.) Consider, for instance, how Postal exemplifies his point (2012: 5):

"Take for instance:

(3) Most rabbits have big ears.

Where in space is (3)? At what points in time did it begin and will it end? What is its mass? Is it subject to gravity? How can one destroy it? These questions, entirely appropriate for physical things, biological or not, make no more sense than their parallels for objects like the square root of 169 or Sibelius' 5th Symphony."

We find several misunderstandings here. Consider 1-D strings as physical constructs: the question of where in space they are is, at best, misguided; the relevant question would concern scale. There is no point in asking about their beginning and end, because they may have neither. And so on. The same happens when we go beyond the Planck scale: is space, considered as a quantum foam, a physical "thing" in Postal's terms? We are afraid it might fail some, if not all, of Postal's "tests."

(12.) Needless to say, this does not mean that non-mathematically based models of language are not interesting or relevant in other respects. A model of competence like the one outlined in Sag & Wasow (2011) would be inherently limited if a purely mathematical stance were adopted.

(13.) A note is in order here: the fact that lexical derivations, as well as idioms beyond the word level, can be coined and thus fossilized does not mean that those fossilized linguistic elements, of arbitrary complexity, were not derived at some point in the diachrony of a language L. The work on proleptic names by Trejo (2013), carried out under Radically Minimalist assumptions, provides good arguments in favor of this approach.

(14.) Not to mention, for example, these kinds of claims: "Legate's discovery that the left periphery of Warlpiri looks like Rizzi's left periphery for Italian (and Cable's for Tlingit) would have been the topic of an hour on NPR Science Friday" (slide 72). The Left Periphery, and the whole array of functional projections that cartographic approaches assume, did not pre-exist Rizzi's work as theory-independent objects of the world (or, at least, it has not been adequately proven otherwise); consequently, it all falls within the realm of "invention." This does not mean these constructs are wrong (that has to be argued for independently); they are just not discoveries.

(15.) If a formal axiomatic system contains every p and its negation ~p, it will be complete, but necessarily inconsistent: by the principle of explosion, any formula whatsoever becomes derivable. Such a system is also useless for scientific purposes, for it would impose no restrictions.
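The step from inconsistency to triviality is the classical principle of explosion (ex falso quodlibet): given both p and ~p, any q whatsoever follows. The point can be stated as a one-line derivation, here sketched in Lean 4 syntax:

```lean
-- Explosion: from a proof of p and a proof of ¬p, derive an arbitrary q.
-- Hence a system proving some p together with ¬p proves everything:
-- trivially "complete," but inconsistent and empirically unrestrictive.
example (p q : Prop) (hp : p) (hnp : ¬p) : q :=
  absurd hp hnp
```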


Abels, Klaus (2003), Successive Cyclicity, Anti-locality, and Adposition Stranding. Ph.D. thesis. University of Connecticut, Storrs.

Austin, John L. (1962), How to do Things with Words. Oxford: Clarendon Press.

Behme, Christina (2012), "A Potpourri of Chomskyan Science." Retrieved on 10/6/2013 from

Behme, Christina (2014), Evaluating Cartesian Linguistics: From Historical Antecedents to Computational Modeling. Frankfurt am Main: Peter Lang.

Behme, Christina (2015), "Is the Ontology of Biolinguistics Coherent?," Language Sciences 47: 32-42.

Boeckx, Cedric (2008), Bare Syntax. Oxford: Oxford University Press.

Boeckx, Cedric (2010), Language in Cognition: Uncovering Mental Structures and the Rules behind Them. Malden: Wiley-Blackwell.

Boeckx, Cedric, & Kleanthes K. Grohmann (2007a), "Putting Phases in Perspective," Syntax 10: 204-222.

Boeckx, Cedric, & Kleanthes K. Grohmann (2007b), "The Biolinguistics Manifesto," Biolinguistics 1: 1-8.

Chomsky, Noam (1957), Syntactic Structures. The Hague: Mouton de Gruyter.

Chomsky, Noam (1965), Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.

Chomsky, Noam (1966), Cartesian Linguistics: A Chapter in the History of Rationalist Thought. New York: Harper & Row.

Chomsky, Noam (1980), Rules and Representations. New York: Columbia University Press.

Chomsky, Noam (1988), Language and Problems of Knowledge. Cambridge, MA: MIT Press.

Chomsky, Noam (1995), The Minimalist Program. Cambridge, MA: MIT Press.

Chomsky, Noam (1999), "Derivation by Phase," Ms., MIT.

Chomsky, Noam (2000), New Horizons in the Study of Language and Mind. Cambridge: Cambridge University Press.

Chomsky, Noam (2005a), "Three Factors in Language Design," Linguistic Inquiry 36: 1-22.

Chomsky, Noam (2005b), "On Phases." Ms. MIT.

Chomsky, Noam (2007), "Approaching UG from Below," in Uli Sauerland and Hans-Martin Gärtner (eds.), Interfaces + Recursion = Language? Chomsky's Minimalism and the View from Syntax-Semantics. Berlin: Mouton de Gruyter, 1-29.

Chomsky, Noam (2012a), The Science of Language: Interviews with James McGilvray. Cambridge: Cambridge University Press.

Chomsky, Noam (2013), "Problems of Projection," Lingua 130: 33-49.

Collier, Peter, & David Horowitz (2004), The Anti-Chomsky Reader. San Francisco, CA: Encounter Books.

Davidson, Donald (1967), "Truth and Meaning," Synthese 17(3): 304-323.

Deacon, Terrence W. (1997), The Symbolic Species: The Co-Evolution of Language and the Brain. New York: W.W. Norton.

Dehaene, Stanislas (2011), Space, Time and Number in the Brain: Searching for the Foundations of Mathematical Thought. London: Academic Press.

DeLancey, Scott (2001), Lectures on Functional Syntax. University of Oregon.

Di Sciullo, Anna-Maria, & Cedric Boeckx (eds.) (2011), The Biolinguistic Enterprise: New Perspectives on the Evolution and Nature of the Human Language Faculty. Oxford: Oxford University Press.

Di Sciullo, Anna-Maria (2015), "On the Asymmetry of Language and Genes: L-words and DNA Words," Czech and Slovak Linguistic Review, Special Issue "Informational Fundamentals of Life: Genomes and Languages": 10-31.

Everett, Daniel (2005), "Cultural Constraints on Grammar and Cognition in Pirahã," Current Anthropology 46(4): 621-646.

Fawcett, Robin (2010), A Theory of Syntax for Systemic Functional Linguistics (Current Issues in Linguistic Theory 206). Amsterdam: Benjamins.

Fodor, Jerry (1970), "Three Reasons for Not Deriving 'Kill' from 'Cause to Die,'" Linguistic Inquiry 1(4): 429-438.

Fontaine, Lise (2012), Analysing English Grammar: A Systemic Functional Introduction. Cambridge: Cambridge University Press.

Frege, Gottlob (1884), Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den Begriff der Zahl. Breslau: W. Koebner.

Frege, Gottlob (1892), "Über Sinn und Bedeutung," in Zeitschrift für Philosophie und philosophische Kritik NF 100: 25-50. Also in: Gottlob Frege: Funktion, Begriff, Bedeutung. Fünf logische Studien. Göttingen: Vandenhoeck & Ruprecht, 1962, 38-63.

Goldberg, Adele (2006), Constructions at Work. The Nature of Generalization in Language. Oxford: Oxford University Press.

Graben, Peter, Dimitris Pinotsis, J. Douglas Saddy, & Roland Potthast (2008), "Language Processing with Dynamic Fields," Cognitive Neurodynamics 2: 79-88.

Hameroff, Stuart, & Roger Penrose (2014), "Consciousness in the Universe: A Review of the 'Orch OR' Theory," Physics of Life Reviews 11(1): 39-78.

Hauser, Marc D., Noam Chomsky, & W. Tecumseh Fitch (2002), "The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?," Science 298: 1569-1579.

Hendrick, Randall (ed.) (2003), Minimalist Syntax. Oxford: Blackwell.

Hornstein, Norbert (2000), Move! A Minimalist Theory of Construal. Oxford: Blackwell.

Hornstein, Norbert (2003), "On Control," in Hendrick (ed.), 6-81.

Hornstein, Norbert, Kleanthes K. Grohmann, & Jairo Nunes (2005), Understanding Minimalism. New York: Cambridge University Press.

Huang, C.-T. James (1982), Logical Relations in Chinese and the Theory of Grammar. PhD dissertation, MIT.

Idsardi, William, & Eric Raimy (2013), "Three Types of Linearization and the Temporal Aspects of Speech," in T. Biberauer and I. Roberts (eds.), Principles of linearization. Berlin: Mouton de Gruyter, 31-56.

Jackendoff, Ray (1987), "The Status of Thematic Relations in Linguistic Theory," Linguistic Inquiry 18: 369-412.

Jackendoff, Ray (1990), Semantic Structures. Cambridge, MA: MIT Press.

Jackendoff, Ray (1997), The Architecture of the Language Faculty. Cambridge, MA: MIT Press.

Jackendoff, Ray (2002), Foundations of Language. Oxford: Oxford University Press.

Jackendoff, Ray (2011), "Conceptual Semantics," in Claudia Maienborn, Klaus von Heusinger, and Paul Portner (eds.), Semantics: An International Handbook of Natural Language Meaning, Vol. 1. Berlin: De Gruyter Mouton. 688-709.

Katz, Jerrold (1981), Language and Other Abstract Objects. Totowa, NJ: Rowman and Littlefield.

Katz, Jerrold (2004), Sense, Reference, and Philosophy. New York: Oxford University Press.

Katz, Jerrold, & Paul M. Postal (1991), "Realism vs. Conceptualism in Linguistics," Linguistics and Philosophy 14: 515-554.

Kayne, Richard (1994), The Antisymmetry of Syntax. Cambridge, MA: MIT Press.

Kosta, Peter (2011), "Causatives and Anti-Causatives, Unaccusatives and Unergatives: Or How Big is the Contribution of the Lexicon to Syntax," in Kosta, P. & L. Schürcks (eds.), Formalization of Grammar in Slavic Languages. Frankfurt am Main: Peter Lang, 235-296.

Kosta, Peter, and Diego Krivochen (2012), "Book Review: The Biolinguistic Enterprise: New Perspectives on the Evolution and Nature of the Human Language Faculty," Anna-Maria Di Sciullo and Cedric Boeckx (eds). Oxford: Oxford University Press. International Journal of Language Studies 6(4): 154-182.

Kosta, Peter, and Diego Krivochen (2014), "Flavors of Movement: Revisiting A vs. A'," in Kosta, Peter et al. (eds.), Minimalism and Beyond: Radicalizing the Interfaces. Amsterdam: John Benjamins, 251-282.

Krivochen, Diego (2011), "An Introduction to Radical Minimalism I: On Merge and Agree," IBERIA 3(2): 20-62.

Krivochen, Diego (2012a), The Syntax and Semantics of the Nominal Construction. Frankfurt am Main: Peter Lang.

Krivochen, Diego (2012b), "Towards a Geometrical Syntax." Ms., Universidad Nacional de La Plata.

Krivochen, Diego (2015a), "On Phrase Structure Building and Labeling Algorithms: Towards a Non-uniform Theory of Syntactic Structures," The Linguistic Review 32(3): 515-572.

Krivochen, Diego (2015b), "Types vs. Tokens: Displacement Revisited," Studia Linguistica. Early Access. DOI: 10.1111/stul.12044

Krivochen, Diego, and Peter Kosta (2013), Eliminating Empty Categories: A Radically Minimalist view on the Ontology and Justification. Frankfurt am Main: Peter Lang.

Laka, Itziar (2009), "What Is There in Universal Grammar? On Innate and Specific Aspects of Language," in M. Piattelli-Palmarini, J. Uriagereka, & P. Salaburu (eds.), Of Minds and Language: A Dialogue with Noam Chomsky in the Basque Country. Oxford: Oxford University Press, 329-343.

Langendoen, D. Terrence (1966), "A Note on the Linguistic Theory of M. Terentius Varro," Foundations of Language 2(1): 33-36.

Langendoen, D. Terrence, & Paul Postal (1985), "Sets and Sentences," in Jerrold Katz (ed.), The Philosophy of Linguistics. Oxford: Oxford University Press, 227-248.

Lasnik, Howard, Juan Uriagereka, & Cedric Boeckx (2005), A Course in Minimalist Syntax. Oxford: Blackwell.

Leivada, Evelina (2014), "From Comparative Languistics to Comparative (Bio)linguistics: Reflections on Variation," Biolinguistics 8: 53-66.

Leung, Tommi (2010), "On the Mathematical Foundations of Crash-Proof Grammars," in Michael T. Putnam (ed.), Exploring Crash-Proof Grammars. Amsterdam: John Benjamins, 213-244.

Marr, David (1981), Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. Cambridge, MA: The MIT Press.

Martin, Roger, & Juan Uriagereka (2014), "Chains in Minimalism," in Kosta, Peter et al. (eds.), Minimalism and Beyond: Radicalizing the Interfaces. Amsterdam: John Benjamins, 169-194.

Mateu Fontanals, Jaume (2000a), "Why Can't We Wipe The Slate Clean? A Lexical Syntactic Approach to Resultative Constructions." Universitat Autonoma de Barcelona, Departament de Filologia Catalana.

Mateu Fontanals, Jaume (2000b), "Universals of Semantic Construal for Lexical Syntactic Relations." Universitat Autonoma de Barcelona. Departament de Filologia Catalana.

Mateu Fontanals, Jaume (2002), Argument Structure. Relational Construal at the Syntax-Semantics Interface. Dissertation. Bellaterra: UAB. Retrieved from http://

Mateu Fontanals, Jaume (2008), "On the l-syntax of Directionality/Resultativity," in Asbury, Anna, Jakub Dotlacil, Berit Gehrke, & Rick Nouwen (eds.), Syntax and Semantics of Spatial P. Amsterdam: John Benjamins, 221-250. DOI: 10.1075/la.120.11mat

Mendívil Giró, José Luis (2006), "Biolingüística: qué es, para qué sirve y cómo reconocerla," Revista Española de Lingüística 35(2): 603-623.

Moss, Helen, Lorraine Tyler, & Kristen Taylor (2007), "Conceptual structure," in G. Gaskell (ed.), Oxford Handbook of Psycholinguistics. Oxford: Oxford University Press, 217-234.

Murphy, Elliot (2015), "Labels, Cognomes, and Cyclic Computation: An Ethological Perspective," Frontiers in Psychology 6: 715. DOI: 10.3389/fpsyg.2015.00715

Newmeyer, Frederick J. (2003), "Review article: On Nature and Language by Noam Chomsky; The Language Organ: Linguistics as Cognitive Physiology by Stephen R. Anderson & David W. Lightfoot; Language in a Darwinian Perspective by Bernard H. Bichakjian," Language 79: 583-600.

Penrose, Roger (1994), Shadows of the Mind. Oxford: Oxford University Press.

Penrose, Roger (1997), "Physics and the Mind," in M. Longair (ed.), The Large, the Small and the Human Mind. Cambridge: Cambridge University Press, 93-143.

Pesetsky, David (2013), "Chto delat'? What Is to Be Done?" Slides from plenary talk, January 4, LSA meeting.

Pesetsky, David, & Esther Torrego (2000), "T-to-C Movement: Causes and Consequences," in Kenstowicz, M. (ed.), Ken Hale: A Life in Language. Cambridge, MA: MIT Press.

Pesetsky, David, & Esther Torrego (2004), "The Syntax of Valuation and the Interpretability of Features," Ms. MIT / UMass Boston.

Pesetsky, David, & Esther Torrego (2007), "Probes, Goals and Syntactic Categories," in Proceedings of the 7th annual Tokyo Conference on Psycholinguistics, Keio University, Japan.

Piattelli-Palmarini, Massimo, & Giuseppe Vitiello (forthcoming), "Linguistics and Many-body Physics." To appear in Physics of Life Reviews.

Postal, Paul M. (1971), "On Some Rules That Are Not Successive Cyclic," Linguistic Inquiry 3(2): 211-222.

Postal, Paul M. (2003), "Remarks on the Foundations of Linguistics," The Philosophical Forum 34: 233-251.

Postal, Paul M. (2004), Skeptical Linguistic Essays. New York: Oxford University Press.

Postal, Paul M. (2009), "The Incoherence of Chomsky's Biolinguistic Ontology," Biolinguistics 3(1): 104-123.

Postal, Paul M. (2012), "Chomsky's Foundational Admission." Retrieved on 5/12/2012

Prince, Alan, & Paul Smolensky (2004), Optimality Theory. Constraint Interaction in Generative Grammar. Oxford: Blackwell.

Putnam, Michael (ed.) (2010), Exploring Crash-Proof Grammars. LFAB Series. Pierre Pica & Kleanthes K. Grohmann (eds.). Amsterdam: John Benjamins.

Putnam, Michael, & Thomas S. Stroik (2011), "Syntax at Ground Zero," Linguistic Analysis 37(3/4): 389-405.

Putnam, Michael, & Antonio Fabregas (2014), "On the Need for Formal Features in the Narrow Syntax," in Kosta, Peter et al. (eds.), Minimalism and Beyond: Radicalizing the Interfaces. Amsterdam: John Benjamins, 56-78.

Reuland, Eric (2009), "Language, Symbolization and beyond," in R. Botha & C. Knight (eds.), The Prehistory of Language. Oxford: Oxford University Press, 295-331.

Reuland, Eric (2011), Anaphora and Language Design. Cambridge, MA: MIT Press.

Rizzi, Luigi (1990), Relativized Minimality. Cambridge, MA: MIT Press.

Roberts, Diana (1991) Linking Thematic Roles and Syntactic Arguments in HPSG. Ithaca, NY: Cornell University Press.

Ross, Haj (1983), "Human Linguistics." Ms., MIT.

Sag, Ivan, & Tom Wasow (2011), "Performance-compatible Competence Grammar," in R. Borsley & K. Borjars (eds.), Non-Transformational Syntax: Formal and Explicit Models of Grammar. Oxford: Wiley-Blackwell.

Schopenhauer, Arthur (1831), Eristische Dialektik: Die Kunst, Recht zu Behalten. Translation in A. Schopenhauer, Manuscript Remains in Four Volumes, Arthur Hubscher (ed.), E.F.J. Payne (tr.), Vol. III, "Berlin Manuscripts (1818-1830)." Berg: Oxford, New York, & Munich, 1989.

Schrödinger, Erwin (1935), "The Present Situation in Quantum Mechanics," Proceedings of the American Philosophical Society 124: 323-338.

Smolensky, Paul, & Geraldine Legendre (2006), The Harmonic Mind. 2 Vols. Cambridge, MA: MIT Press.

Sperber, Dan, & Deirdre Wilson (1986/1995), Relevance: Communication and Cognition. Oxford: Blackwell.

Stroik, Thomas, & Michael Putnam (2013), The Structural Design of Language. Oxford: Oxford University Press.

Taylor, Kristen, Barry Devereux, & Lorraine Tyler (2011), "Conceptual Structure: Towards an Integrated Neurocognitive Account," Language and Cognitive Processes 26(9): 1368-1401.

Tegmark, Max (2003), "Parallel Universes," Scientific American Magazine May: 41-51.

Tegmark, Max (2007), "The Mathematical Universe Hypothesis," in Foundations of Physics. Retrieved on 22/4/2009 from

Travis, Lisa (1984), Parameters and Effects of Word Order Variation. PhD Diss., MIT.

Trejo, Malena (2013), Prolepticidad de los Nombres Propios: un estudio en Historia Apollonii Regis Tyri. BA Thesis, Universidad Nacional de La Plata.

Uriagereka, Juan (1998), Rhyme and Reason. Cambridge, MA: MIT Press.

Uriagereka, Juan (2000), "Some Thoughts on Economy within Linguistics," D.E.L.T.A. 16: 221-243.

Uriagereka, Juan (2002), Derivations: Exploring the Dynamics of Syntax. London: Routledge.

Uriagereka, Juan (2005), "A Markovian Syntax for Adjuncts." Ms., UMD.

Uriagereka, Juan (2008), Syntactic Anchors: On Semantic Restructuring. Cambridge: Cambridge University Press.

Uriagereka, Juan (2012), Spell-Out and the Minimalist Program. Oxford: Oxford University Press.

Vicente, Luis (2009), "A Note on the Copy vs. Multidominance Theories of Movement," Catalan Journal of Linguistics 8: 1-23.

Watumull, Jeffrey (2013), "Biolinguistics and Platonism: Contradictory or Consilient?," Biolinguistics 7: 301-315.

Watumull, Jeffrey, Marc Hauser, Ian Roberts, & Norbert Hornstein (2014), "On Recursion," Frontiers in Psychology: Language Sciences 4: 1-7. DOI: 10.3389/fpsyg.2013.01017

Wilson, Andrew, & Sabrina Golonka (2013), "Embodied Cognition Is Not What You Think It Is," Frontiers in Psychology: Cognitive Science 4: 58.

Wittgenstein, Ludwig (1953), Philosophische Untersuchungen. G. Anscombe (tr.). Oxford: Blackwell.

Wurmbrand, Susi (2014), "The Merge Condition," in Kosta, Peter et al. (eds.), Minimalism and Beyond: Radicalizing the Interfaces. Amsterdam: John Benjamins, 130-167.


School of Psychology and Clinical Language Sciences, University of Reading
COPYRIGHT 2016 Addleton Academic Publishers

Article Details
Pages: 136-166
Author: Krivochen, Diego Gabriel
Publication: Linguistic and Philosophical Investigations
Article Type: Report
Date: Jan 1, 2016
