Numerous linguistic phenomena have been shown to correlate with logical properties of the sentence in which they occur, such as which logical entailments the sentence supports, or what the logical relation is between it and some other sentence. One such phenomenon is the acceptability of negative polarity items (NPIs): NPIs are linguistic expressions such as any, whose acceptability has been linked to a class of logical entailments supported by the sentence in which they occur (Fauconnier, 1975b; Ladusaw, 1979, a.o.). More specifically, NPIs are acceptable (= licensed) in downward-monotonic environments: these are environments in which inferences from supersets to subsets are valid. For instance, the NPI any is licensed in the sentence ‘Harry didn’t eat any pie’: note that this sentence entails ‘Harry didn’t eat pumpkin pie’ (in other words, the inference from superset to subset is valid). On the other hand, the NPI any is not licensed in the sentence ‘Harry ate (*any) pie’: note that this sentence does not entail ‘Harry ate pumpkin pie’ (in other words, the inference from superset to subset is not valid).
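The notion of downward monotonicity invoked here can be stated formally; the following is one standard formulation, given here for concreteness:

```latex
% A function f is downward monotonic (downward entailing) iff it
% reverses entailment: whenever A entails B, f(B) entails f(A).
\[
f \text{ is downward monotonic} \iff
\forall A, B : (A \Rightarrow B) \rightarrow (f(B) \Rightarrow f(A))
\]
% Example: negation. With A = `eat pumpkin pie' and B = `eat pie',
% A entails B, and `Harry didn't eat pie' entails `Harry didn't eat
% pumpkin pie'; hence negation creates a downward-monotonic
% environment, in which the NPI `any' is licensed.
```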
Another such phenomenon is scalar implicatures: in certain cases, when two sentences stand in the entailment relation, the use of the logically weaker sentence systematically triggers the inference that the logically stronger sentence is false (for instance, the sentence ‘John ate a cookie or a muffin’ triggers the inference that the logically stronger ‘John ate a cookie and a muffin’ is false) (Grice, 1975; Sauerland, 2004; van Rooij and Schulz, 2004; Schulz and van Rooij, 2006; Spector, 2006, 2007; Chierchia et al., 2012; Franke, 2011; Bergen et al., 2016, a.o.).
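Schematically, the reasoning can be rendered as follows (a standard Gricean schema, stated here for concreteness): if an alternative $S'$ asymmetrically entails the uttered sentence $S$, then asserting $S$ implicates that $S'$ is false.

```latex
% Scalar implicature schema: S' is strictly stronger than S,
% and asserting the weaker S implicates the negation of S'.
\[
S' \Rightarrow S, \quad S \not\Rightarrow S'
\qquad \leadsto \qquad
\text{asserting } S \text{ implicates } \neg S'
\]
% Example: S  = `John ate a cookie or a muffin'  (weaker, with or),
%          S' = `John ate a cookie and a muffin' (stronger, with and);
% asserting S implicates that S' is false, i.e. John did not eat both.
```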
The existence of the correlation between logical properties of sentences and certain linguistic phenomena raises the question of whether these logical properties play a causal role at the cognitive level in these linguistic phenomena. By logical properties playing a causal role at the cognitive level we mean that the connection between logical properties and the linguistic phenomena in question is part of the linguistic competence of the speaker: the speaker needs to compute the logical properties in order to assess the relevant linguistic phenomena. There are two broad options here.
Option 1 is that logical properties of sentences do not play a causal role at the cognitive level in the linguistic phenomena in question: according to this view, the connection of linguistic phenomena with logical properties would be ‘external’ to the speaker’s competence (and would have to be explained differently, in terms of language evolution, for instance). To give a concrete example, in the case of NPIs, it has been argued that they are acceptable when a syntactic relation has been established between them and certain operators such as negation, which happen to create a downward-monotonic environment for the NPI; on this view, however, it is the syntactic relation, rather than the downward monotonicity of the environment, which plays a causal role in the licensing of NPIs (Guerzoni, 2006; Szabolcsi, 2004, a.o.).
Option 2 is that the logical correlates play a causal role at the cognitive level in these linguistic phenomena. If Option 2 is on the right track, that opens up a major question about the architecture of language and cognition: at what level do calculations of these logical properties occur? There are two broad possibilities here: these logical properties could be computed grammar-internally, or these computations could be post-grammatical. Let us explain what we mean by this.
Numerous authors have argued, based on different linguistic phenomena, that there is a level of representation of the ‘logical’ meaning of a sentence, which is computed by a formal system that has access neither to contextual knowledge nor to the meaning of lexical (roughly, open-class) categories (see Chierchia, 1984; Fox, 2000; Gajewski, 2002; Fox and Hackl, 2006; Chierchia, 2013 for arguments in favor of this view, as well as Dowty, 1979; Barwise and Cooper, 1981; Ladusaw, 1986; von Fintel, 1993; Rullmann, 1995; Chierchia, 2004, 2006; Menendez-Benito, 2005, a.o., for relevant linguistic phenomena). In other words, this formal system computes the meaning of a sentence based on its ‘logical’ (roughly, closed-class) vocabulary, such as conjunctions, negation, quantifiers, prepositions, etc. Let us call this formal system, which does not have access to contextual knowledge or the meaning of lexical categories, ‘grammar’.
The first possibility is that the logical properties which have a causal role in the linguistic phenomena are calculated within this formal system, i.e. within grammar. Let us call this possibility Option 2a: grammar-internal. For instance, in the case of NPIs, there could exist a mechanism in grammar which evaluates monotonicity properties of linguistic environments, and the result of this calculation plays a causal role in NPI licensing. The second possibility is that the logical properties which have a causal role in the linguistic phenomena are calculated post-grammatically. Let us call this possibility Option 2b: post-grammatical. For instance, it could be that logical entailments of a sentence are evaluated outside of grammar, but that the result of this evaluation is then available to the procedure which determines the licensing of NPIs.
Deciding whether the logical correlates of NPI licensing and of scalar implicatures (mostly entailment patterns and monotonicity) play a causal role in these linguistic phenomena, and, if so, at what level these logical properties are calculated, is important both for our understanding of these specific linguistic phenomena and, more generally, for our understanding of which computations can be performed grammar-internally and which kinds of post-grammatical computations affect language.
A first argument comes from Chemla et al. (2011). In this study, they ask how well subjective inferential judgments of monotonicity predict NPI acceptability judgments. These inferential judgments are naturally understood as post-grammatical computations: they might be affected by the specific predicates in the sentence, by different cognitive and contextual constraints, etc. Note further that people are known to have difficulties reporting monotonicity properties of sentences (see for instance Geurts and van der Slik, 2005). It is thus non-trivial that one’s inferential judgments of monotonicity should predict one’s NPI acceptability judgments. Yet this is precisely what Chemla et al. (2011) find: the NPI judgments of a particular participant correlate best with that same participant’s inferential judgments of monotonicity, suggesting that the computations involved in NPI licensing occur at the same, post-grammatical level as the computations participants use to produce their inferential judgments.
Crnič (2014) argues that the difference between (1a) and (1b) can be explained if general world knowledge, according to which congressmen are likely to read books and not likely to kill people, somehow finds its way into NPI licensing. That inferential judgments and contextual knowledge play a role in NPI licensing can be taken as an argument that, if the monotonicity of the environment does indeed play a causal role in NPI licensing, it is calculated post-grammatically. Many challenges remain, however, both for the weak view according to which the monotonicity properties of the environment play a causal role in NPI licensing (Option 2), with no commitment as to how these properties are calculated, and for the stronger view according to which these monotonicity properties are calculated post-grammatically (Option 2b: post-grammatical). Studies reported in Part II of this thesis address some of these challenges.
I Thesis overview