One Word, Many Concepts (to appear in the Oxford Handbook of Contemporary Philosophy of Language, E. Lepore and U. Stojnić, eds.).
 Polysemy
        runs deep. But if polysemy runs deep, meanings do not map
        contexts to extensions--and sentences do not express
        propositions, or mappings from contexts to propositions--not
        even if each sense of a polysemous expression maps contexts (of
        using the expression with that sense) to extensions or
        proposition-constituents. A word like 'window', which has more
        than one sense, can be used neutrally, without expressing any
        particular one of those senses.
      
Fostering Liars (Topoi 40:5-25, 2021)
        This paper--like I-Languages and T-sentences, listed below--is
        about Liar Paradoxes and their bearing on truth-conditional
        semantics.
Meanings via Syntactic Structures (Syntactic Structures after 60 Years, edited by N. Hornstein et al., De Gruyter Mouton 2017)
        This short essay was prompted by teaching Syntactic Structures in an
        undergraduate course, and paying attention to the (often
        ignored) remarks about meaning.
Semantic Internalism (The Cambridge Companion to Chomsky,
        edited by Jim McGilvray, CUP 2017)
        This essay discusses some of Chomsky's views about meaning,
        contrasting them with some of Putnam's claims in "The Meaning of
        'Meaning' ".
       
I-Languages and T-sentences
        This paper, about the relevance of Liar Paradoxes for truth
        conditional semantics, and the paper below are companions.
        Bottom line for this one: sentences of a human language don't
        have truth conditions. No sentence of a human language is true.
        The previous sentence isn't true, and neither is this one. 'Snow is white' isn't true,
        and neither is "'Snow is white' is true if and only if snow is white."
Framing Event Variables
        Slides for the talk, at a conference in Erfurt, can be found here.
        This paper is about the relevance of puzzles concerning event
        individuation for semantics. Bottom line: event analyses of
        'Alvin chased Theodore' are good; truth-theoretic construals of
        such analyses are bad. Together with the paper above, and Meaning Before Truth listed below, the
        larger conclusion is that Davidsonian conceptions of meaning are
        in big trouble. Even bracketing concerns about specific
        constructions, and focusing on cases that are supposed to
        motivate truth conditional semantics, foundational problems
        quickly emerge if you focus on truth, predication, or reference.
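        For readers unfamiliar with the notation, the kind of event
        analysis at issue can be glossed roughly as follows (my sketch,
        not the paper's own formulation):
            ∃e[chasing(e) & Agent(e, Alvin) & Patient(e, Theodore)]
        The complaint is directed at truth-theoretic construals of such
        conjunctive event descriptions, not at the event analysis itself.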
      
Concepts, Meanings, and Truth: First Nature,
          Second Nature, and Hard Work (Mind and Language 25: 247-78, 2010)
        The idea is that lexical expressions of a human I-language let
        children use available concepts to introduce formally distinct
        "I-concepts," which can then be combined
        via operations invoked by phrasal syntax. So while
        "prelexical" concepts may not exhibit the kind of systematicity
        required for truth, I-concepts do. But various empirical
        considerations suggest that I-concepts are massively
        monadic, and that the relevant "I-operations" are fundamentally
        conjunctive. This, I claim, makes it implausible that I-concepts
        are true of language-independent things. Meanings can be viewed
        as instructions to assemble concepts that make it possible for
        humans to have truth-evaluable thoughts. But forming such
        concepts requires independent cognitive work, not just a
        language with a compositional semantics. This paper, which
        abstracts from the technical details, forms a pair with Minimal Semantic
          Instructions (listed under Compositional Semantics).
Meaning Before Truth (Contextualism in Philosophy, edited by G. Preyer and G. Peter, OUP 2005).
        
        This paper extends the line of thought in "The Character of
        Natural Language Semantics." A running theme is that Chomsky
        offers a conception of semantics that lets us preserve what is
        right about truth-conditional semantics--and this has less to do
        with truth than the usual rhetoric suggests--while also
        preserving late-Wittgensteinian/Austinian insights about the
        relation between truth, meaning, and context. There are three
        main sections: one about the relevance of negative facts (and
        nativism) for semantics, and why this tells against both
        "deflationary" conceptions of meaning and Quine-Davidson
        "interpretability" conceptions; one that reviews some familiar
        reasons for rejecting the hypothesis that names denote things in
        the environment; and one that concedes externalism about truth,
        while noting that externalism about linguistic meaning does not
        follow. The paper ends with a brief tour of some alternatives,
        and some familiar reasons for rejecting the hypothesis that
        predicates are satisfied by things in the environment. A handout elaborates this line of
        thought (in a handouty way). 
       
Semantic Types: Two is Better than Too Many
        (New Frontiers in Artificial Intelligence, edited by M. Sakamoto et al., Springer LNCS/LNAI 12331, 2020; LENLS-16 conference proceedings.)
      In this paper and the one below, I
            discuss the motivations and prospects for the very spare
            semantic typology employed in Conjoining
                  Meanings.  
          
Semantic Typology and Composition
        (The Science of Meaning, edited by B. Rabern and D. Ball, OUP 2018).
      It is
        often said that expressions of a human language include (i)
        truth-evaluable sentences of a basic semantic type <t>,
        (ii) entity designators of a basic semantic type <e>, and
        (iii) unsaturated expressions whose semantic types are
        characterized by the recursive principle "if <α> and
        <β> are types, so is <α, β>." I think this
        hypothesis is wrong in three respects.  
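        For orientation (my illustration of the standard notation, not a
        quotation): starting from the basic types, the recursive
        principle generates endlessly many derived types, e.g.
            <e,t>                 from <e> and <t> (standardly, intransitive predicates)
            <e,<e,t>>             from <e> and <e,t> (transitive predicates)
            <<e,t>,<<e,t>,t>>     from <e,t> and <<e,t>,t> (quantificational determiners)
        As the title of the paper above suggests, I think a much sparser
        typology--without this recursion--is preferable.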
       
Minimal Semantic Instructions (in
        the Oxford Handbook of
          Linguistic Minimalism, edited by Cedric Boeckx, 2011).
        This is an attempt to work out, for a range of basic constructions,
        the idea of meanings as "instructions to assemble conjunctive
        concepts." This paper, mainly devoted to technical details and
        minimalist reasoning, forms a pair with Concepts,
          Meanings, and Truth: First Nature, Second Nature, and Hard
          Work (listed under Semantic
          Internalism). And with regard to the syntactic details, I
        draw on the paper below.
Interrogatives, Instructions, and
          I-languages: an I-Semantics for Questions, coauthored with
        Terje Lohndal (Linguistic
          Analysis 37:459-510, 2011). 
        The basic idea is simple: an "instructionist" conception of
        meaning, along lines developed in the paper above, can easily
        accommodate an attractive internalist version of the old
        force/content distinction; and there are interesting
        implications for the syntax/semantics of relative clauses and
        "sentential" expressions. I never intended to have views
        about--much less write a paper about--interrogatives. But my
        co-author was persuasive. 
       
Describing I-junction
        (In Language and Value, edited by J. Yi and E. Lepore).
        The meaning of a noun phrase like 'brown cow', or 'cow that ate
        grass', is somehow conjunctive. But conjunctive in what sense?
        Are the meanings of other phrases--e.g., 'ate quickly', 'ate
        grass', and 'at noon'--similarly conjunctive? I suggest a
        possible answer, in the context of a broader conception of
        natural language semantics.
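        As a first-pass illustration of the relevant sense of
        'conjunctive' (my gloss, using familiar notation):
            'brown cow':    applies to x iff Brown(x) & Cow(x)
            'ate quickly':  applies to e iff Ate(e) & Quick(e)
        The question is whether cases like 'ate grass' and 'at noon' are
        conjunctive in this same sense--and if so, what the conjoined
        predicates are predicates of.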
 Small
          Verbs, Complex Events: Analyticity without Synonymy 
        (in Chomsky and His Critics, edited [heroically] by
        Louise Antony and Norbert Hornstein, Blackwell 2003)
        You may need to "Rotate View, Clockwise" to get the .pdf file to
        appear properly.
        This paper was written in 1998, and so may be past
          its use-by date. Updated versions of various bits of the
        paper appear elsewhere; see note 1.
        More Truth in Advertising: I'm not criticizing Chomsky, though I
        am being critical, and Chomsky does figure prominently.
        The idea, as the subtitle suggests, is that there are analytic
        truths--even if the notion of synonymy is suspect. The trick
        involves (can you guess?) combining, in the right way, a
        neo-Davidsonian event semantics with a Minimalist syntax.
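        A standard sort of case that such a view targets (my
        illustration, not necessarily the paper's own example): on an
        event analysis, 'Alvin boiled the soup' entails 'The soup
        boiled' as a matter of logical form, via conjunct reduction,
        without any claim that the two sentences are synonymous. Roughly:
            ∃e[Agent(e, Alvin) & Boiling(e) & Theme(e, the soup)]
              entails  ∃e[Boiling(e) & Theme(e, the soup)]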
        Blatant Advertising: get hold of the entire book if only for
        Chomsky's replies; for anyone interested in Chomsky's conception of
        meaning (and his semantic
          internalism), see especially his replies to Egan, Rey,
        Ludlow, Horwich, and Pietroski. 
       
On Explaining That (Journal of Philosophy 97: 655-62, 2000)
        How can a speaker explain that P without explaining the fact
        that P, or explain the fact that P without explaining that P,
        even when it is true (and so a fact) that P? Or in formal mode:
        what is the semantic contribution of 'explain' such that 'She
        explained that P' can be true, while 'She explained the fact
        that P' is false (or vice versa), even when 'P' is true?
        The proposed answer is that 'explained' is a semantically
        monadic predicate, satisfied by events of explaining. But 'the
        fact that P' (a determiner phrase) and 'that P' (a
        complementizer phrase) get associated with different thematic
        roles, corresponding to the distinction between a thing
        explained and the content of a speech act.
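        In rough notation (mine; the paper's thematic labels may
        differ), the proposal has the following shape:
            'She explained that P':           ∃e[explaining(e) & Agent(e, she) & Content(e, that P)]
            'She explained the fact that P':  ∃e[explaining(e) & Agent(e, she) & Theme(e, the fact that P)]
        Since the two thematic roles can be satisfied independently, one
        claim can be true while the other is false, even when 'P' is true.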
Does Every Sentence Like This Exhibit A Scope Ambiguity? coauthored with Norbert Hornstein
        (In Belief and Meaning, edited by W. Hinzen and H. Rott,
        Hänsel-Hohenhausen 2002)
        The answer is 'no'. Instances of 'every F likes some G' may not,
        after all, be examples of scope ambiguity. 
        Figuring out whether a given expression with multiple
        quantifiers is semantically ambiguous is hard.
    
Quantification
and
          Second-Order Monadicity (Philosophical
          Perspectives 17: 259-298, 2003).
        The first part of this paper reviews some developments regarding
        the apparent mismatch between the logical and grammatical forms
        of quantificational constructions like 'Pat kicked every
        bottle'. I suggest that (even given quantifier-raising) many
        current theories still posit an undesirable mismatch. But all is
        well if we can treat
        determiners (words like 'every', 'no', and 'most') as
        second-order monadic predicates without treating them as predicates satisfied
        by ordered pairs of sets.
        Drawing on George Boolos's construal of second-order
        quantification as plural quantification, I argue that we can and
        should view determiners as predicates satisfied (plurally) by
        ordered pairs each of which associates an entity with a
        truth-value (t or f). The idea is that 'every' is
        satisfied by some pairs iff every one of them associates its
        entity with t. It turns
        out that this provides a kind of explanation for the
        "conservativity" of determiners. And it lets us say that
        concatenation signifies predicate-conjunction even in phrases
        like 'every bottle' and 'no brown dog'.
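        Schematically (my gloss): for 'Pat kicked every bottle', let the
        relevant pairs be those that pair each bottle with t if Pat
        kicked it, and with f otherwise. Then, roughly:
            'every' applies to some pairs iff every one of them associates its entity with t
            'no'    applies to some pairs iff none of them associates its entity with t
            'most'  applies to some pairs iff most of them associate their entities with t
        Since the entities in the pairs are drawn from the restrictor
        ('bottle'), a determiner so construed cannot be sensitive to
        anything beyond the restricted domain--which is one way of
        putting the point about conservativity.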
To Be a
          Value of  a Plural Variable, You Don't Have to Be Plural
          (You Just Have to Be) 
        This is something between a handout and a paper. It focuses on
        an idea, acquired from George Boolos, discussed in the papers
        immediately above and below. For purposes of giving a
        compositional semantic theory for a natural language, we can and
        should allow for genuinely plural variables; where a genuinely
        plural variable is one that has more than one value relative to
        each assignment of values to variables.
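        For illustration (my example): relative to a single assignment
        A, a plural variable X might have Alvin, Theodore, and Simon as
        its values--
            A: X ↦ Alvin, Theodore, Simon   (one variable, three values)
        --with no set, or any other single entity that "collects" them,
        required.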
Induction
and
          Comparison (Maryland
          Working Papers in Linguistics, 15: 157-90, 2006)
        This speculative paper is an attempt to say why Frege's Theorem
        might bear, in interesting ways, on several issues in
        linguistics.
Function and Concatenation (in Logical Form, edited by G. Preyer and G. Peter, OUP 2002).
        Explores the idea that concatenating natural language
        expressions corresponds to predicate-conjunction, as opposed to
        function-application. The proposal is developed in more detail
        in Events and Semantic Architecture (OUP 2005). But the
        paper gives the main idea, in the context of questions about how
        natural language syntax is related to Logical Form.
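        To illustrate the contrast (a rough gloss in familiar notation,
        not the paper's own formulation):
            Function-application:   [[brown cow]] = [[brown]]([[cow]]), with 'brown' denoting a function from predicate-values to predicate-values
            Predicate-conjunction:  [[brown cow]] applies to x iff [[brown]] applies to x and [[cow]] applies to x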
       
Interpreting
Concatenation
          and Concatenates (Philosophical
          Issues 16:221-45, 2006).
        This paper presents a slightly modified version of the
        compositional semantics proposed in Events and Semantic Architecture.
        
        Some readers may find this shorter version, which ignores issues
        about vagueness and causal constructions, easier to digest. The
        emphasis is on the treatments of plurality and quantification,
        and I assume at least some familiarity with more standard
        approaches. Space constraints caused the final document to be
        considerably shorter than drafts with homophonous titles. The
        paper below (Systematicity
          via Monadicity) is a kind of companion piece, showing how
        to locate the proposed conception of semantic composition in the
        context of more general attempts to simplify (or "minimize")
        theories of linguistic competence, with the aim of isolating the
        distinctively human aspects of the human language faculty. There
        are points of contact with recent suggestions by Elizabeth
        Spelke and her colleagues; see also the BBS
paper
          by Peter Carruthers, my colleague in philosophy at
        Maryland.
Systematicity
          via Monadicity (Croatian
          Journal of Philosophy 7:343-374, 2007)
        This is the written version of a conference presentation in
        Dubrovnik (Fall 2006). I argue that a "Conjunctivist" conception
        of semantic composition, of the sort articulated in some of the
        papers above, helps explain many otherwise puzzling features of
        natural language. More speculatively, a Conjunctivist language
        faculty might also help explain why human thought is as
        systematic as it is.
Semantic Monadicity with Conceptual
          Polyadicity (In the Oxford
          Handbook of Compositionality, M. Werning, W. Hinzen,
        and E. Machery, eds., 2012). 
        Another paper in the same vein. 
Language and Conceptual Reanalysis (In Towards a Biolinguistic
          Understanding of Grammar: Essays on Interfaces, edited by A. DiSciullo,
        John Benjamins 2012).
        Like the paper above, but more detailed, and drawing some
        connections to Frege's notion of fruitful definitions. 
Here is a video of a 2014 talk in the Defining Cognitive Science series at Simon Fraser
          University.
        My thanks to my hosts, especially Endre Begby. In the talk, I discuss some
        of the findings reported in the papers below. There are also
        pictures of my collaborators.
Observers efficiently extract the min and max in perceptual magnitudes sets: evidence for a bipartite format.
        Authors: Darko Odic, Tyler Knowlton, Alexis Wellwood, Paul Pietroski, Jeff Lidz, and Justin Halberda.
        To appear in Psychological Science.
Psycholinguistic evidence for restricted quantification.
        Authors: Tyler Knowlton, Paul Pietroski, Alexander Williams, Justin Halberda, and Jeff Lidz.
        Natural Language Semantics 31:219–251 (2023).
      
Individuals and Ensembles and each versus every: linguistic framing affects performance in a change detection task.
        Authors: Tyler Knowlton, Justin Halberda, Paul Pietroski, and Jeff Lidz.
        Glossa Psycholinguistics 2, http://dx.doi.org/10.5070/G6011181 (2023).
The sentences 'Each of the dots is blue', 'Every one of the dots is blue', and 'All of the dots are blue' illustrate distinct ways of expressing universal generalizations. But do the meanings of the words for universal quantification differ? And if so, is the difference between first-order and second-order quantification relevant? Answers: yes and yes.
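A rough picture of the contrast (my gloss, not the papers' official formulations):
    first-order:   quantify over individuals: for each x such that x is one of the dots, x is blue
    second-order:  quantify over the dots taken together (as a group or set), relating that ensemble to the blue things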
The
          Meaning of 'Most': semantics, numerosity, and psychology:
        Paul Pietroski, Jeff Lidz, Justin Halberda, and Tim Hunter
        (Mind and Language,
        24:554-85, 2009). The title is descriptive. We offer
        experimental evidence in support of a certain view about how the
        meaning of the English determiner 'most' is related to various
        psychological capacities potentially relevant to human
        capacities for counting and quantifying. In this first
        installment of an ongoing project, we offer experimental
        evidence that adult speakers of English do indeed
        understand sentences like 'Most of the dots are blue' in
        terms of cardinality comparison (as opposed to, say, one-to-one
        correspondence). We also make some tentative suggestions about
        how the meaning of 'most' is related to potential verification
        procedures and the "analog magnitude system" that humans share
        with other animals.
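        In rough notation (my gloss of the contrast tested):
            cardinality comparison:     the number of blue dots exceeds the number of non-blue dots
            one-to-one correspondence:  the non-blue dots can be paired one-to-one with blue dots, with at least one blue dot left unpaired
        The reported evidence favors the cardinality-comparison reading
        of how speakers understand 'Most of the dots are blue'.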
Seeing What you Mean, Mostly.
          Authors: Paul Pietroski, Jeff Lidz, Justin Halberda, Tim
          Hunter, and Darko Odic
          (Syntax and Semantics:
            Experiments at the Interfaces, edited by J. Runner,
          37:187-224, 2011). 
          Another paper in the same vein, stressing that while our
          proposal is not a form of verificationism, meanings are
          related to verification strategies in empirically testable
          ways--at least with regard to "logical" vocabulary.
The Language Faculty coauthored with
        Stephen Crain, in The Handbook of Philosophy of Cognitive
          Science (edited by E. Margolis, S. Laurence, and S. Stich,
        OUP 2011). An essay on the language faculty, in keeping with the
        papers below, but also discussing some new material. 
       
Think of the Children (Australasian Journal of Philosophy 86:657-669, 2009). This was a critical notice of Michael Devitt's book, Ignorance of Language. Michael's reply, which you might want to look at, appeared in the same issue.
 Brass Tacks in Linguistic Theory
        coauthored with Stephen Crain and Andrea Gualmini
        (In The Innate Mind:
          structure and contents, edited by S. Laurence, P.
        Carruthers, and S. Stich, 175-197, Oxford University Press,
        2005).
        Yes, still arguing for innate constraints on linguistic
        meanings. Here, we discuss in more detail some of the individual
        phenomena addressed in other papers. And we're not replying to
        anyone in particular. 
 Innate Ideas coauthored with
        Stephen Crain, in The Cambridge Companion to Chomsky
        (edited by James McGilvray, 164-180, Cambridge Univ. Press
        2005). You may need to "Rotate View, Clockwise" to get the .pdf
        file to appear properly.
        A more general discussion of innateness and universal grammar,
        in the context of Chomsky's version of rationalism. 
        Some of the examples mentioned here are discussed in more detail
        in the other papers. 
       
Why Language Acquisition is a Snap
        coauthored with Stephen Crain (Linguistic Review, 19:
        163-83, 2002). 
        Presents additional empirical arguments for universal grammar in
        reply to a target article by Pullum and Scholz. The main
        arguments concern a cluster of semantic phenomena involving
        downward entailment, negative polarity, and the "pragmatic"
        implicature associated with disjunctive claims.  
Nature, Nurture, and Universal Grammar
        coauthored with Stephen Crain (Linguistics and Philosophy
        24: 139-86, 2001). 
        Discusses the logic of "poverty of stimulus" arguments and some
        specific empirical premises, concerning both adults and
        children, in reply to recent empiricist conceptions of language
        acquisition--with particular focus on Cowie's book What's
          Within.
       
       Twentieth
          Century Papers 
 Actions, Adjuncts, and Agency
        (Mind 107: 73-111, 1998)
       
Experiencing the Facts: critical notice of John McDowell's Mind and World (Canadian Journal of Philosophy 26: 613-36, 1996)
A Defense of Derangement (Canadian Journal of Philosophy 24: 95-118, 1994)
Prima Facie Obligations, Ceteris
          Paribus Laws in Moral Theory (Ethics 103: 489-515,
        1993)
       
Intentionality and Teleological Error (Pacific Philosophical Quarterly 73: 267-82, 1992)