Conditionals - 2019 Logic, Linguistics and Psychology (CNRS and LABEX-EFL thematic school)

 


 

Faculty (confirmed)


 

 Professor Dorothy Edgington

 Birkbeck College, University of London

 

 

Conditionals, Indeterminacy and Uncertainty

 


 

 Professor Stefan Kaufmann

 Department of Linguistics

 University of Connecticut, USA

 

 

 

Tense and temporal reference: (1) indicatives; (2) counterfactuals; (3) cross-linguistic comparisons, with a focus on Japanese; (4) causality, or linguistic perspectives on the Import-Export Principle.

 


 

 Professor Justin Khoo

 Department of Philosophy

 MIT, USA

 

 

Probabilities of Conditionals

This is a course about the probabilities of natural language conditionals. We will focus on two conflicting, but well-supported, observations. The first is that the probability of a conditional "if A, B" is equal to the probability of B given A. The second is that this thesis cannot hold in full generality, as has been demonstrated in various ways by various triviality results (the first going back to David Lewis in 1976). The course will explore strategies for reconciling these observations, and offer a defense of a positive thesis.
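The tension between the two observations can be made concrete with a toy computation. A minimal sketch in Python (the die scenario and all numbers are illustrative, not from the course): it shows that the conditional probability P(B|A) and the probability of the material conditional, P(not-A or B), can come apart, which is one source of pressure on any truth-conditional identification.

```python
from fractions import Fraction

# Hypothetical scenario: one roll of a fair die,
# with A = "the roll is even" and B = "the roll is six".
outcomes = set(range(1, 7))
A = {2, 4, 6}
B = {6}

def p(event):
    # Probability of an event under the uniform distribution on outcomes.
    return Fraction(len(event), len(outcomes))

p_b_given_a = p(A & B) / p(A)        # P(B|A) = (1/6)/(1/2) = 1/3
p_material = p((outcomes - A) | B)   # P(not-A or B) = 4/6 = 2/3
print(p_b_given_a, p_material)
```

The material conditional is true at every world where A fails, so its probability is inflated by the not-A cases, while P(B|A) ignores them entirely.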


 

 Professor John Mackay

 Department of Philosophy

 University of Wisconsin, USA

(1) Past Tense in Subjunctive Conditionals: Subjunctive conditionals, despite their name, are commonly marked by the presence of past tense. Yet they do not always describe past events, and at least on the surface, the difference between indicative and subjunctive conditionals does not appear to be a temporal one. We will examine different ways of explaining these phenomena. Some theorists argue that subjunctive conditionals involve a “fake” past tense that receives a modal interpretation. Others argue that it is interpreted in the standard temporal way. We will look at ways of implementing both strategies, and explore the advantages and disadvantages of both.

(2) Mood, Tense and Actuality: When an indicative clause is embedded inside a subjunctive conditional, that clause is evaluated at the actual world, rather than at the counterfactual worlds relevant to the overall conditional. In some other environments, embedded indicative clauses are not evaluated at the actual world. We will examine how tense and mood interact with actuality in conditionals. One of the central themes will be whether tense can play a role traditionally thought to be played by an actuality operator.

(3) Presupposition and Assertion: We will examine ways in which tense and mood in conditionals interact with presupposition and assertion. For example, we will look at how indicative and subjunctive conditionals have different assertoric effects, and how they interact with the context in different ways. We will consider how tense affects presuppositions in conditionals, and different theories of presupposition projection.


 

 Professor David E. Over

 Psychology Department

 Durham University, Durham, UK

 

The psychology of conditionals

Conditionals - indicative, counterfactual, deontic, and others - are central to the study of reasoning in linguistics, logic, philosophy, and psychology. These classes will cover the history of research on conditionals in the psychology of reasoning, and will introduce new Bayesian and probabilistic approaches, which have recently had a major impact on psychological theories about, and experiments on, human reasoning in general and conditional reasoning in particular. The new approaches recognize that most of this reasoning takes place in contexts of uncertainty. Most human inferences are not from arbitrary assumptions, but rather from uncertain premises, and lead finally to belief revision and updating, and to decision making. A central topic in this new research has been the hypothesis, originally proposed in logic and philosophy, that the probability of the natural language conditional, P(if A then B), is the conditional probability of B given A, P(B|A). This relationship, P(if A then B) = P(B|A), is fundamental in a Bayesian account of reasoning, and the latest research on it will be introduced and examined.

Class 1: The psychology of conditional reasoning: An introduction

The different types of conditional in natural language: indicative, counterfactual, deontic, and others.

Early experiments in the psychology of reasoning: the "defective" truth table, the selection task, and inferences from arbitrary assumptions. Apparent biases in this reasoning: confirmation and belief biases.

Critical examination of early psychological theories of this reasoning: mental logic and mental model theories.

Class 2: The probability of indicative conditionals

The reasons for taking a probabilistic approach to the study of reasoning: most human reasoning is not from arbitrary assumptions, but from uncertain premises. The premises are uncertain beliefs or are possibilities relevant to decision making, and the results are belief revision or updating, and effective decisions.

Conditionals are basic in reasoning. An inference from premises to a conclusion can be turned into a conditional, if premises then conclusion, and reasoning from a possible antecedent, as a premise, to a consequent, as a conclusion, can justify a conditional.

Uncertain conditional premises have a probability less than certainty, but what is the probability of a conditional? The first experiments on this question used frequency distributions, but this limitation can be overcome. The general finding is that people judge the probability of a natural language conditional, P(if A then B), to be the conditional probability of B given A, P(B|A). There are possible limitations to this finding: missing-link conditionals.

A conditional that satisfies the relation, P(if A then B) = P(B|A), is called a probability conditional. Why a probability conditional cannot be a Stalnaker/Lewis conditional. How to account for the meaning of a probability conditional: the Ramsey test and de Finetti and Jeffrey tables.
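The de Finetti table can be sketched computationally. In the following illustration (the joint distribution is hypothetical), "if A then B" is true in the A & B cases, false in the A & not-B cases, and void when A is false; averaging over the non-void cases alone recovers P(B|A), as the probability conditional requires.

```python
from fractions import Fraction

def definetti(a, b):
    # de Finetti three-valued evaluation of "if A then B".
    if not a:
        return "void"   # the conditional says nothing when A is false
    return b            # True in A & B cases, False in A & not-B cases

# Hypothetical joint distribution over (A, B).
dist = {
    (True, True):   Fraction(3, 10),
    (True, False):  Fraction(1, 10),
    (False, True):  Fraction(2, 10),
    (False, False): Fraction(4, 10),
}

p_true = sum(p for (a, b), p in dist.items() if definetti(a, b) is True)
p_false = sum(p for (a, b), p in dist.items() if definetti(a, b) is False)
p_conditional = p_true / (p_true + p_false)   # probability over non-void cases
p_b_given_a = dist[(True, True)] / (dist[(True, True)] + dist[(True, False)])
print(p_conditional, p_b_given_a)
```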

Class 3: Probability and conditional reasoning

Assessing inferences from uncertain premises in a Bayesian approach: generalizing consistency to coherence and classical validity to probabilistic validity, p-validity.

Using coherence and p-valid inferences to study non-conditional inferences: &-elimination and the conjunction fallacy. Another example: or-introduction.
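The p-validity of &-elimination can be checked with a quick numerical sweep. A minimal sketch (the sampling scheme is my own illustration): inferring A from A & B is p-valid because the uncertainty of the conclusion, 1 - P(A), never exceeds the uncertainty of the premise, 1 - P(A & B), on any probability distribution.

```python
import random

random.seed(0)
# Sample random joint distributions over the four A/B possibilities.
for _ in range(1000):
    weights = [random.random() for _ in range(4)]   # A&B, A&not-B, not-A&B, not-A&not-B
    total = sum(weights)
    p_ab, p_a_notb, _, _ = (w / total for w in weights)
    p_a = p_ab + p_a_notb
    # Uncertainty of the conclusion never exceeds uncertainty of the premise.
    assert 1 - p_a <= (1 - p_ab) + 1e-12
print("&-elimination was p-valid on all 1000 sampled distributions")
```

The conjunction fallacy consists precisely in judgments that violate this constraint, rating P(A & B) above P(A).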

Inferences that are valid for the truth-functional material conditional but not p-valid for the probability conditional: the "paradoxes" of the material conditional. Inferring if A then B from not-A, and from B. Strengthening the antecedent: inferring if A & B then C from if A then C.
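That these "paradoxes" fail p-validity can be shown with a single counterexample distribution (the numbers are hypothetical): the premise not-A is highly probable, while the conclusion, evaluated as P(B|A), is improbable, so the conclusion's uncertainty far exceeds the premise's.

```python
from fractions import Fraction

# Hypothetical joint distribution over (A, B) on which inferring
# "if A then B" from not-A fails for the probability conditional.
dist = {
    (True, True):   Fraction(1, 100),
    (True, False):  Fraction(9, 100),
    (False, True):  Fraction(45, 100),
    (False, False): Fraction(45, 100),
}

p_not_a = sum(p for (a, _), p in dist.items() if not a)   # premise: 9/10
p_b_given_a = dist[(True, True)] / (dist[(True, True)] + dist[(True, False)])  # conclusion: 1/10
print(p_not_a, p_b_given_a)
```

The same distribution is a counterexample to inferring the conditional from B alone once the roles of the cells are adjusted; the material conditional, by contrast, validates both inferences.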

Monotonicity and p-validity: p-validity is monotonic but the probability conditional is non-monotonic.

Inferences that are valid for the truth-functional material conditional and p-valid for the probability conditional: modus ponens (MP) and modus tollens (MT). Inferences that are invalid for the truth-functional material conditional and p-invalid for the probability conditional: affirming the consequent (AC) and denying the antecedent.

The importance of the and-to-if inference (also called centering), inferring if A then B from A & B, for theories of conditionals.

Class 4: Counterfactuals

The distinction between indicative and counterfactual conditionals: what exactly is it? Does a counterfactual, if A were the case then B would be, entail not-A, or does the use of such a counterfactual by a speaker pragmatically suggest that not-A holds? MP and the and-to-if inference are relevant to answering these questions.

Possible worlds and counterfactuals: the Stalnaker/Lewis accounts. The psychological findings on people's judgments about possible worlds reviewed: counterfactuals and the emotions of regret and relief.

Probability judgments about counterfactuals: does P(if A were the case then B would be) = P(B|A) hold for counterfactuals?

Counterfactuals and causation: Bayes nets and interventions. How MP and the and-to-if inference are again relevant to this topic.
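The difference between conditioning and intervening that drives the Bayes-net treatment of counterfactuals can be sketched with a two-variable model (the structure and all probabilities are my own illustration): observing a wet ground raises the probability of rain, but intervening to wet the ground, do(Wet), severs the incoming causal arrow and leaves the probability of rain unchanged.

```python
import random

random.seed(1)

# Causal model: Rain -> WetGround, with hypothetical parameters.
def sample(do_wet=None):
    rain = random.random() < 0.3                 # P(Rain) = 0.3
    if do_wet is None:
        wet = rain or random.random() < 0.1      # sprinkler noise when dry
    else:
        wet = do_wet                             # intervention severs the arrow
    return rain, wet

N = 200_000
obs = [sample() for _ in range(N)]
p_rain_given_wet = sum(r for r, w in obs if w) / sum(w for _, w in obs)
interv = [sample(do_wet=True) for _ in range(N)]
p_rain_do_wet = sum(r for r, _ in interv) / N
print(round(p_rain_given_wet, 2), round(p_rain_do_wet, 2))
```

Conditioning yields roughly 0.3/(0.3 + 0.7 × 0.1) ≈ 0.81, while intervening leaves the prior 0.3 intact; counterfactual evaluation on this view tracks the interventional quantity, not the observational one.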

 


 

 Professor William Starr

 Department of Philosophy

 Cornell University, USA

 

1. Counterfactuals and Bayesian Networks/Structural Equations

Bayesian Networks, causal models and structural equations are enormously powerful tools that are being widely used in cutting edge research across cognitive science, metaphysics, artificial intelligence and the philosophy of science. They are also at the heart of several recent analyses of counterfactual conditionals. This course will introduce these tools and applications with a focus on counterfactual semantics and cognition. We will survey several existing structural equation-based analyses, and present a new one I'm developing that extends to quantificational logic and counterlegals. We will conclude by comparing these accounts with the more familiar Lewis-Stalnaker similarity semantics and the Kratzer-Veltman premise semantics for counterfactuals.

2. Indicative and Subjunctive Conditionals

This course will compare two accounts of the semantics of indicative and subjunctive conditionals across three types of languages. Researchers working on English conditionals have debated whether the past tense morphology in subjunctive antecedents receives a truly past semantics (Past-as-Past analyses, like Ippolito and Khoo) or a modal semantics (Past-as-Modal analyses, like Iatridou, Schulz and Starr). The semantic literature on subjunctive conditionals has consequently focused on whether subjunctive conditionals receive their distinctive meaning from a past meaning or a modal meaning. But at least two other types of languages are equally important in the study of conditionals. Some languages, like French and Russian, use imperfective aspect, about which the same debate can be had. Other languages, like Lakota, Yucatec Maya and Cheyenne, use a distinct hypothetical mood unique to the antecedents of subjunctive conditionals. This course will examine the implications of these other two language types for the Past-as-Modal vs. Past-as-Past debate.

3. Indicative Conditionals Beyond Truth-Conditions

This course will explore analyses of indicative conditionals that do not identify their meaning with their truth-conditions. It will focus on probabilistic and dynamic strict-conditional analyses, and on whether these views are best understood as offering no truth-conditions at all, or truth-conditions derived from a non-truth-conditional meaning.

 
