MIND: Vol. 128, No. 512, October 2019.
According to McHugh and Way, reasoning is a person-level attitude revision that is regulated by its constitutive aim of getting fitting attitudes. They claim that this account offers an explanation of what is wrong with reasoning in ways one believes to be bad, and that this explanation is an alternative to an explanation that appeals to the so-called Taking Condition. In this article, the author argues that their explanation is unsatisfying.
Normative Uncertainty and Social Choice, CHRISTIAN TARSNEY
In "Normative Uncertainty as a Voting Problem," William MacAskill argues that positive credence in ordinal-structured or intertheoretically incomparable normative theories does not prevent an agent from rationally accounting for normative uncertainties in practical deliberation. Rather, such an agent can aggregate the theories in which he has positive credence by methods borrowed from voting theory--specifically, MacAskill suggests, by a kind of weighted Borda count. The appeal to voting methods opens up a promising new avenue for theories of rational choice under normative uncertainty. The Borda rule, however, is open to at least two serious objections. First, it seems implicitly to "cardinalize" ordinal theories, and so does not fully face up to the problem of merely ordinal theories. Second, the Borda rule faces a problem of option individuation. MacAskill attempts to solve this problem by invoking a measure on the set of practical options. But it is unclear that there is any natural way of defining such a measure that will not make the output of the Borda rule implausibly sensitive to irrelevant empirical features of decision situations. After developing these objections, the author suggests an alternative: the McKelvey uncovered set, a Condorcet method that selects all and only the maximal options under a strong pairwise defeat relation. This decision rule has several advantages over Borda and mostly avoids the force of MacAskill's objection to Condorcet methods in general.
On a Judgement of One's Own: Heideggerian Authenticity, Standpoints, and All Things Considered, DENIS MCMANUS
This paper explores two models of Heidegger's notion of Eigentlichkeit. Although typically translated as "authenticity," a more literal construal of this term would be "ownness" or "ownedness." In addition to its exegetical value, the paper also develops two interestingly different understandings of what it is to have a judgment of one's own. The first model understands Heideggerian authenticity as the owning of what the author calls a "standpoint." Although this model provides an understanding of a number of key features of authenticity, it also invites an important objection--which the author calls "the closure objection"--that can be found in, for example, the work of Steven Galt Crowell and Tony Fisher. Although the author argues that this objection can be met, the response for which it calls reveals that the feat of authenticity as understood through the standpoint model rests upon a further feat, and one that may itself have a stronger claim to be identified with Heideggerian authenticity. The author develops this thought, introducing what he calls the "all-things-considered judgment model" of authenticity, the basis of which lies in, among other sources, Heidegger's appropriation of themes from Aristotle's discussion of phronesis. He explains the exegetical benefits of adopting this model and considers some objections that it invites, before closing with a discussion of how the two models understand the notion of "a judgment of one's own."
Logical Mistakes, Logical Aliens, and the Laws of Kant's Pure General Logic, TYKE NUNEZ
There are two ways interpreters have tended to understand the nature of the laws of Kant's pure general logic. On the first, these laws are unconditional norms for how we ought to think and will govern anything that counts as thinking. On the second, these laws are formal criteria for being a thought, and violating them makes a putative thought not a thought. These traditions are in tension, insofar as the first depends on the possibility of thoughts that violate these laws, and the second makes violation impossible. The author develops an interpretation of Kant's pure general logic that overcomes this tension. It accounts for the possibility of logical mistakes, as the first tradition does, while still establishing the absolute impossibility of logical aliens, as the second tradition does. He then argues that the formalist insight that illogical exercises of the understanding are not alternative ways coherent thoughts could have been, but are mere confusions, is fundamental for achieving a proper understanding of the absolute normativity of the laws of pure general logic.
Trial by Statistics: Is a High Probability of Guilt Enough to Convict? MARCELLO DI BELLO
Suppose one hundred prisoners are in a yard under the supervision of a guard, and at some point, ninety-nine of them collectively kill the guard. If, after the fact, a prisoner is picked at random and tried, the probability of his guilt is 99 percent. But despite the high probability, the statistical chances, by themselves, seem insufficient to justify a conviction. The question is why. Two arguments are offered. The first, decision-theoretic argument shows that a conviction solely based on the statistics in the prisoner scenario is unacceptable so long as the goal of expected utility maximization is combined with fairness constraints. The second, risk-based argument shows that a conviction solely based on the statistics in the prisoner scenario lets the risk of mistaken conviction potentially surge too high. The same, by contrast, cannot be said of convictions solely based on DNA evidence or eyewitness testimony. A noteworthy feature of the two arguments in the paper is that they are not confined to criminal trials and can in fact be extended to civil trials.
New Hope for Relative Overlap Measures of Coherence, JAKOB KOSCHOLKE, MICHAEL SCHIPPERS, and ALEXANDER STEGMANN
Relative overlap measures of coherence have recently been shown to have two devastating properties: (1) according to the plain relative overlap measure, the degree of coherence of any set of propositions cannot be increased by adding further propositions, and (2) according to the refined relative overlap measure, no set can be more coherent than its most coherent two-element subset. This result has been taken to rule out relative overlap as a foundation for a probabilistic explication of coherence. The present paper shows that this view is premature: the authors propose a relative overlap measure that does not fall victim to the two properties. The guiding idea is to employ a well-established recipe for the construction of coherence measures and to adapt it to the idea of relative overlap. The authors show that this new measure keeps up with, or even outperforms, former overlap measures in a set of desiderata for coherence measures and a collection of popular test cases. This result reestablishes relative overlap as a candidate for a proper formalization of coherence.
Three Infinities in Early Modern Philosophy, ANAT SCHECHTMAN
Many historical and philosophical studies treat infinity as an exclusively quantitative notion, whose proper domain of application is mathematics and physics. The main aim of this paper is to disentangle, by critical examination, three notions of infinity in the early modern period, and to argue that one--but only one--of them is quantitative. One of these nonquantitative notions concerns being or reality, while the other concerns a particular iterative property of an aggregate. These three notions will emerge through examination of three central figures in the period: Locke (for quantitative infinity), Descartes (ontic infinity), and Leibniz (iterative infinity).
Reconciling Practical Knowledge with Self-Deception, ERIC MARCUS
Is it impossible for a person to do something intentionally without knowing that he is doing it? The phenomenon of self-deceived agency might seem to show otherwise. Here the agent is not (at least in a straightforward sense) lying but disavows a correct description of his intentional action. This disavowal might seem expressive of ignorance. However, the author shows that the self-deceived agent does know what he's doing. He argues that we should understand the factors that explain self-deception as masking rather than negating the practical knowledge characteristic of intentional action. This masking takes roughly the following form: when we are deceiving ourselves about what we are intentionally doing, we don't think about our action because it is painful to do so.
Abominable KK Failures, KEVIN DORST
KK is the thesis that if you can know p, you can know that you can know p. Though it is unpopular, a flurry of considerations has recently emerged in its favor. Here we add fuel to the fire: standard resources allow us to show that any failure of KK will lead to the knowability and assertability of abominable indicative conditionals of the form "If I don't know it, p." Such conditionals are manifestly not assertable--a fact that KK defenders can easily explain. The author surveys a variety of KK-denying responses and finds them wanting. Those who object to the knowability of such conditionals must either (1) deny the possibility of harmony between knowledge and belief, or (2) deny well-supported connections between conditional and unconditional attitudes. Meanwhile, those who grant knowability owe us an explanation of such conditionals' unassertability--yet no successful explanations are on offer. Upshot: we have new evidence for KK.
Accidentally about Me, DANIEL MORGAN
Why are de se mental states essential? What exactly is their de se-ness needed to do? In this article, the author argues that it is needed to fend off accidentalness. If certain beliefs--for example, nociceptive, proprioceptive, or introspective beliefs--were not de se, then any truth they achieved would be too accidental for the subject to count as knowing. If certain intentions--intentions that are in play whenever we intentionally do anything--were not de se, then any satisfaction they achieved would be too accidental for the subject to count as intentionally acting. How states hook on to their referent is relevant in a systematic but underexplored way to whether they nonaccidentally achieve their aim--truth in the case of beliefs, satisfaction in the case of intentions. In the relevant cases, the way of hooking on to a referent needed to avoid being accidental is the way a de se state hooks on to its referent.
More of Me! Less of Me! Reflexive Imperativism about Phenomenal Character, LUCA BARLASSINA and MAX KHAN HAYWARD
Experiences like pains, pleasures, and emotions have affective phenomenal character: they feel pleasant or unpleasant. Imperativism proposes to explain affective phenomenal character by appeal to imperative content, a kind of intentional content that directs rather than describes. The authors argue that imperativism is on the right track but has been developed in the wrong way. There are two varieties of imperativism on the market: first-order and higher-order. The authors show that neither is successful, and they offer in their place a new theory: reflexive imperativism. Their proposal is that an experience P feels pleasant in virtue of being (at least partly) constituted by a command with reflexive imperative content (1), while an experience U feels unpleasant in virtue of being (at least partly) constituted by a command with reflexive imperative content (2): More of P! Less of U! If you need a slogan: experiences have affective phenomenal character in virtue of commanding us Get more of me! Get less of me!
Title Annotation: CURRENT PERIODICAL ARTICLES: PHILOSOPHICAL ABSTRACTS
Publication: The Review of Metaphysics
Date: Dec 1, 2019