Eva Schmidt | Philosophy
Research
Interests
My research focuses on reasons, especially epistemic reasons. I argue for a reasons-first view in epistemology and discuss how reasons relate to other epistemic phenomena.
I investigate how philosophical accounts of acting for a reason and of action explanation can be applied to the explainability of the actions of AI systems, and how explainability relates to ethical and societal requirements on AI systems.
I am also still interested in my dissertation topic: the nature of perception and its content, as well as its epistemic significance.
Books
As Author
I defend nonconceptualism, the claim that perceptual experience is nonconceptual and has nonconceptual content, and offer a sustained defense of what I call 'Modest Nonconceptualism'.
As Editor
Festschrifts in Philosophy Series, New York: Routledge (with Christoph C. Pfisterer and Nicole Rathgeb, 2022)
Published Articles (Selection)
Journals
In this paper, I explicate pragmatic encroachment by appealing to pragmatic considerations attenuating, or weakening, epistemic reasons to believe. I call this the ‘Attenuators View’. I will show that this proposal is better than spelling out pragmatic encroachment in terms of reasons against believing – what I call the ‘Reasons View’. While both views do equally well when it comes to providing a plausible mechanism of how pragmatic encroachment works, the Attenuators View does a better job distinguishing practical and epistemic reasons to believe. First, this view does not appeal to the costs of believing falsely as reasons against believing; second, because of this, it does not run the risk of tearing down the wall between practical and epistemic reasons bearing on belief. I underpin the Attenuators View with a virtue-theoretic account of how pragmatic encroachment attenuates epistemic reasons and close my discussion by considering some objections against such a view.
- Facts about Incoherence as Non-Evidential Epistemic Reasons, Asian Journal of Philosophy 2 (2023) 1-22, doi: 10.1007/s44204-023-00075-1. (main article in an invited article symposium)
This paper presents a counterexample to the principle that all epistemic reasons for doxastic attitudes towards p are provided by evidence concerning p. I begin by motivating and clarifying the principle and the associated picture of epistemic reasons, including the notion of evidence concerning a proposition, which comprises both first- and second-order evidence. I then introduce the counterexample from incoherent doxastic attitudes by presenting three example cases. In each case, the fact that the subject’s doxastic attitudes are incoherent is an epistemic reason to suspend, which is not provided by evidence. I argue that the incoherence fact is a reason for the subject to take a step back and re-assess her evidence for her conflicting attitudes and thus a reason to suspend on all of them. Suspending judgment enables the subject to revise attitudes where appropriate and thus (typically) to arrive at a set of coherent and well-supported attitudes. I then address several objections and, in conclusion, briefly suggest a picture of epistemic reasons according to which they can be understood against the background of the subject’s virtuous intellectual conduct.
Can the evidence provided by software systems meet the standard of proof for civil or criminal cases, and is it individualized evidence? Or, to the contrary, do software systems exclusively provide bare statistical evidence? In this paper, we argue that there are cases in which evidence in the form of probabilities computed by software systems is not bare statistical evidence, and is thus able to meet the standard of proof. First, based on the case of State v. Loomis, we investigate recidivism predictions provided by software systems used in the courtroom. Here, we raise problems for software systems that provide predictions that are based on bare statistical evidence. Second, by examining the case of People v. Chubbs, we argue that the statistical evidence provided by software systems in cold hit DNA cases may in some cases suffice for individualized evidence, on a view on which individualized evidence is evidence that normically supports the relevant proposition (Smith, in Mind 127:1193–1218, 2018).
- From Responsibility to Reason-Giving Explainable Artificial Intelligence, Philosophy and Technology 35, topical collection on AI and Responsibility, eds. Niël Conradie, Hendrik Kempt, & Peter Königs, 1-30 (with Kevin Baum, Susanne Mantel, and Timo Speith, 2022).
We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to be responsible for her decision, the human in the loop has to have an explanation available of the system’s recommendation. Reason explanations are especially well-suited to this end, and we examine whether – and how – it might be possible to make such explanations fit with AI systems. We support our claims by focusing on a case of disagreement between the human in the loop and the AI system.
In this comment on Katherine Dormandy's paper «True Faith», I point out that the clash she describes between epistemic norms and faith-based norms of belief needs to be supplemented with a clear understanding of the pertinent norms of belief. I argue that conceiving of them as evaluative fails to explain the clash, and that understanding them as prescriptive is no better. I suggest an understanding of these norms along the lines of Ross’s (1930) prima facie duties, and show how this picture can make sense of the clash.
- Teleology first: Goals before knowledge and belief (Commentary on Buckwalter et al. “Knowledge before belief”), Behavioral and Brain Sciences 44 (2021), E169. (with Tobias Schlicht, Johannes L. Brandl, Frank Esken, Hans-Johann Glock, Albert Newen, Josef Perner, Franziska Poprawe, Anna Strasser & Julia Wolf), doi:10.1017/S0140525X20001533.
- Pluralism About Practical Reasons and Reason Explanations, Philosophical Explorations 24 (2021), 119-136 (with Hans-Johann Glock), doi: 10.1080/13869795.2021.1908578.
This paper maintains that objectivism about practical reasons should be combined with pluralism both about the nature of practical reasons and about action explanations. We argue for an ‘expanding circle of practical reasons’, starting out from an open-minded monist objectivism. On this view, practical reasons are not limited to actual facts, but consist in states of affairs, possible facts that may or may not obtain. Going beyond such ‘that-ish’ reasons, we argue that goals are also bona fide practical reasons. This makes for a genuine pluralism about practical reasons. Furthermore, the facts or states of affairs that function as practical reasons are not exclusively natural or descriptive, but include normative facts. That normative facts can be reasons justifies a pluralism about reason explanations, one which allows for what we call enkratic explanations in addition to teleological ones.
- What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research, Artificial Intelligence 296 (2021), 103473. (with Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, and Andreas Sesing), doi: 10.1016/j.artint.2021.103473.
Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations necessary to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve researchers from the variety of different disciplines involved in XAI as a common ground. It emphasizes where there is interdisciplinary potential in the evaluation and the development of explainability approaches.
- Where Reasons and Reasoning come Apart, Noûs 55 (2021), 762–781, doi: 10.1111/nous.12329.
Proponents of the reasoning view both analyze reasons as premises of good reasoning and explain the normativity of reasons by appeal to their role in good reasoning. The aim of this paper is to cast doubt on the reasoning view, not by addressing the latter, explanatory claim directly, but by providing counterexamples to such an analysis of reasons, counterexamples in which premises of good reasoning towards φ-ing are not reasons to φ.
I defend a reasons-first view of epistemic justification, according to which the justification of our beliefs arises entirely in virtue of the epistemic reasons we possess. I remove three obstacles for this view, which result from its presupposition that epistemic reasons have to be possessed by the subject: (1) the problem that reasons-first accounts of justification are necessarily circular; (2) the problem that they cannot give special epistemic significance to perceptual experience; (3) the problem that they have to say that implicit biases provide epistemic reasons.
I argue against Kearns and Star’s reasons-as-evidence view, which identifies normative reasons to ɸ with evidence that one ought to ɸ. I provide a new counterexample to their view, the student case, which involves an inference to the best explanation from means to end or, more generally, from a derivative to a more foundational “ought” proposition. It shows that evidence that one ought to act a certain way is not in all cases a reason so to act.
I discuss the conceptualist claim that we cannot speak of perceptual content unless we assume it is objective content. The conceptualist argues that only conceptual content can meet the requirement of being objective, so that the view that perceptual experience has nonconceptual content is not tenable. I start out by presenting the argument from objectivity as it can be found in McDowell. I then present the following objections: First, perceptual objectivity cannot be due to the perceiver’s conception of objectivity; and second, even nonconceptual capacities of the individual cannot and need not be appealed to in order to account for objective perceptual content.
I present an argument for nonconceptualism based on animal and infant perception. I defend the argument against potential attacks from the conceptualist. I argue that there are indeed creatures which possess no concepts, but have perceptual experiences, and I attack McDowell’s view that we share perceptual sensitivity with animals and infants, but not genuine perceptual contents.
Book Chapters
- Stakes and Understanding the Decisions of AI Systems, in Juan Durán and Giorgia Pozzi (eds.), Philosophy of Science for Machine Learning: Core Issues, New Perspective (under contract with Synthese Library).
In epistemology, pragmatic intellectualists have argued for pragmatic encroachment: In high-stakes situations, better evidence is needed to know a relevant proposition. In the XAI debate, theorists have supported the need for explainable AI systems by claiming that stakeholders should be able to understand why a system provided a certain output, often exactly because stakes are high in the cases under consideration. A question that has not been discussed is whether understanding, like knowledge, is harder to achieve when the stakes are high. This issue is highly relevant for explainable AI: If explainability is needed because stakeholders need to be able to understand, but understanding is harder to achieve exactly in cases where explainability is called for, it seems that the standards for successful explanations of AI systems - explanations that are able to generate understanding - will be rather high.
The purpose of this contribution is to investigate whether understanding is sensitive to the stakes of a situation and to develop a model of how this might work; to provide suggestions for how the (arguable) stakes-sensitivity of understanding should affect our theorizing about XAI; and finally, to connect this to discussions of inductive risk.
- Hume and the Unity of Reasons, in Scott Stapleford and Verena Wagner (eds.), Hume and Contemporary Epistemology (under contract with Routledge).
The reasons-first program in epistemology has it that epistemic reasons are fundamental to epistemic justification and knowledge. Many of its proponents hold that we can draw insightful comparisons between epistemic and practical reasons, and should strive for a unified account of practical and epistemic reasons (Schroeder 2021, Schmidt 2021). However, this strategy seems to conflict with a stark Humean contrast between the practical and epistemic domains: With respect to practical reasons and reasoning, Hume highlights the role of the – non-representational – passions in motivating and rationalizing action, while downplaying the potential relevance of beliefs as providers of reasons. With respect to epistemic reasons and theoretical reasoning, he urges us to proportion our belief to the evidence (EHU, 10.4/110), which suggests that reasons to believe (i.e., evidence) are the contents of belief and perceptual experience. The aim of my contribution will be to investigate whether there can be a convincing Humean picture of the unity of reasons in the epistemic and practical domains. Specifically, I will examine whether the role of reason as ‘the slave of the passions’ (T 2.3.3.4/415) allows for believed or experienced facts to play the role of practical reasons, just as they play the role of epistemic reasons.
- Entry on Conceptualism and Nonconceptualism, in Kurt Sylvan (ed.), Blackwell Companion to Epistemology, (3rd edition) (forthcoming).
- Entry on Mentalism, in Kurt Sylvan (ed.), Blackwell Companion to Epistemology, (3rd edition) (forthcoming).
- Wie können wir autonomen KI-Systemen vertrauen? Die Rolle von Gründe-Erklärungen, in Karl-Heinrich Ostmeyer and Marcel Scholz (eds.) Gratwanderung Künstliche Intelligenz – Interdisziplinäre Perspektiven auf das Verhältnis von Mensch und KI. Stuttgart: Kohlhammer (2022), 11-29.
I present a novel argument for the claim that artificially intelligent (AI) systems ought to be constructed in such a way that they can be explained by appeal to reason explanations: Agents ought to start using autonomous AI systems only if they can trust the systems for good reasons. But users can only trust these systems for good reasons if it is epistemically accessible to them that the systems are trustworthy, or that trust in them is appropriate. After discussing different levels of appropriate trust, I argue that the right level of trust to extend to autonomous AI systems is goal-relative trust, which requires that the trustor be able to recognize that the trustee’s goal harmonizes with the trustor’s goals as well as that the trustee competently pursues the goal. This is just to say that users need to be able to recognize the goals of autonomous AI systems and the information they have to go on in pursuit of these goals – users need access to reason explanations of the systems’ behavior.
- Religiöse Erfahrung: Inhalt, epistemische Signifikanz und Expertise, in Martin Breul and Klaus Viertbauer (eds.) Religiöse Erkenntnis? Gegenwärtige Positionen zur religiösen Epistemologie. Tübingen: Mohr Siebeck (2022), 11-31.
I discuss whether perception-like religious experience needs to have high-level contents so as to be able to justify typical religious beliefs, and whether it makes sense to think of religious experiences as expert perceptions.
I present an explanatory argument for the reasons-first view: It is superior to knowledge-first views in particular in that it can both explain the specific epistemic role of perception and account for the shape and extent of epistemic justification.
We explore whether a version of causalism about reasons for action can be saved by giving up Davidsonian psychologism and endorsing objectivism, so that the reasons for which we act are the normative reasons that cause our corresponding actions. We address two problems for ‘objecto-causalism’, actions for merely apparent normative reasons and actions performed in response to future normative reasons. To resolve these problems, we move from objecto-causalism to ‘objecto-capacitism’, which appeals to agential competences manifested in acting for a reason.
A (German) overview article on the nature and epistemic significance of religious experience.
- Quellen des Wissens: Wahrnehmung, in Martin Grajner and Guido Melchior (eds.) Handbuch Erkenntnistheorie. Metzler (2019), 122-128.
A (German) overview article on the nature of perceptual experience and its content, as well as different views on how it can be a source of knowledge and justification.
This chapter connects the traditional epistemological issue of justification with what one might call the ‘new reasons paradigm’ coming from the philosophy of action and metaethics. I show that Conee and Feldman’s mentalism, a version of internalism about justification, can profitably be spelled out in terms of subjective normative reasons. On the way to achieving this aim, I argue that it is important to ask not just the oft-discussed ontological question about epistemic reasons – what kind of entities are they? – but also: reasons in which sense are fundamental to justification?
I argue that epistemological disjunctivism, as defended by Pritchard or McDowell, faces a dilemma. To avoid collapsing into the “highest common factor view”, it has to be combined with a metaphysical brand of disjunctivism. This is so because the epistemological disjunctivist’s contention, that veridical perception provides the perceiver with reflectively accessible epistemic reasons that are superior to those provided by hallucination, is tenable only if underwritten by the naïve realist claim that perception is partly constituted by the perceived fact. As I argue, this claim inexorably leads to metaphysical disjunctivism. So, epistemological disjunctivism cannot be advertised as a view that shares some of the advantages of metaphysical disjunctivism, but is less extreme and therefore more widely acceptable.
We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be used to generate sufficiently accurate as well as graspable rationalizing explanations for CI behavior.
This (German) paper connects the thesis that knowledge and knowers are embodied with recent research on implicit bias and stereotypes.
Book Reviews
- Review of Mark Schroeder, Reasons First, Oxford: Oxford University Press, Philosophical Review (2023) 132 (3): 515–519. https://doi.org/10.1215/00318108-10469603.
- Review of Julia Staffel, Unsettled Thoughts, Oxford: Oxford University Press, Deutsche Zeitschrift für Philosophie 70 (2022), 350-356.
- Review of Jan G. Michel, Der qualitative Charakter bewusster Erlebnisse. Physikalismus und phänomenale Eigenschaften in der Philosophie des Geistes, Grazer Philosophische Studien 86 (2012), 279-283.
Work in Progress
I'm happy to share my drafts via e-mail: eva.schmidt [at] tu-dortmund.de. Comments welcome!
- The Epistemic Cost of Opacity: Why Medical Doctors Do Not Know when They Rely on Artificial Intelligence (with Paul Martin Putora and Rianne Fijten)
Artificially intelligent (AI) systems used in medicine are often very reliable and accurate, but at the price of their being increasingly opaque. This raises the question whether a system’s opacity undermines the ability of medical doctors to acquire knowledge on the basis of its outputs. We investigate this question by focusing on a case in which a patient’s risk of recurring breast cancer is predicted by an opaque AI system. We argue that, given the system’s opacity, as well as the possibility of malfunctioning AI systems, practitioners’ inability to check the correctness of their outputs, and the high stakes of such cases, the knowledge of medical practitioners is indeed undermined. They are lucky to form true beliefs based on the AI systems’ outputs, and knowledge is incompatible with luck. We supplement this claim with a specific version of the Safety condition on knowledge to account for how knowledge is undermined by opacity. We argue that, relative to the perspective of the medical doctor in our example case, his relevant beliefs could easily be false, and this despite his evidence that the AI system functions reliably. Assuming that Safety is necessary for knowledge, the practitioner therefore doesn’t know. We address two objections to our proposal before turning to practical suggestions for improving the epistemic situation of medical doctors.
- The Reasons of AI Systems (with Kevin Baum, Maximilian Schlüter, and Timo Speith)
In this paper, we investigate problems with respect to ascribing reasons and reasoning to modern AI systems such as Artificial Neural Networks, and propose appropriate ways of doing so.
- Against Schellenberg’s Capacitism About Perceptual Evidence
Here I criticize Susanna Schellenberg’s (2018) capacitism, particularly her attempt to fully explain what perceptual evidence is by appeal to perceptual capacities to single out and discriminate particulars in the environment, which the subject employs in perceptual experience. Among other things, I argue that this proposal cannot distinguish between practical and epistemic reasons.