HUMANA.MENTE Journal of Philosophical Studies
https://www.humanamente.eu/index.php/HM

Humana.Mente is a biannual journal focusing on contemporary issues in analytic philosophy, broadly understood. HM publishes scholarly papers that explore significant theoretical developments within and across such specific sub-areas as: (1) epistemology, methodology, and philosophy of science; (2) philosophy of mind and the cognitive sciences; (3) phenomenology; (4) logic and philosophy of language; (5) normative ethics and metaethics. HM also publishes special issues devoted to a concentrated effort to investigate important topics in a particular area of philosophy.

ISSN: 1972-1293
Language: en-US
Contact: info@humanamente.eu (Humana.Mente Office / editorial assistant)
Issue published: 28 December 2022

Introduction
https://www.humanamente.eu/index.php/HM/article/view/439
Oisín N. Deery
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)

A Discretionary Case for Preservationism about Free Will
https://www.humanamente.eu/index.php/HM/article/view/407
How does the term ‘free will’ refer? This question seems to lie at the center of debates about whether the attitudes and practices that depend on our successful attributions of basic-desert-entailing moral responsibility ought to be preserved or eliminated. In this paper I tackle questions about the way that different reference-fixing conventions might inform disagreement between preservationists and eliminativists about free will and moral responsibility, and I argue that even recent elimination-friendly work on reference fails to offer much real support for eliminativism. In fact, making explicit the role that different motivating concerns play in rendering certain reference-fixing conventions operative for eliminativists and preservationists suggests at least one powerful reference-based argument in favor of preservationism.
Kelly McCormick
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)

What’s the Relationship between the Theory and Practice of Moral Responsibility?
https://www.humanamente.eu/index.php/HM/article/view/409
This article identifies a novel challenge to standard understandings of responsibility practices, animated by experimental studies of biases and heuristics. It goes on to argue that this challenge illustrates a general methodological problem for theorizing about responsibility: it is difficult for a theory to give us both guidance in real-world contexts and an account of the metaphysical and normative foundations of responsibility without treating wide swaths of ordinary practice as defective. The general upshot is that theories must either hew more closely to actual practice than they appear to, or provide some normative foundation for responsibility that does not go through actual practice.
Henry Argetsinger, Manuel Vargas
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)

Is Agentive Freedom a Secondary Quality?
https://www.humanamente.eu/index.php/HM/article/view/413
The notion of a secondary quality is usefully construed this way: sensory-perceptual experiences that present apparent instantiations of such a quality have intentional content (presentational content) that is systematically non-veridical, because the experientially presented quality is never actually instantiated; but judgments that naively seem to attribute instantiations of this very quality really have different content (judgmental content) that is often veridical. Color-presenting experiences and color-attributing judgments, for instance, are plausibly regarded as conforming to such a dual-content secondary-quality account. In this paper we address the comparative theoretical advantages and disadvantages of two alternative versions of compatibilism about agentive freedom. Illusionist compatibilism is a dual-content secondary-quality view asserting that free-agency experience has presentational content that is libertarian and systematically non-veridical, whereas free-agency-attributing judgments have non-libertarian, compatibilist content. Uniform compatibilism instead asserts that free-agency experience and free-agency-attributing judgments have uniform, compatibilist content. We argue that uniform compatibilism fully accommodates the directly introspectable features of free-agency phenomenology and is more plausible than illusionist compatibilism.
Terence Horgan, Mark Timmons
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)

Empathic Control
https://www.humanamente.eu/index.php/HM/article/view/401
It has long been thought that control is necessary for moral responsibility. Call this the control condition. Given its pride of place in the free will debate, “control” has almost always been taken to be shorthand for voluntary control, an exercise of choice or will. Over the last few decades, however, many have argued for including a range of attitudes for which we seem to be responsible but which, if controlled at all, must be controlled via a very different mechanism, namely evaluative judgment. Call this second type of control evaluative control. In this paper I present and discuss in detail an additional agential stance, reasonish regard, for which we treat one another as responsible but that is governed by neither of the first two types of control. If we want to require a control condition for responsibility, then, we will need to introduce a third type of control, what I call empathic control.
David Shoemaker
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)

Responsibility for Reckless Rape
https://www.humanamente.eu/index.php/HM/article/view/410
Sometimes persons are legally responsible for reckless behavior that causes criminal harm. This is the case under the newly drafted provisions of the U.S. Model Penal Code (MPC), which hold persons responsible for “simple” rape (nonconsensual sex without proof of force or threats of force) where the offender recklessly disregards the risk that the victim does not consent. In this paper we offer an explanation and corrective critique of the handling of reckless rape cases, with a focus on the U.S. criminal justice system, although our analysis is applicable more broadly. We argue that a wider group of reckless rapists is criminally responsible than is captured by the MPC, and we claim that criminal punishment of reckless rapists must be justified by looking to both moral desert and the instrumental aims achieved by criminal punishment. Part of the law’s job is to communicate and enforce society’s expectations regarding unacceptable behavior. In punishing reckless rape, we are not just giving people what they deserve but also reinforcing and shaping norms regarding sexual behavior.
Katrina Sifferd, Anneli Jefferson
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)

Punishment and Desert
https://www.humanamente.eu/index.php/HM/article/view/411
This paper explores the relationship between punishment and desert and offers two distinct sets of reasons for rejecting the retributive justification of legal punishment: one theoretical and one practical. The first attacks the philosophical foundations of retributivism, arguing that it is unclear that agents have the kind of free will and moral responsibility needed to justify it. I present stronger and weaker versions of this objection and conclude that retributive legal punishment is unjustified and the harms it causes are prima facie seriously wrong. The second objection maintains that even if one were to assume that wrongdoers deserve retributive punishment, contra concerns over free will, we should still abandon retributivism, since insurmountable practical difficulties make it impossible to accurately and proportionally distribute legal punishment in accordance with desert. In particular, I present the Misalignment Argument and the Poor Epistemic Position Argument and argue that, taken together, they create a powerful new challenge to retributivism called the Retributivist Tracking Dilemma.
Gregg D. Caruso
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)

Children’s Developing Beliefs About Agency and Free Will in an Increasingly Technological World
https://www.humanamente.eu/index.php/HM/article/view/415
Until recently, the idea of treating robots as free agents existed only in the realm of science fiction. In our current world, however, children are interacting with robotic technologies that look, talk, and act like agents. Are children willing to treat such technologies as agents with thoughts, feelings, experiences, and even free will? In this paper, we explore whether children’s developing concepts of agency and free will apply to robots. We first review the literature on children’s beliefs about agency and free will, looking in particular at their beliefs about volition, responding to constraints, and deliberation about different options for action. We then review an emerging body of research that investigates children’s beliefs about agency and free will in robots. We end by discussing the implications for developing beliefs about agency and free will in an increasingly technological world.
Teresa M. Flanagan, Tamar Kushnir
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)

How Social Maintenance Supports Shared Agency in Humans and Other Animals
https://www.humanamente.eu/index.php/HM/article/view/414
Shared intentions supporting cooperation and other social practices are often used to describe human social life, but not the social lives of nonhuman animals. This difference in description is supported by a lack of evidence for rebuke or stakeholding during collaboration in nonhuman animals. We suggest that rebuke and stakeholding are just two examples of the many and varied forms of social maintenance that can support shared intentions. Drawing on insights about mindshaping in social cognition, we show how apes can be stakeholders of a different sort in joint action. Drawing on pluralistic social-maintenance methods of behavior enforcement, we show that ape joint action can be supported by different forms of positive and negative social pressure, not just protest. We explain how diverse relationships, contexts, social structures, and forms of communication may play a role in forming and successfully fulfilling joint commitments for humans, great apes, and other animals.
Dennis Papadopoulos, Kristin Andrews
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)

Varieties of Artificial Moral Agency and the New Control Problem
https://www.humanamente.eu/index.php/HM/article/view/408
Machine ethics is concerned with ensuring that artificially intelligent machines (AIs) act morally. One famous issue in the field, the control problem, concerns how to ensure human control over AI, as out-of-control AIs might pose existential risks, such as exterminating or enslaving us (Yampolskiy, 2020). A second, related issue, the alignment problem, is concerned more broadly with ensuring that AI goals are suitably aligned with our values (Gabriel, 2020). This paper presents a new trilemma with respect to resolving these problems. Section 1 outlines three possible types of artificial moral agents (AMAs):

Inhuman AMAs: AIs programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do.

Better-Human AMAs: AIs programmed to learn, execute, and understand moral rules or principles somewhat as we do, but correcting for various sources of human moral error.

Human-Like AMAs: AIs programmed to understand and apply moral values in broadly the same way that we do, with a human-like moral psychology.

Sections 2-4 then argue that each type of AMA generates unique control and alignment problems that have not been fully appreciated. Section 2 argues that Inhuman AMAs are likely to behave in inhumane ways that pose serious existential risks. Section 3 contends that Better-Human AMAs run a serious risk of magnifying some sources of human moral error by reducing or eliminating others. Section 4 argues that Human-Like AMAs would not only be likely to reproduce human moral failures, but would also plausibly be highly intelligent, conscious beings with interests and wills of their own, who should therefore be entitled to moral rights and freedoms similar to ours (Schwitzgebel & Garza, 2020). This generates what I call the New Control Problem: ensuring that humans and Human-Like AMAs exert a morally appropriate amount of control over each other. Finally, Section 5 argues that resolving the New Control Problem would, at a minimum, plausibly require ensuring what Hume and Rawls term “circumstances of justice” between humans and Human-Like AMAs. But, I argue, there are grounds for thinking this will be profoundly difficult to achieve (indeed, far more difficult than the already formidable problem of ensuring justice between humans), given the vast capability differences we can expect to exist between humans and Human-Like AMAs. I thus conclude on a skeptical note. Different approaches to developing “safe, ethical AI” generate subtly different control and alignment problems that we do not currently know how to adequately resolve, and which may or may not be surmountable. To determine whether they are, and if so how, AI ethicists and developers must pursue further careful work on the problems this paper presents.
Marcus Arvan
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0)