Huzeyfe Demirtas

I am a Postdoctoral Fellow in Philosophy at Harvard University's Department of Philosophy. Before coming to Harvard, I was a Postdoctoral Researcher at the Smith Institute for Political Economy and Philosophy at Chapman University. I earned my PhD in philosophy at Syracuse University's Department of Philosophy, completed my postbaccalaureate studies in philosophy at SUNY Fredonia, and hold a BS in computer science teaching from Firat University. I am a dual citizen of Turkey and the United States.

My primary research interests are moral responsibility, free will, ethics of AI, and applied ethics (esp. environmental ethics).

At Syracuse and Chapman, I taught courses on ethics, environmental ethics, happiness and meaning in life, theories of knowledge and reality, logic, and free will. At Harvard, I have created and taught modules embedded in undergraduate and graduate computer science courses on topics such as the ethics of technological unemployment, distributive justice, and the ethics of hacking back, and I have designed and led course-specific ethics bowl activities.

Click here for my interview with the American Philosophical Association.

Click here for my interview with the Turkish analytic philosophy journal, Öncül Analitik Felsefe Dergisi.

CV

Specialization

Responsibility, Free Will, Ethics of AI, Applied Ethics (esp. Environmental Ethics)

Competencies

Epistemology, Metaphysics, Classical Islamic Philosophy, Political Philosophy

Employment

2024 - Present
Harvard University, Department of Philosophy

Postdoctoral Fellow in Philosophy

2023-2024
Chapman University, Smith Institute for Political Economy and Philosophy

Postdoctoral Researcher

Education

2016 - 2023
Syracuse University

PhD Candidate & Teaching Associate

Dissertation: Responsibility Internalism and Responsibility for AI

Committee: Sara Bernstein, Ben Bradley (primary), Mark Heller, Hille Paakkunainen

2015-2016
SUNY Fredonia

Postbaccalaureate in Philosophy

2004-2009
Firat University

BS in Computer Science Teaching

Publications

Forthcoming

'Against the Degree-Scope Response to Moral Luck, or A Farewell to Responsibility for Consequences'

Forthcoming

'Take a Stand, You Don't Have to Make a Difference'

2025

'AI Responsibility Gap: Not New, Inevitable, Unproblematic'

2024

'Drawing a Line: Rejecting Resultant Moral Luck Alone'

2022

'Moral Responsibility is Not Proportionate to Causal Responsibility'

2022

'Against Resultant Moral Luck'

2022

'Causation Comes in Degrees'

Public Philosophy

2020

'Epistemic Injustice' (1000WordPhilosophy: An Introductory Anthology)

Talks

Speaker (*=refereed, +=invited)

'Responsibility Doesn't Require Alternative Possibilities'

  • +SPAWN Returning Home: Ethics at Syracuse, Syracuse University (July 2025)
  • *American Philosophical Association, Central Division (Feb 2025)
  • Faculty Research Presentations, Harvard University (Oct 2024)
  • Responsibility Workshop, Chapman University (Apr 2024)

‘AI Responsibility Gap: Not New, Inevitable, Unproblematic’

  • AI and Data Ethics Workshop, Northeastern University (July 2024)
  • *Midwest Ethics Symposium: Ethics and AI, The Prindle Institute for Ethics (Apr 2024)
  • *Penn-Georgetown Digital Ethics Workshop, University of Pennsylvania (March 2024)
  • Brown Bag Workshop, Chapman University (March 2024)

‘Current Debates in Ethics of AI and Technology’

  • +Department of Computer Engineering, Kütahya Health Sciences University (Dec 2024)

‘Take a Stand, You Don’t Have to Make a Difference’

  • +SOPhiA 2023: Collective Harm and Responsibility in the Climate Crisis, University of Salzburg (Sep 2023)
  • *Young Philosophers Read-Ahead Conference, DePauw University (Jan 2023)
  • *International Society for Environmental Ethics, American Philosophical Association, Eastern Division (Jan 2023)
  • *Young Philosophers Lecture Series, DePauw University (Sep 2022)

‘Drawing a Line: Rejecting Resultant Moral Luck Alone’

  • *American Philosophical Association, Pacific Division (Apr 2023)
  • *Free Will, Moral Responsibility, and Agency, Florida State University (Feb 2023)
  • ABD Workshop Series, Syracuse University (Oct 2022)

‘Wrong but Praiseworthy, Right but Blameworthy’

  • *Rightness, Ignorance, Uncertainty, and Praise Workshop, University of Southern California (June 2022)
  • *72nd Annual Meeting of the New Mexico Texas Philosophical Society, Baylor University (Apr 2022)
  • ABD Workshop Series, Syracuse University (Feb 2022)

‘Causation Comes in Degrees’

  • *American Philosophical Association, Eastern Division (Jan 2022)
  • *Society for the Metaphysics of Science, 6th Annual Conference (Sep 2021)

‘Against Resultant Moral Luck’

  • *Summer School on Causation and Responsibility, University of Bern (July 2021)
  • *Great Lakes Philosophy Conference—Ethics in Action, Siena Heights University (Apr 2021)
  • +Philosophical Society of Fredonia, SUNY Fredonia (Nov 2020)
  • *94th Joint Session of the Aristotelian Society and the Mind Association, University of Kent (July 2020)
  • *International Conference on Ethics, University of Porto (June 2019)

‘Moral Responsibility Is Not Proportionate to Causal Responsibility’

  • *American Philosophical Association, Eastern Division (Jan 2021)
  • ABD Workshop Series, Syracuse University (Feb 2020)
  • *AGENT, Ethics and Normativity Talks, University of Texas at Austin (Nov 2019)
  • *20th Annual Pitt-CMU Graduate Student Philosophy Conference, University of Pittsburgh & Carnegie Mellon University (March 2019)

‘Stocker’s Schizophrenia, Alienation, and a Solution’

  • *Fundamentality in Philosophy, The 7th International Philosophy Graduate Conference, Central European University (Apr 2018)

‘Against Reliabilism: In the Face of Skepticism’

  • *Northwest Student Philosophy Conference, Western Washington University (May 2017)

Commentator

Mar 2025

On Selim Berker’s ‘How Your Vote Determines a Winner: On the Metaphysics of Voting’

Edmond & Lily Safra Center for Ethics, Harvard University

July 2024

On Kendra Chilson’s ‘Keeping Our Hands Clean? Autonomous Systems and Diversion of Responsibility’

AI and Data Ethics Workshop, Northeastern University

Mar 2023

On Itamar Weinshtock Saadon’s ‘Responsibility, Causation, and Reversing the Order of Explanation’

Syracuse Graduate Philosophy Conference

Feb 2023

On Joshua Tignor’s ‘Theorizing About Moral Responsibility As Such’

ABD Workshop Series 2021, Syracuse University

July 2022

On Jules Salomone-Sehr’s ‘Complicity: A Minimalist Account for Our Maximally Messy Social World’

Vancouver Summer Philosophy Conference

Apr 2022

On Peter Zuk’s ‘Reconciling Experiential Theories of Pleasure’

72nd Annual Meeting of the New Mexico Texas Philosophical Society, Baylor University

Apr 2022

On Hannah Winckler-Olick’s ‘Simone de Beauvoir on Value-Creation as a Mode of Complicity’

Centennial Conference of the Creighton Club

Jan 2022

On David Sackris and Rasmus Rosenberg Larsen’s ‘Are There Moral Judgements?’

APA Eastern Division Meeting 2022

Oct 2021

On Joshua Tignor’s ‘Moral Growth and Moral Responsibility’

ABD Workshop Series 2021, Syracuse University

July 2021

On Alex Kaiserman’s ‘Responsibility and the ‘Pie Fallacy’’

Summer School on Causation and Responsibility, University of Bern

Apr 2021

On Perry Hendricks’s ‘The Impairment Argument Reconsidered’

Syracuse Graduate Philosophy Conference

Mar 2019

On Caner Turan’s ‘On Greene’s Evolutionary Challenge to Deontological Ethics’

Syracuse Graduate Philosophy Conference

Works in Progress

A paper on moral rightness and wrongness versus moral praise and blame

Under Review

A paper about the flicker defense against Frankfurt-style cases

Under Review

'Responsibility Doesn't Require Alternative Possibilities'

Polished Draft

'(How) Does Accountability Require Explainable AI?'

Draft

Teaching

Harvard University (Modules Embedded into Undergrad/Grad Computer Science Courses)

Spring 2025

Ethics—Deep Integration (Co-created)

CS50: Introduction to Computer Science

Spring 2025

Ethics Bowl

CS1060: Software Engineering with Generative AI

Spring 2025

Distributive Justice

CS1360: Economics and Computation

Fall 2024

Ethics of Technological Unemployment

ES159/259: Introduction to Robotics

Fall 2024

Ethical Implications of Interpretability (Co-created & Co-run)

CS2822R: Topics in Machine Learning - Interpretability

Fall 2024

Ethics of Hacking Back

CS2630: Systems Security

Chapman University (Lead Instructor)

Spring 2024

PHIL303: Environmental Ethics

Syracuse University (Lead Instructor)

Spring 2022/23

PHI394: Environmental Ethics

Fall 2023

PHI191: The Meaning of Life

Spring 2020, Summer 2021/22/23

PHI251: Logic

Spring 2021

PHI383: Free Will

Winter 2021

PHI200: Happiness and Meaning in Life

Fall 2020

PHI197: Human Nature   

Summer 2020

PHI107: Theories of Knowledge and Reality 

Fall 2019

PHI192: Introduction to Moral Theory

Syracuse University (Teaching Assistant)

Fall 2021

Human Nature (Christopher Noble)   

Spring 2019

Theories of Knowledge and Reality (Janice Dowell)

Fall 2018

Logic (Mark Heller) 

Fall 2017

Introduction to Moral Theory (David Sobel)

Fall 2017

Introduction to Moral Theory (Hille Paakkunainen) 

Spring 2017

Human Nature (Neelam Sethi) 

Fall 2016

Theories of Knowledge and Reality (Robert Van Gulick) 

Honors & Awards

2022

Summer Research Fellowship

Syracuse University

2021

Outstanding Teaching Assistant Award

Syracuse University

2016

The Philosophical Society, Student Achievement Award

SUNY Fredonia

Service

Referee

American Philosophical Quarterly, Australasian Journal of Philosophy, Ergo, Erkenntnis, Ethics and Information Technology, European Journal of Philosophy, Journal of Philosophy, Journal of the American Philosophical Association, Synthese

Thesis Advising

Harvard University

Diana L. Yue's senior thesis, "Thinking Outside the Black Box: Justifying Beliefs in the Age of Opaque Autonomous AI Systems." (Spring '25)

Thesis Committee

Harvard University

Peter A. H. Jin's senior thesis, "Forgiveness, Atonement, and The Edge of Desert: Responding Morally to What We Don’t Deserve." (Spring '25)

Lead

Harvard, Embedded EthiCS Research and Engagement (Fall 2025)    

Co-Lead

Harvard, Embedded EthiCS Teaching and Learning (Spring 2025)

Co-Lead

Harvard, Embedded EthiCS Research and Engagement (Fall 2024)    

Co-Organizer

Responsibility Workshop, Chapman University (April 2024)

Judge

Southern California High School Ethics Bowl Competition (2024)

Senator

Syracuse Graduate Student Organization (2020-2021)    

Co-Organizer

Syracuse Graduate Philosophy Conference (July 2020)

Graduate Coursework

Ethics (*=audit)

Moral and Political Philosophy (Hille Paakkunainen)

Constructivism in Metaethics (Hille Paakkunainen)

Anti-Realism and Pragmatism in Ethics (Nate Sharadin)

Anti-Theory in Ethics (Independent study with Hille Paakkunainen)

Ethics of Nudging (Independent study with Ben Bradley)

*Motivation (Hille Paakkunainen)

*Animal Ethics (Ben Bradley)

*Free Will (Mark Heller)

*Prudence (Ben Bradley)

Epistemology (*=audit)

Topics in Contemporary Epistemology (Nate Sharadin)

Language, Epistemology, Mind, Metaphysics (K. McDaniel, K. Edwards)

*Epistemology (Hille Paakkunainen)

Metaphysics

Beyond the Modal: Essence and Potentiality (Kris McDaniel)

Metaphysics of Ethics (Ben Bradley, Kris McDaniel)

Political Philosophy

Justice and Equality (Ken Baynes)

Philosophy of Social Sciences (Ken Baynes)

History of Philosophy

History of Philosophy (Frederick C. Beiser)

Classical Arabic Philosophy (Kara Richardson)

Logic and Language

Logic and Language (Michael Rieppel)

Concepts (Kevan Edwards)

Languages

English, Turkish (Native), Arabic (Reading, Intermediate)

Research

Publications

Against the Degree-Scope Response to Moral Luck, or A Farewell to Responsibility for Consequences (forthcoming, The Journal of Philosophy)

Abstract:

Resultant moral luck is typically considered to be the most problematic type of moral luck. Arguably the most popular response to the problem of resultant moral luck is the idea that resultant luck affects the scope but not the degree of responsibility. Call this the ‘Degree-Scope Response’ (DSR). Philosophers also use DSR in responding to other types of moral luck and in contexts outside moral luck. In this paper, I argue that DSR fails. Then I suggest that we should hold that resultant luck affects neither the degree nor the scope of responsibility. Put differently, consequences are metaphysically irrelevant to responsibility. Further, I discuss various advantages of this view and show its implications for questions about free will, theories of causation, and responsibility in contexts outside moral luck. I also defend this view against the worry that it is too revisionary.

Click here for the PhilPapers page.

Email me for a copy.

Take a Stand, You Don't Have to Make a Difference (forthcoming, Erkenntnis)

Abstract:

Many of our large-scale problems that have arisen only recently in human history, in an industrialized and globalized world, present us with a unique challenge. Often, while people collectively make a difference, individual actions are inconsequential. Consider climate change. We all collectively contribute to its unwanted consequences. But individual actions are inconsequential: One more or one less person taking a joyride in a gas-guzzler on a Sunday afternoon makes no difference regarding these consequences. Donating to charity, voting, buying fair trade products, factory farming, and environmental pollution all present the same challenge. One more or one less vote doesn’t make a difference. But then it’s unclear why individuals should act against climate change or vote. This is the so-called problem of inconsequentialism. In this paper, I present a new solution to this problem by appealing to a type of action that is yet to receive philosophical attention—i.e., taking a stand. I show that taking a stand can be morally valuable and reason-giving even if it makes no difference.

Click here for the PhilPapers page.

Email me for a copy.

AI Responsibility Gap: Not New, Inevitable, Unproblematic (2025, Ethics and Information Technology)

Abstract:

Who is responsible for a harm caused by AI, or a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for it. Two central questions in the literature are whether responsibility gaps exist and, if so, whether they’re morally problematic in a way that counts against developing or using AI. While some authors argue that responsibility gaps exist and that they’re morally problematic, others argue that they don’t exist. In this paper, I defend a novel position. First, I argue that current AI doesn’t generate a new kind of concern about responsibility that older technologies don’t. Then, I argue that responsibility gaps exist but that they’re unproblematic.

Click here for the PhilPapers page.

Email me for a copy.

Drawing a Line: Rejecting Resultant Moral Luck Alone (2024, Canadian Journal of Philosophy)

Abstract:

The most popular position in the moral luck debate is to reject resultant moral luck while accepting the possibility of other types of moral luck. But it’s unclear whether this position is stable. Some argue that luck is luck, and if it’s relevant for moral responsibility anywhere, it’s relevant everywhere, and vice versa. Some argue that given the similarities between circumstantial moral luck and resultant moral luck, there’s good evidence that if the former exists, so does the latter. The challenge is especially pressing for the large group of philosophers who deny resultant moral luck alone. I argue that rejecting resultant moral luck alone is a stable and plausible position. This is because, in a nutshell, the other types of luck can, but the results of an action cannot, affect what makes one morally responsible.

Click here for the PhilPapers page.

Email me for a copy.

Moral Responsibility is Not Proportionate to Causal Responsibility (2022, Southern Journal of Philosophy)

Abstract:

It seems intuitive to think that if you contribute more to an outcome, you should be more morally responsible for it. Some philosophers think this is correct. They accept the thesis that, ceteris paribus, one's degree of moral responsibility for an outcome is proportionate to one's degree of causal contribution to that outcome. Yet, what the degree of causal contribution amounts to remains unclear in the literature. Hence, the underlying idea in this thesis remains equally unclear. In this article, I will consider various plausible criteria for measuring degrees of causal contribution. On each of these criteria, I will show that this thesis entails implausible results. I will also show that there are other plausible theoretical options that can account for the kind of cases that motivate this thesis. I will conclude that we should reject this thesis.

Click here for the PhilPapers page.

Email me for a copy.

Against Resultant Moral Luck (2022, Ratio)

Abstract:

Does one’s causal responsibility increase the degree of one’s moral responsibility? The proponents of resultant moral luck hold that it does. Until quite recently, the causation literature has almost exclusively been interested in the binary question of whether one factor is a cause of an outcome. Naturally, the debate over resultant moral luck also revolved around this binary question. However, we’ve seen an increased interest in the question of degrees of causation in recent years. And some philosophers have already explored various implications of a graded notion of causation for resultant moral luck. In this paper, I’ll do the same. But the implications that I’ll draw attention to are bad news for resultant moral luck. I’ll show that resultant moral luck entails some implausible results that leave it more indefensible than it was previously thought to be. I’ll also show that what’s typically taken to be the positive argument in favor of resultant moral luck fails. I’ll conclude that we should reject resultant moral luck.

Click here for the PhilPapers page.

Email me for a copy.

Causation Comes in Degrees (2022, Synthese)

Abstract:

Which country, politician, or policy is more of a cause of the Covid-19 pandemic death toll? Which of the two factories causally contributed more to the pollution of the nearby river? A wide-ranging portion of our everyday thought, talk, and attitudes relies on a graded notion of causation. However, it is sometimes highlighted that on most contemporary accounts, causation is on-off. Some philosophers further question the legitimacy of talk of degrees of causation and suggest that we avoid it. Some hold that the notion of degrees of causation is an illusion. In this paper, I’ll argue that causation does come in degrees.

Click here for the PhilPapers page and to download the paper.

Epistemic Injustice (2020)

Click here for my entry on epistemic injustice in 1000WordPhilosophy: An Introductory Anthology.


Dissertation Summary

In my dissertation, I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being blameworthy or praiseworthy) depends only on factors internal to agents. Employing this view, I also argue that no one is ever blameworthy for what AI does, but this isn’t morally problematic in a way that counts against developing or using AI.

Here’s a brief overview of my arguments. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue that causal responsibility is irrelevant for moral responsibility, and that the control condition and the epistemic condition depend only on factors internal to agents. Moreover, since what AI does is at best a consequence of our actions, and the consequences of our actions are irrelevant to our responsibility, no one is responsible or blameworthy for what AI does. That is, the so-called responsibility gap exists. However, I argue, this isn’t morally worrisome for developing or using AI. Below, I present summaries of each chapter of my dissertation.

Some philosophers hold that, all else equal, one’s degree of moral responsibility is proportionate to one’s degree of causation (or causal contribution). Call this thesis Proportionality. If causation doesn’t come in degrees, Proportionality is false. So, in chapter one, I discuss whether causation comes in degrees. I argue that it does by showing that all the main objections against graded causation fail and that denying graded causation is theoretically too costly. This chapter of my dissertation has been published in Synthese.

In chapter two, I argue that Proportionality is false despite the fact that causation comes in degrees. To establish this, I employ six plausible criteria for measuring degrees of causation and show that Proportionality understood according to each of these criteria entails implausible results. I also show that there are other plausible theoretical options to account for the kind of cases that motivate Proportionality. This chapter of my dissertation has been published in Southern Journal of Philosophy.

In chapter three, I argue that there is no resultant moral luck (RML). What’s at stake in the debate over RML is best cast in terms of whether causal responsibility increases one’s moral responsibility. I draw attention to previously unexplored implications of RML and argue that these implications leave RML more indefensible than it was thought to be. I also show that what’s typically taken to be the positive argument in favor of RML fails. I conclude that we should reject resultant moral luck. This chapter of my dissertation has been published in Ratio.

Proportionality and RML are the two most plausible positions one could take if causal responsibility is relevant for moral responsibility. Hence, in chapter four, I conclude that causal responsibility is metaphysically irrelevant for moral responsibility, clarify and develop this thesis, and defend it against potential objections.

In chapter five, I argue that neither the epistemic condition nor the control condition presupposes anything external to agents. The epistemic condition rests on the idea, roughly, that one can be morally responsible only if one is aware of certain morally relevant factors. The awareness in question can be knowledge, justified (true) belief, or (true) belief. As it is commonly accepted, knowledge is too strong a requirement for moral responsibility. I follow the reasoning behind this and show that justified (true) belief is also too strong a requirement. I further argue that moral responsibility doesn’t require even true belief. And since the awareness requirement in question presupposes neither justification nor truth, it doesn’t presuppose anything external to agents.

The control condition is the subject matter of the classic free will debate. I survey the leading compatibilist and incompatibilist theories of control and argue that none of them, at least in their most plausible forms, presupposes anything external to agents. A major concern for my argument is that the debate between compatibilists and incompatibilists mainly revolves around determinism. Compatibilists argue that the kind of control required for moral responsibility—i.e., free will—is compatible with determinism, and incompatibilists reject this. Determinism is the idea that at any moment the state of the world and the laws of nature entail one unique future. So defined, determinism is not merely a feature internal to agents but a feature of the world as a whole. However, I argue, (in)determinism external to agents is irrelevant to the control condition: what matters is only (in)determinism internal to agents. That is, what matters is only whether the mental events in agents are (un)determined, not whether anything else in the universe is.

I conclude that the epistemic condition and the control condition depend only on factors internal to agents. Since I also argued that causal responsibility is irrelevant to moral responsibility, there remains no condition of moral responsibility that depends on anything external to agents. Hence, responsibility internalism is true.

In chapter six, I employ responsibility internalism to weigh in on a debate about responsibility in the context of artificial intelligence. Consider autonomous systems or machines that rely on artificial intelligence, such as self-driving cars, lethal autonomous weapons, candidate screening tools, medical systems that diagnose cancer, and automated content moderators. Who is responsible when such machines or systems (or AI for short) cause a harm? Given that current AI is far from being conscious or sentient, it is unclear that AI is responsible for a harm it causes. But given that AI gathers new information and acts autonomously, it is also unclear that those who develop or deploy AI are responsible for what AI does. This leads to the so-called responsibility gap: roughly, cases where AI causes a harm, but no one is responsible for it. Two central questions in the literature are whether responsibility gaps exist and, if so, whether they are morally problematic in a way that counts against developing or using AI. While some authors argue that responsibility gaps exist and are morally problematic, others argue that they don’t exist or that it’s dubious that they do. Drawing on discussions in the earlier chapters, I defend a novel position. I first argue that current AI doesn’t generate a novel concern about responsibility that older technologies don’t. Then, I argue that responsibility gaps exist; more precisely, they are inevitable and ubiquitous. I also argue that this is not morally worrisome in a way that counts against developing or using AI. This is because neither the existence of responsibility gaps nor my argument for it entails that no one can be justly held accountable, or that no one has a duty to make reparations, once AI causes a harm.


Works in Progress

A paper about the flicker defense against Frankfurt-style cases (Under Review)

Abstract: The Principle of Alternate Possibilities (PAP) says that one is responsible for an action only if one could have acted otherwise. The flicker defense is one promising line of response to Frankfurt-style cases (FSCs) in defense of PAP. The flicker defense is almost as old as FSCs themselves. However, in recent years, it has made an intriguing comeback as some philosophers have developed stronger versions of this defense. But this ‘revived’ flicker defense has also recently been criticized. One aim of this paper is to respond to these criticisms. But part of my response requires revising the flicker defense. Hence, the other aim is to revise it and build an even stronger version of this defense.

A paper on moral rightness and wrongness versus moral praise and blame (Under Review)

Abstract: In this paper, I argue that one can be blameworthy for performing an action that’s right, and praiseworthy for performing an action that’s wrong. It’s relatively uncontroversial that basic desert responsibility (being apt for praise or blame) is distinct from responsibility in the duty sense (i.e., what’s morally right/wrong). But the extent to which they come apart can be controversial. For instance, it’s typically accepted that one may not be praiseworthy (/blameworthy) for an action that’s morally right (/wrong). Yet, it’s also common to think that one can be praiseworthy (/blameworthy) for an action only if it’s morally right (/wrong). But this is false—or so I argue via a novel argument that I call the Argument from Moral Encouragement.

Responsibility Doesn't Require Alternative Possibilities (Draft)

Abstract: The Principle of Alternate Possibilities (PAP) says that one is responsible for an action only if one could have done otherwise. The most widely discussed challenge to PAP comes from Frankfurt-style cases (FSCs). The decades-long debate between PAP and FSCs has proved philosophically fruitful in many respects. But it’s also difficult not to get the impression from the literature that the debate has run its course or reached an impasse. In this paper, I present a novel argument that PAP is false.

(How) Does Accountability Require Explainable AI? (Draft)

Abstract: Autonomous systems powered by artificial intelligence (AI) are said to generate responsibility gaps (RGs)—cases in which AI causes harm, yet no one is blameworthy. This paper has three aims. First, I argue that we should stop worrying about RGs. This is because, on the most popular contemporary theories, blameworthiness is determined at the development or deployment stage, making post-deployment outcomes irrelevant to blameworthiness. Another upshot of this argument is that questions about blameworthiness do not motivate the demand for explainable AI. Second, I distinguish blameworthiness from liability and show that blameworthiness is not necessary—nor is it sufficient—for liability. Third, I explore how AI opacity complicates identifying who caused harm—an essential step in assigning liability. I end on an optimistic note by suggesting that identifying who caused the harm—even if we use opaque AI models—is within our reach and not too costly. I also note that liability in the context of AI requires further inquiry, which again suggests that we should stop worrying about RGs and focus on questions about liability.

Teaching

From Fall 2019 to Spring 2020, I attended Syracuse University’s Future Professoriate Program and was awarded the Certificate in University Teaching. In 2021, I received Syracuse University’s Outstanding Teaching Assistant Award for my teaching achievements. I have designed and independently taught courses or modules at Harvard University, Chapman University, and Syracuse University.

Listed below are (i) courses I designed and taught at Chapman and Syracuse; (ii) modules—embedded in undergraduate or graduate computer science courses—that I (co-)designed and taught at Harvard; and (iii) screenshots of select student emails and evaluations.

Courses Taught

PHI394, PHIL303: Environmental Ethics (Spring 2022/23/24)

Course Description:

This course addresses a range of questions surrounding environmental ethics. We will begin by examining some of the major ethical theories about moral rightness and wrongness. What makes an action morally right or wrong? What considerations do we need to take into account in making moral decisions? We will then address various ethical questions regarding climate change. Does climate change generate moral obligations for individuals or only for governments? If it generates moral obligations for individuals, how demanding are these obligations? After that, we will discuss questions regarding (non-human) animals. Do we have moral duties towards animals? Do animals morally count less than humans or are they morally equal to us? Is it wrong to consume animal products? We will finish the course by addressing moral questions about non-animal objects in nature such as trees and rivers. Do we have moral duties to them? If yes, what further moral implications follow?

PHI191: The Meaning of Life (Fall 2022)

Course Description:

The goal is to investigate some of the central topics concerning the meaning of life. We will be interested in these topics not only for possible answers. Our journey will be at least as significant for the genuine questions that we will encounter or raise along the way. Our journey will also help you develop reasoning and argumentative skills, and learn how to write reasonably and clearly. Some of the central questions we will discuss are as follows:

  • What are we after when we inquire about life’s meaning?
  • Is the meaning of life a subjective or an objective matter?
  • What, if anything, constitutes a meaningful life?
  • Can life still be meaningful if there is no God?
  • How do different conceptions of God bear on life’s meaning?

PHI251: Logic (Spring 2020, Summer 2021/22/23)

Course Description:

After a brief review of basic concepts like validity and soundness, the course covers truth tables and proofs in both statement logic and predicate logic.

Goals:

(1) To improve reasoning skills by practicing within a formal structure. (2) To develop a fuller appreciation of the meanings of English sentences by analyzing their formal structure and tracing their logical consequences. (3) To improve skills in written and oral communication by accomplishing the first two tasks.

 

PHI383: Free Will (Spring 2021)

Course Description:

Is it up to you to take this course? Or is it determined beforehand? Or could both of those be true together? Would the absence of prior determination help, or would that just turn your actions into chance events? This course explores the concept of free will, asking: what is it, can we have any, and why should we care?

Goals:

After taking this course, the students will be able to:

… explain and intelligently discuss the major theories of free will.
… formulate and defend their own views on free will.
… better appreciate subtle distinctions and arguments.
… read, write, and converse at a higher level.

PHI200: Happiness and Meaning in Life (Winter 2021)

Course Description:

What does it mean to live a meaningful life? Is a meaningful life a happy life? Can the answers to these questions help us reconcile to all that is wrong with the world? These questions become especially interesting now that we’re going through challenging times where quarantine, anxiety, and limited mobility and sociality are among the defining features of our lives. In this course, we will examine some of the influential philosophical perspectives on the meaning of life and happiness. The course will also aim to improve your critical thinking and writing skills.

PHI197: Human Nature (Fall 2020)

Course Description:

This course covers some of the central topics that concern us as human beings. We will be interested in these topics not only for possible answers. Our journey will be at least as significant, if not more so, for the genuine questions we will raise along the way. Our journey will also help you develop reasoning and argumentative skills, and learn how to write reasonably and clearly. We will discuss these questions:

  • What is knowledge and how do we obtain it?
  • What sort of cognitive biases do we have? Are we blind to the obvious?
  • What is it that makes us what we are?
  • What is the meaning of life? What, if anything, matters?
  • Is morality objective? Why be moral?
  • Is death bad for us? Can we cheat death?

PHI107: Theories of Knowledge and Reality (Summer 2020)

Course Description:

The primary goal is to help you develop reasoning and argumentative skills. You will learn how to write reasonably and clearly. The secondary goal is to introduce you to the main topics in philosophy. We will discuss these philosophical issues:

  • What is knowledge and how do we obtain it?
  • Do we have free will? Are we morally responsible for our actions?
  • Is there a God?
  • What is a mind?
  • What is it that makes you what you are?

PHI192: Introduction to Moral Theory (Fall 2019)

Course Description:

This course is an introduction to major theories about moral rightness and wrongness, about virtue and vice, and about value and disvalue. We examine historically influential theories in the Western philosophical tradition that continue to be of contemporary interest, such as utilitarian, Kantian, and Aristotelian theories. Along the way, we discuss the relationship between morality and self-interest, as well as some disputed moral issues, such as our duties to non-human animals, the obligations of the affluent towards the poor, the ethics of abortion, hate speech and free speech. We use both historical and contemporary readings.

Goals:

To enable students to (a) gain a basic understanding of major moral theories, and of their merits; (b) gain a firm understanding of core ethical concepts and distinctions; (c) gain a facility for independently grappling with ethical issues in an articulate and informed manner; and (d) gain improved critical reading and analytical writing skills.

Modules

Ethics—Deep Integration (CS50: Introduction to Computer Science) (Spring 2025)

Module Description: We design and integrate short ethics snippets directly into the problem sets in CS50. These snippets prompt students to reflect on real-world dilemmas they may encounter as programmers—such as data privacy, accessibility, and responsible use of technology—right alongside their technical assignments. Students consider embedded ethical questions as part of their project submission process, fostering a habit of ethical reasoning early in their computer science education. Each year, CS50 reaches several hundred thousand students worldwide, allowing our integrated ethics materials to help foster a culture of ethical awareness among the next generation of computer scientists around the world.

(Co-created)

Ethics Bowl (CS1060: Software Engineering with Generative AI) (Spring 2025)

Module Overview: This module immerses students in the ethical evaluation of real-world software projects through a collaborative, “red teaming” format inspired by the Ethics Bowl. As part of the course, students are already working in teams on software projects, which form the basis for this exercise. Prior to the module, each team prepares a concise summary of their project, detailing aims, stakeholders, deployment plans, and data sources. This summary is then shared with another team playing the role of their “red team”—i.e., a group acting as adversaries who are looking for possible problems with the software. The red team’s job is to critically assess the project from a moral standpoint, identifying possible ethical pitfalls, stakeholder concerns, and potential public relations risks. During the class session, teams meet in pairs to present their critiques, engage in constructive discussion, and practice responding thoughtfully to ethical challenges. This interactive process encourages students to anticipate objections, refine their ethical reasoning, and consider the broader impact of their work. The session culminates in classroom-wide presentations, offering further opportunities for peer feedback and exchange of ideas.

Distributive Justice (CS1360: Economics and Computation) (Spring 2025)

Module Overview: How should economic resources be distributed in society, and what makes a distribution fair? In this module, we explore competing philosophical perspectives on distributive justice and examine their practical implications using interactive small group discussions and structured debates. Students begin by considering which hypothetical society they would prefer to inhabit, not knowing their own place within it. This exercise serves as a springboard to introduce and critically evaluate three major theories of distribution: egalitarianism, which values equality in resource allocation; libertarianism, which emphasizes individual entitlement and the justice of acquisition and transfer; and the Difference Principle, which permits inequalities only when they benefit the least advantaged. One key goal of the module is to illustrate how we can build better theories of distributive justice by carefully weighing the merits and shortcomings of each perspective, and moving toward more satisfactory views at each step. By the end of the module, students will revisit their initial preferences for a just society and reflect on how their thinking has evolved in light of the philosophical arguments encountered.

Ethics of Hacking Back (CS2630: Systems Security) (Fall 2024)

Module Overview: In the face of a digital security threat, you are mostly on your own. There is no equivalent of state protection or police patrols. What is worse, you will likely receive no justice if your digital assets are attacked, destroyed, or stolen. Generally, these cases have seen no arrests, prosecution, or restitution. Unsurprisingly, some victims consider taking matters into their own hands by hacking back or striking against their digital attackers. In this module, we discuss the potential risks and benefits of hacking back and consider whether hacking back can be morally justified on the grounds of self-defense. We also discuss whether hacking back should be made legally permissible.

Ethical Implications of Interpretability (CS2822R: Topics in Machine Learning - Interpretability) (Fall 2024)

Module Overview: This module examines the ethical importance of being able to explain to the human targets of machine learning decision systems exactly how the algorithms made their determinations. This feature, often referred to as “interpretability,” is generally seen as an important aspect of algorithmic systems that impact human lives and livelihoods. Some policymakers have even suggested that ML systems that aren’t interpretable shouldn’t be used. That said, many decisions made by humans that intuitively seem unproblematic aren’t interpretable, which raises the concern that ML faces a problematic double standard. Furthermore, interpretability is subject to seemingly relevant trade-offs, e.g., a “black box” ML algorithm that is extremely accurate may, in some situations, be preferable to a less accurate algorithm that is interpretable. Students are given the opportunity to discuss the ethical importance of interpretability, the possible double-standard charge in AI governance, and the ethical trade-offs regarding interpretability in extensive small-group discussion.

(Co-created and co-run.)

Ethics of Technological Unemployment (ES159/259: Intro to Robotics) (Fall 2024)

Module Overview: Evidence suggests that in the past three decades, tech-driven task displacement across US sectors has significantly outpaced the replacement of those tasks by others, thus leaving less total work to be done by human workers. Robots and AI are expected to continue this trend, and some have argued that recent advances in those technologies will greatly accelerate it. Some researchers predict large-scale unemployment due to novel technologies in the next several decades if the trend continues. In this module, students have the opportunity to consider and discuss the potential moral and practical reasons that might compel us, as a society, to take action to address this so-called technological unemployment problem. Students will also brainstorm and discuss what steps society might take to either prevent or slow the pace of technological unemployment and what mitigation strategies could be deployed to address its bad consequences if we fail to prevent it.

Student Emails & Evaluations

Student Email #1

Student Evaluations #1

Student Email #2

Student Evaluations #2

Student Email #3

Student Evaluations #3

Student Email #4

Student Evaluations #4

Contact

Get in Touch