The Strength of Moral Intuitions
In the study of moral philosophy, many ethical theories have been developed, each with the goal of establishing a system that can properly advise us on which actions are right and which are wrong. These systems come with a diversity of definitions, logic, and intuitions. Yet no matter how fundamentally different or slightly varied they are from one another, they all seem to fail under specific cases and scrutiny. One edge case, or one extension of what the system entails, seems enough to thrust the entire theory into jeopardy. These exposed problems cannot simply be patched over or discarded. The flaws often do not reveal bad logic in the system or even flawed definitions. Rather, the systems seem to fail precisely because of their consistency: they cannot bend their rules to fit an edge case. If the systems are not flawed in their construction, how can there still be problems with them? How are we able to find examples where a theory is wrong? The errors lie not in the theories themselves but in our interpretation of them as humans. Are we trying to create a system that excels at resembling our human intuitions about right and wrong, or are we trying to extend our ethical calculations past our fallible intuitions?
To see this more clearly, I want to examine popular counterexamples against two ethical theories, Utilitarianism and Deontology. First, it will benefit us to briefly define Utilitarianism. Utilitarianism is a form of consequentialism holding that we as actors ought to make the decisions whose consequences produce the maximum good for all sentient beings. The word “good” is defined differently depending on which version of utilitarianism you subscribe to, but for our example it does not matter. The system is very strong in its logic and gives us a concrete way to act: if happiness is good, then we ought to maximize this good with every one of our actions. If we accept the definitions, the simplicity of the system makes its conclusions hard to challenge. However, there exists a popular counterexample that seems to cast our hope for the system into disarray. Imagine a mean old man who has no living relatives and a minimal, if not negative, impact on society. Five people need organ transplants. If the old man’s organs were harvested, the five would be saved at the cost of his life. Should we kill the old man? If you are following utilitarianism, it is simple to see that the death of the old man will produce an immense amount of happiness for the five people, if not immediately, then for the rest of their lives. This result, however, strikes a sour feeling in most of us. How can an ethical theory prescribe an act as immoral as murdering the innocent? Surely there must be something wrong. Yet this problem does not expose a fault of logic; it reveals a consequence with which our moral intuitions do not agree. If we really did believe in our definitions, then what grounds do we have for dismissing Utilitarianism? We could only disagree with it by claiming that it feels wrong in this case.
We can see that similar edge cases and unintuitive consequences arise in Deontology. For example, in “On a Supposed Right to Lie From Benevolent Motives,” Kant discusses the counterexample of the murderer at your door. The example holds that we ought to honor our duty to tell the truth even when doing so seems to make us an accomplice to a crime such as murder. Once again, the problem does not lie within the logic of the system; rather, the logic of the system produces a consequence with which our intuitions and feelings do not agree. Does it seem right to dismiss all of Kant’s pure reason and the categorical imperative, no matter how sound they may seem, because we are brought to one conclusion we find unsettling?
Seeing that we feel compelled to discard two heavily supported and well-reasoned systems, it seems imperative to consider whether we should continue to value our moral intuitions this strongly. As the two examples show, we feel that the theories are wrong because they diverge from our moral intuitions in certain instances. But we should remember that in reading the theories we agreed with all the premises and definitions. This boils down to the question of what our goal with ethical systems should be. Should it be to create a grounded, logical framework for assigning moral judgments, or to find a system that consistently aligns with our intuitions? If we are willing to give argumentative weight to the counterexamples above, we must be practicing the latter, for we are questioning a theory that logically holds because it disagrees with a moral intuition. However, if we are to cherish our moral intuitions above all else as the gold standard of moral judgment, then why should we even devise a moral system? We would know the right and wrong thing to do at all times, without much philosophizing, by simply depending on what felt right to us.
Aligning our moral system with our intuitions is not a strong way to form an ethical system. The strengths of philosophy and logic fall by the wayside, and we suddenly value only feelings in our moral considerations. These feelings are poor ethical determiners for a few reasons. First, they are simply unreliable: the feelings I have about a certain action depend on a bevy of factors that we do not control. Steiner, summarizing Jillian Craigie’s view, points to “emotion’s capacity to narrow our focus to a set of possible judgments without explaining whether and how it opens up the possibility for extensive reflection… Emotional intuitions, on her view, present judgments as certain and not-to-be-questioned” (Steiner 2019).
We can see that our emotion-ruled moral intuition is quick to cast judgment and lacks the important ability to examine that judgment externally, for what it truly is. For example, in the utilitarian counterexample with the surgeon, I might feel the act is wrong in the abstract, but when I consider that my dear friend is one of the five who need the organs, I may see it in a different light. Or, from the other side, I might feel the action is ethical but come to disagree with it because one of the people to be saved is my horrible boss. Our personal presumptions cause us to flip-flop on the morality of a case, whereas a pure system would not allow this. An ethical view in which intuition drives judgment begins to breed an ultra-subjectivity about the moral quality of actions. Utilitarianism is already founded on subjective definitions of words like happiness and good; when we consider our feelings about a specific instance of an action, these subjective definitions are further amplified by our feelings about the circumstances, as in the case of the surgeon and our friend or horrible boss. These two layers of subjectivity can cause us to assign two identical acts different moral values depending on minimal and uncontrollable factors. Such a morality begins to feel increasingly arbitrary and random in its assignments.
If we create and commit to a more structured logic for our ethical systems, we will reach more valid results. First and foremost, a structured system allows our ethical judgments to be strongly grounded. If Utilitarianism is our system of choice, then we accept it and can explain why we believe or act in a certain way: I can say that I flipped the trolley lever because doing so maximized the good and was therefore the right decision, rather than acting on my feelings and explaining myself by saying “it just felt right” or “it felt wrong.” Such a system also creates a framework for recognizing and defending our actions; explaining an act through the lens of a framework holds up better than appealing to someone’s gut feeling. Additionally, a structured system lets us extend and strengthen our moral considerations. If we wield our reason, we can establish a system of ethics that we can support in a clear-headed way, and we can trust that system, rooted in our best logic and reason, over whatever direction our volatile emotions may steer us. We are fallible human beings whose emotions and feelings can easily cloud our judgments. Singer presents this idea as follows: “We have developed immediate, emotionally based responses to questions involving close, personal interactions with others. The thought of pushing the stranger off the footbridge elicits these emotionally based responses” (Singer 2005). It is this commitment to clouded reasoning that creates circumstances in which we feel justified in dismissing our previous pledge to do right according to what our clear-headed thinking told us.
Ultimately, many ethical theories and systems have been devised, running the gamut of definitions and assumptions. In the face of this, no theory is free from strong counterexamples. Utilitarianism and Deontology both fall prey to this, and both fail not as a matter of their logic but because of our feeling that a specific case’s moral judgment should be otherwise. Instead of using these cases to discredit the theories, we can use them as evidence to discount the strength and validity of our own moral intuitions.
Sayre-McCord, Geoff. “Metaethics.” The Stanford Encyclopedia of Philosophy, Summer 2014 Edition, edited by Edward N. Zalta.
Singer, Peter. “Ethics and Intuitions.” The Journal of Ethics, vol. 9, no. 3/4, 2005, pp. 331–352. JSTOR, www.jstor.org/stable/25115831. Accessed 23 Dec. 2020.
Steiner, Corey. “Emotion’s Influence on Judgment-Formation: Breaking down the Concept of Moral Intuition.” Philosophical Psychology, vol. 33, no. 2, 2019.