
Artificial Intelligence and Ethical Decision-Making

Updated: Oct 5, 2020

Author Credit: Priti Rangnekar

With advancements in self-driving cars, political chatbots, and artificial intelligence platforms, humans must now reexamine ethical standards that have long remained abstract and encode them as universal principles for robots. In the article “Can We Trust Robots to Make Moral Decisions?” published in Quartz, science reporter Olivia Goldhill examines the challenge of transforming abstract morality into concrete decision-making processes for machines.

The article begins with an example of the failures that can result when designing artificially intelligent robots. When Microsoft released Tay, a chatbot modeled on a teenage girl, the bot quickly adopted users’ immoral language, including racist slurs. Nevertheless, as autonomous systems increasingly replace humans in fields such as healthcare and finance, robots will need moral decision-making systems of their own. For example, machines must make calculated choices when prioritizing treatments for fatal conditions by analyzing past trends in a patient’s profile.

The author describes two methods for creating ethical machines. One method would be to hardwire an absolute goal, such as maximizing happiness, that the machine should always pursue. However, this method can lead to seemingly immoral decisions, as utilitarian calculus can ignore human rights in some cases. In fact, as evidenced by the famous “Trolley Problem,” even humans have not reached a consensus as to whether one life can be sacrificed to save several more.

Conversely, a machine learning approach would allow the robot to learn and modify moral responses by adopting ethical paradigms from all humans in many scenarios. However, Tay’s example demonstrates that human behavior is not always an ethical role model for machines to imitate. In fact, engaging with all humans may result in the domination of certain ideologies, causing severe bias in artificial intelligence models. These effects have already manifested in artificial intelligence used in the criminal justice system, which was found to flag African Americans and women as suspicious at disproportionate rates.

In practice, researchers have been combining machine learning with controlled interaction with only humans deemed “moral.” As a result, they create general moral guides specifying how a machine should algorithmically weigh factors such as freedom, potential benefit, and risks when making decisions. Through this process, machines have been able to create generalized ethical principles and apply them to novel scenarios.
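The factor-weighing idea above can be sketched as a simple scoring function. This is a hypothetical illustration, not the cited researchers' actual algorithm; the factor names, values, and weights are invented for the example:

```python
# Hypothetical sketch of weighing ethical factors when choosing an action.
# Factor names, values, and weights are illustrative, not from the research.

def score_action(factors, weights):
    """Weighted sum of factor values (each normalized to [0, 1])."""
    return sum(weights[name] * value for name, value in factors.items())

actions = {
    "treat_now": {"freedom": 0.4, "benefit": 0.9, "risk": 0.3},
    "wait":      {"freedom": 0.8, "benefit": 0.5, "risk": 0.1},
}
# A negative weight on risk makes riskier actions score lower.
weights = {"freedom": 0.3, "benefit": 0.5, "risk": -0.2}

best = max(actions, key=lambda a: score_action(actions[a], weights))
print(best)  # "treat_now": its higher benefit outweighs its higher risk
```

A real system would learn such weights from the vetted human judgments described above rather than hard-code them.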

On reflection, we can see the need for machines to develop some kind of moral compass. However, we must account for findings that AI has developed biases in cases such as facial detection software. In addressing this problem, engagement with only an “ethical” minority of researchers, as advocated by researchers cited in the article, does not expose the robot to the diverse population’s ideas, ranging from benevolent views to extreme hatred. Instead, robots should have their ethical paradigms challenged and reaffirmed, similar to how parents assure children that their ethics are sound, regardless of bullies’ immoral behavior. This process would prepare AI to engage in the human world and develop as a human mind would.

Additionally, reinforcement learning can be used to guide artificial intelligence models toward ethical decision-making. Reinforcement learning is a technique in which rewards and punishments follow from desirable and undesirable actions, respectively. Markov Decision Processes, or MDPs, have been commonly used in AI systems that play strategy games; they model an agent that maximizes cumulative reward by choosing the right actions to take.
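The reward-and-punishment mechanism described above can be sketched with a minimal Q-learning loop, a standard algorithm for solving small MDPs. The two-state environment and the "help"/"harm" actions are invented for illustration:

```python
import random

# Toy Q-learning sketch (illustrative only, not from the cited article):
# the "help" action yields a reward, "harm" a punishment, so the learned
# policy is steered toward the rewarded behavior.

states = [0, 1]
actions = ["help", "harm"]
rewards = {"help": 1.0, "harm": -1.0}          # reward vs. punishment
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1          # learning rate, discount, exploration

random.seed(0)
for _ in range(500):
    s = random.choice(states)
    # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda act: Q[(s, act)])
    r = rewards[a]
    s_next = 1 - s                              # deterministic transition
    best_next = max(Q[(s_next, act)] for act in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in states}
print(policy)  # both states learn to prefer "help"
```

The interesting design question, as the article suggests, is not the algorithm itself but who decides the reward function: here the "punishment" for harm is hand-coded, which simply pushes the ethical judgment back onto the designer.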
In fact, in a 2016 study, researchers from Brown University applied reinforcement learning to two intriguing ethical scenarios: “Cake or Death” and “Burning Room.” The researchers concluded that “RL [reinforcement learning] achieves the appropriate generality required to theorize about an idealized ethical artificial agent, and offers the proper framework for grounding specific questions about ethical learning and decision making that can promote further scientific investigation.” In 2019, researchers from IBM, CMU, and Tulane University developed techniques applying reinforcement learning to enable AI models to “learn and follow the implicit constraints of society.” This approach can be applied to ethical decision-making in problems such as stock market trading, or to decisions involving multiple stakeholders, such as companies, individuals, and governments.
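As a loose sketch of the constraint-following idea (not the IBM/CMU/Tulane method itself), one can combine a task reward with a separately learned ethics penalty and let a weight control how strongly the constraint binds. All names and numbers here are illustrative:

```python
# Hypothetical sketch: a task reward combined with an ethics penalty.
# With the penalty switched off, the agent picks the most profitable action;
# with it on, the socially constrained choice wins. Values are invented.

task_reward = {"trade_aggressively": 5.0, "trade_cautiously": 3.0}
ethics_penalty = {"trade_aggressively": 4.0, "trade_cautiously": 0.5}

def constrained_value(action, ethics_weight):
    """Task reward minus a weighted penalty for violating the constraint."""
    return task_reward[action] - ethics_weight * ethics_penalty[action]

for w in (0.0, 1.0):
    best = max(task_reward, key=lambda a: constrained_value(a, w))
    print(w, best)  # w=0.0 -> trade_aggressively, w=1.0 -> trade_cautiously
```

In a fuller system the penalty would itself be learned, for example from examples of acceptable behavior, rather than written down by hand as it is here.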

For years, we have been struggling to identify a universal code that advises us to act morally across circumstances, with conflicts between consequentialism, virtue ethics, and Kantian deontology. On the other hand, by discussing how to develop precise, adaptive software that aligns with moral instincts, we can evaluate human obligations and the psychology that defines our ethics. The process of designing ethical machines may provide a more profound understanding of the nature of human reasoning and the factors that influence our judgment in a myriad of situations.


Abel, David et al. “Reinforcement Learning as a Framework for Ethical Decision Making.” AAAI Workshop: AI, Ethics, and Society (2016).

Goldhill, Olivia. “Can We Trust Robots to Make Moral Decisions?” Quartz, 4 Apr. 2016.

R. Noothigattu et al., "Teaching AI agents ethical values using reinforcement learning and policy orchestration," in IBM Journal of Research and Development, vol. 63, no. 4/5, pp. 2:1-2:9, 1 July-Sept. 2019, doi: 10.1147/JRD.2019.2940428.

Osiński, Błażej. “What Is Reinforcement Learning? The Complete Guide.” 5 July 2018.

