AI and Morality: A Dilemma and the Ongoing Struggle

by | Mar 6, 2025 | Artificial Intelligence and Robotics, Blog

Photo by Alex Knight on Unsplash

Artificial intelligence (AI) is the ability of a computer or computer-controlled robot to carry out tasks typically performed by intelligent beings. The term also refers to the endeavor of building systems with the cognitive abilities that distinguish humans, including reasoning, discovering meaning, generalizing, and learning from experience.

This ability of AI to learn has inspired many novels, movies, and series, including sci-fi stories about the dangers of robotics. The rise of AI has also sparked a meaningful debate about AI and morality, especially regarding how it is used and the risks it may pose in the future.

Artificial intelligence draws on a variety of learning methods. The most straightforward is trial and error. A basic computer program that solves mate-in-one chess problems might try moves at random until it finds a mate. The program can then store the solution together with the position, so that the next time the computer encounters the same position, it recalls the answer. This kind of rote learning, the simple memorization of individual items and procedures, is very easy to implement on a computer.
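To make this concrete, here is a minimal sketch of trial-and-error search combined with rote memorization for mate-in-one puzzles. It assumes the third-party python-chess library is available (pip install chess), and the solutions dictionary is an illustrative stand-in for the program's stored memory rather than any particular engine's design.

import chess

solutions = {}  # rote memory: FEN position string -> winning move

def solve_mate_in_one(fen):
    """Try every legal move until one delivers checkmate."""
    if fen in solutions:  # rote learning: recall a previously solved position
        return solutions[fen]
    board = chess.Board(fen)
    for move in list(board.legal_moves):  # trial and error over candidate moves
        board.push(move)
        if board.is_checkmate():
            solutions[fen] = move  # memorize the solution for next time
            board.pop()
            return move
        board.pop()
    return None  # no mate-in-one exists from this position

# Example: a back-rank mate-in-one for White (Re8#).
print(solve_mate_in_one("6k1/5ppp/8/8/8/8/8/4R1K1 w - - 0 1"))

The first call searches every legal move; a second call with the same position returns instantly from memory, which is all that rote learning amounts to here.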

The Booming Industry of AI

For many years, artificial intelligence (AI) drove advanced STEM research. Most people, however, first learned of its capabilities through tech companies like Amazon, Facebook, and Google. Today, AI is crucial to many industries, including manufacturing, banking, retail, and health care.

Its revolutionary potential to increase productivity, reduce expenses, and speed up research and development has recently been dampened by concerns that these intricate, opaque networks can cause more social harm than economic benefit. Private companies use AI software to make decisions about employment, health and medicine, creditworthiness, and even criminal justice without being held accountable for how they ensure that programs aren’t encoded, consciously or unconsciously, with structural biases. The U.S. government does not oversee these decisions.

Early on, it was widely believed that artificial intelligence would primarily automate basic repetitive tasks involving low-level decision-making. However, because of the booming industry of AI, increasingly powerful computers, and the accumulation of massive data sets, artificial intelligence has advanced quickly. One subfield, machine learning, has revolutionized several domains, including education. It is renowned for its capacity to sort and analyze vast data volumes and learn over time.

AI software is used in hiring to assess resumes, evaluate interviewees' voices and facial expressions, and fuel the expansion of so-called "hybrid" positions. Rather than replacing labor, AI handles crucial technical duties, such as routing package delivery trucks, which may free workers to concentrate on other responsibilities and increase productivity.

The Dilemma of AI and Morality

Robot With Human-Like Features

Photo by Gabriele Malaspina on Unsplash

With the AI industry booming and people turning to AI to handle menial tasks, the safety and security sector may soon rely on it as well. Many online scams now exist because of AI technology, and conversely, AI is already used to catch criminals and detect criminal activity, whether by flagging online phishing scams or through the facial recognition technology used by governments and agencies.

In this sense, it is not far-fetched to believe that AI will one day enforce security, especially given the known risks of working in military-related fields. This creates the dilemma of AI and morality and raises the question: how can we trust computers to make the same moral decisions as humans in complex situations or on questions heavily grounded in ethical values?

However, one crucial thing to consider when dealing with AI and morality is that our morality is complex. It is also influenced by many things, such as where we live, how we grew up, our experiences, and sometimes even biases in our culture. This means that even humans struggle to understand and solidify the concept of morality because different people can view situations from various points of view and have different action plans.

Does this then mean that, since humans cannot objectively define morality in a way that is uniform across all people, we cannot teach it to AI? This is where opinions about AI and morality diverge. Many believe that although each person, nationality, or culture may view certain complex situations differently, we all follow some shared moral rules: do the least harm possible if harm is necessary, do not kill, and do not lie to or cheat others.

Because of this, many people, especially those involved in software development, can attest that some of the most fundamental moral values can be taught to technology: a system can be programmed to ensure that it never intentionally kills or harms people, as the sketch below illustrates.
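As a toy illustration, the sketch below hard-codes a small set of forbidden effects and filters out any candidate action that would produce one, no matter how highly the system scores it. The rule names, action format, and scores are all hypothetical, not a real safety framework.

FORBIDDEN = {"harm_human", "deceive_user"}  # assumed hard rules

def choose_action(candidates):
    """Pick the highest-scoring action that violates no hard rule."""
    allowed = [a for a in candidates if not (a["effects"] & FORBIDDEN)]
    if not allowed:
        return None  # refuse to act rather than break a rule
    return max(allowed, key=lambda a: a["score"])

actions = [
    {"name": "shortcut", "score": 0.9, "effects": {"harm_human"}},
    {"name": "safe_route", "score": 0.7, "effects": set()},
]
print(choose_action(actions)["name"])  # prints "safe_route"

The point of this design is that the rules act as a veto, not a weight: a forbidden action is discarded outright rather than merely penalized.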

However, this still does not fully address how AI should make decisions in complex moral situations. Some moral questions even humans cannot resolve. One example is the well-known trolley problem: a hypothetical ethical thought experiment in which a bystander can save five individuals at risk of being struck by a trolley only by diverting the trolley to kill one person instead.

Some would point out that we expect artificial intelligence to answer this question when we cannot even do so ourselves. The human population will always have different answers to the trolley problem. This is where the debate lies as we develop artificial intelligence and weigh the dangers it may pose in the future, given its limited ability to comprehend complex situations. Do you want to learn more and read books about AI and the concept of morality? Purchase Angel of Mortality today!
