Artificial Intelligence needs a morality engine

The ‘trolley question’ is one that keeps AI experts up at night

What is it that makes us human? There is no simple answer to this question, but any attempt at one would invariably include traits such as kindness, morality and empathy. These traits, so often claimed as what sets us apart from beast and machine, are what scientists are trying to conquer next. AI has advanced over the years to the point that it can beat humans at many complex tasks, including mathematics, chess and, in a recent development, music.

This is a pivotal time for Artificial Intelligence. Now more than ever, we are seeing global tech giants such as Alphabet (Google), Amazon, IBM and Microsoft come together to work on AI. This think tank of tech titans is weighing in on the ethics of AI as well as its tangible effects on jobs, transportation and even warfare.


From left, Jeff Bezos of Amazon, Virginia Rometty of IBM, Satya Nadella of Microsoft, Sundar Pichai of Google, and Mark Zuckerberg of Facebook.

Image courtesy: Eric Risberg/Associated Press

This future, where AI might save your life or put it at risk, is not as far-fetched as you might think. One of the most interesting and promising AI developments of the past few years has been the emergence of self-driving cars, such as Google’s Self-Driving Car project or Uber’s driverless cabs.


The trolley question is daunting even for the most advanced AI

The age-old ‘trolley question’ is one that self-driving cars, and AI in general, are still grappling with. To jog your memory: a trolley is heading down a track with five people on it. You control a lever that can switch the trolley onto another track, where it would kill one person. Would you pull the lever, or wouldn’t you?
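
Reduced to code, the dilemma looks deceptively trivial. The sketch below is purely illustrative (the function and its inputs are hypothetical, not drawn from any real system), and its very simplicity is the point:

```python
def pull_lever(deaths_if_stay: int, deaths_if_switch: int) -> bool:
    # A naive utilitarian policy: switch tracks only if doing so kills fewer people.
    return deaths_if_switch < deaths_if_stay

# The classic setup: five people on the current track, one on the other.
print(pull_lever(deaths_if_stay=5, deaths_if_switch=1))  # True: pull the lever
```

Writing it down exposes the discomfort: the code treats lives as interchangeable integers, which is exactly what the ethical debate contests.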

In the case of AI-driven cars, the algorithm could become responsible for a number of calculations that involve human lives, making it a legal and ethical nightmare. Should the AI owe utmost loyalty to its own passengers, no matter how many pedestrians it puts at risk? Should the AI factor in the number of people at risk and assign a value to each of them? How can the value of one human life over another be objectively calculated? Can such an algorithm be universally applied, or would each car manufacturer ship its own version of this AI?
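
None of these questions has a settled answer; each corresponds, in effect, to a tunable parameter. The toy sketch below (every name and weight is invented for illustration, not taken from any real system) shows how two manufacturers shipping different ‘loyalty’ weights would make materially different choices in the same emergency:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    passengers_at_risk: int
    pedestrians_at_risk: int

def weighted_harm(o: Outcome, passenger_loyalty: float) -> float:
    # passenger_loyalty = 1.0 treats every life identically;
    # values above 1.0 privilege the car's own occupants.
    return passenger_loyalty * o.passengers_at_risk + o.pedestrians_at_risk

def choose(options: list[Outcome], passenger_loyalty: float) -> Outcome:
    # Pick the manoeuvre with the lowest weighted harm score.
    return min(options, key=lambda o: weighted_harm(o, passenger_loyalty))

options = [
    Outcome("swerve into barrier", passengers_at_risk=1, pedestrians_at_risk=0),
    Outcome("stay on course",      passengers_at_risk=0, pedestrians_at_risk=3),
]

print(choose(options, passenger_loyalty=1.0).label)  # "swerve into barrier"
print(choose(options, passenger_loyalty=5.0).label)  # "stay on course"
```

That a single constant flips a life-and-death decision is precisely why a per-manufacturer morality engine is such an unsettling prospect.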

There is a dire need for AI think tanks to reach a consensus on what drives the morality engine of AI. In any discussion of AI ethics, it is helpful to go back to the roots: the three cardinal laws of robotics laid down by Isaac Asimov.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
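
The laws form a strict priority ordering: each yields to the ones above it. A minimal sketch of that ordering follows; the `Action` flags are placeholders, since reliably deciding whether a real-world action ‘harms a human’ is precisely the unsolved part:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Placeholder judgements; producing them from raw perception is the hard problem.
    harms_human: bool = False          # covers harm through inaction as well
    disobeys_human_order: bool = False
    endangers_robot: bool = False
    ordered_by_human: bool = False
    prevents_human_harm: bool = False

def permitted(a: Action, obeying_would_harm_human: bool = False) -> bool:
    """Evaluate a candidate action against the Three Laws, highest priority first."""
    if a.harms_human:
        return False  # First Law: absolute, no exceptions
    if a.disobeys_human_order and not obeying_would_harm_human:
        return False  # Second Law: obey, unless obedience would break the First
    if a.endangers_robot and not (a.ordered_by_human or a.prevents_human_harm):
        return False  # Third Law: self-preservation yields to the First and Second
    return True

# Refusing an order to injure someone is permitted, because obeying it
# would violate the First Law.
print(permitted(Action(disobeys_human_order=True), obeying_would_harm_human=True))  # True
```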

In Asimov’s fiction, all robots had these fundamental rules ingrained in them. With that fictional universe fast approaching, we have yet to reach a consensus on what should be ‘hard coded’ into AI’s morality engine. Indeed, we have yet to find out whether there is any way to ensure that AI does not simply bypass these instructions, but that is a dilemma for another day.



