Source Coding Life

An algorithm, by definition, is a self-contained, step-by-step set of operations to be performed. It exists to carry out calculation, data processing, and automated reasoning.

The way a program is written can have important consequences for its maintainers. Calculation, processing, and automated reasoning shape the priorities of the program itself. Those priorities, such as the speed of the program's execution or its ability to compile, generally depend on the code's purpose. The program weighs these competing priorities to choose the most systematic course of action for the efficiency of the system, and the outcomes of the source code are then carried out directly, on the fly. There's a catch: what happens when human life is on the line in the model of the most efficient course of action? Who receives the final say in a program that is designed to sacrifice for the greater good?

For ages, the trolley problem has been presented to the general public, dangling it in front of our minds. What would you do in this situation? Would you enter a car that, in a greater-good scenario, is programmed to sacrifice its occupants, or one that is designed to protect them at all costs? For a more in-depth look at the dilemma, click the link for an interactive game experience: .

Disturbing, right? So here's the moral and ethical dilemma that has been pursued by a team of researchers, led by Jean-Francois Bonnefon from the Toulouse School of Economics. Their discussion was published on arXiv on October 12, 2015.

The scenario is as follows: You’re in an autonomous vehicle and, after turning a corner, find that you are on course for an unavoidable collision with a group of 10 people in the road with walls on either side. Should the car swerve to the side into the wall, likely seriously injuring or killing you, its sole occupant, and saving the group? Or should it make every attempt to stop, knowing full well it will hit the group of people while keeping you safe?
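To make the dilemma concrete, the utilitarian framing of this choice can be sketched as a tiny decision function. This is purely an illustration: the function name and the harm estimates are hypothetical, not the paper's model or any manufacturer's actual logic.

```python
# Toy sketch of a utilitarian "moral algorithm" for the scenario above.
# All names and harm estimates here are hypothetical illustrations.

def choose_action(occupant_harm_if_swerve: float,
                  pedestrian_harm_if_stay: float) -> str:
    """Pick the action that minimizes total expected harm."""
    if occupant_harm_if_swerve < pedestrian_harm_if_stay:
        return "swerve"  # sacrifice the sole occupant, save the group
    return "stay"        # brake hard and protect the occupant

# One occupant at high risk vs. a group of ten at high risk:
print(choose_action(occupant_harm_if_swerve=1.0,
                    pedestrian_harm_if_stay=10.0))
```

With ten pedestrians at stake against one occupant, the purely utilitarian rule swerves; flip the numbers and it stays. The entire ethical debate the researchers describe lives in how those harm numbers get chosen, and by whom.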

Their reports note that, with the rise of autonomous vehicles, some situations like this are inevitable, and what the cars are programmed to do in those situations could play a huge role in public adoption of the technology with regard to public health and safety.

The researchers also pressed the test subjects for their opinions on legal matters, which currently remain in limbo for autonomous vehicles. Some of the questions in the experiment included: “Will new laws be introduced that mean the car must swerve, as they can make an emotionless ‘greater good’ response?” and “If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”

The researchers went on to test the subjects' opinions. They found that, on average, people were willing to sacrifice the driver for the greater good in order to save others. The twist is that most were only willing to do so if they did not consider themselves to be the driver. The analysis also revealed that 75% of respondents thought it would be more moral to swerve, yet only 65% thought the cars would actually be programmed to swerve in such a scenario.

Our culture is shifting more and more toward technology taking over tasks such as driving. Technology is revolutionizing the human experience, and there is little doubt that society will eventually convert to an autonomous driving world. Before this can occur, the researchers stress, there are still moral barriers that need to be addressed.

“Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today,” said the researchers. “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.” So what would you do?