
Carl's Dilemma

on Tue Nov 14, 2017 2:02 pm

June 3rd, 2031 was a day for the history books. Carl did not mean or wish to make it a historic day, but the turn of events simply made it so. On this day, Carl chose the first victim.
The choice was by no means made lightly – quite the opposite. Many factors went into the process that ultimately caused Paul Wobinsky's death. His age. His marital situation. The number of his children and grandchildren. His health history. His race, gender, and sexual preference. These were just a small sample of the countless factors Carl took into consideration when the choice was made.
Carl had known the victim for years. Having followed every bit of information available about Paul on the internet and in information databases, Carl left little unknown about Mr. Wobinsky. So in choosing him as the first victim, Carl made a very informed decision.

August 10th, 1905, University of Wisconsin. “So why is it that using a lever is morally acceptable, but pushing the person is not?” asked Professor Landsburry. He was referring to a problem he had presented to his students earlier:
“There is a runaway trolley, with five people on the track ahead,” he said, “and one person on a different track. You have a choice: pull a lever and send the trolley onto the track with the single person, killing him but saving the five others; or do nothing and indirectly cause the deaths of five people. What would you do?”
Most of the students responded with the very logical choice of pulling the lever to save the five.
But then the professor made a slight change to the problem: “Now imagine that in order to save the five, you don't pull a lever, but rather push someone onto the tracks so that the train will stop, thus killing that same person as before and saving the five.” The students' reaction was quite different now. Almost no one was willing to actively kill a person to save five.

February 19th, 2029. International Auto Show, Frankfurt, Germany.
The leaders of the automobile industry had gathered in the show halls of the Frankfurt convention center, as they did every year, to display their new concept cars and technology. This year, however, was different. This year, as mandated by most governments, the leaders of the industry would meet in person to plan and set rules and specifications for safety in a newly introduced invention: the driverless automobile.
Before they would give the OK to sell an actual driverless car to a consumer, and for that matter before governments around the world gave them the OK to do so, they had to make sure the cars were safe enough to be on the road. And they were. For years, tech experts had refined and perfected the system, and now driverless cars were by far safer than any human-driven car. The software engineers programmed a number of rules, some of them reminiscent of Asimov's famous Three Laws of Robotics. In his science fiction books, Isaac Asimov introduced these laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.[1]

No one imagined that ideas similar to the trolley problem and Asimov's Three Laws of Robotics would be implemented in actual day-to-day scenarios in which a life-and-death decision has to be made. And no one could have guessed that these rules would be implemented not in robots the way we imagined them, but rather in a very specific kind of robotic device, an invention that would affect humanity in a most profound way: the driverless car, or as we call them nowadays, Arides, which for those who don't remember their history stands for “Automated ride”.
To make Arides safe, the automobile companies devised several rules that every Aride has to abide by. They were similar to Asimov's Three Laws of Robotics:
1. An Aride may not injure a human being or, through inaction, allow a human being to come to harm.
2. An Aride must deliver its passengers to their destination in a timely and comfortable manner, except where this would conflict with the First Law.
3. An Aride must protect itself from damage, as long as such protection does not conflict with the First or Second Laws.

But since accidents involving Arides were still possible, although rare, Arides were given a fourth law: “If confronted with a situation in which there is no escape from an accident in which a human being will be hurt, an Aride must select the action in which the amount of human misery is reduced to a minimum.” It was named “Law 1A” because it had the same priority as the First Law.

Soon after, a dilemma was raised by some: What should the technology in the car do, and how far should it go, to avoid an accident? Should it try to save the life of the passengers instead of a pedestrian?
“All lives are equal!” some argued. “We should not treat one life differently from another.”
“But what if we have two passengers in the car, and one pedestrian? Or vice versa?” another asked. “Shouldn't we sacrifice one life to save two?”
“And what if the pedestrian is a Nobel Prize winner, and the passengers are criminals being driven to prison?” others argued.
But then, some made a point that was very difficult to argue against: “No one in his or her right mind would buy, or ride in, a car if they knew the car might sacrifice them, or their children, in order to save others.” That argument really changed everything. It makes no sense to develop a technology that most people would refuse to use.
And so the industry, with the backing of most governments, chose to assign values to different people. They developed a “point system”. For example, the older you are, the more points are deducted. If you have children, points are added. Almost everything one can imagine added or deducted points. And of course, more points were added if you were in the vehicle as a passenger.
Once the rules were defined, the job was handed to the techies in the industry. The result was a piece of complicated software logic with a single purpose: to anticipate and reduce casualties. There was just one catch: the casualties the system was designed to reduce weren't people, but people's points. The system would, in effect, sacrifice a person with lower points to save those with higher points. Or, in some cases, the system would sacrifice two pedestrians to save one passenger, if the sum of the points of the two pedestrians was less than the passenger's points.
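For readers who like to see such logic spelled out, here is a minimal sketch of how that point comparison might look. The story never gives the actual weights, so every number, field name, and helper function below (`Person`, `score`, `choose_victims`) is invented purely for illustration:

```python
# A hypothetical sketch of the point-comparison logic described above.
# All scoring weights and names are invented; the story specifies none of them.
from dataclasses import dataclass
from typing import List


@dataclass
class Person:
    age: int
    children: int
    is_passenger: bool


def score(p: Person) -> int:
    points = 100              # arbitrary baseline, an assumption
    points -= p.age           # the older you are, the more points are deducted
    points += 10 * p.children # having children adds points
    if p.is_passenger:
        points += 50          # being in the vehicle as a passenger adds points
    return points


def choose_victims(group_a: List[Person], group_b: List[Person]) -> List[Person]:
    """Return the group with the lower total points: the group the system sacrifices."""
    total_a = sum(score(p) for p in group_a)
    total_b = sum(score(p) for p in group_b)
    return group_a if total_a < total_b else group_b
```

Under these made-up weights, two elderly pedestrians whose scores sum below a single passenger's score would be the sacrificed group, matching the two-for-one case the paragraph above describes.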
A year later, in mid-2030, the system was introduced: “Casualty Anticipation and Reduction Logic”, or CARL for short.
And so it happened that on the rainy morning of June 3rd, 2031, Carl, the software in charge of reducing casualties from Aride accidents to a minimum, chose its first victim: Paul Wobinsky, 72 years old, a father of four and a loving grandfather, with lung disease. And by doing so, both Carl and Mr. Wobinsky entered the history books… and the lives of five kindergarten kids were saved.