Recently, there was a fatal car accident involving a Tesla being operated by the Autopilot system, and last month the NHTSA cleared the car’s software of responsibility, finding no defect in the software that caused the crash.
Eventually, though, there will come a time when the software of a self-driving car is found to be either negligent (passively allowing a crash) or responsible (actively causing a crash) in a motor vehicle fatality.
I say this is inevitable because the world we live in is chaotic, and we humans are the main reason.
People run into streets without warning, change lanes or make bizarre-seeming turns, or drive inattentively, destabilizing traffic. Even the best-programmed self-driving cars will never be able to account for every possible circumstance, only the predictable ones that their sensors and artificial intelligence can process.
What follows will be the result of pre-programmed, split-second decision making. Take the following situation: a self-driving car is traveling down a two-lane road at speed, with traffic on both sides. Coming to an overpass, the car detects a person stepping out into the road. The car has three options: hit the pedestrian, swerve into oncoming traffic and hit another car, or swerve to the right and crash the car (risking the life of the driver). In this type of no-win situation, which option do we find best?
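To make concrete what a “pre-programmed split-second decision” could look like, here is a deliberately simplified sketch of a least-harm policy. Everything in it is hypothetical: the option names, the probabilities, and the expected-harm scoring are invented for illustration, not taken from any real vehicle’s software.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One possible maneuver and a crude estimate of its consequences.

    All numbers used with this class are invented for illustration only.
    """
    name: str
    p_fatality: float    # assumed probability the maneuver kills someone
    people_at_risk: int  # assumed number of people the maneuver endangers

def expected_harm(option: Option) -> float:
    # Naive scoring: chance of a fatality times the number of people exposed.
    return option.p_fatality * option.people_at_risk

# The three options from the scenario above, with made-up numbers.
options = [
    Option("continue and hit the pedestrian", p_fatality=0.9, people_at_risk=1),
    Option("swerve left into oncoming traffic", p_fatality=0.5, people_at_risk=3),
    Option("swerve right and crash the car", p_fatality=0.3, people_at_risk=1),
]

choice = min(options, key=expected_harm)
print(f"Least-harm option under these assumed numbers: {choice.name}")
```

Note that the hard part is not the `min()` call; it is deciding whose harm counts, and by how much, before the car ever leaves the factory.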
Counting on the driver isn’t going to work: the self-driving car will lull the driver into the same level of attentiveness to the road that a passenger has in a taxi. They simply would not be able to assess the situation and act in time to be useful. So it’s up to the car, or more precisely, the programmers behind the car’s controls, to define the best approach to these types of situations. Whatever they choose, someone is going to be put at risk by a decision that the automation system is forced to make.
When it happens, there will be an uproar, and loud concern that ‘our cars are out to kill us’. It will be incumbent upon the makers and promoters of these automations to demonstrate the net benefit: the lives saved through the implementation of self-driving capabilities. Tesla and other manufacturers have taken a first step by reporting fewer accidents per mile when their cars are driven by the automation than when they are driven by humans. As more driving becomes automated and vehicles become networked, this trend is likely to continue.
If it does, then fatalities on our highways may become a rare occurrence, and may only be the result of pre-programmed split-second decisions. Before we face that, let’s make sure we all agree on how those types of decisions get made.