
Conclusion: AI is not smart enough

“The only thing that could cause AI to misbehave is human error.” These are the words of Nick Ismail in his article “Why AI is set to fix the human error”: http://www.information-age.com/artificial-intelligence-set-fix-human-error-123466675
Well, he might be right. Human error can be one of the theories behind the… politically correct content here… accident.

Now, which human in the chain?

As a human reliability expert for close to 20 years, I have been studying the underlying reasons why humans make mistakes, and yes, AI can help. However, does it really take care of the most critical human factors? Why is it that, even with such sophisticated technology, these things still happen?

Because Ismail is right. AI is not smart enough. At least not yet.

In September 2017, GM’s self-driving cars were involved in six accidents (https://www.reuters.com/article/autos-selfdriving-crashes/gms-self-driving-cars-involved-in-six-accidents-in-september-idUSL2N1MF1RO): “The accidents did not result in any injuries or serious damage, according to the GM reports, but did demonstrate the challenges for developers of self-driving cars confronted by crowded urban streets.”

In 2016, the first fatality associated with self-driving cars occurred when the driver of a Tesla operating in self-driving mode died after a “technical failure”: https://www.nytimes.com/interactive/2016/07/01/business/inside-tesla-accident.html

Still, efforts continue to improve reliability. In 2016, Erik Coelingh, Volvo’s Senior Technical Leader for Safety and Driver Support Technologies, told Tech Insider: “There’s huge, enormous potential in making traffic safer than it is today. That is one of the big reasons for us that we entered this field.” What was the promise? (http://www.businessinsider.com/advantages-of-driverless-cars-2016-6)

  • Safer
  • More efficient
  • Time for other things

While these are all things that we do want, we are not there yet, and we all know it.

This tragedy, which has shaken us to our core, was a disaster waiting to happen.

Today, this promise was tainted with tragedy before it even started. Uber’s technological advancement fell into infamy by taking the life of a pedestrian. So, what happened? Human error? An equipment failure? Design flaws? Risk evaluation shortcomings? Well, yes: all of the above.

For some reason, we have come to believe that technology is always better than a human, as evidenced by the development of technology intended to control the inherent unreliability of people: robots, computers, GPS, and many other gadgets intended to help us make fewer mistakes and make our lives easier. There is no truth to that belief. Not only is it not correct, it never will be.

Whether we like it or not, humans are not going anywhere, and human error is always going to be a factor to be addressed. From the moment a concept becomes a reality, all bets are off. Errors can be injected by known people like designers, engineers, suppliers, operators, and users, all the way to external, unwanted, unpredicted random conditions.

On the other hand, the promise was never to eliminate every accident… accidents happen. But what could have happened here? From the information we have, here is the list I have come up with:

  1. The car failed to detect the danger: the equipment is not reliable, and/or the design did not consider “unexpected circumstances”.
  2. This failure then triggers a human action from the safety driver.
  3. The safety driver did not react in time or did not have the time to react; hence, in the case under study, reaction time was sub-par or timely recovery was impossible under the circumstances.
  4. The driver had the time to react but had a delayed response due to a “psychological delegation of responsibility”: as a safety driver, the role is to take control if, and only if, a failure or danger is detected/perceived. Selecting an action takes time, and even though to the “naked eye” that time is almost negligible, those tiny milliseconds can make a world of difference (see the first sketch after this list). Not everybody has the same reaction time. My average reaction time is 347 milliseconds. You can try yours here: https://www.humanbenchmark.com/tests/reactiontime/
  5. Trust: the perceived perfection of technology and the false narrative of the inadequacy of human execution can create conditions for “over-trust.” If I unconsciously trust the technology more than I trust my own actions/decisions, I might momentarily freeze until my brain registers the danger as real, triggering a reaction in survival mode.
  6. Overestimation of barriers of defense: self-driving cars, software, robots, AI, and many other great inventions have one goal: to be better, safer, faster, more profitable, and more innovative. On the other hand, we expect humans to compensate for equipment shortcomings. Compensating one weakness with another is a danger when designing redundant “systems” (see the second sketch after this list).
  7. Risk assessment for the testing environment: prediction exercises and failure analysis are used to identify possible dangers, yet the decision was made to test in a natural environment without scientific proof guaranteeing the safety of pedestrians and drivers. These assessments must mimic reality as much as possible; if we use ideal circumstances as our canvas, unexpected random circumstances, meaning real life, will arise. Even though humans are better than technology at analyzing unlimited variables, the danger in question was predictable and not necessarily an “unexpected condition difficult to predict.”
  8. Access to public environments allowed: management systems, rules, laws, regulations, and permits are all necessary when deciding on possible danger exposure. The intention is not to add bureaucracy but to have enough barriers of defense to avoid “unconsidered or unpredictable conditions.”
  9. Learn from failures: this has happened before, not necessarily because of the same root cause, but the events all share similar circumstances: technical failure and human error.
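
One way to see why the milliseconds in point 4 matter is to turn reaction time into distance. Below is a minimal sketch in Python; the 40 mph speed and the reaction times are illustrative assumptions, not figures from the accident investigation.

```python
# Minimal sketch: how reaction time translates into distance traveled
# before any braking begins. Speed and reaction times are illustrative
# assumptions, not data from the accident report.

MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def distance_during_reaction(speed_mph: float, reaction_time_s: float) -> float:
    """Meters covered while the driver is still reacting (no braking yet)."""
    return speed_mph * MPH_TO_MPS * reaction_time_s

for reaction_ms in (250, 347, 500, 1500):  # 347 ms is the author's average
    meters = distance_during_reaction(40.0, reaction_ms / 1000.0)
    print(f"At 40 mph, a {reaction_ms} ms reaction covers {meters:.1f} m")
```

At 40 mph a car covers roughly 18 meters every second, so even a 347-millisecond reaction means about 6 meters traveled before the driver so much as touches the brake.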

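The “double weakness” warning in point 6 can also be made concrete. The sketch below contrasts the naive redundancy math, which assumes the automation and the human fail independently, with what happens when over-trust couples their failures. All probabilities are purely hypothetical assumptions for illustration.

```python
# Minimal sketch of the "double weakness" danger in redundant barriers:
# the naive math multiplies failure probabilities, which is only valid
# if the barriers fail independently. All numbers are hypothetical.

p_automation_misses = 0.01  # hypothetical: automation fails to detect danger
p_human_misses = 0.10       # hypothetical: an attentive human fails to recover

# Naive redundancy math assumes independence:
p_independent = p_automation_misses * p_human_misses

# Over-trust couples the failures: when the automation misses, the human
# supervisor is often disengaged, so the conditional miss rate is higher.
p_human_misses_given_overtrust = 0.50  # hypothetical coupled rate
p_coupled = p_automation_misses * p_human_misses_given_overtrust

print(f"Assuming independent barriers: {p_independent:.4f}")
print(f"Coupled by over-trust:         {p_coupled:.4f} ({p_coupled / p_independent:.0f}x worse)")
```

The point is not the specific numbers but the structure: two weak, coupled barriers give far less protection than the redundancy diagram suggests.
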
Let’s not forget that humans will make mistakes and break the rules, especially the ones that justify the invention. The safety driver’s own safety was at stake too; the driver is also a victim.

So, no. AI is not smart enough, at least not yet. So back to the drawing board: learn, and come back when you graduate. We can wait.