The Future of True AutoPilot
With companies such as Tesla and Google developing self-driving cars intended to handle the road without user input, people often wonder why no company has made mainstream news for autonomously flying commercial planes.
Surely it must be easier to autonomously fly a plane in the sky, with fewer obstacles? Doesn't autopilot do most of this already? It may seem relatively straightforward to program a plane to take off, fly to a destination, and land without any user input while accounting for variables such as changing weather, wind speed, and mechanical malfunctions.
However, robotic systems and autonomous software are only capable of doing what they are programmed to do.
As depicted in the hit 2016 movie Sully starring Tom Hanks, on January 15, 2009, US Airways Flight 1549 was struck by a flock of geese, causing it to lose power in both engines about 3,000 feet above New York City.
With only seconds to react and avoid catastrophe, Captain Chesley "Sully" Sullenberger decided he had no other option and landed the Airbus A320 on the Hudson River off Midtown Manhattan.
This quickly became known as “The Miracle on the Hudson” as all 155 passengers and crew survived the crash landing.
However, simulations carried out by the National Transportation Safety Board (NTSB) found that in the majority of these scenarios the plane would have had time to turn around and land safely at one of several airports on the outskirts of the city.
The NTSB initially wanted to ban Sullenberger from flying as a result of these simulations.
Nevertheless, after continuing their investigation, they concluded that the pilots in those simulations only reached nearby airports because they knew in advance exactly what to do the moment the geese struck. In every simulation where the pilots first had to take time to follow emergency procedures and decide what to do, the plane crashed before reaching the airport, and everyone aboard would have died.
While an autonomous plane could have run millions of calculations during the emergency, it most likely would have crashed before reaching an airport, resulting in hundreds of deaths.
When Capt. Sullenberger was asked how he knew he could not reach an airport safely and chose to land on the Hudson instead, he credited the experience gained from hundreds of hours behind the controls of an airplane.
Though an autonomous system can perform calculations instantly, it cannot develop this "instinct," and it cannot do something it was not programmed to do.
Unless the system's developers had happened to add code telling the airplane to land in a river running through the most populated American city, it would have attempted to make it back to an airport, which again would have meant death for everyone aboard.
Even if the system were designed to hand control to on-call emergency pilots capable of fully controlling the plane remotely, by the time those pilots were alerted, understood the situation, and reacted, it would have been too late.
Until we can create a fully sentient AI system, which would probably raise even more problems, we will always need someone at the controls of an aircraft. A sentient AI could analyze the situation and calculate possible outcomes faster than any human, then choose the best solution and save everyone aboard.
However, while there currently exist highly advanced artificial intelligence systems capable of astonishing things, they all only simulate a "true" AI.
Because these AI systems cannot "think" for themselves, one way to bridge this gap would be to combine them with machine learning. By ingesting and synthesizing data from successfully resolved emergencies and their solutions, we could potentially teach such a system to respond to unexpected problems.
However, this does not mean the plane would have concluded that the best course of action was to crash-land in the Hudson River. It could also produce false positives, where the system assumes that a solution that worked in a past situation will work in the current one.
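To make the false-positive risk concrete, here is a purely illustrative toy sketch of the idea: a system that matches a new emergency to the most similar past, successfully resolved case and recommends that case's action. Every scenario, feature, and action label below is hypothetical, invented for illustration, not drawn from any real aviation system.

```python
# Toy "emergency advisor": recommend the action from the most similar
# past resolved case. All data here is hypothetical and illustrative.
import math

# Each past case: (altitude_ft, airspeed_kts, engines_working) -> action taken
PAST_CASES = [
    ((30000, 450, 1), "divert to nearest airport"),
    ((10000, 250, 2), "return to departure airport"),
    ((5000, 210, 1), "divert to nearest airport"),
]

def nearest_action(situation):
    """Return the action from the past case closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(PAST_CASES, key=lambda case: dist(case[0], situation))
    return best[1]

# A novel emergency: low altitude, both engines out (roughly Flight 1549).
# The system confidently recommends a past solution even though, in this
# unprecedented situation, no airport may actually be reachable.
print(nearest_action((2800, 220, 0)))  # → "divert to nearest airport"
```

The match looks reasonable by the numbers, yet the recommended action is exactly the one the NTSB simulations showed would have ended in a crash: the past data contained nothing like a dual engine failure at low altitude over a city.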
In conclusion, while current AI systems may be able to take off, fly, and land a plane, they are nowhere near replacing pilots, because they cannot respond to emergencies in ways they were not programmed to.
The only way to solve this problem would be to develop a true, sentient artificial intelligence capable of out-of-the-box solutions. However, if that technology is even possible, we are years away from developing it, and it may create more problems for humanity than it solves.
Written by Thomas Braun, Edited by Shaw Rhinelander, Benjamin Stick & Alexander Fleiss