Background
The context for this post is the Twitter thread started by @MFordFuture here: https://twitter.com/MFordFuture/status/585345512783478784
Contingency Management
"Would robots be able to react to unplanned incidents".The plan of a commercial airline includes a predetermined route from an origin to a destination. While this may be the entirety of the journey in normal circumstances, it does not represent everything a flight may need to deal with. There exists a range of scenarios, which are not planned, but which represent known failure modes that a flight may have to deal with. These are known as contingencies.
There may never be a plan, for example, for the airplane to lose communication with air traffic control, but this is a contingency that autonomous aircraft software would need to handle. The proper response to this contingency would be discussed by engineers, pilots, and other specialists, during the design of the autonomous system software. While the response may never need to be put into action, the logic for handling it will be coded into the software, to be invoked on demand.
Aircraft electronics (avionics) systems include health monitoring functions. In a fully-piloted environment, avionics failures are displayed to pilots so they can take appropriate action based on their training. The autonomous software would also have all of these health monitoring inputs available, so that contingencies can be identified and handled as planned by the system's designers.
Perhaps no machine's failures are studied as carefully as those of commercial aircraft. Designers have a large database of failure history from which to identify all previously experienced contingencies, plus some that have surely not yet been seen in practice.
The measurements that indicate the beginning of these contingencies can be coded into the system software along with appropriate aircraft responses. These responses can then be tested in flight simulators again and again, until engineers have achieved a system response that meets the flight safety requirements.
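As a rough sketch of how such trigger conditions and responses might be represented in software (the sensor names, threshold values, and responses below are invented for illustration, not taken from any real avionics system):

```python
# Hypothetical sketch: map monitored measurements to pre-planned responses.
# All sensor names, thresholds, and responses are made up for illustration.

def handle_contingencies(sensors):
    """Check monitored values against trigger conditions and return a planned response."""
    if not sensors["comm_link_ok"]:
        return "retry backup radios, then divert to nearest airport"
    if sensors["cabin_pressure_psi"] < 10.0:
        return "descend to 10,000 ft"
    if sensors["engine_1_oil_temp_c"] > 140.0:
        return "reduce engine 1 thrust and monitor"
    return "continue flight plan"

# The same checks can be exercised against simulator or recorded flight data.
print(handle_contingencies({"comm_link_ok": False,
                            "cabin_pressure_psi": 12.1,
                            "engine_1_oil_temp_c": 95.0}))
```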
Artificial Intelligence
"Can if-then-else statements handle all possible scenarios"The
if-then-else
construct in programming represents a straightforward response to a particular test condition. If the condition is met, one response is chosen. If the condition is not met, another response is chosen. This is a basic logic flow present in all software systems, but does not qualify as "Artificial Intelligence". For most contingencies, this type of simple logic will work well. For example, system engineers may decide that if a loss of communications is detected, the appropriate response is to retry redundant communications channels, and if that fails, begin descending for an unplanned landing at the nearest airport. This logic is simple and easily represented in software.There may be more complicated decision making processes for which this is not sufficient. This is where Artificial Intelligence (AI) is useful. AI includes many different techniques. Fuzzy Logic is one type of algorithm that autonomous systems may use when clear responses do not exist. For example, the aircraft may be responding to bad weather. Weather information may not be precise, and the chance of the flight path intersecting a particular storm may involve a probability (0 - 100%). In addition, there may be a sensor failure on the aircraft, which leaves the plane without an important piece of information (e.g. air pressure). Fuzzy Logic can take the imperfect data available, and calculate multiple responses for the aircraft, which can be estimated to have different probabilities of success. In fact, this general approach is how humans might respond in a traditional piloted environment.
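As a loose illustration of that idea, the sketch below scores a few candidate responses by an estimated probability of success under uncertain inputs. Real fuzzy-logic systems use membership functions and rule bases; this only shows the select-among-scored-options behavior, and every route, probability, and penalty here is invented:

```python
# Hypothetical sketch: choose among candidate responses when the data is imperfect.
# All routes, probabilities, and weights are invented for illustration.

p_storm_on_current_path = 0.65   # uncertain weather data: chance the path hits the storm
pressure_sensor_ok = False       # a sensor failure leaves a gap in the data

# Estimated probability of success for each candidate response
candidates = {
    "hold current path": 1.0 - p_storm_on_current_path,
    "deviate 20 nm north": 0.85,            # longer route, lower storm risk
    "divert to alternate airport": 0.95,    # safest, most disruptive
}

# Penalize options that depend more heavily on the failed sensor
if not pressure_sensor_ok:
    candidates["hold current path"] *= 0.8

# Select the response with the highest estimated probability of success
best = max(candidates, key=candidates.get)
print("selected response:", best, "| score:", round(candidates[best], 2))
```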
The advantage the autonomous software has is that it can perform statistical calculations in a fraction of the time a human requires, and select responses with higher probabilities of success than a human, who is essentially attempting to solve a difficult math problem in their head.
There are other AI algorithms that can mimic the way human brains learn. For the most part, humans learn through trial and error. Practicing a given response and observing the results tells us how best to solve similar problems next time. In many situations, we do not understand the reasons for a particular response's success, only that we've observed it to succeed in the past. One way a computer can mimic this is by use of a Neural Network algorithm. In the most basic sense, a neural network algorithm is a set of equations. Multiple inputs are entered into variables in the equations, and the outputs are the results. The relationship between the inputs and outputs is a set of numbers. In math terms, the inputs and outputs are each a 1-D array, and the relationship between them is a matrix of numbers: inputs times the matrix equals the outputs. In traditional physics, we have laws to tell us the relationship between inputs and outputs (e.g. F = m*a). In a neural network, we have to experiment to find the matrix. We do this by training the algorithm, experimenting with different values to find which matrix values give the best outputs (results) for given inputs.

In a real-world example, suppose we want the aircraft to be able to monitor its own weather radar. The inputs would be the many blips on the radar map. We want the output to be the identification of storm clouds, or perhaps flocks of birds. Radar is a very imprecise sensor system. There's also no clear rule for what a flock of birds looks like: they come in different sizes and move at different speeds, at different altitudes. However, we can use saved radar data from existing flights, together with manual identification of bird flocks, to train this algorithm. If we know the inputs (radar data) and we know the outputs (cloud, birds, noise), then we can solve the equation to find the matrix of numbers that relates the inputs to the outputs. In the future, we can then use the matrix in real time when we have radar data, but do not yet know whether the plane is approaching a flock of birds that needs avoiding.
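To make the "inputs times a matrix equals the outputs" description concrete, here is a minimal sketch in the same spirit. It uses a purely linear map fit by least squares, whereas real neural networks add multiple layers and nonlinear functions, and the radar "features" and labels below are made-up numbers rather than real data:

```python
# Minimal sketch of training a matrix that maps radar features to labels.
# Feature values and labels are invented; a real system would use recorded
# radar returns that analysts have labeled by hand.
import numpy as np

# Each row: [blip size, speed, altitude] from saved radar data (made-up values)
inputs = np.array([
    [0.90,  5.0, 9000.0],   # large, slow, high     -> storm cloud
    [0.20, 40.0,  300.0],   # small, fast, low      -> bird flock
    [0.10,  0.5,  100.0],   # tiny, nearly static   -> noise
    [0.80,  6.0, 8000.0],
    [0.30, 35.0,  500.0],
    [0.05,  0.2,   50.0],
])

# Each row: one-hot label [cloud, birds, noise] assigned manually for training
outputs = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
])

# "Training": solve for the matrix that best maps inputs to outputs
# (least-squares solution of inputs @ matrix ~ outputs)
matrix, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)

# In flight: classify a new, unlabeled radar blip using the learned matrix
new_blip = np.array([0.25, 38.0, 400.0])
scores = new_blip @ matrix
labels = ["cloud", "birds", "noise"]
print("best guess:", labels[int(np.argmax(scores))])
```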
These are a couple of examples that do not fit simple if-then-else logic models, but can still be well-handled by autonomous systems via AI.

Human Design - Computer Implementation
"Think and Act is better than Programmed"The first key to understand is that any programming an autonomous system might have is simply the coding of the thinking and action of a group of human beings who designed the system. In aircraft design, that would include a team of specialized software developers, mathematicians, aeronautical engineers, pilots and systems engineers. These people look at all the contingencies that the aircraft will have to manage and draw up logical plans, or perhaps use AI, to handle the events. They have the benefit of nearly all the inputs a pilot would have, as well as historical data from previous flights, ample computing resources to simulate alternate responses, and the time to think clearly.
A pilot can have all the training in the world, but when an emergency occurs, the human brain often fails to perform well, due to sensory overload and stress. This may be impossible to predict with a given pilot until a real emergency is encountered. Computers don't get tired or feel stress or fear. They can calmly execute the plan that their human designers laid out for them ahead of time.
Human Pilot Limitations
- Fatigue: humans get tired, especially on long flights, and during segments of flight without much planned activity on the pilot's part.
- Sensory Overload: presented with too many controls to simultaneously observe, humans can lose the ability to track important information.
- Limited training / experience: airlines do an excellent job of vetting pilots, but pilots are human and must accumulate knowledge through practice over time. An autonomous system can be programmed with the accumulated "experience/knowledge" of millions of flights before it, making it an "experienced" flight controller from the first day it's put into service.
- Poor math skills: when responses require making calculations about the probability of success of various options, or the proper value of a control input (e.g. how much to move the stick, or how quickly to turn), humans will never be able to perform calculations as quickly as computers.
Case Studies
http://en.wikipedia.org/wiki/US_Airways_Flight_1549
In US Airways Flight 1549, the airplane's engines suffered a bird strike and the pilots were forced to perform a water landing. The pilots were hailed as heroes, but in reality the accident should be viewed as a failure, and one that would have been better avoided in the first place.
The flock of birds that hit the aircraft was actually registered on radar systems; however, the radar display was set to "filter" out certain data. This is a classic problem in user interface design for aircraft (or air traffic control) systems. Humans don't do well when overwhelmed by lots of data. A tradeoff is therefore made by software developers to give the operators what they think is the right amount of data: not so much as to clutter their display and overload them, but not so little as to miss important information. Unfortunately, this tradeoff is impossible to make perfectly.
Were autonomous software "monitoring" radar data, there would be no reason to filter out weaker signals. A modern computer can easily process all data available from the radar, regardless of how much "clutter" that data would produce if plotted on a graphical display. Recognizing the bird flock ahead of time, an autonomous system could have altered the flight path slightly to avoid the accident in the first place.
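As a rough illustration of that tradeoff (the threshold, signal strengths, and labels below are invented), a display filter might hide weak returns to keep the screen readable, while autonomous software can simply examine every return:

```python
# Hypothetical sketch: a clutter filter for the human display vs. full processing.
# All values and labels are invented for illustration.

radar_returns = [
    {"strength": 0.90, "kind": "storm cell"},
    {"strength": 0.40, "kind": "light precipitation"},
    {"strength": 0.15, "kind": "bird flock"},   # weak return, easy to filter out
    {"strength": 0.05, "kind": "noise"},
]

DISPLAY_THRESHOLD = 0.3  # chosen to keep the pilot's display uncluttered

shown_to_pilot = [r["kind"] for r in radar_returns if r["strength"] >= DISPLAY_THRESHOLD]
seen_by_software = [r["kind"] for r in radar_returns]  # no reason to discard anything

print("pilot display:     ", shown_to_pilot)
print("autonomous system: ", seen_by_software)
```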
http://en.wikipedia.org/wiki/Air_France_Flight_447
In Air France Flight 447, a co-pilot reacted to turbulence by excessively pulling back on the control stick/yoke. After entering a stalled condition, the pilot responded with additional "pull up" maneuvers, which is exactly what not to do when an aircraft stalls. This is one of the most basic lessons a new pilot learns (pitch down when the aircraft stalls). It's difficult to understand how this pilot could make such an extreme error. It's possible that he simply panicked. It's also possible that he misinterpreted the warning sounds from the aircraft. Again, aircraft designers have to produce user interfaces that they believe will best communicate information to pilots. In this case, the plane has a stall warning alarm (sound). When the plane stalls, the alarm sounds. However, when the angle-of-attack of the aircraft goes far outside its normal range, designers chose to turn off the alarm sound, perhaps fearing the possibility that the angle-of-attack sensor data was incorrect.
Because of this alarm design, the pilot actually heard the alarm sound when he pitched down, as the angle-of-attack came back into a "normal" range. Pitching back up sent the angle-of-attack so high that the alarm turned off again. This was likely counterintuitive to the pilot: you would normally expect the alarm to turn off when pitching down, and to turn on when pitching up.
This is an example of miscommunication of information between machine and human operator. Were the system controlled autonomously, the software could be given the information that the alarm was being turned off because of excessively high angle-of-attack, and not because the stall was over. This distinction is crucial. Unfortunately, in trying to simplify the alarm for humans, a critical piece of information was lost. The pilot's error started the oscillations, and his likely misreading of the alarm made the problem worse, leading to a crash.
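To illustrate the distinction (the thresholds and behavior below are simplified and invented, loosely modeled on the description above rather than on the actual aircraft logic), the raw angle-of-attack value carries information that the on/off alarm throws away:

```python
# Hypothetical sketch: the simplified alarm the pilot hears vs. the raw
# angle-of-attack (AoA) the software could use. Thresholds are invented.

STALL_AOA_DEG = 12.0      # alarm normally sounds above this
AOA_VALID_MAX_DEG = 40.0  # above this, the alarm was suppressed as unreliable

def stall_alarm_sounds(aoa_deg):
    """On during a stall, off otherwise -- but also off when AoA is so extreme
    that the reading is treated as invalid."""
    return STALL_AOA_DEG < aoa_deg <= AOA_VALID_MAX_DEG

def autonomous_assessment(aoa_deg):
    """Keep the distinction the simplified alarm throws away."""
    if aoa_deg > AOA_VALID_MAX_DEG:
        return "extreme stall / reading out of range: pitch down"
    if aoa_deg > STALL_AOA_DEG:
        return "stalled: pitch down"
    return "normal flight"

# Pitching up far enough silences the alarm even though the stall is worse:
for aoa in (8.0, 20.0, 45.0):
    print(aoa, "alarm:", stall_alarm_sounds(aoa), "|", autonomous_assessment(aoa))
```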
"Hacking"
More to come ...

Comparisons
While autonomous airliners will largely be the product of two highly-competitive (and secretive) players in the commercial aircraft industry, and military implementations are not generally described publicly, we can look to the auto industry for similarities. In many ways, the problem of self-driving cars is like unmanned flight; in some ways, it's more complicated, due to the constant need for collision avoidance and unplanned maneuvering.

http://www.consumerreports.org/cro/magazine/2014/04/the-road-to-self-driving-cars/index.htm
http://www.wired.com/2015/03/the-economic-impact-of-autonomous-vehicles/