Presentation to ISAP 2017

Kathy Fox
Chair, Transportation Safety Board of Canada
Dayton, Ohio
May 10, 2017

Check against delivery.

Slide 1: Title slide

Good morning.

Slide 2: Outline

Slide 3: About the TSB

Slide 4: 2 key questions

Today's panelists were asked to answer two questions:


Well, the first question is a tough one for the TSB to answer, because the nature of our work means we don't get to witness “everyday operations.” We investigate accidents, which means we come in after things have gone wrong. Our data set therefore shows a very limited slice of abnormal operations, and our investigations are in-depth looks at “really bad day operations.”

To see everyday operations, one must be much closer to the operation than we are. However, our investigations give us some good indications for the second question: what still needs to be fixed?

In my opinion, the biggest issue that the Human Factors community (and aviation psychology) needs to address is: how to improve the uptake of Human Factors methods and knowledge in the design and monitoring of the aviation system.

Let's look at some examples from recent TSB investigations.

Slide 5: A13H0002: M'Clure Strait, Northwest Territories

On 09 September 2013, a helicopter took off from the Canadian Coast Guard Ship (CCGS) Amundsen with 1 pilot, the vessel's master, and a scientist on board for a combined low-level ice measurement and reconnaissance mission in the M'Clure Strait, Northwest Territories.

Seventeen minutes after the helicopter failed to arrive back at the ship as expected, its position was checked on the flight-following system, which was displaying the helicopter's position as 3.2 nautical miles from the vessel. The CCGS Amundsen's crew attempted to communicate with the pilot by radio, without success. The vessel then proceeded toward the helicopter's last displayed position and quickly spotted debris. The 3 occupants were recovered using the vessel's fast rescue craft; none of them survived.

Slide 6: A13H0002: Flight following system

The helicopter likely crashed because of spatial disorientation or distraction during the low-level flight. However, the investigation also looked at survivability issues. Although the vessel's new flight-following system, shown here on screen, displayed a continuous digital readout of the helicopter's position, expressed in degrees of latitude and longitude, the TSB's subsequent investigation found that…

Slide 7: A13H0002: Findings as to cause

Slide 8: A13H0002: Associated risk

Slide 9: A13A0075: The challenges of automation

TSB investigations have also found automation that has been introduced without being incorporated into Standard Operating Procedures. Here's an example:

On 03 July 2013, a Bombardier CL-415 amphibious aircraft touched down on Moosehead Lake, Newfoundland and Labrador, to scoop a load of water as part of efforts to fight a nearby forest fire. During the scooping run, the aircraft took on too much water, because the switch controlling the scooping probes was in the wrong position and the flight crew did not notice. The extra seconds on the water placed the aircraft closer to the shore than desired, and the pilot flying elected to turn the aircraft to the left to allow for a longer departure path. During the turn, the left float contacted the water while the hull became airborne. The resulting forces caused the aircraft to water-loop, and it came to rest upright but partially submerged. There were no injuries to the 2 crew members, but the aircraft was destroyed.

Slide 10: A13A0075 toggle switch

Here's some of what our investigation revealed:

Slide 11: A13A0075: Cause and risk factors


Slide 12: A14F0065: Unstable approach

TSB investigations have also found examples where crews used aspects of automation only infrequently—or in a mode crews were not practiced in using.

Here's one last example:

On 10 May 2014, an Airbus A319 departed Toronto Lester B. Pearson International Airport, Toronto, Ontario, under instrument flight rules for Montego Bay, Jamaica, with 131 passengers and 6 crew members on board. The flight crew was cleared for a non-precision approach to Runway 07 in visual meteorological conditions. The approach became unstable—there was excessive airspeed, as well as vertical speed deviations, an incomplete landing checklist, and unstabilized thrust. The aircraft touched down hard, exceeding the design criteria of the landing gear. There was no structural damage to the aircraft, and there were no injuries.

No accident is ever the result of a single action by a person or organization, and we made 10 findings about causes and contributing factors. However, in the interests of time, I'll focus on just one, to illustrate the point of today's presentation: the use of the autothrust.

Slide 13: A14F0065: Finding as to cause.

As I said, the flight was unstable, for various reasons, as it approached the runway. The crew was a bit high and too fast. They recognized this, but owing to a combination of factors, including distraction and confusion about what the automation was doing, they turned off the autothrust and manually reduced the thrust to “slow down and get down.”

With the autothrust off, the flight crew had full manual control of the thrust. So, yes, they successfully reduced thrust and lost enough speed, but as they continued the approach they forgot that the autothrust was still off. Their speed continued to decrease, and they ended up coming in too slow: below the target speed.

This resulted in a hard landing.

The TSB's findings reflected this:

“Air Canada Rouge did not include autothrust-off approach scenarios in each recurrent simulator training module, and flight crews routinely fly with the automation on. As a result, the occurrence flight crew was not fully proficient in autothrust-off approaches, including management of the automation.”

Slide 14: Conclusions

Speak to slide.

Coming back to my earlier point that we don't get to see “everyday operations,” these examples are the tip of the iceberg. Similar instances, in which automation is not well designed around the human user or is not effectively implemented, are easily found in operational contexts. Although Human Factors is not a new field, it has struggled to be systematically considered during system development and system integration activities.

This also ties in well with the TSB's Watchlist, which identifies the key safety issues that need to be addressed to make Canada's transportation system even safer—specifically, the issue of safety management and oversight.

The issues described above existed before these occurrences, and many who have an up close and personal view of “everyday operations” were likely aware of them. As one of the speakers at a safety summit organized by the TSB last year pointed out: “Your next accident is likely already in your data somewhere.”

One cannot expect to anticipate all of the possible issues during the development and introduction of a new system. Therefore, the flow of information and effective system monitoring, through mechanisms such as safety management systems (SMS), are critical to continual improvement.

Slide 15: Questions?

Slide 16: Canada wordmark