Rotorcraft flight information inference from cockpit videos using deep learning

Hikmat Khan, Ghulam Rasool, Nidhal Carla Bouaynaya, Charles C. Johnson

Research output: Contribution to conference › Paper › peer-review


As the premier agency for promoting and ensuring aviation safety, the Federal Aviation Administration (FAA) continues to highlight the importance of participating in aviation Flight Data Monitoring (FDM) programs to improve flight safety and operational efficiency. Indeed, recorder safety was among the agency's top 10 most-wanted safety improvements for 2017-2018. The FAA, the National Transportation Safety Board (NTSB), and the United States Helicopter Safety Team (USHST) are strong proponents of recorder use. These organizations and other industry partners are working together to implement a helicopter safety enhancement that promotes flight data recorders as a mechanism to reduce the helicopter fatal accident rate. However, despite these efforts, barriers to implementing this lifesaving technology remain. These include the initial cost of a flight data recorder, which ranges from $9,000 to $50,000 on average. Such costs can be significant for small operators and prohibit widespread adoption of FDM by the rotorcraft community. As a result, rotorcraft typically have a lower participation rate in FDM programs than other segments of aviation (e.g., commercial fixed-wing or Part 121 airline operations). On the other hand, even small helicopter operators often have access to, or the financial means to purchase, one or more off-the-shelf video cameras that can be mounted inside the cockpit. These cameras offer an alternative to traditional flight data recorders, as well as a means to augment them with supplementary data that is not always available, depending on the type of Flight Data Recorder (FDR) installed in the helicopter. Onboard video data offers several possibilities for improving safety, including flight replay and the ability to extract information from the scene, such as readings of instrument panel gauges.
As part of our research approach, we analyzed video data from cameras recording the instrument panel and compared the extracted values against ground-truth data from the flight data recorder. These values formed the training dataset for our video analytics framework. To analyze this information, we first cropped the gauge of interest (i.e., airspeed indicator, tachometer, or engine oil temperature/pressure gauge) in each frame of every video. The gauge images extracted from all videos were then used to train deep Convolutional Neural Networks (CNNs), with the FDR measurements serving as ground truth. We trained ResNet50 CNN models for the airspeed, engine oil temperature, engine oil pressure, and tachometer gauges; these models obtained 78%, 89%, 89%, and 88% validation accuracy, respectively. To further demonstrate feasibility, we used the trained models to retrieve airspeed and engine oil values over a complete flight profile and observed that the trajectories predicted by our models closely follow the actual sensor values recorded by the FDR. Such a solution provides an effective flight data analysis tool and improves the safety and operational efficiency of rotorcraft operations. These results demonstrate the feasibility of an inexpensive cockpit camera solution that would facilitate participation in FDM programs, even for legacy helicopters that might otherwise require significant installation work.

Original language: English (US)
State: Published - Jan 1 2019
Event: Vertical Flight Society's 75th Annual Forum and Technology Display - Philadelphia, United States
Duration: May 13 2019 - May 16 2019


Conference: Vertical Flight Society's 75th Annual Forum and Technology Display
Country/Territory: United States

All Science Journal Classification (ASJC) codes

  • Aerospace Engineering
  • Control and Systems Engineering


