Deep ensemble for rotorcraft attitude prediction

Hikmat Khan, Ghulam Rasool, Nidhal Carla Bouaynaya, Tyler Travis, Lacey Thompson, Charles C. Johnson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Historically, the rotorcraft community has experienced a higher fatal accident rate than other aviation segments, including commercial and general aviation. To date, traditional methods applied to reduce incident rates have not proven hugely successful for the rotorcraft community. Recent advancements in artificial intelligence (AI) and the application of these technologies in different areas of our lives are both intriguing and encouraging. When developed appropriately for the aviation domain, AI techniques may provide an opportunity to help design systems that can address rotorcraft safety challenges. Our recent work demonstrated that AI algorithms could use video data from onboard cameras and correctly identify different flight parameters from cockpit gauges, e.g., indicated airspeed. These AI-based techniques provide a potentially cost-effective solution, especially for small helicopter operators, to record flight state information and perform post-flight analyses. We also showed that carefully designed and trained AI systems can accurately predict rotorcraft attitude (i.e., pitch and yaw) from outside scenes (images or video data). Ordinary off-the-shelf video cameras were installed inside the rotorcraft cockpit to record the outside scene, including the horizon. The AI algorithm was able to correctly identify rotorcraft attitude with an accuracy of approximately 80%. In this work, we combined five different onboard camera viewpoints to improve attitude prediction accuracy to 94%. Our current approach, referred to as ensembled prediction, significantly increased the reliability of the predicted attitude (i.e., pitch and yaw). For example, in some camera views, the horizon may be obstructed or not visible. The proposed ensemble method can combine visual details recorded from other cameras and predict the attitude with high reliability.
In our setup, the five onboard camera views included the pilot windshield, co-pilot windshield, pilot Electronic Flight Instrument System (EFIS) display, co-pilot EFIS display, and the attitude indicator gauge. Using video data from each camera view, we trained a variety of convolutional neural networks (CNNs), which achieved prediction accuracies in the range of 79% to 90%. We subsequently ensembled the learned knowledge from all CNNs and achieved an ensembled accuracy of 93.3%. Our efforts could provide a cost-effective means to supplement traditional Flight Data Recorders (FDR), a technology that to date has been challenging to incorporate into the fleets of most rotorcraft operators due to cost and resource constraints. Such cost-effective solutions can gradually increase the rotorcraft community's participation in various safety programs, enhancing safety and opening up helicopter flight data monitoring (HFDM) to historically underrepresented segments of the vertical flight community.
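One common way to realize the ensembling described above is to average the per-view CNNs' class probabilities and take the most likely class, so that a view with an obstructed horizon is outvoted by more confident views. The following is a minimal sketch of that idea, not the paper's implementation; the function name, class count, and probability values are all illustrative assumptions.

```python
import numpy as np

def ensemble_predict(view_probs):
    """Average softmax probabilities from several camera-view models
    and return the most likely class plus the averaged distribution.

    view_probs: list of 1-D arrays, one per camera view, each summing to 1.
    """
    avg = np.mean(np.stack(view_probs), axis=0)  # uniform average over views
    return int(np.argmax(avg)), avg

# Hypothetical softmax outputs from three view models for a 3-class
# attitude bin (e.g., nose-down, level, nose-up) on one video frame.
view_a = np.array([0.2, 0.7, 0.1])   # horizon visible: confident "level"
view_b = np.array([0.3, 0.4, 0.3])   # horizon obstructed: uncertain
view_c = np.array([0.1, 0.8, 0.1])   # gauge view: confident "level"

label, probs = ensemble_predict([view_a, view_b, view_c])
print(label)  # 1 ("level"): the confident views dominate the average
```

A simple uniform average already smooths out single-view failures; weighted averaging (e.g., by each view's validation accuracy) is a natural refinement of the same scheme.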

Original language: English (US)
Title of host publication: 77th Annual Vertical Flight Society Forum and Technology Display, FORUM 2021
Subtitle of host publication: The Future of Vertical Flight
Publisher: Vertical Flight Society
ISBN (Electronic): 9781713830016
State: Published - 2021
Event: 77th Annual Vertical Flight Society Forum and Technology Display: The Future of Vertical Flight, FORUM 2021 - Virtual, Online
Duration: May 10 2021 - May 14 2021

Publication series

Name: 77th Annual Vertical Flight Society Forum and Technology Display, FORUM 2021: The Future of Vertical Flight


Conference: 77th Annual Vertical Flight Society Forum and Technology Display: The Future of Vertical Flight, FORUM 2021
City: Virtual, Online

All Science Journal Classification (ASJC) codes

  • Aerospace Engineering
  • Control and Systems Engineering


