TY - GEN
T1 - Atmospheric Visibility Image-Based System for Instrument Meteorological Conditions Estimation
T2 - 9th International Conference on Wireless Networks and Mobile Communications, WINCOM 2022
AU - Bouhsine, Taha
AU - Idbraim, Soufiane
AU - Bouaynaya, Nidhal Carla
AU - Alfergani, Husam
AU - Ouadil, Kabira Ait
AU - Johnson, Charles Cliff
N1 - Funding Information:
This work was supported by the Federal Aviation Administration (FAA), which provided access to the databases and resources used in this project.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Visibility degradation causes many road accidents and air crashes around the globe; it is driven by multiple key factors that hinder the observer from having a clear view of what lies ahead, and it plays a crucial role in aviation safety. Consequently, researchers and engineers have created many tools to measure visibility or to restore images suffering from such degradation. This paper compares different image-based deep learning architectures for atmospheric visibility range classification. A Vision Transformer trained from scratch and three pre-trained Convolutional Neural Network (CNN) models were examined and contrasted in terms of accuracy, each obtaining a validation accuracy of more than 95%. Our experiments show that models trained on the Federal Aviation Administration (FAA) initial dataset can be used to build an efficient tool to aid flight control for long-range visibility estimation. Results showed that DenseNet121 was the best-performing model, with 99% training accuracy, 98% validation accuracy, and early convergence, while the Vision Transformer kept improving but still trailed DenseNet121 slightly by the end of the experiment.
AB - Visibility degradation causes many road accidents and air crashes around the globe; it is driven by multiple key factors that hinder the observer from having a clear view of what lies ahead, and it plays a crucial role in aviation safety. Consequently, researchers and engineers have created many tools to measure visibility or to restore images suffering from such degradation. This paper compares different image-based deep learning architectures for atmospheric visibility range classification. A Vision Transformer trained from scratch and three pre-trained Convolutional Neural Network (CNN) models were examined and contrasted in terms of accuracy, each obtaining a validation accuracy of more than 95%. Our experiments show that models trained on the Federal Aviation Administration (FAA) initial dataset can be used to build an efficient tool to aid flight control for long-range visibility estimation. Results showed that DenseNet121 was the best-performing model, with 99% training accuracy, 98% validation accuracy, and early convergence, while the Vision Transformer kept improving but still trailed DenseNet121 slightly by the end of the experiment.
UR - http://www.scopus.com/inward/record.url?scp=85144630022&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85144630022&partnerID=8YFLogxK
U2 - 10.1109/WINCOM55661.2022.9966454
DO - 10.1109/WINCOM55661.2022.9966454
M3 - Conference contribution
AN - SCOPUS:85144630022
T3 - Proceedings - 2022 9th International Conference on Wireless Networks and Mobile Communications, WINCOM 2022
BT - Proceedings - 2022 9th International Conference on Wireless Networks and Mobile Communications, WINCOM 2022
A2 - El Maliani, Ahmed Drissi
A2 - El Kamili, Mohamed
A2 - Foschini, Luca
A2 - Kobbane, Abdellatif
A2 - Han, Shuai
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 26 October 2022 through 29 October 2022
ER -