TY - GEN
T1 - DyViR
T2 - Synthetic Data for Artificial Intelligence and Machine Learning: Tools, Techniques, and Applications 2023
AU - Williams, Garrett
AU - Lecakes, George D.
AU - Almon, Amanda
AU - Koutsoubis, Nikolas
AU - Naddeo, Kyle
AU - Kiel, Thomas
AU - Ditzler, Gregory
AU - Bouaynaya, Nidhal C.
N1 - Publisher Copyright:
© 2023 SPIE.
PY - 2023
Y1 - 2023
N2 - Unmanned combat aerial vehicles (i.e., drones) are changing the modern geopolitical stage’s surveillance, security, and conflict landscape. Various technologies and solutions can help track drones; each technology has different advantages and limitations concerning drone size and detection range. Machine learning (ML) can automatically detect and track drones in real time while surpassing human-level accuracy and providing enhanced situational awareness. Unfortunately, ML’s power depends on the quality and quantity of its data. In the drone detection task, existing datasets offer limited variation in environment, view angle, view distance, and drone type. We developed a customizable software tool called DyViR that generates large synthetic video datasets for training machine learning algorithms in aerial threat object detection. These datasets contain video and audio renderings of aerial objects within user-specified dynamic simulated biomes (i.e., arctic, desert, and forest). Users can alter the environment on a timeline, allowing changes to behaviors such as drone flight patterns and weather conditions across a synthetically generated dataset. DyViR supports additional controls such as motion blur, anti-aliasing, and fully dynamic moving cameras to produce imagery across multiple viewing angles. Each aerial object’s classification (drone or airplane) and bounding box data are automatically exported to a comma-separated value (CSV) file alongside a video to form a synthetic dataset. We demonstrate the value of DyViR by training a real-time YOLOv7-tiny model on these synthetic datasets. The performance of the object detection model improved by 60.4% over its counterpart trained without DyViR. This result suggests a use case for synthetic datasets in overcoming the lack of real-world training data for aerial threat object detection.
AB - Unmanned combat aerial vehicles (i.e., drones) are changing the modern geopolitical stage’s surveillance, security, and conflict landscape. Various technologies and solutions can help track drones; each technology has different advantages and limitations concerning drone size and detection range. Machine learning (ML) can automatically detect and track drones in real time while surpassing human-level accuracy and providing enhanced situational awareness. Unfortunately, ML’s power depends on the quality and quantity of its data. In the drone detection task, existing datasets offer limited variation in environment, view angle, view distance, and drone type. We developed a customizable software tool called DyViR that generates large synthetic video datasets for training machine learning algorithms in aerial threat object detection. These datasets contain video and audio renderings of aerial objects within user-specified dynamic simulated biomes (i.e., arctic, desert, and forest). Users can alter the environment on a timeline, allowing changes to behaviors such as drone flight patterns and weather conditions across a synthetically generated dataset. DyViR supports additional controls such as motion blur, anti-aliasing, and fully dynamic moving cameras to produce imagery across multiple viewing angles. Each aerial object’s classification (drone or airplane) and bounding box data are automatically exported to a comma-separated value (CSV) file alongside a video to form a synthetic dataset. We demonstrate the value of DyViR by training a real-time YOLOv7-tiny model on these synthetic datasets. The performance of the object detection model improved by 60.4% over its counterpart trained without DyViR. This result suggests a use case for synthetic datasets in overcoming the lack of real-world training data for aerial threat object detection.
UR - http://www.scopus.com/inward/record.url?scp=85171179612&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85171179612&partnerID=8YFLogxK
U2 - 10.1117/12.2663417
DO - 10.1117/12.2663417
M3 - Conference contribution
AN - SCOPUS:85171179612
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Synthetic Data for Artificial Intelligence and Machine Learning
A2 - Howell, Christopher L.
A2 - Manser, Kimberly E.
A2 - Rao, Raghuveer M.
PB - SPIE
Y2 - 1 May 2023 through 3 May 2023
ER -