Robust Navigation for Racing Drones based on Imitation Learning and Modularization
Tianqi Wang, Dong Eui Chang
This paper presents a vision-based modularized drone racing navigation system that uses a customized convolutional neural network (CNN) for the perception module to produce high-level navigation commands and then leverages a state-of-the-art planner and controller to generate low-level control commands, thus exploiting the advantages of both data-based and model-based approaches. Unlike the state-of-the-art method, which takes only the current camera image as the CNN input, we further add the latest three estimated drone states as part of the inputs. Our method outperforms the state-of-the-art method in various track layouts and offers two switchable navigation behaviors with a single trained network. The CNN-based perception module is trained to imitate an expert policy that automatically generates ground-truth navigation commands based on pre-computed global trajectories. Owing to the extensive randomization and our modified dataset aggregation (DAgger) policy during data collection, our navigation system, which is trained purely in simulation with synthetic textures, successfully operates in environments with randomly chosen photo-realistic textures without further fine-tuning.
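The input-fusion idea stated in the abstract (the current camera image plus the three most recent estimated drone states) can be sketched as below. This is a minimal illustration only: the state dimension, feature size, command size, and function names are assumptions, and a random linear map stands in for the paper's actual CNN backbone.

```python
from collections import deque
import numpy as np

STATE_DIM = 10    # assumed size of one estimated drone state
HISTORY_LEN = 3   # the abstract specifies the latest three states
FEAT_DIM = 128    # assumed size of the image-feature vector

rng = np.random.default_rng(0)

# Illustrative stand-in for the convolutional backbone:
# maps a 64x64 image to a fixed-length feature vector.
W_img = rng.standard_normal((FEAT_DIM, 64 * 64))

def image_features(image: np.ndarray) -> np.ndarray:
    """Placeholder for the CNN perception backbone."""
    return np.tanh(W_img @ image.reshape(-1))

# Fusion head: [image features | 3 stacked states] -> high-level command
# (e.g. a goal direction and a desired speed; 4 outputs assumed here).
W_head = rng.standard_normal((4, FEAT_DIM + HISTORY_LEN * STATE_DIM))

def navigation_command(image: np.ndarray, state_history: deque) -> np.ndarray:
    """Fuse the current image with the latest three state estimates."""
    assert len(state_history) == HISTORY_LEN
    x = np.concatenate([image_features(image), *state_history])
    return W_head @ x

# A fixed-length deque keeps only the most recent three state estimates.
history = deque(maxlen=HISTORY_LEN)
for _ in range(HISTORY_LEN):
    history.append(rng.standard_normal(STATE_DIM))

cmd = navigation_command(rng.standard_normal((64, 64)), history)
print(cmd.shape)  # (4,)
```

The `deque(maxlen=3)` mirrors the sliding window of past state estimates: appending a fourth state silently discards the oldest, so the fusion input always has a fixed size.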