Subject
Supervisors: dr. ir. Bryan Convens (bryan.convens@vub.be) and prof. dr. ir. Adrian Munteanu (adrian.munteanu@vub.be). Bryan will be responsible for weekly technical guidance during the project. To learn more about his robotics research, see Bryan's research videos or publications.

Period: academic year 2025-2026; first- and second-session deadlines for the written/oral defense in May/June and August/September, respectively.

Who: any interested student or team of students with experience in at least one aspect of mechatronics, robotics, or autonomous decision making should email us. Bryan will organize a meeting to discuss which aspects the student could focus on, based on background and interest, and to answer your questions.

Introduction

Autonomous drone racing (ADR) has emerged as a high-impact research benchmark that merges agile robotics, real-time perception, and model- and learning-based control under extreme conditions. In ADR, quadrotors fly through a series of gates in minimum time without human intervention, relying only on onboard sensing and computation. Research in ADR pushes the boundaries of embedded vision, model predictive control, and reinforcement and imitation learning, with drones navigating at speeds over 100 km/h and accelerations exceeding 10 g, amid motion blur, aerodynamic disturbances, and GNSS-denied environments. Beyond racing, ADR capabilities have strong implications for safety-critical tasks such as search-and-rescue, disaster inspection, and counter-drone systems, where high-speed autonomous navigation in cluttered, unknown spaces is key. This thesis will contribute to this evolving field by implementing a perception-to-control pipeline capable of fast, reliable racing using only onboard cameras and inertial sensors on a small racing quadrotor.
Kind of work
Problem Statement

Despite recent advances, fully autonomous drone racing remains an unsolved challenge, especially under the constraints of onboard sensing, onboard computation, and unknown or dynamic environments. Most prior works rely heavily on pre-mapped tracks, external motion-capture systems, or offboard processing. The core problem is to jointly solve real-time visual perception, state estimation, and control onboard a resource-limited drone so that it flies through a racing track accurately, robustly, and fast. The thesis will investigate:

1. How can a drone localize itself and detect gates with minimal drift using only a monocular camera and an IMU?
2. What are the trade-offs between model-based and learning-based control approaches for real-time agile racing, and how can such approaches be combined effectively?
3. Can a learned, coupled perception-control policy (trained in simulation or on real data) outperform traditional pipelines?
Framework of the Thesis
Research and Engineering Objectives

The primary goal is to design and evaluate a fully onboard, end-to-end system for autonomous drone racing. Specific objectives include:

System Design: Review lightweight perception pipelines (e.g., VIO + deep gate detection) for onboard state estimation and landmark (gate) localization. Integrate a state-of-the-art, low-latency onboard controller (e.g., nonlinear MPC or an RL policy) tuned for high-speed gate traversal.

Learning and Generalization: Train or fine-tune gate-detection models and sensorimotor policies using modular and/or end-to-end reinforcement learning in simulation (NVIDIA Isaac Sim / Isaac Lab) and transfer them to hardware using domain adaptation or sim-to-real techniques. Evaluate generalization across different track layouts, lighting conditions, and drone configurations.

Real-World Evaluation: Build a state-of-the-art small quadrotor platform (total mass of approximately 500 grams) using an autopilot suitable for racing (e.g., Betaflight- or PX4-based), including a frame- or event-based monocular camera and sufficient onboard compute. If deemed useful for racing, a lightweight collision-tolerant design using protective caging and/or embodied elasticity could be studied. Benchmark the performance of traditional VIO-MPC approaches against end-to-end learned policies, other prior works, and manual drone piloting, using metrics such as lap time, number of crashes, and perception drift.

References (preliminary)

1. D. Hanover et al., "Autonomous Drone Racing: A Survey," IEEE Transactions on Robotics, vol. 40, pp. 3044-3065, 2024.
2. E. Kaufmann et al., "Deep Drone Racing: Learning Agile Flight in Dynamic Environments," Conference on Robot Learning, 2018.
3. D. Falanga et al., "Dynamic Obstacle Avoidance for Quadrotors with Event Cameras," Science Robotics, vol. 5, no. 45, 2020.
4. A. Loquercio et al., "Learning High-Speed Flight in the Wild," Science Robotics, vol. 7, 2022.
5. R. Penicka et al., "Minimum-Time Quadrotor Trajectory Planning Using Nonlinear Optimization," Robotics: Science and Systems, 2021.
6. E. Kaufmann, L. Bauersfeld, A. Loquercio, et al., "Champion-level drone racing using deep reinforcement learning," Nature, vol. 620, pp. 982-987, 2023. https://doi.org/10.1038/s41586-023-06419-4
7. "Learning Robot Control: From RL to Differential Simulation" (PhD defense of Yunlong Song), video.
8. MAVLab TU Delft won the 2025 A2RLxDCL Drone Championship Grand Challenge (the world championship in ADR), link to post.
Number of Students
1-2