
Physical interaction with the environment poses a significant challenge for controlling Unmanned Aerial Vehicles (UAVs). Unlike free-flight tasks, contact interactions introduce nonlinear and unpredictable dynamics. While Reinforcement Learning (RL) has shown promise for such tasks, it typically relies on extensive simulation to avoid damaging the real system. A more direct approach is to start RL with an initial policy learned through Imitation Learning (IL), which can be derived from expert demonstrations.
This project aims to develop a control policy for aerial physical interaction by leveraging recent advances in IL. The approach builds on sensitivity-based data aggregation, which allows a policy to be learned from a small number of expert demonstrations. The learned policy will then serve as the initial candidate for a model-based RL algorithm, enabling policy learning directly on the real robot. The expected outcome is a robust policy for push-and-slide interaction tasks.
- Literature Review: Study recent advances in Imitation Learning for UAV control, with a focus on closed-loop sensitivity and data aggregation.
- Implementation of Expert Controller: Develop a force/impedance control expert for aerial physical interaction.
- Design of IL Pipeline: Design an Imitation Learning pipeline based on the literature, incorporating data aggregation using closed-loop sensitivity.
- Implementation: Implement the IL pipeline in Python, ensuring compatibility with the telekyb3 framework.
- Validation: Simulate and test the learning strategy in a simulated environment, then deploy it on the real system.
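The data-aggregation step in the IL pipeline above follows the familiar DAgger pattern: roll out the current learner, relabel the visited states with the expert's actions, and retrain on the aggregated dataset. As a minimal illustration of that loop only, here is a toy Python sketch; the double-integrator dynamics, the linear expert gains, and the least-squares learner are all placeholder assumptions, not the project's actual force/impedance controller or sensitivity-based aggregation scheme:

```python
import numpy as np

def expert_policy(state):
    # Hypothetical stand-in for the force/impedance expert controller:
    # a simple linear state-feedback law.
    K = np.array([1.5, 0.4])
    return -K @ state

def rollout(policy, steps=50):
    # Toy double-integrator dynamics as a stand-in for the UAV-contact model.
    states, actions = [], []
    x = np.array([1.0, 0.0])
    for _ in range(steps):
        u = policy(x)
        states.append(x.copy())
        actions.append(u)
        x = x + 0.1 * np.array([x[1], u])  # Euler step, dt = 0.1
    return np.array(states), np.array(actions)

def fit_linear_policy(X, y):
    # Least-squares fit of a linear policy u = w @ x (the "learner").
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda state: w @ state

# DAgger-style loop: start from expert demonstrations, then repeatedly
# roll out the learner, relabel visited states with the expert, and
# retrain on the aggregated dataset.
X, y = rollout(expert_policy)          # initial expert demonstration
policy = fit_linear_policy(X, y)
for _ in range(3):
    visited, _ = rollout(policy)
    expert_labels = np.array([expert_policy(s) for s in visited])
    X = np.vstack([X, visited])
    y = np.concatenate([y, expert_labels])
    policy = fit_linear_policy(X, y)
```

In this linear toy setting the learner recovers the expert's feedback gains exactly; the real pipeline would replace the least-squares fit with the chosen policy class and weight the aggregated samples by closed-loop sensitivity.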
Interested candidates are requested to apply via this form. The position will remain open until a satisfactory candidate is found.
If your application receives positive feedback, you will be contacted; otherwise, you will not hear back.
Applications sent directly by email rather than through the web form will not be considered!