The role designs and implements custom machine learning architectures for vision-based navigation and control in GPS-denied, high-velocity environments such as UAVs. Responsibilities include developing real-time perception using CNNs, transformers, or event-based vision and applying deep reinforcement learning combined with model predictive control to guide autonomous behaviors. The position requires architecting sensor fusion pipelines with IMUs, event cameras, lidar, radar, and visual odometry, and building, training, and deploying models in simulation and on embedded hardware. Strong proficiency in Python and C++, PyTorch or TensorFlow, and NVIDIA GPU toolkits for embedded, real-time deployment is required.
- Design and implement custom machine learning architectures for vision-based navigation and control.
- Develop and optimize real-time perception systems using CNNs, transformers, or event-based vision.
- Apply deep reinforcement learning and MPC to guide autonomous behaviors in dynamic environments.
- Architect robust sensor fusion pipelines with IMUs, event cameras, lidar, radar, and visual odometry.
- Build, train, and evaluate ML models in simulation and deploy them to embedded hardware.
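To make the sensor-fusion responsibility concrete, here is a minimal sketch of the idea behind fusing a high-rate, drifting IMU signal with a slower, drift-free vision estimate: a one-dimensional complementary filter on heading. This is purely illustrative and not part of the role description; the function name, signal values, and blend gain `alpha` are assumptions, and a production pipeline would use a full state estimator (e.g., an EKF) rather than this toy.

```python
# Illustrative complementary filter: blend integrated gyro rate (IMU) with an
# absolute heading from visual odometry. All names/values are assumptions.

def fuse_heading(prev_heading, gyro_rate, vo_heading, dt, alpha=0.98):
    """Blend a high-rate gyro prediction (which drifts) with a
    drift-free but lower-quality visual-odometry heading."""
    predicted = prev_heading + gyro_rate * dt        # IMU integration step
    return alpha * predicted + (1 - alpha) * vo_heading  # VO correction

# Simulate a constant gyro bias being pulled back toward the VO estimate.
heading = 0.0
for _ in range(100):
    heading = fuse_heading(heading, gyro_rate=0.1, vo_heading=0.0, dt=0.01)
```

The gain `alpha` trades responsiveness (trusting the gyro) against drift suppression (trusting the vision estimate); in the simulated loop above, the biased gyro integration is bounded near a small steady-state offset instead of drifting without limit.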
- 3-7+ years of hands-on experience in AI/ML, robotics, or autonomous systems research (industry or PhD/postdoc).
- Deep knowledge of real-time computer vision, sensor fusion, and model-based control.
- Strong expertise in deep reinforcement learning, optimal control, and/or hybrid AI systems.
- Track record of designing custom ML models or training pipelines, not just reusing open-source architectures.
- Advanced Python and C++ proficiency; deep familiarity with PyTorch and/or TensorFlow.