We are looking for a Computer Vision Engineer to work on vision-based systems that enable autonomous targeting and navigation for drones. This role focuses on deploying and optimizing computer vision systems on real hardware platforms, where compute resources are limited and performance constraints are strict. You will work closely with engineers across the autonomy stack to ensure that vision models operate reliably in real-world conditions.
What You’ll Do
- Develop and improve computer vision components for autonomous drone systems
- Integrate vision models into real-time autonomy pipelines
- Optimize inference performance for low-compute edge hardware
- Work with live camera feeds and onboard sensor data
- Improve detection, tracking, and scene understanding in real environments
- Collaborate with engineers across the autonomy stack to improve overall system performance
- Balance model performance with hardware constraints and system-level tradeoffs
The goal is to build systems that work reliably in real-world conditions, not only in ideal laboratory environments.
Technical Focus
Typical areas of work include object detection, visual tracking, scene understanding, perception systems for autonomous platforms, and optimization of vision models for edge devices.
Tech Stack
- Python
- C / C++
- PyTorch / TensorFlow
- OpenCV
Hardware Environment
Vision systems operate on low-compute edge platforms with limited processing power.
Typical sensors include RGB cameras, a compass, and additional onboard sensors (LiDAR, altimeters, barometers, etc.).
The work requires optimizing models for latency, reliability, and efficient use of compute resources.
We are looking for engineers who enjoy applying computer vision to real-world systems. Experience deploying models on low-compute hardware is highly valued.