Embedded Computer Vision Engineer (Edge Inference)
Overview
We are building computer-vision capabilities on Linux-based edge devices. This role owns the embedded software that takes models from “works on a workstation” to “runs reliably, efficiently, and measurably fast on-device.” You will develop and optimize inference pipelines, integrate vendor runtimes on NPUs/MPUs, and work close to the Linux kernel when needed (performance, memory, I/O, and driver interactions).
What you will do
- Build and maintain production-grade embedded software for on-device computer vision inference (camera ingest, preprocessing, inference, postprocessing, telemetry) primarily in C++, with Rust as an option where appropriate.
- Integrate and run deep learning models using edge runtimes/toolchains (e.g., TensorRT, TFLite, OpenVINO, ONNX Runtime, vendor SDKs for NPUs/MPUs).
- Profile and optimize end-to-end performance: latency, throughput, memory footprint, power, and thermal constraints.
- Implement deployment-oriented model optimizations when needed (quantization workflows, operator compatibility fixes, graph optimizations, runtime-specific conversion).
- Work on Linux-based embedded platforms: cross-compilation, build systems, packaging, and reliable field deployment.
- Debug complex system issues across the stack: kernel/user-space boundaries, driver/I/O bottlenecks, memory contention, and multi-threaded performance.
- Collaborate with model/CV stakeholders to ensure models are edge-ready (I/O specs, accuracy vs. performance tradeoffs, validation on target hardware).
- Establish and uphold engineering standards: code quality, test strategy, CI, performance benchmarks, and observability on-device.
Requirements
Required qualifications
- 7–8+ years of professional experience in embedded software development, with significant time shipping Linux-based products.
- Strong expertise in C++ (modern C++11/14/17); Rust experience is a plus (or willingness to use Rust where it benefits reliability/performance).
- Strong Linux systems knowledge, including at least some of: kernel fundamentals, device I/O, scheduling, memory behavior, and profiling/debugging tooling (e.g., perf, ftrace, eBPF).
- Working knowledge of computer vision and deep learning inference concepts (pipelines, tensors, common CV tasks, latency/accuracy tradeoffs). You do not need to be a model developer/researcher, but must be fluent in deploying and running models.
- Experience optimizing inference for edge hardware (NPUs/MPUs/GPUs/accelerators), including quantization and runtime constraints.
- Master’s degree or higher in a relevant field (Computer Vision, Machine Learning/Deep Learning, Electrical/Computer Engineering, Computer Science, or related).
Preferred qualifications
- Camera stacks and media pipelines (V4L2, GStreamer, ISP integration).
- Embedded build and deployment toolchains (Yocto/Buildroot, CMake/Bazel).
- Hardware-aware optimization experience (ARM, NEON/SIMD).
- Experience with vendor-specific NPU SDKs and quantization toolchains (e.g., Rockchip RKNN, Qualcomm SNPE/QNN, MediaTek, Intel Movidius).
- OTA updates, reliability, and embedded security practices (watchdogs, crash dumps, secure boot).
AI coding tools
- Comfortable using modern AI-assisted development tools (e.g., code completion, refactoring, test generation) while maintaining strong engineering judgment, code review discipline, and security awareness.
Benefits
At Rapsodo, you will have the opportunity to:
- Work on cutting-edge technology that integrates AI, sensor fusion, and high-performance embedded computing.
- Be part of a highly skilled, multidisciplinary engineering team driving innovation.
- Lead end-to-end product development with real-world impact.
- Shape the future of sports through advanced embedded systems and AI-driven solutions.
If you’re passionate about solving complex engineering challenges and want to be at the forefront of next-generation technology, we’d love to hear from you.
Apply now and be part of the team that’s redefining performance through innovation!