About Rivet
Rivet is an American company building integrated task systems — fusing hardened hardware with software, sensors, AI, and networking — for industrial workforces and defense personnel. We create capabilities that multiply the effectiveness of every individual and withstand the world’s toughest environments.
We serve the people who build, operate, maintain, and defend our way of life. From technicians and engineers to first responders and service members, they embody the hard work, ingenuity, and meritocratic values that drive Western prosperity. Yet too often they are forced to rely on outdated tools that fail under modern pressures. Rivet exists to reset that priority.
At Rivet, you’ll join a mission-driven team that fuses disciplines to deliver decisive outcomes where they matter most. Whether shaping our technology, strengthening our partnerships, or building our culture, every role here contributes to equipping the front lines with the modern systems they deserve.
Work Authorization Requirement: Due to the nature of our business and compliance with federal regulations, all candidates must be "U.S. Persons". Upon hire, you will be required to provide documentation verifying your status as a U.S. citizen, a lawful permanent resident, or a protected individual under 8 U.S.C. 1324b(a)(3).
Role: Augmented Reality Software Engineer, Computer Vision
Location: San Jose, CA preferred; open to Bellevue, WA
Compensation*: $210,000-$290,000 + benefits
Role Description
Rivet is looking for an Augmented Reality Software Engineer focused on computer vision, tracking, and equilibrium systems for next-generation AR devices running Android and embedded Linux. This role is responsible for building perception and tracking capabilities that enable stable, low-latency, and spatially aware AR experiences in dynamic real-world environments. You will work on systems that process and interpret visual and sensor data in real time — including camera pipelines, visual tracking, scene understanding, motion estimation, and equilibrium/stability algorithms for wearable AR platforms. The role spans perception, XR runtime integration, and real-time performance optimization on constrained hardware. You will collaborate closely with sensor fusion, hardware, firmware, graphics, and product teams to deliver robust and reliable spatial computing capabilities.
Responsibilities
- Develop computer vision and tracking systems for AR/XR devices using diverse vision modalities from state-of-the-art sensor inputs.
- Build and debug perception pipelines for visual tracking, scene understanding, motion estimation, localization, and equilibrium/stability systems on edge platforms such as Qualcomm, NVIDIA, and Intel processors.
- Implement performant real-time systems in C++ and C# for AR applications and device services.
- Integrate with OpenXR and AR runtimes for cross-platform XR compatibility.
- Optimize latency, tracking fidelity, synchronization, and runtime performance on Android and embedded Linux platforms.
- Develop tooling, visualization, and debugging workflows for perception and tracking validation.
- Build companion applications and services for Android-based devices using Java/Kotlin.
- Use OpenCV and Python with PyTorch, TensorFlow, and TensorFlow Lite for prototyping, automation, simulation, test harnesses, and CI/build tooling.
- Optimize vision models for end-to-end deployment on edge devices.
- Collaborate across hardware, firmware, graphics, backend, and perception teams to deliver end-to-end AR functionality.
Role Requirements
- Deep experience building computer vision, perception, tracking, robotics, AR/VR, or real-time spatial computing systems.
- MS + at least 10 years (or PhD + at least 5 years) of relevant professional experience.
- Strong background in computer vision, motion tracking, localization, geometry, coordinate systems, and real-time systems.
- Proficient in C++ and/or C# for performance-critical applications.
- Experience developing on Android or embedded Linux devices.
- Familiarity with XR and perception frameworks such as OpenXR, ARCore, SLAM/VIO pipelines, OpenCV, or related technologies.
- Strong understanding of performance optimization, profiling, synchronization, and CPU/GPU/memory tradeoffs.
- Experience with Java/Kotlin for Android services and peripheral integration.
- Python experience for scripting, automation, testing, or simulation workflows.
- Ability to partner across hardware, firmware, graphics, and cloud/backend systems.
- Track record of shipping complex real-time systems or XR capabilities.
Preferred Qualifications
- Experience with SLAM, visual-inertial odometry (VIO), object tracking, depth sensing, or probabilistic estimation systems.
- Experience with Unity, Unreal Engine, StereoKit, or comparable 3D/XR engines.
- Experience building wearable, robotics, aerospace, defense, or embedded perception systems.
*Total compensation may vary within this range and is determined by years and level of relevant experience, job-related skills, education, and other factors. In addition to base salary, this role may be eligible for equity grants and other forms of compensation. Eligible employees also receive a competitive benefits package, including unlimited PTO.
EOE