About Overland AI
Founded in 2022 and headquartered in Seattle, Washington, Overland AI is transforming land operations for modern defense. The company leverages over a decade of advanced research in robotics and machine learning, as well as a field-test-forward ethos, to deliver combined capabilities for unit commanders. Our OverDrive autonomy stack enables ground vehicles to navigate and operate off-road in any terrain without GPS or direct operator control. Our intuitive OverWatch C2 interface provides commanders with precise coordination capabilities essential for mission success.
Overland AI has secured funding from prominent defense tech investors including 8VC and Point72, and built trusted partnerships with DARPA, the U.S. Army, the Marine Corps, and Special Operations Command. Backed by eight-figure contracts across the Department of Defense, we are strengthening national security by iterating closely with end users engaged in tactical operations.
Role Summary
Overland AI is hiring a Test Data Analyst to develop and maintain a first-order, data-driven understanding of how our autonomous vehicles behave in real-world testing. This role sits within the Systems, Safety, and Test (SST) organization and partners closely with software, hardware, and test teams to turn daily field test outputs into reliable insight that improves autonomy performance, safety, and system maturity.
This role is centered on deep, hands-on analysis of field test data. You will spend your time immersed in autonomy runs, synchronized logs, ROS MCAPs, sensor outputs, and recorded test video, building deep intuition for system behavior by repeatedly reviewing the same routes and scenarios over time. This sustained exposure enables you not only to annotate and tag data but also to identify subtle patterns, regressions, and improvements that are not visible through metrics alone.
You will be embedded in the test workflow, translating observed behavior into structured datasets, high-quality issue reports, and clear test summaries. Your work forms the factual record of system behavior that engineering, leadership, and customers rely on to assess readiness and risk in demanding defense environments.
This role sits at the intersection of autonomy testing, data analysis, and systems thinking, with a strong emphasis on accuracy, traceability, and clarity over speed.
Key Responsibilities
Primary Responsibility: Field Test Data Review & Behavior Analysis
- Perform deep review of autonomy field test data, including synchronized video, ROS MCAPs, telemetry, and sensor outputs
- Build strong familiarity with system behavior by analyzing repeated routes and scenarios across changing software and hardware configurations
- Annotate autonomy behavior, anomalies, and decision-making moments with precise timestamps and contextual notes
- Identify subtle deviations, trends, and regressions that emerge through longitudinal analysis rather than single test runs
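For a concrete sense of this workflow, here is a minimal sketch of pulling timestamped messages out of a recorded run, assuming the open-source `mcap` Python package; the file path and topic name are hypothetical placeholders, not an existing pipeline.

```python
# Minimal sketch of stepping through a recorded autonomy run, assuming the
# open-source `mcap` package (pip install mcap). The file path and topic
# name are hypothetical placeholders.
from mcap.reader import make_reader

with open("runs/2024-05-01_route_a.mcap", "rb") as f:
    reader = make_reader(f)
    # Iterate raw messages on one topic; log_time is nanoseconds since epoch.
    for schema, channel, message in reader.iter_messages(topics=["/vehicle/odom"]):
        t_sec = message.log_time / 1e9
        # In practice you would decode the payload and apply annotation rules;
        # here we just emit a timestamped entry per message.
        print(f"{t_sec:.3f}s  topic={channel.topic}  schema={schema.name}")
```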
Issue Identification, Trends & Root Cause Insight
- Identify, classify, and document hardware, software, and system-level behaviors observed during autonomy testing
- Own the quality of issue reporting by producing, reviewing, and enriching bug reports with clear context, timestamps, and supporting evidence
- Track and trend system behavior across repeated routes, environments, and software/hardware releases to identify regressions and improvements
- Analyze recurring anomalies (e.g., odometry stability, localization consistency, planner decisions) using longitudinal test data
- Perform structured analysis to identify contributing factors across autonomy software, vehicle systems, sensing, and operations
- Support issue prioritization by providing data-backed context that distinguishes isolated events from systemic risk
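As an illustration of the longitudinal trend tracking described above, here is a sketch in Python using pandas; the CSV export and its columns (route, software_version, anomaly_type) are assumptions for the example, not a description of existing tooling.

```python
# Sketch of trend tracking across software releases, assuming annotations
# have been exported to a CSV with hypothetical columns:
# route, software_version, anomaly_type.
import pandas as pd

df = pd.read_csv("annotations.csv")

# Count anomalies per route and release to separate one-off events from trends.
counts = (
    df.groupby(["route", "software_version", "anomaly_type"])
      .size()
      .rename("count")
      .reset_index()
)

# Flag anomaly types whose count grew between consecutive releases on the same
# route -- candidate regressions worth a deeper look. Assumes version strings
# sort chronologically.
pivot = counts.pivot_table(
    index=["route", "anomaly_type"],
    columns="software_version",
    values="count",
    fill_value=0,
)
regressions = pivot[pivot.diff(axis=1).max(axis=1) > 0]
print(regressions)
```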
Test Reporting & Evidence Development
- Generate clear, structured test summaries that synthesize large volumes of data into conclusions and recommendations
- Contribute traceable evidence to support hazard analysis, validation activities, and future certification efforts
- Help define repeatable standards and formats for test reporting as the organization scales
Data Visibility & Communication
- Transform raw test data and analysis into visual, consumable artifacts for engineers, operators, and leadership
- Create clear plots, summaries, timelines, and annotated media that communicate system behavior and test outcomes
- Support shared understanding of system performance, risk, and maturity across technical and non-technical audiences
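To illustrate the kind of visual artifact this covers, here is a small matplotlib sketch that renders hand-labeled annotations as an event timeline for a single run; the event labels and times are invented examples.

```python
# Sketch of an annotated event timeline for one run, assuming matplotlib and
# a handful of hand-labeled (timestamp_sec, label) annotations. All values
# below are invented examples.
import matplotlib.pyplot as plt

events = [(12.4, "planner replan"), (47.1, "odometry jump"), (63.8, "recovery")]

fig, ax = plt.subplots(figsize=(8, 2))
for t, label in events:
    ax.axvline(t, color="tab:red", linestyle="--")
    ax.text(t, 0.5, label, rotation=90, va="center", ha="right", fontsize=8)
ax.set_xlim(0, 90)
ax.set_yticks([])
ax.set_xlabel("run time (s)")
ax.set_title("Annotated events, route A")
fig.tight_layout()
fig.savefig("route_a_events.png", dpi=150)
```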
What You’ll Need to Succeed
- Bachelor’s degree in a technical field (Engineering, Computer Science, Applied Math, Physics, Data Science, or similar) or equivalent practical experience
- 2–5 years of experience analyzing data from complex, real-world systems
- Experience working with sensor-rich or operational data from domains such as autonomy, robotics, automotive, aerospace, defense, or similar environments
- Comfort working with autonomy and robotics data artifacts, including logs, telemetry, ROS artifacts, and sensor outputs
- Strong analytical skills and the discipline to methodically work through large volumes of real-world test data
- Ability to reason about autonomous system behavior across perception, planning, control, and vehicle interfaces
- High attention to detail with a bias toward accuracy, traceability, and completeness
- Clear written communication skills for producing bug reports, analyses, and test summaries
- Working proficiency with Python or similar tools for data analysis and lightweight automation
- Comfort operating in fast-paced, field-forward development environments
What Will Set You Apart
- Experience working directly with autonomous vehicle sensor data, including LiDAR, radar, and camera streams
- Demonstrated ability to synthesize ambiguous or incomplete datasets into clear, defensible conclusions
- Experience analyzing long-duration test data or reviewing extensive test video to identify subtle system behaviors
- Familiarity with systems engineering concepts such as Operational Design Domains (ODDs), duty cycles, or performance requirements
- Exposure to safety analysis, certification activities, or formal verification and validation workflows
- Experience producing structured test reports or evidence packages for external or regulated stakeholders
- Strong data visualization skills and the ability to communicate technical results clearly to both technical and non-technical audiences
Location
The preferred location for this position is onsite in Seattle, WA.
Compensation
Annual Base Pay: $95,000 – $120,000 USD
Benefits
- Equity compensation
- Best-in-class healthcare, dental, and vision plans
- Unlimited PTO
- 401(k) with company match
- Parental leave