About Geviti
Geviti is a next-generation health optimization platform helping individuals take control of their health through advanced diagnostics, AI-driven insights, and personalized care. Our members benefit from custom lab panels, continuous biomarker tracking, wearable integration, and tailored interventions—including hormone optimization, peptide therapies, supplements, and evidence-based lifestyle strategies.
We're setting a new standard in proactive, precision-based care—designed not just to treat illness, but to optimize health and longevity. Technology and data are at the heart of how we deliver this vision, powering everything from personalized member experiences to real-time operational insights and AI-driven clinical intelligence.
The Opportunity
We are searching for a Senior Data Engineer to serve as the founding member of our data function. You will architect the core data infrastructure that supports every part of the business—from clinical operations to member experience to AI-powered insights.
You'll build our data pipelines, warehouse / lakehouse environment, workflow orchestration, and foundational data models. As the organization's first dedicated data engineering hire, you'll operate with a high degree of autonomy, influence technical direction, and enable advanced downstream use cases including analytics, BI, and machine learning.
This is a hands-on, high-impact role ideal for someone excited to design a system from the ground up and shape the future of data at Geviti.
Key Responsibilities
Data Infrastructure & Engineering
- Architect, build, and maintain a scalable cloud-based data warehouse or lakehouse environment
- Design and implement robust ETL / ELT pipelines integrating data from clinical systems, internal tools, third-party APIs, and operational platforms
- Develop clean, well-structured data models optimized for analytics, reporting, and data science
- Implement data quality processes, validation layers, testing frameworks, and governance standards
- Create and maintain metadata documentation, data dictionaries, lineage tracking, and schema definitions
- Optimize warehouse performance, cost efficiency, and pipeline reliability
- Evaluate tooling for orchestration, workflow automation, and observability
Data Operations & Reliability
- Own end-to-end data ingestion processes, ensuring stable, timely, and accurate data availability
- Establish monitoring, alerting, logging, and operational observability for all pipelines
- Troubleshoot and resolve data inconsistencies, ingestion failures, and system bottlenecks
- Ensure HIPAA-compliant access, storage, and transmission practices for PHI
Analytics Enablement & Cross-Functional Support
- Collaborate with analytics and product teams to build source-of-truth datasets and KPI reporting layers
- Partner with engineering to instrument data collection and event tracking across the platform
- Enable self-service analytics by delivering clean, well-modeled data assets and clear documentation
AI & Advanced Capabilities
- Build pipelines and data structures that support machine learning training, testing, and deployment
- Prepare datasets for AI-driven features, RAG workflows, and real-time personalization
- Support experimentation frameworks, A/B testing, and future intelligent system capabilities
Cross-Functional Collaboration
- Work closely with product, engineering, operations, and clinical teams to understand data needs and deliver scalable solutions
- Uphold HIPAA requirements and best practices for secure PHI handling across all data environments
Key Qualifications
- 5+ years of experience as a Data Engineer or in a similar data infrastructure-focused role
- Expert SQL proficiency, including schema design, warehouse modeling, and query optimization
- Strong Python experience for ETL / ELT development, automation, and data processing
- Proven experience architecting and maintaining cloud-based data warehouse / lakehouse environments (e.g., Snowflake, BigQuery, Redshift, Databricks)
- Deep knowledge of ETL / ELT frameworks, workflow orchestration (Airflow, dbt, Prefect, Dagster), and data quality methodologies
- Experience with API integrations, event-based data flows, and data modeling for analytics
- Familiarity with modern BI ecosystems and supporting high-quality reporting layers
- Deep statistical foundation and experience designing experiments
- Ability to translate ambiguous business challenges into measurable data solutions
- Experience with product and customer analytics
- Strong communication and documentation skills, with the ability to make data meaningful for non-technical stakeholders
- Self-starter mindset: highly autonomous, proactive, and comfortable building systems from the ground up
- Strong systems-thinking approach with attention to reliability, performance, and cost
Nice-to-Haves
- Experience in healthcare or healthtech environments
- Knowledge of FHIR standards or healthcare data models
- Familiarity with HIPAA compliance and secure data handling
- Exposure to ML frameworks (classification, regression, time-series, RAG)
- Cloud certifications or real-time analytics experience
- Prior experience as a founding or first data hire
Why Geviti
- Work on meaningful problems that directly improve people's health and longevity
- Join a mission-led, growth-oriented team shaping the future of healthcare
- Competitive compensation with opportunities for leadership and growth
- Build impactful products at the intersection of health, data, and AI