Hybrid role, 3 days per week in office in Elk Grove, CA; no fully remote option.
This is a direct hire opportunity.
Summary:
We’re seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full‑time role is hands‑on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high‑performing.
Responsibilities:
- Build and maintain robust data pipelines (structured, semi‑structured, unstructured).
- Implement ETL workflows with Spark, Delta Lake, and cloud‑native tools.
- Support big data platforms (Databricks, Snowflake, GCP) in production.
- Troubleshoot and optimize SQL queries, Spark jobs, and workloads.
- Ensure governance, security, and compliance across data systems.
- Integrate workflows into CI/CD pipelines with Git, Jenkins, Terraform.
- Collaborate cross‑functionally to translate business needs into technical solutions.
Qualifications:
- 7+ years in data engineering with production pipeline experience.
- Expertise in the Spark ecosystem, Databricks, Snowflake, and GCP.
- Strong skills in PySpark, Python, and SQL.
- Experience with RAG systems, semantic search, and LLM integration.
- Familiarity with Kafka, Pub/Sub, and vector databases.
- Proven ability to optimize ETL jobs and troubleshoot production issues.
- Agile team experience and excellent communication skills.
- Certifications in Databricks, Snowflake, GCP, or Azure.
- Exposure to Airflow and BI tools (Power BI, Looker Studio).