Company Description
Zigma LLC is a women-owned technology consulting and IT services start-up specializing in Big Data engineering, cloud data modernization, cloud architecture, and advanced analytics. Our mission is to empower organizations through secure, scalable, and high-performance digital ecosystems while maintaining a strong commitment to cybersecurity and compliance. We work with clients across various industries, including healthcare, telecom, and financial services, ranging from local businesses to enterprise-level corporations. Dedicated to fostering inclusion and women's leadership, we strive to deliver innovative solutions that drive operational efficiency and digital transformation. Zigma LLC combines technical expertise with a passion for empowering the next generation of women entrepreneurs.
Data Engineer (Mid-Level) Hybrid | C2C | Healthcare
Locations : East Bay Area, CA | Greater Los Angeles Area, CA | Willamette Valley, OR | Greater Atlanta Area, GA
Employment Type : C2C
Work Authorization : US Citizens, Green Card holders, H4 / L2 / any EAD, OPT / CPT candidates
Work Arrangement : Hybrid
Openings : 3 per location
Experience : 7–12 years
Contract : Long-term (12+ months, performance-based)
Preferred Education / Certification : B.S. / M.S. in an engineering discipline such as Computer Science or Data Engineering, or equivalent skills and certifications
Join a leading healthcare analytics team as a Data Engineer! Work on Azure Cloud, Databricks, and modern data pipelines to drive insights from complex healthcare datasets. This is a hybrid role with opportunities to collaborate across multiple locations.
Key Responsibilities :
Design, build, and maintain ETL / ELT ingestion pipelines on Azure Cloud
Collaborate with data scientists and analysts to ensure data quality, governance, and availability
Implement batch and streaming data processing workflows
Optimize data workflows and pipelines for performance and scalability
Work with HIPAA-compliant healthcare data
Technical Skills & Tools :
Programming & Scripting : Python, SQL, Scala / Java
Data Processing Frameworks : Apache Spark, Kafka, Airflow / Prefect
Databases : Relational (PostgreSQL, MySQL, SQL Server), NoSQL (MongoDB, Cassandra), Data Warehouses (Snowflake, Redshift)
Data Formats : CSV, JSON, Parquet, Avro, ORC
Version Control & DevOps : Git, Azure DevOps, CI / CD pipelines
Cloud & Containerization : Azure Cloud, Docker, Kubernetes, Terraform
Core Skills :
ETL / ELT ingestion pipeline design
Batch & streaming data processing
Data modeling (star / snowflake schemas)
Performance optimization & scalability
Data governance and security
Must-Have :
7–12 years in Data Engineering
Hands-on Azure Cloud and Databricks experience
M.S. in Data Science or relevant certifications (Databricks / Data Science)