The team is seeking a Data Engineer experienced in implementing modern data solutions in Azure, with strong hands-on skills in Databricks, Spark, Python, and cloud-based DataOps practices. The Data Engineer will analyze, design, and develop data products, pipelines, and information architecture deliverables, treating data as an enterprise asset. This role also supports cloud infrastructure automation and CI/CD using Terraform, GitHub, and GitHub Actions to deliver scalable, reliable, and secure data solutions.
Requirements:
- 5+ years of experience as a Data Engineer
- Hands-on experience with Azure Databricks, Spark, and Python
- Experience with Delta Live Tables (DLT) or Databricks SQL
- Strong SQL and database background
- Experience with Azure Functions, messaging services, or orchestration tools
- Familiarity with data governance, lineage, or cataloging tools (e.g., Purview, Unity Catalog)
- Experience monitoring and optimizing Databricks clusters or workflows
- Experience working with Azure cloud data services and understanding how they integrate with Databricks and enterprise data platforms
- Experience with Terraform for cloud infrastructure provisioning
- Experience with GitHub and GitHub Actions for version control and CI/CD automation
- Strong understanding of distributed computing concepts (partitions, joins, shuffles, cluster behavior)
- Familiarity with SDLC and modern engineering practices
- Ability to balance multiple priorities, work independently, and stay organized
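The distributed-computing concepts listed above (partitions, joins, shuffles) can be illustrated without a cluster. Below is a minimal plain-Python sketch, not Spark itself, of the two ideas a candidate would be expected to know: a shuffle hash-partitions rows by their join key so matching keys co-locate, while a broadcast join ships the small side everywhere so the large side never moves. The function names `hash_partition` and `broadcast_join` are illustrative, not part of any library.

```python
from collections import defaultdict

def hash_partition(rows, key, n_partitions):
    """Assign each row to a partition by hashing its join key --
    roughly what a Spark shuffle does before a sort-merge join."""
    parts = defaultdict(list)
    for row in rows:
        parts[hash(row[key]) % n_partitions].append(row)
    return parts

def broadcast_join(large_rows, small_rows, key):
    """Broadcast-style join: the small side becomes an in-memory
    lookup table shipped to every worker, so the large side is
    joined in place with no shuffle."""
    lookup = {r[key]: r for r in small_rows}
    return [{**big, **lookup[big[key]]}
            for big in large_rows if big[key] in lookup]

# Usage: join orders to a small dimension table.
orders = [{"order_id": i, "country": "US" if i % 2 else "DE"}
          for i in range(4)]
regions = [{"country": "US", "region": "NA"},
           {"country": "DE", "region": "EU"}]
joined = broadcast_join(orders, regions, "country")
```

In PySpark the equivalent hint is `broadcast()` from `pyspark.sql.functions`; the trade-off is that the broadcast side must fit in each executor's memory.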
Key Responsibilities:
- Analyze, design, and develop enterprise data solutions with a focus on Azure, Databricks, Spark, Python, and SQL
- Develop, optimize, and maintain Spark/PySpark data pipelines, including managing performance issues such as data skew, partitioning, caching, and shuffle optimization
- Build and support Delta Lake tables and data models for analytical and operational use cases
- Apply reusable design patterns, data standards, and architecture guidelines across the enterprise, collaborating with other teams when needed
- Use Terraform to provision and manage cloud and Databricks resources, supporting Infrastructure as Code (IaC) practices
- Implement and maintain CI/CD workflows using GitHub and GitHub Actions for source control, testing, and pipeline deployment
- Manage Git-based workflows for Databricks notebooks, jobs, and data engineering artifacts
- Troubleshoot failures and improve reliability across Databricks jobs, clusters, and data pipelines
- Apply cloud computing skills to deploy fixes, upgrades, and enhancements in Azure environments
- Work closely with engineering teams to enhance tools, systems, development processes, and data security
- Participate in the development and communication of data strategy, standards, and roadmaps
- Draft architectural diagrams, interface specifications, and other design documents
- Promote the reuse of data assets and contribute to enterprise data catalog practices
- Deliver timely and effective support and communication to stakeholders and end users
- Mentor team members on data engineering principles, best practices, and emerging technologies
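One of the performance issues named above, data skew, is commonly mitigated by key salting: appending a random suffix to a hot key so its rows spread across many shuffle partitions instead of overloading one. The sketch below shows the idea in plain Python; in PySpark the same effect is typically achieved with `withColumn` and `rand()`. The helper name `salt_key` and the salt count of 10 are illustrative assumptions.

```python
import random
from collections import Counter

def salt_key(key, n_salts):
    """Append a random salt so one hot key spreads across up to
    n_salts shuffle buckets instead of landing in a single one."""
    return f"{key}_{random.randrange(n_salts)}"

random.seed(0)  # seeded only so the example is reproducible

# Skewed distribution: 'US' dominates the join key.
keys = ["US"] * 900 + ["DE"] * 50 + ["FR"] * 50
salted = [salt_key(k, 10) for k in keys]

# Without salting, all 900 'US' rows hash to one bucket;
# after salting they spread across up to 10 buckets.
buckets = Counter(salted)
```

The cost of this pattern is that the other side of the join must be expanded with every possible salt value so salted keys still match.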