What you know
- Design, develop, and maintain scalable, high-performance data pipelines across Azure cloud platforms.
- Build and orchestrate ETL/ELT workflows using Azure Data Factory (ADF).
- Develop enterprise-grade data models (dimensional, relational, and lakehouse) to support analytics and reporting needs.
- Implement and optimize data processing solutions using Azure Databricks and distributed computing frameworks.
- Support AI/ML initiatives by enabling curated datasets and workflow integration using Azure Machine Learning pipelines.
- Collaborate with data scientists, analysts, architects, and business stakeholders to deliver trusted data products.
- Ensure best practices in data governance, security, quality, and compliance across platforms.
- Monitor and optimize pipeline reliability, performance, and cost efficiency in production environments.
- Contribute to CI/CD automation and operational excellence for enterprise data workflows.
Important attributes for this role
- Strong data modeling mindset (enterprise analytics and lakehouse architectures)
- Ownership of end-to-end data architecture and delivery
- Ability to work cross-functionally with technical and business teams
- Clear communication and stakeholder alignment
- Strong decision-making in platform and architecture design
- Enterprise focus on reliability, security, and scalability
What you'll do
- 8-10 years of experience in Data Engineering, Data Platform Engineering, or related roles.
- Strong experience building cloud-scale data solutions in the Microsoft Azure ecosystem.
- Hands-on expertise with Azure Data Factory for orchestration and automation.
- Experience with Azure Databricks for large-scale data transformation and processing.
- Strong proficiency in SQL and Python for data engineering workflows.
- Experience working with Azure Data Lake Storage (ADLS) and modern lakehouse architectures.
- Familiarity with Azure Machine Learning pipelines and supporting feature/data workflows for ML teams.
- Strong understanding of data governance, lineage, and quality frameworks.
- Experience with Delta Lake and modern lakehouse design.
- Familiarity with DevOps practices, CI/CD pipelines, and Infrastructure-as-Code.
- Experience working in large-scale enterprise environments with complex data integration needs.
- Knowledge of Azure security best practices (RBAC, Key Vault, encryption).
Education
- Bachelor's degree in Computer Science, Engineering, Data Science, or a related field (required).
- Master's degree is a plus.
Compensation: $140K - $145K / PA
Key Skills
Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Employment Type: Full Time
Experience: years
Vacancy: 1