About the job:
We are seeking an accomplished Tech Lead Data Engineer to architect and drive the development of large-scale, high-performance data platforms supporting critical customer and transaction-based systems. The ideal candidate will have a strong background in data pipeline design, the Hadoop ecosystem, and real-time data processing, with proven experience building data solutions that power digital products and decisioning platforms in a complex, regulated environment.
As a technical leader, you will guide a team of engineers to deliver scalable, secure, and reliable data solutions enabling advanced analytics, operational efficiency, and intelligent customer experiences.
Key Roles & Responsibilities
- Lead and oversee the end-to-end design, implementation, and optimization of data pipelines supporting key customer onboarding, transaction, and decisioning workflows.
- Architect and implement data ingestion, transformation, and storage frameworks leveraging Hadoop, Avro, and distributed data processing technologies.
- Partner with product, analytics, and technology teams to translate business requirements into scalable data engineering solutions that enhance real-time data accessibility and reliability.
- Provide technical leadership and mentorship to a team of data engineers, ensuring adherence to coding, performance, and data quality standards.
- Design and implement robust data frameworks to support next-generation customer and business product launches.
- Develop best practices for data governance, security, and compliance aligned with enterprise and regulatory requirements.
- Drive optimization of existing data pipelines and workflows for improved efficiency, scalability, and maintainability.
- Collaborate closely with analytics and risk modeling teams to ensure data readiness for predictive insights and strategic decision-making.
- Evaluate and integrate emerging data technologies to future-proof the data platform and enhance performance.
Must-Have Skills
- 8-10 years of experience in data engineering, with at least 2-3 years in a technical leadership role.
- Strong expertise in the Hadoop ecosystem (HDFS, Hive, MapReduce, HBase, Pig, etc.).
- Experience working with Avro, Parquet, or other serialization formats.
- Proven ability to design and maintain ETL/ELT pipelines using tools such as Spark, Flink, Airflow, or NiFi.
- Proficiency in Python and Scala for large-scale data processing.
- Strong understanding of data modeling, data warehousing, and data lake architectures.
- Hands-on experience with SQL and both relational and NoSQL data stores.
- Cloud data platform experience with AWS.
- Deep understanding of data security, compliance, and governance frameworks.
- Excellent problem-solving, communication, and leadership skills.