Job Description
Role: Data Engineer
Location: Phoenix, AZ (locals only; face-to-face interview required)
Term: Contract
We are looking for a highly motivated Data Engineer to join our dynamic data and analytics team. In this role, you will build and maintain scalable data pipelines, support ingestion from multiple sources, and ensure data integrity and availability across systems. You'll work closely with data scientists, analysts, and engineering teams to enable both real-time and batch data processing.
Key Responsibilities
- Design, develop, and maintain robust data pipelines to ingest, transform, and deliver data across internal and external platforms.
- Write and optimize complex SQL queries for data extraction, transformation, and loading (ETL/ELT).
- Implement data ingestion frameworks using batch and streaming technologies.
- Develop data integration workflows and scripts using Python, shell scripts, or other scripting languages.
- Ensure high performance, reliability, and data quality across all stages of the pipeline.
- Collaborate with cross-functional teams (data science, analytics, product) to understand data needs and deliver scalable solutions.
- Monitor data jobs, identify bottlenecks, and troubleshoot issues in real time.
- Handle large, complex datasets, perform data profiling, and ensure conformance to data quality standards.
- Apply problem-solving skills to identify root causes of data issues and suggest long-term fixes or enhancements.
- Work in Agile/Scrum environments, participating in planning, reviews, and delivery cycles.
Required Skills & Experience
- Strong hands-on experience with SQL (complex joins, window functions, CTEs, aggregations, etc.).
- Proven experience with data ingestion, integration, and pipeline design across multiple data sources.
- Proficiency in Python, shell, or other scripting languages for automation and orchestration tasks.
- Familiarity with data processing tools and frameworks such as Apache Airflow, Spark, or Kafka.
- Experience working with relational databases (PostgreSQL, Oracle, MySQL).
- Experience with cloud data platforms (GCP BigQuery) is a plus.
- Ability to work with complex, messy data: cleansing, validating, and transforming to ensure consistency.
- Strong analytical and problem-solving skills with attention to detail and data accuracy.
- Exposure to CI/CD practices and version control (Git).
- Knowledge of data modeling principles and schema design is a plus.
Preferred Qualifications
- Experience handling data from APIs, flat files, and event streams.
- Background in financial services, payments, or customer analytics.
- Familiarity with data governance practices, metadata management, and PII data handling.
- Understanding of data security, encryption, and masking techniques.