Job Description
Key Responsibilities
Data Ingestion & Processing
- Design, build, and maintain data ingestion pipelines using AWS Glue, AWS Lambda, and S3
- Ingest data from multiple sources including databases, files, APIs, and SFTP
- Handle batch and near-real-time data processing use cases
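For illustration, a minimal sketch of the kind of AWS Glue (PySpark) ingestion job this role would build, assuming CSV files landed in S3 are cleaned and written back out as Parquet; the SOURCE_PATH/TARGET_PATH job arguments and paths are hypothetical placeholders, not details from this posting:

```python
# Minimal sketch of an AWS Glue (PySpark) ingestion job.
# Assumptions: raw CSV files already land in S3, and the job is configured
# with --SOURCE_PATH and --TARGET_PATH arguments (hypothetical names).
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME", "SOURCE_PATH", "TARGET_PATH"])

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Read the raw CSV files dropped into S3 by the ingestion pipeline
raw = spark.read.option("header", "true").csv(args["SOURCE_PATH"])

# Light cleanup before landing the data for downstream loading
cleaned = raw.dropDuplicates()

# Write columnar Parquet for efficient downstream consumption
cleaned.write.mode("overwrite").parquet(args["TARGET_PATH"])
```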
Data Warehousing & Snowflake
- Load, transform, and optimize data in Snowflake
- Develop and maintain Snowflake objects including tables, views, stages, file formats, and tasks
- Implement performance optimization techniques (clustering, partitioning, query tuning)
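As a rough illustration of this work, a minimal sketch using the snowflake-connector-python package to stage, load, and cluster a table; the account details and the RAW_DB, ORDERS_STAGE, and ORDERS names are hypothetical placeholders, not objects from this posting:

```python
# Minimal sketch: create a stage, bulk-load a table, and set a clustering key.
# All identifiers and credentials below are hypothetical; a private S3 bucket
# would normally also need a STORAGE_INTEGRATION on the stage.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # assumption: Snowflake account identifier
    user="etl_user",        # assumption: service user
    password="***",         # in practice, pull from a secrets manager
    warehouse="ETL_WH",
    database="RAW_DB",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Target table for the loaded data
    cur.execute("""
        CREATE TABLE IF NOT EXISTS ORDERS (
            ORDER_ID NUMBER, ORDER_DATE DATE, AMOUNT NUMBER(12,2)
        )
    """)
    # External stage pointing at files landed in S3 by the ingestion pipeline
    cur.execute("""
        CREATE STAGE IF NOT EXISTS ORDERS_STAGE
        URL = 's3://my-bucket/orders/'
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    # Bulk-load the staged files into the target table
    cur.execute("COPY INTO ORDERS FROM @ORDERS_STAGE")
    # Clustering key to help prune micro-partitions on a common filter column
    cur.execute("ALTER TABLE ORDERS CLUSTER BY (ORDER_DATE)")
finally:
    conn.close()
```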
Data Transformation & Quality
- Implement data transformations using SQL and/or ETL frameworks
- Apply basic data quality checks and validation rules
- Troubleshoot and resolve data pipeline failures and performance issues
Cloud & Automation
- Develop serverless solutions using AWS Lambda for orchestration and lightweight transformations
- Work with IAM roles and policies to ensure secure access to AWS resources
- Monitor pipelines using CloudWatch and logs
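A minimal sketch of the orchestration pattern described above, assuming an S3-triggered Lambda that starts an AWS Glue job with boto3 and logs to CloudWatch; the "orders-ingest-job" name and the --source_path argument are hypothetical placeholders:

```python
# Minimal sketch: S3 event triggers this Lambda, which starts a Glue job run.
# Logging goes to CloudWatch Logs via the standard Lambda logging setup.
import json
import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

glue = boto3.client("glue")


def lambda_handler(event, context):
    """Start a Glue job for each object landed in S3."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        logger.info("New object s3://%s/%s, starting Glue job", bucket, key)
        response = glue.start_job_run(
            JobName="orders-ingest-job",                       # hypothetical job name
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        logger.info("Started JobRunId %s", response["JobRunId"])
    return {"statusCode": 200, "body": json.dumps("ok")}
```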
Collaboration & Delivery
- Collaborate with senior data engineers, architects, and analysts on solution design
- Participate in code reviews, testing, and deployment activities
- Support production issues and contribute to continuous improvement initiatives
Required Skills & Qualifications
- 3-4 years of experience as a Data Engineer
- Hands-on experience with AWS services: Glue, Lambda, S3, CloudWatch
- Strong experience with Snowflake data warehouse
- Proficiency in SQL (Snowflake SQL preferred)
- Experience working with Python for ETL and Lambda development
- Understanding of data modeling concepts (fact/dimension tables, normalization)
- Familiarity with Git-based version control and CI/CD basics