A company is hiring an AWS Data Engineer.
Key Responsibilities
Design and build scalable, reliable data pipelines using AWS services to process and transform large datasets from utility systems
Orchestrate data pipeline workflows using AWS Step Functions (preferred over Airflow)
Implement ETL/ELT processes using PySpark, Python, and Pandas to clean, transform, and integrate data from multiple sources
Required Qualifications
Minimum of 5 years of experience in data engineering
Proficiency in AWS services such as Step Functions, Lambda, Glue, S3, DynamoDB, and Redshift
Strong programming skills in Python with experience using PySpark and Pandas for large-scale data processing
Hands-on experience with distributed systems and scalable architectures
Knowledge of ETL/ELT processes for integrating diverse datasets into centralized systems
AWS Data Engineer • Lakewood, Colorado, United States