AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards.
WHY JOIN US
If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!
ABOUT THE ROLE
This Senior Data Engineer (Python) role is central to transforming large, diverse datasets into reliable insights that support research and strategic decisions across a global financial platform. You will help shape a unified data ecosystem, partnering with data scientists, researchers, and stakeholders to connect technology with real business impact. What makes this opportunity unique is the scale of the data, the use of modern cloud, AI, and data engineering practices, and the strong influence you will have on platform evolution. It’s a chance to grow technically while contributing to a mission-driven, collaborative environment.
WHAT YOU WILL DO
- Design and build scalable Data Lakes, Data Warehouses, and Data Lakehouses;
- Design and implement robust ETL/ELT processes at scale using Python and pipeline orchestration tools like Airflow;
- Develop ingestion workflows from diverse third-party APIs and data sources;
- Manage and optimize file formats such as Parquet, Avro, and ORC for high-performance data retrieval;
- Work with AI development tools to support machine learning initiatives and advanced analytics;
- Act as a technical consultant to gather requirements, understand business goals, and translate them into technical roadmaps;
- Work with Terraform and other tools to build AWS and on-prem infrastructure.
MUST HAVES
- You must be authorized to work for ANY employer in the US, as employment visa sponsorship is not available;
- Bachelor’s degree in computer science, engineering, or another technical field, or equivalent experience;
- 5+ years of hands-on Python experience;
- 5+ years of experience with data processing and analytics libraries such as Pandas, Polars, PySpark, and DuckDB;
- 2+ years of experience with Big Data technologies such as Spark and Snowflake;
- Expert-level knowledge of Airflow or similar pipeline orchestration tools;
- Deep understanding of Medallion Architecture, columnar file formats, and database technologies including SQL, NoSQL, and Lakehouse architectures;
- Proven ability to work with third-party APIs for complex data ingestion;
- Proficiency with cloud platforms such as AWS, GCP, and Snowflake, including advanced SQL optimization;
- Upper-intermediate English level.
NICE TO HAVES
- Familiarity with the fintech industry and financial data domains;
- Documentation skills for data pipelines, architecture designs, and best practices;
- OpenSearch or Elasticsearch;
- AWS SageMaker Studio and Jupyter for data analysis;
- Terraform;
- Scala.
PERKS AND BENEFITS
- Professional growth: Mentorship, TechTalks, and personalized growth roadmaps;
- Competitive compensation: USD-based pay with education, fitness, and team activity budgets;
- Exciting projects: Modern solutions with Fortune 500 and top product companies;
- Flextime: Flexible schedule with remote and office options.