Talent.com
Senior Data Engineer – DevOps [Gitlab, Terraform]
First Citizens Bank • Raleigh, North Carolina, US

Overview

This is a remote role; candidates may only be hired in the following locations: NC, AZ, TX.

We are seeking an experienced DevOps Engineer to design, build, and maintain CI/CD pipelines, infrastructure automation, and deployment workflows supporting our data engineering platform. This role focuses on infrastructure as code, configuration management, cloud operations, and enabling data engineers to deploy reliably and rapidly across AWS and Azure environments.

Responsibilities

CI/CD Pipeline & Deployment Automation

  • Design and implement robust CI/CD pipelines using Azure DevOps or GitLab; automate build, test, and deployment processes for data applications, dbt Cloud jobs, and infrastructure changes.
  • Build deployment orchestration for multi-environment (dev, qa, uat, production) workflows with approval gates, rollback mechanisms, and artifact management.
  • Implement GitOps practices for infrastructure and application deployments; maintain version control and audit trails for all changes.
  • Optimize pipeline performance, reduce deployment times, and enable fast feedback loops for rapid iteration.
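
For illustration, a multi-stage GitLab CI pipeline of the kind described above might be sketched as follows. This is a minimal sketch only: the job names, deploy script, and stage layout are hypothetical, not this team's actual configuration.

```yaml
# Hypothetical .gitlab-ci.yml: build/test stages feed gated deployments
stages:
  - build
  - test
  - deploy-dev
  - deploy-prod

build:
  stage: build
  image: python:3.11
  script:
    - pip install -r requirements.txt
    - python -m build                 # produce the release artifact

unit-tests:
  stage: test
  image: python:3.11
  script:
    - pip install -r requirements.txt
    - pytest tests/

deploy-dev:
  stage: deploy-dev
  script:
    - ./scripts/deploy.sh dev         # hypothetical deploy script
  environment: dev

deploy-prod:
  stage: deploy-prod
  script:
    - ./scripts/deploy.sh prod
  environment: production
  when: manual                        # approval gate: requires a manual trigger
  only:
    - main                            # production deploys only from main
```

The `when: manual` rule is one way GitLab expresses the approval gates mentioned above; rollback is typically a re-run of the deploy job against a previous artifact.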

Infrastructure as Code (IaC) & Cloud Operations

  • Design and manage Snowflake, AWS and Azure infrastructure using Terraform; ensure modularity, reusability, and consistency across environments.
  • Provision and manage cloud resources across AWS and Azure.
  • Implement tagging strategies and resource governance; maintain Terraform state management and implement remote state backends.
  • Support multi-cloud architecture patterns and ensure portability between AWS and Azure where applicable.
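
As an illustration of the Terraform bullets above, a root module might combine a remote state backend, default tagging, and a reusable per-environment module. All names here (bucket, lock table, module path) are assumptions for the sketch, not actual configuration.

```hcl
# Hypothetical root module: remote state backend plus a reusable environment module
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"          # assumed bucket name
    key            = "data-platform/dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                  # state locking
  }
}

provider "aws" {
  region = "us-east-1"
  default_tags {
    tags = {
      Environment = var.environment                     # tagging strategy applied everywhere
      Team        = "data-platform"
    }
  }
}

variable "environment" {
  type    = string
  default = "dev"
}

# The same module is instantiated once per environment (dev, qa, uat, prod)
module "data_platform" {
  source      = "./modules/data-platform"               # hypothetical local module
  environment = var.environment
}
```

Keeping each environment as an instantiation of one module is what makes the "modularity, reusability, and consistency" requirement concrete: dev and prod differ only in variables, not in copied code.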

Configuration Management & Infrastructure Automation

  • Deploy and manage Ansible playbooks for configuration management, patching, and infrastructure orchestration across cloud environments.
  • Utilize Puppet for infrastructure configuration, state management, and compliance enforcement; maintain Puppet modules and manifests for reproducible environments.
  • Automate VM provisioning, OS hardening, and application stack deployment; reduce manual configuration and ensure environment consistency.
  • Build automation for scaling, failover, and disaster recovery procedures.

Snowflake Cloud Operations & Integration

  • Automate Snowflake provisioning, warehouse sizing, and cluster management via Terraform; integrate Snowflake with CI/CD pipelines.
  • Implement Infrastructure as Code patterns for Snowflake roles, permissions, databases, and schema management.
  • Build automated deployment workflows for dbt Cloud jobs and Snowflake objects; integrate version control with Snowflake changes.
  • Monitor Snowflake resource utilization, costs, and performance; implement auto-suspend/auto-resume policies and scaling strategies.
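
The Snowflake bullets above can be expressed in Terraform along these lines. This is a sketch assuming the community Snowflake provider (Snowflake-Labs/snowflake); the warehouse name and sizing are hypothetical.

```hcl
# Hypothetical warehouse definition via the community Snowflake provider
terraform {
  required_providers {
    snowflake = {
      source = "Snowflake-Labs/snowflake"
    }
  }
}

resource "snowflake_warehouse" "etl" {
  name                = "ETL_WH"
  warehouse_size      = "XSMALL"
  auto_suspend        = 60     # suspend after 60 seconds idle to control cost
  auto_resume         = true   # resume automatically when a query arrives
  initially_suspended = true   # don't accrue credits until first use
}
```

Managing warehouses this way gives Snowflake changes the same review and audit trail as any other infrastructure change.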

Python Development & Tooling

  • Develop Python scripts and tools for infrastructure automation, cloud operations, and deployment workflows.
  • Build custom integrations between CI/CD systems, cloud platforms, and Snowflake; create monitoring and alerting automation.

Monitoring, Logging & Observability

  • Integrate monitoring and logging solutions (Splunk, Dynatrace, CloudWatch, Azure Monitor) into CI/CD and infrastructure stacks.
  • Build automated alerting for infrastructure health, deployment failures, and performance degradation.
  • Implement centralized logging for applications, infrastructure, and cloud audit trails; maintain log retention and compliance requirements.
  • Create dashboards and metrics for infrastructure utilization, deployment frequency, and change failure rates.
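
As a sketch of the metrics bullet above, deployment frequency and change failure rate can be computed from simple deployment records. The record shape here is an assumption; real pipelines would pull these from the CI/CD system's API.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Deployment:
    day: date
    succeeded: bool


def deployment_metrics(deploys: list[Deployment]) -> dict:
    """Compute simple DORA-style metrics from a list of deployment records."""
    if not deploys:
        return {"deploys_per_day": 0.0, "change_failure_rate": 0.0}
    # Span of days covered by the records, inclusive of both endpoints
    days = (max(d.day for d in deploys) - min(d.day for d in deploys)).days + 1
    failures = sum(1 for d in deploys if not d.succeeded)
    return {
        "deploys_per_day": len(deploys) / days,
        "change_failure_rate": failures / len(deploys),
    }


deploys = [
    Deployment(date(2024, 1, 1), True),
    Deployment(date(2024, 1, 1), False),
    Deployment(date(2024, 1, 2), True),
    Deployment(date(2024, 1, 4), True),
]
metrics = deployment_metrics(deploys)
print(metrics)  # 4 deploys over a 4-day span; 1 failure out of 4
```

These two numbers, plotted over time, are exactly the "deployment frequency and change failure rate" dashboards mentioned above.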

Data Pipeline & Application Deployment

  • Support deployment of data processing jobs, Airflow DAGs, and dbt Cloud transformations through automated pipelines.
  • Implement blue-green or canary deployment patterns for zero-downtime updates to data applications.
  • Build artifact management workflows (Docker images, Python packages, dbt artifacts); integrate with Artifactory or cloud registries.
  • Collaborate with data engineers on deployment best practices and production readiness reviews.

Disaster Recovery & High Availability

  • Design backup and disaster recovery strategies for data infrastructure; automate backup provisioning and testing.
  • Implement infrastructure redundancy and failover automation using AWS/Azure native services.

Documentation & Knowledge Sharing

  • Maintain comprehensive documentation for infrastructure architecture, CI / CD workflows, and operational procedures.
  • Create runbooks and troubleshooting guides for common issues; document infrastructure changes and design decisions.
  • Establish DevOps best practices and standards; share knowledge through documentation, lunch-and-learns, and mentoring.

Qualifications

Bachelor's degree and 4 years of experience in data engineering, big data technologies, and cloud platforms; OR high school diploma or GED and 8 years of experience in data engineering, big data technologies, and cloud platforms.

Preferred:

Technical/Business Skills:

  • CI/CD tools: Azure DevOps Pipelines or GitLab CI/CD (hands-on pipeline development)
  • Infrastructure as Code: Terraform (AWS and Azure providers), production-grade experience
  • Configuration management: Ansible and/or Puppet; ability to write playbooks/manifests and manage infrastructure state
  • Cloud platforms: AWS (EC2, S3, RDS, VPC, IAM, Lambda, Glue, Lake Formation) and Azure (VMs, App Services, Blob Storage, Cosmos DB, networking)
  • Python programming: scripting, automation, API integration, and tooling development
  • Snowflake: operational knowledge of warehouse management, cost optimization, and cloud integration
  • Git/GitLab/GitHub: version control, branching strategies, and repository management
  • Linux/Unix system administration and command-line proficiency
  • Networking fundamentals: VPCs, subnets, security groups, DNS, load balancing
  • Scripting languages: Bash, Python, or similar for automation
  • 5+ years in DevOps, Platform Engineering, or Infrastructure Engineering
  • 3+ years hands-on with Terraform and Infrastructure as Code
  • 3+ years with CI/CD tools (Jenkins, GitLab CI, Azure DevOps, or similar)
  • 2+ years with configuration management tools (Ansible, Puppet, or similar)
  • 2+ years supporting cloud platforms (AWS and/or Azure in production)
  • 1+ years with Python automation and scripting
  • Experience supporting or integrating with Snowflake or modern data warehouses
  • Financial banking experience is a plus.
  • Must have one or more certifications in the relevant technology fields.

Functional Skills/Core Competencies:

  • Strong automation mindset: identify and eliminate manual toil.
  • Systems thinking: understand full deployment pipelines and infrastructure dependencies.
  • Comfortable with continuous learning of new tools and cloud services.
  • Ability to balance speed of delivery with stability and safety.
  • Team player: support peers, team, and department management.
  • Communication: excellent verbal, written, and interpersonal communication skills.
  • Problem solving: excellent problem-solving skills, incident management, root cause analysis, and proactive solutions to improve quality.
  • Partnership and collaboration: develop and maintain partnerships with business and IT stakeholders.
  • Attention to detail: ensure accuracy and thoroughness in all tasks.

#LI-XG1

Benefits are an integral part of total rewards, and First Citizens Bank is committed to providing a competitive, thoughtfully designed, and quality benefits program to meet the needs of our associates. More information can be found at benefits.
