Tata Technologies, Inc.: Senior Data Engineer – Detroit, MI
DUTIES:
- Implement a multi-layered, Microsoft Azure-hosted Snowflake data platform and centralized data lake solutions for enterprise product engineering services data.
- Oversee the rollout of bespoke data lake solutions for clients to automate data collection and dashboarding.
- Take part in daily status calls with the global team to coordinate development work, project status, and location-specific requirements in one forum.
- Take part in requirements-gathering meetings conducted by business users.
- Keep global teams informed about changes, enhancements, new deliverables, and strategic development decisions, including prioritization of pipelined work and resource bandwidth utilization.
- Run the digital modernization analytics program to help the end client implement an enterprise data lake (based on Microsoft Azure and the Snowflake cloud data platform) for their data engineering services.
- Help the team design a robust, cloud-platform-based architecture for successful implementation, utilizing proprietary Data Enablement Framework principles.
- Implement a security model that restricts access to authorized users.
- Lead unit testing, system integration testing, and validation of data sets, data models, and data pipelines.
- Work on bug fixes & enhancements.
- Produce data pipelines & document analytics models and processes.
- Collaborate with other analytics capability groups (Data Architecture, Visualization) to recommend high-quality solutions.
- Create dashboards and visualizations for global reporting that consume data from data marts, based on business users’ requirements and TTL’s Athenium KPI library.
- Implement dynamic row-level security in dashboards to restrict access to relevant data.
- Train and guide business users in self-service BI.
- Train teammates on TTL’s proprietary tools, such as Synthesis and the Data Enablement Framework.
- Configure, upgrade, and utilize TTL’s proprietary Synthesis tool to integrate with select on-premises data sources and onboard them into the data lake using customized, algorithm-based transformations.
- Build data pipelines for batch load and change data capture using the dbt tool and Data Vault 2.0 methodology to onboard the remaining data sources into the data lake.
- Define data requirements and business rules, perform logical and physical data modeling, and implement and test database designs.
- Develop database models and supporting documentation to meet business requirements.
- Implement a three-layered, end-to-end solution consisting of a data lake, data warehouse, and data marts, utilizing TTL’s Auto Data Ingestion tool for preliminary exploratory data analysis before onboarding data sources into the data lake.
REQUIREMENTS: Bachelor’s degree or foreign equivalent degree in Computer Science, Data Science, Electronics Engineering, or a related technical field, and 4+ years of experience in data engineering and cloud computing, focusing on the design, implementation, and maintenance of data and data pipelines.
Experience must include at least 3 years with each of the following:
- Synthetic Data Engineering
- Data Platform Enablement Framework
- Auto Data Ingestion
- IPMS tools
- Athenium KPI Library
- Visidata
- Collaborative data engineering tools (KM Portal, Databricks, Jupyter, and/or similar tools)
TELEWORK: Hybrid position; part-time telecommuting authorized.
TRAVEL: Up to 5% domestic travel required.
BASE SALARY RANGE: $119,725/year to $125,725/year. Regional salary adjustments for CA, CO, NY, and WA are base + 20% to 30%.
LOCATION: 6001 Cass Avenue, Suite 600, Detroit, MI 48202, and various unanticipated client locations.