Position Summary
Designs, develops, modifies, configures, debugs, and evaluates jobs that extract data from various sources, implement transformation logic, and store data in formats fit for use by stakeholders. Collects metadata about jobs, including data lineage and transformation logic. Works with teams, clients, data owners, and leadership throughout the development cycle, practicing continuous improvement.
This position is hybrid, working from your remote office and your assigned location based on business need.
PG&E is providing the salary range that the company in good faith believes it might pay for this position at the time of the job posting. This compensation range is specific to the locality of the job. The actual salary paid to an individual will be based on multiple factors, including, but not limited to, specific skills, education, licenses or certifications, experience, market value, geographic location, and internal equity. Although we estimate the successful candidate hired into this role will be placed towards the middle or entry point of the range, the decision will be made on a case-by-case basis related to these factors.
Bay Minimum: $140,000
Bay Maximum: $238,000
This job is also eligible to participate in PG&E’s discretionary incentive compensation programs.
Job Responsibilities
- Leads a team on moderately complex to complex data- and analytics-centric problems with broad impact that require in-depth analysis and judgment to obtain results or solutions.
- May contribute to the resolution of uniquely complex data- and analytics-centric problems with significant impact.
- Identifies, designs, and implements internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
- Resolves application programming analysis problems of broad scope within procedural guidelines.
- Provides assistance to other programmers/analysts on unusual or especially complex problems that cross multiple functional/technology areas.
- Conceptualizes and builds infrastructure that allows big data to be accessed and analyzed with verified data quality, ensuring metadata is appropriately captured and catalogued.
- Collaborates with peers to develop departmental standards, norms, and new goals / objectives.
- Plans work to meet assigned general objectives; reviews progress regularly; solutions may provide an opportunity for creative, non-standard approaches.
- Assesses data pipeline performance and suggests or implements changes as required.
- Communicates recommendations, both orally and in writing.
- Mentors and provides guidance to less experienced colleagues.
Qualifications
Minimum:
- BA/BS in Computer Science, Management Information Systems, or related field of study, or equivalent experience
- 7 years of experience with data engineering/ETL ecosystems such as Palantir Foundry, Spark, Informatica, SAP BODS, OBIEE
- Experience with multiple data engineering/ETL ecosystems
- Experience with machine learning algorithm deployment
Desired:
- Master’s degree in Computer Science, Management Information Systems, or related field, or equivalent experience
- Experience leading development teams
- Business Intelligence and data access tool expertise, including advanced SQL, data modeling, and performance optimization
- Strong software engineering fundamentals (Git, CI/CD, unit and integration testing) with production data pipeline experience
- Proficiency in Python and SQL within Palantir Foundry, including PySpark-based transformations and data workflows