Remote Senior Java Engineer - AI Trainer
SuperAnnotate, French Valley, California, US
Remote
Full-time
As a Senior Java Engineer, you will work remotely on an hourly paid basis to review AI-generated Java code, architectural solutions, and technical explanations, as well as generate high-quality ref...
Tech Lead (Python, PySpark, Java)
Lorven Technologies, Rancho California, CA, United States
Full-time
Quick Apply
Hi,
Our client is looking for a Tech Lead (Python, PySpark, Java) for a long-term, fully remote contract project. Below is the detailed requirement.
Role: Tech Lead (Python, PySpark, Java)
Location: Remote
Duration: Long-term contract
Job description:
MS or equivalent experience in Computer Science, MIS, or related technical fields.
10-15+ years of overall experience, including 5+ years in data engineering/ETL ecosystems using PySpark, Python, and Java.
Translate business requirements into technical solutions using PySpark and Python frameworks.
Lead data engineering initiatives addressing moderately complex to highly complex data and analytics challenges.
Plan and execute tasks to meet shared objectives, maintain progress tracking, and document work following best practices.
Identify and implement internal process improvements, including scalable infrastructure design, optimized data distribution, and automation of manual workflows.
Participate actively in Agile/Scrum ceremonies such as stand-ups, sprint planning, and retrospectives.
Contribute to the evolution of data systems and architecture, recommending enhancements to pipelines and frameworks.
Provide technical guidance to team members on complex challenges spanning multiple functional and technical domains.
Build infrastructure that supports large-scale data access and analysis, ensuring data quality and proper metadata management.
Collaborate with leadership to strengthen data-driven decision making through demos, mentorship, and best-practice sharing.
Minimum Qualifications
Required Skills
Strong expertise in PySpark and Python.
Experience with Pandas, APIs, and Spark Streaming.
Solid understanding of database design fundamentals.
Familiarity with CI/CD tools and infrastructure-as-code frameworks.
Experience writing production-grade code.
Experience with unit tests, integration tests, schema validations, and health checks.
Knowledge of Palantir Foundry (Ontology modeling, API configuration, Foundry TypeScript) is a strong plus.
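To illustrate the kind of testing practice the skills list asks for (schema validations and health checks), here is a minimal, hypothetical Python sketch; the schema, field names, and threshold are invented for illustration and are not part of the posting.

```python
# Illustrative sketch: a simple record-level schema validation plus a
# pipeline health check, the kind of guardrail mentioned in the listing.
EXPECTED_SCHEMA = {"id": int, "amount": float, "region": str}  # hypothetical schema

def validate_schema(record: dict) -> list:
    """Return a list of schema violations for one record (empty = valid)."""
    errors = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in record:
            errors.append("missing field: " + field)
        elif not isinstance(record[field], expected_type):
            errors.append("bad type for " + field)
    return errors

def health_check(records: list, max_error_rate: float = 0.05) -> bool:
    """Pass if the share of invalid records stays under the threshold."""
    if not records:
        return False
    invalid = sum(1 for r in records if validate_schema(r))
    return invalid / len(records) <= max_error_rate

good = {"id": 1, "amount": 9.99, "region": "US"}
bad = {"id": "x", "amount": 9.99}  # wrong type for id, missing region
print(validate_schema(good))               # → []
print(health_check([good] * 99 + [bad]))   # 1% error rate → True
```

In a PySpark pipeline the same idea would typically be expressed with a `StructType` schema and DataFrame-level counts, but the stdlib version above shows the pattern without extra dependencies.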