Software Development Engineer, AI/ML, AWS Neuron, Model Inference

Annapurna Labs (U.S.) Inc. • Cupertino, California, USA
Full time
The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon’s custom machine learning accelerators, Inferentia and Trainium.

The Neuron SDK includes an ML compiler, runtime, and application framework that integrate seamlessly with popular ML frameworks like PyTorch and JAX, enabling high-performance ML inference and training.

The Inference Enablement and Acceleration team is at the forefront of running a wide range of models and supporting novel architectures while maximizing their performance on AWS's custom ML accelerators. Working across the stack, from PyTorch down to the hardware-software boundary, our engineers build systematic infrastructure, invent new methods, and create high-performance kernels for ML functions, ensuring every compute unit is fine-tuned for our customers' demanding workloads. We combine deep hardware knowledge with ML expertise to push the boundaries of what's possible in AI acceleration.

As part of the broader Neuron organization, our team works across multiple technology layers, from frameworks and kernels to the compiler, runtime, and collectives. We not only optimize current performance but also contribute to future architecture designs, working closely with customers to enable their models and ensure optimal performance. This role offers a unique opportunity to work at the intersection of machine learning, high-performance computing, and distributed architectures, where you'll help shape the future of AI acceleration technology.

You will architect and implement business-critical features and mentor a brilliant team of experienced engineers. We operate in spaces that are very large, yet our teams remain small and agile. There is no blueprint. We're inventing. We're experimenting. It is a unique learning culture. The team works closely with customers on their model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators. The team collaborates with open source ecosystems to provide seamless integration and bring peak performance at scale for customers and developers.

This role is responsible for the development, enablement, and performance tuning of a wide variety of LLM model families, including massive-scale large language models like the Llama family, DeepSeek, and beyond. The Inference Enablement and Acceleration team works side by side with compiler and runtime engineers to create, build, and tune distributed inference solutions with Trainium and Inferentia. Experience optimizing inference performance for both latency and throughput on such large models, across the stack from system-level optimizations through to PyTorch or JAX, is a must-have.
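The latency/throughput tension mentioned above can be illustrated with a back-of-the-envelope model of batched LLM decoding. This is a sketch with purely hypothetical cost numbers, not Trainium or Inferentia specifics:

```python
# Illustrative latency/throughput model for batched LLM decode steps.
# The cost constants below are made up for illustration; real systems
# derive them from profiling on the target accelerator.

def decode_step_latency_ms(batch_size: int,
                           weight_load_ms: float = 10.0,
                           per_seq_compute_ms: float = 0.5) -> float:
    """Memory-bound decode: streaming the weights dominates and is
    shared across the batch, while per-sequence compute scales
    linearly with batch size."""
    return weight_load_ms + per_seq_compute_ms * batch_size

def throughput_tokens_per_s(batch_size: int) -> float:
    """One generated token per sequence per decode step."""
    latency_s = decode_step_latency_ms(batch_size) / 1000.0
    return batch_size / latency_s

# Larger batches amortize the fixed weight-streaming cost: throughput
# rises, but per-token latency also rises, so serving systems tune
# batch size against a latency target rather than maximizing either.
for b in (1, 8, 32):
    print(b, decode_step_latency_ms(b), round(throughput_tokens_per_s(b)))
```

The crossover between these two curves is exactly what system-level inference tuning navigates: batch size, scheduling, and sharding choices all move the fixed and per-sequence terms.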

You can learn more about Neuron in the AWS Neuron documentation.
Key job responsibilities
This role will help lead the effort to build distributed inference support for PyTorch in the Neuron SDK. You will tune these models to ensure the highest performance and maximize their efficiency on customers' AWS Trainium and Inferentia silicon and servers. Strong software development in Python, system-level programming, and ML knowledge are all critical to this role. Our engineers collaborate across compiler, runtime, framework, and hardware teams to optimize machine learning workloads for our global customer base. Working at the intersection of software, hardware, and machine learning systems, you'll bring expertise in low-level optimization, system architecture, and ML model acceleration. In this role, you will:

* Design, develop, and optimize machine learning models and frameworks for deployment on custom ML hardware accelerators.
* Participate in all stages of the ML system development lifecycle, including distributed-computing-based architecture design, implementation, performance profiling, hardware-specific optimizations, testing, and production deployment.
* Build infrastructure to systematically analyze and onboard multiple models with diverse architectures.
* Design and implement high-performance kernels and features for ML operations, leveraging the Neuron architecture and programming models.
* Analyze and optimize system-level performance across multiple generations of Neuron hardware.
* Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.
* Implement optimizations such as fusion, sharding, tiling, and scheduling.
* Conduct comprehensive testing, including unit and end-to-end model testing, with continuous deployment and releases through pipelines.
* Work directly with customers to enable and optimize their ML models on AWS accelerators.
* Collaborate across teams to develop innovative optimization techniques.
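To make one of the optimization families above concrete, here is a minimal, hypothetical sketch of loop tiling for a matrix multiply in plain Python. In a real kernel the tile would be staged into fast on-chip memory before computing; here the blocking only restructures the loops, but the same decomposition drives real kernel design:

```python
# Minimal loop-tiling sketch for matmul (illustrative only: pure
# Python, no accelerator specifics). Tiling processes the output in
# small blocks so each block's working set fits in fast memory.

def matmul_tiled(a, b, tile=2):
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                # One (i0, j0, k0) tile: in a hardware kernel this
                # block would be loaded on-chip once and reused.
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        for kk in range(k0, min(k0 + tile, k)):
                            c[i][j] += a[i][kk] * b[kk][j]
    return c

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
print(matmul_tiled(a, b))  # → [[19.0, 22.0], [43.0, 50.0]]
```

Fusion, sharding, and scheduling make analogous trades: they reorganize where and when work happens without changing the mathematical result.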

A day in the life
You will collaborate with a cross-functional team of applied scientists, system engineers, and product managers to deliver state-of-the-art inference capabilities for generative AI applications. Your work will involve debugging performance issues, optimizing memory usage, and shaping the future of Neuron's inference stack across Amazon and the open source community. As you design and code solutions to help our team drive efficiencies in software architecture, you'll create metrics, implement automation and other improvements, and resolve the root cause of software defects.
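The profiling workflow described above, in miniature: time each stage of a pipeline and rank stages by cost to locate the bottleneck. This is a generic, hypothetical sketch (real work on Neuron would use the SDK's profiling tools rather than wall-clock timers):

```python
# Toy bottleneck finder: run each named stage, record wall-clock
# time, and sort slowest-first. Real performance debugging layers
# hardware counters and per-operator profiles on top of this idea.
import time

def profile(stages):
    """stages: list of (name, zero-arg callable).
    Returns (name, seconds) pairs sorted slowest-first."""
    timings = []
    for name, fn in stages:
        t0 = time.perf_counter()
        fn()
        timings.append((name, time.perf_counter() - t0))
    return sorted(timings, key=lambda t: t[1], reverse=True)

result = profile([
    ("preprocess", lambda: sum(range(1_000))),
    ("matmul", lambda: sum(range(2_000_000))),
])
print(result[0][0])  # the dominant stage
```

Ranking stages this way tells you where optimization effort pays off; the fixes themselves (fusion, better memory layout, overlap of compute and communication) come from the kernel and systems work described above.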

You will also build high-impact solutions to deliver to our large customer base and participate in design discussions, code review, and communicate with internal and external stakeholders. You will work cross-functionally to help drive business decisions with your technical input. You will work in a startup-like development environment, where you’re always working on the most important initiative.


About the team
The Inference Enablement and Acceleration team fosters a builder's culture where experimentation is encouraged and impact is measurable. We emphasize collaboration, technical ownership, and continuous learning. Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members provide one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help our team members develop their engineering expertise, so they feel empowered to take on more complex tasks in the future. Join us to solve some of the most interesting and impactful infrastructure challenges in AI/ML today.

BASIC QUALIFICATIONS

- Bachelor's degree in computer science or equivalent
- 5+ years of non-internship professional software development experience
- 5+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
- Fundamentals of machine learning and LLMs: their architectures, training and inference lifecycles, and hands-on experience with optimizations that improve model execution
- Software development experience in C++ or Python (at least one is required)
- Strong understanding of system performance, memory management, and parallel computing principles.
- Proficiency in debugging, profiling, and implementing best software engineering practices in large-scale systems.

PREFERRED QUALIFICATIONS

- Familiarity with PyTorch, JIT compilation, and AOT tracing.
- Familiarity with CUDA kernels or equivalent ML or low-level kernels
- Experience developing performant kernels with libraries such as CUTLASS or FlashInfer
- Familiarity with tile-level programming syntax and semantics similar to Triton
- Experience with online/offline inference serving with vLLM, SGLang, TensorRT or similar platforms in production environments.
- Deep understanding of computer architecture and operating-systems-level software, plus working knowledge of parallel computing

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies. Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position. These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company’s reputation. Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

Our inclusive culture empowers Amazonians to deliver the best results for our customers.