
NVIDIA

Senior Applied AI Software Engineer, Distributed Inference Systems

Reposted 9 Days Ago
In-Office or Remote
6 Locations
148K-288K
Senior level

NVIDIA Dynamo is an open-source platform focused on efficient, scalable inference for large language and reasoning models in distributed GPU environments. By applying advanced techniques in serving architecture, GPU resource management, and intelligent request handling, Dynamo delivers high-performance AI inference for demanding applications. Our team tackles some of the hardest problems in distributed AI infrastructure, and we are looking for engineers who are excited to build the next generation of scalable AI systems.

As a Senior Applied AI Software Engineer on the Dynamo project, you will address some of the most sophisticated and high-impact challenges in distributed inference, including:

  • Dynamo k8s Serving Platform: Build the Kubernetes deployment and workload management stack that lets Dynamo run inference deployments at scale. Identify bottlenecks and apply optimizations to make full use of hardware capacity.

  • Scalability & Reliability: Develop robust, production-grade inference workload management systems that scale from a handful to thousands of GPUs, supporting a variety of LLM frameworks (e.g., TensorRT-LLM, vLLM, SGLang).

  • Disaggregated Serving: Architect and optimize the separation of prefill (context ingestion) and decode (token generation) phases across distinct GPU clusters to improve throughput and resource utilization (see the first sketch after this list). Contribute to embedding disaggregation for multi-modal models (vision-language, audio-language, and video-language models).

  • Dynamic GPU Scheduling: Develop and refine Planner algorithms for real-time allocation and rebalancing of GPU resources based on fluctuating workloads and system bottlenecks, ensuring peak performance at scale (see the second sketch after this list).

  • Intelligent Routing: Enhance the smart routing system to efficiently direct inference requests to GPU worker replicas with relevant KV cache data, minimizing re-computation and latency for sophisticated, multi-step reasoning tasks (see the third sketch after this list).

  • Distributed KV Cache Management: Innovate in the management and transfer of large KV caches across heterogeneous memory and storage hierarchies, using the NVIDIA Optimized Transfer Library (NIXL) for low-latency, cost-effective data movement.
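
The first sketch below illustrates the prefill/decode split at a high level: a prefill worker ingests the prompt and hands the resulting KV cache to a decode worker, which then generates tokens. This is a minimal illustration under stated assumptions; PrefillWorker, DecodeWorker, and KVHandle are hypothetical names, not the Dynamo API.

```python
# Minimal sketch of disaggregated serving: prefill and decode run in separate
# worker pools, and the prefill worker hands off its KV cache to a decode
# worker. All class and field names here are illustrative, not Dynamo APIs.
from dataclasses import dataclass


@dataclass
class KVHandle:
    """Reference to KV-cache blocks produced by a prefill worker."""
    worker_id: str
    block_ids: list[int]


class PrefillWorker:
    def __init__(self, worker_id: str):
        self.worker_id = worker_id

    def prefill(self, prompt_tokens: list[int]) -> KVHandle:
        # Ingest the full context once. A real implementation would run the
        # model forward pass over the prompt and materialize KV blocks on GPU.
        num_blocks = len(prompt_tokens) // 16 + 1
        return KVHandle(self.worker_id, list(range(num_blocks)))


class DecodeWorker:
    def decode(self, kv: KVHandle, max_new_tokens: int) -> list[int]:
        # Receive (or pull) the KV blocks referenced by `kv`, then generate
        # tokens one step at a time against the transferred cache.
        return [0] * max_new_tokens  # placeholder for sampled token ids


# A request flows prefill -> KV hand-off -> decode, so the two phases can be
# scaled and scheduled independently on separate GPU pools.
kv = PrefillWorker("prefill-0").prefill(prompt_tokens=list(range(512)))
output = DecodeWorker().decode(kv, max_new_tokens=8)
print(len(output))  # -> 8
```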
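
The second sketch shows the kind of rebalancing decision a planner makes: shift GPU replicas toward whichever phase is more backlogged. The thresholds, pool model, and function name are assumptions for illustration, not Dynamo's actual Planner logic.

```python
# Toy rebalancing heuristic: move one replica toward whichever phase is more
# backlogged, keeping the total GPU count fixed. Thresholds are illustrative.
def plan_rebalance(prefill_queue: int, decode_queue: int,
                   prefill_replicas: int, decode_replicas: int,
                   min_replicas: int = 1) -> tuple[int, int]:
    """Return new (prefill, decode) replica counts for the next interval."""
    total = prefill_replicas + decode_replicas
    # Estimate per-replica backlog for each phase.
    prefill_load = prefill_queue / max(prefill_replicas, 1)
    decode_load = decode_queue / max(decode_replicas, 1)

    if prefill_load > 2 * decode_load and decode_replicas > min_replicas:
        prefill_replicas += 1
        decode_replicas -= 1
    elif decode_load > 2 * prefill_load and prefill_replicas > min_replicas:
        decode_replicas += 1
        prefill_replicas -= 1

    assert prefill_replicas + decode_replicas == total
    return prefill_replicas, decode_replicas


# Example: decode is the bottleneck, so one GPU replica shifts to decode.
print(plan_rebalance(prefill_queue=10, decode_queue=200,
                     prefill_replicas=4, decode_replicas=4))  # -> (3, 5)
```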
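
The third sketch shows cache-aware routing in its simplest form: score each worker replica by how much of the request's token prefix it already has cached and route to the best match. The data structures and scoring are illustrative assumptions, not the Dynamo router implementation.

```python
# KV-cache-aware routing sketch: prefer the replica whose cached prefix
# overlaps most with the incoming request, so prefill work is not redone.
def prefix_overlap(cached: list[int], request: list[int]) -> int:
    """Length of the shared token prefix between a cached entry and a request."""
    n = 0
    for a, b in zip(cached, request):
        if a != b:
            break
        n += 1
    return n


def route(request_tokens: list[int],
          worker_caches: dict[str, list[list[int]]]) -> str:
    """Pick the worker whose cache best covers the request's prefix."""
    best_worker, best_score = None, -1
    for worker_id, cached_prefixes in worker_caches.items():
        score = max((prefix_overlap(p, request_tokens) for p in cached_prefixes),
                    default=0)
        if score > best_score:
            best_worker, best_score = worker_id, score
    return best_worker


# Worker "gpu-1" already holds the shared prompt's KV blocks, so the request
# is routed there and only the new suffix needs prefilling.
caches = {"gpu-0": [[9, 9, 9]], "gpu-1": [[1, 2, 3, 4, 5]]}
print(route([1, 2, 3, 4, 5, 6, 7], caches))  # -> "gpu-1"
```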

What you'll be doing:

  • Collaborate on the design and development of the Dynamo Kubernetes stack.

  • Introduce new features to the Dynamo Python SDK and Dynamo Rust Runtime Core Library.

  • Design, implement, and optimize distributed inference components in Rust and Python.

  • Contribute to the development of disaggregated serving for Dynamo-supported inference engines (vLLM, SGLang, TRT-LLM, llama.cpp, mistral.rs).

  • Improve intelligent routing and KV-cache management subsystems.

  • Contribute to open-source repositories, participate in code reviews, and assist with issue triage on GitHub.

  • Work closely with the community to address issues, capture feedback, and evolve the framework’s APIs and architecture.

  • Write clear documentation and contribute to user and developer guides.

What we need to see:

  • BS/MS or higher in computer engineering, computer science, or a related engineering field (or equivalent experience).

  • 5+ years of proven experience in a related field.

  • Strong proficiency in systems programming (Rust and/or C++), with experience in Python for workflow and API development and in Go for developing Kubernetes controllers and operators.

  • Deep understanding of distributed systems, parallel computing, and GPU architectures.

  • Experience with cloud-native deployment and container orchestration (Kubernetes, Docker).

  • Experience with large-scale inference serving, LLMs, or similar high-performance AI workloads.

  • Background with memory management, data transfer optimization, and multi-node orchestration.

  • Familiarity with open-source development workflows (GitHub, continuous integration and continuous deployment).

  • Excellent problem-solving and communication skills.

Ways to stand out from the crowd:

  • Prior contributions to open-source AI inference frameworks (e.g., vLLM, TensorRT-LLM, SGLang).

  • Experience with GPU resource scheduling, cache management, or high-performance networking.

  • Understanding of LLM-specific inference challenges, such as context window scaling and multi-model agentic workflows.

With highly competitive salaries and a comprehensive benefits package, NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to outstanding growth, our special engineering teams are growing fast. If you're a creative and autonomous engineer with a genuine passion for technology, we want to hear from you!

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 148,000 USD - 235,750 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until July 29, 2025.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Top Skills

C++
Docker
Go
Kubernetes
NVIDIA Optimized Transfer Library (NIXL)
Python
Rust
SGLang
TensorRT-LLM
vLLM
