
CoreWeave

Director of Engineering, Inference Services

Reposted 23 Hours Ago
In-Office
2 Locations
206K-303K Annually
Expert/Leader
The Director of Engineering will oversee the development of CoreWeave's Inference Platform, focusing on high-performance GPU inference services, leading engineering teams, and collaborating cross-functionally to enhance model-serving capabilities.
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.
About this Role:

CoreWeave is looking for a Director of Engineering to own and scale our next-generation Inference Platform. In this highly technical, strategic role, you will lead a world-class engineering organization to design, build, and operate the fastest, most cost-efficient, and most reliable GPU inference services in the industry. Your charter spans everything from model-serving runtimes (e.g., Triton, vLLM, TensorRT-LLM) and autoscaling micro-batch schedulers to developer-friendly SDKs and airtight, multi-tenant security - all delivered on CoreWeave’s unique accelerated-compute infrastructure.

What You'll Do:
  • Vision & Roadmap - Define and continuously refine the end-to-end Inference Platform roadmap, prioritizing low-latency, high-throughput model serving and world-class developer UX. Set technical standards for runtime selection, GPU/CPU heterogeneity, quantization, and model-optimization techniques.
  • Platform Architecture - Design and implement a global, Kubernetes-native inference control plane that delivers <50 ms P99 latencies at scale. Build adaptive micro-batching, request-routing, and autoscaling mechanisms that maximize GPU utilization while meeting strict SLAs. Integrate model-optimization pipelines (TensorRT, ONNX Runtime, BetterTransformer, AWQ, etc.) for frictionless deployment.
  • Runtime Optimization - Implement state-of-the-art runtime optimizations, including speculative decoding, KV-cache reuse across batches, early-exit heuristics, and tensor-parallel streaming, to squeeze every microsecond out of LLM inference while retaining accuracy.
  • Operational Excellence - Establish SLO/SLA dashboards, real-time observability, and self-healing mechanisms for thousands of models across multiple regions. Drive cost-performance trade-off tooling that makes it trivial for customers to choose the best hardware tier for each workload.
  • Leadership - Hire, mentor, and grow a diverse team of engineers and managers passionate about large-scale AI inference. Foster a customer-obsessed, metrics-driven engineering culture with crisp design reviews and blameless post-mortems.
  • Collaboration - Partner closely with Product, Orchestration, Networking, and Security teams to deliver a unified CoreWeave experience. Engage directly with flagship customers (internal and external) to gather feedback and shape the roadmap.
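To make the adaptive micro-batching responsibility above concrete, here is a minimal sketch of the idea: a batcher that groups incoming requests until either a size cap or a latency deadline is hit, trading a few milliseconds of queueing delay for much higher GPU utilization. The `MicroBatcher` class, its parameters, and the defaults below are illustrative assumptions, not CoreWeave's actual implementation.

```python
import time
from queue import Queue, Empty

class MicroBatcher:
    """Hypothetical sketch: collect requests into small batches, flushing
    when the batch is full or a latency deadline expires."""

    def __init__(self, max_batch_size=8, max_wait_ms=5):
        self.max_batch_size = max_batch_size
        self.max_wait_s = max_wait_ms / 1000.0
        self.queue = Queue()

    def submit(self, request):
        # Called by request handlers; thread-safe via Queue.
        self.queue.put(request)

    def next_batch(self):
        # Block until at least one request arrives, then start the clock.
        batch = [self.queue.get()]
        deadline = time.monotonic() + self.max_wait_s
        # Keep filling the batch until it is full or the deadline passes.
        while len(batch) < self.max_batch_size:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(self.queue.get(timeout=remaining))
            except Empty:
                break
        return batch
```

In a real serving loop, `next_batch()` would feed a GPU worker; the `max_wait_ms` knob is exactly the latency-versus-throughput trade-off the SLA language above refers to.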
Who You Are: 
  • 10+ years building large-scale distributed systems or cloud services, with 5+ years leading multiple engineering teams.
  • Proven success delivering mission-critical model-serving or real-time data-plane services (e.g., Triton, TorchServe, vLLM, Ray Serve, SageMaker Inference, GCP Vertex Prediction).
  • Deep understanding of GPU/CPU resource isolation, NUMA-aware scheduling, micro-batching, and low-latency networking (gRPC, QUIC, RDMA).
  • Track record of optimizing cost-per-token / cost-per-request and hitting sub-100 ms global P99 latencies.
  • Expertise in Kubernetes, service meshes, and CI/CD for ML workloads; familiarity with Slurm, Kueue, or other schedulers a plus.
  • Hands-on experience with LLM optimization (quantization, compilation, tensor parallelism, speculative decoding) and hardware-aware model compression.
  • Excellent communicator who can translate deep technical concepts into clear business value for C-suite and engineering audiences.
  • Bachelor’s or Master’s in CS, EE, or related field (or equivalent practical experience).
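As context for the P99 latency targets mentioned above: a P99 figure is the 99th percentile of observed request latencies, i.e. the value that 99% of requests come in under. A minimal nearest-rank computation is sketched below; the function name and the choice of the nearest-rank method are assumptions for illustration only.

```python
import math

def p99_latency(samples_ms):
    """Return the 99th-percentile latency (nearest-rank method)."""
    if not samples_ms:
        raise ValueError("no samples")
    ordered = sorted(samples_ms)
    rank = math.ceil(0.99 * len(ordered))  # nearest-rank is 1-indexed
    return ordered[rank - 1]
```

For example, over latencies of 1..100 ms this returns 99 ms; production systems typically compute the same statistic from streaming histograms rather than raw samples.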
Nice-to-have:
  • Experience operating multi-region inference fleets at a cloud provider or hyperscaler.
  • Contributions to open-source inference or MLOps projects.
  • Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) for AI workloads.
  • Background in edge inference, streaming inference, or real-time personalization systems.

The base salary range for this role is $206,000 to $303,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility). 

What We Offer

The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.

In addition to a competitive salary, we offer a variety of benefits to support your needs, including:

  • Medical, dental, and vision insurance - 100% paid for by CoreWeave
  • Company-paid Life Insurance 
  • Voluntary supplemental life insurance 
  • Short and long-term disability insurance 
  • Flexible Spending Account
  • Health Savings Account
  • Tuition Reimbursement 
  • Ability to Participate in Employee Stock Purchase Program (ESPP)
  • Mental Wellness Benefits through Spring Health 
  • Family-Forming support provided by Carrot
  • Paid Parental Leave 
  • Flexible, full-service childcare support with Kinside
  • 401(k) with a generous employer match
  • Flexible PTO
  • Catered lunch each day in our office and data center locations
  • A casual work environment
  • A work culture focused on innovative disruption

Our Workplace

While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.

California Consumer Privacy Act - California applicants only

CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.

As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: [email protected].


Export Control Compliance

This position requires access to export controlled information.  To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency.  CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.

Top Skills

AWQ
BetterTransformer
CI/CD
gRPC
Kubernetes
ONNX Runtime
QUIC
RDMA
TensorRT-LLM
Triton
