
Spotify

Machine Learning Engineering Manager - LLM Serving & Infrastructure

In-Office or Remote
3 Locations
$176K-$252K
Senior level
Lead a machine learning engineering team focused on developing and deploying LLM serving infrastructure, ensuring high performance and integration with recommendation systems.
The Personalization team makes deciding what to play next on Spotify easier and more enjoyable for every listener. We seek to understand the world of music and podcasts better than anyone else so that we can make great recommendations to every individual and keep the world listening. Every day, hundreds of millions of people all over the world use the products we build, which include destinations like Home and Search and original playlists like Discover Weekly and Daylist, and which are at the forefront of innovations like AI DJ and AI Playlists.

Generative AI is transforming Spotify’s product capabilities and technical architecture. Generative recommender systems, agent frameworks, and LLMs present huge opportunities for our products to serve more user needs and use cases and to unlock a richer understanding of our content and users. This ML Manager will focus on serving a Unified Recommender model, based on open-weight LLM and transformer technology. You will collaborate with a diverse team to establish and implement the machine learning plan for the product domain, developing innovative recommendations and agent interactions. You will work as a technology leader, managing a team and influencing peers. You will collaborate with internal customers and platform teams, with the opportunity to profoundly shape the direction of the entire Spotify experience.

Join us and you’ll keep millions of users listening and engaging with our platform every day!

What You’ll Do

  • Lead a high-performing engineering team to design, build, and deploy a high-scale, low-latency LLM Serving Infrastructure.
  • Drive the implementation of a unified serving layer to support multiple LLM models and inference types (batch, offline eval flows, and real-time/streaming); see the sketch after this list.
  • Lead all aspects of the development of the Model Registry for deploying, versioning, and running LLMs across production environments.
  • Ensure successful integration with the core Personalization and Recommendation systems to deliver LLM-powered features.
  • Define and champion standardized technical interfaces and protocols for efficient model deployment and scaling.
  • Establish and monitor the serving infrastructure's performance, cost, and reliability, including load balancing, autoscaling, and failure recovery.
  • Collaborate closely with data science, machine learning research, and feature teams (Autoplay, Home, Search, etc.) to drive the active adoption of the serving infrastructure.
  • Scale up the serving architecture to handle hundreds of millions of users and high-volume inference requests for internal domain-specific LLMs.
  • Drive Latency and Cost Optimization: partner with SRE and ML teams to implement techniques like quantization, pruning, and efficient batching to minimize serving latency and cloud compute costs.
  • Develop Observability and Monitoring: build dashboards and alerting for service health, tracing, A/B test traffic, and latency trends to ensure adherence to defined SLAs.
  • Contribute to Core LPM Serving: focus on the technical strategy for deploying and maintaining the core Large Personalization Model (LPM).
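To make the unified serving layer bullet above concrete, here is a minimal, hypothetical Python sketch of a single interface covering both batch and real-time inference over one set of registered model backends. The class names, the EchoBackend stand-in, and the "unified-recsys-llm" model name are illustrative assumptions, not Spotify's actual systems.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class InferenceRequest:
    request_id: str
    prompt: str


@dataclass
class InferenceResponse:
    request_id: str
    output: str


class LLMBackend(ABC):
    """One deployed model version sitting behind the unified serving layer."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        ...


class EchoBackend(LLMBackend):
    """Stand-in backend so this sketch runs without a real model server."""

    def generate(self, prompt: str) -> str:
        return f"recommendation for: {prompt}"


class UnifiedServingLayer:
    """Routes batch and real-time requests to registered model backends."""

    def __init__(self) -> None:
        self._backends: dict[str, LLMBackend] = {}

    def register(self, model_name: str, backend: LLMBackend) -> None:
        # In production this mapping would be resolved via the Model Registry.
        self._backends[model_name] = backend

    def infer_realtime(self, model_name: str, request: InferenceRequest) -> InferenceResponse:
        # Low-latency path: one request in, one response out.
        backend = self._backends[model_name]
        return InferenceResponse(request.request_id, backend.generate(request.prompt))

    def infer_batch(self, model_name: str, requests: Iterable[InferenceRequest]) -> List[InferenceResponse]:
        # Batch and offline-eval flows reuse the same interface and backends.
        return [self.infer_realtime(model_name, r) for r in requests]


if __name__ == "__main__":
    serving = UnifiedServingLayer()
    serving.register("unified-recsys-llm", EchoBackend())
    print(serving.infer_realtime(
        "unified-recsys-llm",
        InferenceRequest(request_id="r1", prompt="late night chill playlist"),
    ))
```

In a real deployment, backend registration would be driven by the Model Registry described above, and infer_batch would dispatch to an offline engine rather than looping over the real-time path.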

Who You Are

  • 5+ years of experience in software or machine learning engineering, with 2+ years of experience managing an engineering team.
  • Hands-on with ML Engineering: you have deep expertise in building, scaling, and governing high-quality ML systems and datasets, including defining data schemas, handling data lineage, and implementing data validation pipelines (e.g., HuggingFace datasets library or similar internal systems).
  • Deep technical background in building and operating large-scale, high-velocity Machine Learning/MLOps infrastructure, ideally for personalization, recommendation, or Large Language Models (LLMs).
  • Proven track record of driving complex projects involving multiple partners and federated contribution models ("one source of truth, many contributors").
  • Expertise in designing robust, loosely coupled systems with clean APIs and clear separation of concerns (e.g., distinguishing between fast dev-time tools and rigorous production-like systems).
  • Experience integrating evaluation and testing into continuous integration/continuous deployment (CI/CD) pipelines to enable rapid 'fork-evaluate-merge' developer workflows.
  • Solid understanding of experiment tracking and results visualization platforms (e.g., MLflow, custom UIs); a minimal sketch follows this list.
  • A pragmatic leader who can balance the need for speed with progressive rigor and production fidelity.
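As one illustration of the experiment-tracking bullet above, here is a minimal MLflow sketch that logs a candidate serving configuration and its offline evaluation metrics so runs can be compared side by side. The experiment name, parameters, and metric values are invented for the example and are not real Spotify numbers.

```python
import mlflow

# Assumed local tracking setup for the sketch; a real deployment would point
# at a shared MLflow tracking server instead of a local ./mlruns directory.
mlflow.set_tracking_uri("file:./mlruns")
mlflow.set_experiment("llm-serving-offline-eval")

# Hypothetical configuration and results for one candidate serving setup.
candidate = {"model": "unified-recsys-llm", "quantization": "int8", "max_batch_size": 32}
metrics = {"p95_latency_ms": 143.0, "recall_at_10": 0.41, "cost_per_1k_requests_usd": 0.019}

with mlflow.start_run(run_name="int8-batch32"):
    mlflow.log_params(candidate)   # configuration under test
    mlflow.log_metrics(metrics)    # offline eval results, comparable across runs
```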

Where You’ll Be

  • This role is based in New York or Boston.
  • We offer you the flexibility to work where you work best! There will be some in-person meetings, but the role still allows the flexibility to work from home.

The United States base range for this position is $176,166-$251,666, plus equity. The benefits available for this position include health insurance, six-month paid parental leave, a 401(k) retirement plan, a monthly meal allowance, 23 paid days off, 13 paid flexible holidays, and paid sick leave. These ranges may be modified in the future.

Spotify is an equal opportunity employer. You are welcome at Spotify for who you are, no matter where you come from, what you look like, or what’s playing in your headphones. Our platform is for everyone, and so is our workplace. The more voices we have represented and amplified in our business, the more we will all thrive, contribute, and be forward-thinking! So bring us your personal experience, your perspectives, and your background. It’s in our differences that we will find the power to keep revolutionizing the way the world listens.

At Spotify, we are passionate about inclusivity and making sure our entire recruitment process is accessible to everyone. You can request reasonable accommodations during the interview process, and we will help with whatever you need. If you need accommodations at any stage of the application or interview process, please let us know - we’re here to support you in any way we can.

Spotify transformed music listening forever when we launched in 2008. Our mission is to unlock the potential of human creativity by giving a million creative artists the opportunity to live off their art and billions of fans the chance to enjoy and be passionate about these creators. Everything we do is driven by our love for music and podcasting. Today, we are the world’s most popular audio streaming subscription service.

Top Skills

Hugging Face
Machine Learning
MLflow
MLOps


