At Liquid, we’re not just building AI models—we’re redefining the architecture of intelligence itself. Spun out of MIT, our mission is to build efficient AI systems at every scale. Our Liquid Foundation Models (LFMs) operate where others can’t: on-device, at the edge, under real-time constraints. We’re not iterating on old ideas—we’re architecting what comes next.
We believe great talent powers great technology. The Liquid team is a community of world-class engineers, researchers, and builders creating the next generation of AI. Whether you're helping shape model architectures, scaling our dev platforms, or enabling enterprise deployments—your work will directly shape the frontier of intelligent systems.
This Role Is For You If:
You have hands-on experience optimizing and deploying local LLMs - running models like Llama, Mistral, or other open-source LLMs locally through tools like vLLM, Ollama, or LM Studio (see the sketch after this list)
You're passionate about customizing ML models to solve real customer problems - from fine-tuning foundation models to optimizing them for specific use cases, you know how to make models work for unique requirements
You have a knack for lightweight ML deployment and can architect solutions that work efficiently in resource-constrained environments - whether that's optimizing inference on CPUs, working with limited memory budgets, or deploying to edge devices
You have a sharp eye for data quality and know what makes data effective - able to spot ineffective patterns in sample data, help design targeted synthetic datasets, and craft prompts that unlock the full potential of foundation models for specific use cases
You have customized an existing product for a customer
You're versatile across deployment scenarios - whether it's containerized cloud deployments, on-premise installations with strict security requirements, or optimized edge inference, you can make models work anywhere
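For a concrete flavor of the first bullet, here is a minimal sketch of querying a locally served open-source LLM through an OpenAI-compatible endpoint, which both vLLM and Ollama expose. The host, port, model name, and prompt below are illustrative placeholders, not Liquid-specific details.

    # Minimal sketch: chat with a locally served open-source model through an
    # OpenAI-compatible API. vLLM serves one at http://localhost:8000/v1 by default;
    # Ollama exposes one at http://localhost:11434/v1. Values here are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",   # point at a local vLLM server, or swap to Ollama's port
        api_key="not-needed-for-local",        # local servers typically ignore the key
    )

    response = client.chat.completions.create(
        model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder open-source model
        messages=[{"role": "user", "content": "Summarize this deployment log in one sentence."}],
        max_tokens=128,
    )
    print(response.choices[0].message.content)

The same client code works against either serving stack by changing the base_url and model name, which is much of what day-to-day local-LLM experimentation looks like.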
What You'll Do:
Own the complete deployment journey - from model customization to serving infrastructure, ensuring our solutions work flawlessly in varied customer environments
Deploy AI systems to solve use cases others cannot - implementing solutions that push beyond what base LFMs can deliver and redefine what's possible with our technology
Work alongside our core engineering team to leverage and enhance Liquid's powerful infrastructure toolkit
What You'll Gain:
The ability to shape how the world's most influential organizations adopt and deploy LFMs - you'll be hands-on, building solutions for customers who are reimagining entire industries
Own the complete journey of delivering ML solutions that matter - from model customization to deployment architecture to seeing your work drive real customer impact
Spun out of MIT CSAIL, we’re a foundation model company headquartered in Boston. Our mission is to build capable and efficient general-purpose AI systems at every scale—from phones and vehicles to enterprise servers and embedded chips. Our models are designed to run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We’re already partnering with global enterprises across consumer electronics, automotive, life sciences, and financial services. And we’re just getting started.
Liquid AI Cambridge, Massachusetts, USA Office: 314 Main St, Cambridge, Massachusetts, United States, 02142