Your impact at LILA
We’re building a talent-dense, high-agency research team to develop the next generation of learning systems and reasoning algorithms for agentic LLMs. Our work sits at the intersection of large language models, post-training, and scientific reasoning, with the goal of enabling systems that learn from experience, reason effectively, and improve through interaction.
Scientific domains present a distinct set of challenges that make this problem uniquely hard. Feedback is sparse and delayed — experiments take days or weeks, not milliseconds. Ground truth is expensive or contested. Distribution shift is structural, as instruments, techniques, and knowledge bases evolve continuously. The hypothesis space is vast and the reward signal is thin. Existing benchmarks do not capture these nuances. The goal is to build systems that can operate effectively in this scientific regime.
This role spans a few complementary directions. Candidates are expected to bring deep expertise in one (or more) of the following areas. If you have cross-track expertise, please select the area you align with most closely; our interview process will be tailored to verifying your chosen expertise area.
Expertise Area 1 — Agentic system building
Focus: Build systems that autonomously propose, execute, and verify scientific hypotheses over long time horizons.
- Create and analyze long-running auto-research systems that propose and verify hypotheses
- Design planning frameworks for agentic systems operating over long, sparse feedback loops
- Design memory architectures that allow agents to build and retrieve structured knowledge over time
- Explore algorithms in recursive self-improvement, multi-agent coordination, and continual learning
Expertise Area 2 — Distillation
Focus: Translate strong inference-time behaviors and reasoning traces into efficient, trainable models.
- Develop distillation strategies from large or ensemble models into deployable systems
- Research methods for self-improvement, including iterative self-distillation and critique loops
- Investigate how to preserve generalization and reduce catastrophic forgetting through the distillation process
Expertise Area 3 — Scalable experience generation
Focus: Develop inference-time algorithms and synthetic data pipelines that generate high-quality training signal for scientific reasoning.
- Design and benchmark inference-time search, sampling, and verification strategies
- Propose new techniques in synthetic environment creation and curriculum learning
- Develop synthetic data generation strategies that capture high-quality scientific reasoning for agentic model training
- Measure the end-to-end impact of inference-time improvements on real scientific tasks
What you’ll need to succeed:
- An advanced degree in computer science, machine learning, or a related field, or comparable experience
- Strong foundation in LLMs and empirical research
- Experience designing and executing rigorous ML experiments, including benchmarking and ablations
- Experience working with large-scale training or evaluation pipelines
- Ability to define and pursue research directions in open-ended, rapidly evolving spaces
- Strong collaboration and communication skills across research and engineering teams
Bonus points for:
- Experience with synthetic data generation, distillation, or self-improvement loops
- Familiarity with reinforcement learning (e.g., RLHF, on-policy methods)
- Experience with planning, search, or decision-making systems at scale
- Experience in building agentic systems with tool use, or multi-agent workflows
- Background in program synthesis, coding benchmarks, or long-horizon tasks
- Experience building evaluation frameworks or large-scale benchmarks
Scientific rigor & persistence:
- You take a principled approach to experimentation, with careful baselines, ablations, and evaluation design
- You are motivated by understanding why systems work, not just improving metrics
- You prioritize clarity, reproducibility, and intellectual honesty in research
- You are comfortable working through long, nonlinear iteration cycles
- You operate effectively in ambiguous, fast-evolving research environments
Compensation
We offer competitive base compensation with bonus potential and generous early-stage equity. Your final offer will reflect your background, expertise, and expected impact.
U.S. Benefits. Full-time U.S. employees receive a comprehensive benefits program including medical, dental, and vision coverage; employer-paid life and disability insurance; flexible time off with generous company-wide holidays; paid parental leave; an educational assistance program; commuter benefits, including bike share memberships for office-based employees; and a company-subsidized lunch program.
International Benefits. Full-time employees outside the U.S. receive a comprehensive benefits program tailored to their region. USD salary ranges apply only to U.S.-based positions; international salaries are set according to local market rates.
About LILA
Lila Sciences is building Scientific Superintelligence™ to solve humankind's greatest challenges. We believe science is the most inspiring frontier for AI. Rather than hard-coding expert knowledge into tools, LILA builds systems that can learn for themselves.
LILA combines advanced AI models with proprietary AI Science Factory™ instruments into an operating system for science that executes the entire scientific method autonomously, accelerating discovery at unprecedented speed, scale, and impact across medicine, materials, and energy. Learn more at www.lila.ai.
Guided by our core values of truth, trust, curiosity, grit, and velocity, we move with startup speed while tackling problems of historic importance. If this sounds like an environment you'd love to work in, even if you don't meet every qualification listed above, we encourage you to apply.
We’re All In
Lila Sciences is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
Information you provide during your application process will be handled in accordance with our Candidate Privacy Policy.
A Note to Agencies
Lila Sciences does not accept unsolicited resumes from any source other than candidates. The submission of unsolicited resumes by recruitment or staffing agencies to Lila Sciences or its employees is strictly prohibited unless contacted directly by Lila Sciences' internal Talent Acquisition team. Any resume submitted by an agency in the absence of a signed agreement will automatically become the property of Lila Sciences, and Lila Sciences will not owe any referral or other fees with respect thereto.