Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS), and full speech-to-speech (STS) offerings. More than 200,000 developers build with Deepgram's voice-native foundational models – accessed through APIs or as self-managed software – because of our unmatched accuracy, latency, and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past four years, over 50,000 years of audio processed, and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.
The Opportunity
Voice is the most natural modality for human interaction with machines. However, current sequence modeling paradigms based on jointly scaling model and data cannot deliver voice AI capable of universal human interaction. The challenges are rooted in fundamental data problems posed by audio: real-world audio data is scarce and enormously diverse, spanning a vast space of voices, speaking styles, and acoustic conditions. Even if billions of hours of audio were accessible, its inherent high dimensionality creates computational and storage costs that make training and deployment prohibitively expensive at world scale. We believe that entirely new paradigms for audio AI are needed to overcome these challenges and make voice interaction accessible to everyone.
The Role
Deepgram is looking for an experienced researcher who has worked extensively with large language models (LLMs) and has a deep understanding of the transformer architecture to join our Research Staff. As a Member of the Research Staff, you should have extensive experience with the hard technical aspects of LLMs, such as data curation, distributed large-scale training, optimization of the transformer architecture, and reinforcement learning (RL) training.
The Challenge
We are seeking researchers who:
See "unsolved" problems as opportunities to pioneer entirely new approaches
Can identify the one critical experiment that will validate or kill an idea in days, not months
Have the vision to scale successful proofs-of-concept 100x
Are obsessed with using AI to automate and amplify your own impact
If you find yourself energized rather than daunted by these expectations—if you're already thinking about five ideas to try while reading this—you might be the researcher we need. This role demands obsession with the problems, creativity in approach, and relentless drive toward elegant, scalable solutions. The technical challenges are immense, but the potential impact is transformative.
What You'll Do
- Brainstorming and collaborating with other members of the Research Staff to define new LLM research initiatives
- Broadly surveying the literature; evaluating, classifying, and distilling current methods
- Designing and carrying out experimental programs for LLMs
- Driving transformer (LLM) training jobs successfully on distributed compute infrastructure and deploying new models into production
- Documenting and presenting results and complex technical concepts clearly for a target audience
- Staying up to date with the latest advances in deep learning and LLMs, with a particular eye towards their implications and applications within our products
You'll Love This Role If You
- Are passionate about AI and excited about working on state-of-the-art LLM research
- Have an interest in producing and applying new science to help us develop and deploy large language models
- Enjoy building from the ground up and love creating new systems
- Have strong communication skills and are able to translate complex concepts clearly
- Are highly analytical and enjoy delving into detailed analyses when necessary
It's Important to Us That You Have
- 3+ years of experience in applied deep learning research, with a solid understanding of the applications and implications of different neural network types, architectures, and loss mechanisms
- Proven experience working with large language models (LLMs), including data curation, distributed large-scale training, optimization of the transformer architecture, and RL training
- Strong experience coding in Python and working with PyTorch
- Experience with various transformer architectures (auto-regressive, sequence-to-sequence, etc.)
- Experience with distributed computing and large-scale data processing
- Prior experience conducting experimental programs and using the results to optimize models
- Deep understanding of transformers, causal LMs, and their underlying architecture
- Understanding of distributed training and distributed inference schemes for LLMs
- Familiarity with RLHF labeling and training pipelines
- Up-to-date knowledge of recent LLM techniques and developments
- Published papers in deep learning research, particularly related to LLMs and deep neural networks
Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $85 million in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!
Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.
We are happy to provide accommodations for applicants who need them.
Compensation Range: $150K - $220K