
FAR.AI

Infrastructure Engineer

Posted 22 Days Ago
Remote
2 Locations
100K-175K Annually
Mid level
About FAR.AI

FAR.AI is a non-profit AI research institute dedicated to ensuring advanced AI is safe and beneficial for everyone. Our mission is to facilitate breakthrough AI safety research, advance global understanding of AI risks and solutions, and foster a coordinated global response.

Since our founding in July 2022, we've grown quickly to 30+ staff, produced over 40 influential academic papers, and established leading AI safety events for research and international cooperation. Our work is recognized globally, with publications at premier venues such as NeurIPS, ICML, and ICLR, and coverage in the Financial Times, Nature News, and MIT Technology Review.

We drive practical change through red-teaming with frontier model developers and government institutes. Additionally, we help steer and grow the AI safety field through developing research roadmaps with renowned researchers such as Yoshua Bengio, running FAR.Labs, an AI safety-focused co-working space in Berkeley housing 40 members, and supporting the community through targeted grants to technical researchers.

About FAR.Research

Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing a high potential for impact. Unlike other AI safety labs that take a bet on a single research direction, FAR.AI aims to pursue a diverse portfolio of projects.

Our current focus areas include:

  • Investigating deception in AI (e.g. lie detectors can induce either honesty or evasion)

  • Building a science of robustness (e.g. finding vulnerabilities in superhuman Go AIs)

  • Advancing model evaluation techniques (e.g. inverse scaling, codebook features, and learned planning)

We also put our research into practice through red-teaming engagements with frontier AI developers, and collaborations with government institutes.

About the Role

We’re seeking an Infrastructure Engineer to develop and manage scalable infrastructure to support our research workloads. You will own our existing Kubernetes cluster, deployed on top of bare-metal H100 cloud instances. You will oversee and enhance the cluster to 1) support new workloads, such as multi-node LoRA training; 2) serve new users, as we double the size of our research team in the next twelve to eighteen months; and 3) add new features, such as fine-grained experiment compute usage tracking.
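To give a flavor of the usage-tracking feature mentioned above, here is a minimal sketch. This is not FAR.AI's actual tooling; the record fields and function name are hypothetical, assuming per-pod usage records that carry an experiment label, a GPU count, and a runtime:

```python
from collections import defaultdict


def gpu_hours_by_experiment(pod_records: list[dict]) -> dict[str, float]:
    """Aggregate GPU-hours per experiment label from pod usage records.

    Each record is assumed (illustratively) to contain:
      - "experiment": experiment label attached to the pod
      - "gpus": number of GPUs the pod requested
      - "runtime_s": pod runtime in seconds
    """
    totals: dict[str, float] = defaultdict(float)
    for rec in pod_records:
        # GPU-hours = GPUs held * hours held
        totals[rec["experiment"]] += rec["gpus"] * rec["runtime_s"] / 3600.0
    return dict(totals)
```

In a real cluster, the records would come from the Kubernetes API or a metrics store rather than a hand-built list, but the aggregation step would look much the same.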

You will be the point-person for cluster-related work. You will work on the Foundations team alongside experienced engineers, including those who built and designed the cluster, who can provide guidance and backup. However, as our first dedicated infrastructure hire, you will need to work autonomously, design solutions to varied and complex problems, and communicate with researchers who are technically skilled but less knowledgeable about our cluster and infrastructure.

This is an opportunity to build the technical foundations of the largest independent AI safety research institute, with one of the most varied research agendas. You will be working directly with both the Foundations team and researchers across the organization to enable bleeding-edge research workloads across our research portfolio.

Responsibilities

Build and Maintain

You will deliver a scalable and easy-to-use compute cluster to support impactful research by:

  • Empowering the research team to solve their own day-to-day compute problems, such as debugging simple issues and streamlining recurring tasks (e.g. running batch experiments, launching an interactive devbox, etc.).

  • Maintaining and developing in-cluster services, such as backups, experiment tracking, and our in-house LLM-based cluster support bot.

  • Maintaining adequate cluster stability to avoid interfering with research workloads (currently >95% uptime outside of planned maintenance windows).

  • Maintaining situational awareness of the cloud GPU market and assisting leadership with vendor comparisons to ensure we are using the most effective compute platforms.

Support Security

We often collaborate with partners with stringent security requirements (e.g. governments, frontier developers) and handle sensitive information (e.g. non-public exploits, CBRN datasets). You will implement security measures by:

  • Securing the cluster against insider threats (architecting it with adequate isolation to provide data confidentiality and integrity for sensitive workloads) and external threats (by minimizing the attack surface and ensuring security updates are promptly installed).

  • Making secure workflows the default, e.g. streamlining the deployment of internal web dashboards behind an OAuth reverse proxy.

  • Championing security across the FAR.AI team, including maintaining and extending our mobile device management (MDM) system.

Bleeding-edge Workloads

You will work with the Foundations team and specific research teams to support novel ML workloads (e.g. fine-tuning a new open-weight model release) by:

  • Architecting our Kubernetes cluster to flexibly support novel workloads.

  • Assisting projects with bespoke requirements, designing and implementing effective infrastructure solutions, and sharing your infrastructure wisdom with ML researchers.

  • Improving observability over cluster resources and GPU utilization to allow us to rapidly diagnose and work around hardware issues or software bugs that may only arise on novel workloads.
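As a small sketch of what the observability work above might involve, a collector could parse per-GPU utilization from `nvidia-smi` CSV output. The `--query-gpu` and `--format=csv` flags are standard `nvidia-smi` options; the helper names here are illustrative, not part of any existing FAR.AI tooling:

```python
import subprocess


def parse_gpu_stats(csv_text: str) -> list[dict]:
    """Parse `nvidia-smi` CSV output: index, utilization (%), memory used (MiB)."""
    stats = []
    for line in csv_text.strip().splitlines():
        index, util, mem = (field.strip() for field in line.split(","))
        stats.append({
            "gpu": int(index),
            "util_pct": int(util),
            "mem_used_mib": int(mem),
        })
    return stats


def query_gpu_stats() -> list[dict]:
    """Query live per-GPU stats; requires an NVIDIA driver on the host."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_stats(out)
```

In practice this role would more likely wire up an exporter and dashboards than shell out to `nvidia-smi`, but the same per-GPU signals are what make hardware issues quick to diagnose.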

About You

It is essential that you:

  • Have Kubernetes or other system administration experience.

  • Have the curiosity and willingness to rapidly learn the needs of a new space.

  • Are self-directed and comfortable with ambiguous or rapidly evolving requirements.

  • Are willing to be on-call during waking hours for cluster issues ahead of major deadlines (for a few weeks a quarter).

  • Are interested in improving our security posture through identifying, implementing and administering security policies.

It is preferable that you:

  • Have experience supporting ML/AI workloads.

  • Have previously worked in research environments or startups.

  • Are experienced in administering compute or GPU clusters.

  • Are able to adopt a security mindset.

  • Are willing to be part of an eventual on-call rotation, if required.

Example Projects

  • Configure the cluster and user-space development environments to support InfiniBand nodes for high-performance multi-node training.

  • Improve our default devbox K8s pod template to incorporate best-practice workflows for our researchers.

  • Roll out a new mobile device management system to ensure corporate devices meet our security requirements.

  • Streamline onboarding to the cluster for new starters (possibly in different timezones), and candidates on time-limited work trials.

  • Be “holder of the keys”, managing permissions and access control for FAR.AI’s team members to technical systems, including streamlining/automating (e.g. via SAML, SCIM) where appropriate.

  • Analyze storage patterns and propose infrastructure improvements for backups, disaster recovery, and usability.

Logistics

You will be a full-time employee of FAR.AI, a 501(c)(3) research non-profit.

  • Location: Both remote and in-person (Berkeley, CA) are possible, though at least 2 hours of overlap with Berkeley working hours is required. We sponsor visas for in-person employees in CA, and can also hire remotely in most countries.

  • Hours: Full-time (40 hours/week).

  • Compensation: $100,000-$175,000/year depending on experience and location. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley.

  • Application process: A programming assessment, a short screening call, two 1-hour interviews, and a 1-week paid work trial.

If you have any questions about the role, please reach out at [email protected]. If you don't have questions, the best way to ensure a proper review of your skills and qualifications is by applying directly via the application form. Please don't email us to share your resume (it won't have any impact on our decision). Thank you!

Top Skills

Cloud Infrastructure
GPU Clusters
Kubernetes
Machine Learning Frameworks
