Senior/Staff Infrastructure Engineer

Overview

Build and manage systems for a large fleet of GPU servers, with a strong emphasis on automation, AI integration, and operational efficiency.

You are a hands-on engineer who builds the software and processes that keep a large fleet of GPU servers healthy and productive. You write systems and tooling for managing thousands of servers, including provisioning, health monitoring, error detection, and recovery — and when something breaks that automation can’t fix, you drive resolution with partners.

Key responsibilities

  • Build and maintain a Python fleet-tracking system that manages the full lifecycle of servers: contracting and procurement, target use, pricing, availability, health, RMAs, and more
  • Build server management tooling that automates provisioning, health checks, GPU diagnostics, recovery, and alerting
  • Create and maintain metrics, dashboards, and alerting for hardware health across the fleet (GPU errors, disk failures, network issues, thermals)
  • Make heavy use of AI to build tools and to automate alerting and recovery
  • Implement and enforce OS-level security: hardening baselines, SELinux/AppArmor policies, SSH key management, vulnerability scanning, and compliance automation
  • Manage and optimize distributed and local storage systems supporting model weights, checkpoints, and ephemeral scratch: NVMe arrays, NFS, parallel file systems, and object storage
  • Tune Linux systems for AI workloads: kernel parameters, NUMA topology, CPU pinning, hugepages, I/O schedulers, and GPU driver stack optimization (NVIDIA drivers, CUDA, container runtimes)
  • Develop a suite of automated error detection and recovery processes
  • Work with partners to solve technical issues
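To give a flavor of the tooling described above, here is a minimal sketch of one piece: a GPU health check that parses `nvidia-smi` telemetry and flags unhealthy devices. The thresholds, field selection, and function names are illustrative assumptions, not fal's actual implementation; a production system would pull richer telemetry (e.g. via DCGM) and feed alerting and automated recovery.

```python
import csv
import io
import subprocess

# Illustrative thresholds; a real fleet would tune these per hardware SKU.
TEMP_LIMIT_C = 85
ECC_ERROR_LIMIT = 0

QUERY_FIELDS = "index,temperature.gpu,ecc.errors.uncorrected.volatile.total"

def parse_gpu_status(csv_text: str) -> list[dict]:
    """Parse `nvidia-smi --query-gpu` CSV output into per-GPU records."""
    gpus = []
    for row in csv.reader(io.StringIO(csv_text.strip())):
        index, temp, ecc = (field.strip() for field in row)
        gpus.append({
            "index": int(index),
            "temp_c": int(temp),
            # nvidia-smi prints "[N/A]" when ECC is disabled; treat as 0.
            "ecc_uncorrected": int(ecc) if ecc.isdigit() else 0,
        })
    return gpus

def unhealthy_gpus(gpus: list[dict]) -> list[dict]:
    """Return GPUs breaching temperature or uncorrected-ECC thresholds."""
    return [g for g in gpus
            if g["temp_c"] > TEMP_LIMIT_C
            or g["ecc_uncorrected"] > ECC_ERROR_LIMIT]

def check_host() -> list[dict]:
    """Query local GPUs and return the unhealthy ones (needs nvidia-smi)."""
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY_FIELDS}",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return unhealthy_gpus(parse_gpu_status(out))

if __name__ == "__main__":
    # Offline demo with sample CSV: GPU 1 is too hot, GPU 2 has ECC errors.
    sample = "0, 42, 0\n1, 91, 0\n2, 55, 3\n"
    print(unhealthy_gpus(parse_gpu_status(sample)))
```

A fleet-scale version would run such checks across thousands of hosts, export the results as metrics, and trigger automated recovery (reset, drain, RMA ticket) when a device stays unhealthy.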

Requirements

  • 5+ years experience managing bare-metal and VM server fleets at scale (100+ nodes)
  • Strong software engineering skills in Python; you write production tooling, not scripts
  • Deep Linux systems knowledge: boot process, kernel tuning, networking, storage, systemd, cgroups, namespaces, performance profiling
  • Strong experience with configuration management and infrastructure-as-code: Ansible, Terraform, cloud-init
  • Solid understanding of storage technologies: LVM, RAID, NVMe, NFS, Lustre or GPFS, and Linux I/O stack tuning
  • Familiarity with hardware diagnostics and failure modes (GPUs, NVMe, NICs, memory)
  • Experience building internal tools or dashboards for infrastructure visibility
  • Excellent communication and ability to drive technical decisions across teams
  • Self-starter who executes quickly, takes ownership, and constantly seeks improvement

Nice to have

  • Familiarity with network configuration and diagnostics (VLAN, VXLAN, ECMP, BGP, tcpdump)
  • Experience with NVIDIA GPU infrastructure: driver management, health monitoring, DCGM, NVLink/NVSwitch diagnostics, RDMA, InfiniBand/RoCEv2
  • Experience with AMD GPUs
  • Experience with bare metal and VM provisioning (PXE/iPXE, Kickstart, libvirt, Qemu/KVM)
  • Experience with compliance frameworks relevant to cloud providers (SOC 2, ISO 27001)

Compensation

  • $180,000–$250,000 per year, plus equity and benefits

Location

  • San Francisco, CA

What we offer at fal

  • Interesting and challenging work
  • A lot of learning and growth opportunities
  • Office in downtown San Francisco
  • Visa sponsorship and relocation assistance to San Francisco
  • Health, dental, and vision insurance (US)
  • Regular team events and offsites

About fal

In the modern era, content is shifting from being human-made and algorithm-distributed to being generated on demand - personalized in real time for every audience, context, and moment. We’re Fal, and we’re building the infrastructure powering this transformation. Our platform is the first of its kind: a generative media stack for developers that enables real-time, AI-generated content across image, video, and audio.

At the core is our serverless Python runtime, purpose-built to run massive ML models across thousands of GPUs with unmatched speed and efficiency. Applications built on Fal already serve millions of users - and we’re just getting started. Founded in 2021, we’re scaling fast and backed by top investors including a16z, Bessemer, and Kindred. If you’re an ambitious builder who wants to define the future of AI and media, we’d love to meet you.
