Key Responsibilities:
Help fal maintain its frontier position on model performance for generative media models.
Design and implement novel approaches to model serving architecture on top of our in-house inference engine, focusing on maximizing throughput while minimizing latency and resource usage.
Develop performance monitoring and profiling tools to identify bottlenecks and optimization opportunities.
Work closely with our Applied ML team and our customers (frontier labs in the media space) to make sure their workloads benefit from our accelerator.
Requirements:
Strong foundation in systems programming with expertise in identifying and fixing bottlenecks.
Deep understanding of the cutting-edge ML infrastructure stack (anything from PyTorch, TensorRT, and TransformerEngine to Nsight), including model compilation, quantization, and serving architectures. Ideally, you follow developments across these systems closely as they happen.
A fundamental understanding of the underlying hardware (Nvidia-based systems at the moment), and the ability to go deeper into the stack when necessary to fix bottlenecks (e.g., custom GEMM kernels written with CUTLASS for common shapes).
Proficiency in Triton, or willingness to learn it backed by comparable experience in lower-level accelerator programming.
New frontier: multi-dimensional model parallelism (combining multiple parallelism techniques, such as tensor parallelism with context parallelism / sequence parallelism).
Familiarity with the internals of Ring Attention, FlashAttention-3 (FA3), and FusedMLP implementations.
Benefits:
Interesting and challenging work
Competitive salary and equity
Employee-friendly equity terms (early exercise, extended exercise)
A lot of learning and growth opportunities
We offer visa sponsorship and will help you relocate to San Francisco.
Health, dental, and vision insurance (US)
Regular team events and offsites
$180,000 - $250,000 + equity + comprehensive benefits package
We are currently hiring in downtown San Francisco.
In the modern era, content is shifting from being human-made and algorithm-distributed to being generated on demand - personalized in real time for every audience, context, and moment. We're fal, and we're building the infrastructure powering this transformation.

Our platform is the first of its kind: a generative media stack for developers that enables real-time, AI-generated content across image, video, and audio. At the core is our serverless Python runtime, purpose-built to run massive ML models across thousands of GPUs with unmatched speed and efficiency. Applications built on fal already serve millions of users - and we're just getting started.

Founded in 2021, we're scaling fast and backed by top investors including a16z, Bessemer, and Kindred. If you're an ambitious builder who wants to define the future of AI and media, we'd love to meet you.