Build in-house tools to support diverse model training methods at scale, ensuring the efficiency and quality of AI models for clients.
ABOUT BASETEN
Baseten powers mission-critical inference for the world's most dynamic AI companies, like Cursor, Notion, OpenEvidence, Abridge, Clay, Gamma, and Writer. By uniting applied AI research, flexible infrastructure, and seamless developer tooling, we enable companies operating at the frontier of AI to bring cutting-edge models into production. We're growing quickly and recently raised our $300M Series E, backed by investors including BOND, IVP, Spark Capital, Greylock, and Conviction. Join us and help build the platform engineers turn to when shipping AI products.
We are looking for an engineer with strong experience in machine learning and solid foundations in maths and computer science to join our growing Post-Training team at Baseten.
Custom models are instrumental to the success of Baseten customers. By inference volume, the overwhelming majority of traffic at Baseten is to and from models that have been post-trained in some way, whether through reinforcement learning, supervised finetuning, a recent technique from the literature, or an in-house research technique from Baseten. The Post-Training team is responsible for the success of our customers' post-trained models, and we employ a wide array of techniques to produce models that are more efficient and higher quality for the customer's specific needs than even the biggest closed-source models.
Your role as a research engineer is to build the in-house tooling to support all of this. We care about training a wide spectrum of different model architectures with a variety of techniques, efficiently and at scale. At times this involves zooming deep into a particular technical topic, but more often it involves working across the stack as a whole: systems-level concepts like Kubernetes, cgroups, storage systems, and networking topologies, as well as PyTorch distributed tensor computation and GPU kernels.
WHAT WE'RE LOOKING FOR
We don’t have a rigid set of skills, but here’s some of what we’re looking for:
A deep understanding of modern ML techniques and tools for training transformers
Advanced experience in a tensor/array computation library like PyTorch, TensorFlow, JAX, or similar
A detailed understanding of transformer training parallelism strategies like data parallelism, sharded data parallelism, tensor parallelism, pipeline parallelism, and context parallelism
The experience and knowledge to profile and improve the performance of a distributed GPU program in PyTorch or a similar library
The ability to perform roofline analysis on a transformer training setup
A willingness to dive into messy problems, work with researchers, derive specifications by asking important questions, and execute
Familiarity with HPC and distributed computing platforms like Slurm, Ray, Kubernetes, and Dask
Familiarity with cluster networking technology like InfiniBand, RoCE, and GPUDirect
Solid fundamentals in operating systems concepts like processes, files, kernel drivers, containerisation, and networking protocols
A sense of creativity and willingness to ask difficult questions about our approach, assumptions, and tooling choices
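To make the roofline-analysis expectation above concrete, here is a minimal sketch of the kind of back-of-the-envelope reasoning involved. The hardware numbers are illustrative assumptions (roughly in the range of a modern datacenter GPU), not vendor specifications, and the GEMM traffic model assumes ideal caching.

```python
# Hedged sketch: roofline reasoning for one dense layer in transformer training.
# PEAK_FLOPS and PEAK_BW are assumed, illustrative hardware numbers.

PEAK_FLOPS = 989e12       # assumed peak low-precision throughput, FLOP/s
PEAK_BW = 3.35e12         # assumed HBM bandwidth, bytes/s
RIDGE = PEAK_FLOPS / PEAK_BW  # arithmetic intensity needed to be compute-bound

def attainable_flops(arithmetic_intensity):
    """Roofline: attainable throughput = min(peak compute, intensity * bandwidth)."""
    return min(PEAK_FLOPS, arithmetic_intensity * PEAK_BW)

def gemm_intensity(M, K, N, bytes_per_el=2):
    """Arithmetic intensity of an (M, K) x (K, N) GEMM, assuming each
    operand is read once and the output written once (ideal caching)."""
    flops = 2 * M * K * N                        # multiply-accumulate count
    traffic = bytes_per_el * (M * K + K * N + M * N)
    return flops / traffic

ai = gemm_intensity(8192, 8192, 8192)
print(f"ridge point: {RIDGE:.0f} FLOP/byte")
print(f"GEMM intensity: {ai:.0f} FLOP/byte -> "
      f"{'compute' if ai > RIDGE else 'memory'}-bound")
```

Comparing a kernel's arithmetic intensity against the ridge point tells you whether to chase better compute utilization or better memory traffic, which is exactly the judgment call this role makes when profiling distributed training runs.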
BENEFITS
Competitive compensation, including meaningful equity.
100% coverage of medical, dental, and vision insurance for employee and dependents
Generous PTO policy including company-wide Winter Break (our offices are closed from Christmas Eve to New Year's Day!)
Paid parental leave
Company-facilitated 401(k)
Exposure to a variety of ML startups, offering unparalleled learning and networking opportunities.
Apply now to embark on a rewarding journey in shaping the future of AI! If you are a motivated individual with a passion for machine learning and a desire to be part of a collaborative and forward-thinking team, we would love to hear from you.
At Baseten, we are committed to fostering a diverse and inclusive workplace. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status.
We are an Equal Opportunity Employer and will consider qualified applicants with criminal histories in a manner consistent with applicable law (by example, the requirements of the San Francisco Fair Chance Ordinance, where applicable).