Member of Engineering (Pre-training / Data Engineering)


ABOUT POOLSIDE

In this decade, the world will create Artificial General Intelligence. Only a small number of companies will achieve this, and their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure, and deployment at scale. They will continue to scale their training to larger and more capable models. They will earn the right to raise large amounts of capital along the way to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.


Poolside exists to be this company - to build a world where AI will be the engine behind economically valuable work and scientific progress.

ABOUT OUR TEAM

We are a remote-first team that sits across Europe and North America. We come together once a month in-person for 3 days, always Monday-Wednesday, with an open invitation to stay the whole week. We also do longer off-sites once a year.

Our team combines research-oriented and engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.

ABOUT THE ROLE

You will be a core member of our Pretraining Data team, responsible for building and scaling our Model Factory: our system for quickly training, scaling, and experimenting with our foundation models. This is a hands-on role where your #1 mission is to architect and maintain the high-performance pipelines that transform trillions of raw tokens into the high-quality dataset "fuel" our models require.

To enable us to conduct and implement the latest research, you'll be engineering the ingestion, deduplication, and streaming systems that handle petabyte-scale data. You will bridge the gap between raw web crawls and our GPU clusters, directly influencing model performance through superior data modeling, algorithmic sorting, and distributed pipeline optimization. You will collaborate closely with other teams, such as Pre-training, Post-training, Evals, and Product, to generate high-quality datasets that map to missing model capabilities and downstream use cases.
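To give a flavor of the work: a core stage in pipelines like these is exact deduplication, dropping repeated documents before tokenization. The sketch below is purely illustrative (the function and field names are hypothetical, not poolside's actual code); production systems run this distributed over petabytes and typically layer near-deduplication such as MinHash on top.

```python
import hashlib

def dedup_exact(docs):
    """Keep the first occurrence of each unique document text.

    Illustrative single-machine sketch of an exact-dedup stage; real
    pipelines shard the hash space across workers and stream from
    object storage rather than holding documents in memory.
    """
    seen = set()
    kept = []
    for doc in docs:
        digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

batch = [
    {"doc_id": 1, "text": "fn main() {}"},
    {"doc_id": 2, "text": "print('hi')"},
    {"doc_id": 3, "text": "fn main() {}"},  # exact duplicate of doc 1
]
print(len(dedup_exact(batch)))  # → 2
```

Hashing document text rather than comparing strings directly keeps the seen-set small and makes the stage easy to shard by hash prefix.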

YOUR MISSION

To deliver large, high-quality, and diverse datasets of natural language and source code for training poolside models and coding agents.

RESPONSIBILITIES

  • Build and maintain high-performance pipelines for trillions of tokens.

  • Deliver diverse and high-quality datasets for pre-training foundation models.

  • Work closely with other teams, such as Pre-training, Post-training, Evals, and Product, to ensure alignment on the quality of the models delivered.

SKILLS & EXPERIENCE

  • Strong background in building production-grade, distributed data systems for machine learning, with experience in:

    • Orchestration: Slurm, Airflow, or Dagster

    • Observability & Reliability: CI/CD, Grafana, Prometheus, etc.

    • Infra: Git, Docker, k8s, cloud managed services

    • Batched inference (e.g., vLLM)

    • Performance obsession, especially with large-scale GPU clusters and distributed pipelines

  • Expert-level Python knowledge and the ability to write clean, maintainable code

  • Strong algorithmic foundations

  • Proficiency with libraries like Polars, Dask, or PySpark

  • Nice to have:

    • Experience in building trillion-scale SOTA pretraining datasets

    • Experience translating research to production at scale

    • Experience with OCR, web crawling, or evals

    • Prior experience pre-training LLMs

PROCESS

  • Intro call with Eiso, our CTO & Co-Founder

  • Technical Interview(s) with one of our Founding Engineers

  • Team fit call with the People team

  • Final interview with one of our Founding Engineers

BENEFITS

  • Fully remote work & flexible hours

  • 37 days/year of vacation & holidays

  • Health insurance allowance for you and dependents

  • Company-provided equipment

  • Wellbeing, always-be-learning and home office allowances

  • Frequent team get-togethers

  • A diverse, inclusive, people-first culture
