About Mistral
- At Mistral AI, we are a tight-knit, nimble team dedicated to bringing our cutting-edge AI technology to the world. Our mission is to make AI ubiquitous and open
- We are creative, low-ego, team-spirited, and have been passionate about AI for years
- We hire people who thrive in competitive environments because they find them more fun to work in. We hire passionate women and men from all over the world
- Our teams are distributed across France, the UK, and the USA
Role Summary
- You will be in charge of deploying state-of-the-art models in production environments, helping turn research breakthroughs into tangible solutions
- Location: Paris / London
Key Responsibilities
- Create and maintain tooling and services: both internal-facing (research & dogfooding) and external-facing (product)
- Collaborate cross-functionally with researchers, software engineers, and product managers to understand complex business challenges and deliver AI-powered solutions
- Implement and optimize ML pipelines for performance and accuracy, ensuring production readiness and employing cutting-edge technology and innovative approaches
About the ML Engineering team
- Our ML Engineering team is embedded in our Product development organization (SWE & Product) and works very closely with our Science team
- All our engineers can move fluidly along the production/research spectrum, depending on where the needs are or where their interests lie
Qualifications & profile
- Master's degree in Computer Science, Machine Learning, Data Science, or a related field
- Expert programming skills in Python
- MLOps or full-stack + ML experience
- Proficiency in frameworks like PyTorch or TensorFlow
- Adaptable, proactive and autonomous
- Attention to detail and a drive to go the extra mile to build near-perfect tools
- Deep understanding of machine learning approaches and algorithms
- Low-ego
- Collaborative, with a real team-player mindset
It would be ideal if you have:
- Experience with training and fine-tuning large language models (e.g., distillation, supervised fine-tuning, policy optimization)
- Experience working with LLMs
- Experience working with research teams