Mistral AI
Model Behavior Architect
TLDR
Define and measure LLM behavior while collaborating closely with the Science team to establish evaluation criteria and improve model performance on complex tasks.
About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source, cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise as well as personal needs. Our offerings include Le Chat, La Plateforme, Mistral Code and Mistral Compute: a suite that brings frontier intelligence to end-users.
We are a dynamic, collaborative team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed between France, USA, UK, Germany and Singapore. We are creative, low-ego and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture at https://mistral.ai/careers.
Mistral AI participates in the E-Verify program.
By applying, you agree to our Applicant Privacy Policy.
About the role
As a Model Behavior Architect, you are at the forefront of defining and measuring LLM behavior.
We are looking for people who have built a career in engineering, machine learning, and large language models, and who are experts in model evaluation, policy writing, and building evaluation pipelines for complex tasks. Your role is to work hand-in-hand with our Science team to define what ‘good’ looks like for Reasoning, Audio, Alignment, Tools, and all Frontier bets.
Join us if you are passionate about tackling cutting-edge, open-ended research challenges and transforming your insights into best-in-class models.
What you will do
Interact with models to identify where model behavior can be improved
Gather internal and external feedback on model behavior to scope areas for improvement
Design and implement evals, data guidelines, data generation, and synthetic testing environments
Identify and fix edge case behaviors through rigorous testing
Develop robust evaluation pipelines for our model candidates
Work collaboratively with AI Scientists
About you
You have a deep understanding of at least one of: 1) linguistics, language, and translation; 2) engineering and code behavior; or 3) LLM agents at work, including reasoning and tool use
You have prior experience training and optimizing model behavior
You are an expert at building robust evaluations
You thrive in dynamic and technically complex environments
You have a track record of delivering innovative, out-of-the-box solutions to address real-world constraints