Drive innovation in model behavior evaluation by defining rigorous standards and creating advanced evaluation pipelines for cutting-edge AI applications.
About Mistral
At Mistral AI, we believe in the power of AI to simplify tasks, save time, and enhance learning and creativity. Our technology is designed to integrate seamlessly into daily working life.
We democratize AI through high-performance, optimized, open-source and cutting-edge models, products and solutions. Our comprehensive AI platform is designed to meet enterprise as well as personal needs. Our offerings include Le Chat, La Plateforme, Mistral Code and Mistral Compute, a suite that brings frontier intelligence to end-users.
We are a dynamic, collaborative team passionate about AI and its potential to transform society. Our diverse workforce thrives in competitive environments and is committed to driving innovation. Our teams are distributed between France, USA, UK, Germany and Singapore. We are creative, low-ego and team-spirited.
Join us to be part of a pioneering company shaping the future of AI. Together, we can make a meaningful impact. See more about our culture on https://mistral.ai/careers.
Mistral AI participates in the E-Verify program.
About the role
As a Model Behavior Architect, you will be at the forefront of defining and measuring LLM behavior.
We are looking for people who have built a career in engineering, machine learning, and large language models, and who are experts in model evaluation, policy writing, and building eval pipelines for complex tasks. You will work hand-in-hand with our Science team to define what ‘good’ looks like for Reasoning, Audio, Alignment, Tools, and all Frontier bets.
Join us if you are passionate about tackling cutting-edge, open-ended research challenges and transforming your insights into best-in-class models.
What you will do
Interact with models to identify where model behavior can be improved
Gather internal and external feedback on model behavior to scope areas for improvement
Design and implement evals, data guidelines, data generation, and synthetic testing environments
Identify and fix edge case behaviors through rigorous testing
Develop robust evaluation pipelines for our model candidates
Work collaboratively with AI Scientists
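To give a concrete flavor of the evaluation work described above, here is a minimal sketch of an eval pipeline: run a model over a set of test cases, score each output against a behavioral check, and aggregate a pass rate. The `toy_model` function and the cases are hypothetical stand-ins for illustration only, not Mistral's actual tooling.

```python
# Minimal eval-pipeline sketch: each case pairs a prompt with a check
# on the model's output; the pipeline reports the fraction that pass.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # True if the output is acceptable

def run_eval(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Return the fraction of cases whose output passes its check."""
    passed = sum(1 for case in cases if case.check(model(case.prompt)))
    return passed / len(cases)

# Toy stand-in for a real model call, for illustration only.
def toy_model(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "unknown"

cases = [
    EvalCase("What is 2 + 2?", lambda out: out.strip() == "4"),
    EvalCase("Capital of France?", lambda out: "Paris" in out),
]

print(run_eval(toy_model, cases))  # 0.5: one case passes, one fails
```

Real pipelines layer much more on top (sampling variance, rubric-based grading, synthetic environments), but the core loop of prompt, check, and aggregate stays the same.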
About you
You have a deep understanding of one or more of the following: 1) linguistics, language, and translation; 2) engineering and code behavior; or 3) LLM agents at work, including reasoning and tool use
You have prior experience training and optimizing model behavior
You are an expert at building robust evaluations
You thrive in dynamic and technically complex environments
You have a track record of delivering innovative, out-of-the-box solutions to address real-world constraints