ML Research Engineer, Interpretable AI for End-to-End Automated Driving

TLDR

Contribute to research on interpretable AI methods for learning-based automated driving systems, focusing on enhancing the understandability and verifiability of neural driving policies.

At Toyota Research Institute (TRI), we’re on a mission to improve the quality of human life. We’re developing new tools and capabilities to amplify the human experience. To lead this transformative shift in mobility, we’ve built a world-class team advancing the state of the art in AI, robotics, driving, and material sciences.

The Team

The Automated Driving Advanced Development (AD2) division at TRI focuses on enabling innovation and transformation at Toyota by building a bridge between TRI research and Toyota products, services, and needs. We achieve this through partnership, collaboration, and shared commitment. This division is leading a cross-organizational project between TRI and Woven by Toyota to conduct research and develop a fully end-to-end learned driving stack. This collaborative project is harmonious with the efforts of TRI’s robotics divisions in Diffusion Policy and Large Behavior Models.

Within AD2, we are pursuing a focused research effort in Interpretable AI (iAI) for end-to-end learned automated driving systems, tightly coupled with AD2’s work on Large Behavior Models (LBM-Drive) and World Foundation Models (WFM), while remaining architecturally and product independent.

The Opportunity

We are seeking a Machine Learning Researcher to contribute to research on interpretable AI methods for learning-based automated driving systems. This role is ideal for a researcher who enjoys hands-on experimentation, model development, and evaluation, and who wants to work on foundational problems at the intersection of autonomy, interpretability, and safety. You will work closely with senior researchers and engineers to develop methods that make end-to-end neural driving policies more interpretable, diagnosable, and verifiable, while preserving performance and scalability. Your work will contribute to building “glass-box” representations that help engineers and researchers better understand, debug, and validate learned driving behaviors.
Responsibilities
  • Conduct research on interpretable AI methods for end-to-end learned automated driving policies, under the guidance of senior and staff researchers.
  • Develop and evaluate structured representations of driving behavior, such as interpretable behavioral modes underlying learned neural policies.
  • Implement methods that associate driving behavior with perceptual and contextual cues, including language-based or symbolic explanations where appropriate.
  • Design and run experiments using large-scale learned policies and simulation infrastructure to assess interpretability, diagnostic value, and failure modes.
  • Contribute to evaluations of explainability methods for debugging, validation, and analysis of learned driving systems in simulation and/or controlled datasets.
  • Collaborate with researchers and engineers across AD2, LBM, and WFM teams to integrate interpretable AI ideas into broader research workflows.
  • Document research findings clearly and contribute to internal reports, technical presentations, and peer-reviewed publications.
  • Stay up to date with advances in interpretable AI, representation learning, generative models, and embodied AI research.
Qualifications
  • Master's degree, PhD, or equivalent research experience in Machine Learning, Robotics, Computer Vision, or a related quantitative field.
  • A demonstrated ability to conduct independent research and contribute to peer-reviewed publications at leading venues (e.g., NeurIPS, ICML, ICLR, CVPR, CoRL, RSS, ICRA).
  • Strong foundation in modern machine learning, including deep learning, representation learning, and sequence or policy modeling.
  • Experience implementing and evaluating ML models using Python (and familiarity with C++ in research or experimental contexts).
  • Interest in or experience with end-to-end learning approaches for robotics or autonomous systems.
  • Ability to work effectively in collaborative, cross-disciplinary research environments.
  • Strong written and verbal communication skills.
Bonus Qualifications
  • Experience with interpretable AI or model introspection techniques.
  • Familiarity with structured or hybrid models (e.g., latent-variable models, program induction, or discrete representations).
  • Experience evaluating learning-based systems in closed-loop simulation or real-world embodied settings.
  • Background in automated driving, robotics, or safety-critical AI systems.
Please include a link to your Google Scholar profile, with a full list of publications, when submitting your CV for this position.
     
    The pay range for this position at commencement of employment is expected to be between $176,000 and $253,000/year for California-based roles. Base pay offered will depend on multiple individualized factors, including, but not limited to, a candidate's experience, skills, job-related knowledge, and market location. TRI offers a generous benefits package including medical, dental, and vision insurance, 401(k) eligibility, paid time off benefits (including vacation, sick time, and parental leave), and an annual cash bonus structure. Additional details regarding these benefit plans will be provided if an employee receives an offer of employment.

    Please reference this Candidate Privacy Notice to inform you of the categories of personal information that we collect from individuals who inquire about and/or apply to work for Toyota Research Institute, Inc. or its subsidiaries, including Toyota A.I. Ventures GP, L.P., and the purposes for which we use such personal information.
     
    TRI is fueled by a diverse and inclusive community of people with unique backgrounds, education and life experiences. We are dedicated to fostering an innovative and collaborative environment by living the values that are an essential part of our culture. We believe diversity makes us stronger and are proud to provide Equal Employment Opportunity for all, without regard to an applicant’s race, color, creed, gender, gender identity or expression, sexual orientation, national origin, age, physical or mental disability, medical condition, religion, marital status, genetic information, veteran status, or any other status protected under federal, state or local laws.
     
    It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability. Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records for employment.
