The Policy Research team at OpenAI is responsible for understanding our company’s current and potential impact on the world, and using that understanding to recommend the best possible policies at OpenAI and elsewhere (“policies” are defined broadly to include laws, safety requirements, industry norms, etc.). Team members have backgrounds in a wide variety of disciplines, including computer science and engineering, law, philosophy, economics, political science, and more, and we use a wide variety of quantitative and qualitative methods to measure, forecast, and analyze OpenAI’s impacts.
The ideal candidate brings subject matter expertise in emerging technologies and international security, and will be excited to collaborate on and investigate deeply important topics of immediate relevance to OpenAI decision-making.
The Research Scientist will help the team identify global shifts that influence the development and deployment of AI, analyze how AI capabilities affect international stability and humanitarian goals, and build solutions that encourage global cooperation on AI.
A researcher in this role will have the remit to conduct and publish research and the latitude to collaborate with relevant external scholars.
The successful candidate will be self-directed, a strong analytical problem solver, someone who enjoys asking foundational research questions and using creative research design, and an effective communicator who can engage broad audiences with their research.
Please include examples of your existing writing that demonstrate subject matter expertise in emerging technologies and international security (e.g., nonproliferation studies). We ask that you not write new materials for this job application.