OpenAI is hiring a

Policy Enforcement Specialist

San Francisco, United States

About the Team

Trust and Safety is at the foundation of OpenAI’s mission. The team is a part of OpenAI’s broader Applied AI group, which is charged with turning OpenAI’s advanced AI model technology into useful products. We see this as a path towards safe and broadly beneficial AGI: by deploying these technologies, we gain practical experience in making them safe and easy for developers and customers to use.

Within the Applied AI group, the Trust and Safety team protects OpenAI’s technologies from abuse. We develop tools and processes to detect, understand, and mitigate large-scale misuse. We’re a small, focused team that cares deeply about safely enabling users to build useful things with our products.

In 2020, we introduced GPT-3 as the first technology on the OpenAI API, allowing developers to integrate its ability to understand and generate natural language into their products. In 2021, we launched Copilot, powered by Codex, in partnership with GitHub, a new product that can translate natural language to code. In April 2022, we introduced DALL-E 2, AI that creates images from text.

About the Role

As a Policy Enforcement Specialist on the Trust and Safety team, you will be a subject matter expert across all our content policies, with the potential to focus on specific areas of abuse. You will develop deep expertise in applying OpenAI’s policies to AI-generated content, provide expert-level guidance on the policy compliance of content, and work on creating and scaling review processes. As part of the Scaled Enforcement subteam, you will help build automated moderation solutions to mitigate abuse of OpenAI’s technologies. This is an operations role based in our San Francisco office and involves working with sensitive content, including sexual, violent, or otherwise-disturbing material.

In this role, you will:

  • Be the subject matter expert on OpenAI’s content policies
  • Review content that violates our policies, and improve our review and response processes
  • Ensure our moderation operations run smoothly and scale those processes without sacrificing review quality
  • Respond to escalations by owning or assisting in investigations and follow-on processes
  • Collaborate with engineering, policy, and research teams to improve our tooling, policies and understanding of abusive content

You might thrive in this role if you:

  • Have a pragmatic approach to being on an operations team and can get in the weeds to get stuff done
  • Are passionate about AI and are keen to take part in shaping the safety of this technology
  • Have experience on a trust and safety team and/or have worked closely with policy, content moderation, or security teams
  • Have a knack for data and use metrics to drive your decision making
  • Bonus if you have experience with large language models and/or can use scripting languages (Python preferred) 

About OpenAI
 
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 
 
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Benefits and Perks

  • Medical, dental, and vision insurance for you and your family
  • Mental health and wellness support
  • 401(k) plan with 4% matching
  • Unlimited time off and 18+ company holidays per year
  • Paid parental leave (20 weeks) and family-planning support
  • Annual learning & development stipend ($1,500 per year)

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or other legally protected statuses. Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records. 

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via [email protected].

OpenAI US Applicant Privacy Policy 
