Cybersecurity Landscape Analyst

About the Team

The Intelligence and Investigations team seeks to rapidly identify and mitigate abuse and strategic risks to ensure a safe online ecosystem in close collaboration with our internal and external partners. Our efforts contribute to OpenAI's overarching goal of developing AI that benefits humanity.

The Strategic Intelligence & Analysis (SIA) team provides safety intelligence for OpenAI’s products by monitoring, analyzing, and forecasting real-world abuse, geopolitical risks, and strategic threats. Our work informs safety mitigations, product decisions, and partnerships, ensuring OpenAI’s tools are deployed securely and responsibly across critical sectors.

About the Role

We are looking for a Cybersecurity Landscape Analyst to help OpenAI understand how the external cyber threat environment is evolving—and what it means for our products, customers, and the broader AI ecosystem.

This is an outward-facing intelligence and analysis role. The Cybersecurity Landscape Analyst monitors emerging attacker TTPs, threat-group behaviors, infrastructure trends, and real-world cyber innovation at the intersection of AI and all cyber threat surfaces, including devices and robotics. Using structured research, competitive intelligence, adversarial thinking, and scenario analysis, you will stress-test assumptions about how frontier AI capabilities could be misused, targeted, or integrated into broader cyber campaigns—even in the absence of active warnings or internal incidents.

This role does not conduct internal investigations, run detection on platform data, or own OpenAI’s infrastructure protection or incident response. Instead, this role translates the external cyber landscape into clear risk context, strategic foresight, and decision support for internal stakeholders, with defined handoffs into operational, detection, and security teams. While not the owner of those functions, the role works closely with cross-functional teams, drawing on their operational perspectives to sharpen external analysis while bringing them threat trends and insights on attacker innovation to inform priorities and preparedness. In other words, this role sits at the boundary between external intelligence and internal execution, ensuring bi-directional flow between strategic cyber analysis and the teams responsible for implementation. Your work will synthesize signals from external sources alongside insights from Integrity, Security, and Safety Systems teams to produce crisp strategic assessments, priority questions, and actionable recommendations.

In this role, you will

  • Monitor and interpret the evolving cyber threat landscape

    • Track emerging cyber TTPs, attacker innovation, threat-group behavior, and ecosystem-level shifts relevant to AI systems.

    • Analyze how state actors, criminal networks, hacktivists, and hybrid actors are adapting AI tools—or targeting AI infrastructure.

    • Identify structural risk patterns that may affect AI providers, customers, and downstream sectors.

  • Conduct structured external research and adversarial analysis

    • Use competitive intelligence, red-team style thinking, and scenario methods to explore how frontier AI capabilities could be exploited or targeted.

    • Develop forward-looking assessments of how cyber threats may evolve over 6–24 months.

    • Surface “unknown unknowns” and stress-test prevailing assumptions about attacker incentives, constraints, and capabilities.

  • Translate external signals into strategic risk context for cross-functional teammates

    • Produce concise, executive-ready intelligence estimates that articulate threat relevance, potential impact pathways, and confidence levels.

    • Develop priority questions and structured risk frames that inform product, safety, security, and policy decision-making.

    • Benchmark OpenAI’s risk posture against real-world incidents affecting other AI providers and adjacent technology sectors.

  • Support product and ecosystem readiness

    • Contribute to product reviews and safety readiness processes by outlining plausible cyber-enabled misuse or targeting modes grounded in external analysis.

    • Help shape practical mitigation considerations, with clear handoffs to operational and security teams that own implementation.

  • Represent OpenAI in sensitive external engagements

    • Serve as a credible analytical counterpart in engagements with a range of external partners.

    • Communicate OpenAI’s threat perspective and align on shared risk trends and emerging threat vectors.

    • Support collaboration in ways that complement—without duplicating—incident response, investigations, or core security operations functions.

You might thrive in this role if you

  • Have significant experience (typically 5+ years) in cybersecurity intelligence, strategic threat analysis, trust & safety, or national-level cyber risk assessment.

  • Demonstrate deep familiarity with cyber threat actors, intrusion tradecraft, vulnerability exploitation trends, and cybercrime ecosystems.

  • Have experience translating external threat reporting and OSINT into structured risk assessments and executive guidance.

  • Are comfortable using adversarial thinking and foresight methodologies (e.g., horizon scanning, scenario planning, red-teaming) to explore emerging threat vectors.

  • Can clearly distinguish between intelligence analysis and operational security work, and work effectively across that boundary.

  • Are an excellent, credible communicator capable of distilling complex cyber threat dynamics into crisp, decision-relevant insights.

  • Currently hold or are eligible for a U.S. security clearance.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Salary
$178,200 – $320,000 per year