Member of Technical Staff - Secure Intelligence Institute

TLDR

Conduct impactful research on security and privacy for frontier intelligence systems, translating theoretical advancements into practical improvements for users.

Perplexity is seeking energetic researchers and engineers to join our Secure Intelligence Institute (SII), Perplexity's flagship research center for advancing security, privacy, and trust in frontier intelligence. SII’s goals are to advance frontier AI security research, translate those advances into concrete improvements in Perplexity's systems, and share knowledge and resources that strengthen the broader AI ecosystem.

As a member of SII, you'll conduct original and impactful research on improving the security and privacy of frontier intelligence systems. Your goal will be to produce research that is not only theoretically rigorous, but practical enough to improve the systems people rely on every day. This work will be informed by the realities of operating general-purpose AI systems used by millions of people and thousands of enterprises, and you'll be expected to translate both your own research and advances from the broader community into practical improvements that protect and defend Perplexity's users.

Responsibilities

  • Develop threat models for emerging attack surfaces in AI-native products, including browser, search, and autonomous agents.

  • Identify and analyze security and privacy threats across AI systems, infrastructure, and user-facing products.

  • Develop novel defenses, mitigations, and detection mechanisms for security and privacy in AI-native products.

  • Build security evaluation frameworks, benchmarks, and datasets to measure the effectiveness of different defense mechanisms.

  • Partner with Perplexity’s Security Engineering team to translate state-of-the-art research into shipped security features and hardened system architectures.

  • Collaborate with top-tier academic and industry researchers in SII's external research network.

  • Publish findings at premier venues and contribute to the broader security research community.

Qualifications

  • Hold a PhD (or equivalent research experience) in Computer Science, Computer Engineering, or a related field, with a primary focus on security and/or privacy.

  • Experience publishing at top security conferences (IEEE S&P, USENIX Security, ACM CCS, NDSS), demonstrating original, impactful research contributions.

  • Deep expertise in one or more of: security of agentic systems, systems security, web and application security, program analysis, and software security.

  • Proficiency in Python (bonus points for TypeScript, Go, and/or Rust).

  • Ability to operate with high independence, willing to dive in and take ownership, and comfortable in a fast-paced environment where research directly informs product.

  • Clear and concise communication, translating complex attack narratives into actionable insights for engineering and leadership.

Perplexity builds an advanced answer engine that leverages large language models to redefine how users search for and interact with information online. Focused on enhancing the browsing experience, the company is at the forefront of AI-driven knowledge tools, making it easier for people to discover relevant answers quickly and effectively.

Salary
$220,000 – $405,000 per year