At LinkedIn, our approach to flexible work is centered on trust and optimized for culture, connection, clarity, and the evolving needs of our business. The work location of this role is hybrid, meaning it will be performed both from home and from a LinkedIn office on select days, as determined by the business needs of the team. This role will be based in Sunnyvale, CA.
The Generative AI (GenAI) Safety team sits at the heart of LinkedIn’s Responsible AI & Governance (RAI‑G) organization, with a mission to set the gold standard for AI safety across all AI applications company‑wide. We ensure that every generative AI product is developed and deployed responsibly, ethically, and securely. By combining rigorous governance with cutting‑edge ML research, we identify and mitigate risks such as bias, hallucination, misuse, and privacy leakage.
As both the AI Safety Research team and the central AI safety engineering function, we build safety guardrails, evaluation pipelines, and alignment techniques that enable safe innovation at scale. Our work is foundational to the company’s AI strategy and influences standards across the industry. We partner closely with Legal, Compliance, AI Infrastructure, and Product teams to embed safety into every stage of the AI lifecycle.
Responsibilities
Drive GenAI Safety Strategy: Serve as the senior technical leader shaping the company’s generative AI safety direction. Define the roadmap for safety alignment research, model evaluation, and system‑level protections.
Lead AI Safety Research & Innovation: Guide LinkedIn’s research agenda in alignment, robustness, and responsible model behaviors. Stay ahead of academic and industry advances, rapidly translating insights into practical, production‑ready solutions.
Design Safety‑First Foundations: Provide architectural leadership for scalable safety systems (benchmarking, red‑teaming, content safety, privacy‑preserving training, and real‑time guardrails), ensuring they are reliable, performant, and deeply integrated into AI infrastructure.
Deliver High‑Impact Solutions in Ambiguous Spaces: Tackle LinkedIn’s toughest ethical, regulatory, and risk‑driven problems. Bring clarity and direction in areas with evolving standards, ensuring the company ships safe GenAI experiences at speed.
Liaise with Product Engineering: Partner closely with product engineering teams to stay current on emerging experiments, venture bets, and product innovations, ensuring safety research and tooling anticipate and support the next wave of product development.
Cross‑Functional Leadership: Collaborate with Legal, Compliance, Privacy, Infra, and Policy teams to operationalize safety requirements, translate regulatory guidance into technical specifications, and ensure end‑to‑end alignment across disciplines.
Technical Mentorship: Mentor and grow a team of ~15 engineers across research, ML, and systems. Elevate engineering rigor, drive high‑bar execution, and nurture future technical leaders in AI safety.
Company‑Wide Impact: Ensure safety techniques, tools, and evaluations are deployed across all GenAI products, safeguarding member trust while enabling safe, scalable innovation.
Basic Qualifications:
2+ years as a Technical Lead, Staff Engineer, Principal Engineer, or equivalent.
5+ years of industry experience in AI or Machine Learning Engineering.
BA/BS degree in Computer Science or a related technical discipline, or equivalent practical experience.
Preferred Qualifications:
10+ years of industry and/or research experience in AI/ML delivering impact at scale.
PhD in CS/AI/ML or related field (or equivalent research/industry achievements).
Expert understanding of Transformers; hands-on experience training, fine‑tuning, distilling/compressing, and deploying LLMs in production.
Track record applying LLMs to recommender systems and language agents.
Demonstrated leadership in red‑teaming (manual + automated), safety benchmarking/evaluations, content safety/guardrails, prompt‑injection/jailbreak detection, and abuse/misuse prevention.
Experience translating Legal/Compliance requirements (e.g., EU AI Act) into technical controls, including harm taxonomies, model cards, and risk assessments.
Proven ability to design safety‑first architectures (evaluation pipelines, moderation services, policy engines, incident response & telemetry) for distributed, real‑time ML systems.
Strong understanding of RL (e.g., RLHF/RLAIF, offline/online RL) for language‑based agents, including safety‑aware reward design and feedback loops.
Advanced Python and PyTorch; familiarity with TensorFlow.
Experience with safety evaluation tooling (e.g., platforms akin to LLUME) and safety datasets/benchmarks.
Significant contributions via top‑tier publications (NeurIPS, ICLR, ICML, ACL) and/or impactful open‑source or widely used safety tooling.
Proven technical leadership mentoring ~15 engineers, setting direction, and elevating execution quality.
Effective liaison with Product Engineering (tracking experiments and venture bets; aligning safety research to upcoming product directions) and strong collaboration with Legal, Compliance, AI Infra, and Policy.
Good to have: Experience with advanced reasoning/planning (e.g., CoT/ToT, self‑reflection, program synthesis, symbolic/neuro‑symbolic methods, search‑augmented reasoning, verification‑aware decoding).
Suggested Skills:
GenAI Safety & Risk: Red‑Teaming, Safety Benchmarking/Evaluation, Content Safety & Guardrails, Jailbreak/Prompt‑Injection Detection, Model Cards & Risk Taxonomies, Incident Response & Monitoring
AI Modeling: LLMs, Alignment, Reasoning & Planning
Reinforcement Learning (RL): RLHF/RLAIF, Reward Design, Feedback Loops, Adaptive Systems
Architecture & Platforms: Real‑Time ML Services, Safety Policy Engines, Evaluation Pipelines
Technical Leadership: Mentorship, Cross‑Functional Collaboration, Roadmapping, Research Direction
Core Tools: Python, PyTorch, Safety Evaluation Tooling
You will benefit from our culture:
We strongly believe in the well-being of our employees and their families. That is why we offer generous health and wellness programs and time away for employees of all levels.
LinkedIn is committed to fair and equitable compensation practices. The pay range for this role is $191,000 - $315,000. Actual compensation packages are based on a wide array of factors unique to each candidate, including but not limited to skill set, years & depth of experience, certifications and specific office location. This may differ in other locations due to cost of labor considerations. The total compensation package for this position may also include annual performance bonus, stock, benefits and/or other applicable incentive compensation plans. For additional information, visit: https://careers.linkedin.com/benefits.
Equal Opportunity Statement
We seek candidates with a wide range of perspectives and backgrounds and we are proud to be an equal opportunity employer. LinkedIn considers qualified applicants without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, marital status, pregnancy, sex, gender expression or identity, sexual orientation, citizenship, or any other legally protected class.
LinkedIn is committed to offering an inclusive and accessible experience for all job seekers, including individuals with disabilities. Our goal is to foster an inclusive and accessible workplace where everyone has the opportunity to be successful.
If you need a reasonable accommodation to search for a job opening, apply for a position, or participate in the interview process, connect with us at [email protected] and describe the specific accommodation requested for a disability-related limitation.
Reasonable accommodations are modifications or adjustments to the application or hiring process that would enable you to fully participate in that process.
A request for an accommodation will be responded to within three business days. However, non-disability related requests, such as following up on an application, will not receive a response.
LinkedIn will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant. However, employees who have access to the compensation information of other employees or applicants as a part of their essential job functions cannot disclose the pay of other employees or applicants to individuals who do not otherwise have access to compensation information, unless the disclosure is (a) in response to a formal complaint or charge, (b) in furtherance of an investigation, proceeding, hearing, or action, including an investigation conducted by LinkedIn, or (c) consistent with LinkedIn's legal duty to furnish information.
San Francisco Fair Chance Ordinance
Pursuant to the San Francisco Fair Chance Ordinance, LinkedIn will consider for employment qualified applicants with arrest and conviction records.
Pay Transparency Policy Statement
As a federal contractor, LinkedIn follows the Pay Transparency and non-discrimination provisions described at this link: https://lnkd.in/paytransparency.
Global Data Privacy Notice for Job Candidates
Please follow this link to access the document that provides transparency around the way in which LinkedIn handles personal data of employees and job applicants: https://legal.linkedin.com/candidate-portal.
LinkedIn is the world’s largest professional network, built to create economic opportunity for every member of the global workforce. Our products help people make powerful connections, discover exciting opportunities, build necessary skills, and gain valuable insights every day. We’re also committed to providing transformational opportunities for our own employees by investing in their growth. We aspire to create a culture that’s built on trust, care, inclusion, and fun – where everyone can succeed. Join us to transform the way the world works.