AI Prompt Engineer, Trust Review Operations

This role will be based in Dublin, Ireland.

At LinkedIn, our approach to flexible work is centered on trust and optimised for culture, connection, clarity, and the evolving needs of our business. The work location of this role is hybrid, meaning it will be performed both from home and from a LinkedIn office on select days, as determined by the business needs of the team. 

LinkedIn was built to help professionals achieve more in their careers, and every day millions of people use our products to make connections, discover opportunities and gain insights. Our global reach means we get to make a direct impact on the world’s workforce in ways no other company can. We’re much more than a digital resume – we transform lives through innovative products and technology.

Searching for your dream job? At LinkedIn, we strive to help our employees find passion and purpose. Join us in changing the way the world works.  

LinkedIn’s Trust Review Operations team protects our global community by ensuring AI‑driven moderation systems are safe, accurate, and reliable. As an AI Prompt Engineer, you will design, test, and refine prompts and workflows that assist in content moderation, improve detection quality, and support reviewer decision‑making.

This role is ideal for someone who enjoys hands‑on experimentation, analyzing model behaviour, and partnering with Policy, Engineering, and Data Science to improve safety outcomes.

What You’ll Do:

Prompt Design, Testing & Optimization

  • Design and refine prompts for classification, risk detection, case summarization, and reviewer support.

  • Run prompt experiments to diagnose issues such as hallucinations, misclassifications, bias, or inconsistent behaviour.

  • Help maintain evaluation frameworks for accuracy, safety, and reliability.

AI Case Support & Risk Mitigation

  • Support AI‑assisted workflows in resolving medium‑ and high‑risk cases.

  • Identify model errors, document patterns, and recommend improvements to reduce operational risk.

  • Contribute to operational guardrails and escalation criteria for AI behaviour.

Incident Management & Quality Monitoring

  • Flag AI output issues (e.g., inconsistent decisions, low‑confidence outcomes, override trends).

  • Participate in incident reviews and help document root‑cause insights.

Policy & Regulatory Alignment

  • Ensure AI outputs align with platform policies, MDSS, and relevant regulations (e.g., DSA).

  • Work with Policy teams to translate reviewer feedback into clearer prompts and rules.

Feedback Integration & Model Improvement

  • Collect feedback from reviewers, policy partners, and operational teams to refine prompts.

  • Translate qualitative insights into structured requirements for Engineering and Data Science.

Data Analysis & Experimentation

  • Analyze prompt and model performance using SQL or dashboards.

  • Track trends such as classifier drift, emerging abuse patterns, or changes in harmful content.

  • Contribute data‑backed insights that inform roadmap and workflow updates.

Basic Qualifications

  • Bachelor’s degree in Data Science, AI/ML, Engineering, Policy, or related field (or equivalent experience).

  • 2+ years of experience in Trust & Safety, content moderation, AI operations, quality, or policy.

  • 1+ years of experience designing or testing prompts, or working with LLMs/classification models.

  • 2+ years of experience using data tools (e.g., SQL, Python) to evaluate model and prompt performance.

Preferred Qualifications

  • Understanding of Trust & Safety policies, global regulations (e.g., DSA), and safety standards.

  • Ability to analyze model outputs and identify patterns, gaps, and risks.

  • Strong written communication skills for writing clear, reproducible prompt instructions.

  • Familiarity with evaluation metrics (precision, recall, FPR, FDR) and model quality testing.

  • Experience collaborating with Product, Engineering, Policy, or Data Science teams.

  • Exposure to human‑in‑the‑loop workflows, generative AI systems, or safety‑centric model evaluation.

Suggested skills:

  • Analytical Thinking

  • Data Interpretation

  • Problem Solving & Technical Curiosity

  • Collaboration & Stakeholder Support

  • Quality & Detail Orientation

Global Data Privacy Notice for Job Candidates

Please follow this link to access the document that provides transparency around the way in which LinkedIn handles personal data of employees and job applicants: https://legal.linkedin.com/candidate-portal.

LinkedIn is the world’s largest professional network, built to create economic opportunity for every member of the global workforce. Our products help people make powerful connections, discover exciting opportunities, build necessary skills, and gain valuable insights every day. We’re also committed to providing transformational opportunities for our own employees by investing in their growth. We aspire to create a culture that’s built on trust, care, inclusion, and fun – where everyone can succeed. Join us to transform the way the world works.
