AI Risk & Compliance Analyst

TL;DR

Govern risk and compliance for AI systems: collaborate with cross-functional teams, manage vendor assessments, and oversee policy governance to ensure regulatory alignment.

RESPONSIBILITIES:
  • Lead governance, risk assessment, and compliance activities specific to AI/ML systems, LLM integrations, AI agents, and retrieval-augmented workflows

  • Partner with the Senior Security Engineer, AI/ML to integrate risk assessment findings into GRC frameworks and translate technical risk into governance requirements

  • Develop, maintain, and refine AI risk and compliance controls aligned with relevant frameworks, including ISO/IEC 27001, NIST Cybersecurity Framework, NIST AI Risk Management Framework, EU AI Act, GDPR, and other applicable standards

  • Execute risk assessments for new AI vendors, LLM platforms, AI APIs, and enterprise AI tools, including third-party risk scoring, control mapping, and remediation tracking

  • Manage the vendor risk assessment lifecycle for AI/ML related suppliers, ensuring documented controls, evidence collection, and follow-up on remediation items

  • Support audit activities, capturing evidence and coordinating cross-functional stakeholders for internal and external compliance reviews involving AI systems

  • Develop and maintain AI-specific GRC policies, standards, and procedures that map to AI risk domains, explainability requirements, and compliance obligations

  • Facilitate AI risk and compliance reporting to leadership, including risk dashboards, trend analysis, control effectiveness measurements, and key metrics

  • Monitor emerging AI governance requirements, guidance, and best practices, translating them into GRC program updates and compliance recommendations

  • Support security incident documentation and post-incident analysis for AI system events, coordinating with Legal and Security teams to ensure appropriate governance response

QUALIFICATIONS:
  • 6+ years of experience in Governance, Risk & Compliance, including risk assessment, policy development, audit coordination, and third-party risk management

  • Demonstrated experience performing governance or risk assessments for AI/ML systems, including LLM integrations, model pipelines, AI agents, or data-driven algorithmic systems

  • Experience translating AI-specific risks (e.g., data poisoning, prompt injection, model misuse, data leakage, explainability gaps) into documented control requirements and governance standards

  • Hands-on experience conducting third-party risk assessments for AI vendors, LLM platforms, AI APIs, or machine learning service providers

  • Experience mapping AI-related risks and controls to frameworks such as ISO/IEC 27001, NIST CSF, NIST AI RMF, ISO/IEC 42001, GDPR, PCI DSS, or similar standards

  • Strong understanding of data governance concepts relevant to AI systems, including training data lineage, data retention, model output handling, and human oversight requirements

  • Experience supporting regulatory readiness or compliance efforts related to AI systems

  • Proven ability to collaborate with engineering and security teams to validate control implementation and remediation

  • Experience with GRC tools, risk registers, and evidence-based compliance workflows

  • Bachelor’s degree in Information Security, Computer Science, Business Risk, Compliance, or a related field; relevant certifications (CISA, CISM, CRISC, CISSP, AIGP) or equivalent practical experience

  • This role is based in the WHOOP office located in Boston, MA. The successful candidate must be prepared to relocate if necessary to work out of the Boston, MA office.

    Interested in the role, but don’t meet every qualification? We encourage you to still apply! At WHOOP, we believe there is much more to a candidate than what is written on paper, and we value character as much as experience. As we continue to build a diverse and inclusive environment, we encourage anyone who is interested in this role to apply.

    WHOOP is an Equal Opportunity Employer and participates in E-Verify to determine employment eligibility.

    The WHOOP compensation philosophy is designed to attract, motivate, and retain exceptional talent by offering competitive base salaries, meaningful equity, and consistent pay practices that reflect our mission and core values.

    At WHOOP, we view total compensation as the combination of base salary, equity, and benefits, with equity serving as a key differentiator that aligns our employees with the long-term success of the company and allows every member of our corporate team to own part of WHOOP and share in the company’s long-term growth and success.

    The U.S. base salary range for this full-time position is $85,000 - $135,000. Salary ranges are determined by role, level, and location. Within each range, individual pay is based on factors such as job-related skills, experience, performance, and relevant education or training. 

    In addition to the base salary, the successful candidate will also receive benefits and a generous equity package.

    These ranges may be modified in the future to reflect evolving market conditions and organizational needs. While most offers will typically fall toward the starting point of the range, total compensation will depend on the candidate’s specific qualifications, expertise, and alignment with the role’s requirements.

    WHOOP builds a performance optimization platform that helps individuals understand their bodies and health through advanced wearable technology. Targeted at fitness enthusiasts and health-conscious individuals, the company stands out by focusing on personalized metrics and insights that drive improved performance and longevity.
