Operative

Principal AI Engineering Lead

TL;DR

Lead AI engineering maturity across a 100-person organization, driving hands-on enablement and infrastructure implementation for autonomous coding workflows.

About the Role

We are looking for a Principal AI Engineering Lead to own and drive our AI engineering maturity journey across a 100-person R&D, QA, and DevOps organization.

This is a hands-on individual contributor role with outsized influence — you will be the internal expert, practitioner, and change agent who takes us from inconsistent AI tool adoption to a fully agentic, multi-phase autonomous engineering capability.

You will not be managing people. You will be changing how 100 engineers work.

This role is equal parts engineering, enablement, and architecture. You will write real code, build real agent workflows, and make the abstract concrete — turning a defined AI maturity framework into daily practice across Java/JVM, Python, C#/.NET, Oracle PL/SQL, and C++ codebases.

 

Scope of Your Position

  1. AI Adoption & Enablement
  • Audit current AI tool usage across R&D, QA, and DevOps — identify where adoption is genuine vs. nominal
  • Establish and maintain CLAUDE.md-equivalent constitution files: encoding team conventions, architectural standards, testing patterns, and security policies so AI tools produce consistent, codebase-aware output from day one
  • Drive daily active usage above 70% across the engineering org, measured by tool telemetry — not seat count
  • Design and deliver hands-on enablement: prompt engineering, output validation, effective task decomposition, and AI-assisted debugging across our primary stacks (Java/Spring, Python, C#/.NET, C++, Oracle PL/SQL)
  • Run monthly retrospectives to surface what context AI is still missing and close those gaps systematically
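The constitution files described above can start small and grow through those retrospectives. A minimal illustrative sketch of what a CLAUDE.md might encode (all project conventions below are hypothetical examples, not Operative's actual standards):

```markdown
# CLAUDE.md: engineering constitution (illustrative example)

## Conventions
- Java services use Spring Boot; constructor injection only, no field `@Autowired`.
- Python: type hints required; tests live in `tests/` mirroring `src/`.

## Testing
- Every bug fix ships with a regression test.
- AI-generated code paths require >= 80% line coverage before merge.

## Security
- Never log credentials, tokens, or PII.
- All SQL goes through parameterized queries or bind variables (PL/SQL included).
```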
  2. Agentic Infrastructure & Workflow Engineering
  • Architect and implement the infrastructure that makes autonomous agent execution safe: sandboxed execution environments, audit logging for all agent actions, and state checkpointing for mid-task recovery
  • Build and enforce the specification discipline: structured spec templates with machine-verifiable acceptance criteria, spec completeness gates before agent assignment, and a lightweight spec-driven development workflow appropriate for our codebases
  • Stand up self-verifying test loops — agents that write tests, implement, run CI, and iterate to green without human intervention — with coverage gates enforced in CI (80%+ on AI-generated code paths)
  • Evolve CI/CD pipelines for higher deployment throughput: automated rollback, feature flags for deploy/release decoupling, and tiered review workflows (auto-merge → single reviewer → full review)
  • Deploy and tune Claude Code as the primary agentic coding platform, alongside evaluation and integration of other tools (GitHub Copilot, Cursor, or equivalent) where they complement the workflow
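The self-verifying test loop described above reduces to a small core: run the suite, feed failures back to the agent, repeat until green or the budget runs out. A minimal sketch, assuming the agent step and test runner are injected as callables (both names are illustrative, not a real API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LoopResult:
    green: bool
    iterations: int

def iterate_to_green(
    propose_fix: Callable[[str], None],         # agent step: patch code given failure output
    run_tests: Callable[[], tuple[bool, str]],  # returns (passed, failure output)
    max_iterations: int = 5,
) -> LoopResult:
    """Run tests, feed failures back to the agent, iterate until green
    or the iteration budget is exhausted."""
    for i in range(1, max_iterations + 1):
        passed, output = run_tests()
        if passed:
            return LoopResult(green=True, iterations=i)
        # In practice this call is wrapped with sandboxing, audit logging,
        # and state checkpointing, per the infrastructure bullets above.
        propose_fix(output)
    passed, _ = run_tests()
    return LoopResult(green=passed, iterations=max_iterations)
```

The iteration budget is the circuit breaker: an agent that cannot converge escalates to a human instead of burning tokens indefinitely.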
  3. Technical Roadmap Ownership
  • Define and maintain the AI engineering maturity roadmap with quarterly milestones, gate criteria, and investment priorities
  • Identify the binding constraint at each phase — test coverage, spec formalization, deploy automation, observability — and sequence investments to unblock the next transition
  • Establish observability and feedback loops: OpenTelemetry-instrumented pipelines, production signals routed back to agent context, and SLOs per component as the foundation for eventual multi-agent orchestration
  • Design the agent topology and inter-agent interface contracts that will enable specialized agents (build, test, infra) to coordinate on complete features
  • Advise engineering leadership on where the organization is vs. where it needs to be, using quantitative markers (PR throughput, cycle time, rework rate, intervention rate) and qualitative signals (mental model shift, cultural adoption)
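The inter-agent interface contracts mentioned above are, at minimum, a shared task/result shape that every specialized agent honors, so an orchestrator can treat build, test, and infra agents interchangeably. A minimal sketch (all names hypothetical):

```python
from dataclasses import dataclass, field
from typing import Protocol

@dataclass(frozen=True)
class TaskResult:
    """Uniform result envelope returned by every agent."""
    task_id: str
    ok: bool
    artifacts: dict[str, str] = field(default_factory=dict)

class Agent(Protocol):
    """Contract each specialized agent (build, test, infra) implements."""
    name: str
    def handle(self, task_id: str, spec: str) -> TaskResult: ...

class TestAgent:
    """Toy implementation: accepts a test spec and reports a result."""
    name = "test"
    def handle(self, task_id: str, spec: str) -> TaskResult:
        return TaskResult(task_id=task_id, ok=True,
                          artifacts={"junit": "report.xml"})
```

Freezing the result type and keeping artifacts as named references (not payloads) keeps the contract stable as agents are added.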

 

Required Experience

  • 7+ years of software engineering experience, with at least 2 years of hands-on work with AI-assisted or agentic coding workflows in production environments
  • Deep, practical experience with Claude Code (constitution files, skills, subagents, hooks, headless/agentic mode) and familiarity with the broader AI coding tool landscape
  • Fluency in two or more of our primary stacks: Java/Spring, Python, C#/.NET, C++, or Oracle PL/SQL — enough to earn the trust of engineers working in those languages and to diagnose where AI tooling struggles
  • Strong understanding of test infrastructure and CI/CD: coverage gates, automated rollback, pipeline design for high throughput, and what it takes to make a self-verifying agent loop reliable
  • Demonstrated ability to write and enforce machine-parseable specifications: structured acceptance criteria, EARS notation or equivalent, and scope-bounded task definitions that agents can work from without re-prompting
  • Proven track record of cross-functional influence without authority — changing how a team works through demonstration, enablement, and trust, not mandate
  • Comfort operating in a large, heterogeneous codebase with varying levels of test coverage, documentation, and technical debt
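For reference, the EARS-style, machine-parseable specifications called for above typically look like the following (the requirement and service names are hypothetical illustrations, not actual specs):

```text
REQ-142 (event-driven EARS pattern):
  WHEN a client submits an order with an invalid campaign ID,
  THE booking service SHALL reject the request with error code ORD-422
  within 200 ms.

Acceptance criteria (machine-verifiable):
  - POST /orders with an unknown campaign_id returns HTTP 422
    with body.code == "ORD-422"
  - p95 rejection latency < 200 ms under the standard load profile
```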

Strong Advantage

  • Experience with multi-agent orchestration patterns: agent topology design, inter-agent interface contracts, fan-out/fan-in workflows, and circuit breakers between agents
  • Experience implementing MCP (Model Context Protocol) server integrations for internal tools, databases, or CI/CD systems
  • Understanding of AI cost management: model routing (Opus/Sonnet/Haiku), per-workflow spend tracking, and cost optimization after capability is established
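The model-routing pattern above is, at its simplest, a threshold function from estimated task complexity to a model tier. A sketch with illustrative thresholds and placeholder tier names (not calibrated values or real model IDs):

```python
def route_model(estimated_complexity: float) -> str:
    """Map an estimated task complexity in [0.0, 1.0] to a model tier.
    Thresholds and tier names are placeholders for illustration."""
    if not 0.0 <= estimated_complexity <= 1.0:
        raise ValueError("complexity must be in [0, 1]")
    if estimated_complexity >= 0.8:
        return "opus"    # architecture work, cross-file refactors
    if estimated_complexity >= 0.4:
        return "sonnet"  # routine feature implementation
    return "haiku"       # lint fixes, boilerplate, summaries
```

Routing only matters once per-workflow spend is tracked; otherwise there is no signal to tune the thresholds against.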



You Are Probably Not the Right Fit If

  • You think AI adoption means buying licenses and watching the numbers
  • You are more comfortable advising than doing — this role requires you to build the thing, not describe it
  • You have only worked in greenfield, well-tested codebases — our environment is real, with legacy code, variable coverage, and multiple languages
  • You expect quick wins — cultural and workflow change at this scale takes a deliberate, sustained effort

Operative builds a SaaS platform that streamlines advertising management for media companies, centralizing sales, ad operations, and finance. Serving more than 300 clients worldwide and supporting over 25,000 users, Operative optimizes revenue and operational efficiency across both linear and digital advertising.

Founded: 2000
Employees: 500+
Industry: Media
Total raised: $26M