Lirio is a technology/software company that provides expertise in a variety of behavioral science domains (e.g., behavioral economics, social psychology, public health), data science, and machine learning to drive consumer engagement, close gaps in preventive and chronic care, and promote health and well-being across an individual's lifespan. Lirio's behavior change AI platform unites behavioral science with advanced artificial intelligence (AI) to deliver Precision Nudging health interventions. Precision Nudging is the application of behavioral science to health interventions, personalized by AI to each individual, that overcome barriers to action at the right time and place to drive scalable behavior change.
This is a remote role with the opportunity to be hybrid if located in Tennessee. All applicants must be authorized to work in the US without sponsorship.
To ensure an excellent onboarding experience and integration into the company, new colleagues will spend their first week onsite at one of our offices in Tennessee. Travel expenses will be paid. This is a requirement.
Position Summary
The Senior AI Developer Platform Engineer is responsible for designing, building, and maintaining the AI-augmented software delivery platform that enables Lirio's engineering team to build software faster and safer using AI coding agents. This role owns the end-to-end developer toolchain, from work item intake through AI agent coding loops to validated, compliant pull requests, ensuring that developers and AI coding agents can produce production-ready code together within Lirio's HIPAA/HITRUST-regulated environment.
This is a developer platform role, not a product engineering role. The focus is entirely on accelerating how we build software: the tooling, agent workflows, compliance guardrails, CI/CD integrations, and developer experience that make every engineer on the team measurably more productive. While Lirio's product includes AI-powered capabilities (precision nudging, behavioral science models, engagement optimization), this role does not work on those product AI systems. Instead, it builds the platform and processes that the teams building those systems use to deliver faster and with higher quality.
The Senior AI Developer Platform Engineer will collaborate cross-functionally with platform, cloud, security, and machine learning research engineers, as well as system architects, to ensure the AI developer platform integrates cleanly with Lirio's existing infrastructure and compliance posture.
This role carries urgency. The advantage from AI-assisted development compounds over time, and this person needs to deliver working developer platform capabilities in weeks, not quarters, starting with what we have today and iterating based on real results.
Essential Duties & Responsibilities
AI Coding Tool Evaluation & Selection
- Evaluate and recommend AI coding tools (Cursor, Claude Code, GitHub Copilot, Codex CLI, and emerging tools) against Lirio's developer workflows, compliance constraints, and codebase characteristics.
- Conduct structured evaluations of new models and tools as they launch, testing against real coding tasks in our environment, not just vendor benchmarks.
- Maintain the evaluation framework and tooling inventory, ensuring the team uses approved, security-reviewed tools on compliance-sensitive systems.
Developer Harness Architecture & Implementation
- Design and build the agent orchestration layer: instruction files (.cursor/rules/, AGENTS.md, CLAUDE.md), MCP connectors to Azure DevOps and/or GitHub, context packaging templates, and agent routing configurations.
- Enable AI coding agents to execute multi-step software development tasks autonomously (decompose, plan, code, test, validate, and submit PRs) with quality gates at each phase and defined escalation points.
- Design agent coordination patterns (planner-coder-reviewer, sub-agent delegation) and workflow state management for complex tasks that span multiple agent steps.
- Define human escalation triggers so that when agents encounter ambiguity, scope boundaries, or compliance-sensitive decisions, the workflow surfaces the decision to a human rather than guessing.
- Ensure AI coding agents receive the right context for each task type, including project conventions, compliance constraints, coding standards, and relevant codebase context. Manage context window budgets so agents maintain accuracy across large codebases.
- Build and maintain work decomposition patterns and templates that structure work items for effective agent execution.
- Architect integrations between the AI developer platform and the development ecosystem, including work item tracking, source control, CI/CD pipelines, and code review workflows, forming a coherent, automated delivery chain.
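As one illustration of the harness glue described above, here is a minimal, hypothetical sketch of context packaging under a token budget, with higher-priority sources (compliance constraints, conventions) included before bulk codebase context. The source names, sample text, and 4-characters-per-token estimate are assumptions for illustration, not Lirio's actual configuration.

```python
# Hypothetical sketch: package context for a coding agent under a fixed
# token budget, highest-priority sources first.

PRIORITY_SOURCES = [
    ("compliance", "Never log PHI; tenant IDs must come from the request scope."),
    ("conventions", "Branch from main; all PRs link a work item."),
    ("task", "Add retry logic to the notification dispatcher."),
    ("codebase", "class NotificationDispatcher: ..."),  # large, lowest priority
]

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def package_context(sources, budget_tokens: int):
    """Greedily include sources in priority order until the budget is spent."""
    included, used = [], 0
    for name, text in sources:
        cost = estimate_tokens(text)
        if used + cost > budget_tokens:
            break  # lower-priority context is dropped, not truncated mid-source
        included.append(name)
        used += cost
    return included, used

ctx, used = package_context(PRIORITY_SOURCES, budget_tokens=40)
```

A real harness would pull these sources from instruction files (.cursor/rules/, AGENTS.md, CLAUDE.md) and use the model's own tokenizer rather than a character heuristic.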
Compliance Guardrails for AI-Generated Code
- Build rules, instruction files, and CI pipeline checks that flag PHI exposure, tenant isolation concerns, and security issues in AI-generated code before it reaches human review.
- Translate HIPAA/HITRUST compliance requirements into automated guardrails, using defense-in-depth controls spanning instruction files, sandbox configurations, CI checks, and human review gates.
- Define and maintain permission tiers for AI agent operations (read-only, metadata access, code writes with approval) to maintain BAA compliance.
- Ensure AI-assisted delivery produces auditable artifacts, including PR conventions, work item linking, and AI-usage tracking that support compliance evidence collection.
- Secure the agent input chain by evaluating and mitigating prompt injection risks from work item descriptions, code comments, PR content, and third-party instructions that flow through the harness into agent context.
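A pre-review CI guardrail of the kind described above might start as simply as pattern-scanning agent-generated diffs. This is a hypothetical sketch with two illustrative patterns; a production check would cover far more PHI shapes (MRNs, dates of birth, names) and be tuned against false positives.

```python
import re

# Hypothetical sketch of a pre-review CI check that flags likely PHI
# patterns in an AI-generated diff before it reaches human review.

PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def flag_phi(diff_text: str):
    """Return a list of (pattern_name, match) findings for human review."""
    findings = []
    for name, pattern in PHI_PATTERNS.items():
        for match in pattern.findall(diff_text):
            findings.append((name, match))
    return findings

diff = '+ logger.info("patient 123-45-6789 notified at jane@example.com")'
findings = flag_phi(diff)
```

In a defense-in-depth setup this check would sit alongside instruction-file rules and human review gates, not replace them.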
Model Rotation & Evaluation
- Systematically test new AI models (Claude Opus, GPT Codex, Gemini Pro, etc.) against Lirio's actual coding tasks to determine when to adopt, swap, or route differently across team workflows.
- Maintain model routing guidance: which models are best suited for which task types (complex architecture vs. boilerplate generation vs. test writing vs. code review).
- Monitor model quality across updates and pin versions where stability is critical.
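One lightweight way to encode the routing and pinning guidance above is a declarative table consulted by the harness. This sketch is illustrative only; the model identifiers are placeholders, and the task-type-to-model mapping is an assumption, not a recommendation.

```python
# Hypothetical sketch: a model routing table plus pinned versions.
# Model names are placeholders, not real model identifiers.

ROUTES = {
    "architecture": "model-a-large",
    "boilerplate": "model-b-fast",
    "test-writing": "model-b-fast",
    "code-review": "model-a-large",
}

PINNED = {"model-a-large": "2025-06-01"}  # pin where stability is critical

def route(task_type: str, default: str = "model-b-fast") -> str:
    """Pick a model for a task type, falling back to a safe default."""
    model = ROUTES.get(task_type, default)
    version = PINNED.get(model)
    return f"{model}@{version}" if version else model
```

Keeping the table in version control makes model swaps reviewable and auditable like any other change.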
Standards for AI-Assisted Software Delivery
- Define which types of code changes AI agents can submit with minimal review vs. which require full human compliance review.
- Establish quality gates and agent supervision practices that define what "done" looks like for AI-assisted work before it reaches human review.
- Create and maintain branch/PR conventions for AI-assisted work (agent/<work-item-id>-<short-desc>, PR templates with work item links, AI-assisted tagging).
- Define work item conventions (readiness criteria, acceptance criteria format, agent status tags) that structure work for both human and AI execution.
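Conventions like the branch pattern above are easiest to keep honest when a CI check enforces them. This is a hypothetical sketch of a validator for agent/<work-item-id>-<short-desc>; the exact pattern (numeric IDs, kebab-case descriptions) is an assumption about how the convention might be formalized.

```python
import re

# Hypothetical sketch: validate the agent branch convention
# agent/<work-item-id>-<short-desc> in CI.

BRANCH_RE = re.compile(r"^agent/(?P<work_item>\d+)-(?P<desc>[a-z0-9]+(?:-[a-z0-9]+)*)$")

def parse_agent_branch(branch: str):
    """Return (work_item_id, description) or None if non-conforming."""
    m = BRANCH_RE.match(branch)
    if m is None:
        return None
    return m.group("work_item"), m.group("desc")

ok = parse_agent_branch("agent/4812-retry-dispatcher")
bad = parse_agent_branch("feature/retry-dispatcher")
```

The extracted work item ID can then drive the PR-to-work-item linking that compliance evidence collection depends on.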
Incident Response for AI Tooling
- When AI-generated code introduces defects, vulnerabilities, or compliance issues, diagnose whether the root cause is in the instruction files, the context packaging, the model routing, or the review process.
- Tighten guardrails and adjust workflows based on incident learnings.
- Maintain a feedback loop between production issues and the developer platform's safety controls.
Measurement & Observability for the AI Developer Platform
- Build and maintain observability for the AI developer platform, tracking agent task completion rates, quality gate pass rates, cost per task, guardrail trigger frequency, and model performance trends across the team's workflows.
- Use platform telemetry to identify where AI-assisted delivery is producing value vs. where it's creating friction, and feed those insights back into harness design, model routing, and workflow standards.
Engineering Support & Technical Leadership
- Provide subject matter expertise on AI-assisted development practices to engineering teams.
- Build prototypes, reference integrations, and proof-of-concept solutions to validate platform design decisions and de-risk implementations.
- Promote AI-assisted engineering tools and modern development practices consistent with Lirio's engineering culture.
- Document platform architecture, workflows, integration guides, and best practices.
Cross-Functional Collaboration
- Serve as a contributing member of Lirio's Architecture Team, ensuring the AI developer platform maintains architectural coherence with the broader system.
- Partner with Product Management and delivery leadership to shape how AI-assisted delivery integrates with planning and execution workflows.
- Work closely with Cloud, Security, and DevOps teams to ensure the AI developer platform operates within Lirio's infrastructure and security boundaries.
- Participate in the Engineering Council, contributing to engineering standards, patterns, and technical governance as they relate to AI-augmented delivery.
Basic Qualifications
- Bachelor's degree in a related field
- 5-7 years of related experience
- AI-assisted development fluency: Hands-on experience with AI coding tools (Cursor, GitHub Copilot, Claude Code, Codex CLI, or similar). Not just casual use, but experience building workflows, instruction files, or agent orchestration patterns around them.
- Platform engineering or developer productivity background: Experience building internal developer platforms, CI/CD pipelines, developer tooling, or infrastructure that accelerates how engineering teams deliver software.
- Compliance in regulated environments: Experience working within HIPAA, HITRUST, SOC 2, or equivalent compliance frameworks. Ability to translate compliance requirements into automated guardrails rather than manual review bottlenecks.
- Programming proficiency: Strong skills in Java and/or Python (Lirio's primary stack). Ability to work across codebases, write tooling, and understand the code that AI agents produce.
- Agent orchestration and integration: Familiarity with MCP (Model Context Protocol), LLM APIs, instruction file systems, or similar patterns for configuring and constraining AI agent behavior. Experience building integrations between developer tools and enterprise systems (work item tracking, source control, CI/CD).
- CI/CD and DevOps proficiency: Strong experience with CI/CD pipelines, automated testing, code review workflows, and deployment automation. Experience with Azure DevOps (ADO) is a plus; GitHub Actions/Workflows experience is also valuable.
- Security awareness: Understanding of secure software delivery practices, including code scanning, dependency management, access controls, and audit trail requirements in the context of AI-generated code.
- Bias toward rapid, iterative delivery: This role needs to produce working developer platform capabilities fast, shipping in weeks, learning from real usage, and improving continuously. We're looking for someone who builds momentum by delivering early wins, not someone who designs a complete system on paper before starting.
Preferred Qualifications
- Experience in healthcare technology or another heavily regulated industry.
- Familiarity with multi-model AI routing, model benchmarking, or model evaluation frameworks preferred.
- Experience with infrastructure as code (Terraform), containerization (Docker, Kubernetes/AKS), and cloud platforms (Azure preferred).
- Demonstrated ability to lead platform adoption and drive organizational change across engineering teams.
- Technical writing ability. This role produces documentation, guides, and standards that the entire team uses.
Benefits
- Medical (HSA available)
- Dental
- Vision
- Short-term & long-term disability (company-paid)
- Life & AD&D (company-paid)
- 401K with company match
- 10 paid holidays, quarterly company closure dates, and a holiday-week company closure
- Flexible time off policy
- Work from home
- 6 weeks paid parental leave
- Salary range: $165k-$185k