Machine Learning Ops Engineer - AI

Overview

Build and maintain MLOps infrastructure for AI systems, ensuring efficient deployment and monitoring of machine learning models in production.

As Opus 2 continues to embed AI into our platform, we need robust, scalable data systems that power intelligent workflows and support advanced model behaviours. We’re looking for an MLOps Engineer to build and maintain the infrastructure that powers our AI systems. You will be the bridge between our data science and engineering teams, ensuring that our machine learning models are deployed, monitored, and scaled efficiently and reliably. You’ll be responsible for the entire lifecycle of our ML models in production, from building automated deployment pipelines to ensuring their performance and stability. This role is ideal for a hands-on engineer who is passionate about building robust, scalable, and automated systems for machine learning, particularly for cutting-edge LLM-powered applications.

What you'll be doing

  • Design, build, and maintain our MLOps infrastructure, establishing best practices for CI/CD for machine learning, including model testing, versioning, and deployment.
  • Develop and manage scalable and automated pipelines for training, evaluating, and deploying machine learning models, with a specific focus on LLM-based systems.
  • Implement robust monitoring and logging for models in production to track performance, drift, and data quality, ensuring system reliability and uptime.
  • Collaborate with Data Scientists to containerize and productionize models and algorithms, including those involving RAG and Graph RAG approaches.
  • Manage and optimize our cloud infrastructure for ML workloads on platforms like Amazon Bedrock or similar, focusing on performance, cost-effectiveness, and scalability.
  • Automate the provisioning of ML infrastructure using Infrastructure as Code (IaC) principles and tools.
  • Work closely with product and engineering teams to integrate ML models into our production environment and ensure seamless operation within the broader product architecture.
  • Own the operational aspects of the AI lifecycle, from model deployment and A/B testing to incident response and continuous improvement of production systems.
  • Contribute to our AI strategy and roadmap by providing expertise on the operational feasibility and scalability of proposed AI features.
  • Collaborate closely with Principal Data Scientists and Principal Engineers to ensure that the MLOps framework supports the full scope of AI workflows and model interaction layers.

What excites us?

We’ve moved past experimentation. We have live AI features and a strong pipeline of customers eager to access improved AI-powered workflows. Our focus is on delivering real, valuable AI-powered features to customers and doing it responsibly. You’ll be part of a team that owns the entire lifecycle of these systems, and your role is critical to ensuring they are not just innovative but also stable, scalable, and performant in the hands of our users.

Requirements

What we're looking for in you

  • You are a practical and automation-driven engineer. You think in terms of reliability, scalability, and efficiency.
  • You have hands-on experience building and managing CI/CD pipelines for machine learning.
  • You're comfortable writing production-quality code and reviewing PRs, and you're dedicated to delivering a reliable, observable production environment.
  • You are passionate about MLOps and have a proven track record of implementing MLOps best practices in a production setting.
  • You’re curious about the unique operational challenges of LLMs and want to build robust systems to support them.

Qualifications

  • Experience with model lifecycle management and experiment tracking.
  • Ability to reason about and implement infrastructure for complex AI systems, including those leveraging vector stores and graph databases.
  • Proven ability to ensure the performance and reliability of systems over time.
  • 3+ years of experience in an MLOps, DevOps, or Software Engineering role with a focus on machine learning infrastructure.
  • Proficiency in Python, with experience in building and maintaining infrastructure and automation, not just analyses.
  • Experience working in Java or TypeScript environments is beneficial.
  • Deep experience with at least one major cloud provider (AWS, GCP, Azure) and their ML services (e.g., SageMaker, Vertex AI). Experience with Amazon Bedrock is a significant plus.
  • Strong familiarity with containerization (Docker) and orchestration (Kubernetes).
  • Experience with Infrastructure as Code (e.g., Terraform, CloudFormation).
  • Experience in deploying and managing LLM-powered features in production environments.
  • Bonus: experience with monitoring tools (e.g., Prometheus, Grafana), agent orchestration, or legaltech domain knowledge.

Benefits

Working for Opus 2

Opus 2 is a global leader in legal software and services and a trusted partner of the world’s leading legal teams. All our achievements are underpinned by our unique culture, where our people are our most valuable asset. Working at Opus 2, you’ll receive:

  • Contributory pension plan.
  • 26 days’ annual holiday, hybrid working, and length-of-service entitlement.
  • Health Insurance.
  • Loyalty Share Scheme.
  • Enhanced Maternity and Paternity.
  • Employee Assistance Programme.
  • Electric Vehicle Salary Sacrifice.
  • Cycle to Work Scheme.
  • Calm and Mindfulness sessions.
  • A day of leave to volunteer for charity or dependent cover.
  • Accessible and modern office space and regular company social events.

Opus 2 provides game-changing, cloud-based legal technology and services to connect people, case information, analysis and data throughout the lifecycle of a dispute. Our secure platform, tailor-made for lawyers, provides a connected and flexible way of working for case teams and their clients. Combined with our best-in-class services, we also deliver electronic trials and hearings worldwide. Opus 2 is headquartered in London and also has offices in San Francisco, Edinburgh and Singapore.

Equal Opportunities

Opus 2 International is an Equal Opportunities employer and applicants are selected solely on the basis of their relevant aptitudes, skills and abilities. No applicant shall receive less favourable treatment on the grounds of sex, marital status, civil partnership status, transgender status, pregnancy, maternity, colour, race, nationality, ethnic origin, religion, belief, sexual orientation, disability, or age. This is not an exhaustive list.

Recruitment Privacy Policy

Opus 2 is a privacy-conscious organisation, committed to protecting the privacy of our people and those who seek employment with us. It is important to us that you understand what information we collect, how we use it and how we protect it. This information, alongside the rights available to you in respect of the personal data you share with us, is set out in our Privacy Policy and we would encourage you to read and ensure you understand it. Unfortunately, we are unable to respond to all applications. If you have not been contacted within one week of your application, then it is likely you have been unsuccessful.
