Provectus is an AWS Premier Consulting Partner and AI consultancy featured in Forrester's AI Technical Services Landscape, with 15+ years of experience and 400+ engineers. We build production AI for global enterprises in partnership with Anthropic, Cohere, and AWS.
Role Purpose
As a Mid-Level ML Engineer at Provectus, you will work with increasing independence to design, implement, and deploy production-grade ML solutions for our clients. You bridge learning and leading: you no longer require task-by-task guidance, yet you continue to grow toward senior technical ownership. A defining characteristic of this role is proficiency in AI-assisted development: you will leverage AI coding tools, contribute to agentic engineering initiatives, and actively shape Provectus's internal AI toolkit. You will also mentor junior engineers and contribute meaningfully to technical design decisions.
Core Responsibilities:
Technical Delivery (55%)
Design and implement ML pipelines from experimentation to production with limited supervision
Build, evaluate, and optimize models across supervised, unsupervised, and generative AI tasks
Develop and maintain production-grade Python code: modular, tested, and well-documented
Set up reproducible experimentation environments and maintain experiment pipelines
Deploy and monitor ML models in production, ensuring stability and performance
Actively contribute to LLM-based applications, including RAG systems and agent workflows
Leverage AI-assisted development tools to increase velocity and code quality on all tasks
Agentic Engineering & AI-Assisted Development (20%)
Claude Ecosystem Integration: practical use of Claude Code or the Claude Agent SDK to deliver high-quality greenfield customer engagements
Transform existing brownfield projects into AI-friendly setups
Active usage of the Provectus AI toolkit in daily workflows
Internal Contributions: contribute back to the Provectus AI toolkit, developing specific agents, building MCP servers, submitting bug fixes, adding features, or improving documentation
Agent Frameworks: build tool-using and multi-step agentic systems with Amazon Bedrock AgentCore, Strands, CrewAI, or equivalent orchestration frameworks
MCP Integration: integrate or build Model Context Protocol (MCP) servers for client or internal use
Stay current with emerging AI coding tools and agentic frameworks, sharing relevant findings with the team
Collaboration and Contribution (15%)
Mentor and support junior ML engineers on technical tasks, code quality, and best practices
Conduct meaningful code reviews with constructive, actionable feedback
Collaborate with cross-functional teams: DevOps, Data Engineering, Solutions Architects
Share knowledge through documentation, presentations, and internal workshops on AI tooling
Innovation and Growth (10%)
Stay current with ML research and emerging frameworks, especially in GenAI and agentic AI
Propose improvements to existing solutions, pipelines, and team processes
Contribute to the development of reusable ML accelerators and internal quick-starts
Participate in technical design discussions and architectural trade-off conversations
Technical Requirements:
Machine Learning Core
Strong grasp of supervised and unsupervised ML: algorithms, evaluation, and real-world trade-offs
Practical experience with classification, regression, and feature engineering in production or near-production contexts
Hands-on experience with deep learning: training and fine-tuning CNNs, RNNs, and Transformers
Solid understanding of model evaluation, bias-variance trade-offs, and validation strategies
Experience with at least one ML domain in depth: NLP, Computer Vision, Recommendation, or Time Series
LLMs and Generative AI
Practical experience building LLM-based applications using OpenAI, Anthropic, or Hugging Face APIs
Hands-on experience designing and implementing RAG systems (chunking, embedding, retrieval, generation)
Working knowledge of vector databases (OpenSearch, Pinecone, Chroma, FAISS) and embedding models
Understanding of prompt engineering, chain-of-thought reasoning, and LLM evaluation techniques
Awareness of Amazon Bedrock capabilities: model invocation, Knowledge Bases, and Agents
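The RAG stages named above (chunking, embedding, retrieval, generation) can be sketched in miniature. This is a toy illustration, not a production pattern: the bag-of-words "embedding" and templated "generation" stand in for a real embedding model and LLM call, and the document text and function names are invented for the example.

```python
# Toy RAG pipeline: chunk -> embed -> retrieve -> generate.
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Split text into fixed-size word windows (naive chunking)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' (stand-in for a real embedding model)."""
    return Counter(text.lower().replace("?", "").replace(".", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, index: list[tuple[str, Counter]], k: int = 1) -> list[str]:
    """Rank indexed chunks by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: answer grounded in retrieved context."""
    return f"Q: {query}\nContext: {' | '.join(context)}"

doc = ("Provectus builds production ML systems on AWS. "
       "RAG systems combine retrieval over a vector index with LLM generation.")
index = [(c, embed(c)) for c in chunk(doc)]
top = retrieve("combine retrieval with generation", index, k=1)
print(generate("combine retrieval with generation", top))
```

In a real system, the same four-stage shape holds; only the components change (semantic chunkers, embedding models, a vector store such as OpenSearch or FAISS, and an LLM for generation).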
Agentic Engineering & AI-Assisted Development
AI-Assisted Development: demonstrated proficiency with AI coding tools (Claude Code, Cursor, GitHub Copilot, or similar); not just autocomplete, but strategic use for generation, refactoring, debugging, and documentation
Agent Frameworks: hands-on experience with Amazon Bedrock AgentCore, Strands, CrewAI, or similar orchestration frameworks; ability to build stateful, tool-using agents
MCP Integration: working understanding of Model Context Protocol; ability to consume or contribute to MCP servers for internal or client-facing integrations
Tool Use & Function Calling: practical experience implementing tool-using agents with proper error handling, fallbacks, and state management
Spec-Driven Development: ability to write clear technical specifications that AI tools can execute effectively, and to review and correct AI-generated output
AgentOps Awareness: understanding of agent monitoring, evaluation, and cost optimization patterns in production
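The tool-use requirements above (error handling, fallbacks, state management) can be sketched framework-free. This is a minimal illustration of the pattern, not the API of Bedrock AgentCore, Strands, or any specific framework; the tool names and routing logic are invented for the example.

```python
# Minimal tool-using agent loop with error handling, a fallback path,
# and recorded state. Illustrative only; real frameworks provide their
# own abstractions for tool registration and conversation state.
from dataclasses import dataclass, field

def calculator(expression: str) -> str:
    """A 'tool': evaluates simple arithmetic, raising on bad input."""
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError(f"unsupported expression: {expression!r}")
    return str(eval(expression))  # toy only; never eval untrusted input

@dataclass
class Agent:
    tools: dict
    history: list = field(default_factory=list)  # tool-call state

    def call_tool(self, name: str, arg: str) -> str:
        tool = self.tools.get(name)
        if tool is None:  # unknown tool: fall back instead of crashing
            return f"fallback: no tool named {name!r}"
        try:
            result = tool(arg)
        except Exception as exc:  # tool failure: degrade gracefully
            result = f"fallback: {name} failed ({exc})"
        self.history.append((name, arg, result))  # record state
        return result

agent = Agent(tools={"calculator": calculator})
print(agent.call_tool("calculator", "2 + 3 * 4"))  # normal path
print(agent.call_tool("calculator", "import os"))  # error -> fallback
print(agent.call_tool("search", "weather"))        # unknown -> fallback
```

The design point is that every tool invocation has a defined behavior on failure, and the agent's history gives later steps (or monitoring) a record of what was attempted.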
Cloud and Infrastructure
Solid AWS experience with core ML services: SageMaker, Lambda, S3, ECR, ECS, API Gateway
Familiarity with Amazon Bedrock: model invocation, Knowledge Bases, and Agent capabilities
Understanding of cloud-native ML architectures and serverless patterns
Awareness of Infrastructure as Code (Terraform, CloudFormation) at a conceptual or hands-on level
MLOps and Production
Practical experience deploying ML models to production environments
Experience with experiment tracking: MLflow, Weights & Biases, or equivalent
Working knowledge of CI/CD pipelines for ML (GitHub Actions, Jenkins, or similar)
Model monitoring: tracking performance degradation, detecting drift, and alerting
Familiarity with orchestration tools: Airflow, Prefect, or Step Functions
Data and Programming
Advanced Python proficiency: async/await patterns, OOP, modular code, packaging
Expert-level pandas and NumPy; familiarity with Spark or Dask for larger data sets
Strong SQL: complex queries, window functions, optimization basics
Docker: building, running, and debugging containerized ML workloads
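As a pointer to the SQL bar above, window functions are queries like "best row per group". A minimal sketch using Python's built-in sqlite3 (window functions require SQLite 3.25 or newer); the table and column names are invented for the example.

```python
# Window-function example: ROW_NUMBER() OVER a partition to pick the
# best-scoring run per model, using the stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (model TEXT, day TEXT, score REAL)")
conn.executemany(
    "INSERT INTO runs VALUES (?, ?, ?)",
    [("clf", "d1", 0.81), ("clf", "d2", 0.85), ("reg", "d1", 0.70),
     ("reg", "d2", 0.74), ("reg", "d3", 0.72)],
)

# Rank rows within each model partition by score, keep rank 1.
rows = conn.execute("""
    SELECT model, day, score FROM (
        SELECT model, day, score,
               ROW_NUMBER() OVER (
                   PARTITION BY model ORDER BY score DESC
               ) AS rn
        FROM runs
    ) WHERE rn = 1
    ORDER BY model
""").fetchall()
print(rows)  # best run per model
```

The same shape (partition, order, rank, filter) covers deduplication, latest-record queries, and leaderboard-style reporting.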
Nice-to-Have Technical Skills
AWS Certifications (Cloud Practitioner, Solutions Architect Associate, or ML Specialty)
Experience with Kubernetes or container orchestration beyond Docker Compose
GraphRAG implementation experience
Experience building custom MCP servers
Contributions to open-source ML projects or AI toolkit repositories
Core Competencies:
Problem-Solving
Breaks down complex ML problems into well-scoped, testable components
Makes sound technical decisions under moderate uncertainty, using the data available
Proactively identifies and addresses technical debt before it becomes critical
Considers operational constraints: cost, latency, reliability, and maintainability
Communication
Clear technical writing for documentation, design docs, and pull requests
Able to explain ML concepts to non-technical stakeholders at an appropriate level
Effective in distributed, async team environments with global collaborators
Fluent English (B2+ written and verbal)
Professional Excellence
Delivers assigned components with minimal supervision and consistent quality
Proactively raises blockers and proposes solutions rather than waiting for direction
Maintains high code quality standards, including testing and documentation
Self-directed learner who tracks ML and AI tooling advancements
Collaboration and Emerging Mentorship
Provides helpful, specific feedback in code reviews
Supports junior engineers without blocking their growth or creating dependency
Contributes positively to team culture and knowledge sharing on AI tooling
Approaches disagreements with data and reasoning, not authority
Experience and Education:
Required
Demonstrated competency equivalent to 1-3 years of hands-on ML engineering experience
Track record of deploying at least one ML model to a production or production-like environment
Experience working on team-based or client-facing projects (not solely academic or solo)
Demonstrated proficiency with AI-assisted development tools and agentic frameworks
Education (one of)
Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, Engineering, or related field
Equivalent self-taught expertise with a demonstrable production or near-production project history
Bootcamp or certification with significant practical ML engineering experience
Nice-to-Have
Experience working in consulting or client-facing environments
Previous experience in distributed or remote international teams
Contributions to technical blogs, conference talks, or open-source
Published work on agentic systems or AI tooling
What You'll Get:
Competitive salary based on competencies and market rates
Hands-on work with cutting-edge ML technologies, LLMs, and agentic systems
Access to premium AI tooling: Claude Code, Cursor, and Provectus AI toolkit
Mentorship from Senior ML Engineers and Tech Leads
Clear advancement path: Mid-Level → Senior ML Engineer → Tech Lead
Learning budget for courses, certifications, and conferences
Remote-first culture with regular team meetups
Health benefits
Vacation and public holidays
Corporate equipment
Opportunities to work on diverse client projects across LATAM, North America, and Europe