At R2, we believe that small and medium businesses (SMBs) are the productive engine of society. SMBs make up over 90% of companies in Latin America, yet they face a trillion-dollar credit gap. Our mission is to unlock SMBs' potential by providing financial solutions tailored to their needs. We are reimagining the financial infrastructure of Latin America so that SMBs' financial needs are met without their ever having to go to a bank.
R2 enables platforms in Latin America to embed financial services that SMBs can then leverage (starting with revenue-based financing). We are a tight-knit team coming from organizations such as Google, Amazon, Nubank, Uber, Capital One, Mercado Libre, Globant, and J.P. Morgan. We are entering a new phase of growth following a strategic investment from Ant International, focused on rapidly expanding our partner footprint, strengthening our credit and underwriting capabilities, and scaling our operations across multiple markets.
We are a data-first company. Machine Learning (ML) and Deep Learning (DL) are the core of our product, and data is the lifeblood for all of our decision-making. We are seeking a Lead Machine Learning Engineer (Lead MLE) to spearhead the design, development, and deployment of ML/DL models into production. As a Lead Machine Learning Engineer, you will own the end-to-end lifecycle of machine and deep learning systems at R2, from model deployment and monitoring to retraining, governance, and reliability in production. You will define the standards, tooling, and architectural patterns that allow data scientists and analysts to safely and efficiently ship models that directly power our credit and business decisions.
What you’ll work on:
- Own ML systems & tooling in production:
- Define and evolve R2’s ML platform architecture, including model registries, feature pipelines, training infrastructure, and inference services.
- Evaluate and introduce tooling that improves developer velocity, reproducibility, and safety across the ML stack.
- Architect, implement, and deploy ML/DL models into production environments.
- Ensure models are optimized for scalability, latency, and reliability.
- Automate Monitoring & Maintenance:
- Design and build automated monitoring systems to track model performance, drift, and data quality of ML/DL models that consume data from various sources.
- Establish alerting and retraining pipelines to sustain model performance and robustness over time.
- Automate Data Science processes:
- Develop frameworks to automate recurring Data Science workflows (e.g., model evaluation and retraining).
- Standardize best practices across the team for reproducibility and efficiency.
- Collaborate with technical teams & lead other team members:
- Partner with Product, Engineering, and Risk teams to align production-bound ML/DL solutions with business goals. Although you won't be developing the models firsthand initially, you will be involved in in-sample, out-of-sample, and production testing.
- Mentor junior and senior data scientists and analysts, fostering a culture of innovation, experimentation, and excellence.
- Research & innovate across the MLOps spectrum:
- Stay ahead of emerging ML & DL production techniques and technologies, evaluating their applicability to organizational challenges.
- Drive experimentation and prototyping of novel production and automation approaches.
Who you are:
- Background:
- You have at least five (5) years of experience with machine and deep learning engineering in a practical setting.
- You have a good understanding of fintech products and risk management, which enables you to interpret business data effectively.
- Technical expertise:
- You have strong programming abilities (structured, object-oriented, and/or event-driven programming) and are comfortable programming in Python or R and SQL (preferably with a focus on Snowflake).
- You have strong proficiency in Python ML/DL frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- You are comfortable consuming data through APIs, SFTP, or plain CSV files.
- You are experienced with MLOps tools (e.g. MLflow, Kubeflow, Docker, Kubernetes, AWS microservices).
- You have a solid understanding of cloud platforms, preferably AWS, distributed computing, and version control using GitHub & GitLab.
- You have a strong understanding of model serving patterns (batch vs. online, synchronous vs. asynchronous).
- You have experience designing feature pipelines with clear ownership, freshness guarantees, and backfills.
- You understand data engineering practices for ETL pipeline development and data warehouse/data lake management.
- Leadership & Business acumen:
- You have a data-oriented mindset: you care about getting to the bottom of how to make decisions based on data.
- You have stakeholder management experience, keeping everyone up to date with key findings and explaining results, methodologies, and processes for data-driven decision making in a non-technical way.
To be considered a strong candidate, you:
- Are familiar with real-time ML systems.
- Have exposure to reinforcement learning, graph neural networks, or advanced time series techniques.
- Have contributed to open-source ML/DL projects.
- Have productionized A/B tests, multivariate tests, and other controlled experiments to assess the effectiveness of changes to product features, user experience, and marketing initiatives.
- Have partnered with cross-functional teams to define key success metrics, ensuring alignment with business objectives.
Locations: Buenos Aires, São Paulo, Santiago de Chile, Bogotá.