Company Overview
At Lean Tech, we are dedicated to building sophisticated and scalable digital solutions. As a forward-thinking organization, we are undertaking a significant strategic initiative to modernize our data infrastructure, migrating from a custom legacy stack to a state-of-the-art, cloud-based platform. Our culture is rooted in collaboration, continuous learning, and adaptability, fostering a dynamic and global team environment where engineers are empowered to solve complex challenges and uphold high engineering standards.
Position: Senior Data Engineer
Location: Remote - LATAM
Position Overview
We are seeking a highly skilled engineer to play a pivotal role in both maintaining our current data infrastructure and spearheading its evolution. This position requires mastery of custom Python development, particularly with object-oriented principles, and expert-level SQL skills to manage and troubleshoot our bespoke ETL frameworks and data warehouse. The successful candidate will be responsible for ensuring the stability and reliability of our existing systems by diagnosing and resolving complex issues within a custom codebase.

Concurrently, you will be a key technical contributor in our strategic migration to a modern, cloud-based data stack, with platforms such as Snowflake and Databricks under evaluation. This role demands a high degree of autonomy and a proactive problem-solving mindset, including the ability to navigate unfamiliar or undocumented code to perform root cause analysis and implement effective solutions. As a senior member of our global team, you will work collaboratively to execute the technical roadmap for our 2026 modernization strategy, contributing to a significant technological transformation while upholding high engineering standards.
Key Responsibilities
Design, build, and maintain robust, scalable ETL/ELT pipelines using Python and advanced SQL, ensuring data quality and reliability across legacy and modern systems.
Act as a key technical contributor in the migration from a custom legacy stack to a modern cloud data platform, such as Snowflake or Databricks, implementing new architecture and data models.
Proactively identify, diagnose, and resolve complex issues in existing data pipelines and systems, performing root cause analysis to prevent future failures.
Develop custom integrations for new data sources, including third-party APIs, applying object-oriented programming principles to extend the existing framework (see the illustrative sketch following this list).
Take ownership of technical tickets and project deliverables, working autonomously to implement solutions that align with the team's modernization roadmap.
Ensure system health and consistency through daily monitoring, standardized inspections, and the implementation of automated recovery processes.
Mentor other engineers through comprehensive code reviews, technical guidance, and the promotion of engineering best practices.
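For context on the kind of object-oriented integration work described above, here is a minimal sketch. The class names, the base-class contract, and the example API are hypothetical stand-ins for the proprietary framework, not its actual code; it assumes the requests library is available.

    import requests  # any HTTP client would serve; requests is assumed here


    class BaseExtractor:
        """Hypothetical stand-in for the framework's extractor base class."""

        def extract(self) -> list[dict]:
            raise NotImplementedError

        def run(self) -> list[dict]:
            # Shared pipeline behavior: pull records, then keep only valid ones.
            return [r for r in self.extract() if self.is_valid(r)]

        def is_valid(self, record: dict) -> bool:
            return bool(record)


    class OrdersApiExtractor(BaseExtractor):
        """Example of extending the framework for a third-party REST API."""

        def __init__(self, base_url: str, api_key: str):
            self.base_url = base_url
            self.api_key = api_key

        def extract(self) -> list[dict]:
            resp = requests.get(
                f"{self.base_url}/orders",
                headers={"Authorization": f"Bearer {self.api_key}"},
                timeout=30,
            )
            resp.raise_for_status()
            return resp.json()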
Required Skills & Experience
3–5 years of recent, hands-on experience in a Data Engineering role.
Advanced proficiency in Python, with a deep understanding of Object-Oriented Programming for building and maintaining custom ETL frameworks.
Expert-level SQL skills, including the ability to write, optimize, and troubleshoot complex queries, stored procedures, and functions.
Proven ability to design, build, and maintain robust, scalable custom ETL/ELT pipelines.
Practical experience with Git-based development workflows and working within CI/CD pipelines for continuous integration and deployment.
Basic familiarity with AWS services such as EventBridge and Batch, along with experience running jobs within a Spark environment (see the sketch following this list).
Demonstrated ability to autonomously deconstruct, troubleshoot, and resolve issues within complex and unfamiliar codebases.
Experience mentoring other engineers through code reviews, technical guidance, and the promotion of engineering best practices.
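As a rough illustration of the Spark experience mentioned above, the following is a minimal PySpark job sketch. The paths, column names, and aggregation are invented for the example and do not reflect our actual pipelines.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_orders_rollup").getOrCreate()

    # Hypothetical input location; created_at and amount columns are assumed.
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Roll raw orders up into one row per day.
    daily = (
        orders
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date")
        .agg(
            F.count("*").alias("order_count"),
            F.sum("amount").alias("total_amount"),
        )
    )

    daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_orders/")
    spark.stop()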
Nice to Have Skills
Familiarity with modern data platforms such as Snowflake or Databricks.
Experience with, or a desire to learn, data orchestration and transformation tools such as Airflow and dbt (an illustrative Airflow sketch follows this list).
Previous experience working with legacy systems such as DB2, Denodo, or other data virtualization technologies.
General familiarity with Infrastructure as Code (IaC) principles and tools, for example, Terraform or CloudFormation.
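For candidates newer to Airflow, here is a minimal DAG sketch (Airflow 2.x) of the orchestration style referred to above. The DAG id, schedule, and callable are illustrative only.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def run_transform():
        # Placeholder for a real transformation step (e.g. a dbt run or a Python job).
        print("transform step would run here")


    with DAG(
        dag_id="example_daily_pipeline",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        PythonOperator(task_id="transform", python_callable=run_transform)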
Soft Skills
Proactive Problem-Solving: Demonstrating the ability to independently analyze
complex, unfamiliar, or poorly documented systems, perform root cause
analysis, and implement effective solutions to ensure data pipeline reliability.
Autonomy and Ownership: Taking full ownership of technical tickets and
project deliverables, working with a high degree of independence to navigate
legacy codebases and drive tasks to completion in alignment with the team's
modernization roadmap.
Collaborative Mentorship: Actively engaging with a global team of engineers, providing technical guidance, participating in comprehensive code reviews, and promoting engineering best practices to elevate the team's collective skill set.
Adaptability: Maintaining a positive and forward-looking mindset toward
learning new technologies and methodologies, embracing the transition from
legacy systems to a modern, cloud-based data stack.
Responsible AI Usage: Understanding how to leverage AI tools as a powerful
supplement to enhance engineering knowledge and productivity, rather than
as a replacement for fundamental problem-solving skills.
Why You Will Love Working with Us
Join a powerful tech workforce and help us change the world through technology.
Professional development opportunities with international customers.
Collaborative work environment.
Career path and mentorship programs that will lead to new levels.

Join Lean Tech and contribute to shaping the data landscape within a dynamic and growing organization. Your skills will be honed, and your contributions will play a vital role in our continued success. Lean Tech is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.