Senior Data Engineer

About Arkham

Arkham is a Data & AI Platform—a suite of powerful tools designed to help you unify your data and use the best Machine Learning and Generative AI models to solve your most complex operational challenges.

Today, industry leaders like Circle K, Mexico Infrastructure Partners, and Televisa Editorial rely on our platform to simplify access to data and insights, automate complex processes, and optimize operations. With our platform and implementation service, our customers save time, reduce costs, and build a strong foundation for lasting Data and AI transformation.

About the Role

We are looking for a Senior Data Engineer to own our high-performance Data Platform based on the Lakehouse architecture. In this role, you will work with cutting-edge technologies such as Apache Spark, Trino, and Delta Lake, ensuring data governance and interoperability across platforms. You'll play a key role in shaping our data infrastructure, working across the entire data lifecycle—from ingestion to transformation and activation.

Requirements

Key Responsibilities

  • Lead the next phase of our Data Platform – Develop and enhance Arkham’s Data Platform, following Lakehouse architecture principles and ensuring data governance.
  • Data Ingestion Pipelines – Design and implement pipelines to extract data from structured, semi-structured, and unstructured sources.
  • Data Pipeline Orchestration – Create, monitor, and optimize multiple data extraction and transformation pipelines.
  • Data Catalog Integration – Ensure interoperability between data catalogs and various query engines.
  • Cluster Management & Observability – Oversee cluster performance and implement observability solutions to keep data pipelines running optimally.
  • End-to-End Data Lifecycle Management – Maintain high data quality and usability across the integration, transformation, and activation stages.

Qualifications

  • Experience: 5+ years in data engineering, data architecture, or a related field.
  • Technical Expertise: Proficiency in Apache Spark, Delta Lake, and Trino.
  • Programming Skills: Strong experience with Python for scripting and automation.
  • Cloud Knowledge: Hands-on experience with AWS services, including Glue, S3, and EMR.
  • Big Data: Understanding of distributed data systems and query engines.
  • Problem-Solving: Excellent analytical and debugging skills.
