Data Engineer

TL;DR

Design and maintain data pipelines and AI-driven workflows, ensuring reliable delivery of customized data products and continuous improvement of operational efficiency.

Samba TV tracks streaming and broadcast video across the world with our proprietary data and technology. We are on a mission to fundamentally transform the viewing experience for everyone. Our data enables media companies to connect with audiences for new shows and movies, and enables advertisers to engage viewers and measure reach across all their devices. We have an amazing story with a unique perspective on culture, formed by a global footprint of data and AI-driven insights.

We are seeking a Data Engineer to join our Technology Operations team. This team drives the operational delivery of Samba TV’s measurement and data licensing products—the core revenue engines of the business. As a Data Engineer, you will design, build, and maintain the data pipelines, automation workflows, and infrastructure that ensure reliable, on-time delivery of customized data products to clients. In addition, you will build and maintain agentic AI-driven workflows to automate repetitive operational tasks, enhance data validation, and streamline end-to-end delivery processes. You will work hands-on with production systems, debug complex delivery issues across distributed environments, and collaborate cross-functionally to continuously improve operational efficiency through both traditional engineering and intelligent automation.

WHAT YOU WILL DO
  • Design, develop, and maintain data pipelines for the end-to-end delivery of measurement reports and data licensing products to clients, using Apache Airflow, Databricks, and PySpark
  • Configure and troubleshoot push delivery workflows, including database migrations, DAG configuration, GCS/S3 bucket management, and client-facing file delivery verification
  • Build and operate agentic automation workflows to reduce manual operational toil, improve data validation, and accelerate delivery turnaround times
  • Investigate and resolve production data issues by navigating complex systems spanning Airflow DAGs, PostgreSQL databases, cloud storage (AWS/GCP), and client-specific delivery configurations
  • Manage and execute custom product delivery requests, including measurement requests, data licensing operations, matching file generation, cross-reference file creation, and client delivery setup
  • Develop data validation and quality assurance tooling to ensure accuracy and consistency of custom datasets before they reach clients
  • Write and maintain database migrations to update delivery configurations, report integrations, and client setup across staging and production environments
  • Collaborate cross-functionally with product, measurement sciences, client services, and engineering teams to translate delivery requirements into reliable, automated solutions
  • Document operational processes, runbooks, and delivery workflows to enable knowledge sharing and team scalability
WHO YOU ARE
  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related technical field, or equivalent practical experience
  • 3–5+ years of professional experience in data engineering, software engineering, or a related operational engineering role
  • Strong proficiency in Python, with hands-on experience building and debugging data pipelines and automation scripts
  • Experience with Apache Airflow for workflow orchestration, including DAG development, operator configuration, and troubleshooting failed runs
  • Proficiency in SQL for data extraction, transformation, and database administration, including complex queries with joins, window functions, and JSONB manipulation
  • Experience with cloud infrastructure (AWS and/or GCP), including S3/GCS bucket management, IAM role assumption, and ephemeral credential workflows
  • Familiarity with Databricks and PySpark for large-scale data processing and transformation
  • Experience with database migration workflows and version-controlled configuration management (Git)
  • Strong debugging and problem-solving skills with the ability to trace issues across distributed systems (databases, orchestration tools, cloud storage, delivery endpoints)
  • Ability to work independently, manage a queue of operational tickets, and prioritize based on SLA urgency
Samba TV is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We strive to empower connection with one another, reflect the communities we serve, and tackle meaningful projects that make a real impact.

Samba TV may collect personal information directly from you as a job applicant. Samba TV may also receive personal information from third parties, for example in connection with a background, employment, or reference check, in accordance with applicable law. For further details, please see Samba's Applicant Privacy Policy. For residents of the EU, Samba Inc. is the data controller.


Salary: 250 000 zł – 350 000 zł per year