Samba is an AI-powered media intelligence company on a mission to give marketers the complete picture of their audiences. Our AI indexes media consumption across millions of smart TVs and 2.5 billion web pages, combining that data with third-party signals through the Samba Knowledge Graph, a map of the real interests, behaviors, and purchase intent of 1.5 billion user profiles globally. Brands, agencies, publishers, and platforms use Samba to make smarter decisions across every stage of the marketing funnel.
We are seeking a skilled Data Engineer to strengthen our data platform team. Our team builds and maintains the data platform that powers the entire organization: from ingestion to analytics and reporting, and from valuable viewership and contextual datasets to scalable applications that enable data-driven decision making. While our organization is hybrid, you'll be building mostly on AWS, Databricks, BigQuery, and Snowflake. The ideal candidate will have strong experience in cloud-based data engineering, distributed data processing, and data governance and metadata management to support analytics, reporting, and machine learning use cases.
What You'll Do
Build scalable data product architecture capable of supporting both internal and external data consumers
Modernize our data frameworks and integrations with Databricks and BigQuery
Upgrade Apache Airflow and reduce toil for the developers who build on it
Develop and optimize data transformations using Apache Spark (PySpark/Scala)
Build procedures and guidelines that help teams work effectively with data
Identify bottlenecks in our development lifecycle and implement solutions to remove them
Work directly with our data and FinOps teams to drive cross-team initiatives
Implement data governance, access control, and auditing using Databricks Unity Catalog
Build and integrate automated, reusable data validation suites using data quality frameworks (Great Expectations or similar)
Implement monitoring and anomaly detection systems for data quality, reliability, and performance
Develop and manage REST APIs to support secure data access, automation, and integration
Collaborate with data scientists, analysts, and software engineers to deliver governed, reusable data assets
Implement monitoring, logging, and alerting for data workflows
Optimize cost and performance of cloud-based data infrastructure
Who You Are
Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience)
2+ years of experience in data engineering or a related role
Strong hands-on experience with Databricks and Apache Spark, plus BigQuery or Snowflake
Proven experience with modern table formats such as Delta Lake and Iceberg
A deep understanding of the data lifecycle and how teams operate with data
Hands-on experience implementing data governance and metadata management using Databricks Unity Catalog
Experience managing and extending Apache Airflow (custom operators, plugins, infrastructure)
Experience with Kubernetes
Solid experience with AWS cloud services, especially S3 and data-related services
Experience with data validation and data quality principles, including working with SLA systems
Proficiency in Python and SQL
Experience with data modeling, data lakes, and lakehouse architectures, and a strong understanding of distributed systems and big data processing
Samba is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We strive to empower connection with one another, reflect the communities we serve, and tackle meaningful projects that make a real impact.
Samba may collect personal information directly from you as a job applicant. Samba may also receive personal information from third parties, for example in connection with a background, employment, or reference check, in accordance with applicable law. For further details, please see Samba's Applicant Privacy Policy. For residents of the EU, Samba Inc. is the data controller.