Architect scalable data egress solutions by optimizing throughput and latency while elevating engineering standards through CI/CD workflows and mentoring junior members.
At Adsquare, our mission is driven by our core focus:
Passion – Solving complex challenges with great people, tech, and data.
Niche – Location Intelligence for Programmatic Advertisers.
Our core values are integral to everything we do:
Drive: We turn ambition into action to deliver valuable outcomes.
Resilience: We adapt, persevere, and grow stronger.
No BS: We value honesty, transparency, and clear communication.
Humble: We choose modesty over vanity and let results speak for themselves.
Moral Compass: We do the right thing with fairness, integrity, and respect.
We seek candidates who not only bring top-tier technical expertise but also embody these values in every aspect of their work.
You will join the Integrations Squad, a high-performance team responsible for the "final mile" of our data product. This team ensures our massive datasets reach external partners reliably and securely.
Structure: You will join a team of 4 Backend Engineers, a Senior Data Engineer (DE), and a Technical Product Owner. Together with the other DE, you will be the Subject Matter Experts (SMEs) for Data Engineering within this squad.
Culture: You will work under the guidance of a Technical Team Lead and the broader Data Engineering Guild at Adsquare, but you will operate with a high degree of autonomy. Because you will sit alongside Backend Engineers, we value a software engineering mindset applied to data problems.
We are looking for a Senior Engineer to architect privacy-first, massive-scale data egress solutions.
Architect Scalable Egress: Design and build the big data architectures required to transfer terabytes of data to third-party partners. You will optimize for throughput, latency, and cost.
Engineering Rigor: Elevate the data engineering standards within the squad. Implement CI/CD workflows, infrastructure-as-code (Terraform), and automated testing.
Data Pipeline Ownership: Take full ownership of the pipeline lifecycle—from raw data ingestion to transformation and external delivery—using Python, Spark, and AWS native tools.
Collaboration: Bridge the gap between Data and Backend. You will align with other Data Experts at Adsquare on best practices (e.g., file formats like Parquet/Iceberg) and introduce these standards to your squad.
Mentorship: Act as a technical leader. While you don't manage people, you are expected to mentor junior members and conduct code reviews that raise the bar for quality.
We are looking for a candidate with 4+ years of experience in Data Engineering or Backend Development with a heavy data focus. You must be comfortable working in a self-organized, agile environment.
Must-Have Technical Skills:
Big Data Mastery: Deep experience with large-scale processing frameworks (e.g., Apache Spark, AWS Glue, EMR). You understand how to handle TB-scale datasets efficiently.
Database & Storage Architecture: You don't just query data; you choose the right home for it. You have deep expertise in the trade-offs between OLAP and OLTP systems and have built solutions using NoSQL document stores, key-value stores, and horizontally scalable data warehouses (e.g., Redshift, Snowflake, StarRocks or Lakehouse architectures) as core building blocks.
AWS Native: You have architected solutions using the AWS ecosystem (S3, Athena, Batch, Glue, Lambda).
Infrastructure as Code: Production experience with Terraform is mandatory. You treat infrastructure as software.
Language Proficiency:
Python: Advanced proficiency. You write modular, production-ready code, applying both functional and object-oriented programming approaches where appropriate. You practice test-driven development.
SQL: Excellent command of SQL for transformation and analysis.
Pipeline Orchestration: Experience with tools like Airflow and dbt to manage complex dependency graphs.
Engineering Fundamentals: Solid grasp of computer science principles, data structures, and algorithms, plus Git-based workflows (git-flow) and CI/CD pipelines on GitLab or GitHub.
Nice-to-Have Skills:
Polyglot Programming: Experience with a second compiled or strongly typed language is highly valued (e.g., Scala, Go, Kotlin, C++, or Java).
Data Formats: Expertise in optimizing file formats (Parquet, Avro, Iceberg) for performance.
Backend Context: Experience working closely with Backend engineers or familiarity with Backend architectural patterns (microservices, API design).
Flexible Work Hours
We are open to flexible work models: we work in a hybrid mode, and you can work remotely from anywhere in the world for up to 3 months per year.
Regular team and company events
Regular team and company events organised by our People team (Trust us, they know how to throw a party!)
Paid Time Off
You are entitled to 30 vacation days per year