Define and build foundational data infrastructure systems for Astronomer's products, impacting their capabilities over the coming years.
Astronomer empowers data teams to bring mission-critical software, analytics, and AI to life and is the company behind Astro, the industry-leading unified DataOps platform powered by Apache Airflow®. Astro accelerates the building of reliable data products that unlock insights, unleash AI value, and power data-driven applications. Trusted by more than 800 of the world's leading enterprises, Astronomer lets businesses do more with their data. To learn more, visit www.astronomer.io.
Astronomer’s products run on a complex, multi-cloud platform — and where we're going as a product company requires a level of data infrastructure sophistication we're actively building towards. The work ahead in our Platform team isn't just about wrangling pipelines or curating datasets; it's about building the foundational data systems that our products will depend on for years.
We're looking for a Staff+ engineer who has built serious data infrastructure before — not pipelines, not transformations, but the storage systems, retrieval infrastructure, and data platforms that sit underneath everything else and that other engineers build on top of. You'll be joining our Platform Engineering team with a mandate to define and deliver this capability from the ground up, with the sponsorship and organisational backing to do it properly.
This is a foundational role at an interesting moment: your work will directly shape what Astronomer's products — Astro, Observe, and our IDE — are capable of over the next several years. This role reports directly to the VP responsible for delivering these platforms reliably.
Astronomer has a healthy and complex data estate spanning multiple cloud providers, a mix of managed and self-hosted systems, and an increasingly ambitious set of requirements as our products mature. We have a clear sense of where we want to go; we need the right person to figure out how to get there and then go build it.
This is very much a technical role — you'll be just as involved in building these systems as in specifying and designing them. We're not looking for someone to write data strategy documents; we're looking for someone who writes the strategy and the code, and who has done exactly that before at scale.
Blaze a Trail: Own and develop our data infrastructure strategy and practice, with sponsorship and responsibility to match. Map out what we need, make the calls, and own the outcomes.
Be an Owner: Be directly involved in deciding what we work on and how we work on it. Make promises, and keep them.
Do Sensible Things: Make principled build vs. buy assessments and advocate for the right tools for the right job — not the fashionable ones, not the ones already in the estate just because they're there.
Garage Door Open: Create and maintain comprehensive internal documentation and decision records for systems and processes. Participate in architectural forums and make principled, open decisions that the rest of the organisation can learn from and hold us to.
Extensive, hands-on experience designing and building data infrastructure at scale — storage systems, retrieval and indexing infrastructure, data and/or streaming platforms that serve production traffic for multiple teams and products.
Strong proficiency in Go and deep, practical experience with Kubernetes at the operator level. You know what happens when things go wrong, because you've been there.
In-depth knowledge of the database and data systems landscape — relational, NoSQL, blob, timeseries, vector — and the hard-won experience to know when each is and isn't the right choice.
Deep understanding of distributed systems and Non-Abstract Large Systems Design: you can draw the diagram, explain the failure modes, and know which ones to actually care about.
Experience working across multiple cloud providers (AWS, GCP, Azure) — not just as a consumer, but as someone who has made considered choices between them and understood the trade-offs.
Experience defining requirements and making and justifying technology choices in cross-functional engineering organisations.
Strong written and verbal communication skills, with experience working in a globally distributed team.
Experience with Postgres-compatible managed services (CloudSQL, RDS, AlloyDB) or distributed databases like Spanner, with hands-on experience in provisioning, development practices, and migration.
Experience building internal data platforms from cloud-native component parts. We have a healthy mix of build vs. buy; sometimes building is the right call.
Experience working on a SaaS/PaaS product across multiple cloud providers. Experience with Apache Airflow.
At Astronomer, we value diversity. We are an equal opportunity employer: we do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.