Stripe is hiring a

Staff Software Engineer, Batch Compute

Who we are

About Stripe

Stripe is a financial infrastructure platform for businesses. Millions of companies—from the world’s largest enterprises to the most ambitious startups—use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone’s reach while doing the most important work of your career.

About the team

The Batch Compute team manages the infrastructure, tooling, and systems behind batch processing at Stripe, currently powered by Hadoop and Spark. These batch processing systems power several core asynchronous workflows at Stripe and operate at significant scale.

What you’ll do

We're looking for a software engineer with experience designing, building, and maintaining high-scale distributed systems. You will work with the team in charge of the core infrastructure that product teams use to build and operate batch processing jobs. You will play a hands-on role in significantly rearchitecting our current infrastructure to make it more efficient and resilient. This re-architecture will disaggregate Hadoop storage and compute using open source solutions.
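
To make the storage/compute disaggregation concrete, here is a minimal sketch of a Spark batch job whose data lives in object storage (via Hadoop's S3A connector) rather than in HDFS colocated with the compute cluster. The bucket, paths, and job logic are hypothetical placeholders, not Stripe's actual setup.

# Minimal sketch: a Spark batch job with storage disaggregated from compute.
# Input and output live in object storage (S3A) instead of cluster-local HDFS.
# Bucket names and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("disaggregated-batch-example")
    # Use the Hadoop S3A filesystem so storage sits outside the compute cluster.
    .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    .getOrCreate()
)

# Read input from object storage rather than HDFS on the compute nodes.
events = spark.read.parquet("s3a://example-bucket/events/2024-01-01/")

# A simple aggregation standing in for a real asynchronous batch workflow.
daily_counts = events.groupBy("event_type").count()

# Write results back to object storage; compute nodes hold no persistent state.
daily_counts.write.mode("overwrite").parquet("s3a://example-bucket/reports/daily_counts/")

spark.stop()

With storage decoupled this way, compute capacity can be scaled, replaced, or upgraded independently of the data it processes.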


Responsibilities

  • Scope, design, implement, and deploy robust solutions, making appropriate tradeoffs between reliability, throughput, latency, resiliency, engineering velocity, and cost
  • Design and implement software solutions that improve the resiliency, reliability, efficiency, and manageability of batch processing infrastructure at scale

Who you are

We’re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.

Minimum requirements

  • 8+ years of professional hands-on software development experience
  • Proven track record of building large-scale, complex distributed systems; identifying shortcomings and optimization opportunities; and making data-driven cost-performance tradeoffs to influence design decisions
  • Experience building and operating infrastructure and tools that empower developers/product teams to deliver business value
  • Experience in the operational maintenance of large-scale distributed systems

Preferred qualifications

  • Track record of open source contributions to data processing or big data systems (Hadoop, Spark, Celeborn, Flink, etc.)
