Stripe is hiring a

Staff Engineer - Batch Compute

Bengaluru, India

Who we are

About Stripe

Stripe is a financial infrastructure platform for businesses. Millions of companies—from the world’s largest enterprises to the most ambitious startups—use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone’s reach while doing the most important work of your career.

About the team

The Batch Compute team manages the infrastructure, tooling, and systems behind batch processing at Stripe, which is currently powered by Hadoop and Spark. These batch processing systems drive several core asynchronous workflows at Stripe and operate at significant scale.

What you’ll do

We're looking for a Software Engineer with experience designing, building, and maintaining high-scale distributed systems. You will work with the team responsible for the core infrastructure that product teams use to build and operate batch processing jobs. You will have the opportunity to play a hands-on role in significantly re-architecting our current infrastructure to make it more efficient and resilient. This re-architecture will introduce disaggregation of Hadoop storage and compute using open source solutions.
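
To make the disaggregation idea concrete, here is a minimal, illustrative PySpark sketch (not Stripe's actual setup): the job reads its input from and writes its output to object storage rather than cluster-local HDFS, and offloads shuffle data to a remote shuffle service such as Apache Celeborn, so compute nodes hold no durable state. The bucket names, endpoints, and column names are hypothetical, and the Celeborn settings assume its Spark client jar and the Hadoop S3A connector are on the classpath; exact configuration keys vary by version.

```python
# Illustrative sketch of a storage/compute-disaggregated batch job.
# All names, paths, and endpoints below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("disaggregated-batch-job")
    # Offload shuffle data to a remote shuffle service (here: Apache Celeborn)
    # so compute nodes stay stateless. Class/key names assume the Spark 3
    # Celeborn client; they differ across versions.
    .config("spark.shuffle.manager",
            "org.apache.spark.shuffle.celeborn.SparkShuffleManager")
    .config("spark.celeborn.master.endpoints", "celeborn-master:9097")
    .getOrCreate()
)

# Read input from object storage (via the S3A connector) instead of
# cluster-local HDFS.
events = spark.read.parquet("s3a://example-bucket/events/date=2024-01-01/")

daily_totals = (
    events.groupBy("merchant_id")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("event_count"),
    )
)

# Write results back to object storage, so the compute cluster can be scaled
# down or replaced without losing any state.
daily_totals.write.mode("overwrite").parquet("s3a://example-bucket/daily_totals/")

spark.stop()
```

Keeping durable state off the compute cluster in this way is what allows compute to be scaled, replaced, or torn down independently of the data it processes.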

Responsibilities

  • Scope and lead technical projects within the Batch Compute domain.
  • Build and maintain the infrastructure that powers the core of Stripe.
  • Directly contribute to core systems and write code.
  • Work closely with the open source community to identify opportunities to adopt new open source features, as well as to contribute back upstream.
  • Ensure operational excellence and enable a highly available, reliable, and secure Batch Compute platform.

Who you are

We’re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.

Minimum requirements

  • 8+ years of professional experience writing high-quality, production-level code or software programs.
  • Experience with distributed data systems such as Spark, Flink, Trino, Kafka, etc.
  • Experience developing, maintaining, and debugging distributed systems built with open source tools.
  • Experience building infrastructure as a product, centered on user needs.
  • Experience optimizing the end-to-end performance of distributed systems.
  • Experience scaling distributed systems in a rapidly moving environment.

Preferred qualifications

  • Experience as a user of batch processing systems (Hadoop, Spark).
  • Track record of open source contributions to data processing or big data systems (Hadoop, Spark, Celeborn, Flink, etc.).

Apply for this job
