Engineer backend systems that ensure data protection at scale, supporting real-time governance decisions and handling complex data interactions across various technologies.
About Ketch
We’re rebuilding the internet’s data layer for the AI era. Ketch runs systems that analyze billions of events and make millions of decisions every day, giving global brands real-time intelligence and control over how data moves across their entire ecosystem. This requires infrastructure that can keep up with the speed, volume, and complexity of modern AI-driven systems and deliver high-integrity outcomes across billions of data interactions.
We’re a well-funded Series B startup, backed by top-tier investors and led by a team with multiple exits. We build fast, ship with ownership, and work directly with customers to solve their hardest data challenges.
Why Us?
The AI era is breaking every old assumption about data protection. Sensitive data moves faster, through more systems, with higher stakes. Traditional privacy and governance tools can’t keep up. You’ll help build the infrastructure that replaces them.
Join us to engineer real-time systems that actively safeguard personal data at massive scale. You’ll design pipelines that classify and control data as it flows, services that react instantly to risk, and distributed systems that enforce user rights across hundreds of technologies. Your work ensures that companies can use data responsibly and that individuals retain control over how their data is collected and used.
Role Overview
As a Backend Engineer at Ketch, you’ll build reliable, scalable systems that make sense of messy, high-stakes data flowing through databases, pipelines, SaaS tools, websites, and mobile apps. You’ll build Go microservices that connect across an organization’s entire stack, design APIs that support real-time governance decisions, and help construct pipelines that classify and route data with speed, accuracy, and resilience.
This is backend engineering where correctness matters, latency matters, and scale is real. You’ll help create the APIs and data engines that support automated governance, power agentic intelligence, and enable companies to act on insights across their data ecosystem.
This is a hybrid role based out of our San Francisco HQ, with three days a week in the office.
What You’ll Own
Build and evolve high-performance Go microservices that process distributed data flows and support real-time decisions.
Develop clean, well-versioned APIs and backend systems that orchestrate permissions, data directives, and automated actions across external systems.
Design and implement distributed pipelines and event-driven architectures that transform structured/unstructured data and feed downstream intelligence.
Own the full lifecycle: design, implementation, deployment, monitoring, iteration.
Apply resilience patterns (timeouts, retries, circuit breakers) to keep systems predictable under real-world load; a brief sketch of this pattern follows the list.
Drive performance improvements through concurrency tuning and efficient CPU/memory usage.
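As an illustration of the resilience work described above, here is a minimal sketch of a per-attempt timeout combined with retries and exponential backoff in Go. The callDownstream function, attempt counts, and timings are hypothetical placeholders, not part of Ketch's actual services.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// callDownstream is a hypothetical stand-in for a request to an external system.
func callDownstream(ctx context.Context) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(50 * time.Millisecond):
		if rand.Intn(3) == 0 { // simulate an intermittent downstream failure
			return errors.New("downstream error")
		}
		return nil
	}
}

// withRetry wraps a call with a per-attempt timeout and exponential backoff,
// so a slow or flaky dependency cannot stall the caller indefinitely.
func withRetry(ctx context.Context, attempts int, perAttempt time.Duration, fn func(context.Context) error) error {
	backoff := 100 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		attemptCtx, cancel := context.WithTimeout(ctx, perAttempt)
		lastErr = fn(attemptCtx)
		cancel()
		if lastErr == nil {
			return nil
		}
		// Wait before the next attempt, but stop early if the caller's context ends.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
			backoff *= 2
		}
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	// Overall budget for the operation; each attempt also has its own timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	if err := withRetry(ctx, 3, 200*time.Millisecond, callDownstream); err != nil {
		fmt.Println("giving up:", err)
		return
	}
	fmt.Println("success")
}
```

In practice, patterns like this are typically paired with circuit breakers and jittered backoff so that retries do not amplify load on an already struggling dependency.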