Saviynt is the most innovative cloud identity and access governance platform on the market. We secure hundreds of millions of identities at many of the world’s largest enterprises, helping them transform their identity programs and protect their people, assets, and infrastructure. We are growing aggressively and need senior cloud architects to help us scale our infrastructure.
As a Principal Engineer in the Cloud Architecture team, you will be an integral member of a small, high-performance/high-impact team responsible for compute, data storage and pipelines, and network architecture. You are a hands-on technical leader writing design docs and code. Successfully scaling our infrastructure requires not only making smart technical design decisions, but empowering our engineers to build upon and operate it.
WHAT YOU WILL BE DOING
Understand and deeply focus on the real-world benefits your systems and products will have on our customers. No ivory tower architecture.
Be driven by and have a bias toward autonomy. You’ll be given context on the problems we’re trying to solve, but you’ll need to figure out how to solve them on your own.
Be driven by and have a bias toward execution. You’ll need to employ excellent judgement, communicate your decisions clearly and widely, and be accountable for the results.
Be invested in the long-term view. While we need to deliver value every quarter, we must avoid technical debt and other forms of unnecessary complication that will serve us poorly in the future.
Possess engineering breadth and depth. We need generalists, but you also need to be deeply skilled in one or more areas of network, data storage, data pipelines, compute, or software delivery.
WHAT YOU BRING
Deep understanding of relational, document, and columnar database architecture, including schema design, performance tuning, and operational considerations
Experience with distributed relational databases such as CockroachDB or Vitess, including cluster operations, sharding/partitioning, and transactional consistency models
Strong knowledge of Change Data Capture (CDC) systems (e.g., Debezium), including designing reliable change streams and integrating them into downstream services
Experience building and operating streaming and batch data pipelines, ideally using technologies like Kafka for event streaming and modern orchestration/processing tools (e.g., Flink, Spark, Beam, or equivalent systems)
Expertise with object-store-based data architectures, including table formats such as Apache Iceberg, and an understanding of how they enable large-scale analytics, versioning, and schema evolution
Please note that we do not expect you to be an expert in every single one of the above. This list should give you an idea of the types of challenges we work on.