Software Engineer, Agentic Runtime
TL;DR
Design and operate low-latency core runtime services for AI agents, applying distributed-systems engineering with a focus on performance, reliability, and observability.
Responsibilities
- Own impactful runtime problems end‑to‑end — from architecture and design to production launch and ongoing reliability.
- Build and evolve core services for session lifecycle, streaming responses (e.g., gRPC/WebSockets), structured tool execution, memory/state, and policy/guardrails.
- Design for performance, correctness, and cost: reduce p50/p95 latency, improve tail behavior, and optimize token/tool budgets.
- Integrate with leading LLM providers (e.g., OpenAI, Anthropic, Google Gemini) and internal evaluation frameworks to improve quality and predictability.
- Harden the platform with fault isolation, retries, timeouts, circuit‑breaking, backpressure, and graceful degradation.
- Instrument deep observability (tracing, metrics, logs) and create playbooks/SLOs for high availability and on‑call excellence.
- Collaborate closely with product, quality, and application teams to prioritize the most impactful roadmap investments.
Qualifications
- 3+ years of software engineering experience building production distributed systems or cloud‑native applications.
- BS/BA in Computer Science or related field, or equivalent practical experience.
- Strong coding skills in at least one of: Python, Go, Java, or C++, with a focus on reliability, performance, and tests.
- Product‑minded: you prioritize customer impact, clear SLAs/SLOs, and pragmatic iteration.
- Ownership‑driven with a positive, proactive attitude; comfortable leading projects or learning from battle‑tested engineers.
- Experience operating services on Kubernetes and at least one major cloud (e.g., GCP, AWS, or Azure).
- Familiarity with event/streaming systems (e.g., Pub/Sub, Kafka), caching (e.g., Redis), and data stores for low‑latency paths.
- Practical understanding of LLM/agent building blocks: tool/function calling, structured outputs, streaming, and model selection/routing.
- Strong observability and debugging skills: tracing (e.g., OpenTelemetry), metrics, dashboards, and production forensics.
- Background in one or more areas is a plus: policy/guardrails, multi‑tenant isolation, rate‑limiting, concurrency control, cost optimization.
- This role is hybrid (3-4 days a week in one of our SF Bay Area offices).
AI-First Mindset at Glean:
At Glean, AI fluency is core to how we work, and we're committed to ensuring every new hire feels confident integrating AI into their everyday work. As part of the interview process, you'll complete a brief AI-focused exercise or discussion so we can understand how you think about, design, and use AI to drive impact in your role. Feel free to reference any tools, platforms, or workflows you use today — prior Glean experience isn't required.
Benefits
Education Stipend
Annual education and wellness stipends to support your growth and wellbeing.
Free Meals & Snacks
Healthy lunches provided daily to keep you fueled and focused.
Home Office Stipend
A home office improvement stipend when you join.
Glean is a Work AI platform designed to help organizations optimize their operations through intelligent search and AI-driven capabilities. By offering a scalable and secure infrastructure, Glean empowers businesses across various industries to harness the full potential of AI while maintaining control and customization.
- Founded: 2019
- Employees: 51-200
- Total raised: $160M