Own the infrastructure powering AI Care platform innovations, tackling AI-specific challenges in real-time processing and optimization while collaborating across engineering and product teams.
Design, build, and maintain the inference infrastructure that powers Sword Health's AI products, ensuring models are served with high throughput, low latency, and cost efficiency.
Own the end-to-end deployment pipeline for AI models - from real-time computer vision powering movement analysis to large language models driving conversational AI experiences.
Architect and scale Kubernetes clusters for GPU-accelerated workloads, including autoscaling strategies, resource scheduling, and multi-model serving.
Build and operate the infrastructure behind Sword Health's real-time AI agents, including provisioning WebRTC clusters and serving speech-to-text and text-to-speech models at low latency.
Drive inference scaling strategies - evaluate and implement techniques such as speculative decoding, continuous batching, and model parallelism to meet growing demand without proportionally increasing costs.
Develop and maintain Infrastructure as Code (Terraform) and GitOps workflows tailored to GPU-enabled, AI-specific environments.
Instrument and monitor AI inference systems, building observability around GPU utilization, model latency, throughput, and error rates to ensure reliability and performance.
Collaborate closely with ML Engineers, Data Scientists, and Product teams to translate model requirements into robust, production-ready infrastructure.
Evaluate emerging AI infrastructure tools, frameworks, and hardware to keep Sword Health at the cutting edge of inference performance and efficiency.
Mentor team members on AI infrastructure best practices, fostering knowledge sharing around GPU workloads, model serving patterns, and production ML systems.
5+ years of experience in infrastructure engineering, with at least 2 years focused on AI/ML workloads in production environments.
Strong experience with Kubernetes for orchestrating GPU-accelerated workloads, including scheduling, resource management, and autoscaling for inference services.
Hands-on experience with model serving and inference optimization frameworks for both real-time computer vision and large language model workloads.
Solid understanding of LLM inference optimization techniques, including speculative decoding, batching strategies, quantization, and inference scaling patterns.
Experience provisioning and managing infrastructure for real-time AI systems, including WebRTC clusters and AI agent architectures.
Familiarity with real-time video/computer vision inference pipelines and the infrastructure challenges of processing continuous visual data streams at low latency.
Familiarity with speech-to-text and text-to-speech serving infrastructure and the challenges of running voice AI at low latency.
Experience with Infrastructure as Code (Terraform or similar) and GitOps methodologies for managing complex, GPU-enabled environments.
Working knowledge of GPU infrastructure - NVIDIA CUDA ecosystem, multi-GPU setups, and GPU monitoring/profiling.
Strong Linux systems fundamentals and networking knowledge, particularly for latency-sensitive, real-time workloads.
Fluent in English (written and oral).
A proactive, ownership-driven mindset - you spot a bottleneck in an inference pipeline and fix it before it impacts production.
AI Inference & Model Serving:
Experience with LLM serving engines such as vLLM, SGLang, or llm-d.
Experience with NVIDIA Triton Inference Server and TensorRT for real-time computer vision workloads.
Familiarity with NVIDIA Riva or similar platforms for STT/TTS serving.
Understanding of speculative decoding, continuous batching, quantization, and model parallelism techniques.
Kubernetes & Infrastructure:
Experience with Istio or similar service mesh.
Experience with Kafka for event streaming.
Experience with Prometheus, AlertManager, and Grafana for monitoring and observability.
Experience with Elasticsearch, Logstash, and Kibana (ELK) for log management.
Experience with Vault for secrets management.
Experience with Redis, MySQL, and DNS management.
Experience provisioning infrastructure on AWS, Azure, or GCP.
Good knowledge of cloud networking, including VPC management, routing, NAT, and troubleshooting with tools like tcpdump.
General:
Experience with WebRTC infrastructure and real-time media streaming.
Experience with Python, Go, or similar languages commonly used in ML infrastructure tooling.
Familiarity with SCRUM methodology.
A stimulating, fast-paced environment with lots of room for creativity;
A bright future at a promising high-tech startup company;
Career development and growth, with a competitive salary;
The opportunity to work with a talented team and to add real value to an innovative solution with the potential to change the future of healthcare;
A flexible environment where you can control your hours (remotely) with unlimited vacation;
Access to our health and well-being program (digital therapist sessions);
Remote or Hybrid work policy;
Equity shares
Flexible working hours
Health, dental and vision insurance
English classes
Discretionary vacation
Remote-first company
Sword Health is transforming healthcare with its AI Care platform, making healthcare more accessible while drastically lowering costs for payers and organizations. Initially focused on pain management, Sword has expanded into women's health, movement health, and mental health, serving over 700,000 members across three continents and helping enterprise clients save over $1 billion in unnecessary healthcare expenses.