Senior AI Infrastructure Engineer

TL;DR

Design and scale high-performance AI infrastructure for autonomous driving models, bridging research and production while enhancing system efficiency and reliability.

Who we are

Gatik, the leader in autonomous middle-mile logistics, is revolutionizing the B2B supply chain with its autonomous transportation-as-a-service (ATaaS) solution, prioritizing safe, consistent deliveries while streamlining freight movement and reducing congestion. The company focuses on short-haul B2B logistics for Fortune 500 retailers and, in 2021, launched the world's first fully driverless commercial transportation service with Walmart. Gatik's Class 3-7 autonomous trucks are commercially deployed across major markets, including Texas, Arkansas, and Ontario, Canada, driving innovation in freight transportation.

The company's proprietary Level 4 autonomous technology, Gatik Carrier™, is custom-built to transport freight safely and efficiently between pick-up and drop-off locations on the middle mile. With robust capabilities in both highway and urban environments, Gatik Carrier™ serves as an all-encompassing solution that integrates advanced software and hardware powering the fleet, facilitating effortless integration into customers' logistics operations. 

About the role

We are seeking a Senior AI Infrastructure Engineer to design, build, and scale the high-performance AI platform powering our autonomous driving models. While researchers focus on developing perception, planning, and world models, you will be responsible for the underlying infrastructure that enables distributed training, experiment tracking, and seamless model deployment. You will bridge the gap between research and production, ensuring our AI stack is scalable, resilient, and highly efficient.

This role is onsite 5 days a week at our Mountain View, CA office!

What you'll do

  • Distributed Training & ML Systems Support
    • Scale Research Workloads: Enable researchers to scale complex models (VLA, World Models) across multi-node setups using PyTorch Distributed and Ray Train.
    • Performance Optimization: Architect and optimize multi-GPU setups, ensuring efficient model parallelism and data parallelism techniques across H100/A100 clusters.
    • Networking & Hardware Tuning: Optimize low-level communication (e.g., NCCL tuning, InfiniBand, or RoCE v2) to minimize latency for 3D Gaussian Splatting (3DGS) and large-scale training.
    • Intelligent Resource Scheduling: Optimize hardware utilization and cost-efficiency through Kubernetes-native GPU scheduling (NVIDIA GPU Operator, Kubeflow).
    • Inference Performance Engineering: Deploy and scale optimized model artifacts using TensorRT, ONNX Runtime, and Triton Inference Server, fine-tuning pipelines for both real-time and batch processing.
  • Agentic Infrastructure & Automation
    • Self-Healing AI Infrastructure: Architect and deploy Autonomous AI Agents (LangGraph, CrewAI, or AutoGen) to monitor GPU cluster health, enabling automated real-time triage of hardware failures and NCCL timeouts.
    • Agentic DevOps & CI/CD: Develop agent-driven automation, such as Agentic PR Reviewers for infrastructure code and AI agents that proactively suggest model-specific Kubernetes resource optimizations.
    • Agentic Data Curation: Support researchers in building "Data Machines" where AI agents autonomously curate, label, and verify high-priority edge cases from raw data.
  • Model Management & Lifecycle (MLOps)
    • Automated Lifecycle Management: Design and maintain ML infrastructure leveraging MLflow, Argo Workflows, and Kubernetes to automate the end-to-end model lifecycle.
    • Experiment & Model Tracking: Integrate feature stores and experiment tracking systems to provide a robust system of record for every model iteration.
    • Deployment Strategies: Implement robust serving mechanisms, including A/B testing, shadow deployments, and rollback mechanisms.
  • Cloud-Native Foundations & Data Integration
    • Infrastructure as Code: Drive the "Everything as Code" philosophy using Terraform and Helm.
    • Data Pipelines: Collaborate with data engineering teams to scale high-bandwidth ETL pipelines using Apache Airflow, Kafka, and Spark, ensuring seamless data flow from raw sensor logs to optimized storage in S3, GCS, or Delta Lake.
  • Monitoring & Observability
    • System Metrics: Define and track key ML system KPIs, including training convergence, model latency, inference throughput, and feature drift detection.
    • Infrastructure Health: Maintain deep visibility into platform health using Prometheus, Grafana, OpenTelemetry, and the ELK Stack, tracking low-level infrastructure health alongside high-level ML metrics.
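To give a concrete flavor of the drift-detection work described above: one widely used drift metric is the population stability index (PSI), which compares the binned distribution of a feature at training time against what the model sees in production. The sketch below is illustrative only (function names and thresholds are our own, not part of Gatik's stack):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Rule of thumb: PSI < 0.1 suggests little drift, 0.1-0.25 moderate
    drift, and > 0.25 significant drift worth alerting on.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def histogram(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty buckets to avoid log(0) and division by zero.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In practice a metric like this would be computed per feature in the serving path and exported (e.g., as a Prometheus gauge) so that drift shows up alongside latency and throughput dashboards.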

What we're looking for

  • Experience: 5+ years in ML infrastructure, MLOps, or DevOps supporting high-scale compute environments.
  • ML Expertise: Deep understanding of multi-GPU training strategies (FSDP, DeepSpeed, Ray Train) and high-performance networking (NCCL, InfiniBand).
  • Infrastructure Automation: Mastery of Kubernetes, Terraform, and Helm, with a focus on GPU-native orchestration.
  • AI Agent Frameworks: Proven experience building or supporting Agentic Workflows for infrastructure or data automation (e.g., using LLMs to drive DevOps tasks).
  • Platform Mastery: Expertise in MLflow, Argo Workflows, and Kubernetes.
  • Containerization: Strong experience with Docker, Kubernetes, and Helm.
  • Data & CI/CD: Proficiency in Apache Airflow, Kafka, Spark, and GitOps automation.
  • Core Skills: Proficiency in Python and Bash; experience with Go or Rust is a plus.

Bonus Qualifications

  • Advanced AI Protocols: Familiarity with the Model Context Protocol (MCP) to standardize how AI agents interact with internal databases and orchestration APIs.
  • Hybrid & Physical AI: Experience in hybrid cloud and on-prem GPU cluster management for Physical AI workloads (e.g., 3DGS, World Models).
  • Agentic Observability: Experience utilizing LLMs for semantic monitoring and log analysis to detect complex distributed system failures that traditional threshold-based alerts miss.
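As context for the agentic-observability point above: whether or not an LLM is in the loop, semantic log analysis usually starts by collapsing raw log lines into templates so that rare patterns stand out against the bulk of routine traffic. A minimal sketch of that preprocessing step, with illustrative regexes and names of our own choosing:

```python
import re
from collections import Counter

def log_template(line):
    """Collapse variable fields (hex ids, IPs, numbers) into placeholders."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+\.\d+\.\d+\.\d+", "<IP>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

def rare_templates(lines, threshold=0.01):
    """Return templates whose share of all lines falls below `threshold`."""
    counts = Counter(log_template(ln) for ln in lines)
    total = sum(counts.values())
    return [t for t, c in counts.items() if c / total < threshold]
```

The rare templates this surfaces are exactly the candidates one might then hand to an LLM for semantic triage, rather than feeding it the full log volume.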

Salary Range: $180,000 – $240,000

More about Gatik

Founded in 2017 by experts in autonomous vehicle technology, Gatik has rapidly expanded its presence to Mountain View, Dallas-Fort Worth, Arkansas, and Toronto. As the first and only company to achieve fully driverless middle-mile commercial deliveries, Gatik holds a unique and defensible position in the AV industry, with a clear trajectory toward sustainable growth and profitability.

We have delivered complete, proprietary AV technology, an integration of software and hardware, to enable earlier successes for our clients in constrained Level 4 autonomy. By choosing the middle mile, with defined point-to-point delivery, we have simplified some of the more complex AV challenges, enabling us to achieve full autonomy ahead of competitors. Given our extensive knowledge of Gatik's well-defined, fixed-route operational design domains (ODDs) and hybrid architecture, we are able to hyper-optimize our models with far less data, establish gate-keeping mechanisms to maintain explainability, and ensure the continued safety of the system for unmanned operations.

Visit us at Gatik for more company information and Careers at Gatik for more open roles.

Taking care of our team

At Gatik, we connect people of extraordinary talent and experience to an opportunity to create a more resilient supply chain and contribute to our environment’s sustainability. We are diverse in our backgrounds and perspectives yet united by a bold vision and shared commitment to our values. Our culture emphasizes the importance of collaboration, respect and agility.

We at Gatik strive to create a diverse and inclusive environment where everyone feels they have opportunities to succeed and grow because we know that together we can do great things. We are committed to an inclusive and diverse team. We do not discriminate based on race, color, ethnicity, ancestry, national origin, religion, sex, gender, gender identity, gender expression, sexual orientation, age, disability, veteran status, genetic information, marital status or any legally protected status.

 

