Lightning AI

Platform Support Engineer (APAC)

TL;DR

Support ML engineers in resolving complex distributed system challenges while enhancing the reliability of large-scale AI workloads on cloud infrastructure and Kubernetes.

Who We Are

Lightning AI is the company behind PyTorch Lightning. Founded in 2019, we build an end-to-end platform for developing, training, and deploying AI systems—designed to take ideas from research to production with less friction.

Through our merger with Voltage Park, a neocloud and AI Factory, Lightning AI combines developer-first software with cost-efficient, large-scale compute. Teams get the tools they need for experimentation, training, and production inference, with security, observability, and control built in.

We serve solo researchers, startups, and large enterprises. Lightning AI operates globally with offices in New York City, San Francisco, Seattle, and London, and is backed by Coatue, Index Ventures, Bain Capital Ventures, and Firstminute.

Our Values

  • Move Fast: We act with speed and precision, breaking down big challenges into achievable steps.

  • Focus: We complete one goal at a time with care, collaborating as a team to deliver features with precision.

  • Balance: Sustained performance comes from rest and recovery. We ensure a healthy work-life balance to keep you at your best.

  • Craftsmanship: Innovation through excellence. Every detail matters, and we take pride in mastering our craft.

  • Minimal: Simplicity drives our innovation. We eliminate complexity through discipline and focus on what truly matters.

What We’re Looking For

Lightning AI is looking to hire a Platform Support Engineer to join our APAC Customer Experience team, supporting ML engineers running large-scale training and inference workloads across cloud infrastructure, Kubernetes, and GPU platforms in production environments.

This role is not a ticket-routing or traditional support engineering position. You are a technical partner to ML teams, helping diagnose failures, improve reliability, and guide customers through complex distributed systems problems. The problems range from Kubernetes scheduling and GPU orchestration to distributed PyTorch failures, inference latency, networking bottlenecks, storage performance, and platform reliability. You'll gain exposure to a wide variety of real-world AI workloads across industries and help shape the infrastructure powering the next generation of ML applications.

This role is remote and open to candidates based in either the Philippines or Singapore. The role follows a Thursday–Sunday schedule, with working hours from 7:00 AM to 5:00 PM local time (UTC+8).


What You'll Do

Work Directly With ML Engineers

  • Partner directly with customer engineering teams running training and inference workloads in production
  • Help customers diagnose and resolve complex distributed systems and ML infrastructure issues
  • Act as a technical advisor during high-impact incidents and platform degradation events
  • Translate infrastructure-level issues into actionable guidance for ML engineers
  • Build credibility with customers through strong technical reasoning and clear communication

Debug ML Infrastructure & Distributed Workloads

  • Investigate failures involving distributed training, Kubernetes orchestration, GPU allocation, networking, and storage systems
  • Troubleshoot issues involving PyTorch, CUDA, NCCL, and inference serving
  • Analyze logs, metrics, traces, and system behavior to isolate root causes
  • Debug containerized workloads running across Kubernetes and bare metal GPU environments
  • Support customers scaling workloads across multi-node GPU systems
  • Diagnose performance bottlenecks involving compute, memory, networking, or storage

Improve Reliability & Platform Operations

  • Identify recurring patterns across customer issues and drive long-term reliability improvements
  • Contribute to post-incident reviews and operational improvements
  • Build internal tooling, automation, documentation, and runbooks
  • Partner closely with infrastructure, networking, and platform engineering teams
  • Help improve observability, operational visibility, and troubleshooting workflows
  • Improve the customer experience through better processes and technical guidance

What This Role Is Not

To set clear expectations:

  • This is not a traditional help desk or ticket routing support role
  • This is not purely customer success or account management
  • This is not a backend engineering role
  • This is not a passive escalation position

This role is for engineers who enjoy solving difficult technical problems while working closely with other engineers.


What You’ll Need

Required Qualifications

Infrastructure & Systems

  • Strong software engineering and systems troubleshooting background
  • Experience with Kubernetes and containerized environments
  • Linux systems knowledge, including networking, storage, process management, and performance tuning
  • Experience with cloud infrastructure and distributed systems
  • Experience with observability and debugging tools such as Prometheus, Grafana, or OpenTelemetry

ML Infrastructure Experience

  • Hands-on experience operating machine learning workloads in production or research environments
  • Experience with distributed ML systems and tooling such as PyTorch, CUDA, or NCCL
  • Familiarity with GPU infrastructure and orchestration
  • Experience troubleshooting performance, reliability, or scaling issues in ML infrastructure
  • Understanding of the operational challenges involved in running ML systems at scale

Collaboration

  • Strong communication skills and ability to work directly with highly technical customers and engineering teams
  • Comfortable operating in fast-moving, highly ambiguous environments
  • Enjoys solving complex technical problems collaboratively

Nice-to-Haves

  • Experience with large-scale model training or distributed inference systems
  • Familiarity with Ray, Kubeflow, Slurm, or similar distributed scheduling platforms
  • Experience with InfiniBand, RDMA, or high-performance networking
  • Experience operating bare metal infrastructure
  • Familiarity with storage systems commonly used in ML environments
  • Experience working at an AI infrastructure, cloud, MLOps, or developer tooling company
  • Contributions to platform engineering, developer infrastructure, or operational tooling projects
  • Experience writing automation, tooling, or scripts in Python or similar languages

Benefits and Perks

We offer a comprehensive and competitive benefits package designed to support our employees’ health, well-being, and long-term success. Benefits may vary by location, team, and role.

Benefits include:

  • Comprehensive medical, dental and vision coverage (U.S.); Private medical and dental insurance (U.K.)
  • Retirement and financial wellness support (U.S.); Pension contribution (U.K.)
  • Generous paid time off, plus holidays
  • Paid parental leave
  • Professional development support
  • Wellness and work-from-home stipends
  • Flexible work environment


At Lightning AI, we are committed to fostering an inclusive and diverse workplace. We believe that diverse teams drive innovation and create better products. We provide equal employment opportunities to all employees and applicants without regard to race, color, religion, gender, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other protected characteristic. We are dedicated to building a culture where everyone can thrive and contribute to their fullest potential.

