Senior Data Scientist - LLM Training & Fine-tuning

Lead end-to-end improvements for open-source LLMs focused on Indian languages, enhancing instruction following and tool reliability with an emphasis on production performance.

Job Title
Senior Data Scientist - LLM Training & Fine-tuning (Indian Languages, Tool Calling, Speed)
Location: Bangalore

About the Role

We’re looking for a hands-on Data Scientist / Research Scientist who can fine-tune and train open-source LLMs end-to-end, not just run LoRA scripts. You’ll own model improvement for Indian languages and code-switching (Hinglish, etc.), instruction following, and reliable tool/function calling, with a strong focus on latency, throughput, and production deployability.
This is a builder role: you’ll take models from research → experiments → evals → production.

What You’ll Do (Responsibilities)

• Train and fine-tune open LLMs (continued pretraining, SFT, preference optimization like DPO/IPO/ORPO, reward modeling if needed; see the fine-tuning sketch after this list) for:
  - Indian languages + multilingual / code-switching
  - Strong instruction following
  - Reliable tool/function calling (structured JSON, function schemas, deterministic outputs)
• Build data pipelines for high-quality training corpora:
  - Instruction datasets, tool-call traces, multilingual data, synthetic data generation
  - De-duplication, contamination control, quality filtering, safety filtering
• Develop evaluation frameworks and dashboards (a format-validity sketch also follows this list):
  - Offline + online evals, regression testing
  - Tool-calling accuracy, format validity, multilingual benchmarks, latency/cost metrics
• Optimize models for speed and serving (a serving sketch follows the Tech Stack list below):
  - Quantization (AWQ/GPTQ/bitsandbytes), distillation, speculative decoding, KV-cache optimizations
  - Serve via vLLM/TGI/TensorRT-LLM/ONNX where appropriate
• Improve alignment and reliability:
  - Reduce hallucinations, improve refusal behavior, enforce structured outputs
  - Prompting + training strategies for robust compliance and guardrails
• Collaborate with engineering to ship:
  - Model packaging, CI for evals, A/B testing, monitoring drift and quality
• Contribute research:
  - Read papers, propose experiments, publish internal notes, and turn ideas into measurable gains
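
As a concrete illustration of the fine-tuning bullet above, here is a minimal QLoRA SFT sketch using the Hugging Face/TRL stack listed under Tech Stack. The model ID, dataset name, and hyperparameters are placeholders, not a prescribed recipe; it assumes a recent TRL and a chat-style "messages" column in the dataset.

```python
# Minimal QLoRA SFT sketch. Model ID, dataset name, and hyperparameters
# are placeholders, not a recommended recipe.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # any open chat model

# Load the frozen base in 4-bit NF4 so it fits on a single GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Train low-rank adapters on the attention projections only.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")

# Hypothetical instruction dataset with a chat-style "messages" column.
data = load_dataset("your-org/hinglish-instructions", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=data,
    peft_config=lora,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
    args=SFTConfig(output_dir="qlora-sft", num_train_epochs=1,
                   per_device_train_batch_size=4),
)
trainer.train()
```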
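
Tool-calling reliability is only as good as its measurement, so here is a sketch of the format-validity check named in the evaluation bullet. The function schema and model outputs are illustrative stand-ins; validation leans on the jsonschema package.

```python
# Sketch of one eval dimension: tool-call format validity.
# Schema and sample outputs are illustrative, not a real benchmark.
import json
from jsonschema import ValidationError, validate

# Hypothetical function schema the model must emit a call against.
SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"const": "get_weather"},
        "arguments": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
    "required": ["name", "arguments"],
}

def tool_call_valid(raw: str) -> bool:
    """True iff the model output parses as JSON and matches the schema."""
    try:
        validate(json.loads(raw), SCHEMA)
        return True
    except (json.JSONDecodeError, ValidationError):
        return False

# Format-validity rate over a batch of model outputs (stand-ins here).
outputs = ['{"name": "get_weather", "arguments": {"city": "Mumbai"}}',
           'Sure! The weather tool is get_weather(city="Mumbai")']
rate = sum(tool_call_valid(o) for o in outputs) / len(outputs)
print(f"format validity: {rate:.0%}")  # 50% for the stand-ins above
```

In a real harness this check would sit alongside argument-accuracy and execution-success metrics, gated in CI as a regression test.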

What We’re Looking For (Qualifications)

Must-Have

• 4-6 years in ML/DS, with direct LLM training/fine-tuning experience
• Demonstrated ability to run end-to-end model improvement:
  - data → training → eval → deployment constraints → iteration
• Strong practical knowledge of:
  - Transformers, tokenization, multilingual modeling
  - Fine-tuning methods: LoRA/QLoRA, full fine-tune, continued pretraining
  - Alignment: SFT, DPO/IPO/ORPO, and when to use each (see the DPO sketch after this list)
• Experience building or improving tool/function calling and structured output reliability
• Strong coding skills in Python, deep familiarity with PyTorch
• Comfortable with distributed training and GPU stacks:
  - DeepSpeed / FSDP, Accelerate, multi-GPU/multi-node workflows
• Solid ML fundamentals: optimization, regularization, scaling-laws intuition, error analysis
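
To make the alignment line concrete, a minimal DPO pass with TRL; the dataset name and hyperparameters are placeholders, and the run assumes you start from an SFT checkpoint. IPO is a loss_type switch on DPOConfig, while ORPO has its own trainer in TRL.

```python
# Minimal DPO sketch with TRL; dataset name and hyperparameters are
# placeholders. Start from an SFT checkpoint, not the raw base model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "your-org/sft-checkpoint"  # hypothetical SFT output
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference pairs with "prompt", "chosen", "rejected" columns (hypothetical).
pairs = load_dataset("your-org/tool-call-preferences", split="train")

trainer = DPOTrainer(
    model=model,  # with no ref_model given, TRL snapshots a frozen reference
    args=DPOConfig(output_dir="dpo-out", beta=0.1,
                   per_device_train_batch_size=2),
    train_dataset=pairs,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
)
trainer.train()
```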

Nice-to-Have

• Experience with Indian language NLP:
  - Indic scripts, transliteration, normalization, code-mixing, ASR/TTS text quirks
• Experience with pretraining from scratch or large-scale continued pretraining
• Practical knowledge of serving:
  - vLLM / TGI / TensorRT-LLM, quantization + calibration, profiling
• Experience with data governance: privacy, PII redaction, dataset documentation

Tech Stack (Typical)

  • PyTorch, Hugging Face Transformers/Datasets, Accelerate
  • DeepSpeed / FSDP, PEFT (LoRA/QLoRA)
  • Weights & Biases / MLflow
  • vLLM / TGI / TensorRT-LLM
  • Ray / Airflow / Spark (optional), Docker/Kubernetes
  • Vector DB / RAG stack familiarity is a plus
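
As one example of the serving side mentioned above, a sketch of offline batch inference over an AWQ-quantized checkpoint with vLLM's Python API (the model name is a placeholder):

```python
# Sketch: serving an AWQ-quantized model with vLLM's offline API.
# Model name is a placeholder; throughput/latency depend on hardware and batch size.
from vllm import LLM, SamplingParams

llm = LLM(model="your-org/model-awq", quantization="awq")
params = SamplingParams(temperature=0.0, max_tokens=256)  # deterministic outputs for tool calls

outputs = llm.generate(["Mausam kaisa hai Mumbai mein?"], params)
for out in outputs:
    print(out.outputs[0].text)
```

In production the same checkpoint would typically sit behind vLLM's OpenAI-compatible server rather than this offline entry point.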


What Success Looks Like (90–180 Days)

• Ship a fine-tuned open model that measurably improves:
  - Instruction following and tool-calling correctness
  - Indic language performance + code-switching robustness
  - Lower latency / higher throughput at equal quality
• Stand up a repeatable pipeline:
  - Dataset versioning, training recipes, eval harness, regression gates
• Build a roadmap for next upgrades (distillation, preference tuning, multilingual expansion)

Interview Process

  • 30-min intro + role fit
  • Technical deep dive: prior LLM work (training/evals/production constraints)
  • Take-home or live exercise: design an LLM fine-tuning + eval plan for tool calling + Indic language
  • Systems round: training/serving tradeoffs, cost/latency, failure modes
  • Culture + collaboration round

About Apna

Founded in 2019, the Apna mobile app is India’s largest professional networking platform, dedicated to helping India’s burgeoning working class unlock unique professional networking and skilling opportunities. The app is currently live in 14 cities: Mumbai, Delhi-NCR, Bengaluru, Hyderabad, Pune, Ahmedabad, Jaipur, Ranchi, Kolkata, Surat, Lucknow, Kanpur, Ludhiana, and Chandigarh. Having raised $90+ million from marquee investors like Insight Partners, Tiger Global, Lightspeed India, Sequoia Capital, Rocketship.vc, and Greenoaks Capital, Apna is on a mission to enable livelihoods for billions in India. With over 10 million users, more than 100,000 employers that trust the platform, and a presence in 14 cities and counting, India has a new destination to discover relevant opportunities.
