AI Engineer (Istanbul / Ankara / Izmir)

Overview

Develop, fine-tune, and optimize large language models while managing cloud deployments and collaborating with cross-functional teams on impactful international AI projects.

About the Role

We are seeking a skilled AI Engineer with hands-on experience in LLM deployment, fine-tuning, and cloud-based AI workflows. This is a fully onsite position, and candidates must have no international travel restrictions, as the role may require short-term overseas assignments. Fluent English communication is mandatory due to international project collaborations.

Key Responsibilities:

Develop, fine-tune, and optimize Large Language Models (LLMs) and Computer Vision (CV) models for production use

Deploy AI models in production using Docker, container-based environments, and CI/CD workflows

Build and maintain scalable inference pipelines and AI-driven APIs

Manage workloads in cloud environments (AWS, GCP, Azure, Huawei Cloud, etc.)

Improve model accuracy, performance, and efficiency through evaluation and optimization

Implement data preprocessing, model training, and experiment tracking workflows

Collaborate closely with cross-functional teams and communicate effectively in English

Troubleshoot model, infrastructure, and deployment-related issues

Work onsite and participate in international travel when required

Requirements

  • Minimum 2–3 years of professional experience in AI Engineering / ML Engineering / LLM Engineering
  • Strong, practical experience with LLM deployment, serving frameworks, and inference optimization
  • Hands-on experience in LLM fine-tuning (LoRA, QLoRA, instruction tuning, RLHF/DPO, etc.)
  • Strong programming skills in Python; experience with PyTorch or TensorFlow
  • Proficiency with Docker and containerized application development
  • Experience with cloud platforms (AWS / Azure / GCP / Huawei Cloud)
  • Familiarity with MLOps tools (Git, CI/CD, model registry, experiment tracking)
  • Fluent in English, both written and spoken
  • No restrictions on international travel
  • Willingness to work full-time onsite (not remote)

Preferred Skills (Nice to Have)

  • Experience with distributed training, model parallelism, or GPU clusters
  • Familiarity with Kubernetes and cloud-native AI infrastructure
  • Experience with vLLM, Hugging Face ecosystem, or similar serving technologies
  • Knowledge of vector databases, embeddings, and RAG systems
  • Strong Linux system skills (bash scripting, process management, system monitoring, environment setup, troubleshooting)

Over the past 12 years, with nearly 5,000 engineers and researchers, we have contributed to the growth of the ecosystem by training information and communication technology professionals, and we deliver global projects. Together, we are coding the future!
