Staff Engineer - DataOps Engineer

We are seeking a DataOps Engineer to join our Tech Delivery and Infrastructure Operations teams, playing a key role in ensuring the reliability, automation, and performance of our analytics and data platforms. The role is primarily DataOps-focused, combining elements of DevOps and SRE to sustain and optimize data-driven environments across global business units.

You will manage end-to-end data operations, from SQL diagnostics and data pipeline reliability to automation, monitoring, and deployment of analytics workloads on cloud platforms. You'll collaborate with Data Engineering, Product, and Infrastructure teams to maintain scalable, secure, and high-performing systems.

Key Responsibilities

  • Manage and support data pipelines, ETL processes, and analytics platforms, ensuring reliability, accuracy, and accessibility
  • Execute data validation, quality checks, and performance tuning using SQL and Python/Shell scripting (a brief sketch follows this list)
  • Implement monitoring and observability using Datadog, Grafana, and Prometheus to track system health and performance
  • Collaborate with DevOps and Infrastructure teams to integrate data deployments into CI/CD pipelines (Jenkins, Azure DevOps, Git)
  • Apply infrastructure-as-code principles (Terraform, Ansible) for provisioning and automation of data environments
  • Support incident and request management via ServiceNow, ensuring SLA adherence and root cause analysis
  • Work closely with security and compliance teams to maintain data governance and protection standards
  • Participate in Agile ceremonies within Scrum/Kanban models to align with cross-functional delivery squads
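
To give a flavor of the validation work described above, here is a minimal sketch of a SQL-driven quality check in Python. Everything specific in it is an assumption for illustration: the orders table, its columns, and the three checks are invented, and sqlite3 stands in for whichever DB-API 2.0 driver (psycopg2, pyodbc, etc.) the actual platform uses.

    import sqlite3  # stand-in for any DB-API 2.0 driver (psycopg2, pyodbc, ...)

    # Hypothetical checks: table name, columns, and thresholds are invented
    # for illustration and are not taken from this job description.
    CHECKS = {
        "row_count": "SELECT COUNT(*) FROM orders",
        "null_order_ids": "SELECT COUNT(*) FROM orders WHERE order_id IS NULL",
        "duplicate_order_ids": ("SELECT COUNT(*) FROM (SELECT order_id FROM orders "
                                "GROUP BY order_id HAVING COUNT(*) > 1)"),
    }

    def run_checks(conn):
        """Run each SQL probe and collect human-readable failures."""
        results = {name: conn.execute(q).fetchone()[0] for name, q in CHECKS.items()}
        failures = []
        if results["row_count"] == 0:
            failures.append("orders table is empty")
        if results["null_order_ids"]:
            failures.append(f"{results['null_order_ids']} rows with NULL order_id")
        if results["duplicate_order_ids"]:
            failures.append(f"{results['duplicate_order_ids']} duplicated order_id values")
        return results, failures

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)",
                         [(1, 9.99), (2, 4.50), (2, 4.50), (None, 1.00)])
        results, failures = run_checks(conn)
        print(failures)  # ['1 rows with NULL order_id', '1 duplicated order_id values']

In practice a check like this would run as a step in the pipeline (for example, a Jenkins stage) and fail the build, or alert through the monitoring stack, whenever the failures list is non-empty.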

Required Skills & Experience

  • 6 years of experience in DataOps, Data Engineering Operations, or Analytics Platform Support, with solid exposure to DevOps/SRE practices
  • Proficiency in SQL and Python/Shell scripting for automation and data diagnostics
  • Experience with cloud platforms (AWS mandatory; exposure to Azure/GCP a plus)
  • Familiarity with CI/CD tools (Jenkins, Azure DevOps), version control (Git), and IaC frameworks (Terraform, Ansible)
  • Working knowledge of monitoring tools (Datadog, Grafana, Prometheus)
  • Understanding of containerization concepts (Docker, Kubernetes)
  • Strong grasp of data governance, observability, and quality frameworks
  • Experience in incident management and operational metrics tracking (MTTR, uptime, latency); a short illustration follows this list
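
To make the last bullet concrete, here is a small sketch of how MTTR and uptime might be derived from incident start/resolve timestamps. The incident data below is invented sample data; in a real setting the figures would come from ServiceNow records or the monitoring stack.

    from datetime import datetime, timedelta

    # Invented sample incidents: (started, resolved) pairs.
    incidents = [
        (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 9, 45)),
        (datetime(2024, 5, 3, 14, 0), datetime(2024, 5, 3, 16, 30)),
        (datetime(2024, 5, 7, 2, 15), datetime(2024, 5, 7, 2, 45)),
    ]

    def mttr(incidents):
        """Mean time to repair: average of (resolved - started)."""
        repairs = [resolved - started for started, resolved in incidents]
        return sum(repairs, timedelta()) / len(repairs)

    def uptime_pct(incidents, window):
        """Availability over a reporting window, counting incident time as down."""
        downtime = sum((resolved - started for started, resolved in incidents),
                       timedelta())
        return 100 * (1 - downtime / window)

    print(mttr(incidents))                                      # 1:15:00
    print(round(uptime_pct(incidents, timedelta(days=30)), 3))  # 99.479

In production these figures would typically be aggregated per service and reported against SLA targets rather than printed.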

Must-have skills: Python (strong), SQL (strong), DevOps on AWS (strong), DevOps on Azure (strong), Datadog.

👋🏼 We're Nagarro. We are a digital product engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale, across all devices and digital mediums, and our people exist everywhere in the world (19,500+ experts across 36 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

By this point in your career, it is not just about the tech you know or how well you can code. It is about what more you want to do with that knowledge. Can you help your teammates proceed in the right direction? Can you tackle the challenges our clients face while always looking to take our solutions one step further to succeed at an even higher level? Yes? You may be ready to join us.
