Data Engineer

TL;DR

Support large-scale data transformation projects by designing, building, and deploying data pipelines and platforms using Databricks and AWS.

We’re hiring an experienced Data Engineer to support a large-scale data transformation project. You’ll be embedded within a high-performing team delivering mission-critical data platforms using Databricks and AWS.

This is a hands-on engineering role focused on architecture, implementation, and optimization of robust data solutions at scale.

Key Responsibilities

  • Design, build, and deploy data pipelines and platforms using Databricks and cloud infrastructure (preferably AWS)
  • Lead or contribute to end-to-end implementation of data solutions in enterprise environments
  • Collaborate with architects, analysts, and client stakeholders to define technical requirements
  • Optimize data systems for performance, scalability, and security
  • Ensure data governance, quality, and compliance in all solutions

Required Skills & Experience

  • 7+ years of experience in data engineering
  • Deep expertise with Databricks (Spark, Delta Lake, MLflow, Unity Catalog)
  • Strong experience with cloud platforms, ideally AWS (S3, Glue, Lambda, etc.)
  • Proven track record of delivering complex data solutions in commercial domains such as Sales, Marketing, Pricing, and Customer Insights
  • At least 4 years of hands-on data pipeline design and development experience with Databricks, including platform-specific features such as Delta Lake, UniForm (Iceberg), Delta Live Tables (Lakeflow Declarative Pipelines), and Unity Catalog
  • Strong programming skills in SQL, stored procedures, and object-oriented programming languages (Python, PySpark, etc.)
  • Experience with CI/CD for data pipelines and infrastructure-as-code tools (e.g., Terraform)
  • Strong understanding of data modeling, Lakehouse architectures, and data security best practices
  • Familiarity with NoSQL databases and container management systems
  • Exposure to AI/ML tools (such as MLflow), prompt engineering, and modern data and AI agentic workflows
  • The ideal candidate holds the Databricks Data Engineer Associate and/or Professional certification and has delivered multiple Databricks projects

Nice to Have

  • Experience with Azure or GCP in addition to AWS
  • Knowledge of DevOps practices in data engineering
  • Familiarity with regulatory frameworks (e.g., GDPR, SOC2, PCI-DSS)
  • Amazon Redshift, AWS Glue/Spark (Python, Scala)

Bachelor of Engineering in Computer Science

Note: Syngenta is an Equal Opportunity Employer and does not discriminate in recruitment, hiring, training, promotion or any other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, gender identity, marital or veteran status, disability, or any other legally protected status. 

Follow us on: Twitter & LinkedIn

https://twitter.com/SyngentaAPAC 

https://www.linkedin.com/company/syngenta/

India page

https://www.linkedin.com/company/70489427/

Syngenta Group is a global agribusiness dedicated to empowering farmers through innovative crop protection and seed solutions. Serving the agricultural sector, the company focuses on enhancing food security while promoting sustainable practices, making it a key player in the quest for efficient and responsible farming.
