Data Engineer for Seeds


We’re hiring an experienced Data Engineer to support a large-scale data transformation project. You’ll be embedded in a high-performing team delivering mission-critical data platforms using Databricks and AWS.

This is a hands-on engineering role focused on architecture, implementation, and optimization of robust data solutions at scale.

Key Responsibilities

  • Design, build, and deploy data pipelines and platforms using Databricks and cloud infrastructure, preferably AWS (a brief sketch of this kind of pipeline follows this list)
  • Lead or contribute to end-to-end implementation of data solutions in enterprise environments
  • Collaborate with architects, analysts, and client stakeholders to define technical requirements
  • Optimize data systems for performance, scalability, and security
  • Ensure data governance, quality, and compliance in all solutions
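For illustration only, not part of the requirements: a minimal sketch of the kind of PySpark pipeline this role describes, reading raw data from S3 and writing a Delta table. The bucket path and table names are hypothetical.

    # Hypothetical example: batch-ingest raw sales CSVs from S3 into a Delta table.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("sales-ingest").getOrCreate()

    # Read raw files landed in S3 (hypothetical bucket and prefix).
    raw = (spark.read
           .option("header", "true")
           .csv("s3://example-bucket/landing/sales/"))

    # Light cleansing: type the amount column and drop rows missing the key.
    clean = (raw
             .withColumn("amount", F.col("amount").cast("double"))
             .dropna(subset=["order_id"]))

    # Append to a Delta table registered in the metastore.
    clean.write.format("delta").mode("append").saveAsTable("sales.orders_bronze")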

Required Skills & Experience

  • 7+ years of experience in data engineering
  • Deep expertise with Databricks (Spark, Delta Lake, MLflow, Unity Catalog)
  • Strong experience with cloud platforms, ideally AWS (S3, Glue, Lambda, etc.)
  • Proven track record of delivering complex data solutions in commercial domains such as Sales, Marketing, Pricing, and Customer Insights
  • At least 4 years of hands-on data pipeline design and development experience with Databricks, including platform-specific features like Delta Lake, UniForm (Iceberg interoperability), Delta Live Tables (now Lakeflow Declarative Pipelines), and Unity Catalog (a brief Delta Lake example follows this list)
  • Strong programming skills in SQL (including stored procedures) and object-oriented languages such as Python, including PySpark
  • Experience with CI/CD for data pipelines and infrastructure-as-code tools (e.g., Terraform)
  • Strong understanding of data modeling, Lakehouse architectures, and data security best practices
  • Familiarity with NoSQL databases and container management systems
  • Exposure to AI/ML tools (such as MLflow), prompt engineering, and modern agentic data and AI workflows
  • Ideally, the Databricks Certified Data Engineer Associate and/or Professional certification, plus experience delivering multiple Databricks projects
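For illustration only: a sketch of one Delta Lake-specific capability named above, an idempotent MERGE upsert from a staging table into a curated table. Table names and the match key are hypothetical.

    # Hypothetical example: upsert staged orders into a curated Delta table.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    updates = spark.table("sales.orders_bronze")              # staging input
    target = DeltaTable.forName(spark, "sales.orders_silver")

    (target.alias("t")
     .merge(updates.alias("s"), "t.order_id = s.order_id")    # match on business key
     .whenMatchedUpdateAll()     # update rows that already exist
     .whenNotMatchedInsertAll()  # insert rows that are new
     .execute())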

Nice to Have

  • Experience with Azure or GCP in addition to AWS
  • Knowledge of DevOps practices in data engineering
  • Familiarity with regulatory frameworks (e.g., GDPR, SOC2, PCI-DSS)
  • Amazon Redshift and AWS Glue/Spark (Python, Scala)

Education

  • Bachelor of Engineering in Computer Science

Note: Syngenta is an Equal Opportunity Employer and does not discriminate in recruitment, hiring, training, promotion or any other employment practices for reasons of race, color, religion, gender, national origin, age, sexual orientation, gender identity, marital or veteran status, disability, or any other legally protected status. 

Follow us on: Twitter & LinkedIn

https://twitter.com/SyngentaAPAC 

https://www.linkedin.com/company/syngenta/


Syngenta is a global leader in agriculture; rooted in science and dedicated to bringing plant potential to life. Our 28,000 employees in more than 90 countries work together to solve one of humanity’s most pressing challenges: growing more food with fewer resources. A diverse workforce and an inclusive workplace environment are enablers of our ambition to be the most collaborative and trusted team in agriculture. Our employees reflect the diversity of our customers, the markets where we operate and the communities we serve. No matter what your position, you will have a vital role in safely feeding the world and taking care of our planet. Join us and help shape the future of agriculture.
