Big Data Engineer with Databricks

Responsibilities:

  • Develop a comprehensive technical plan for the migration, covering data ingestion, transformation, storage, and access control using Azure Data Factory and Azure Data Lake.
  • Design and implement scalable and efficient data pipelines to ensure smooth data movement from multiple sources using Azure Databricks.
  • Develop scalable and reusable frameworks for data ingestion.
  • Ensure data quality and integrity throughout the data pipeline, implementing robust data validation and cleansing mechanisms.
  • Work with event-based and streaming technologies to ingest and process data.
  • Provide technical guidance and support to the team, resolving technical challenges or issues during the migration and post-migration phases.
  • Stay current with advancements in cloud computing, data engineering, and analytics technologies, recommending best practices and industry standards for implementing data lake solutions.

Requirements:
  • 4 to 8 years of experience in Data Engineering or similar roles.
  • Experience working with Azure Databricks.
  • Expertise in Data Modeling and Source System Analysis.
  • Proficiency with PySpark.
  • Mastery of SQL.
  • Knowledge of Azure components: Azure Data Factory, Azure Data Lake, Azure SQL DW, and Azure SQL.
  • Experience with Python programming for data engineering purposes.
  • Ability to conduct data profiling, cataloging, and mapping to support the technical design and construction of data flows.
  • Experience with data visualization and exploration tools.
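
The data-quality bullet above (robust validation and cleansing) can be illustrated with a minimal sketch. The field names and rules here are hypothetical, and in this role the same logic would typically be expressed as PySpark DataFrame transformations on Azure Databricks rather than plain Python:

```python
# Minimal row-level validation/cleansing sketch (hypothetical schema:
# each record should carry an "id" and a numeric "amount").

def clean_records(records):
    """Drop rows missing required fields and normalize the rest."""
    required = {"id", "amount"}
    cleaned = []
    for row in records:
        if not required.issubset(row):
            continue  # reject rows missing mandatory fields
        try:
            amount = float(row["amount"])
        except (TypeError, ValueError):
            continue  # reject rows with non-numeric amounts
        cleaned.append({"id": str(row["id"]).strip(), "amount": amount})
    return cleaned

raw = [
    {"id": " 1 ", "amount": "19.99"},
    {"id": "2"},                    # missing amount -> dropped
    {"id": "3", "amount": "oops"},  # non-numeric amount -> dropped
]
print(clean_records(raw))  # [{'id': '1', 'amount': 19.99}]
```

In PySpark the equivalent would be a chain of `filter`/`withColumn` calls so the validation scales across the cluster instead of running on a single node.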

Careers at Nagarro.