Senior Data Engineers (Talent Pool - Hybrid - UAE)

DVT is one of the leading software development consultancies on the African continent, partnering with top organisations across South Africa and globally to deliver cutting-edge technology solutions. Our engineers consult on complex, high-impact projects, working with modern platforms and technologies while collaborating with some of the most established developers locally and internationally. As part of our strategic expansion, DVT is embarking on a targeted recruitment drive to build a strong pipeline of experienced Data Engineers in the UAE. This initiative is aimed at ensuring we are well positioned to support upcoming client engagements in the region, enabling us to move quickly and effectively as new projects come online.

Senior Data Engineers at DVT play a key role in shaping robust, scalable data platforms that support critical business outcomes for our clients. Working in a collaborative, consulting-led environment, they contribute to the design and evolution of data solutions that are reliable, efficient, and aligned with long-term business objectives.

DVT is deeply committed to the growth and development of its people. We foster a strong culture of continuous learning, knowledge sharing, and technical excellence through ongoing training, internal speaking opportunities, and participation in sponsored technical events across the broader technology ecosystem.

Requirements

Job Description: Senior Data Engineer

Position Overview

The Senior Data Engineer is a senior consulting role responsible for designing, building, and delivering enterprise-grade data and analytics platforms, with a strong focus on Databricks-based lakehouse architectures. This role requires deep hands-on experience in data migration, ETL/ELT development, data architecture, governance, and cloud-native platform builds on Microsoft Azure and AWS. The Senior Data Engineer will work closely with solution architects, analysts, data scientists, and business stakeholders to deliver secure, scalable, and well-governed analytics solutions.

Technical Knowledge

Strong knowledge and extensive hands-on experience in:

  • Databricks platform implementation including Apache Spark, Delta Lake, Unity Catalog, Notebooks, DLT (Delta Live Tables), Lakeflow and AI/BI Genie

  • Advanced data engineering using PySpark, Python, and SQL for large-scale data processing

  • Designing and implementing lakehouse architectures and analytics platforms

  • ETL / ELT pipeline development for batch and streaming data use cases

  • Data migration from on-premises systems, legacy data warehouses, and ETL tools to cloud-based lakehouse platforms

  • Data modelling techniques including dimensional modelling, star schemas, and medallion architecture

  • Data governance, security, and access control using Unity Catalog and cloud-native security services

  • Microsoft Azure data services: Azure Databricks, Azure Data Lake Storage (ADLS Gen2), Azure Data Factory (ADF), Synapse Analytics, Azure Key Vault

  • AWS data services: S3, Glue, EMR, Redshift, IAM, and CloudWatch

  • Data orchestration and scheduling using ADF, Databricks Workflows, Airflow or similar tools

  • Performance tuning, cost optimisation, monitoring, and data reliability best practices

  • CI/CD and DevOps practices for data platforms using Git, Azure DevOps, or similar tools
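For candidates less familiar with the medallion architecture referenced above, the sketch below illustrates the bronze → silver → gold layering idea in plain, dependency-free Python. All field names, cleansing rules, and values are illustrative only; in practice these layers would be implemented with PySpark and Delta Lake tables rather than in-memory lists.

```python
# Conceptual sketch of medallion layering: raw (bronze) -> cleaned (silver)
# -> business-ready aggregates (gold). Illustrative data, not a real pipeline.

raw_events = [  # "bronze": records ingested as-is, including bad rows
    {"order_id": "1", "amount": "100.0", "country": "AE"},
    {"order_id": "1", "amount": "100.0", "country": "AE"},  # duplicate
    {"order_id": "2", "amount": "bad", "country": "ZA"},    # malformed amount
    {"order_id": "3", "amount": "50.5", "country": "AE"},
]

def to_silver(bronze):
    """Clean and conform: drop malformed rows, cast types, dedupe on key."""
    seen, silver = set(), []
    for row in bronze:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine these for inspection
        if row["order_id"] in seen:
            continue  # keep first occurrence per business key
        seen.add(row["order_id"])
        silver.append({"order_id": row["order_id"],
                       "amount": amount,
                       "country": row["country"]})
    return silver

def to_gold(silver):
    """Aggregate to a consumption-ready metric: revenue per country."""
    gold = {}
    for row in silver:
        gold[row["country"]] = gold.get(row["country"], 0.0) + row["amount"]
    return gold

gold = to_gold(to_silver(raw_events))
print(gold)  # {'AE': 150.5}
```

The same progression (ingest raw, then clean and deduplicate, then aggregate for BI) is what the Databricks documentation describes as bronze, silver, and gold layers.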

Behavioural Competencies

  • Strong technical leadership with the ability to own and drive complex delivery outcomes

  • Excellent analytical, problem-solving, and troubleshooting skills

  • Ability to communicate complex technical concepts clearly to non-technical stakeholders

  • Consulting mindset with strong client engagement and stakeholder management skills

  • Ability to mentor junior engineers and uplift team capability

  • Comfortable working in fast-paced, ambiguous, and multi-client environments

  • Proactive, delivery-focused, and quality-driven approach

Responsibilities

  • Design, build, and implement scalable Databricks-based data and analytics platforms

  • Lead and execute data migration initiatives from legacy and on-premises systems to cloud platforms

  • Develop and maintain robust ETL / ELT pipelines using PySpark, Python, SQL, ADF, and AWS Glue

  • Define and implement data architecture patterns aligned to enterprise and cloud best practices

  • Implement data governance, security, and access control frameworks in Unity Catalog

  • Enable analytics and BI use cases through well-modelled and trusted data layers

  • Collaborate with solution architects, data scientists, analysts, and business stakeholders

  • Optimise data pipelines and Spark workloads for performance, scalability, and cost efficiency

  • Provide technical leadership and mentoring to junior and mid-level data engineers

  • Create and maintain technical documentation, architecture diagrams, and data flow artefacts

  • Contribute to best practices, standards, and reusable assets within the Data & AI practice

Minimum Experience Required

  • 5–8+ years of professional experience in Data Engineering or related roles

  • 3+ years of hands-on experience delivering solutions on Databricks

  • Strong experience with Microsoft Azure cloud data technologies

  • Working knowledge and practical experience with AWS data services

  • Proven experience delivering data migration and modernisation projects

  • Experience working in consulting or client-facing delivery environments

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent practical experience
