Data Engineer (Databricks Focus)
About the Role
We are seeking an experienced Data Engineer to join our growing data team and play a key role in modernizing our analytics platform. In 2026 we will execute a large-scale migration and rehydration of ~500 existing Power BI reports, re-connecting and optimizing their data sources in a new lakehouse environment.
Key Responsibilities
- Design, build, and maintain scalable data pipelines using Databricks (Delta Lake, Unity Catalog, Spark); see the illustrative sketch after this list.
- Lead or significantly contribute to the 2026 migration and rehydration of approximately 500 Power BI reports, including re-pointing and optimizing their data sources.
- Implement and maintain CI/CD pipelines for data assets using Databricks Asset Bundles (DAB), GitHub Actions, and other modern DevOps practices.
- Collaborate with data analysts, BI developers, and business stakeholders to ensure data availability, performance, and reliability.
- Optimize ETL/ELT processes for performance, cost, and maintainability.
- Establish best practices for version control, testing, and deployment of notebooks, workflows, and Delta Live Tables.
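To give candidates a concrete flavor of the pipeline work above, here is a minimal PySpark sketch of the kind of Delta Lake aggregation job this role owns. The catalog, schema, and table names (raw.sales.orders, analytics.sales.daily_revenue), the columns, and the seven-day window are hypothetical illustrations, not references to our actual environment.

```python
# Minimal Delta Lake pipeline sketch (illustrative names and logic only).
from pyspark.sql import SparkSession, functions as F

# On Databricks, `spark` is provided automatically; getOrCreate() is a no-op there.
spark = SparkSession.builder.getOrCreate()

# Read recent order events from a hypothetical bronze table.
orders = (
    spark.read.table("raw.sales.orders")
    .filter(F.col("order_ts") >= F.date_sub(F.current_date(), 7))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Aggregate to daily revenue per region.
daily_revenue = (
    orders.groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

# Write a partitioned Delta table; overwrite mode and date partitioning are
# one reasonable choice for a small gold table, not a prescription.
(
    daily_revenue.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .partitionBy("order_date")
    .saveAsTable("analytics.sales.daily_revenue")
)
```

In production, a job like this would typically be defined as a Databricks workflow task and deployed through the CI/CD tooling described above rather than run ad hoc.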
Required Experience & Skills
- 4+ years of hands-on data engineering experience (5-7+ years of overall professional experience).
- Strong proficiency in Python and SQL.
- Deep experience with Databricks (workspace administration, cluster management, Delta Lake, Unity Catalog, workflows, and notebooks).
- Proven track record of implementing CI/CD for data workloads, preferably with Databricks Asset Bundles and GitHub Actions (a minimal testing sketch follows this list).
- Solid understanding of Spark (PySpark and/or Spark SQL).
- Experience with infrastructure-as-code and modern data DevOps practices.
- Relevant certifications strongly preferred:
  - Databricks Certified Data Engineer Associate or Professional
  - Azure Data Engineer Associate (DP-203) or equivalent AWS/GCP certifications
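To make "CI/CD for data workloads" concrete: one common pattern is to factor Spark logic into plain Python functions and unit-test them with pytest against a local SparkSession, so a GitHub Actions runner can execute the suite without a cluster before Databricks Asset Bundles deploys anything. The sketch below follows that pattern; dedupe_latest, its columns, and the test data are hypothetical examples, not our codebase.

```python
# Unit-testing a Spark transformation locally in CI (illustrative example).
import pytest
from pyspark.sql import SparkSession, DataFrame, functions as F
from pyspark.sql.window import Window


def dedupe_latest(df: DataFrame, key: str, ts: str) -> DataFrame:
    """Keep only the most recent row per key (example transformation)."""
    w = Window.partitionBy(key).orderBy(F.col(ts).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
        .filter(F.col("_rn") == 1)
        .drop("_rn")
    )


@pytest.fixture(scope="session")
def spark():
    # Small local session so the test runs on a plain CI runner.
    return SparkSession.builder.master("local[1]").appName("ci-tests").getOrCreate()


def test_dedupe_latest_keeps_newest_row(spark):
    df = spark.createDataFrame(
        [("a", 1, "old"), ("a", 2, "new"), ("b", 5, "only")],
        ["id", "version", "payload"],
    )
    out = dedupe_latest(df, key="id", ts="version").collect()
    assert {(r.id, r.payload) for r in out} == {("a", "new"), ("b", "only")}
```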
Nice-to-Have / Bonus Skills
- Experience extracting data from SAP HANA or S/4HANA systems (e.g., via ODP, CDS views, or SDA).
- Previous large-scale Power BI migration or re-platforming projects.
- Familiarity with Databricks SQL warehouses, serverless compute, or Lakehouse Monitoring.
- Experience with dbt, Delta Live Tables, or Lakeflow.
If you have strong Databricks, CI/CD, and Asset Bundles experience and are excited about transforming a large Power BI footprint into a modern lakehouse architecture, we would love to hear from you.