This is a hands-on data platform engineering role that places significant emphasis on consultative data engineering engagements with a wide range of customer stakeholders: Business Owners, Business Analytics, Data Engineering teams, Application Development, End Users, and Management teams.
- Design and build resilient and efficient data pipelines for batch and real-time streaming workloads.
- Architect and design data infrastructure in the cloud using Infrastructure-as-Code tools.
- Collaborate with product managers, software engineers, data analysts, and data scientists to build scalable, data-driven platforms and tools.
- Provide technical product expertise, advise on deployment architectures, and handle in-depth technical questions around data infrastructure, PaaS services, design patterns and implementation approaches.
- Collaborate with enterprise architects, data architects, ETL developers & engineers, data scientists, and information designers to lead the identification and definition of required data structures, formats, pipelines, metadata, and workload orchestration capabilities.
- Address aspects such as data privacy & security, data ingestion & processing, data storage & compute, analytical & operational consumption, data modeling, data virtualization, self-service data preparation & analytics, AI enablement, and API integrations.
- Lead a team of engineers to deliver impactful results at scale.
- Execute projects with an Agile mindset.
- Build software frameworks to solve data problems at scale.
· 7+ years of data engineering experience leading implementations of large-scale lakehouses on Databricks, Snowflake, or Synapse. Prior experience with dbt and Power BI is a plus.
· 3+ years of experience architecting solutions for developing data pipelines from structured and unstructured sources for batch and real-time workloads.
· Extensive experience with Azure data services (Databricks, Synapse, ADF) and related Azure infrastructure services (e.g., Firewall, Storage, Key Vault) is required.
· Strong programming/scripting experience with SQL, Python, and Spark.
· Strong grasp of data modeling and data lakehouse concepts.
· Knowledge of software configuration management environments and tools such as Jira, Git, Jenkins, TFS, Shell, PowerShell, and Bitbucket.
· Experience with Agile development methods in data-oriented projects.
· Highly motivated self-starter and team player with demonstrated success in prior roles.
· Track record of success working through technical challenges within enterprise organizations.
· Ability to prioritize deals, training, and initiatives through highly effective time management.
· Excellent problem-solving, analytical, presentation, and whiteboarding skills.
· Track record of success dealing with ambiguity (internal and external) and working collaboratively with other departments and organizations to solve challenging problems.
· Strong knowledge of technology and industry trends that affect data analytics decisions for enterprise organizations.
· Certifications in Azure Data Engineering and related technologies.