About the Role
We are looking for a Senior Data Engineer with strong experience in modern data platforms and cloud-based architectures. The ideal candidate has deep hands-on experience with Snowflake or Databricks, a track record of building scalable data pipelines, and proficiency in Azure cloud services.
Key Responsibilities
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows.
- Develop and optimize data lake and data warehouse solutions using Snowflake and/or Databricks.
- Implement data ingestion, transformation, and processing frameworks for structured and unstructured datasets.
- Work closely with cross-functional teams (Data Analytics, Data Science, Product, Engineering) to support data consumption needs.
- Build reusable and modular data pipeline components aligned with best practices.
- Implement data quality validation, reconciliation, and monitoring controls.
- Optimize performance for data storage, compute, and query execution.
- Ensure adherence to data governance, security standards, and compliance requirements.
- Participate in solution architecture discussions and contribute to technical design decisions.
Required Skills & Experience
- 7–8+ years of hands-on experience as a Data Engineer.
- Strong experience with Snowflake and/or Databricks (either is sufficient; both preferred).
- Hands-on experience with Azure cloud services (e.g., Azure Data Factory, ADLS, Azure Synapse, Azure Key Vault, Azure Functions, Azure DevOps).
- Expertise in SQL and performance tuning for large datasets.
- Experience developing ETL/ELT pipelines using modern data frameworks.
- Proficiency in programming languages such as Python or Scala.
- Experience with data modeling techniques (star schema, normalized models, dimensional modeling).
- Experience with CI/CD for data pipelines and version control (Git).
- Exposure to data governance, metadata management, and data quality frameworks.