Data Engineer (Snowflake)

Allata is a fast-growing technology strategy and data development consulting firm delivering scalable solutions to our enterprise clients. Our mission is to inspire our clients to achieve their most strategic goals through uncompromised delivery, active listening, and personal accountability. We are a group who thrives in fast-paced environments, working on complex problems, continually learning, and working alongside colleagues to be better together.

At IMRIEL (an Allata Company), we are seeking a technically skilled Data Engineer with deep expertise in Business Intelligence (BI) concepts, including Star Schema, Hierarchies, Dimensions, Facts, Metrics, and Calculations. If you are passionate about data engineering and experienced in DAX, T-SQL, Snowflake, and Databricks, this opportunity is for you!

Experience: 3 to 7 years
Location: Vadodara & Pune

What you'll be doing:
• Lead the migration of our existing SQL Server backend and SSAS (SQL Server Analysis Services) models to Databricks, ensuring smooth system integration and minimizing downtime.
• Conduct a comprehensive POC to evaluate three potential semantic layer platforms: Microsoft Fabric, AtScale, and Cube.dev. This includes assessing their performance, scalability, and integration with Databricks.
• Convert complex DAX (Data Analysis Expressions) calculations from existing Power BI models into the new semantic layer technology, ensuring consistency in data performance and accuracy.
• Architect and implement scalable Star Schema structures to support advanced reporting and analytics, focusing on creating and optimizing Hierarchies, Dimensions, Facts, and Metrics.
• Develop and optimize ETL (Extract, Transform, Load) pipelines to integrate and load data into the new semantic layer, ensuring smooth transitions from legacy systems.
• Fine-tune and optimize complex DAX and T-SQL queries, ensuring that the new semantic layer platform delivers high-performance querying and analytical capabilities.

What you need:

Basic Skills:
• Expertise in BI concepts, including Star Schema, Hierarchies, Dimensions, Facts, Metrics, and Calculations as applied to large-scale data warehousing and analytics systems.
• Advanced experience in developing and optimizing DAX expressions for complex calculations in Power BI models, with a proven ability to translate these into new semantic layer technologies like AtScale or Cube.dev.
• Strong proficiency in T-SQL for querying, manipulating, and transforming large datasets, including writing complex SQL queries involving joins, subqueries, and window functions (see the sketch after this list).
• Hands-on experience with modern data platforms like Snowflake, Databricks, or Microsoft Fabric, with a focus on system migration, performance tuning, and building robust data architectures.
• In-depth understanding of ETL pipelines, with experience in data integration, transformation, and loading into semantic layers for scalable, high-performance analytics.
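To illustrate the level of T-SQL fluency this role calls for, here is a minimal sketch of a windowed aggregation over a star schema. The table and column names (fact_sales, dim_customer, and so on) are hypothetical, not taken from any actual client model:

    -- Rank each customer's monthly revenue within their region,
    -- joining a fact table to a conformed customer dimension.
    SELECT
        d.region,
        d.customer_name,
        YEAR(f.order_date)  AS order_year,
        MONTH(f.order_date) AS order_month,
        SUM(f.net_amount)   AS monthly_revenue,
        RANK() OVER (
            PARTITION BY d.region, YEAR(f.order_date), MONTH(f.order_date)
            ORDER BY SUM(f.net_amount) DESC
        ) AS revenue_rank_in_region
    FROM fact_sales AS f
    JOIN dim_customer AS d
        ON f.customer_key = d.customer_key
    GROUP BY d.region, d.customer_name, YEAR(f.order_date), MONTH(f.order_date);

The same pattern carries over to Snowflake and Databricks SQL with only minor dialect changes (for example, DATE_TRUNC in place of the YEAR/MONTH pair).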
Responsibilities:
• Lead the migration of SQL Server and SSAS data models to Databricks, ensuring that the transition to new architectures is seamless and performance-optimized.
• Conduct a technical evaluation of Microsoft Fabric, AtScale, and Cube.dev for use as new semantic layers. Provide insights and recommendations on which platform is best suited for the business based on scalability, performance, and cost-efficiency.
• Build comprehensive star schema models to support multi-dimensional analytics, ensuring data is structured for optimal performance in reporting and business intelligence (a schema sketch follows this section).
• Continuously monitor and optimize DAX and T-SQL queries to ensure that both query response times and resource consumption are minimized in the new semantic layer.
• Migrate large-scale datasets from Azure Analysis Services (AAS) in Power BI to the new semantic layer, maintaining data integrity and ensuring system scalability.
• Work closely with BI developers, data engineers, and business analysts to integrate the new semantic layer into existing workflows, ensuring that business requirements are met efficiently and effectively.
• Collaborate in defining the overall architecture of the solution, including modern Enterprise Data Warehouse and Data Lakehouse architectures that implement Medallion or Lambda patterns.
• Design, develop, test, and deploy processing modules to implement data-driven rules using SQL, Stored Procedures, and Python.
• Understand and own data product engineering deliverables relative to a CI/CD pipeline and standard DevOps practices and principles.
• Build and optimize data pipelines on platforms like Snowflake, Databricks, SQL Server, or Azure Data Fabric.

Hard Skills - Must have:
• Current knowledge of modern data tools such as Snowflake, Databricks, FiveTran, Tableau, and Power BI; core experience with data architecture, data integrations, data warehousing, and ETL processes.
• Applied experience in SQL, Stored Procedures, and Python based on area of data platform specialization.
• Strong knowledge of relational database systems such as MS SQL Server, PostgreSQL, Oracle, Snowflake, Azure SQL, AWS RDS, Aurora, or a comparable engine.

Hard Skills - Nice to have / It's a plus:
• Automation experience with CI/CD pipelines to support deployment and integration workflows, including trunk-based development using GitHub Enterprise.
• Proficiency in Python for advanced data processing tasks.
• Experience with dbt and Airflow for rapid model prototyping and collaboration.

Soft Skills / Business Specific Skills:
• Ability to identify, troubleshoot, and resolve complex data issues effectively.
• Strong teamwork, communication skills, and intellectual curiosity to work collaboratively and effectively with cross-functional teams.
• Commitment to delivering high-quality, accurate, and reliable data products and solutions.
• Willingness to embrace new tools, technologies, and methodologies.
• Innovative thinker with a proactive approach to overcoming challenges.
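As a reference point for the star schema modeling described above, here is a minimal sketch of one dimension and one fact table in SQL. The names (dim_date, fact_sales, and their columns) are illustrative assumptions, not an actual client design:

    -- A conformed date dimension: one row per calendar day, with
    -- attributes that form a Year > Quarter > Month hierarchy.
    CREATE TABLE dim_date (
        date_key      INT        NOT NULL PRIMARY KEY,  -- e.g., 20240131
        full_date     DATE       NOT NULL,
        year_number   SMALLINT   NOT NULL,
        quarter_name  CHAR(2)    NOT NULL,              -- 'Q1'..'Q4'
        month_name    VARCHAR(9) NOT NULL
    );

    -- A fact table at the grain of one row per order line.
    -- Foreign keys point to dimensions; numeric columns are additive measures.
    CREATE TABLE fact_sales (
        order_line_id BIGINT        NOT NULL PRIMARY KEY,
        date_key      INT           NOT NULL REFERENCES dim_date (date_key),
        customer_key  INT           NOT NULL,  -- references dim_customer
        product_key   INT           NOT NULL,  -- references dim_product
        quantity      INT           NOT NULL,
        net_amount    DECIMAL(18,2) NOT NULL
    );

Metrics such as total revenue then reduce to simple aggregations over the fact table (SUM(net_amount)) sliced by dimension attributes, which is the pattern semantic layers like AtScale or Cube.dev expose to BI tools.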
At Allata, we value differences.

Allata is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Allata makes employment decisions without regard to race, color, creed, religion, age, ancestry, national origin, veteran status, sex, sexual orientation, gender, gender identity, gender expression, marital status, disability or any other legally protected category.

This policy applies to all terms and conditions of employment, including but not limited to, recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.