(Services) Data Engineer

Hyderabad, India

As a Data Engineer
You design, build, and optimize large-scale data pipelines and platforms across cloud environments. You manage data integration from multiple business systems, ensuring high data quality, performance, and governance. You collaborate with cross-functional teams to deliver trusted, scalable, and secure data solutions that enable analytics, reporting, and decision-making.

Meet the job

 ● Data Engineering: Design, build, and optimize scalable ETL/ELT pipelines using Azure Data Factory, Databricks, PySpark, and SQL (a small illustrative sketch follows this list);
 ● Cloud Data Platforms: Manage and integrate data across Azure (Synapse, Data Lake, Event Hub, Key Vault) and GCP (BigQuery, Cloud Storage);
 ● API Integration: Develop workflows for data ingestion and processing via REST APIs and web services, including integrations with BambooHR, Salesforce, and Oracle NetSuite;
 ● Data Modeling & Warehousing: Build and maintain data models, warehouses, and lakehouse structures to support analytics and reporting needs;
 ● Performance Optimization: Optimize Spark jobs, SQL queries, and pipeline execution for scalability, performance, and cost-efficiency;
 ● Governance & Security: Ensure data privacy, security, and compliance while maintaining data lineage and cataloging practices;
 ● Collaboration: Partner with business stakeholders, analysts, and PMO teams to deliver reliable data for reporting and operations;
 ● Documentation: Create and maintain technical documentation for data processes, integrations, and pipeline workflows.
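
To make the pipeline work above concrete, here is a minimal PySpark ETL sketch. It is illustrative only: the paths, column names, and aggregation are hypothetical placeholders, not Backbase systems or code, and a production job on Databricks would typically write to Delta Lake and be orchestrated by Azure Data Factory.

# Minimal PySpark ETL sketch (illustrative only; paths and columns are hypothetical)
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_daily_orders_etl").getOrCreate()

# Extract: read raw JSON records from a hypothetical data lake landing zone
orders = spark.read.json("/mnt/landing/orders/")

# Transform: keep completed orders and build a daily aggregate
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Load: write the curated table partitioned by date
# (Parquet shown here; Delta Lake is the usual choice on Databricks)
(
    daily_totals.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("/mnt/curated/daily_order_totals/")
)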

How about you

 ● Education: Bachelor’s/Master’s degree in Computer Science, Engineering, Analytics, Mathematics, Statistics, IT, or equivalent;
 ● Experience: 5+ years of experience in Data Engineering and large-scale data migration projects;
 ● Technical Skills: Proficient in SQL, Python, and PySpark for data processing and transformation;
 ● Big Data & Cloud: Hands-on expertise with Apache Spark, Databricks, and Azure Data Services (ADF, Synapse, Data Lake, Event Hub, Key Vault);
 ● GCP Knowledge: Exposure to Google Cloud Platform (BigQuery, Cloud Storage) and multi-cloud data workflows;
 ● Integration Tools: Exposure to tools such as Workato for API-based data ingestion and automation;
 ● Best Practices: Strong understanding of ETL/ELT development best practices and performance optimization;
 ● Added Advantage: Certifications in Azure or GCP cloud platforms;
 ● Domain Knowledge: Familiarity with Oracle NetSuite, BambooHR, and Salesforce data ingestion, as well as PMO data operations, is preferred;
 ● Soft Skills: Strong problem-solving skills, effective communication, and ability to work both independently and in cross-functional teams while mentoring junior engineers.

Backbase is a leading provider of Engagement Banking solutions, empowering financial institutions with seamless digital customer experiences and driving rapid digital growth globally.
