We are looking for two Data Engineers to support key strategic banking projects throughout 2026. The role focuses on data pipeline development, integration with existing systems, and preparing the platform for future cloud adoption. The environment is highly technical and includes OpenShift, microservices, Kafka, Spark, and several relational databases.
Key Responsibilities
• Develop, optimize, and maintain data pipelines in the on-premises environment (OpenShift, DIH).
• Integrate and manage data flows from Oracle, DB2, MySQL, MariaDB, and other relational systems.
• Build and maintain APIs and microservices using Spring Boot/Java.
• Work with Kafka for streaming and real-time data processing.
• Perform large-scale data processing using Apache Spark and Trino.
• Implement and manage CI/CD pipelines using GitLab.
• Conduct performance testing with JMeter and monitor system stability using Dynatrace.
• Contribute to the gradual transition to cloud platforms (Azure / GCP / AWS) and to the integration of MCP, announced for 2026.
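To illustrate the pipeline work described above, here is a minimal sketch of a normalization step of the kind that typically sits inside a Kafka consumer loop. The topic name, field names, and broker address are hypothetical examples, not taken from the posting.

```python
import json


def normalize_transaction(raw: dict) -> dict:
    """Normalize one raw transaction record before it is written downstream.

    Field names (account_id, amount, currency) are illustrative assumptions.
    """
    return {
        "account_id": str(raw["account_id"]).strip(),
        # store amounts as integer cents to avoid floating-point drift
        "amount_cents": round(float(raw["amount"]) * 100),
        "currency": raw.get("currency", "EUR").upper(),
    }


# In a real pipeline this function would run inside a Kafka consumer loop,
# e.g. with the kafka-python client (requires a running broker):
#
# from kafka import KafkaConsumer
# consumer = KafkaConsumer("transactions", bootstrap_servers="broker:9092")
# for msg in consumer:
#     record = normalize_transaction(json.loads(msg.value))
#     ...  # write to the downstream store
```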
Technical Requirements
• Platforms & frameworks: OpenShift, Apache Kafka, Apache Spark, Spring Boot, Trino
• Databases: Oracle, DB2, MySQL, MariaDB
• Languages: Python, Java
• Tooling: GitLab CI/CD, JMeter, Dynatrace
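For the GitLab CI/CD requirement, a minimal pipeline sketch is shown below, assuming a Python code base; stage names, image tags, and the Dockerfile are illustrative, not taken from the posting.

```yaml
stages:
  - test
  - build

unit-tests:
  stage: test
  image: python:3.12          # illustrative image tag
  script:
    - pip install -r requirements.txt
    - pytest

build-image:
  stage: build
  image: docker:27            # illustrative
  services:
    - docker:27-dind
  script:
    # CI_REGISTRY_IMAGE and CI_COMMIT_SHORT_SHA are GitLab predefined variables
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
```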