Description:
We are looking for a skilled Scala Developer with experience building data engineering frameworks on Apache Spark. The successful candidate will be instrumental in developing and optimizing our data processing pipelines, ensuring they are efficient, scalable, and support the bank's data-driven goals.
Key Responsibilities:
• Develop and maintain scalable data processing pipelines using Scala and Apache Spark.
• Exposure to the Cloudera or Hortonworks Hadoop distributions, including HDFS, YARN, and Hive.
• Solid foundation in software engineering, including Object-Oriented Design (OOD) and design patterns.
• Write clean, efficient, and maintainable code that meets the functional and non-functional project requirements.
• Optimize Spark jobs for performance and cost efficiency.
• Work closely with the data architecture team to implement data engineering best practices.
• Troubleshoot and resolve technical issues related to data processing.
Minimum Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 3+ years of professional experience in Scala programming.
• Demonstrated experience with Apache Spark and building data engineering pipelines.
• Strong knowledge of data structures, algorithms, and distributed computing concepts.
Preferred Qualifications:
• Experience with AWS or other cloud services.
• Exposure to other big data technologies and databases.