Work on challenging big data projects using Scala and Apache Spark as part of a collaborative team.
Workplace: Hybrid
Location: Prague (Pankrác)
Language: English required, Czech an advantage
Level: Senior
Form of cooperation: Contractor
Start date: ASAP
Allocation: Full-Time
Allocation length: 6 months, with the possibility of extension
Requirements:
Strong programming experience in Scala and Apache Spark (Core, SQL, Streaming).
Experience with big data tools and frameworks (e.g., Hadoop, Hive, Kafka).
Proficiency in working with distributed systems.
Solid understanding of data structures, algorithms, and software engineering principles.
Familiarity with CI/CD pipelines and DevOps practices is a plus.
Excellent problem-solving and communication skills.