1. 3 to 5 years of experience in building data pipelines using Databricks.
2. Hands-on experience with PySpark, Spark SQL, and Spark Structured Streaming for processing diverse datasets.
3. Hands-on experience with Python and SQL.
4. Working knowledge of Databricks Workflows and job scheduling concepts.
5. Working knowledge of Git repositories, branching strategies, and CI/CD.
6. Good understanding of Apache Spark and Delta Lake core concepts.
7. Working knowledge of public cloud platforms; Azure preferred.
8. Strong debugging and problem-solving skills, along with good interpersonal and communication skills.
Location: Hyderabad
Experience level: 4-6 years only
Job type: Contract to Hire