What you’ll do:
Your role will be to lead a team of Data Engineers to continue delivering exceptional results for our clients. A large portion of your time will be spent in the weeds, working alongside your team to architect, design, implement, and optimize data solutions. You’ll lead the technical process from ideation to delivery, migrating and/or scaling cloud data solutions and building pipelines and scalable analytic tools using leading technologies such as AWS, Azure, GCP, Spark, and Hadoop.
What you’ll get:
An amazing, holistic experience deploying full data products end to end, making the decisions and helping with implementation from start to finish. Not only will you manage individuals through development cycles, but you will also be a key leader driving your team forward!
Who you are:
You are a Data Engineer with 5+ years of experience, a passion for data, and a deep understanding of cloud technologies.
What you have:
- Strong understanding of Python, both conceptually and in practice
- Expert SQL skills and a good understanding of existing SQL warehouses
- Strong experience in Hadoop and PySpark (required)
- True understanding of cloud ecosystems (AWS, Azure, GCP), the services available within each cloud, and the limitations of those services
- Excellent understanding of enterprise coding best practices as well as general CI/CD practices
- Expert-level code architecture experience
- Good understanding of cloud security best practices
- Good people management and presentation skills
- Deep understanding of distributed systems (Spark, Dask)
- Good understanding of cloud data stores (Snowflake, DynamoDB)
- Strong planning skills for producing accurate project timeline estimates
- Top-tier Git practices, with experience managing repos with a large number of contributors
- Previous experience setting up code review frameworks
- Good appreciation for Agile methodology
- Good understanding of API connectivity