- Implement stream and batch processing data pipelines that feed article indexes and build machine learning models.
- Develop machine learning algorithms, from individual recommendation algorithms to whole-page optimization algorithms.
- Design and implement data validation strategies to ensure model performance over time.
- Make performance measurable by implementing A/B testing and tracking key metrics.
- Improve the reliability, scalability, and performance of the recommendation infrastructure to deliver a personalized experience to millions of users.
- Work closely with Data Scientists, Infrastructure Engineers, Product Owners, and other stakeholders to develop highly scalable and intelligent data applications.
- At least a Bachelor's degree in Computer Science or a related field.
- Relevant job experience as a Data Engineer.
- Solid Python programming skills.
- Good knowledge of SQL, Python (asyncio), Docker, REST APIs, Git, and Kubernetes; strong cloud experience (GCP, Argo CD, GitLab CI/CD); Elasticsearch and Bigtable.
- Ability to write efficient, well-tested code with a keen eye on scalability and maintainability.
- Experience with at least one cloud provider; GCP is a plus.
- Enjoy working in an Agile team, like to get involved, think outside the box, and are keen to tackle new topics.
- Good English skills.
- Knowledge of Terraform and Redis is a plus.
All communication will be handled in strict confidence. Only shortlisted candidates will be notified.