Project – the aim you’ll have
The project focuses on migrating a major telecommunications provider from an on-premises data warehouse to a modern cloud-based microservices architecture.
A key component of the initiative is the modernization and adaptation of existing data flows to the new architecture. The work involves redesigning data flows and funnels, integrating customer data logic with the new data lake ecosystem, and ensuring scalable, real-time access to customer segmentation and analytics.
Position – how you’ll contribute
Your role will be purely backend (no frontend work) within a distributed, microservices-based architecture:
- Configuring and optimizing Apache NiFi to support current business data flows, ensuring scalability and performance of the solution
- Configuring and maintaining ETL processes between legacy systems and AWS (primarily using Apache NiFi)
- Analyzing, designing, and implementing data flows and data processing pipelines
- Data modeling for analytical and operational use cases
- Processing and integrating data from queueing/streaming systems (Kafka)
- Developing ETL pipelines using AWS Glue, Azure Data Factory, or similar tools
- Designing, developing, testing, and deploying backend services in Python
- Building and maintaining microservices-based systems (nice to have)
- Improving and optimizing existing backend and data processing services
- Translating business requirements into robust and scalable technical solutions
- Collaborating with architects, DevOps engineers, and data engineering teams
- Supporting infrastructure and platform integration initiatives
- Working with containerized environments (Docker) and Kubernetes
- Supporting integration with OpenSearch
Expectations – the experience you need
- Minimum 5 years of commercial experience in backend or data engineering with Python
- Strong experience in data analysis and data flow design
- Hands-on experience with ETL processes
- Strong practical knowledge of Apache NiFi
- Experience integrating legacy systems with cloud environments (AWS preferred)
- Experience working with data streaming or queueing systems (Kafka or similar)
- Data modeling experience
- Hands-on experience with Docker
- Familiarity with Kubernetes
- Experience with OpenSearch
- Testing experience (PyTest, Cucumber/Behave)
- Experience working with CI/CD pipelines (e.g., GitLab)
Additional skills – the edge you have
- Experience with Apache NiFi performance tuning and scalability optimization
- Experience developing Python microservices (Flask, FastAPI)
- Familiarity with Pydantic and Pandas
- Familiarity with cloud ETL platforms (AWS Glue, Azure Data Factory, or similar)
- Understanding of distributed data processing patterns
Our offer – professional development and personal growth
- Flexible employment and remote work
- International projects with leading global clients
- International business trips
- Non-corporate atmosphere
- Language classes
- Internal & external training
- Private healthcare and insurance
- Multisport card
- Well-being initiatives