Your key responsibilities will include:
- Designing the long-term strategy for a global analytical data model, considering the diversity of source systems and business processes across all InPost markets.
- Defining the framework and methodology for data model development, including standards, design principles, and implementation guidelines.
- Establishing and maintaining a group-wide data naming and classification strategy, harmonizing existing structures and ensuring consistent adoption across teams.
- Partnering with senior stakeholders from multiple countries and domains to identify, prioritize, and align data initiatives with business objectives.
- Acting as a subject-matter expert for modern analytical architectures, continuously evaluating emerging technologies and recommending improvements.
- Supporting data engineering and analytics consulting teams through mentoring, training, and design reviews, covering theory, tools, and best practices.
- Creating and maintaining clear, multi-layered technical documentation describing data structures, modeling rules, and architectural decisions used across the InPost Group.
- Ensuring scalability and performance of analytical solutions by guiding optimization strategies for large data volumes.
Required experience and skills
- Minimum 6 years of experience in data engineering or analytics engineering roles
- At least 2 years of experience in a senior or lead data architecture–focused position
- Advanced proficiency in SQL
- Strong hands-on experience with Python, Spark, Databricks, Azure, and dbt
- Willingness to work with internal frameworks for the standardized creation of data objects
- Extensive experience in designing and operating modern data platforms, including data warehouses, lakehouses, and big data ecosystems
- Strong theoretical and practical knowledge of dimensional modeling, including star schema design and the Kimball methodology (a brief sketch of this style of modeling follows this list)
- Deep understanding of medallion architecture concepts and layered data modeling approaches
- Proven ability to deliver large-scale analytics and big data solutions in complex environments
- Expertise in performance optimization, including:
  - query tuning in big data environments
  - ETL and data transformation optimization
  - storage layout and partitioning strategies
- Practical experience with the Parquet format and Delta Lake (see the second sketch after this list), including:
  - data versioning
  - storage optimization
  - ACID transactions
  - ensuring data reliability and integrity
- Solid understanding of the end-to-end lifecycle of analytical data products
- Experience working in environments using CI/CD pipelines and Git-based version control (GitLab preferred)
- Excellent communication skills and ability to collaborate across technical and non-technical teams
- Fluent Polish and English
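
For illustration, here is a minimal PySpark sketch of the kind of dimensional (star schema) modeling this role covers: building a gold-layer fact table from silver-layer inputs, Kimball-style. All table and column names are hypothetical assumptions for the example, not InPost's actual model.

```python
# Star-schema sketch (PySpark). Table and column names are hypothetical;
# on Databricks the `spark` session is preconfigured.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gold_fact_deliveries").getOrCreate()

# Silver-layer input and gold-layer dimensions (assumed names).
orders = spark.table("silver.orders")            # order_id, customer_id, locker_id, delivered_at, price
dim_customer = spark.table("gold.dim_customer")  # customer_sk, customer_id
dim_locker = spark.table("gold.dim_locker")      # locker_sk, locker_id

# Resolve surrogate keys and derive an integer date key for the fact table.
fact_deliveries = (
    orders
    .join(dim_customer, "customer_id", "left")
    .join(dim_locker, "locker_id", "left")
    .select(
        F.col("order_id"),
        F.col("customer_sk"),
        F.col("locker_sk"),
        F.date_format("delivered_at", "yyyyMMdd").cast("int").alias("date_key"),
        F.col("price").alias("revenue"),
    )
)

fact_deliveries.write.mode("overwrite").saveAsTable("gold.fact_deliveries")
```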
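And a second sketch of the Delta Lake capabilities listed above: partitioned storage, an ACID upsert via MERGE, file compaction, and time travel. Paths and schemas are again illustrative assumptions; a local (non-Databricks) session additionally needs the Delta extensions configured, e.g. via `configure_spark_with_delta_pip`.

```python
# Delta Lake sketch: partitioning, ACID MERGE, compaction, time travel.
# Requires the delta-spark package (preinstalled on Databricks).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
path = "/tmp/events_delta"  # hypothetical location

events = spark.createDataFrame(
    [(1, "2024-01-01", "created"), (2, "2024-01-02", "delivered")],
    ["event_id", "event_date", "status"],
)

# Partitioned layout keeps date-filtered scans cheap.
events.write.format("delta").mode("overwrite").partitionBy("event_date").save(path)

# ACID upsert: late-arriving updates are merged atomically.
updates = spark.createDataFrame([(2, "2024-01-02", "returned")], events.columns)
(DeltaTable.forPath(spark, path).alias("t")
    .merge(updates.alias("s"), "t.event_id = s.event_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Compact small files, then read an earlier version via time travel.
DeltaTable.forPath(spark, path).optimize().executeCompaction()
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
```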