This is a position at Qode's client.
Description
This role focuses on leading data engineering teams working with data warehousing, streaming and batch patterns, and CI/CD for data pipelines.
- Drive and coach Agile teams to deliver on engineering standards, sprint backlogs and plans, engineers' responsibilities and performance management, code quality, adherence to development guardrails, and testing
- Drive Agile delivery across data platforms, ensuring high standards for data quality and testing, code quality and review practices, CI/CD for data pipelines, and documentation and operational readiness
- Collaborate closely with data architects, product managers, analytics teams, platform teams, and governance stakeholders to deliver data capabilities aligned with business priorities
- Own the execution of the data engineering roadmap, balancing short-term delivery with long-term platform sustainability
- Contribute to data platform architecture and design, including ingestion, transformation, storage, and consumption layers
- Coach engineers to be T-shaped, capable of working across batch, streaming, analytics engineering, and platform concerns
- Own and prioritise the remediation of technical and data debt, including legacy pipelines, performance issues, and data quality gaps
- Stay current with modern data engineering tools, patterns, and methodologies, particularly within the Databricks ecosystem
- Be accountable for the full lifecycle of data solutions, from design through build, deployment, monitoring, and support
- Empower teams to be self-sufficient, disciplined, and accountable for the reliability of data products
- Lead initiatives to improve data delivery processes, including automation, observability, and operational excellence
- Motivate teams to continuously improve through innovation, experimentation, and continuous delivery
- Drive career development and progression for data engineers, partnering with HR on performance management and growth paths
Requirements
Experience with the following or similar technologies:
- Strong experience with Databricks, Apache Spark, and lakehouse patterns
- Deep understanding of data warehousing concepts, dimensional modelling, and analytics use cases
- Experience building and operating batch and streaming data pipelines
- Familiarity with Delta Lake, data versioning, and schema evolution
- Understanding of data quality, data validation, lineage, and observability practices
- Understanding of AWS (Lambda, S3, API Gateway, CLI, ECS, EKS, …) or Google Cloud Platform is a must
- Experience with formal development methodologies
Experience Required:
- 10+ years’ experience in software or data engineering
- 4+ years' experience leading engineering teams, ideally in data, analytics, or platform domains
- Proven ability to design and deliver end-to-end data platforms or major data initiatives
- Experience working in regulated environments (banking or financial services preferred)
- Strong people leadership skills, with a proven ability to mentor and grow data engineers
- Experience producing and maintaining technical documentation, including architecture diagrams, runbooks, and data specifications
- Proven ability to work with multiple stakeholders across geographies and manage competing priorities
- Strong understanding of quality, reliability, and operational excellence in production data systems
- Ownership of SLAs / SLOs for data availability and freshness
- Collaboration with risk, compliance, and audit teams
- Responsibility for data cost management and optimisation
Benefits
- Meal and parking allowances
- Full benefits and salary during probation.
- Insurance per Vietnamese labor law, plus premium health care for employees and family members.
- Values-driven, international working environment with an agile culture.
- Overseas travel opportunities for training and work-related purposes.
- Internal Hackathons and company events (team building, coffee run, etc.).
- Pro-rated and performance bonuses.
- 15 days of annual leave plus 3 days of sick leave per year.
- Work-life balance: 40-hour work week, Monday to Friday.