About the Role
We are looking for a seasoned Engineering Manager to lead our Data Platform team. You will own the architecture and evolution of a petabyte-scale data lakehouse and a self-serve analytics platform, enabling real-time decision-making across the organization. In this role, you will drive consistency and quality by defining the right engineering strategies. You will oversee multiple engineering projects, ensure timely execution, collaborate across functions, and mentor engineers into high-performing contributors.
What You Will Do
Own the technical roadmap and architecture to improve efficiency across the platform, along with automated data governance.
Define and monitor SLOs/SLAs for data availability, latency, and quality across real-time and batch pipelines.
Drive excellence in engineering quality and lead solution design for complex product problems.
Collaborate with Analytics, Data Science, and Product teams to drive adoption of the self-serve platform and reduce time to high-quality insights.
Champion robust data security standards, including RBAC, PII masking, and encryption, to ensure DPDP compliance in a democratized data environment.
Manage engineers end-to-end, taking ownership of project delivery and product scalability
Conduct regular planning, review, and retrospective meetings
Create and present progress reports for ongoing projects and initiatives
What You Will Need
Bachelor’s or Master’s degree in Computer Science or a related field
8+ years of overall professional experience
2+ years of experience managing software development teams
Experience managing a high-performing team of engineers with varying seniority.
Proven ability to design, build, and operate distributed systems and platforms at scale.
Expertise in Scala, Java, Python, or Go
Proficiency in Apache Spark and its core architecture, including platform-level optimization of batch and streaming workloads.
Deep experience with open table formats such as Delta Lake, Apache Iceberg, or Apache Hudi.
Deep understanding of transactional and NoSQL databases
Knowledge of messaging systems, especially Kafka
Hands-on experience with cloud infrastructure, preferably GCP/AWS
Good understanding of streaming and real-time data pipelines
Expertise in data modeling, data quality, and data validation tooling.
Proficiency in Business Intelligence (BI) tools such as Tableau, Metabase, and Superset.
Proficiency in Real-time OLAP engines (Apache Pinot, Apache Druid, or ClickHouse) and Stream Processing (Apache Flink or Spark Streaming).
Good to Have
Experience managing infrastructure utilizing Kubernetes and Helm.
Understanding of workflow orchestration with tools such as Apache Airflow.
Experience designing dynamic DAGs, managing backfills, and optimizing scheduler performance at scale.
Experience implementing centralized data access control layers (e.g., Apache Ranger) and auditing frameworks
Leadership & Collaboration
Ability to drive sprints and OKRs effectively.
Strong stakeholder management and cross-functional collaboration skills.
Exceptional people management and mentorship capabilities.
Why Join Meesho
Work on data systems at massive scale impacting millions of users
Own and influence core data infrastructure powering one of India’s fastest-growing platforms
Collaborate with smart, driven engineers and leaders in a high-ownership culture
Opportunity to shape technology, people, and processes in a rapidly evolving ecosystem