Senior Data Engineer – Data Pipeline Development & Operational Support
TL;DR
Support data pipeline development, log analysis, and operational support activities while collaborating closely with client project teams for reliable data solutions.
Gigalogy Ltd. is seeking a highly skilled Senior Data Engineer to support the Operations & Maintenance (O&M) of modern data platforms.
This role focuses on data pipeline development in test environments, technical investigation, log analysis, and operational support activities. The position requires close collaboration with client project teams to deliver reliable and high-quality data solutions.
The ideal candidate will bring strong technical expertise, attention to detail, and the ability to work effectively in a structured and client-focused environment.
Key Responsibilities:
Data Pipeline Development
- Design, build, and maintain ETL/ELT pipelines using non-production datasets
- Implement data transformations and validation logic based on project requirements
- Develop reusable components (scripts, workflows, notebooks) for deployment
Log Analysis & Technical Investigation
- Analyze logs to identify failures, anomalies, and performance issues
- Conduct root cause analysis and prepare clear technical reports
- Collaborate with client teams to support issue resolution
Operational Support (Backend / Technical Tasks)
- Conduct sanity checks on test runs, schema changes, and data onboarding processes
- Support regression testing and validation before production release
- Maintain operational runbooks, technical documentation, and change logs
Client Collaboration
- Work closely with the client's engineering team to clarify specifications and share progress
- Deliver technical outputs (code, findings, documents) promptly and accurately
- Participate in regular sync meetings to align on tasks and priorities
Required Qualifications:
- 5+ years of experience in data engineering or ETL/ELT development
- Strong proficiency in SQL and Python
- Hands-on experience with at least one cloud platform (AWS, Azure, or GCP)
- Experience with data pipelines and workflow orchestration tools
- Strong analytical and troubleshooting skills
- Good command of English (written and verbal)
- High level of accuracy, reliability, and process adherence
Preferred Qualifications:
- Experience with Databricks, Apache Spark, or Delta Lake
- Familiarity with CI/CD pipelines and DevOps practices
- Experience in data onboarding or integration projects
- Experience working with client-facing or cross-functional teams
Ideal Candidate Profile:
- Detail-oriented with strong ownership and accountability
- Proactive in identifying issues and clarifying requirements
- Quick learner with adaptability to new tools and technologies
- Comfortable working in a structured, delivery-focused environment
- Strong documentation and communication skills
What We Offer:
- An opportunity to work with a Tokyo-based startup and contribute to an innovative AI-based service
- Work with talented colleagues in a cooperative, people-focused environment, where your contributions will be recognized
- Salary range: 100,000 to 180,000 BDT/month (based on experience)
- Salary review twice a year
- Performance bonus twice a year
- Complimentary meals and snacks
- Comprehensive health insurance coverage
Working days: Sunday to Thursday. 5 days/week onsite.
Working hours: 9:00 am - 6:00 pm (BDST).
Location: 3rd & 4th Floor, House 1148, Road 9/A, Avenue 10, Mirpur DOHS, Dhaka-1216, Bangladesh.
Gigalogy Inc. develops an AI platform that empowers businesses to enhance customer experiences and improve operational efficiency. Tailored for enterprises, our technology offers scalable, turnkey solutions that can integrate effortlessly into existing systems, making it easier for companies to harness the power of AI.