This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Data Engineer in the United States.
This role offers the opportunity to design, build, and maintain large-scale, high-performance data infrastructure that powers analytics, machine learning, and business intelligence across global teams. You will work in a fully remote, flexible environment, collaborating with cross-functional groups to ensure data quality, reliability, and scalability. The position requires hands-on experience in cloud-native data pipelines, distributed data processing, and ETL frameworks, with a focus on continuous improvement and operational excellence. You will mentor other engineers, drive best practices, and contribute to the evolution of the data platform. Ideal candidates are technically strong, highly collaborative, and comfortable in a fast-paced, innovative environment. This role has a direct impact on how data drives decision-making and innovation across the organization.
Accountabilities:
Design, develop, and implement scalable, high-volume data infrastructure for data lakes and data warehouses
Build, maintain, and optimize ETL pipelines, ensuring accuracy, consistency, and reliability of data flows
Implement cloud-native data pipelines, automation routines, and database schemas for analytics and machine learning
Guide and mentor other data engineers, providing technical leadership for parts of the data platform
Establish and enforce coding standards, design patterns, and best practices to improve maintainability
Collaborate with business stakeholders, analysts, and technical teams to deliver actionable insights
Monitor data quality, perform validation, and implement telemetry for ongoing process improvement
Requirements:
BS in Computer Science, Software Engineering, or a related technical field; MS preferred
7+ years of professional experience, with 5+ years in data engineering, business intelligence, or similar roles
Expert knowledge of Python and SQL
Hands-on experience with ETL orchestration tools such as Airflow, deployed on AWS or GCP
Experience with distributed data processing frameworks like Spark or Presto, and streaming technologies such as Kafka or Flink
3+ years with cloud platforms, preferably AWS, and cloud data warehouses such as Snowflake
Experience with containerization and orchestration using Kubernetes
Strong understanding of DevOps practices and cloud-native architecture principles
Excellent communication skills and ability to mentor and guide technical teams
Benefits:
Competitive salary range: $190,000 – $200,000 annually
Fully remote work environment with flexible hours
Professional development opportunities and mentorship programs
Cutting-edge technology projects and exposure to innovative data platforms
Collaborative, supportive team culture
Comprehensive benefits package including medical, dental, vision, and wellness programs
Why Apply Through Jobgether?
We use an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. Our system identifies the top-fitting candidates, and that shortlist is shared directly with the hiring company. The final decision and next steps (interviews, assessments) are managed by the company's internal team.
We appreciate your interest and wish you the best!
Data Privacy Notice: By submitting your application, you acknowledge that Jobgether will process your personal data to evaluate your candidacy and share relevant information with the hiring employer. This processing is based on legitimate interest and pre-contractual measures under applicable data protection laws (including GDPR). You may exercise your rights (access, rectification, erasure, objection) at any time.
#LI-CL1