You will be part of our Data Engineering team, focused on delivering exceptional results for our clients. A large portion of your time will be spent in the weeds alongside your team: architecting, designing, implementing, and optimizing data solutions. You’ll work with the team to deliver, migrate, and scale cloud data solutions, and build pipelines and scalable analytic tools using leading technologies such as AWS, Azure, GCP, Spark, and Hadoop.
Responsibilities:
- Develop data pipelines to move and transform data from various sources to data warehouses
- Ensure the quality, reliability, and scalability of the organization's data infrastructure
- Optimize data processing and storage for performance and cost-effectiveness
- Collaborate with data scientists, analysts, and other stakeholders to understand their requirements and develop solutions to meet their needs
- Continuously monitor and troubleshoot data pipelines to ensure their reliability and availability
- Stay up-to-date with the latest trends and technologies in data engineering and apply them to improve our data capabilities
Qualifications:
- Bachelor's degree in Computer Science, Software Engineering, or a related field
- 5+ years of experience in data engineering or a related field
- Strong programming skills in Python, Scala, or Java
- Expert SQL skills and a solid understanding of modern SQL data warehouses
- Familiarity with big data technologies like Hadoop, Spark, or Hive
- Experience with cloud-based data storage and processing platforms like AWS, Azure, or GCP
- Strong problem-solving and communication skills