We are looking for a savvy Data Engineer to join our growing team of data experts. You will be responsible for expanding and optimizing our data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams.
Responsibilities:
- Build and deploy the data pipelines that power PatternAI’s machine learning platform.
- Develop warehouse architectures that integrate data from diverse sources.
- Design analytics solutions for both product development and internal collaborators.
Experience:
- A minimum of 4 years of experience in data engineering, analytics engineering, or software engineering with an emphasis on large-scale data management and processing systems.
- Proficiency in Python and advanced SQL.
- Advanced experience with ETL pipelines, data modeling, and managing data warehouses (Snowflake, BigQuery, Redshift, etc.).
- Experience with relational databases such as MySQL, Postgres, Oracle, or SQL Server, as well as NoSQL systems such as MongoDB, is required.
- Experience with tools including Meltano, dbt, Airflow, and Superset is required.
- Experience with Docker, Git, and AWS (EC2, ECS, S3, Step Functions, and Athena).
- Analytics and Business Intelligence knowledge is highly preferred.
- Experience working with common SaaS APIs (such as CRMs) is preferred.
About PatternAI
PatternAI is an early-stage startup that is growing rapidly and recently closed a successful round of venture funding. We are emerging from stealth with an exciting series of machine learning products and a rapidly growing number of enterprise customers.
All your information will be kept confidential according to EEO guidelines.