Machine Learning Engineer II
TL;DR
Contribute to the design, development, and deployment of AI and ML models and build scalable machine learning pipelines for advanced analytics.
Design & Execute: Take ownership of the design and implementation of modern AI stack components, including data ingestion for AI/ML workloads and end-to-end model training and serving pipelines.
Scale & Optimize: Build and manage fault-tolerant AI platforms that scale economically. You will balance the maintenance of legacy models with the rapid development of advanced, scalable solutions.
Mentor & Collaborate: Provide technical mentorship to junior engineers and foster a collaborative environment. You will act as a bridge between data science and production engineering.
Drive Technical Excellence: Promote best practices in coding, testing, and MLOps. You thrive in ambiguous conditions by independently identifying opportunities to optimize model pipelines and improve AI workflows.
Cross-Functional Integration: Partner with data scientists, product managers, and software engineers to translate business needs into technical requirements and integrate AI solutions into production applications.
Implement Governance: Enforce model quality standards, integrity, and reliability. You will be responsible for implementing model lineage, fairness, and privacy controls within the automated pipelines.
Monitor & Measure: Build monitoring frameworks to track model performance and system KPIs, ensuring our AI initiatives drive measurable business outcomes.
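As a rough sketch of the "Monitor & Measure" responsibility above (the metric, window size, and threshold here are hypothetical, not part of any specific production stack), a minimal rolling-window model monitor in Python might look like:

```python
from collections import deque


class ModelMonitor:
    """Track a rolling window of a model KPI and flag degradation.

    Illustrative only: the baseline metric (e.g. offline validation AUC),
    window size, and tolerance are placeholder choices.
    """

    def __init__(self, baseline: float, window: int = 100, tolerance: float = 0.05):
        self.baseline = baseline          # reference score from offline evaluation
        self.tolerance = tolerance        # allowed relative drop before alerting
        self.scores = deque(maxlen=window)

    def record(self, score: float) -> None:
        """Append one observed production score (e.g. per-batch AUC)."""
        self.scores.append(score)

    def degraded(self) -> bool:
        """True when the rolling mean falls below baseline * (1 - tolerance)."""
        if not self.scores:
            return False
        mean = sum(self.scores) / len(self.scores)
        return mean < self.baseline * (1 - self.tolerance)
```

In practice such a check would feed an alerting system rather than return a boolean, but the core idea, comparing live scores against an offline baseline with a tolerance band, is the same.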
Experience: 4–6 years of professional experience in machine learning engineering, with a proven track record of deploying models into production environments.
Technical Depth: Deep understanding of the modern AI stack, including data ingestion workflows and experience with curated data platforms such as Snowflake, Databricks, or Redshift.
Cloud Proficiency: At least 3 years of hands-on experience with AWS infrastructure, specifically SageMaker, Spark/AWS Glue, and Infrastructure as Code (IaC) using Terraform.
Orchestration Expert: High proficiency in managing multi-stage workflows using Airflow or similar orchestration systems to automate training and deployment cycles.
MLOps Toolkit: Practical experience with MLflow, Kubeflow, or SageMaker Feature Store to support the end-to-end machine learning lifecycle.
Governance Mindset: Familiarity with model governance practices (lineage, fairness, and privacy) and experience using data cataloging tools for compliance.
Communication: Strong ability to communicate complex technical concepts to non-technical stakeholders and influence project direction.
Industry Context: Experience in FinTech or SaaS environments is a significant advantage.
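The orchestration requirement above centers on running multi-stage training and deployment workflows in dependency order. As a loose, stdlib-only illustration of what a tool like Airflow manages (the four stage names and the `run_pipeline` helper are hypothetical, not Airflow's API), a DAG of tasks can be executed topologically like this:

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library


def run_pipeline(tasks, deps):
    """Run tasks in dependency order.

    tasks: mapping of task name -> zero-argument callable
    deps:  mapping of task name -> set of upstream task names

    In Airflow, each node would be an operator and the scheduler would
    handle retries, parallelism, and backfills; this sketch only shows
    the ordering idea.
    """
    order = []
    for name in TopologicalSorter(deps).static_order():
        tasks[name]()  # execute the stage once all its upstreams ran
        order.append(name)
    return order


# Hypothetical four-stage training pipeline (stage names are illustrative).
log = []
stages = {s: (lambda s=s: log.append(s))
          for s in ("ingest", "train", "evaluate", "deploy")}
deps = {"ingest": set(), "train": {"ingest"},
        "evaluate": {"train"}, "deploy": {"evaluate"}}
```

Calling `run_pipeline(stages, deps)` runs ingest, then train, then evaluate, then deploy, the same linear dependency chain a real training-to-serving DAG encodes.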
Wave HQ builds tools and resources specifically designed to help small businesses succeed, empowering them to enhance their operations and community impact. Our focus is on fostering creativity and collaboration in a supportive environment, making it easier for small business owners to thrive and innovate.
- Founded: 2010
- Employees: 201–500
- Industry: Internet Software & Services
- Total raised: $80M