Minimum of 5 years of DevOps experience in the AWS Cloud, including managing ML pipelines.
Built and executed at least 2 MLOps projects in the AWS Cloud using SageMaker or other services.
Skills:
Experience building cloud infrastructure as code
Expertise in MLOps best practices
Foundational understanding of data science and data science best practices
Experience with AWS services (SageMaker, ECR, S3, Lambda, Step Functions) is a must
Should be able to write CloudFormation templates for dev/test/prod environments (a minimal deployment sketch follows this list)
Knowledge of Python
Should be able to build Docker images independently
AWS CodeCommit or GitHub (including GitHub Actions) experience is a must
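As an illustration of the infrastructure-as-code expectation above, the following is a minimal Python sketch, assuming boto3, that stands up one CloudFormation stack per dev/test/prod environment. The stack naming convention, the template file name (pipeline.yaml), and the EnvironmentName parameter are hypothetical placeholders for the example, not details of this role.

"""Illustrative only: deploy one CloudFormation stack per environment with boto3."""
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

def deploy_stack(environment: str, template_path: str = "pipeline.yaml") -> str:
    """Create (or fall back to updating) the stack for a single environment."""
    with open(template_path) as f:
        template_body = f.read()

    stack_name = f"ml-pipeline-{environment}"  # hypothetical naming convention
    params = [{"ParameterKey": "EnvironmentName", "ParameterValue": environment}]

    try:
        cfn.create_stack(
            StackName=stack_name,
            TemplateBody=template_body,
            Parameters=params,
            Capabilities=["CAPABILITY_NAMED_IAM"],  # required if the template creates IAM roles
        )
        cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    except cfn.exceptions.AlreadyExistsException:
        # The stack already exists: push changes instead of creating a new stack.
        cfn.update_stack(
            StackName=stack_name,
            TemplateBody=template_body,
            Parameters=params,
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )
        cfn.get_waiter("stack_update_complete").wait(StackName=stack_name)
    return stack_name

if __name__ == "__main__":
    for env in ("dev", "test", "prod"):
        print("deployed", deploy_stack(env))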
Responsibilities:
Maintain and extend existing data science pipelines in AWS, with an emphasis on infrastructure as code (CloudFormation); see the pipeline orchestration sketch after this list
For the purposes of this engagement, extensions will be minimal and limited to those required to support the four identified workstreams.
Maintain and create documentation on infrastructure usage and design (Confluence, GitHub wikis, diagrams)
Serve as the internal infrastructure expert, providing guidance to data scientists deploying models into the pipelines
Research new optimization opportunities based on the needs of specific data science products
Work both independently and collaboratively with data scientists to implement optimizations and improvements to projects being deployed or re-platformed within the infrastructure.
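To illustrate the kind of pipeline maintenance described above, here is a minimal Python sketch, assuming boto3, that starts and monitors an existing Step Functions state machine orchestrating a data science pipeline. The state machine ARN, input schema, and polling interval are hypothetical and stand in for whatever the real pipelines use.

"""Illustrative only: start and monitor a Step Functions pipeline execution with boto3."""
import json
import time
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

# Hypothetical ARN of a state machine already deployed via CloudFormation.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:ml-pipeline-dev"

def run_pipeline(model_name: str) -> str:
    """Start one execution and block until it finishes, returning the final status."""
    execution = sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({"ModelName": model_name}),  # hypothetical input schema
    )
    while True:
        status = sfn.describe_execution(executionArn=execution["executionArn"])["status"]
        if status != "RUNNING":
            return status
        time.sleep(30)  # poll every 30 seconds

if __name__ == "__main__":
    print("pipeline finished with status:", run_pipeline("churn-model"))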