Are you motivated to participate in a dynamic, multi-tasking environment? Do you want to join a company that invests in its employees? Are you seeking a position where you can use your skills while continuing to be challenged and learn? Then we encourage you to dive deeper into this opportunity.
We believe in career development and empowering our employees. Not only do we provide internal career coaches, but we also offer many training opportunities to expand your knowledge base! Our highly competitive benefits include a variety of HMO and PPO options, a company 401(k) match, and an Employee Stock Purchase Program. We provide tuition reimbursement and leadership development, and we start employees off with 16 days of paid time off plus holidays. We also offer wellness courses and have highly engaged employee resource groups. Come join the Neo team and be part of our amazing World Class Culture!
NeoGenomics is looking for a Senior DevOps Engineer who wants to continue to learn and grow along with our company. This is an onsite position, Monday through Friday on day shift, at our facility in Durham, NC.
Now that you know what we're looking for in talent, let us tell you why you'd want to work at NeoGenomics:
As an employer, we promise to provide you with a purpose-driven mission in which you have the opportunity to save lives by improving patient care through the exceptional work you perform. Together, we will become the world's leading cancer reference laboratory.
Position Summary:
As a Senior DevOps Engineer, you will be the primary builder and operator of NeoGenomics’ cloud-native Digital Pathology infrastructure. You will focus on automating the secure, scalable hosting of image management systems (IMS) and AI workloads primarily within AWS, while managing connectivity to enterprise applications in Microsoft Azure. You will own the “Infrastructure as Code” (IaC) strategy, ensuring that the massive storage requirements of Whole Slide Imaging (WSI) and the burst-compute needs of AI inference are handled with efficiency, security, and strict GxP compliance. This role acts as the bridge between on-premise scientific computing and the limitless scale of the cloud.
Responsibilities:
- Design and implement secure, scalable cloud architecture on AWS (S3, EC2, Batch, Lambda) using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation
- Automate intelligent storage lifecycle and tiering policies (for example, S3 Intelligent-Tiering and Glacier) to manage petabyte-scale pathology image archives cost-effectively while ensuring rapid retrieval for clinical review (see the first sketch after this list)
- Build and maintain robust CI/CD pipelines (for example, Jenkins, GitHub Actions, or Azure DevOps) to automate testing and deployment of AI models, integration scripts, and application updates
- Implement comprehensive observability and reliability practices using monitoring and alerting tools (for example, CloudWatch, Datadog, or Splunk) to track system health, API latency, and data pipeline performance, ensuring high availability for clinical services (see the second sketch after this list)
- Manage secure cross-cloud networking and API connectivity between the AWS data plane and Azure-based enterprise systems (such as LIMS, billing, and ESB), ensuring seamless identity management and data flow
- Enforce security-by-design principles by managing IAM roles, encryption keys (KMS), and network security controls to maintain compliance with HIPAA, GDPR, and GxP standards
- Manage containerized workloads using Docker and Kubernetes to support portable AI inference and microservices that scale dynamically based on lab volume
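For a concrete illustration of the storage-lifecycle automation named above, here is a minimal boto3 sketch. The bucket name, prefixes, and transition windows are hypothetical placeholders, not values from this posting; real policies would be driven by retrieval patterns and retention requirements, and in practice would more likely live in Terraform than in an ad hoc script.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefixes for illustration only.
BUCKET = "wsi-pathology-archive"

lifecycle = {
    "Rules": [
        {
            "ID": "tier-active-slides",
            "Filter": {"Prefix": "slides/"},
            "Status": "Enabled",
            # Hand new whole-slide images to Intelligent-Tiering immediately,
            # so AWS moves them between access tiers automatically.
            "Transitions": [
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
            ],
        },
        {
            "ID": "archive-closed-cases",
            "Filter": {"Prefix": "closed-cases/"},
            "Status": "Enabled",
            # Cold-archive slides from closed cases after one year.
            "Transitions": [
                {"Days": 365, "StorageClass": "GLACIER"},
            ],
        },
    ]
}

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration=lifecycle,
)
```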
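Likewise, a minimal sketch of the API-latency alerting mentioned above, using CloudWatch via boto3. The load-balancer dimension, threshold, and SNS topic ARN are assumed placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# All identifiers below (load balancer, account, topic) are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="image-api-p99-latency",
    Namespace="AWS/ApplicationELB",
    MetricName="TargetResponseTime",
    Dimensions=[
        {"Name": "LoadBalancer", "Value": "app/image-api/0123456789abcdef"},
    ],
    ExtendedStatistic="p99",   # alarm on tail latency, not the average
    Period=60,                 # evaluate one-minute windows
    EvaluationPeriods=5,       # require five consecutive breaches
    Threshold=2.0,             # seconds
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],
)
```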
Education, Experience & Qualifications:
- Bachelor’s Degree or equivalent work experience required
- 5 or more years of experience in DevOps or Cloud Engineering with a primary focus on AWS environments required
- Previous experience managing Azure resources with Terraform preferred
- Extensive experience with Infrastructure as Code (IaC), specifically Terraform (preferred) or AWS CloudFormation
- Proven track record of managing hybrid cloud networking (Direct Connect/VPN) and cross-cloud integrations, including connecting AWS services to Azure AD or API Management
- Experience in regulated industries (healthcare, finance, biotech) managing sensitive data (PHI/PII) is strongly preferred
- Hands-on experience with container orchestration (EKS, ECS, or Kubernetes) and serverless computing
- AWS mastery with deep knowledge of core services, including S3 (object locking and lifecycle), EC2 and Auto Scaling, VPC networking, IAM, and Lambda
- Proficiency in Python, Bash, or Go for automation and glue code
- Expertise in building CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or Azure DevOps
- Strong understanding of encryption standards (TLS, AES), secrets management (Vault or Secrets Manager), and least-privilege access control (see the sketch at the end of this posting)
- Functional knowledge of Azure AD, Azure Functions, or Azure API Management to support integration tasks
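To make the Python glue-code and secrets-management items above concrete, here is a minimal sketch that reads a credential from AWS Secrets Manager instead of hard-coding it. The secret name is a hypothetical example, and access is assumed to be granted through a least-privilege IAM role rather than static keys.

```python
import json

import boto3


def get_db_credentials(secret_name: str = "prod/lims-bridge/db") -> dict:
    """Fetch a JSON credential blob from AWS Secrets Manager.

    The secret name above is a hypothetical placeholder; the caller's IAM
    role is expected to allow secretsmanager:GetSecretValue on it and
    nothing broader.
    """
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])


if __name__ == "__main__":
    creds = get_db_credentials()
    print(sorted(creds.keys()))  # list the fields; never log the values
```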