Data Operations Engineer Intern

About Us

Abacus Insights is transforming how data works for health plans. Our mission is simple: make healthcare data usable, so the people responsible for care and cost decisions can act faster, with confidence.  
We help health plans break down data silos to create a single, trusted data foundation. That foundation powers better decisions, so plans can improve outcomes, reduce waste, and deliver better experiences for members and providers alike.

Backed by $100M from top investors, we’re tackling big challenges in an industry that’s ready for change. Our platform enables GenAI use cases by delivering clean, connected, and reliable healthcare data that supports automation, prioritization, and decision workflows, and that’s why we are leading the way.

Our innovation begins with people. We are bold, curious, and collaborative, because the best ideas come from working together. Ready to make an impact? Join us and let's build the future together.

About the Role

We are seeking a Data Operations Engineer Intern to join our TechOps organization within the Connector Factory team. This role provides hands-on experience supporting production-grade data pipelines responsible for ingesting, transforming, validating, and delivering healthcare data from numerous external sources. As an intern, you will work closely with senior data, platform, and operations engineers to monitor pipeline health, debug data and system issues, automate operational workflows, and improve data reliability. You will gain exposure to both batch and streaming architectures and learn how modern data platforms are deployed and operated using AWS services such as Lambda, EMR, and EKS, along with Databricks.

Your day-to-day:

  • Monitor production data pipelines and systems, identifying failures, latency issues, schema changes, and data quality anomalies.
  • Debug pipeline failures by analyzing logs, metrics, SQL outputs, and upstream/downstream dependencies.
  • Assist in root cause analysis (RCA) for data incidents and contribute to implementing corrective and preventive solutions.
  • Support the maintenance and optimization of ETL/ELT workflows to improve reliability, scalability, and performance.
  • Automate recurring data operations tasks using Python, shell scripting, or similar tools to reduce manual intervention.
  • Assist with data mapping, transformation, and normalization efforts, including alignment with Master Data Management (MDM) systems.
  • Collaborate on the generation and validation of synthetic test datasets for pipeline testing and data quality validation.
  • Shadow senior engineers to deploy, monitor, and troubleshoot data workflows on AWS, Databricks, and Kubernetes-based environments.
  • Ensure data integrity and consistency across multiple environments (development, staging, production).
  • Clearly document bugs, data issues, and operational incidents in Jira and Confluence, including reproduction steps, impact analysis, and resolution details.
  • Communicate effectively with cross-functional, onsite, and offshore teams to escalate issues, provide status updates, and track resolutions.
  • Participate in Agile ceremonies and follow structured incident and change management processes.

What you bring to the team:

  • Strong interest in data engineering, data operations, and production data systems.
  • Currently pursuing or recently completed a Master’s degree in Computer Science, Data Science, Engineering, Statistics, or a related quantitative discipline.
  • Solid understanding of ETL/ELT architectures, including ingestion, transformation, validation, orchestration, and error handling.
  • Proficiency in SQL, including complex joins, aggregations, window functions, and debugging data discrepancies at scale.
  • Working knowledge of Python for data processing, automation, and operational tooling.
  • Familiarity with workflow orchestration tools such as Apache Airflow, including DAG design, scheduling, retries, and dependency management.
  • Experience or exposure to data integration platforms such as Airbyte, including connector-based ingestion, schema evolution, and sync monitoring.
  • Understanding of Master Data Management (MDM) concepts and tools, with exposure to platforms such as Rhapsody, Onyx, or other enterprise MDM solutions.
  • Knowledge of data pipeline observability, including log analysis, metrics, alerting, and debugging failed jobs.
  • Exposure to cloud platforms (preferably AWS), with familiarity in services such as S3, Lambda, EMR, EKS, or managed data processing services.
  • Ability to communicate technical issues clearly and concisely, including writing actionable bug reports and collaborating on incident resolution.
  • Strong documentation habits and attention to detail in operational workflows.

What we’d like to see (but isn’t required):

  • Experience with cloud data warehouses such as Snowflake or BigQuery.
  • Familiarity with Databricks, Apache Spark, or distributed data processing frameworks.
  • Hands-on experience building automation for data operations or reliability engineering.
  • Exposure to healthcare data standards, regulated data environments, or HIPAA-compliant systems. 

Compensation: Compensation for this role is $30/hour ($62,400 annualized).

What you’ll get in return

  • Unlimited paid time off – recharge when you need it
  • Work from anywhere – flexibility to fit your life
  • Comprehensive health coverage – multiple plan options to choose from
  • Growth-focused environment – your development matters here

Our Commitment as an Equal Opportunity Employer

As a mission-led technology company helping to drive better healthcare outcomes, Abacus Insights believes that the best innovation and value we can bring to our customers comes from diverse ideas, thoughts, experiences, and perspectives. Therefore, we dedicate resources to building diverse teams and providing equal employment opportunities to all applicants. Abacus prohibits discrimination and harassment based on race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.

At the heart of who we are is a commitment to continuously and intentionally building an inclusive culture—one that empowers every team member across the globe to do their best work and bring their authentic selves. We carry that same commitment into our hiring process, aiming to create an interview experience where you feel comfortable and confident showcasing your strengths. If there’s anything we can do to support that—big or small—please let us know.
