Senior Software Engineer, Data Pipelines

Our mission is to make biology easier to engineer. Ginkgo is constructing, editing, and redesigning the living world to address the globe's growing challenges in health, energy, food, materials, and more. Our bioengineers use an in-house automated foundry to design and build new organisms.

Note: This role strongly prefers candidates located in the Boston area who can commute regularly to our Seaport headquarters.

Ginkgo Biosecurity is building next-generation biosecurity infrastructure to help governments and partners detect, attribute, and deter biological threats. Our mission extends across public health, national security, and global defense, ensuring nations can rapidly identify dangerous pathogens, understand where threats originate, and respond with confidence.

As a software engineer on our Biosecurity team, you will build and operate critical biosecurity data systems. You will design reliable data pipelines and models, productionize analytics, and ensure data quality across programs spanning PCR, sequencing, wastewater, biosurveillance, and large-scale environmental monitoring.

This role requires strong software engineering fundamentals—including system design, testing, and code quality—applied to data infrastructure challenges. You will work primarily on backend data systems, designing data warehouses, building ETL/ELT pipelines, and managing data architecture. The role combines platform engineering (e.g., orchestration with Airflow, observability, infrastructure-as-code) with analytics engineering (SQL modeling, testing, documentation) to deliver reliable data products that support threat detection, pathogen attribution, and operational decision-making.

Responsibilities

Data Platform Architecture & Engineering

  • Plan, architect, test, and deploy data warehouses, data marts, and ETL/ELT pipelines primarily within AWS and Snowflake environments (a minimal sketch follows this list)
  • Build scalable data pipelines capable of handling structured, unstructured, and high-throughput biological data from diverse sources
  • Develop data models using dbt with rigorous testing, documentation, and stakeholder-aligned semantics to ensure analytics-ready datasets
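
To give a concrete feel for this kind of pipeline work, here is a minimal ELT sketch in Python. It assumes boto3 and the Snowflake Python connector; the bucket, table, and connection names are all hypothetical, and real code would pull credentials from a secrets manager rather than hard-coding them.

```python
# Hypothetical sketch: land raw result files from S3 into a Snowflake
# staging table. Bucket, table, and connection details are illustrative.
import json

import boto3
import snowflake.connector  # assumes the snowflake-connector-python package


def load_batch(bucket: str, key: str) -> None:
    # Extract: pull one newline-delimited JSON file of raw results from S3.
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    records = [json.loads(line) for line in body.splitlines() if line.strip()]

    # Transform: keep only the fields the staging schema expects.
    rows = [(r["sample_id"], r["collected_at"], r["result"]) for r in records]

    # Load: append into a staging table; downstream dbt models take it from here.
    conn = snowflake.connector.connect(
        account="my_account",   # hypothetical connection parameters
        user="etl_service",
        password="...",         # in practice, fetched from a secrets manager
        warehouse="LOAD_WH",
        database="BIOSECURITY",
        schema="STAGING",
    )
    try:
        conn.cursor().executemany(
            "INSERT INTO raw_results (sample_id, collected_at, result)"
            " VALUES (%s, %s, %s)",
            rows,
        )
    finally:
        conn.close()


if __name__ == "__main__":
    load_batch("example-biosecurity-landing", "pcr/2024-01-01.jsonl")
```

Keeping the load step append-only into a staging schema, with modeling deferred to dbt, is one common way to separate ingestion concerns from analytics-ready semantics.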

Data Quality & Governance

  • Ensure data integrity, consistency, and accessibility across internal and external biosecurity data products (an example quality gate appears after this list)
  • Develop, document, and enforce coding and data modeling standards to improve code quality, maintainability, and system performance
  • Serve as the in-house data expert, making recommendations on data architecture, pipeline improvements, and best practices; define and adapt data engineering processes to deliver reliable answers to critical biosecurity questions
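
As one illustration of the quality checks referenced above, here is a minimal sketch of a pre-publication quality gate. The thresholds, column names, and schema are hypothetical; in practice, checks like these are often expressed as dbt tests or run by the orchestrator before a dataset is published.

```python
# Hypothetical sketch of a lightweight data quality gate a pipeline might
# run before publishing a dataset. Thresholds and schema are illustrative.
EXPECTED_COLUMNS = {"sample_id", "collected_at", "result"}
MIN_EXPECTED_ROWS = 1_000  # volume floor for a typical daily batch


def check_batch(rows: list) -> list:
    """Return a list of human-readable failures; empty means the batch passes."""
    failures = []

    # Volume check: a sudden drop often signals an upstream delivery problem.
    if len(rows) < MIN_EXPECTED_ROWS:
        failures.append(
            f"volume: got {len(rows)} rows, expected >= {MIN_EXPECTED_ROWS}"
        )

    # Schema drift check: flag missing or unexpected columns in the first row.
    if rows:
        seen = set(rows[0])
        if seen != EXPECTED_COLUMNS:
            failures.append(
                f"schema drift: missing={EXPECTED_COLUMNS - seen}, "
                f"extra={seen - EXPECTED_COLUMNS}"
            )

    # Integrity check: the primary key must be present and unique.
    ids = [r.get("sample_id") for r in rows]
    if None in ids or len(set(ids)) != len(ids):
        failures.append("integrity: sample_id is missing or duplicated")

    return failures
```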

API & Integration Development

  • Build high-performance APIs and microservices in Python that enable seamless integration between the biosecurity data platform and user-facing applications (a minimal sketch follows this list)
  • Design backend services that support real-time and batch data access for biosecurity operations
  • Create data products that empower public health officials, analysts, and partners with actionable biosecurity intelligence
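
The sketch below shows the general shape of such a service, using FastAPI. The endpoint, model fields, and in-memory store are hypothetical stand-ins for a real query layer over the warehouse or a serving database.

```python
# Hypothetical sketch of a small read API over the data platform, using FastAPI.
from datetime import date

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Biosecurity Data API (sketch)")


class Detection(BaseModel):
    sample_id: str
    collected_on: date
    pathogen: str
    site: str


# Stand-in for a real query against the warehouse or a serving database.
_FAKE_STORE = {
    "s-001": Detection(
        sample_id="s-001",
        collected_on=date(2024, 1, 1),
        pathogen="influenza_a",
        site="airport-1",
    ),
}


@app.get("/detections/{sample_id}", response_model=Detection)
def get_detection(sample_id: str) -> Detection:
    """Serve one detection record; real code would query the warehouse."""
    record = _FAKE_STORE.get(sample_id)
    if record is None:
        raise HTTPException(status_code=404, detail="unknown sample_id")
    return record
```

Returning a typed Pydantic model keeps the response contract explicit for the application teams consuming the service.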

AI & Data Democratization

  • Democratize access to complex biosecurity datasets using AI and LLMs, making data more discoverable and usable for stakeholders (one common pattern is sketched after this list)
  • Apply AI-assisted development tools to accelerate code generation, data modeling, and pipeline development while maintaining high quality standards
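
One common pattern behind this kind of work is text-to-SQL over a documented, governed schema. The sketch below assumes the openai Python client; the model name, schema, and guardrail are illustrative assumptions, not a description of any particular production stack.

```python
# Hypothetical sketch: translate a stakeholder question into read-only SQL
# over a documented schema. All names here are illustrative.
from openai import OpenAI

SCHEMA_DOC = """
Table detections(sample_id TEXT, collected_on DATE, pathogen TEXT, site TEXT)
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def question_to_sql(question: str) -> str:
    """Ask the model for a single read-only SQL query against the schema."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": "Answer with one SELECT statement only, no prose. "
                           f"Schema:\n{SCHEMA_DOC}",
            },
            {"role": "user", "content": question},
        ],
    )
    sql = response.choices[0].message.content.strip()
    # Guardrail: never execute anything that is not a plain SELECT.
    if not sql.lower().startswith("select"):
        raise ValueError("model did not return a read-only query")
    return sql


if __name__ == "__main__":
    print(question_to_sql("How many influenza detections were there last week?"))
```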

Cloud Infrastructure & Performance

  • Build robust, production-ready data workflows using AWS, Kubernetes, Docker, Airflow, and infrastructure-as-code (Terraform/CloudFormation); a DAG sketch follows this list
  • Diagnose system bottlenecks, optimize for cost and speed, and ensure the reliability and fault tolerance of mission-critical data pipelines
  • Implement observability, monitoring, and alerting to maintain high availability for biosecurity operations
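
For illustration, here is a minimal Airflow DAG sketch with retries and a failure alert hook (the `schedule` argument assumes Airflow 2.4+). The DAG name, task bodies, and callback are hypothetical.

```python
# Hypothetical sketch of a daily pipeline as an Airflow DAG.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_on_failure(context):
    """Stand-in for paging/alerting (e.g., a Slack or PagerDuty integration)."""
    print(f"task failed: {context['task_instance'].task_id}")


def extract():    # pull the day's raw files
    ...

def transform():  # run validation and modeling steps
    ...

def load():       # publish to the warehouse
    ...


with DAG(
    dag_id="daily_surveillance_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={
        "retries": 2,                       # tolerate transient failures
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": notify_on_failure,
    },
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```

Retries plus a failure callback cover the common transient-failure and alerting cases; real pipelines would layer SLAs and data quality sensors on top.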

Technical Leadership & Collaboration

  • Lead data projects from scoping through execution, including design, documentation, and stakeholder communication
  • Collaborate with technical leads, product managers, scientists, and data analysts to build robust data products and analytics capabilities

Minimum Qualifications

  • 7+ years of professional experience in data or software engineering, with a focus on building production-grade data products and scalable architectures
  • Expert proficiency with SQL for complex transformations, performance tuning, and query optimization
  • Strong Python skills for data engineering workflows, including pipeline development, ETL/ELT processes, and data processing; experience with backend frameworks (FastAPI, Flask) for API development; focus on writing modular, testable, and reusable code
  • Proven experience with dbt for data modeling and transformation, including testing frameworks and documentation practices
  • Hands-on experience with cloud data warehouses (Snowflake, BigQuery, or Redshift), including performance tuning, security hardening, and managing complex schemas
  • Experience with workflow orchestration tools (Airflow, Dagster, or equivalent) for production data pipelines, including DAG development, scheduling, monitoring, and troubleshooting
  • Solid grounding in software engineering fundamentals: system design, version control (Git), CI/CD pipelines, containerization (Docker), and infrastructure-as-code (Terraform, CloudFormation)
  • Hands-on experience managing AWS resources, including S3, IAM roles/policies, API integrations, and security configurations
  • Strong ability to analyze large datasets, identify data quality issues, debug pipeline failures, and propose scalable solutions
  • Excellent communication skills and the ability to work cross-functionally with scientists, analysts, and product teams to turn ambiguous requirements into maintainable data products

Preferred Capabilities & Experience

  • Domain familiarity with biological data (PCR, sequencing, wastewater surveillance, turnaround-time (TAT) metrics) and experience working with lab, bioinformatics, NGS, or epidemiology teams
  • Production ownership of Snowflake environments including RBAC, secure authentication patterns, and cost/performance optimization
  • Experience with observability and monitoring stacks (Grafana, Datadog, or similar) and data quality monitoring (anomaly detection, volume/velocity checks, schema drift detection)
  • Familiarity with container orchestration platforms (Kubernetes) for managing production workloads
  • Experience with data ingestion frameworks (Airbyte, Fivetran) or building custom ingestion solutions for external partner data delivery
  • Familiarity with data cataloging, governance practices, and reference data management to prevent silent data drift
  • Experience designing datasets for visualization tools (Tableau, Looker, Metabase) with strong understanding of dashboard consumption patterns; familiarity with JavaScript for custom visualizations or front-end dashboard development
  • Comfort with AI-assisted development tools (GitHub Copilot, Cursor) to accelerate code generation while maintaining quality standards
  • Startup or fast-paced environment experience with evolving priorities and rapid iteration
  • Scientific or data-intensive domain experience (life sciences, healthcare, materials science)

The base salary range for this role is $134,300-$189,900. Actual pay within this range will depend on a candidate's skills, expertise, and experience. We also offer company stock awards and a comprehensive benefits package, including medical, dental & vision coverage, health spending accounts, voluntary benefits, leave of absence policies, a 401(k) program with employer contribution, and 8 paid holidays in addition to a full-week winter shutdown and an unlimited Paid Time Off policy.

 
It is the policy of Ginkgo Bioworks to provide equal employment opportunities to all employees and employment applicants. EOE, including disability/vets.

Privacy Notice
I understand that I am applying for employment with Ginkgo Bioworks and am being asked to provide information in connection with my application. I further understand that Ginkgo gathers this information through a third-party service provider and that Ginkgo may also use other service providers to assist in the application process. Ginkgo may share my information with such third-party service providers in connection with my application and for the start of employment. Ginkgo will treat my information in accordance with Ginkgo's Privacy Policy. By submitting this job application, I am acknowledging that I have reviewed and agree to Ginkgo's Privacy Policy as well as the privacy policies of the third-party service providers used by Ginkgo in connection with the application process.


Important Notices

If you have a disability and you wish to discuss potential accommodations related to applying for employment, please contact [email protected].

Recruitment & Staffing Agencies

Ginkgo Bioworks does not accept unsolicited resumes from any source other than candidates. The submission of unsolicited resumes by recruitment or staffing agencies to Ginkgo or its employees is strictly prohibited unless contacted directly by Ginkgo's internal Talent Acquisition team. Any resume submitted by an agency in the absence of a signed agreement will automatically become the property of Ginkgo Bioworks, and Ginkgo will not owe any referral or other fees with respect thereto.

Scam Alert

Please be aware of the potential for scams from individuals, organizations, and internet sites that claim to represent Ginkgo Bioworks, Inc. in recruitment activities. Ginkgo conducts a formal recruitment process for all authorized positions posted and does not conduct interviews via social media or other third-party sites. This site is secure, and any applications made here will feed to our legitimate Applicant Tracking System. If you're contacted by a Ginkgo Recruiter, please ensure whoever is contacting you truly represents Ginkgo. We will never ask for the exchange of any money or credit card details during the recruitment process. Please be aware of any suspicious email activity from people who could be pretending to be recruiters or senior professionals at Ginkgo, and please report any suspicious recruiting activity to:

  • Your state's Attorney General's Office
  • Internet Crime Complaint Center
  • USA.gov
  • Better Business Bureau
