Intuitive is hiring a Data Engineer Intern

Sunnyvale, United States
Internship

Primary Function of Position

Join our high-energy, fast-paced Data Engineering team and help drive the future of surgical workflow and performance data in robotic surgery! As a Data Engineering Intern, you’ll be hands-on in designing, building, and optimizing data platforms that power our advanced analytics and innovative product development. You’ll work closely with our engineering, data, and product teams, taking ownership of real projects that ensure data integrity, security, and accessibility.

Essential Job Duties

  • Collaborate in the design, deployment, and operation of robust data architectures and solutions for capturing, storing, and utilizing structured and unstructured data from diverse sources.
  • Support the creation of data pipelines and ETL processes to channel data from various inputs and store it across distributed (cloud) or local databases, ensuring high standards of data integrity and security.
  • Assist in building technical tools and programs that leverage AI, machine learning, and big data techniques to cleanse, organize, and transform data while maintaining and safeguarding data structures.
  • Contribute to developing microservices that ingest and process large amounts of log data through distributed systems.
  • Participate in the design, development, and management of distributed ETL pipelines, utilizing tools like Python, Airflow, Spark, EMR, Docker, and Elasticsearch to integrate surgical data with business processes (see the pipeline sketch after this list).
  • Gain hands-on experience with our robotic systems, including uploading and streaming event data, video, kinematics, and other real-time data streams.
  • Implement model testing, validation, and optimization in automated ETL and training pipelines to ensure data accuracy and system performance.
  • Collaborate in integrating extensive databases (videos, system data, metadata) with advanced analytics and machine learning frameworks, supporting teams in product development.
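
To make the pipeline work above concrete, here is a minimal, hypothetical sketch of a daily Airflow ETL pipeline in Python. It assumes Airflow 2.4+ and Python 3.9+; the DAG name, task bodies, and sample record are illustrative placeholders, not a description of Intuitive's actual pipelines.

    # Minimal daily ETL DAG sketch (assumes Airflow 2.4+ and Python 3.9+).
    # All names and data below are hypothetical placeholders.
    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def surgical_event_etl():
        @task
        def extract() -> list[dict]:
            # Placeholder: pull raw event records from an upstream source.
            return [{"event": "instrument_change", "duration_s": 42}]

        @task
        def transform(records: list[dict]) -> list[dict]:
            # Placeholder: cleanse and normalize records before loading.
            return [r for r in records if r.get("duration_s", 0) > 0]

        @task
        def load(records: list[dict]) -> None:
            # Placeholder: write curated records to a warehouse or data lake.
            print(f"Loaded {len(records)} records")

        load(transform(extract()))


    surgical_event_etl()

In a real deployment the extract, transform, and load steps would typically hand off to Spark on EMR or to services running in Docker and backed by Elasticsearch, with Airflow handling only scheduling, retries, and dependencies.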

Required Skills and Experience

  • Strong foundational skills in data engineering and ETL, plus familiarity with tools like Jenkins, dbt, and Airflow.
  • Strong coding skills in Python, Scala, and/or Java, with an emphasis on clean, maintainable, and efficient code for data processing.
  • Proficient in designing, implementing, and optimizing ETL/ELT pipelines using tools like Apache Airflow, dbt, and AWS Glue to support scalable data workflows.

Required Education and Training

  • Currently pursuing a degree in Computer Science, Software/Computer Engineering, Information Technology, Data Science, or a related field.

Preferred Skills and Experience  

  • Basic knowledge of SQL and NoSQL databases (e.g., MySQL, Postgres, MongoDB) and time-series databases (e.g., Druid, InfluxDB).
  • Familiarity with AWS (EC2, S3, RDS, Lambda, EKS, Kinesis, Athena, Glue, DynamoDB, Redshift, IAM) and an interest in cloud infrastructure.
  • Understanding of security and authentication standards, including SAML, OAuth, JWT, and SSO.
  • Experience in orchestrating data workflows and ETL processes using AWS Data Pipeline or AWS Step Functions.
  • Knowledge of interactive data preparation tools for cleaning and transforming data.
  • Interest or experience in data analytics (dashboards, insights) and tools like Tableau.
  • Experience with or an interest in CI/CD pipelines and build tools like Jenkins, CircleCI, or GitLab.
  • Deep knowledge of Apache Spark and Kafka for batch and real-time data processing at scale (see the streaming sketch after this list).
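
As a rough illustration of the last item, the sketch below shows the general shape of a Spark Structured Streaming job that consumes events from Kafka. It assumes PySpark 3.x with the Kafka connector package available on the job's classpath; the broker address, topic name, and schema are hypothetical placeholders rather than details of any real system.

    # Streaming-read-from-Kafka sketch (assumes PySpark 3.x plus the
    # spark-sql-kafka connector). Broker, topic, and schema are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType

    spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

    # Hypothetical schema for JSON payloads on the topic.
    schema = StructType([
        StructField("event", StringType()),
        StructField("duration_s", DoubleType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
        .option("subscribe", "surgical-events")                # placeholder topic
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    # Print parsed events to the console; a production job would write to a
    # durable sink such as S3, Elasticsearch, or a warehouse table instead.
    query = events.writeStream.format("console").outputMode("append").start()
    query.awaitTermination()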

 

Due to the nature of our business and the role, please note that Intuitive and/or your customer(s) may require that you show current proof of vaccination against certain diseases, including COVID-19. Details can vary by role.

Intuitive is an Equal Employment Opportunity / Affirmative Action Employer. We provide equal employment opportunities to all qualified applicants and employees, and prohibit discrimination and harassment of any type, without regard to race, sex, pregnancy, sexual orientation, gender identity, national origin, color, age, religion, protected veteran or disability status, genetic information or any other status protected under federal, state, or local applicable laws.

EEO and AA Policy

We will consider for employment qualified applicants with arrest and conviction records in accordance with fair chance laws.

We provide market-competitive compensation packages, inclusive of base pay (paid at an hourly rate), benefits, and a housing allowance. It would not be typical for someone to be hired at the top end of the range for the role, as actual pay will be determined based on several factors, including relevant skills and experience for this internship, degree-seeking academic program (PhD, Master’s, Bachelor’s, etc.), year in school, and location. The hourly rate is prorated against the intern program salaries listed, and total program compensation will be based on internship duration.

Apply for this job
