(Independent Contractor): Content Maintenance Mentor - Data Engineering

Overview

Join a collaborative contractor community to maintain and update cutting-edge data engineering content and tooling for Udacity's School of Data Science.

About Us

Udacity is now an Accenture company, and exciting things are happening! 🚀 We are on a mission to forge futures in tech through radical talent transformation in digital technologies. We offer a unique and immersive online learning platform, powering corporate technical training in fields such as Artificial Intelligence, Machine Learning, Data Science, Autonomous Systems, Cloud Computing, and more. Our rapidly growing global organization is revolutionizing how the enterprise market bridges talent shortages and skills gaps during its digital transformation journey.

Udacity is a pioneer in online technical education, offering high-quality courses across a wide range of disciplines. Our catalog includes short and long programs, Nanodegrees (bundled courses), and content tailored to multiple skill levels (foundational, beginner, intermediate, and advanced), as well as to business leadership audiences.

To ensure our content remains current, impactful, and industry-aligned, we continuously review and update our courses. We take a data-driven approach to evaluating content quality and identifying outdated material. Key performance metrics, such as student satisfaction, lesson ratings, and page-level feedback, help us determine whether a course requires maintenance. Throughout the year, various courses are kept under active maintenance to ensure they receive timely updates. To do this effectively, we regularly collaborate with expert contractors who help update the course content.

As new needs arise, we contact qualified candidates within our contractor pool to share project details, scope, and timelines. Contractors work closely with a Udacity team member who provides tooling, guidance, and logistical support. In most cases, contractors operate as individual contributors, though they may collaborate with other teams, such as Content Developers, Program Managers, and Learning Architects, to define scope, set priorities, and gather necessary information about the content under maintenance.

About the School of Data Science 

We’re building a contractor pool of data professionals to support our School of Data Science. The School of Data Science currently offers courses and Nanodegrees across the end-to-end data lifecycle, including (but not limited to):

  • Data Literacy and Data Fluency
  • Data Analytics and Business Analytics (SQL, Spreadsheets, Power BI, Tableau)
  • Data Visualization and Data Storytelling
  • Statistics, Probability, and Experimental Design
  • Data Science and Machine Learning Fundamentals (Python, R, ML pipelines)
  • Data Engineering and Streaming (Airflow, Kafka, Spark, Data Lakes/Lakehouses, Data Warehouses)
  • Data Architecture, Data Governance, and Data Privacy
  • Cloud Data Solutions on AWS and Azure (e.g., Redshift, Synapse, Databricks, S3/ADLS)

Our contractor pool helps ensure this catalog stays technically accurate, pedagogically sound, and aligned with industry practices in data, analytics, and AI.

Understanding Our Learning Infrastructure

To effectively maintain and update our data courses, you'll need to understand how students interact with our content. Our courses use two key technologies:

Udacity Workspaces
For practitioner content, we provide in-classroom workspaces so students don’t need to install or purchase any tools or set up environments locally. These workspaces are Docker containers running in Kubernetes, and students access them directly from the classroom page in their browser. For the School of Data Science, common workspace types include:

  • Jupyter Notebooks: for Python- and R-based data analysis, statistics, and machine learning
  • SQL Workspaces: browser-based SQL UIs against managed databases
  • VS Code Workspaces: for more complex data engineering, data science, and software-for-data workflows

These workspaces require continuous updates and patching, and the exercise and project starter code must be kept compatible with each updated workspace image (e.g., with new versions of Python libraries and data engineering toolchains).
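To give a feel for what that maintenance looks like in practice, here is a minimal Python sketch of a version-pin check a maintainer might run inside a workspace image before republishing starter code. The package names and pinned versions are hypothetical examples for illustration, not Udacity's actual tooling.

```python
#!/usr/bin/env python3
"""Check that installed packages match the versions a workspace image expects.

A minimal sketch: the pinned package list below is hypothetical, not part of
Udacity's actual maintenance tooling.
"""
from importlib.metadata import PackageNotFoundError, version

# Hypothetical pins the starter code was tested against.
EXPECTED = {
    "pandas": "2.2.2",
    "pyspark": "3.5.1",
    "apache-airflow": "2.9.1",
}


def check_pins(expected: dict[str, str]) -> list[str]:
    """Return human-readable mismatches between the pins and the environment."""
    problems = []
    for package, pinned in expected.items():
        try:
            installed = version(package)
        except PackageNotFoundError:
            problems.append(f"{package}: not installed (expected {pinned})")
            continue
        if installed != pinned:
            problems.append(f"{package}: installed {installed}, expected {pinned}")
    return problems


if __name__ == "__main__":
    mismatches = check_pins(EXPECTED)
    if mismatches:
        print("Workspace image is out of sync with the starter code pins:")
        for line in mismatches:
            print(f"  - {line}")
    else:
        print("All pinned packages match the workspace image.")
```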


Udacity Cloud Labs
We also provide temporary access to various cloud services via Cloud Labs. For the School of Data Science, these are primarily AWS and Azure labs that power data engineering, data architecture, and streaming exercises and projects. Cloud Labs are federated accounts that let students access the AWS or Azure consoles with temporary credentials, and they come pre-configured with role-based access control (RBAC) and scoped policies. In some cases, we use Infrastructure as Code to pre-provision the data resources an exercise or project requires, such as data warehouses, data lakes, Kafka clusters, or compute environments.
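To make the temporary-credentials model concrete, here is a minimal Python sketch of the general AWS STS pattern using boto3. The role ARN, session name, and duration are placeholders, and Udacity's actual federation is handled through partner tooling, so treat this as an illustration of the pattern rather than the real Cloud Labs implementation.

```python
"""Illustration of issuing temporary, scoped AWS credentials via STS.

A sketch of the general pattern only; the role ARN, session name, and duration
are placeholders, not Udacity's actual Cloud Labs configuration.
"""
import boto3

sts = boto3.client("sts")

# Assume a narrowly scoped lab role for a limited time window.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/example-student-lab-role",  # placeholder
    RoleSessionName="example-student-session",
    DurationSeconds=3600,  # credentials expire after one hour
)
credentials = response["Credentials"]

# Build a session from the temporary credentials; any client created from it
# is constrained by the IAM policies attached to the lab role.
lab_session = boto3.Session(
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)

s3 = lab_session.client("s3")
print([bucket["Name"] for bucket in s3.list_buckets().get("Buckets", [])])
```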

If you thrive on challenges, want to make an impact, and are interested in joining our contractor community, we encourage you to read on and apply.

JOIN THE TEAM TODAY

Required skills/qualifications: 

  • Deep expertise in data engineering and data platforms on AWS and/or Azure.
  • Advanced knowledge of Infrastructure as Code tools (Terraform, CloudFormation, or Bicep) for provisioning data infrastructure.
  • Strong experience writing Dockerfiles, Makefiles, and shell scripts, especially for data science and data engineering environments.
  • Hands-on experience with Kubernetes or container orchestration (GKE or equivalent) is a strong plus.
  • Deep understanding of cloud security and data governance, including IAM policies, Azure RBAC, network controls, encryption, and handling of sensitive data.
  • Proven ability to create and configure images and environments (e.g., AMIs, VM images, cluster templates, workspace images) for data workloads.
  • Experience with Linux system administration and package management.
  • Proficiency in setting up and configuring development environments and toolchains for data teams (Python/R environments, SQL engines, CLI tools, etc.).
  • Strong automation skills and ability to build reproducible infrastructure and environments.
  • Experience with CI/CD pipelines and deployment automation (e.g., GitHub Actions, CircleCI, Azure DevOps) is a plus.

Responsibilities:

  • Udacity Workspaces
    • Install and update data science and data engineering toolchains in existing workspace images (e.g., Python, R, Jupyter, VS Code extensions, CLI tools such as AWS CLI, Azure CLI, Kafka/Spark tooling).
    • Create new Docker images for Udacity Workspaces tailored to data workloads. This may require:
      • Writing and maintaining Dockerfiles, Makefiles, and shell scripts.
      • Optimizing images for performance, reproducibility, and reliability.
      • Coordinating with platform teams when changes affect Kubernetes/GKE configuration.
  • Cloud Labs (AWS and Azure)
    • Update IAM policies, RBAC configurations, and resource provisioning scripts as needed to support data workloads (e.g., access to S3/ADLS, Redshift/Synapse, Databricks, Kafka, managed databases); a small illustrative policy sketch follows this list.
    • Create or update Cloud Labs using our partner tools. This will require knowledge of AWS IAM policies, Azure RBAC, and Infrastructure as Code (Terraform, CloudFormation, or Bicep).
    • Create and maintain images or templates (e.g., AMIs, VM images, Databricks clusters) pre-loaded with required tools, drivers, and configurations for data projects.
    • Install and configure tools in Linux environments (e.g., Python data stacks, Spark, Kafka clients, database clients) for student use.
    • Develop Infrastructure as Code templates to automate provisioning of data infrastructure for exercises and projects (pipelines, data stores, compute, networking, monitoring).
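As a concrete, deliberately simplified example of the IAM-policy work described above, the Python sketch below generates a least-privilege policy document granting read-only access to a single course bucket. The bucket name is hypothetical, and real Cloud Labs policies are managed through partner tooling and Infrastructure as Code.

```python
"""Generate a least-privilege IAM policy document for a single course bucket.

A simplified sketch: the bucket name is hypothetical, and production Cloud Labs
policies are managed through partner tooling and Infrastructure as Code.
"""
import json

COURSE_BUCKET = "example-data-engineering-course-bucket"  # placeholder name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListCourseBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{COURSE_BUCKET}"],
        },
        {
            "Sid": "ReadCourseObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{COURSE_BUCKET}/*"],
        },
    ],
}

# Emit the policy as JSON so it can be attached to the lab role,
# for example from an IaC template or a provisioning script.
print(json.dumps(policy, indent=2))
```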

Why should you apply? 

  • Gain recognition for your technical knowledge
  • Network with other top-notch technical mentors 
  • Earn additional income 
  • Contribute to a vibrant, global student community 
  • Stay updated on the latest in cutting-edge technologies

 

Also, when attaching your resume/CV, please make sure the document is in English.

 

Compensation at Udacity, an Accenture company, varies depending on a wide array of factors, which may include but are not limited to location, role, skill set, and level of experience. As required by local law, Udacity, an Accenture company, will provide a reasonable range of compensation. 

We believe that no one should be discriminated against because of their differences. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Our rich diversity makes us more innovative, more competitive, and more creative, which helps us better serve our clients and our communities.

 Accenture Equal Opportunity Statement

Udacity, an Accenture company, is an EEO and Affirmative Action Employer of Veterans/Individuals with Disabilities, and is committed to providing veteran employment opportunities to our service men and women.

Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States. Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process. Further, at Accenture a criminal conviction history is not an absolute bar to employment. 

 

Udacity's Values
 
Obsess over Outcomes - Take the Lead - Embrace Curiosity - Celebrate the Assist 
 

 
