Platform Administrator (Databricks)

Nitka Technologies develops software for customers in the US and Europe, bringing together about 300 professionals from Eastern Europe, North and South America, Armenia, Georgia, and Kazakhstan.

We are looking for an experienced Platform Administrator (Databricks) for a long-term project. The customer is a California-based company and an industry leader in event ticket sales in Europe and the USA.

We offer 100% remote, full-time work.

Main tasks:
  • Manage multiple Databricks workspaces (dev/qa/prod);
  • Configure cluster policies, governance & compliance;
  • Create & maintain Unity Catalog objects: catalogs, schemas, grants, service principals, external storage, etc.;
  • Monitor and debug failed or long-running jobs using system tables (job_run_timeline, node_timeline, workflow_run; see the monitoring sketch after this list);
  • Troubleshoot cluster crashes, driver OOM, executor failures, memory leaks;
  • Investigate Python/Spark errors, dependency conflicts (PyPI, WHL, Maven);
  • Assist users with cluster/job configuration, notebook errors, Unity Catalog permissions;
  • Explain platform limitations and best practices to data engineers;
  • Maintain Confluence pages with platform rules & troubleshooting guides.
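
A minimal sketch of the monitoring query this involves, written in Python for a Databricks notebook. The table and column names (job_run_timeline, result_state, period_start_time) follow the documented system.lakeflow schema; treat them as assumptions and verify them in your workspace, since system-table schemas evolve.

    # Failed runs in the last 24 hours, from the jobs system table.
    failed = spark.sql("""
        SELECT job_id, run_id, period_start_time, result_state, termination_code
        FROM system.lakeflow.job_run_timeline
        WHERE result_state = 'FAILED'
          AND period_start_time >= current_timestamp() - INTERVAL 1 DAY
        ORDER BY period_start_time DESC
    """)
    display(failed)

    # Longest recent runs, as a starting point for debugging slow jobs.
    slow = spark.sql("""
        SELECT job_id, run_id,
               timestampdiff(MINUTE, period_start_time, period_end_time) AS run_minutes
        FROM system.lakeflow.job_run_timeline
        WHERE period_end_time IS NOT NULL
        ORDER BY run_minutes DESC
        LIMIT 20
    """)
    display(slow)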

Requirements:

  • Experience in implementation, integration, or technical support;
  • Experience with Linux and Bash;
  • Confident SQL skills;
  • Strong experience with AWS infrastructure (EC2, S3, IAM, VPC, secrets management, SQS);
  • Python experience sufficient for writing small scripts or applications;
  • Experience maintaining Databricks jobs & environments;
  • Experience with REST APIs/SDKs (see the SDK sketch after this list);
  • Understanding of file-based table formats (Delta Lake, Parquet, Hive);
  • Understanding of cluster types & node families;
  • Spoken English at Intermediate level or higher.
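
For the REST API/SDK item, a minimal sketch using the Databricks SDK for Python (pip install databricks-sdk). It assumes authentication is already configured through the DATABRICKS_HOST/DATABRICKS_TOKEN environment variables or a ~/.databrickscfg profile.

    # Minimal workspace inventory via the Databricks SDK for Python.
    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()  # picks up ambient auth (env vars or config profile)

    # List clusters and their current state: a typical admin health check.
    for cluster in w.clusters.list():
        print(cluster.cluster_name, cluster.state)

    # List jobs, e.g. to cross-reference failed runs found in system tables.
    for job in w.jobs.list():
        print(job.job_id, job.settings.name)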

Would be a plus:

  • Experience with the Databricks REST API / Databricks SDK;
  • Working with schema evolution, time travel, VACUUM, compaction, Z-Ordering (see the maintenance sketch after this list);
  • Debugging corrupted Delta tables (conflicting commits, tombstones, missing checkpoints);
  • Understanding of ACID implementation on top of object storage;
  • Spark knowledge (jobs, partitions, queries), experience with Kafka or a similar technology, familiarity with Terraform / GitLab CI;
  • Cost monitoring for platform services and objects;
  • Experience with enterprise data platforms.
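
A minimal sketch of the routine Delta maintenance referenced above, again as notebook Python. The table name (main.sales.orders) and the Z-Order column are placeholders, and the VACUUM retention should reflect your recovery and compliance requirements.

    # Time travel: inspect the table as of an earlier version.
    spark.sql("SELECT count(*) FROM main.sales.orders VERSION AS OF 42").show()

    # Compaction plus Z-Ordering to reduce small files and speed up point lookups.
    spark.sql("OPTIMIZE main.sales.orders ZORDER BY (order_id)")

    # Remove data files no longer referenced by the table (default retention: 7 days).
    spark.sql("VACUUM main.sales.orders RETAIN 168 HOURS")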

Working conditions:

  • Remote work;
  • Full-time (8 hours/day);
  • Attractive USD compensation;
  • Paid vacation, holidays.
