Data Engineer PK (Python/PySpark/AWS Glue/Amazon Athena/SQL/Apache Airflow)

TL;DR

Contribute to innovative data engineering solutions by building and optimizing complex data pipelines using a range of modern technologies and frameworks.

Let’s be direct: We’re looking for a technical powerhouse. If you’re the developer who:

  • Is the clear technical leader on your team

  • Consistently solves problems others can’t crack

  • Ships complex features in half the time it takes others

  • Writes code so clean it could be published as a tutorial

  • Takes pride in elevating the entire codebase

Then we want to talk to you.
This isn’t a role for everyone, and that’s by design.
We’re seeking developers who know they’re exceptional and have the track record to prove it.

What you’ll do

  • Build, optimize, and scale data pipelines and infrastructure using Python, TypeScript, Apache Airflow, PySpark, AWS Glue, and Snowflake.

  • Design, operationalize, and monitor ingest and transformation workflows: DAGs, alerting, retries, SLAs, lineage, and cost controls.

  • Collaborate with platform and AI/ML teams to automate ingestion, validation, and real-time compute workflows; work toward a feature store.

  • Integrate pipeline health and metrics into engineering dashboards for full visibility and observability.

  • Model data and implement efficient, scalable transformations in Snowflake and PostgreSQL.

  • Build reusable frameworks and connectors to standardize internal data publishing and consumption.
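The orchestration duties above (retries, alerting, SLAs) are typically expressed declaratively in Airflow DAG definitions. As a minimal, library-free illustration of the underlying retry-with-backoff pattern, here is a plain-Python sketch using only the standard library; `flaky_ingest` and the parameter values are hypothetical:

```python
import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_retries(task, max_retries=3, base_delay=1.0):
    """Run a pipeline task, retrying with exponential backoff.

    Mirrors, in plain Python, what Airflow provides declaratively
    via the `retries` and `retry_delay` task arguments.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return task()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, max_retries, exc)
            if attempt == max_retries:
                # In production, fire an alert here (Slack webhook,
                # PagerDuty, etc.) before surfacing the failure.
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Hypothetical flaky task: fails twice, then succeeds.
calls = {"n": 0}
def flaky_ingest():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source error")
    return "ingested"

result = run_with_retries(flaky_ingest, max_retries=3, base_delay=0.01)
```

In a real deployment, the same behavior would be configured on the Airflow task itself rather than hand-rolled, so that retries, SLAs, and alerting are visible in the scheduler UI.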

Requirements

Required qualifications

  • 4+ years of production data engineering experience.

  • Deep, hands-on experience with Apache Airflow, AWS Glue, PySpark, and Python-based data pipelines.

  • Strong SQL skills and experience operating PostgreSQL in live environments.

  • Solid understanding of cloud-native data workflows (AWS preferred) and pipeline observability (metrics, logging, tracing, alerting).

  • Proven experience owning pipelines end-to-end: design, implementation, testing, deployment, monitoring, and iteration.

Preferred qualifications

  • Experience with Snowflake performance tuning (warehouses, partitions, clustering, query profiling) and cost optimization.

  • Real-time or near-real-time processing experience (e.g., streaming ingestion, incremental models, CDC).

  • Hands-on experience with a backend TypeScript framework (e.g., NestJS) is a strong plus.

  • Experience with data quality frameworks, contract testing, or schema management (e.g., Great Expectations, dbt tests, OpenAPI/Protobuf/Avro).

  • Background in building internal developer platforms or data platform components (connectors, SDKs, CI/CD for data).
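For the data quality and contract-testing point above, the core idea can be sketched without any framework: validate each record against a declared schema before publishing it. This is a simplified stand-in for what Great Expectations or dbt tests do; the schema and field names here are hypothetical:

```python
# Hypothetical data contract: field name -> expected Python type.
SCHEMA = {
    "user_id": int,
    "email": str,
    "signup_ts": str,
}

def validate(record, schema=SCHEMA):
    """Return a list of contract violations for a single record."""
    errors = []
    for field, expected in schema.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(
                f"{field}: expected {expected.__name__}, "
                f"got {type(record[field]).__name__}"
            )
    return errors

good = {"user_id": 1, "email": "a@b.co", "signup_ts": "2024-01-01"}
bad = {"user_id": "1", "email": "a@b.co"}
```

Frameworks add what this sketch omits: batch-level expectations, profiling, and reporting integrated into the pipeline's observability stack.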

Additional information

  • This is a fully remote position.

  • Compensation will be in USD.

  • Work hours align with either the Eastern time zone (9 AM to 6 PM EST) or the Pacific time zone.

Wizdaa builds advanced solutions for DevOps and Platform Engineering, focusing on boosting deployment productivity and managing multiple environments seamlessly. Our offerings are designed for exceptional developers and teams looking to enhance their technical capabilities and leadership in software development.
