Data Platform Engineer

We are a high-growth company transforming how businesses operate by integrating AI, IoT, and cloud-native services into scalable, real-time platforms. As a Data Platform Engineer, you’ll play a critical role in building and maintaining the data infrastructure that powers our products, services, and insights.

You’ll join a multidisciplinary team focused on ingesting, processing, and managing massive streams of sensor and operational data across a wide array of devices—from drones and robots to industrial systems and smart environments.

Responsibilities

  • Design, build, and maintain scalable, reliable, and high-throughput data ingestion pipelines for structured and semi-structured data.
  • Implement robust and secure data lake and SQL-based storage architectures optimized for performance and cost.
  • Develop and maintain internal tools and frameworks for data ingestion using Python, Golang, and SQL.
  • Collaborate cross-functionally with Cloud, Edge, Product, and AI teams to define data contracts, schemas, and retention policies.
  • Use AWS services (including S3, Lambda, Glue, Kinesis, Athena, and RDS) together with Argo Workflows to support end-to-end data workflows.
  • Employ Infrastructure-as-Code (IaC) practices using Terraform to manage data platform infrastructure.
  • Monitor data pipelines for quality, latency, and failures using tools such as CloudWatch, Sumo Logic, or Datadog.
  • Continuously optimize storage, partitioning, and query performance across large-scale datasets.
  • Participate in architecture reviews and ensure the platform adheres to security, compliance, and best practice standards.

Skills and Qualifications

  • 5+ years of professional experience in software engineering or data engineering.
  • Strong programming skills in Python and Golang.
  • Deep understanding of SQL and modern data lake architectures (e.g., using Parquet, Iceberg, or Delta Lake).
  • Hands-on experience with AWS services including but not limited to: S3, Lambda, Glue, Kinesis, Athena, and RDS.
  • Proficiency with Terraform for automating infrastructure deployment and management.
  • Experience working with real-time or batch data ingestion at scale, and designing fault-tolerant ETL/ELT pipelines.
  • Familiarity with event-driven architectures and messaging systems like Kafka or Kinesis.
  • Strong debugging and optimization skills across cloud, network, and application layers.
  • Excellent collaboration, communication, and documentation skills.

Bonus Points

  • Experience working with time-series or IoT sensor data at industrial scale.
  • Familiarity with analytics tools and data warehouse integration (e.g., Redshift, Snowflake).
  • Exposure to gRPC and protobuf-based data contracts.
  • Experience supporting ML pipelines and feature stores.
  • Working knowledge of Kubernetes concepts.
  • Prior startup experience and/or comfort working in fast-paced, iterative environments.

At BrightAI, we are transforming industries with cutting-edge AI, IoT, Cloud, and Mobile solutions. As a high-growth company, we seek talented individuals eager to drive meaningful change and shape the future of business operations. If you're ready to make an impact and be part of a team redefining what's possible, explore our current opportunities and build the future with us.
