Senior Big Data Engineer

Develop data pipelines and ETL processes using AWS, Python, and PySpark in a collaborative environment focused on high-performance advertising technology.
  • Design, develop, and maintain robust data pipelines and ETL processes using Python, SQL, and PySpark 
  • Work with large-scale data stores such as AWS S3, DynamoDB, and MongoDB 
  • Ensure high-quality, consistent, and reliable data flows between systems 
  • Optimize performance, scalability, and cost efficiency of data solutions 
  • Collaborate with backend developers and DevOps engineers to integrate and deploy data components 
  • Implement monitoring, logging, and alerting for production data pipelines 
  • Participate in architecture design, propose improvements, and mentor mid-level engineers 

 

    REQUIREMENTS

    • 5+ years of experience in data engineering or backend development 
    • Strong knowledge of Python and SQL 
    • Hands-on experience with AWS (S3, Glue, Lambda, DynamoDB) 
    • Practical knowledge of PySpark or other distributed processing frameworks 
    • Experience with NoSQL databases (MongoDB or DynamoDB) 
    • Good understanding of ETL principles, data modeling, and performance optimization 
    • Understanding of data security and compliance in cloud environments 
    • Fluent in English (Upper-Intermediate level or higher)

    PERSONAL PROFILE

    • Strong communication and collaboration skills in cross-functional environments 
    • Proactive, accountable, and driven to deliver high-quality results

    Build a stunning career with Sigma Software! Find your dream job, send your CV, and become one of us!
