Principal Data Engineer - R01560587

Overview

Design and implement scalable data architectures using Snowflake while optimizing performance and developing robust data pipelines with Python and DBT.
Principal Data Engineer Primary Skills
  • Snowflake data architecture and data engineering
  • ETL Fundamentals, Zero Copy Cloning, SQL (Basic + Advanced), Python, Data Warehousing, Snowflake Data Exchange, Time Travel and Fail Safe, Snowpipe, SnowSQL, Modern Data Platform Fundamentals, Data Modelling Fundamentals, PL/SQL, T-SQL, Stored Procedures
Job requirements

  • Experience Range: 12-15 years, including significant hands-on expertise in Snowflake data architecture and data engineering

  • Key Responsibilities:
  • Design and implement scalable Snowflake data architectures to support enterprise data warehousing and analytics needs
  • Optimize Snowflake performance through advanced tuning, warehousing strategies, and efficient data sharing solutions
  • Develop robust data pipelines using Python and DBT, including modeling, testing, macros, and snapshot management
  • Implement and enforce security best practices such as RBAC, data masking, and row-level security across cloud data platforms
  • Architect and manage AWS-based data solutions leveraging S3, Redshift, Lambda, Glue, EC2, and IAM for secure and reliable data operations
  • Orchestrate and monitor complex data workflows using Apache Airflow, including DAG design, operator configuration, and scheduling (see the sketch after this list)
  • Utilize version control systems such as Git to manage codebase and facilitate collaborative data engineering workflows
  • Integrate and process high-volume data using Apache ecosystem tools such as Spark, Kafka, and Hive, with an understanding of Hadoop environments
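
For illustration, a minimal sketch of the Airflow-orchestrated pipeline pattern these responsibilities describe: a daily DAG that stages an extract to S3, loads it into Snowflake, and builds DBT models. The DAG id, stage, table, and connection details are hypothetical placeholders, not details from this posting.

```python
# Minimal daily pipeline sketch: extract -> load to Snowflake -> DBT build.
# Assumes Airflow 2.x with snowsql and dbt available on the worker.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_to_s3(**context):
    # Placeholder extract step; a real task would pull from a source
    # system and write partitioned files to S3 for the run date.
    print(f"extracting batch for {context['ds']}")


with DAG(
    dag_id="orders_daily",             # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow >= 2.4; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)

    # Load the staged files via COPY INTO; stage and table names are illustrative.
    load = BashOperator(
        task_id="load_snowflake",
        bash_command=(
            'snowsql -q "COPY INTO raw.orders '
            'FROM @raw.orders_stage/{{ ds }}/ FILE_FORMAT = (TYPE = PARQUET)"'
        ),
    )

    # Build and test downstream DBT models once the load succeeds.
    transform = BashOperator(task_id="dbt_run", bash_command="dbt run && dbt test")

    extract >> load >> transform
```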

  • Required Skills:
  • Advanced hands-on experience with Snowflake, including performance tuning and warehousing strategies
  • Expertise in Snowflake security features such as RBAC, data masking, and row-level security (a sketch follows this list)
  • Proficiency in advanced Python programming for data engineering tasks
  • In-depth knowledge of DBT for data modeling, testing, macros, and snapshot management
  • Strong experience with AWS services including S3, Redshift, Lambda, Glue, EC2, and IAM
  • Extensive experience designing and managing Apache Airflow DAGs and scheduling workflows
  • Proficiency in version control using Git for collaborative development
  • Hands-on experience with Apache Spark, Kafka, and Hive
  • Solid understanding of Hadoop ecosystem
  • Expertise in SQL (basic and advanced), including SnowSQL, PL/SQL, and T-SQL
  • Strong requirements-analysis, presentation, and documentation skills; able to translate business needs into clear, structured functional/technical documents and present them effectively to stakeholders
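
As a concrete illustration of the Snowflake security items above, a short sketch that provisions a read-only role and applies a dynamic masking policy through the Python connector. The account, credentials, role, and object names are hypothetical, and a production setup would manage such changes through versioned migrations rather than an ad-hoc script.

```python
# Sketch: RBAC grants plus a dynamic data masking policy in Snowflake,
# issued through the snowflake-connector-python driver. All names below
# (account, roles, database, schema, table, column) are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="deploy_user",
    password="***",
    role="SECURITYADMIN",   # a role privileged to create roles and policies
)

statements = [
    # RBAC: a read-only analyst role scoped to one schema.
    "CREATE ROLE IF NOT EXISTS analyst_ro",
    "GRANT USAGE ON DATABASE analytics TO ROLE analyst_ro",
    "GRANT USAGE ON SCHEMA analytics.marts TO ROLE analyst_ro",
    "GRANT SELECT ON ALL TABLES IN SCHEMA analytics.marts TO ROLE analyst_ro",
    # Dynamic data masking: hide emails from all but privileged roles.
    """CREATE MASKING POLICY IF NOT EXISTS analytics.marts.email_mask
       AS (val STRING) RETURNS STRING ->
       CASE WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val
            ELSE '***MASKED***' END""",
    """ALTER TABLE analytics.marts.customers
       MODIFY COLUMN email SET MASKING POLICY analytics.marts.email_mask""",
]

with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)
conn.close()
```

Row-level security follows the same pattern: define a CREATE ROW ACCESS POLICY and attach it with ALTER TABLE ... ADD ROW ACCESS POLICY on the relevant columns.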

  • Preferred Skills:
  • Experience with Salesforce Data Cloud integration
  • Familiarity with data cataloging tools such as Alation
  • Exposure to real-time streaming architectures
  • Experience working in multi-cloud environments
  • Knowledge of DevOps or DataOps practices
  • Certifications in data cloud technologies

  • Desired Qualifications:
  • Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field
  • Relevant certifications in Snowflake, AWS, or data engineering technologies are highly desirable

About Brillio

Brillio is a global leader in Enterprise Digital Transformation Solutions, partnering with companies to drive business improvement and competitiveness through innovative technology solutions.

Salary
$140K – $150K per year