Senior Data Engineer - ENF

Sacramento, United States
Remote

Enformion is a dynamic and innovative data and analytics company that assists digital marketplaces in fraud prevention, risk management, seamless user onboarding, and fostering trust between shoppers and merchants. Our AI-powered solutions leverage extensive data intelligence and advanced behavioral analysis, enabling continuous monitoring for emerging risk indicators.

 

Who We Want

Do you live for working on challenging Big Data problems at massive scale? Are you the kind of engineer who knows the ins and outs of Big Data development, with the expert knowledge and experience to push your hardware to its limits? If so, we want you.

We are looking for a Senior Data Engineer to help our engineering team build a modern data processing platform using Spark, EMR, and a mix of relational and NoSQL databases. We are investing resources in a more flexible and scalable data infrastructure to support the addition of new data sets and improve overall data quality. The ideal candidate will be excited to join a small company with a startup mindset that moves quickly on a constant flow of ideas, and will be able to weed through the maze of Big Data tools and potential approaches to find the best possible solution and architecture.

Salary: $110,000 – $125,000 per year

Responsibilities

  • Implement and maintain big data platform and infrastructure
  • Develop, optimize and tune MySQL stored procedures, scripts, and indexes
  • Develop Hive schemas and scripts, Spark jobs using PySpark and Scala, and UDFs in Java
  • Design, develop, and maintain automated, complex, and efficient ETL processes for batch record matching across multiple large-scale datasets, including supporting documentation (see the PySpark sketch after this list)
  • Develop and maintain data pipelines using Airflow or similar tools, and monitor, debug, and analyze those pipelines (see the Airflow sketch after this list)
  • Troubleshoot Hadoop cluster and query issues, evaluate query plans, and optimize schemas and queries
  • Use strong interpersonal skills to resolve problems professionally, lead working groups, and negotiate consensus

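A minimal, hedged sketch of the kind of batch record-matching Spark job described above; the dataset paths, column names, and matching rule are illustrative assumptions rather than Enformion's actual implementation.

# Hypothetical PySpark batch record-matching job; paths and columns are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch-record-matching").getOrCreate()

# Two large datasets, e.g. landed on S3 by upstream ingestion jobs.
people = spark.read.parquet("s3://example-bucket/people/")
records = spark.read.parquet("s3://example-bucket/records/")

# Normalize the keys, then match on last name plus normalized address.
people_keys = people.select(
    "person_id",
    "last_name",
    F.lower(F.trim("address")).alias("addr_norm"),
)
record_keys = records.select(
    "record_id",
    "last_name",
    F.lower(F.trim("address")).alias("addr_norm"),
)
matches = people_keys.join(record_keys, on=["last_name", "addr_norm"], how="inner")

matches.write.mode("overwrite").parquet("s3://example-bucket/matched/")

Likewise, a hedged sketch of how such a job might be scheduled with Airflow against an EMR cluster; the operator choice, Airflow Variable, and job path are assumptions, not a description of Enformion's actual pipeline.

# Hypothetical Airflow DAG submitting the matching job to EMR on a daily schedule.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator

with DAG(
    dag_id="daily_record_matching",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    submit_match_job = EmrAddStepsOperator(
        task_id="submit_match_job",
        job_flow_id="{{ var.value.emr_cluster_id }}",  # assumed Airflow Variable
        steps=[{
            "Name": "batch-record-matching",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://example-bucket/jobs/match.py"],
            },
        }],
    )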
 

Qualifications & Skills

  • BS, MS, or PhD in Computer Science or related field
  • 5+ years of experience in languages such as Java, Scala, PySpark, Perl, shell scripting, and Python
  • Working knowledge of Hadoop ecosystem applications (MapReduce, YARN, Pig, HBase, Hive, Spark, and more)
  • Strong experience working with data pipelines in multi-terabyte data warehouses, including dealing with performance and scalability issues
  • Strong SQL (MySQL, Hive, etc.) and NoSQL (MongoDB, HBase, etc.) skills, including writing complex queries and performance tuning
  • Knowledge of data modeling, partitioning, indexing, and architectural database design
  • Experience using source code and version control systems such as Git
  • Experience with continuous build and test processes using tools such as GitLab, SBT, and Postman
  • Experience with search engines, name/address matching, or Linux text processing

Preferred:

  • Knowledge of cluster configuration, Hadoop administration, and performance tuning
  • Understanding of distributed computing principles and experience with big data technologies, including performance tuning
  • Machine learning experience
