Data Engineer B. - PP - 102502

TLDR

Design and maintain scalable data pipelines and ETL systems for credit and fintech products, ensuring data reliability and performance across large workflows.

As a Senior Software Engineer on the Credit Platform Data team, you will design, build, and maintain scalable data pipelines and ETL systems that enable internal business units to leverage data for credit and fintech products. This role is central to ensuring data reliability, integrity, and performance across large-scale processing workflows. You will work closely with product managers, analysts, and stakeholders to translate business requirements into robust data solutions using SQL, Python, PySpark/Pandas, and modern data warehousing concepts. This position offers the chance to shape core data infrastructure and influence how credit-related data is processed and consumed.

Responsibilities
  • Design, build, and maintain robust data pipelines and ETL processes to ingest, transform, and load data into the data warehouse.
  • Develop and optimize SQL scripts and Python-based data processing jobs (PySpark, Pandas) for large-scale workflows.
  • Implement automated data quality checks and validation processes to ensure data integrity and accuracy (see the pipeline sketch after this list).
  • Monitor system performance, troubleshoot issues, and optimize pipelines for reliability and scalability.
  • Collaborate with product managers, analysts, and stakeholders to gather requirements and deliver data solutions that meet business needs.
  • Create and maintain design documents and technical documentation for data pipelines and systems.
  • Participate in design and code reviews to maintain high engineering standards.
  • Apply data modeling and schema design techniques to support efficient, scalable storage and querying.
  • Use and extend CI/CD pipelines to deploy and manage data processing jobs.
  • Follow SDLC practices and work as an active member of a development team, modifying ETL tools and workflows as needed.
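
For illustration, here is a minimal sketch of the kind of ETL job and automated data quality check described above, written in PySpark. All table names, columns, and storage paths are hypothetical placeholders, not taken from any actual Resilient Co system.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("credit-etl-sketch").getOrCreate()

    # Extract: read raw transaction records (path is a placeholder).
    raw = spark.read.parquet("s3://example-bucket/raw/transactions/")

    # Transform: normalize amounts and derive a per-account daily aggregate.
    daily = (
        raw.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .groupBy(F.to_date("event_ts").alias("event_date"), "account_id")
           .agg(F.sum("amount").alias("daily_total"),
                F.count("*").alias("txn_count"))
    )

    # Data quality check: fail fast if any row is missing its key.
    null_keys = daily.filter(F.col("account_id").isNull()).count()
    if null_keys > 0:
        raise ValueError(f"Data quality check failed: {null_keys} rows with null account_id")

    # Load: write the validated aggregate to the warehouse staging area.
    daily.write.mode("overwrite").parquet("s3://example-bucket/staging/daily_totals/")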

Requirements
  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • 3+ years of proven experience as a Data Engineer or in a similar role, with a strong ETL and data pipeline background.
  • Proficiency in SQL and scripting with Python for data processing tasks.
  • Proficiency with PySpark and/or Pandas for large-scale data processing.
  • Familiarity with data warehousing concepts and tools (e.g., AWS Redshift, Google BigQuery, Snowflake) and experience optimizing performance for large datasets.
  • Strong experience in database development, data modeling, schema design, and optimization techniques for scalability.
  • Experience writing and maintaining automated test cases for data pipelines (a minimal example follows this list).
  • Experience with Unix/Linux operating systems and shell scripting.
  • Practical knowledge of CI/CD pipelines and how to use them for data job deployment.
  • Solid understanding of SDLC and experience working within development teams.
  • Deep domain knowledge of credit and fintech, and of how data supports related products and processes.
  • Self-motivated, proactive, and able to communicate and collaborate effectively across teams.
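
As a minimal sketch of the automated pipeline testing mentioned above: a Pandas transform paired with a pytest-style test case. The `normalize_amounts` function, column names, and values are hypothetical, chosen only to illustrate the pattern.

    import pandas as pd

    def normalize_amounts(df: pd.DataFrame) -> pd.DataFrame:
        # Drop rows missing the key, then cast amounts to rounded floats.
        out = df.dropna(subset=["account_id"]).copy()
        out["amount"] = out["amount"].astype(float).round(2)
        return out

    def test_normalize_amounts_drops_null_accounts():
        # One valid row, one row with a missing key.
        df = pd.DataFrame({"account_id": ["a1", None],
                           "amount": ["10.50", "3.10"]})
        result = normalize_amounts(df)
        assert list(result["account_id"]) == ["a1"]
        assert result["amount"].tolist() == [10.5]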

About Resilient Co

Resilient Co builds scalable API platforms that enhance both customer-facing and internal services, using technologies such as GraphQL, Kubernetes, and AWS. The company also specializes in SAP BRIM solutions tailored for financial contract accounting and seamless payment services integration.
