AppZen is the leader in autonomous spend-to-pay software. Its patented artificial intelligence accurately and efficiently processes information from thousands of data sources so that organizations can better understand enterprise spend at scale to make smarter business decisions. It seamlessly integrates with existing accounts payable, expense, and card workflows to read, understand, and make real-time decisions based on your unique spend profile, leading to faster processing times and fewer instances of fraud or wasteful spend. Global enterprises, including one-third of the Fortune 500, use AppZen’s invoice, expense, and card transaction solutions to replace manual finance processes and accelerate the speed and agility of their businesses. To learn more, visit us at www.appzen.com.
Responsibilities:
- Design, develop, and implement scalable and efficient data pipelines in the cloud using Python, SQL, and relevant technologies.
- Build and maintain data infrastructure on platforms such as AWS, leveraging services like EMR, Redshift, and others.
- Collaborate with data scientists, analysts, and other stakeholders to understand their requirements and provide the necessary data solutions.
- Develop and optimize ETL (Extract, Transform, Load) processes to ensure the accuracy, completeness, and timeliness of data.
- Create and maintain data models, schemas, and database structures using PostgreSQL and other relevant database technologies.
- Use reporting tools such as Superset (Domo, Tableau, or QuickSight are good to have) to develop visually appealing and insightful data visualizations and dashboards.
- Monitor and optimize the performance and scalability of data systems, ensuring high availability and reliability.
- Implement and maintain data security and privacy measures to protect sensitive information.
- Collaborate with the engineering team to integrate data solutions into existing applications or build new applications as required.
- Stay up to date with industry trends, emerging technologies, and best practices in data engineering.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field, with 8+ years of experience.
- Strong proficiency in Python and SQL for data manipulation, analysis, and scripting.
- Extensive experience with cloud platforms, particularly AWS, and working knowledge of services like EMR, Redshift, and S3.
- Solid understanding of data warehousing concepts and experience with relational databases like PostgreSQL.
- Familiarity with data visualization and reporting tools such as Superset, Domo, or Tableau.
- Experience with building and maintaining data pipelines using tools like Airflow.
- Knowledge of Python web frameworks like Flask or Django for building data-driven applications.
- Strong problem-solving and analytical skills, with a keen attention to detail.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
- Proven ability to work in a fast-paced environment, prioritize tasks, and meet deadlines.
If you are a talented Data Engineer with a passion for leveraging data to drive insights and impact, we would love to hear from you. Join our team and contribute to building robust data infrastructure and pipelines that power our organization's data-driven decision-making process.