What you'll do
Joining as an expert, you will lead the build-out of a new business intelligence data stack, designing and implementing data pipelines, a data warehouse, data models, and analytics solutions, and putting them at the service of internal users. You will lead the technical discussions and decisions that enable us to generate high-quality business and product insights, allowing the company to make better-informed decisions.
We’re in a period of high growth which means we move fast, but you’ll do so alongside smart, driven individuals in a fun, collaborative environment with opportunities for growth. Learning Agile is one of our core values, which means that we are always identifying ways to make things better, and welcome new tools and languages that help us improve the way we work.
Specific responsibilities include:
Working from loosely defined requirements and the business vision to shape and build effective solutions to data-informed decision problems
Building the infrastructure required to enable self-service insights for stakeholders with a wide range of skill sets
Collaborating with other business areas, such as Product and Customer Success, to better understand our most demanding user needs and to deliver business intelligence solutions and tools that exceed their expectations
Implementing analytics solutions that utilize the data pipeline to provide actionable insights into product development, customer acquisition and retention, operational efficiency and other key business performance metrics
Following our agile methodology and contributing to its improvement
Owning and leading initiatives autonomously, from inception through delivery and maintenance
Proactively adding positive energy to our rapidly growing company
Your skills and experience
Strong analytical skills for working with unstructured datasets, and experience building processes that support data transformation, data structures, metadata, dependency management, and workload management
Experience with GCP cloud services: Dataflow, GCS, BigQuery, GKE, etc.
Experience with orchestration tools: Airflow, Dagster, Prefect, etc.
Experience building ‘big data’ pipelines with tools such as Databricks Workflows, dbt, and SQLMesh.
Knowledge of fundamental data engineering concepts such as incremental loading, the medallion architecture, and data modeling.
Knowledge of SQL and NoSQL databases, including Postgres and MongoDB.
Knowledge of data lake and data warehousing solutions: Databricks, GCS, S3, Redshift, BigQuery.
Knowledge of stream-processing systems: Storm, Spark Streaming, etc.
Knowledge of object-oriented and functional scripting languages: Python, Java, Scala, etc.
Good communication skills, including the ability to identify and communicate data-driven insights.
Ability to work with a high degree of autonomy.
At least a B2 (conversational) level of English