Deliver analytics solutions across the entire data stack, partnering with teams across the business to drive growth and enable data-driven decision-making.
Who is Serotonin
Serotonin is the top go-to-market firm for transformative technologies, specializing in marketing, strategy, recruiting, and legal services. With a global team of 90 across 15 countries, Serotonin has supported more than 300 clients in consumer tech, web3 infrastructure, digital assets, venture capital, and AI since its launch in 2020. Delivering end-to-end go-to-market solutions across all major marketing channels - including public relations, growth marketing, on-chain analytics, content, research, social, and design - Serotonin accelerates global innovation. At the core of our business is the Serotonin Platform, a central nucleus for the web3 ecosystem that connects builders and founders with the resources they need to grow.
About the Role
We're looking for a versatile data professional who combines strong analytical and data science capabilities with the engineering skills to build and maintain the infrastructure behind those analyses. This hybrid analyst-engineer role is perfect for someone who loves both discovering insights and building scalable systems to deliver them. You'll work across the entire data stack - from pipeline development to statistical modeling.
Responsibilities
Build and maintain data pipelines that power both ad-hoc analyses and production dashboards
Develop statistical models and data science solutions while also implementing the infrastructure to deploy them
Create self-serve analytics tools and datasets that empower stakeholders across the organization
Design experiments and perform statistical analyses to measure the impact of product and marketing initiatives
Build data models in our warehouse that balance analytical flexibility with performance
Partner directly with product, marketing, and leadership teams to identify opportunities and measure impact
Own the full lifecycle of data products - from initial exploration to production deployment
Requirements: Core Technical Skills
Strong experience with modern data stack tools (e.g., dbt, Airflow/Dagster, Snowflake, BigQuery, Redshift, or similar)
Proven ability to design and manage ETL pipelines and database architectures (see the pipeline sketch after this list)
Advanced SQL skills and high proficiency in Python for both data analysis and engineering
Understanding of data modeling principles (Kimball, Data Vault, or similar)
Experience with cloud data platforms (AWS, GCP, or Azure)
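To illustrate the kind of pipeline work this covers, here is a minimal, hypothetical ETL step in Python. The file name, column names, and warehouse table are assumptions for the sketch only; in practice this logic would run inside an orchestrator such as Airflow or Dagster, with the modeling layer expressed in dbt against a warehouse like Snowflake or BigQuery.

```python
# Minimal illustrative ETL step (hypothetical file, columns, and table names).
import sqlite3

import pandas as pd

def run_daily_load(extract_path: str = "events.csv", db_path: str = "warehouse.db") -> None:
    # Extract: read a raw export (assumed schema: user_id, event, ts, revenue).
    raw = pd.read_csv(extract_path, parse_dates=["ts"])

    # Transform: light cleaning plus a daily aggregate for dashboards.
    raw = raw.dropna(subset=["user_id", "event"])
    daily = (
        raw.assign(day=raw["ts"].dt.date)
           .groupby(["day", "event"], as_index=False)
           .agg(events=("event", "count"), revenue=("revenue", "sum"))
    )

    # Load: append into a local SQLite table standing in for the warehouse.
    with sqlite3.connect(db_path) as conn:
        daily.to_sql("daily_event_summary", conn, if_exists="append", index=False)

if __name__ == "__main__":
    run_daily_load()
```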
Requirements: Analytical & Data Science Expertise
Strong statistical analysis skills with hands-on experience using Python data science stack (pandas, NumPy, scikit-learn)
Experience with A/B testing, causal inference, and experimental design (see the A/B test sketch after this list)
Ability to communicate complex findings to non-technical stakeholders
Track record of using data to influence business strategy
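As a sketch of the experimentation work involved, the snippet below runs a two-sided two-proportion z-test on conversion rates. The conversion counts are hypothetical; real analyses would also cover power calculations, guardrail metrics, and causal-inference methods where randomization isn't possible.

```python
# Minimal illustrative A/B test readout (hypothetical conversion counts).
from math import sqrt

from scipy.stats import norm

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, p_value

# Hypothetical experiment: 1,200 of 20,000 control users converted vs 1,320 of 20,000 treated.
z, p = two_proportion_ztest(1200, 20000, 1320, 20000)
print(f"z = {z:.2f}, p = {p:.4f}")
```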
Requirements: Blockchain & Web3 Experience
Hands-on experience with blockchain data extraction, transformation, and analysis
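For a flavor of this work, here is a minimal extraction sketch using the web3.py client to pull one block's transactions into an analysis-friendly table. The RPC endpoint is a placeholder and the snippet is illustrative only, not a description of our production tooling.

```python
# Minimal illustrative on-chain extraction step (hypothetical RPC endpoint).
import pandas as pd
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.invalid"))  # placeholder node URL

block = w3.eth.get_block("latest", full_transactions=True)
rows = [
    {
        "block": block["number"],
        "tx_hash": tx["hash"].hex(),
        "from": tx["from"],
        "to": tx["to"],
        "value_eth": tx["value"] / 10**18,  # wei -> ETH
    }
    for tx in block["transactions"]
]
transactions = pd.DataFrame(rows)
print(transactions.head())
```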