Causaly is hiring a

DataOps Engineer

Athens, Greece
Full-Time

About us

Founded in 2018, Causaly accelerates how humans acquire knowledge and develop insights in Biomedicine. Our production-grade generative AI platform for research insights and knowledge automation enables thousands of scientists to discover evidence from millions of academic publications, clinical trials, regulatory documents, patents and other data sources… in minutes. 

We work with some of the world's largest biopharma companies and institutions on use cases spanning Drug Discovery, Safety and Competitive Intelligence. You can read more about how we accelerate knowledge acquisition and improve decision making in our blog posts here: Blog - Causaly 

We are backed by top VCs including ICONIQ, Index Ventures, Pentech and Marathon. 

Who we are looking for

We are looking for talented Data Engineers with a passion for DataOps and a demonstrable background in SQL and Python-based automation. You will join our Data & Semantic Technologies team, which delivers the scalable, highly flexible data fabric at the foundation of Causaly's product suite. The team enables and empowers new product developments as well as innovations in AI that create real business value. You will unlock the value of data for our customers by building and operating automated data pipelines, feeding our constantly growing data warehouse and knowledge graph, and evolving our data architectures. 

We are a multi-disciplinary team working in a fast-paced, collaborative environment, and we value honest opinions and open debate. If you have a problem-solving mindset and a hands-on attitude, are keen to design and build innovative solutions that leverage the value of data, are passionate and creative in your work, love to share ideas with your team, and can pick the right tool for the job, then you should become part of our journey! 

What you can expect to work on: 

  • Gather and understand data based on business requirements 
  • Regularly import and transform big data (millions of records) from various formats (e.g. CSV, SQL, JSON) to data stores like BigQuery and Neo4j 
  • Process data further using SQL and/or Python, e.g., to sanitise fields, aggregate records, combine with external data sources 
  • Work with other engineers on highly performant data pipelines and efficient data operations, adhering to the industry’s best practices and technologies for scalability, fault tolerance and reliability 
  • Export data in well-defined target formats and schemata, ensure and validate data output and quality, produce corresponding reports and dashboards 
  • Manage and improve (legacy) data pipelines in the cloud, enabling other engineers to run them efficiently 
  • Innovate on our data warehouse architecture and usage 
  • Work directly with a multitude of technical, product and business stakeholders 
  • Mentor and guide junior members, shape our technology strategy and innovate on our data backbone 
  • Collaborate with the DevOps team to help manage our infrastructure 

Requirements

Minimum Requirements 

  • Significant industry experience working with SQL, automation, ETL, Linux 
  • Proven database skills and experience with traditional RDBMS like MySQL as well as modern systems like BigQuery 
  • Experience with data versioning, data-backup and data-recovery strategies 
  • Solid understanding of modern software-development practices (testing, version control, documentation, etc.) and hands-on coding experience in Python 
  • Experience with cloud computing providers like GCP/AWS 
  • Strong engineering background enabling rapid progression from ideation to proof-of-concept  
  • A product and user-centric mindset 
  • Excellent problem solving, ownership, organizational skills, with high attention to detail and quality 

Preferred Qualifications 

  • Experience with additional data-storage and retrieval technologies, such as Elasticsearch, data warehouses, NoSQL databases, Neo4j 
  • Command-line and Linux scripting skills in production 
  • Have utilised DevOps tools and practices to build and deploy software 
  • Knowledge of Terraform, Kubernetes and/or Docker containers 
  • Programming skills and experience in other languages and runtimes, such as Node.js 

Benefits

  • Competitive Salary
  • Hybrid working (home + office)
  • Annual training budget for professional development (e.g. books, video tutorials)
  • Enhanced sick-leave package
  • Plenty of opportunity to take on more responsibility as we grow
  • Be part of a multinational, diverse and exceptional team to build a transformative knowledge product that has real impact

Be yourself at Causaly... Difference is valued. Everyone belongs.

Diversity. Equity. Inclusion. They are more than words at Causaly. It's how we work together. It's how we build teams. It's how we grow leaders. It's what we nurture and celebrate. It's what helps us innovate. It's what helps us connect with the customers and communities we serve.

We are on a mission to accelerate scientific breakthroughs for ALL humankind and we are proud to be an equal opportunity employer. We welcome applications from all backgrounds and fairly consider qualified candidates without regard to race, ethnic or national origin, gender, gender identity or expression, sexual orientation, disability, neurodiversity, genetics, age, religion or belief, marital/civil partnership status, domestic / family status, veteran status or any other difference.

Apply for this job