Docplanner is hiring a

Senior Platform Engineer / DevOps Engineer, Data Platform Team (Remote in Spain)

Barcelona, Spain
Full-Time

Data Platform Team:

The Data Platform Team aims to provide all teams with a self-service, flexible and scalable platform for their data needs. The platform is easy to access, provides a seamless experience, and allows for the curation, modelling, and reprocessing of data. Data and business/product/tech metrics are owned by those teams and centralised for reuse. This ensures that data is treated as a product and leveraged for maximum business impact.

The team currently consists of 12 people, including data engineers, DWH engineers, and a project manager. We believe that adding two new roles, a Machine Learning Engineer and a Platform Engineer, will bring us even more autonomy and further increase the satisfaction of our internal customers.

What are the challenges in the team?

  • Self-Service Data Management: To meet the growing demand for data-driven insights, we empower our internal teams with self-service access to our data platform. This will allow them to analyse data independently, reducing the burden on our team and freeing us up to focus on innovation and business delivery.
  • Increasing Team Throughput: We are seeking platform engineering expertise in building and maintaining our data infrastructure to help us streamline our processes and improve our throughput. This role is instrumental in enhancing our team's autonomy and ownership, allowing us to operate more efficiently and effectively.
  • Building a Machine Learning Platform: As the demand for ML and AI solutions grows, we are establishing a comprehensive and user-friendly platform to support our data scientists in developing and deploying machine learning models. The ML platform will streamline the entire machine learning lifecycle, from data acquisition and preparation to model training, deployment, and monitoring.

Who will you work closely with?

You will work closely with Product and Analytics Teams to understand their technical requirements and provide them with the necessary tools, services, and infrastructure to support their development efforts. Your insights and expertise will contribute to enhancing the efficiency and effectiveness of their workflows. You will also work side by side with the Platform Core Team to define and apply global infrastructure standards to the Data Platform realm.

How would you be impacting our mission?

  • Be part of the Data Platform Team, understanding its dynamics, providing support, and sharing platform knowledge, while collaborating closely with the platform and analytics teams.
  • Seek ways to remove infrastructure roadblocks and enable the rest of the team to focus on BAU and delivering business value.
  • Enable the team to work autonomously with the data infrastructure and continuously seek improvements.
  • Support initiatives such as CDC (Change Data Capture) data ingestion, building the ML/AI platform, and infrastructure monitoring, to name a few.
  • Implement IaC (Infrastructure as Code) best practices for managing the infrastructure, ensuring consistency, scalability, and rapid adaptability to changing project requirements.
  • Enhance monitoring of the infrastructure and its costs, identifying optimization opportunities and proactively looking for cost-efficient measures.
  • Develop, monitor, and maintain Kubernetes clusters across several continents.
  • Develop and maintain CI/CD pipelines.
  • Ensure that all of our services are built for high availability.

What will help you thrive?

  • Extensive experience with container orchestration platforms such as Kubernetes (must-have).
  • You know why and how to use Terraform and popular CI/CD tools.   
  • You know how to build scalable, secure, and highly available (HA) production environments on AWS.
  • You are not afraid of developing tools or scripts in Bash or Go to automate work.
  • You can communicate in English (both spoken and written - min. B2 level).  
  • Growth mindset: nobody ticks all the boxes above, but willingness to learn is strongly valued here.

Bonus points

  • Previous experience managing data infrastructure (DataOps).
  • Familiarity with big data and ML tooling (Apache Airflow, Kafka, AWS SageMaker, Kubeflow, Apache Spark).
  • Strong Python skills with good coding practices.
  • Database management experience.

Let’s talk money

  • A salary adequate to your experience and skills, between €50,000.00 and €75,000.00. The range is broad so that we can accommodate all levels of experience, and we will show you the career ladder to explain where we see your skills and impact within the company. Your salary will be, now and always, 100% transparent to you;
  • Flexible remuneration and benefits system via Flexoh, which includes: restaurant card, transportation card, kindergarten, and training tax savings;
  • Share options plan after 6 months of working with us.

 

True flexibility and work-life balance

  • Remote or hybrid work model with our hub in Barcelona;
  • Flexible working hours (fully flexible; in most cases you only need to attend a couple of meetings each week);
  • Summer intensive schedule during July and August (work 7 hours, finish earlier);
  • 23 paid holidays, with exchangeable local bank holidays;
  • Additional paid holiday on your birthday or work anniversary (you choose what you want to celebrate).

 

Health comes first 

  • Private healthcare plan with Adeslas for you and subsidized for your family (medical and dental);
  • Access to hundreds of gyms for a symbolic fee, for you and your family, through our partnership with Andjoy;
  • Access to iFeel, a technological platform for mental wellness offering online psychological support and counseling. 

 

Keep growing with us

  • 20% time rule: spend 20% of your working hours on personal development related to your role and collaboration with other teams;
  • Free English and Spanish classes.
