Veeva Systems is a mission-driven organization and pioneer in industry cloud, helping life sciences companies bring therapies to patients faster. As one of the fastest-growing SaaS companies in history, we surpassed $2B in revenue in our last fiscal year with extensive growth potential ahead.
At the heart of Veeva are our values: Do the Right Thing, Customer Success, Employee Success, and Speed. We're not just any public company – we made history in 2021 by becoming a public benefit corporation (PBC), legally bound to balance the interests of customers, employees, society, and investors.
As a Work Anywhere company, we support your flexibility to work from home or in the office, so you can thrive in your ideal environment.
The Role
We are an AI team supporting the entire suite of Link data products (e.g., Link Key People). Agility and quality are our operating principles in developing cutting-edge ML models. Our models are trained on data captured by a massive group of over 2,000 subject-matter experts; they complement the curation pipeline and scale our solutions to different regions, languages, and therapeutic areas. Ultimately, we accelerate clinical trials and equitable care, and we are proud that our work helps patients get their most urgent care sooner.
Your role will center on developing LLM-based agents that specialize in searching and browsing the web and extracting detailed information about Key Opinion Leaders (KOLs) in the healthcare sector. You will craft an end-to-end human-in-the-loop pipeline to sift through a large array of unstructured medical documents, ranging from academic articles to clinical guidelines and meeting notes from therapeutic committees. These agents will perform semantic search and reasoning to provide precise answers to predefined queries about KOL-related data across languages and disciplines. Leveraging AWS infrastructure, you will build, scale, and optimize agents and pipelines for information extraction and question answering, ensuring they are production-ready and robust. You will focus on building highly scalable, efficient systems while collaborating with Data Engineers on seamless data pipelines and with Data Scientists on model refinement. You will own the entire deployment process, ensuring models are integrated into production environments with minimal latency and high performance.
We invite you to work remotely from anywhere in the UK, Spain, or Portugal. However, you must already reside in one of these countries and hold legal work authorization that does not require employer sponsorship.
If you plan to relocate to one of these countries or live nearby, we may still consider your application if you are a superb fit for the role. In that case, please supply an additional document outlining your current or planned location, your visa status, and the reasons you are an excellent fit.
What You'll Do
- Develop and manage ML infrastructures and CI/CD pipelines to support multiple data products
- Build fully automated, scalable, cost-effective, and fault-tolerant solutions in AWS to process billions of records
- Provide engineering mentorship and guidance to data scientists
- Develop LLM-based agents capable of performing function calls and utilizing tools such as browsers for enhanced data interaction and retrieval
- Apply Reinforcement Learning from Human Feedback (RLHF) methods such as Direct Preference Optimization (DPO) and Proximal Policy Optimization (PPO) to train LLMs on human preferences
- Collaborate with data scientists, data engineers, and product/operation teams
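For context on the DPO item above, the per-pair DPO objective is a logistic loss over the margin between the policy's and a frozen reference model's log-probabilities of a preferred versus a dispreferred response. A minimal sketch in pure Python (the log-probability inputs are illustrative numbers, not real model outputs):

```python
# Minimal sketch of the per-pair DPO loss; inputs are illustrative.
import math

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one (chosen, rejected) response pair.

    Arguments are summed log-probabilities of the chosen (w) and
    rejected (l) responses under the policy (pi) and the frozen
    reference model (ref); beta scales the implicit reward margin.
    """
    margin = beta * ((pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l))
    # -log(sigmoid(margin)): shrinks as the policy, relative to the
    # reference, assigns more probability to the chosen response.
    return math.log(1.0 + math.exp(-margin))

# Policy favors the chosen response more than the reference does, so the
# margin is positive and the loss falls below log(2) ~ 0.693.
loss = dpo_loss(pi_logp_w=-10.0, pi_logp_l=-14.0,
                ref_logp_w=-12.0, ref_logp_l=-12.0)
print(round(loss, 3))
```

In practice, libraries such as Hugging Face TRL provide batched, GPU-ready implementations of this loss; the sketch only shows the quantity being optimized.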
Requirements
- Agile mindset
- Proficient in ML operationalization, including CI/CD pipelines and workflow/model management with stacks such as Airflow and MLflow
- Proficient with distributed computing platforms (e.g., Ray, Spark) and with Kubernetes for inference
- Solid understanding of and experience with deep learning frameworks (e.g., PyTorch, JAX)
- Hands-on experience in in-house training and inference of LLMs
- 3+ years of experience as a Machine Learning Engineer or in a related role
- 2+ years of experience in cloud development, ideally in AWS
- Strong analytical skills and data curiosity
- Strong collaboration skills as well as verbal and written communication skills
- Comfortable in start-up environments
- Socially competent team player
- High energy and ambition
Nice to Have
- Experience in the life/health science industry, notably pharma
- Strong theoretical knowledge of Natural Language Processing, Machine Learning, or Reinforcement Learning
- Experience with NoSQL databases
- Familiarity with architectural choices, particularly for ML systems
- Leadership skills and a solid network to help in hiring and growing the team
Perks & Benefits
- Work anywhere
- Personal development budget (2% of your salary, paid in addition to it)
- Veeva charitable giving program
- Fitness reimbursement
- Life insurance + pension fund
#RemoteSpain
Veeva’s headquarters is located in the San Francisco Bay Area with offices in more than 15 countries around the world.
As an equal opportunity employer, Veeva is committed to fostering a culture of inclusion and growing a diverse workforce. Diversity makes us stronger. It comes in many forms. Gender, race, ethnicity, religion, politics, sexual orientation, age, disability and life experience shape us all into unique individuals. We value people for the individuals they are and the contributions they can bring to our teams.
If you need assistance or accommodation due to a disability or special need when applying for a role or in our recruitment process, please contact us at [email protected].