View our much better version of this job spec on our careers page: https://tinyurl.com/engineer-ML
***
THE KEY BITS
- Location: We can currently only accept applications from candidates who are located in, and have long-term working rights in, the UK.
- Flexibility: We have an office hub in the UK as well as no-office hubs in several European countries. We operate a choice-first work approach that lets you work fully remotely from Day One even if you’re near an office hub.
- Salary: We've benchmarked this role at ca. £100,000-£130,000 + equity, at IC5 level in our career framework, for someone based in the UK.
- Interviews: 3 stages totalling around 3 hours over 2-3 weeks.
- Start date: As soon as you can start.
- Reporting to: Valeriy Lapchenok, Engineering Director
HELP US HELP THE WORLD AGREE MORE
Juro has big ambitions: to become the go-to platform for agreeing and managing contracts globally. And we'll need help doing it.
Legal tech is on the rise, with Goldman Sachs estimating that 44% of legal tasks can be automated with generative AI. With the brand we have built and the agility of an early-stage company, we are well placed to capture this opportunity.
THE CHALLENGE
Working alongside a small team of passionate developers to automate the way people agree, you will help define how we use and reuse internal AI/ML solutions and data to release and improve new functionality quickly, enable our partners to build better services on top of Juro, and help us manage third-party integrations and AWS infrastructure.
Your role is to build on our engineering culture at Juro. You will help define, develop and improve Juro's ML/AI capabilities, providing structure and insight across our document base in a safe and contained way.
When you join our AI/ML team, you will:
- Guide our backend AI strategy across data storage, model selection, RAG frameworks, performance and training.
- Develop an interface that helps our application teams exploit the data and systems which you have put in place.
- Develop, maintain and improve new and existing backend integrations with third-party gen AI services.
- Analyse and optimise the performance and efficiency of our AI/ML systems.
- Work in a microservice architecture using Docker and Kubernetes.
WHO WE LOOK FOR
Research shows that men apply to jobs if they meet ~60% of the criteria, but women and those in traditionally underrepresented groups tend to apply only if they check every box. If you think you have what it takes but don't meet every single point, please still get in touch. We'd love to chat and see if you could be a great fit.
We look for people whose approach to work aligns with our values & behaviours. For this role, we particularly value:
- Mentoring: You use your technical expertise to patiently teach others, who come to you for your knowledge.
- Caring: You take responsibility for what you build because you care, and you proactively seek and give feedback to suggest improvements.
- Autonomy: You learn fast and don't need much supervision to work well. When you ask questions, it's to confirm what you're working on before continuing independently.
- Focus on results: You deliver the tasks and projects that you promise, and you don't invent new solutions if an appropriate one already exists.
On top of that, you have been part of a journey where:
- You have extensive hands-on experience with RAG (Retrieval-Augmented Generation) techniques and have successfully delivered RAG-based and agent-based solutions.
- You're proficient with RAG and agent frameworks, e.g. AutoGen, CrewAI, LangChain / LangGraph.
- You have an extensive knowledge base around the conversational patterns and techniques used when communicating with LLMs.
- You have strong proficiency in Python, including experience with TensorFlow and PyTorch for machine learning.
- You're familiar with Hugging Face and have a strong understanding of various model families and their applications.
- You can verify and ensure data quality through effective data cleaning techniques.
- You have experience with error analysis of models and devising strategies for improvement.
- You're dedicated to ongoing research, keeping abreast of the latest AI and ML advancements and integrating new methodologies into existing systems.
Bonus points if you have experience with:
- Deploying models in production environments
- LLM training and hyperparameter tuning
- CUDA for GPU acceleration
- MongoDB
- TypeScript
GOT MORE QUESTIONS?
Check if they're answered in:
- Our extended job description for this role: https://tinyurl.com/engineer-ML
There, you'll find answers on topics such as career progression, inclusion & belonging, the interview process, benefits and more. Or reach out to our talent team ([email protected]) for anything else.