Liquid AI, an MIT spin-off, is a foundation model company headquartered in Boston, Massachusetts. Our mission is to build capable and efficient general-purpose AI systems at every scale.
Our goal at Liquid is to build the most capable AI systems to solve problems at every scale, so that users can build, access, and control their own AI solutions, and so that AI is integrated meaningfully, reliably, and efficiently across enterprises. Long term, Liquid will create and deploy frontier-AI-powered solutions that are available to everyone.
What This Role Is
We're looking for an Applied ML Engineer to customize, implement, and deploy our Liquid Foundation Models (LFMs) for customers. This is a hands-on, technical role focused on bringing our LFMs and the Liquid stack to life through impactful implementation.
You're A Great Fit If
You have hands-on experience optimizing and deploying local LLMs - running models like Llama, Mistral, or other open-source models locally through tools like vLLM, Ollama, or LM Studio.
You're passionate about customizing ML models to solve real customer problems - from fine-tuning foundation models to optimizing them for specific use cases, you know how to make models work for unique requirements.
You have a knack for lightweight ML deployment and can architect solutions that work efficiently in resource-constrained environments - whether that's optimizing inference on CPUs, working with limited memory budgets, or deploying to edge devices.
You have a sharp eye for data quality and know what makes data effective - you can spot ineffective patterns in sample data, help design targeted synthetic datasets, and craft prompts that unlock the full potential of foundation models for specific use cases.
What Sets You Apart
You have customized an existing product for a customer.
You're versatile across deployment scenarios - whether it's containerized cloud deployments, on-premises installations with strict security requirements, or optimized edge inference, you can make models work anywhere.
What You'll Actually Do
Own the complete deployment journey - from model customization to serving infrastructure, ensuring our solutions work flawlessly in varied customer environments.
Deploy AI systems to solve use cases others cannot - implementing solutions that push beyond what base LFMs can deliver and redefine what's possible with our technology.
Work alongside our core engineering team to leverage and enhance our powerful toolkit of Liquid infrastructure.
What You'll Gain
The ability to shape how the world's most influential organizations adopt and deploy LFMs - you'll be hands-on building solutions for customers who are reimagining entire industries.
Ownership of the complete journey of delivering ML solutions that matter - from model customization to deployment architecture to seeing your work drive real customer impact.
If you've read this far and aren't at least slightly intimidated by this, you're either perfect for the role or completely wrong for it.
Only one way to find out.