About Us
Wing is seeking elite talent to join M32 AI (a subsidiary of Wing, backed by top-tier Silicon Valley VCs), dedicated to building agentic AI for traditional service businesses.
Think of it as a startup within a larger company: fast-moving and agile, with corporate stability and zero bureaucracy.
If you’re driven by challenge and eager to make a significant impact in a high-caliber role, this is the opportunity you’ve been waiting for.
Your mission: own and evolve the test automation ecosystem that keeps us shipping delightful, bug-free experiences every week.
This role combines deep manual and UX testing with structured exploratory testing and targeted automation to create a robust quality foundation for our products.
You Will Own
Own the full QA lifecycle for agentic AI products: strategy, design, execution, reporting, and release sign-off.
Design and run test plans covering functional, regression, smoke, exploratory, and usability testing for AI behavior and decision chains.
Validate multi-step decision flows and reasoning to catch logic gaps, guardrail failures, or requirement mismatches.
Perform structured exploratory testing to uncover unexpected behaviors, edge cases, and cascading AI failures.
Build synthetic test scripts for UI elements, APIs, and end-to-end flows to verify functionality.
Test across platforms (web, mobile, integrations) for consistency and performance.
Maintain dashboards tracking test coverage, failures, and quality KPIs for all stakeholders.
Improve test reliability: fix flakiness, optimize parallel runs, and cut execution time.
Partner with Product, Design, and Engineering to refine requirements and set clear go/no-go criteria.
Monitor pre- and post-release quality; use data to enhance AI evaluation and guardrails.
What Great Looks Like
Automated Coverage: Achieve and sustain 90% critical path test coverage within 21 days
Fast Feedback: Keep full regression test execution under 10 minutes to enable near‑instant feedback for engineers
Bug-Free Releases: Ship weekly without major production bugs
Preferred Skills & Experience
Experience testing GenAI or LLM‑driven products, including common failure modes such as hallucinations, unsafe responses, bias, and brittle decision paths.
Exposure to performance and load testing tools and practices for web applications and APIs.
Familiarity with structured exploratory testing approaches and test charters, especially for AI behavior and agent decision‑making.
Prior experience in high‑velocity environments (e.g., startups) where QA acts as an owner of quality rather than a purely executional function.
A preference for automation over repetition, while recognizing the value of focused exploratory testing.
Our Hiring Process
Introductory Call (20 min) - Discuss our culture, expectations, and working style
Asynchronous Task - Build and document a small automated test flow for a sample application, using either a testing framework or a no-code automation tool
Final Interview (45 min) - A live session with our CPO and CTO
That’s all! After the final interview, we extend an offer. We run a short, fast cycle that can be completed in as little as 7 days.
Compensation
$1,500-$2,000 USD per month
What You Get
Competitive salary
Performance‑based bonuses tied to release quality
Software for Upskilling & Productivity
Remote-first culture
Work from anywhere
Paid Time Off
High autonomy, low bureaucracy
Fast-track to leadership for high performers
US HQ Opportunities
Direct access to founding team
High visibility, autonomy and ownership
Optional in‑person hack‑weeks in Hong Kong, India, or London
A clear growth path into Head of QA as the team scales
Access to best‑in‑class tooling
We hire for output instead of pedigree. If your systems never miss a bug, we want you on the team.