Evaluate and validate code across multiple programming languages to enhance AI model performance while contributing to cutting-edge AI development projects.
The future of AI — whether in training or evaluation, classical ML or agentic workflows — starts with high-quality data.
At HumanSignal, we’re building the platform that powers the creation, curation, and evaluation of that data. From fine-tuning foundation models to validating agent behaviors in production, our tools are used by leading AI teams to ensure models are grounded in real-world signal, not noise.
Our open-source product, Label Studio, has become the de facto standard for labeling and evaluating data across modalities — from text and images to time series and agents-in-environments. With over 250,000 users and hundreds of millions of labeled samples, it's the most widely adopted OSS solution for teams building AI systems.
Label Studio Enterprise builds on that traction with the security, collaboration, and scalability features needed to support mission-critical AI pipelines — powering everything from model training datasets to eval test sets to continuous feedback loops.

We started before foundation models were mainstream, and we're doubling down now that AI is eating the world. If you're excited to help leading AI teams build smarter, more accurate systems — we'd love to talk.
HumanSignal is seeking highly qualified software engineering and computer science experts to contribute to cutting-edge AI development projects. This role involves evaluating, annotating, and validating code across multiple programming languages and domains to improve AI model performance in software development and problem-solving.
Up to $110 USD/hour based on qualifications and project complexity
Candidates must hold degrees in computer science, software engineering, or other computationally intensive fields. Acceptable programs include:
Interested candidates should be prepared to:
Flexible, project-based work: set your own schedule
Quality-focused environment with performance feedback
Remote position: work from anywhere
HumanSignal builds a powerful platform that facilitates the creation, curation, and evaluation of high-quality data specifically for AI applications. Serving leading AI teams, our tools, including the open-source Label Studio, help organizations unlock new potential in AI by providing purpose-built datasets that go beyond traditional data scraping. We're on a mission to empower researchers and enterprises to innovate freely with data that's tailored to their unique needs.