Evaluation Scenario Writer - Roma - Mindrift
Description
At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.

What We Do
The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.

About The Role
You will design realistic, structured evaluation scenarios for LLM-based agents: create test cases that simulate human-performed tasks and define gold-standard behavior to compare agent actions against. Each scenario must be clearly defined, well-scored, easy to execute, and reusable. The role calls for a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions.
Responsibilities
- Create structured test cases that simulate complex human workflows.
- Define gold-standard behavior and scoring logic to evaluate agent actions (a sketch of such a scenario follows this list).
- Analyze agent logs, failure modes, and decision paths.
- Work with code repositories and test frameworks to validate scenarios.
- Iterate on prompts, instructions, and test cases to improve clarity and difficulty.
- Ensure scenarios are production-ready, easy to run, and reusable.
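To make "structured test case" and "gold-standard behavior" concrete, here is a minimal sketch of what such a scenario might look like, written as a Python dict in a JSON-like shape. Every field name (id, task, gold_path, scoring) is an assumption invented for this example, not Mindrift's actual schema.

    # Hypothetical scenario definition; all field names are illustrative.
    scenario = {
        "id": "booking-001",
        "task": "Book the cheapest direct flight from Rome to Berlin for next Friday",
        "gold_path": [  # the gold-standard sequence of agent actions
            {"action": "search_flights", "args": {"from": "FCO", "to": "BER", "direct": True}},
            {"action": "sort_results", "args": {"by": "price", "order": "asc"}},
            {"action": "book_flight", "args": {"selection": "first"}},
        ],
        "scoring": {
            "required_actions": ["search_flights", "book_flight"],  # no credit without these
            "order_matters": True,
            "max_score": 1.0,
        },
    }

A structure like this keeps scenarios clearly defined, well-scored, and reusable: the task is human-readable, the gold path is machine-comparable, and the scoring block makes the rubric explicit.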
How To Get Started
Apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring technology benefits everyone.

Requirements
- Bachelor's and/or Master's degree in Computer Science, Software Engineering, Data Science / Data Analytics, Artificial Intelligence / Machine Learning, Computational Linguistics / NLP, Information Systems, or a related field.
- Background in QA, software testing, data analysis, or NLP annotation.
- Good understanding of test design principles (e.g., reproducibility, coverage, edge cases).
- Strong written communication skills in English.
- Comfortable with structured formats like JSON/YAML for scenario descriptions.
- Able to define expected agent behaviors (gold paths) and scoring logic (see the sketch after this list).
- Basic experience with Python and JavaScript.
- Curious and open to working with AI-generated content, agent logs, and prompt-based behavior.
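As a hedged illustration of "gold paths and scoring logic" rather than any project's real rubric, the sketch below scores an agent's action log by in-order matching against a gold path; the matching rule and partial-credit formula are assumptions chosen for simplicity.

    def score_against_gold(agent_actions, gold_path, max_score=1.0):
        """Toy scoring logic: award the fraction of gold-path actions the
        agent reproduced in the expected order. Illustrative only."""
        matched = 0  # count of gold actions matched so far, in order
        for action in agent_actions:
            if matched < len(gold_path) and action == gold_path[matched]:
                matched += 1
        return max_score * matched / len(gold_path)

    # The agent searched and booked but skipped the price sort: partial credit.
    gold = ["search_flights", "sort_results", "book_flight"]
    print(score_against_gold(["search_flights", "book_flight"], gold))  # ~0.33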
Nice to Have
- Experience in writing manual or automated test cases.
- Familiarity with LLM capabilities and typical failure modes.
- Understanding of scoring metrics (precision, recall, coverage, reward functions); a small worked example follows this list.
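To ground the metric names, here is a deliberately simplified example that treats the gold path and the agent log as sets of action names; ignoring ordering and arguments is an assumption a real scoring rubric may well not make.

    def precision_recall(agent_actions, gold_actions):
        """Toy set-based metrics over action names (ordering ignored)."""
        agent, gold = set(agent_actions), set(gold_actions)
        hits = agent & gold
        precision = len(hits) / len(agent) if agent else 0.0  # share of agent actions that were correct
        recall = len(hits) / len(gold) if gold else 0.0       # share of gold actions the agent performed
        return precision, recall

    p, r = precision_recall(
        ["search_flights", "open_ads", "book_flight"],      # agent log
        ["search_flights", "sort_results", "book_flight"],  # gold path
    )
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67

Under this toy definition, coverage of the gold path coincides with recall; reward functions generalize the idea by assigning graded credit per step instead of a binary hit or miss.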
Benefits
- Get paid for your expertise, with rates up to $30/hour depending on skills, experience, and project needs.
- Flexible, remote, freelance project that fits around professional or academic commitments.
- Participate in an advanced AI project and gain valuable experience to enhance your portfolio.
- Influence how future AI models understand and communicate in your field of expertise.
Seniority level: Entry level
Employment type: Part-time
Job function: Other
Industries: IT Services and IT Consulting