Full-Time AI Evaluation – Safety Specialist
Mercor is hiring a remote Full-Time AI Evaluation – Safety Specialist. The career level for this opening is Experienced, and applications are accepted remotely from candidates based in the USA, UK, and Canada. Read the complete job description before applying.
This job was posted 2 months ago and is likely no longer active. We encourage you to explore more recent opportunities on our site, though you may still try the 'Apply Now' link below.
Mercor
Job Title
AI Evaluation – Safety Specialist
Posted
Job Type
Full-Time
Career Level
Experienced
Locations Accepted
USA, UK, Canada
Salary
$47 – $90 per hour
Job Details
At Mercor, we believe the foundation of AI safety is high-quality human data. Models can’t evaluate themselves — they need humans who can apply structured judgment to complex, nuanced outputs.
We’re building a flexible pod of Safety specialists: contributors from both technical and non-technical backgrounds who will serve as expert data annotators. This pod will annotate and evaluate AI behaviors to ensure the systems are safe.
No prior annotation experience is required — instead, we’re looking for people with the ability to make careful, consistent decisions in ambiguous situations.
This role may include reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources.
Qualifications: - You bring experience in model evaluation, structured annotation, or applied research.
- You are skilled at spotting biases, inconsistencies, or subtle unsafe behaviors that automated systems may miss.
- You can explain and defend your reasoning with clarity.
- You thrive in a fast-moving, experimental environment where evaluation methods evolve quickly.
- Examples of past titles: Machine Learning Research Assistant, AI Evaluator, Data Scientist, Applied Scientist, Research Engineer, AI Safety Fellow, Annotation Specialist, Data Labeling Analyst, AI Ethics Researcher.
Responsibilities: - Produce high-quality human data by annotating AI outputs against safety criteria (e.g., bias, misinformation, disallowed content, unsafe reasoning).
- Apply harm taxonomies and guidelines consistently, even when tasks are ambiguous.
- Document your reasoning to improve guidelines.
- Collaborate to provide the human data that powers AI safety research, model improvements, and risk audits.
What you'll gain: - Work at the frontier of AI safety, providing the human data that shapes how advanced systems behave.
- Gain experience in a rapidly growing field with direct impact on how labs deploy frontier AI responsibly.
- Be part of a team committed to making AI systems safer, more trustworthy, and aligned with human values.
FAQs
What is the last date for applying to the job?
The deadline to apply for the Full-Time AI Evaluation – Safety Specialist role at Mercor is 30 October 2025. We consider jobs older than one month to have expired.
Which countries are accepted for this remote job?
This job accepts applicants based in the USA, UK, and Canada.