Like any genAI model, Google Gemini can produce inaccurate responses, but in this case it might be because the testers don’t have the expertise to fact-check them.
According to TechCrunch, the firm Google hired to improve Gemini’s accuracy is now making its testers evaluate responses even when they lack the relevant “domain knowledge.”
The report raises questions about the rigor and standards Google says it applies to testing Gemini for accuracy. In the “Building responsibly” section of the Gemini 2.0 announcement, Google said it is “working with trusted testers and external experts and performing extensive risk assessments and safety and assurance evaluations.” The announcement focuses, reasonably, on evaluating responses for sensitive and harmful content, but pays far less attention to responses that aren’t necessarily dangerous, just inaccurate.
Google seems to sidestep the hallucination and error problem by simply appending a disclaimer that “Gemini can make mistakes, so double-check it,” effectively absolving itself of any responsibility. But that doesn’t account for the humans doing the work behind the scenes.
Previously, GlobalLogic, a subsidiary of Hitachi, instructed its prompt engineers and analysts to skip any Gemini response they didn’t fully understand. “If you do not have critical expertise (e.g. coding, math) to rate this prompt, please skip this task,” said the guidelines viewed by the outlet.
But last week, GlobalLogic changed its instructions: testers “should not skip prompts that require specialized domain knowledge” and should instead “rate the parts of the prompt you understand,” noting in their analysis that they lack the required expertise. Expertise, in other words, is no longer treated as a prerequisite for this work.
According to TechCrunch, contractors can now skip only those prompts that are “completely missing information” or that contain sensitive content requiring a consent form.