Google Bard AI fact checker gives damning statement on how the AI is trained

Sayem Ahmed

In an interview, a fact checker working on Google's Bard AI has issued a damning claim: workers are not being given enough time to verify the claims that the AI is making.

While Google admits that its Bard generative AI lags behind contemporaries like ChatGPT and Bing, the company has been taking time to improve the tool. To make Bard's answers more accurate, Google has hired crowdsourced workers to verify the accuracy of the AI's responses. This should help train the AI and improve the quality of its output.

Language models require this kind of training to ensure that the information they provide is correct, something all AI companies have struggled with over the past year. Ed Stackhouse, who works for contractor Appen, was hired to help improve Bard's responses. However, in an interview with The Register, he claims that workers might not be given the time they need to verify all of Bard's outputs.

Employees need to read the information and comment on the quality of the text, a process that can take up to "15 minutes" per response. Stackhouse claims that workers are given just two minutes to verify the information.

“You would have to check that a business was started at such and such date, that it manufactured such and such project, that the CEO is such and such,” Stackhouse said. With the number of queries that need to be checked, two minutes is simply not enough time to thoroughly verify the claims made by Bard.

Inaccurate information could harm AI in the long term


This is a dangerous precedent for Google to set, and inaccurate information could be especially harmful when users are searching for answers about medication, politics, or legal topics. On a query asking for the side effects of a particular prescription, Stackhouse claims, “I would have to go through and verify each one. What if I get one wrong?”

This is an incredibly risky move for Google, and given the rapid proliferation of AI, it could lead to widespread misinformation. “The biggest danger is that they can mislead and sound so good that people will be convinced that AI is correct,” Stackhouse said.

According to The Register, Appen, the contractor tasked with verifying these claims, penalizes workers if they do not complete their allotted number of tasks in a given timeframe. Given that workers are allegedly given only two minutes to verify each response, this has caused a stir internally. Six workers, including Stackhouse, have been fired for speaking out against the potentially harmful practice.

“It’s their product. If they want a flawed product, that’s on them,” Stackhouse said of Google’s involvement with Appen.

A Google spokesperson told The Register: “As we’ve shared, Appen is responsible for the working conditions of their employees – including pay, benefits, employment changes, and the tasks they’re assigned. We, of course, respect the right of these workers to join a union or participate in organizing activity, but it’s a matter between the workers and their employer, Appen.”
