Liz Reid, the Head of Google Search, has admitted that the company’s search engine returned some “odd, inaccurate or unhelpful AI Overviews” after the feature rolled out to everyone in the US. In a blog post, the executive explained Google’s more peculiar AI-generated responses and announced that the company has implemented safeguards to help the new feature return more accurate and less meme-worthy results.

Reid defended Google and pointed out that some of the more egregious AI Overview responses going around, such as claims that it’s safe to leave dogs in cars, are fake. The viral screenshot showing the answer to “How many rocks should I eat?” is real, but she said that Google came up with an answer because a website had published satirical content on the topic. “Prior to these screenshots going viral, practically no one asked Google that question,” she explained, so the company’s AI linked to that website.

The Google VP also confirmed that AI Overview told people to use glue to get cheese to stick to pizza based on content taken from a forum. She said forums typically provide “authentic, first-hand information,” but can also lead to “less-than-helpful advice.” The executive didn’t mention the other viral AI Overview answers going around, but as The Washington Post reports, the technology also told users that Barack Obama was Muslim and that people should drink plenty of urine to help them pass a kidney stone.

Reid said the company tested the feature extensively before launch, but “there’s nothing quite like having millions of people using the feature with many novel searches.” By looking at examples of its AI’s responses over the past couple of weeks, Google was apparently able to identify patterns in which the technology got things wrong. It has since put protections in place based on those observations, starting by tweaking its AI to better detect humorous and satirical content. It has also updated its systems to limit the inclusion of user-generated content in Overviews, such as social media and forum posts, which could give people misleading or even harmful advice. In addition, it has “added triggering restrictions for queries where AI Overviews were not proving to be as helpful” and has stopped showing AI-generated replies for certain health topics.
