Google Snippet Controversy: AI-Generated Claims on Glass Eating Spark Concerns and Debate

Earlier this week, Google snippets for two different search queries claimed that eating glass offers health benefits. The highly questionable information, discovered by the fact-checking organization Full Fact, reportedly came from a website called Emergent Mind, which regularly publishes AI-generated content.

Full Fact says it deliberately searched Google for health benefits associated with eating glass. One of the resulting snippets suggested that glass could “aid in weight loss” because it is non-nutritious and calorie-free, so one could “enjoy the crunchy texture of glass” without worrying about gaining weight.

The other snippet described glass as “a great source of silicon,” claiming that the trace element plays a vital role in bone health, connective tissue, and skin, and improves artery elasticity to help prevent conditions such as heart disease.

Google assesses snippets primarily through automated means, and the two dubious snippets have reportedly since been removed. In its support documentation, Google explains that “snippets are displayed when our systems determine that users can easily find what they are looking for through this format,” with the content pulled from web search results.

According to Google, its automated systems determine whether a page provides a good snippet for a specific search query. The company emphasizes, however, that snippets violating its featured-snippet guidelines, including medical content that contradicts generally accepted consensus or expert opinion, will be removed.

Typically, Google’s automated systems identify and suppress snippets that violate these guidelines. Given the sheer scale of search, however, Google also relies on user reports, manually reviewing reported snippets and removing them if necessary.

The response originated from ChatGPT

Matt Mazur, the founder of Emergent Mind, told Full Fact that the questionable passages came from an early version of ChatGPT, which in that instance had been explicitly prompted to generate misinformation. The complete ChatGPT response was captured in a screenshot shared on Reddit nearly a year ago.

It is not uncommon for AI tools to generate false information even when users expect accurate answers, a behavior known as hallucination. As recently as August, AI experts cautioned that hallucinations may never be entirely eliminated. This latest example further highlights how current AI trends undermine users’ trust in the authenticity of online information.

READ MORE: OpenAI Shakeup: CEO & President Exit Sparking AI Development and Security Concerns
