A new study by The Guardian has revealed that ChatGPT Search, OpenAI's AI-powered search tool, can be tricked into producing false or overly positive summaries. Researchers demonstrated that by hiding text in web pages, they could make the AI ignore negative reviews or even generate harmful code.

This tool is meant to make browsing easier by summarizing content, like product reviews. However, it is vulnerable to hidden text attacks, a common weakness in large language models. Although this issue has been studied before, this is the first time such manipulation has been proven on a live AI search tool.

OpenAI has not commented on this specific case but mentioned that it uses measures to block malicious websites and is working on improving its security. Experts point out that other companies, like Google, which have more experience in search technology, have stronger protections against similar threats.