Artificial intelligence tools, such as GPT models and Perplexity AI, have revolutionized the way users search for information online. However, a recent study has revealed an alarming security issue: these technologies are directing users to phishing websites instead of legitimate login pages. According to research conducted by Netcraft, when querying these tools about official URLs, more than 34% of the responses pointed to domains that did not belong to the corresponding brands.
A problem that is not stopping from growing
Researchers conducted tests using GPT-4.1 models, assigning tasks such as finding the website to access accounts for 50 different brands in sectors like finance, retail, technology, and utilities. Out of the 131 unique URLs obtained, 64 belonged to the correct brands, while 28 were inactive or unregistered, and 5 were from legitimate but unrelated companies. This finding underscores a critical risk in this era of AI-powered searches.
A real case that illustrates this vulnerability is that of Wells Fargo, where AI recommended a fraudulent Google site as the top result when requesting the login URL, relegating the legitimate address to a secondary position. This type of error could be exploited by cybercriminals, who are modifying their strategies to take advantage of these weaknesses, as demonstrated by the recent identification of a plot operation that used a fake API to impersonate legitimate interfaces.
The impact of these vulnerabilities is significant, especially for smaller brands and local banks, which, due to their lower presence in the training data of language models, are more susceptible to misinformation generated by AI. This increases their exposure to phishing attempts that can be financially devastating. As search engines adopt these models, a concerning question arises about the security and trust in the results generated by artificial intelligence.