On June 4, Asana identified a bug in its Model Context Protocol (MCP) server and took the server offline to investigate. While the incident was not the result of an external attack, the bug could have exposed data belonging to Asana MCP users to users in other accounts.
According to Asana’s disclosure, the bug “could have potentially exposed certain information from your Asana domain to other Asana MCP users.” Specifically, users accessing Asana through the MCP interface (typically via LLM-powered chat tools) may have been able to view data from other organizations, though exposure was limited to the “projects, teams, tasks, and other Asana objects” within the scope of the MCP user’s permissions.
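Asana has not published the root cause, but the failure mode described is a classic tenant-isolation gap: an authorization check that validates a user's permissions without also confirming the requested object belongs to that user's own organization. A minimal sketch of the missing check, with entirely hypothetical names (`McpRequest`, `is_authorized` are illustrations, not Asana's code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class McpRequest:
    user_id: str        # authenticated MCP user
    org_id: str         # organization the MCP user belongs to
    object_org_id: str  # organization that owns the requested Asana object

def is_authorized(req: McpRequest) -> bool:
    """Tenant-isolation check: deny any request whose target object lives
    outside the requester's own organization. A server that enforces
    object-level permissions but skips this org comparison can leak data
    across accounts in exactly the way described above."""
    return req.object_org_id == req.org_id

# A request for an object owned by a different organization must be rejected.
cross_tenant = McpRequest(user_id="u1", org_id="acme", object_org_id="globex")
assert not is_authorized(cross_tenant)
```

The point of the sketch is that per-object permission checks alone are not enough; every integration layer in front of a multi-tenant store needs its own explicit tenant boundary.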
There is no indication that attackers exploited the bug or that other users actually viewed the exposed information. Asana emphasized: “This was not a result of a hack or malicious activity on our systems.”
Asana responded quickly upon discovery of the bug:
“As soon as the vulnerability was discovered, our teams immediately took the MCP server down and resolved the issue in our code,” the company wrote in its email to customers.
Customers can request logs and metadata associated with their MCP users to determine whether cross-account data exposure may have occurred. Asana advises organizations to “review any information you may have accessed through the MCP server in recent weeks and immediately delete any data that does not belong to your organization.”
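For teams performing that review, the core task is to scan the exported access records for any entry where an MCP user touched an object outside its own organization. A minimal sketch, assuming a hypothetical log format (Asana's actual export schema is not described in the disclosure):

```python
# Hypothetical access-log entries; field names are illustrative only.
access_log = [
    {"actor": "u1", "object_org": "acme"},
    {"actor": "u1", "object_org": "globex"},  # foreign-org access: flag for review
]

def flag_cross_org(entries, own_org):
    """Return entries where an MCP user accessed an object owned by a
    different organization, i.e., potential cross-account exposure."""
    return [e for e in entries if e["object_org"] != own_org]

suspicious = flag_cross_org(access_log, "acme")
```

Any flagged entries would correspond to data Asana advises deleting immediately.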
Asana reports that the MCP server will be reinstated “in the coming days,” but reconnection will be manual. “We want to ensure your team is aware of the issue we experienced, and that you have full control over when your Asana instance reconnects to the MCP server.”
The company also confirmed that a formal post-mortem report is underway and will be available upon request when completed.
This incident highlights key lessons for any organization integrating LLMs into sensitive workflows:

- Enforce tenant isolation at every integration boundary, not just in the core product; a permission model that holds in the main application can still leak through a new interface layer.
- Log LLM and MCP access in enough detail that customers can reconstruct who saw what after an incident.
- Give customers control over remediation, including when and how compromised integrations are reconnected.
Asana’s transparency in handling the incident and proactive communication are commendable, but the episode underscores the risks inherent in LLM system design, especially when integrated with enterprise data platforms.