By now we’re all familiar with the capabilities of generative AI for creating images. For some tasks, like casting an existing image in a recognizable art style, it works well. Push much beyond that, however, and it runs into limitations: complex prompts often don’t return exactly what you imagined, and iterating on a failed prompt can quickly become time-consuming.
In an attempt to make image generation more reliable at scale, ComfyUI provides a visual, node-based workflow builder that gives users fine-grained control over which properties appear, or don’t appear, in the resulting image. ComfyUI instances configured for anonymous access allow anyone on the internet to see those workflows and the images they generate. In our research, we found at least 60 ComfyUI servers exposed to the internet and leaking data. You can probably guess one thing they were being used to generate… but there are also some surprises.
What is ComfyUI?
ComfyUI is a relatively new technology, first released at the beginning of 2023. Riding the wave of interest in generative AI, it continued to see community adoption through 2024, earning 77k stars on GitHub and garnering support from mainstream corporate software like NVIDIA RTX Remix.
ComfyUI provides a visual, node-based editor for defining the operations in a workflow, so the interface includes lists of the models, nodes, and workflows the user has created. In an example workflow like the one pictured below, a user might select different models to perform different steps in generating an image. They can also define both positive and negative prompts to steer the output to their liking. When this interface is exposed, it gives anonymous users the ability to see the models, nodes, stored prompts, and workflows, and even to generate their own images with the host’s computing resources.
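Under the hood, each workflow is a JSON graph of nodes that the server executes. The truncated excerpt below illustrates the positive/negative prompt mechanics in ComfyUI’s API format; the node IDs, model name, and prompt text are our own illustrative values, and a complete workflow would also wire in latent, decode, and save nodes.

```python
# Truncated excerpt of a ComfyUI workflow in the API's JSON graph format.
# Node IDs, model name, and prompts are illustrative; a complete workflow
# also needs EmptyLatentImage, VAEDecode, and SaveImage nodes wired in.
workflow_excerpt = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "6": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"clip": ["4", 1], "text": "product photo, studio lighting"}},
    "7": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"clip": ["4", 1], "text": "blurry, watermark, extra fingers"}},
    "3": {"class_type": "KSampler",
          "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                     "latent_image": ["5", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
}
```

On an exposed instance, anyone who can reach the UI can also POST a graph like this to the server’s /prompt endpoint, which is what generating images with the host’s computing resources means in practice.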

Resources for threat intelligence teams
Threat intelligence teams looking to assess their own exposure to open ComfyUI instances can search Shodan for title:"ComfyUI", which returns around 2,800 sites. The vast majority of these are dead ends: approximately 800 return login pages, which can be removed by updating the search to title:"ComfyUI" -title:"Login". Aside from the risks of data leakage, those authenticated instances may still be relevant for determining whether a vendor is using ComfyUI in some capacity.
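For teams that want to script this triage, the same queries can be run through Shodan’s official Python library (a minimal sketch, assuming you have an API key with search access):

```python
# Minimal sketch: enumerate candidate ComfyUI hosts via the Shodan API.
# Assumes the official "shodan" package and an API key with search access.
import shodan

api = shodan.Shodan("YOUR_API_KEY")

# Same query as above: ComfyUI title pages, minus obvious login pages.
results = api.search('title:"ComfyUI" -title:"Login"')
print(f"Candidate hosts: {results['total']}")

for match in results["matches"]:
    print(match["ip_str"], match["port"])
```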
Of the remaining 2,000 sites, many are honeypots, require authentication, or are otherwise inaccessible. Ultimately, we found 93 IP addresses allowing unauthenticated access to ComfyUI instances. Of those, 28 timed out while attempting to load the UI and 5 displayed evidence of being honeypots after loading. That left us with 60 unauthenticated ComfyUI servers leaking data and/or available for abuse.
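One way to separate live instances from dead hosts and crude honeypots is to request ComfyUI’s /system_stats endpoint, which an unauthenticated instance answers with JSON describing the host and its GPUs. The sketch below is illustrative; the port, timeout, and honeypot heuristics are assumptions you should tune:

```python
# Sketch: check whether a host is a live, unauthenticated ComfyUI instance
# by querying /system_stats, which ComfyUI serves by default on port 8188.
import requests

def check_comfyui(host: str, port: int = 8188, timeout: float = 10.0):
    try:
        resp = requests.get(f"http://{host}:{port}/system_stats", timeout=timeout)
        resp.raise_for_status()
        stats = resp.json()
    except (requests.RequestException, ValueError):
        return None  # timed out, refused, or returned non-JSON

    # A live instance reports its OS/Python details plus its GPUs,
    # including each device's name and total/free VRAM in bytes.
    for device in stats.get("devices", []):
        print(host, device.get("name"), device.get("vram_total"))
    return stats
```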
The distribution of IP addresses in the Shodan search reveals a concentration in China, which both is and is not misleading. Most of the ComfyUI IP addresses in China are honeypots, but leaking instances are disproportionately hosted in China or use Chinese in their UI settings or file names. These instances are also more likely to be operated by small AI/ML vendors rather than individuals, and thus pose some supply chain risk.

Risks of exposing ComfyUI
Expanded attack surface
Exposing additional services, even with authentication enabled, increases an organization’s attack surface. In examining the IP addresses where ComfyUI was being hosted, we found they often hosted additional services related to the AI generation pipeline. In one example, ComfyUI generated pictures for a clothing retailer on one port while a login page for a DeepSeek deployment, somehow related to an RF chip manufacturer, was exposed on another.
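Shodan’s host lookup makes these co-located services easy to spot once you have an IP of interest (a sketch reusing the API key from the earlier example; the IP shown is a documentation placeholder):

```python
# Sketch: list the other services exposed on the same IP as a ComfyUI
# instance, using Shodan's host lookup. The IP below is a placeholder.
import shodan

api = shodan.Shodan("YOUR_API_KEY")
host = api.host("203.0.113.10")

for service in host["data"]:
    # e.g. ComfyUI on 8188 alongside a corporate site or other AI tooling
    banner_title = (service.get("http") or {}).get("title")
    print(service["port"], service.get("product"), banner_title)
```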


Supply chain
The companies operating the ComfyUI workflows often hosted their public website and other business applications on the same IPs. In reviewing these, it was clear that these were not hobbyist instances, but attempts to offer AI image generation as a commercial service. The risk of exposed ComfyUI instances most commonly arises in the supply chain, as the software permits enterprising individuals to quickly set up image generation pipelines that can then be offered as SaaS services to others. Instances sharing their hardware metadata showed some impressive specs, like an NVIDIA A100-SXM4-80GB GPU with 1 TB of RAM, indicating either a very rich individual, a repurposed cryptomining rig, or a level of investment commensurate with a small business.
Prompt leaks
ComfyUI embeds image generation metadata in the PNGs it creates. If those prompts contain brand keywords or other internal information, they can be leaked when the images are shared. In one case, the inputs included text prompts and base64-encoded images of real women, both of which could be recovered from the metadata of the generated PNGs. ComfyUI-generated images moving through the AI supply chain can therefore carry provenance information of which corporate consumers might not be aware. Depending on what that prompt data is, it might lead us on to our next risk.
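Recovering that metadata requires nothing more than an image library. ComfyUI writes the prompt graph (and usually the full workflow) into the PNG’s text chunks, as in this minimal sketch assuming a locally saved output image:

```python
# Sketch: recover the generation prompt that ComfyUI embeds in its PNGs.
# Assumes Pillow is installed and "output.png" is a ComfyUI-generated image.
import json
from PIL import Image

img = Image.open("output.png")

# ComfyUI stores the prompt graph (and usually the full workflow) as JSON
# in the PNG's text chunks under the keys "prompt" and "workflow".
raw = img.text.get("prompt")
if raw:
    graph = json.loads(raw)
    for node in graph.values():
        # Human-readable prompts live in CLIPTextEncode nodes' "text" inputs.
        if node.get("class_type") == "CLIPTextEncode":
            print(node["inputs"].get("text"))
```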

Reputational damage
Even if you are not generating pornography with ComfyUI, other end users of a shared AI image generation vendor might be. If the vendor’s ComfyUI instance is exposed, you might wind up in the uncomfortable position of having your brand associated with that content. The content currently being generated on such multi-tenant ComfyUI installations ranges from the innocuous to the extremely unpleasant.
In our survey, we saw four IP addresses generating pornography, usually in the hentai/anime style. Four IP addresses may not seem like very many, but each of those systems was outputting a consistent stream of pornography, resulting in a massive amount of content. On those four IPs we saw dozens of different AI models dedicated to specific explicit scenarios. The existence of such models is not a secret: they can be found even on industry-standard sites like Hugging Face (example and warning: NSFW), though it can be a little disconcerting to see how much development effort has gone into creating resources for mass-producing hyper-specific pornography. Why someone would want to produce more pornography than they can possibly consume would require a more psychoanalytical approach than we take in our research, so for now we will leave it as a risk with which you don’t want your business associated.

Communications and deepfakes
Somewhat to our surprise, none of the instances we surveyed were creating pornographic deepfakes, at least not at the time we observed them. However, one instance was creating realistic videos of a female Chinese spokesperson in a military uniform. The project used separate workflows to animate her body and face, based on models and reference videos, so that her movements matched the words she was speaking. At first glance, this project appeared to be creating deepfakes of a Chinese military spokesperson, which would be ripe for abuse, but after translating two of the videos we learned they were announcements related to university events. Still, the ability of anonymous users to edit the prompts could create unintended effects for the consumers of this media.


Conclusion
In the case of data leaking from AI image generation software, we can’t ignore that the risks go beyond the exposure of strictly sensitive data like PII and credentials. The content being created by genAI infrastructure configured for public viewing can erode our perception of, and trust in, the vendor operating it. Exposed ComfyUI interfaces reveal those images, and those images can in turn reveal the prompts used to generate them. They might also reveal other leaks in a vendor’s attack surface, linking the ComfyUI image generation to vendors that appear to offer unrelated services.
Needless to say, if you are using ComfyUI, ensure that authentication is enabled. That risk should be easily treatable. The other risks apparent from surveying these exposures may not be so easy. In several cases, the image generation services were hosted on the same IP addresses as seemingly unrelated businesses, crossing over between manufacturing, consulting, and consumer products. Boutique AI/ML vendors with poor separation of duties between multiple business ventures demand additional diligence from vendor risk management teams, but that is a risk that surveying exposed ComfyUI instances can help highlight.