
Strengths
- Node-based workflow to visually build complex generation processes
- Completely free and open source, runs locally
- Supports all Stable Diffusion models and LoRAs
- Workflows can be saved and shared
- The community provides a large number of ready-made workflows
Best for
- Building complex image generation pipelines (text-to-image + inpainting)
- Automating batch image generation
- Professional AI image creation workflows
- Learning the underlying principles of Stable Diffusion
- Developing custom AI image processing pipelines
Installation and basic use
ComfyUI requires some technical background, but it is extremely powerful once installed.
Install ComfyUI
```shell
# Prerequisites: Python 3.10+ and an NVIDIA GPU (recommended)

# 1. Clone the repository
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI

# 2. Install dependencies
pip install -r requirements.txt

# 3. Download a model (place it in models/checkpoints/)
#    Recommended: SDXL Base 1.0 or SD 1.5

# 4. Start the server
python main.py

# 5. Open http://127.0.0.1:8188 in your browser
```
Open the browser and you will see the node editing interface.
A basic text-to-image workflow is loaded by default;
you can edit the prompt directly and generate images right away.
The ComfyUI Manager extension is recommended: it installs custom nodes and workflows with one click.
Understanding the node workflow
Nodes of the basic text-to-image workflow:

1. Load Checkpoint (loads the model)
   ↓
2. CLIP Text Encode (positive prompt)
3. CLIP Text Encode (negative prompt)
   ↓
4. KSampler (the sampler; controls the generation process)
   ↓
5. VAE Decode (decodes the latent into an image)
   ↓
6. Save Image

Each node has adjustable parameters, and data flows between nodes through their connections.
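The same graph can also be expressed as data: ComfyUI workflows exported in API format are JSON objects keyed by node id, where each connection is a `[source_node_id, output_index]` pair. The sketch below is illustrative only — node ids, the model filename, prompts, and parameter values are placeholder assumptions, and a text-to-image graph additionally needs an Empty Latent Image node as the KSampler's starting latent:

```python
# A sketch of the basic text-to-image graph in ComfyUI's API JSON format.
# Node ids, the checkpoint filename, and parameter values are illustrative.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"text": "a mountain lake at sunrise", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",  # blank starting latent
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}

# Each connection is a [source_node_id, output_index] pair: KSampler's
# "model" input comes from output 0 of node "1" (the checkpoint loader).
```

Reading a workflow in this form makes it clear why nodes are easy to insert: adding a feature just means adding an entry and rewiring a few `[id, index]` pairs.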
Once you understand the node graph,
new functionality can be inserted between any two nodes,
such as ControlNet, LoRA, or upscaling nodes.
You can also download ready-made workflow JSON files from the community and drag them straight into ComfyUI, with no need to build anything from scratch.
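Workflow JSON (in API format) can also be queued without the browser: the local ComfyUI server accepts POST requests on its `/prompt` endpoint, which is how batch generation is usually scripted. A minimal standard-library sketch, assuming a default server at 127.0.0.1:8188 and a placeholder workflow:

```python
import json
import urllib.request

def build_queue_request(workflow: dict,
                        server: str = "http://127.0.0.1:8188") -> urllib.request.Request:
    """Build an HTTP request that queues a workflow on a local ComfyUI server."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"{server}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Placeholder workflow; in practice, load a community JSON file exported
# in API format: workflow = json.load(open("workflow_api.json"))
workflow = {"1": {"class_type": "CheckpointLoaderSimple",
                  "inputs": {"ckpt_name": "model.safetensors"}}}
req = build_queue_request(workflow)

# Sending requires a running server, so it is left commented out here:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))  # the response includes an id for tracking the job
```

Looping over seeds or prompts and queuing one request per variation is a simple way to automate batch generation.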
Advanced workflows
The power of ComfyUI lies in its ability to build arbitrarily complex workflows.
Image-to-image (img2img) workflow
Starting from the basic workflow, add:

1. A Load Image node (loads the reference image)
2. A VAE Encode node (encodes the image into latent space)
3. Connect its output to the latent_image input of KSampler
4. Adjust the denoise parameter (0.5-0.8 preserves the original image's structure)

This gives you an image-to-image workflow: you can change the style while keeping the original composition.
The img2img workflow can convert any picture into a specified style.
The lower the denoise value, the closer the result stays to the original image; the higher it is, the more freedom the model has.
Img2img is ComfyUI's most commonly used advanced feature, well suited to style transfer and image editing.
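One way to build intuition for denoise: it roughly scales how much of the diffusion schedule is applied to the encoded source image, so lower values leave more of the original intact. The sketch below is a simplification for intuition only — the exact step selection varies by sampler and scheduler:

```python
def effective_steps(total_steps: int, denoise: float) -> int:
    """Rough intuition for img2img: denoise scales the fraction of the
    diffusion schedule applied to the encoded source image.
    (Actual behavior depends on the sampler and scheduler.)"""
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be in [0, 1]")
    return round(total_steps * denoise)

# With 20 scheduled steps: denoise=1.0 regenerates fully (pure text-to-image
# behavior), while denoise around 0.5-0.8 reworks only part of the schedule,
# preserving the source composition.
for d in (0.3, 0.6, 1.0):
    print(d, effective_steps(20, d))
```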
Comparison with similar tools
| Tool | Strength | Best for | Pricing |
|---|---|---|---|
| ComfyUI (this tool) | The most flexible node-based visual workflows; the first choice for professional users | Professional AI image creators who need complex workflows | Free |
| Stable Diffusion WebUI | More traditional interface, easier to get started with | Beginners, or users used to a traditional interface | Free |
| Fooocus | Extremely simple, works out of the box | Quick generation without learning complex settings | Free |
| Midjourney | Highest quality, no configuration required | Users who want top quality without managing a local environment | Paid, $10-$120/month |
Sources & references:
- ComfyUI GitHub (2025-03)
- ComfyUI community workflows (2025-03)