ComfyUI — User Guide

Node-based SD workflows—flexible for power users.

Strengths
  • Node-based workflow to visually build complex generation processes
  • Completely free and open source, runs locally
  • Supports all Stable Diffusion models and LoRA
  • Workflows can be saved and shared
  • The community provides a large number of ready-made workflows
Best for
  • Building complex image generation pipelines (text-to-image + inpainting)
  • Automate batch image generation
  • Professional AI image creation workflow
  • Learn the underlying principles of Stable Diffusion
  • Develop custom AI image processing pipelines

Installation and basic use

ComfyUI requires some technical background, but once installed it is extremely powerful.

Scenario

Install ComfyUI

Prompt example
# Prerequisite: Python 3.10+ and an NVIDIA GPU (recommended)

# 1. Clone the repository
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI

# 2. Install dependencies
pip install -r requirements.txt

# 3. Download a model (place it in models/checkpoints/)
# Recommended: SDXL Base 1.0 or SD 1.5

# 4. Start the server
python main.py

# 5. Open http://127.0.0.1:8188 in a browser
Output / what to expect

Opening the browser shows the node-editing interface.

A basic text-to-image workflow is loaded by default.

You can edit the prompt directly and generate images.

Tips

It is recommended to install the ComfyUI Manager plugin, which lets you install custom nodes and workflows with one click.

Scenario

Understand node workflow

Prompt example
Nodes of the basic text-to-image workflow:

1. Load Checkpoint (load the model)
   ↓
2. CLIP Text Encode (positive prompt)
3. CLIP Text Encode (negative prompt)
   ↓
4. KSampler (the sampler, controls the generation process)
   ↓
5. VAE Decode (decode the latent into an image)
   ↓
6. Save Image

Each node has adjustable parameters,
and data flows between nodes through their connections.
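The graph above can also be expressed in ComfyUI's API (prompt) format: a JSON object whose keys are node ids, where each node names its class_type and wires inputs to `[source_node_id, output_index]`. A minimal sketch as a Python dict follows; the node ids, model filename, and parameter values are illustrative, not canonical.

```python
# Sketch of the basic text-to-image graph in ComfyUI's API format.
# Checkpoint loader outputs: 0 = MODEL, 1 = CLIP, 2 = VAE.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",            # positive prompt
          "inputs": {"clip": ["1", 1], "text": "a watercolor fox"}},
    "3": {"class_type": "CLIPTextEncode",            # negative prompt
          "inputs": {"clip": ["1", 1], "text": "blurry, low quality"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}

def dangling_links(graph):
    """Return (node_id, input_name) pairs whose connection points
    at a node id that does not exist in the graph."""
    bad = []
    for nid, node in graph.items():
        for name, value in node["inputs"].items():
            if isinstance(value, list) and value[0] not in graph:
                bad.append((nid, name))
    return bad
```

Laying the graph out this way makes the data flow explicit: KSampler consumes the model, both prompts, and an empty latent, and its output is decoded by the VAE before saving.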
Output / what to expect

Once you understand the node graph,

new functionality can be inserted between any two nodes,

such as ControlNet, LoRA, or upscaling nodes.

Tips

Download a ready-made workflow JSON file from the community and drag it directly into ComfyUI; there is no need to build it from scratch.
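Besides dragging JSON into the UI, a saved API-format workflow can be queued programmatically through the same HTTP API the web UI uses. A minimal sketch with the standard library, assuming the server from the install steps is running at http://127.0.0.1:8188:

```python
# Sketch: queue an API-format workflow via ComfyUI's POST /prompt
# endpoint (export one with "Save (API Format)" in the UI).
import json
import urllib.request

def build_queue_request(workflow, server="http://127.0.0.1:8188"):
    """Wrap an API-format workflow dict in a POST /prompt request."""
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        server + "/prompt", data=body,
        headers={"Content-Type": "application/json"})

def queue_workflow(workflow):
    # Sends the graph for execution; the response includes a
    # prompt_id that identifies the job in the queue/history.
    with urllib.request.urlopen(build_queue_request(workflow)) as resp:
        return json.loads(resp.read())
```

This is the basis for automating batch generation: load a workflow JSON once, tweak a prompt or seed per iteration, and queue each variant in a loop.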

Advanced workflow

The power of ComfyUI lies in its ability to build arbitrarily complex workflows.

Scenario

Image-to-image (img2img) workflow

Prompt example
Based on the basic workflow, add:

1. Load Image node (load the reference image)
2. VAE Encode node (encode the image into latent space)
3. Connect it to the latent_image input of KSampler
4. Adjust the denoise parameter (0.5-0.8 retains the original image's structure)

This builds an image-to-image workflow:
you can change the style while keeping the original composition.
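The four steps above amount to a small graph rewrite. A sketch, operating on an API-format graph dict like the text-to-image example earlier (the new node ids "10" and "11" are illustrative, and the VAE output index 2 assumes the checkpoint loader is CheckpointLoaderSimple):

```python
# Sketch: turn a text-to-image graph into img2img by replacing the
# KSampler's empty-latent source with LoadImage -> VAE Encode and
# lowering denoise so the original structure survives.
def to_img2img(graph, sampler_id, vae_id, image_name, denoise=0.6):
    g = dict(graph)  # shallow copy; the input graph is left untouched
    g["10"] = {"class_type": "LoadImage",      # 1. load reference image
               "inputs": {"image": image_name}}
    g["11"] = {"class_type": "VAEEncode",      # 2. encode to latent space
               "inputs": {"pixels": ["10", 0], "vae": [vae_id, 2]}}
    sampler = dict(g[sampler_id])
    inputs = dict(sampler["inputs"])
    inputs["latent_image"] = ["11", 0]         # 3. rewire KSampler input
    inputs["denoise"] = denoise                # 4. 0.5-0.8 keeps structure
    sampler["inputs"] = inputs
    g[sampler_id] = sampler
    return g
```

Nothing else in the graph changes: the same checkpoint, prompts, decode, and save nodes keep working, which is exactly the modularity the node model is designed for.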
Output / what to expect

The image-to-image workflow can convert any picture into a specified style.

The lower the denoise value, the closer the result stays to the original image; the higher it is, the more freedom the model has.

Tips

Image-to-image is ComfyUI's most commonly used advanced feature, well suited to style transfer and image editing.

Compared with similar tools

Tool | Strength | Best for | Pricing
ComfyUI (this tool) | The most flexible node-based visual workflows; first choice for power users | Professional AI image creators who need complex workflows | Completely free
Stable Diffusion WebUI | More traditional interface, easier to get started | Beginners used to a conventional UI | Completely free
Fooocus | Extremely simple, works out of the box | Users who want quick results without learning complex settings | Completely free
Midjourney | Highest quality, no configuration required | Users who want top quality without managing a local environment | Paid, $10-$120/month
