Stable Diffusion WebUI — User Guide

Popular local SD UI with a rich plugin ecosystem.

Strengths
  • Completely open source and free; runs locally with unlimited generation
  • Huge community model ecosystem (tens of thousands of models on Civitai and other platforms)
  • Highly customizable, supporting extensions such as LoRA and ControlNet
  • Data stays fully local, so there are no privacy concerns
Best for
  • Professional creators who need to generate images in large quantities
  • Designers who require fine control over a specific style
  • Developers building image-generation applications
  • Privacy-sensitive commercial image generation

Using AUTOMATIC1111 WebUI

AUTOMATIC1111 is the most popular local web interface for running Stable Diffusion.

Scenario

Install and run Stable Diffusion

Prompt example
1. Install Python 3.10 and Git
2. Clone the repository: git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
3. Download a model checkpoint and place it in the models/Stable-diffusion directory
4. Run webui.bat (Windows) or webui.sh (Linux/macOS)
Output / what to expect
The web interface starts locally; open http://127.0.0.1:7860 in your browser. Once set up, you can generate images without restrictions and without an internet connection.
Tips

On the first run, required components are downloaded automatically, so a stable network connection is needed. An NVIDIA GPU with at least 4 GB of VRAM is recommended for reasonable speed.

Scenario

Generate high-quality realistic portraits

Prompt example
Positive: (masterpiece, best quality:1.2), 1girl, beautiful face, detailed eyes, natural lighting, photorealistic, 8k


Negative: (worst quality, low quality:1.4), deformed, ugly, blurry
Output / what to expect
High-quality photorealistic portraits; the positive prompt steers content and quality, while the negative prompt suppresses common artifacts.
Tips

Negative prompts are crucial for quality. Terms such as "worst quality, low quality, deformed" belong in nearly every negative prompt.
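The portrait prompts above can also be submitted programmatically. A minimal sketch, assuming the WebUI was launched with the --api flag, which exposes a /sdapi/v1/txt2img endpoint on port 7860; the payload is only constructed and printed here, not sent, and the step/size values are illustrative defaults:

```python
import json

# Prompts from the portrait example above. The (term:weight) syntax raises
# or lowers a term's emphasis in AUTOMATIC1111's prompt parser.
positive = ("(masterpiece, best quality:1.2), 1girl, beautiful face, "
            "detailed eyes, natural lighting, photorealistic, 8k")
negative = "(worst quality, low quality:1.4), deformed, ugly, blurry"

# Request body for POST http://127.0.0.1:7860/sdapi/v1/txt2img
# (endpoint available only when the WebUI is started with --api).
payload = {
    "prompt": positive,
    "negative_prompt": negative,
    "steps": 25,        # sampling steps
    "width": 512,
    "height": 768,      # portrait aspect ratio
    "cfg_scale": 7,     # how strongly the prompt is followed
}

print(json.dumps(payload, indent=2))
```

Sending this body with any HTTP client returns a JSON response containing the generated image as a base64 string.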

ControlNet precise control

Use ControlNet to generate results with precise control over poses, line art, and more.

Scenario

Generate a character image from a pose reference

Prompt example
Upload a pose reference image, select the "OpenPose" mode in ControlNet, and enter a prompt describing the character
Output / what to expect
The generated character's pose closely matches the reference image, while appearance, clothing, and style remain fully customizable, which solves the problem of uncontrolled poses in AI-generated images.
Tips

ControlNet has multiple modes: OpenPose (pose), Canny (edges/line art), and Depth (depth map). Choose the mode that matches the kind of control you need.
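Under the same --api assumption, a ControlNet unit can be attached to a txt2img request through the sd-webui-controlnet extension's "alwayson_scripts" hook. The field and model names below follow that extension's API and may differ between versions, so treat them as assumptions; the base64 image string is a placeholder:

```python
import json

# Placeholder for the base64-encoded pose reference image.
pose_image_b64 = "<base64-encoded reference image>"

payload = {
    "prompt": "1girl, fantasy armor, detailed, best quality",
    "negative_prompt": "worst quality, low quality, deformed",
    "steps": 25,
    # The sd-webui-controlnet extension reads its settings from this hook.
    # "module" selects the preprocessor (the modes listed above);
    # "model" names the ControlNet checkpoint (assumed filename here).
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {
                    "input_image": pose_image_b64,
                    "module": "openpose",  # or "canny" / "depth"
                    "model": "control_v11p_sd15_openpose",
                    "weight": 1.0,  # how strongly the pose constrains output
                }
            ]
        }
    },
}

print(json.dumps(payload, indent=2))
```

The rest of the payload is identical to a plain txt2img request, so ControlNet can be combined freely with the prompt techniques above.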
