Diffusion Model
A generative AI technique that creates images by progressively refining random noise into coherent outputs.
Definition
Diffusion models create images by starting with pure noise and removing it step by step until a coherent image emerges. During training, the model sees images with varying amounts of noise added and learns to predict that noise; at generation time, repeatedly subtracting the predicted noise turns random static into a new image, and a text prompt can condition each denoising step so the result matches the description (see the sketch below).
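To make those "learned steps" concrete, here is a minimal NumPy sketch of a DDPM-style diffusion process. The function names and schedule values are illustrative, not any library's real API, and `model` stands in for the trained neural network that predicts noise.

```python
import numpy as np

T = 1000                              # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)    # noise schedule: how much noise each step adds
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)       # cumulative signal retention after t steps

def forward_noise(x0, t, rng):
    """Training side: jump straight to step t of the noising process.
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    The model is trained to recover eps from (x_t, t)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def reverse_step(model, xt, t, rng):
    """Generation side: remove the model's noise estimate for one step."""
    eps_hat = model(xt, t)                       # predicted noise
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    x_prev = (xt - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:                                    # re-inject a little noise except at the end
        x_prev += np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return x_prev

def sample(model, shape, rng):
    """Start from pure noise and denoise for T steps to produce an image."""
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        x = reverse_step(model, x, t, rng)
    return x

# Runs as-is with an untrained stand-in; a real model is a trained neural
# network (e.g. a U-Net) that also takes a text embedding as conditioning.
rng = np.random.default_rng(0)
untrained = lambda x, t: np.zeros_like(x)
image = sample(untrained, (8, 8), rng)
```

In systems like Stable Diffusion, this same loop runs with a text embedding fed into the noise predictor, which is how the prompt steers each denoising step.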
This approach powers leading image generators including Stable Diffusion and DALL-E.
Why It Matters
Understanding diffusion models explains the characteristic behavior of AI image generation: why it excels at certain styles and struggles with others, and why prompt phrasing matters.
This knowledge helps create better prompts and set realistic expectations for AI-generated imagery.
Examples in Practice
A designer uses knowledge of diffusion model limitations to craft prompts that avoid common failure modes like mangled hands.
A company evaluates competing diffusion-based tools according to their training data and artistic capabilities.
An artist develops workflows that leverage diffusion model strengths while manually correcting typical weaknesses.