Diffusion Model 🔒 Gated

FLUX.2 Klein 4B

by black-forest-labs / Comfy-Org / unsloth · ★★★★½ 4.8 · — downloads · — · flux2
Install with mods
mods install flux2-klein-4b
⚡ Running mods install flux2-klein-4b without --variant auto-selects the best variant for your GPU VRAM.
Available variants: distilled-fp8, base-fp8, bf16, gguf-q8-0, gguf-q6-k, gguf-q5-k-m, gguf-q4-k-m, gguf-q3-k-m, gguf-q2-k
mods install flux2-klein-4b --variant distilled-fp8
🔒 This model requires accepting terms on HuggingFace and running mods auth huggingface first.

About

FLUX.2 Klein 4B is the fastest model in the FLUX family. It unifies text-to-image
generation and image editing in one compact architecture. Two variants are available:
Base (undistilled, best for fine-tuning) and Distilled (4-step sampling, ~1.2 s per
image on an RTX 5090). The distilled variant needs only 8.4 GB of VRAM. Supported
operations include style transforms, semantic edits, object replacement/removal,
multi-reference composition, and iterative edits.

Variants

Variant        Format       Size    VRAM    Install
distilled-fp8  safetensors  4.5 GB  8+ GB   mods install flux2-klein-4b --variant distilled-fp8
base-fp8       safetensors  4.5 GB  10+ GB  mods install flux2-klein-4b --variant base-fp8
bf16           gguf         7.8 GB  10+ GB  mods install flux2-klein-4b --variant bf16
gguf-q8-0      gguf         4.3 GB  6+ GB   mods install flux2-klein-4b --variant gguf-q8-0
gguf-q6-k      gguf         3.4 GB  6+ GB   mods install flux2-klein-4b --variant gguf-q6-k
gguf-q5-k-m    gguf         3.1 GB  6+ GB   mods install flux2-klein-4b --variant gguf-q5-k-m
gguf-q4-k-m    gguf         2.6 GB  4+ GB   mods install flux2-klein-4b --variant gguf-q4-k-m
gguf-q3-k-m    gguf         2.1 GB  4+ GB   mods install flux2-klein-4b --variant gguf-q3-k-m
gguf-q2-k      gguf         1.8 GB  4+ GB   mods install flux2-klein-4b --variant gguf-q2-k
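The VRAM-based auto-selection mentioned above (running mods install without --variant) could be sketched as follows. This is a hypothetical illustration only: the function name, the quality ordering, and the selection logic are assumptions, not the actual mods implementation; the thresholds come from the VRAM column of the table.

```python
# Hypothetical sketch of VRAM-based variant auto-selection.
# Thresholds mirror the table above; ordering and logic are illustrative.

# (variant, minimum VRAM in GB), ordered from most to least demanding
VARIANTS = [
    ("base-fp8", 10),
    ("bf16", 10),
    ("distilled-fp8", 8),
    ("gguf-q8-0", 6),
    ("gguf-q6-k", 6),
    ("gguf-q5-k-m", 6),
    ("gguf-q4-k-m", 4),
    ("gguf-q3-k-m", 4),
    ("gguf-q2-k", 4),
]

def auto_select_variant(vram_gb: float) -> str:
    """Return the first variant in the list whose VRAM floor fits vram_gb."""
    for name, min_vram in VARIANTS:
        if vram_gb >= min_vram:
            return name
    # Fall back to the smallest quantization if nothing fits comfortably
    return VARIANTS[-1][0]

print(auto_select_variant(24))  # -> base-fp8
print(auto_select_variant(8))   # -> distilled-fp8
print(auto_select_variant(5))   # -> gguf-q4-k-m
```

With 8 GB of VRAM this picks distilled-fp8, matching the table's 8+ GB floor for that variant; cards below 4 GB fall back to gguf-q2-k, the smallest quantization listed.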

Dependencies

📦 These models are automatically installed when you run mods install flux2-klein-4b. No extra steps needed — mods resolves and downloads all dependencies for you.