About
FLUX.2 Klein 9B is the mid-size model in the FLUX.2 Klein family.
Successor to FLUX.1, with improved image quality and editing capabilities.
Supports text-to-image and image-editing workflows.
Uses the FLUX.2 VAE and the Qwen 3 8B text encoder.
Available as BFL fp8 safetensors (distilled and base) and as GGUF quantizations.
The GGUF variants require the ComfyUI-GGUF custom node.
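Before installing a GGUF variant it can help to verify the ComfyUI-GGUF node is actually present. A minimal sketch, assuming the conventional ComfyUI layout where custom nodes live under `custom_nodes/`; the function name and path are illustrative and not part of mods:

```python
from pathlib import Path

def has_gguf_node(comfyui_root: str) -> bool:
    """Return True if the ComfyUI-GGUF custom node appears to be installed.

    Assumes the conventional layout: <root>/custom_nodes/ComfyUI-GGUF.
    Illustrative pre-flight check only, not part of the mods CLI.
    """
    return (Path(comfyui_root) / "custom_nodes" / "ComfyUI-GGUF").is_dir()
```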
Variants
| Variant | Format | Size | VRAM | Install |
|---|---|---|---|---|
| distilled-fp8 | safetensors | 9.4 GB | 12+ GB | mods install flux2-klein-9b --variant distilled-fp8 |
| base-fp8 | safetensors | 9.6 GB | 14+ GB | mods install flux2-klein-9b --variant base-fp8 |
| bf16 | gguf | 18.2 GB | 20+ GB | mods install flux2-klein-9b --variant bf16 |
| f16 | gguf | 18.2 GB | 20+ GB | mods install flux2-klein-9b --variant f16 |
| gguf-q8-0 | gguf | 10.0 GB | 12+ GB | mods install flux2-klein-9b --variant gguf-q8-0 |
| gguf-q6-k | gguf | 7.9 GB | 10+ GB | mods install flux2-klein-9b --variant gguf-q6-k |
| gguf-q5-k-m | gguf | 7.0 GB | 8+ GB | mods install flux2-klein-9b --variant gguf-q5-k-m |
| gguf-q5-k-s | gguf | 6.9 GB | 8+ GB | mods install flux2-klein-9b --variant gguf-q5-k-s |
| gguf-q4-k-m | gguf | 5.9 GB | 8+ GB | mods install flux2-klein-9b --variant gguf-q4-k-m |
| gguf-q4-k-s | gguf | 5.8 GB | 8+ GB | mods install flux2-klein-9b --variant gguf-q4-k-s |
| gguf-q4-1 | gguf | 6.2 GB | 8+ GB | mods install flux2-klein-9b --variant gguf-q4-1 |
| gguf-q4-0 | gguf | 5.6 GB | 8+ GB | mods install flux2-klein-9b --variant gguf-q4-0 |
| gguf-q3-k-m | gguf | 4.8 GB | 6+ GB | mods install flux2-klein-9b --variant gguf-q3-k-m |
| gguf-q3-k-s | gguf | 4.7 GB | 6+ GB | mods install flux2-klein-9b --variant gguf-q3-k-s |
| gguf-q2-k | gguf | 4.0 GB | 6+ GB | mods install flux2-klein-9b --variant gguf-q2-k |
Dependencies
The FLUX.2 VAE and the Qwen 3 8B text encoder are installed automatically when you run mods install flux2-klein-9b. No extra steps are needed: mods resolves and downloads all dependencies for you.
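The dependency resolution described above can be pictured as a small graph walk that installs dependencies before the package that needs them. A toy sketch of the idea, not the actual mods implementation; the package names below echo the components listed under About but are assumed identifiers:

```python
# Toy dependency graph; real mods metadata will differ.
DEPS = {
    "flux2-klein-9b": ["flux2-vae", "qwen3-8b-text-encoder"],
    "flux2-vae": [],
    "qwen3-8b-text-encoder": [],
}

def resolve(pkg, deps=DEPS, order=None):
    """Return an install order with each dependency before its dependent."""
    if order is None:
        order = []
    for dep in deps.get(pkg, []):
        resolve(dep, deps, order)
    if pkg not in order:
        order.append(pkg)
    return order
```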