Diffusion Model 🔒 Gated

FLUX.2 Dev

by black-forest-labs / Comfy-Org / unsloth · ★★★★½ 4.9 · — downloads · — · flux2
Install with mods
mods install flux2-dev
⚡ Running mods install flux2-dev without --variant auto-selects the best variant for your GPU's VRAM.
Available variants: fp8mixed, bf16, gguf-q8-0, gguf-q6-k, gguf-q5-k-m, gguf-q4-k-m, gguf-q3-k-m, gguf-q2-k
mods install flux2-dev --variant fp8mixed
🔒 This model requires accepting the terms on HuggingFace and running mods auth huggingface first.

About

FLUX.2 Dev is a next-generation image model from Black Forest Labs.

It produces photorealistic output at up to 4 MP with improved lighting, skin, fabric, and hand detail. It offers multi-reference consistency (up to 10 images), improved editing precision, better visual understanding, and professional-class text rendering.

It uses the Mistral 3 Small text encoder and the FLUX.2 VAE, and is open source under a non-commercial license.

Variants

Variant      Format       Size     VRAM    Install
fp8mixed     safetensors  12.0 GB  12+ GB  mods install flux2-dev --variant fp8mixed
bf16         gguf         64.4 GB  64+ GB  mods install flux2-dev --variant bf16
gguf-q8-0    gguf         35.0 GB  36+ GB  mods install flux2-dev --variant gguf-q8-0
gguf-q6-k    gguf         27.4 GB  28+ GB  mods install flux2-dev --variant gguf-q6-k
gguf-q5-k-m  gguf         23.9 GB  24+ GB  mods install flux2-dev --variant gguf-q5-k-m
gguf-q4-k-m  gguf         20.0 GB  20+ GB  mods install flux2-dev --variant gguf-q4-k-m
gguf-q3-k-m  gguf         15.8 GB  16+ GB  mods install flux2-dev --variant gguf-q3-k-m
gguf-q2-k    gguf         12.9 GB  14+ GB  mods install flux2-dev --variant gguf-q2-k
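As a rough illustration of the auto-selection described in the install note above, here is a hypothetical shell helper that maps available VRAM (in GB) to the largest variant whose requirement (the VRAM column of the table) it satisfies. Note that pick_variant is not part of the mods CLI; mods performs this selection internally when you omit --variant.

```shell
# Hypothetical sketch: choose a FLUX.2 Dev variant from available VRAM (GB),
# using the thresholds in the variants table. Largest viable variant wins.
pick_variant() {
  vram=$1
  if   [ "$vram" -ge 64 ]; then echo bf16
  elif [ "$vram" -ge 36 ]; then echo gguf-q8-0
  elif [ "$vram" -ge 28 ]; then echo gguf-q6-k
  elif [ "$vram" -ge 24 ]; then echo gguf-q5-k-m
  elif [ "$vram" -ge 20 ]; then echo gguf-q4-k-m
  elif [ "$vram" -ge 16 ]; then echo gguf-q3-k-m
  elif [ "$vram" -ge 14 ]; then echo gguf-q2-k
  elif [ "$vram" -ge 12 ]; then echo fp8mixed   # smallest footprint: fp8mixed at 12 GB
  else
    echo "insufficient VRAM for any flux2-dev variant" >&2
    return 1
  fi
}

# Example: a 24 GB card gets gguf-q5-k-m; the equivalent explicit install is
#   mods install flux2-dev --variant "$(pick_variant 24)"
```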

Dependencies

📦 Required companion models (the Mistral 3 Small text encoder and the FLUX.2 VAE) are installed automatically when you run mods install flux2-dev. No extra steps needed: mods resolves and downloads all dependencies for you.