CLI Reference
Everything you need to use mods effectively.
Installation
```
curl -fsSL https://raw.githubusercontent.com/modshq-org/mods/main/install.sh | sh
```

Or build from source:

```
git clone https://github.com/modshq-org/mods && cd mods && cargo install --path .
```

Quick Start
Set up mods
```
$ mods init
```

Auto-detects ComfyUI and A1111 installations, configures storage.
Adopt existing models (optional)
```
$ mods link --comfyui ~/ComfyUI
```

Moves recognized models into the store and replaces them with symlinks.
Install models
```
$ mods install flux-dev
```

Downloads the model plus all dependencies and symlinks them into all your tools.
Concepts
Content-Addressed Storage
Models are stored by SHA256 hash in ~/mods/store/. A single file on disk can be symlinked into multiple tools. This means no duplicate 24GB files across ComfyUI and A1111.
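The store layout can be illustrated with a short sketch. Only the SHA256-keyed store under ~/mods/store/ comes from the description above; the function name and exact path scheme here are assumptions, not mods' actual code:

```python
import hashlib
from pathlib import Path

def store_path(model_file: str, store_root: str = "~/mods/store") -> Path:
    """Return a content-addressed path: the store root plus the file's
    SHA256 hex digest. Identical files always map to the same path,
    so the same 24GB checkpoint is only ever stored once."""
    h = hashlib.sha256()
    with open(model_file, "rb") as f:
        # Hash in chunks so multi-GB checkpoints don't need to fit in RAM.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return Path(store_root).expanduser() / h.hexdigest()
```

Because the path is derived purely from content, installing the same model for a second tool is a no-op at the storage layer: only a new symlink is created.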
Symlink Strategy
When you install a model, mods downloads it to the store, then creates symlinks in your tool's model folders. Symlinks are transparent — your tools see normal files. On Windows, mods falls back to hard links if symlinks require admin privileges.
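The symlink-with-fallback behavior can be sketched as follows. The function name is illustrative; the only behavior taken from the text above is "symlink first, hard link if the OS refuses":

```python
import os

def link_into_tool(store_file: str, tool_path: str) -> str:
    """Expose a store file inside a tool's model folder.

    Prefer a symlink; if the OS refuses (e.g. Windows without the
    symlink privilege raises OSError), fall back to a hard link so
    the tool still sees a normal file without duplicating the bytes.
    """
    try:
        os.symlink(store_file, tool_path)
        return "symlink"
    except OSError:
        os.link(store_file, tool_path)
        return "hardlink"
```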
Dependency Resolution
Manifests declare dependencies. Installing a checkpoint automatically installs its required VAE, text encoders, and other assets. The resolver handles transitive dependencies and skips already-installed items.
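The resolution described above — transitive dependencies first, already-installed items skipped — can be sketched as a depth-first graph walk. The manifest shape (a mapping of id to dependency ids) is a stand-in for real manifest files:

```python
def resolve(model_id, manifests, installed):
    """Return an install order for model_id plus all transitive deps.

    `manifests` maps id -> list of dependency ids; ids already in
    `installed` are skipped. Dependencies always come before the
    items that need them.
    """
    order, seen = [], set(installed)

    def visit(mid):
        if mid in seen:
            return
        seen.add(mid)
        for dep in manifests.get(mid, []):
            visit(dep)          # depth-first: deps install first
        order.append(mid)

    visit(model_id)
    return order
```

So installing a hypothetical checkpoint whose manifest declares a VAE and a text encoder would yield the VAE and encoder first, then the checkpoint, with any already-installed piece dropped from the plan.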
Variant Selection
Models come in variants (fp16, fp8, GGUF quantizations). Mods detects your GPU VRAM and picks the largest variant that fits. Dependencies can constrain variants too — a checkpoint can require the fp8 text encoder to avoid VRAM overflow.
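"Largest variant that fits" could look like the sketch below. The VRAM figures in the usage are made-up, and the real selector also weighs the dependency constraints mentioned above:

```python
def pick_variant(variants, vram_mb, forced=None):
    """variants: dict of name -> estimated VRAM need in MB.

    Returns the forced variant if one was requested (e.g. via
    --variant), otherwise the largest variant that fits the detected
    VRAM, or None if nothing fits."""
    if forced is not None:
        return forced
    fitting = [(need, name) for name, need in variants.items() if need <= vram_mb]
    return max(fitting)[1] if fitting else None
```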
Adopting Existing Models
mods link scans your existing model folders, hashes files, and matches them against the registry. Matched files are moved into the store and replaced with symlinks. Unrecognized files are left untouched.
Commands
mods init
Interactive first-run setup — detect tools, configure storage

```
mods init
```

mods install
Install a model, LoRA, VAE, or other asset (with dependency resolution)

```
mods install <id> [flags]

  <id>                 Model ID from the registry (e.g., flux-dev, realistic-skin-v3)
  --variant <value>    Force a specific variant (e.g., fp16, fp8, gguf-q4)
  --dry-run            Show what would be installed without doing it
  --force              Force re-download even if files already exist
```

mods uninstall
Remove an installed model

```
mods uninstall <id> [flags]

  <id>       Model ID to uninstall
  --force    Force removal even if other items depend on this
```

mods list
List installed models

```
mods list [flags]

  -t, --type <value>   Filter by asset type (checkpoint, lora, vae, text_encoder, etc.)
```

mods info
Show detailed info about a model

```
mods info <id>

  <id>   Model ID to inspect
```

mods search
Search the registry

```
mods search <query> [flags]

  <query>               Search query
  -t, --type <value>    Filter by asset type
  --for <value>         Filter by compatible base model
  --tag <value>         Filter by tag
  --min-rating <value>  Minimum rating
```

mods space
Show disk usage breakdown

```
mods space
```

mods doctor
Check for broken symlinks, missing deps, corrupt files

```
mods doctor [flags]

  --verify-hashes   Also verify SHA256 hashes (slow for large files)
```

mods config
View or update configuration (e.g., storage.root, gpu.vram_mb)

```
mods config [key] [value]

  [key]     Config key to view or set (e.g., storage.root)
  [value]   New value (required when setting a key)
```

mods gc
Garbage collect — remove unreferenced files from the store

```
mods gc
```

mods link
Link an existing tool's model folder into mods

```
mods link [flags]

  --comfyui <value>   Path to ComfyUI installation
  --a1111 <value>     Path to A1111 installation
```

mods auth
Configure authentication (HuggingFace, Civitai)

```
mods auth <provider>

  <provider>   Auth provider: huggingface or civitai
```

mods update
Fetch latest registry index

```
mods update
```

mods export
Export installed state to a lock file

```
mods export
```

mods import
Import and install from a lock file

```
mods import <path>

  <path>   Path to mods.lock file
```

mods popular
Show popular/trending models

```
mods popular [type] [flags]

  [type]             Filter by asset type
  --for <value>      Filter by compatible base model
  --period <value>   Time period: day, week, month (default: week)
```

VRAM Selection
Mods detects your GPU and picks the largest variant that fits. Override with --variant or set a manual VRAM value.
  fp16      Full quality, no compromises
  fp8       Slight quality reduction, half the VRAM
  gguf-q4   Quantized, needs GGUF loader node
  gguf-q2   Lower quality, but functional

```
$ mods config gpu.vram_mb 24576   # Manual override
```

Config Files
~/.mods/config.yaml
Main configuration: storage root, tool targets, GPU override.
```yaml
storage:
  root: ~/mods
targets:
  - path: ~/ComfyUI
    type: comfyui
    symlink: true
  - path: ~/stable-diffusion-webui
    type: a1111
    symlink: true
# gpu:
#   vram_mb: 24576
```

~/.mods/auth.yaml
Authentication tokens for gated model providers.
```yaml
huggingface:
  token: "hf_..."
civitai:
  api_key: "..."
```
~/.mods/state.db
SQLite database tracking installed models, symlinks, and dependencies.
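A minimal sketch of what such a tracking database might contain. This schema is purely illustrative — mods' actual tables are not documented here; only the three tracked concerns (installed models, symlinks, dependencies) come from the description above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for ~/.mods/state.db
conn.executescript("""
  -- One row per installed model, keyed by registry id.
  CREATE TABLE installed (id TEXT PRIMARY KEY, sha256 TEXT, variant TEXT);
  -- Every symlink mods created, so a health check can find broken ones.
  CREATE TABLE links (store_path TEXT, target_path TEXT, tool TEXT);
  -- Dependency edges, consulted before uninstalling or garbage collecting.
  CREATE TABLE deps (model_id TEXT, depends_on TEXT);
""")
```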
~/.mods/index.json
Local cache of the registry index. Updated via mods update.
Supported Tools
Mods creates symlinks into the correct folder for each tool. Configure targets during mods init or with mods link.
ComfyUI
Full support. Models placed in models/checkpoints/, LoRAs in models/loras/, etc.
```
mods link --comfyui ~/ComfyUI
```

A1111 / SD WebUI
Full support. Models in models/Stable-diffusion/, LoRAs in models/Lora/, etc.
```
mods link --a1111 ~/stable-diffusion-webui
```

InvokeAI (planned)
InvokeAI uses its own model database internally. Integration is on the roadmap.
Other tools
Want to add support for another tool? Contributions welcome — see CONTRIBUTING.md for how to add a folder layout mapping.
FAQ
What if mods init doesn't detect my ComfyUI?
You can manually link any tool installation with mods link:
```
$ mods link --comfyui /path/to/ComfyUI
```

This works for any location, including portable or manually installed setups.
How do I download gated models like Flux Dev?
Some models on HuggingFace require accepting license terms. Mods handles this:
```
$ mods auth huggingface
```

This stores your HuggingFace token in ~/.mods/auth.yaml. You'll also need to accept the model's terms on HuggingFace before downloading. Mods will tell you exactly which URL to visit.
Can I override the auto-selected variant?
Yes. Mods picks the largest variant that fits your GPU by default, but you can always override:
```
$ mods install flux-dev --variant fp8
```

This is useful if you prefer faster inference over max quality — for example, fp8 on a 24GB card gives roughly 2x speed with minimal quality loss.
Where are my models stored?
Models live in a content-addressed store at ~/mods/store/ by default. Change it with:
```
$ mods config storage.root /path/to/new/location
```

Check current disk usage with mods space.
How much disk space do I need?
It depends on which models you install. A typical Flux setup (checkpoint + VAE + text encoders) is ~30GB. Run mods install --dry-run to see download sizes before committing, and mods space to see current usage.
Something seems broken — how do I diagnose it?
Run the health check:
```
$ mods doctor
```

This checks for broken symlinks, missing dependencies, and other issues. Add --verify-hashes to also verify file integrity (slower for large files).