CLI Reference

Everything you need to use mods effectively.

Installation

curl -fsSL https://raw.githubusercontent.com/modshq-org/mods/main/install.sh | sh

Or build from source:

git clone https://github.com/modshq-org/mods && cd mods && cargo install --path .

Quick Start

1. Set up mods

$ mods init

Auto-detects ComfyUI and A1111 installations and configures storage.

2. Adopt existing models (optional)

$ mods link --comfyui ~/ComfyUI

Moves recognized models into the store and replaces them with symlinks.

3. Install models

$ mods install flux-dev

Downloads the model and all of its dependencies, then symlinks them into every configured tool.

Concepts

Content-Addressed Storage

Models are stored by SHA256 hash in ~/mods/store/. A single file on disk can be symlinked into multiple tools. This means no duplicate 24GB files across ComfyUI and A1111.
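The mechanism can be sketched in a few lines of Python. The paths, filenames, and file contents here are illustrative, not mods' actual layout:

```python
import hashlib
import os
import tempfile

root = tempfile.mkdtemp()                      # stand-in for ~/mods
store = os.path.join(root, "store")
os.makedirs(store)

# One downloaded file, stored under its SHA256 hash.
blob = os.path.join(root, "flux-dev.safetensors")
with open(blob, "wb") as f:
    f.write(b"model weights")
with open(blob, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
stored = os.path.join(store, digest)
os.replace(blob, stored)

# The same stored file is symlinked into both tools: no duplicate copies.
for tool_dir in ("ComfyUI/models/checkpoints",
                 "stable-diffusion-webui/models/Stable-diffusion"):
    link_dir = os.path.join(root, tool_dir)
    os.makedirs(link_dir)
    os.symlink(stored, os.path.join(link_dir, "flux-dev.safetensors"))
```

Both links resolve to the same file on disk, so a 24GB checkpoint costs 24GB exactly once no matter how many tools use it.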

Symlink Strategy

When you install a model, mods downloads it to the store, then creates symlinks in your tool's model folders. Symlinks are transparent — your tools see normal files. On Windows, mods falls back to hard links if symlinks require admin privileges.
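A minimal sketch of that fallback logic, assuming a stored file and a desired link path (a hypothetical helper, not mods' actual code):

```python
import os
import tempfile

def place(stored_path: str, link_path: str) -> str:
    """Link a stored file into a tool folder, preferring a symlink.

    Falls back to a hard link when symlink creation fails, e.g. on
    Windows without admin privileges. Illustrative sketch only.
    """
    try:
        os.symlink(stored_path, link_path)
        return "symlink"
    except OSError:
        os.link(stored_path, link_path)
        return "hardlink"

# Demo with a throwaway store file.
root = tempfile.mkdtemp()
stored = os.path.join(root, "abc123")          # a blob in the store
with open(stored, "wb") as f:
    f.write(b"weights")
method = place(stored, os.path.join(root, "model.safetensors"))
```

Either way, the tool opens `model.safetensors` and reads the stored bytes; hard links just lose the one-file-many-names visibility that symlinks give you.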

Dependency Resolution

Manifests declare dependencies. Installing a checkpoint automatically installs its required VAE, text encoders, and other assets. The resolver handles transitive dependencies and skips already-installed items.
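The resolver's behavior can be sketched like this; the manifest contents and model IDs below are made up for illustration (real manifests live in the registry):

```python
# Hypothetical manifests: model ID -> required dependency IDs.
MANIFESTS = {
    "flux-dev": ["flux-vae", "t5-xxl", "clip-l"],
    "flux-vae": [],
    "t5-xxl": [],
    "clip-l": [],
}

def resolve(model_id, installed, plan=None):
    """Depth-first walk: dependencies first, skipping installed items."""
    if plan is None:
        plan = []
    for dep in MANIFESTS.get(model_id, []):
        resolve(dep, installed, plan)
    if model_id not in installed and model_id not in plan:
        plan.append(model_id)
    return plan
```

With `clip-l` already installed, `resolve("flux-dev", {"clip-l"})` returns only the missing items, in dependency order, ending with the checkpoint itself.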

Variant Selection

Models come in variants (fp16, fp8, GGUF quantizations). Mods detects your GPU VRAM and picks the largest variant that fits. Dependencies can constrain variants too — a checkpoint can require the fp8 text encoder to avoid VRAM overflow.
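The selection rule is roughly "largest variant whose VRAM floor fits". This sketch mirrors the tiers in the VRAM Selection table further down; the thresholds mods uses internally may differ:

```python
# Tiers ordered largest / highest quality first; minimums are illustrative.
VARIANT_TIERS = [
    ("fp16", 24 * 1024),
    ("fp8", 12 * 1024),
    ("gguf-q4", 8 * 1024),
    ("gguf-q2", 0),
]

def pick_variant(vram_mb, override=None):
    """Pick the largest variant that fits, unless --variant overrides."""
    if override is not None:
        return override
    for name, min_vram_mb in VARIANT_TIERS:
        if vram_mb >= min_vram_mb:
            return name
    return VARIANT_TIERS[-1][0]
```

A dependency constraint (like the fp8 text-encoder requirement mentioned above) would simply narrow the candidate list before this pick happens.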

Adopting Existing Models

mods link scans your existing model folders, hashes files, and matches them against the registry. Matched files are moved into the store and replaced with symlinks. Unrecognized files are left untouched.
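Adoption is essentially hash, match, move, relink. Everything in this sketch (the registry contents, filenames, and helper) is illustrative:

```python
import hashlib
import os
import tempfile

def adopt(folder, store, registry):
    """Hash each file; move registry matches into the store, symlink back."""
    adopted = []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest not in registry:
            continue                      # unrecognized files stay untouched
        stored = os.path.join(store, digest)
        os.replace(path, stored)          # move into the store...
        os.symlink(stored, path)          # ...leaving a symlink behind
        adopted.append(registry[digest])
    return adopted

# Demo: one known file, one personal finetune the registry has never seen.
root = tempfile.mkdtemp()
folder = os.path.join(root, "checkpoints")
store = os.path.join(root, "store")
os.makedirs(folder)
os.makedirs(store)
with open(os.path.join(folder, "known.safetensors"), "wb") as f:
    f.write(b"known weights")
with open(os.path.join(folder, "my-finetune.safetensors"), "wb") as f:
    f.write(b"unknown weights")
registry = {hashlib.sha256(b"known weights").hexdigest(): "some-model"}
adopted = adopt(folder, store, registry)
```

After the run, the known file is a symlink into the store while the finetune is still an ordinary file, untouched.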

Commands

mods init

Interactive first-run setup — detect tools, configure storage

Usage: mods init

mods install

Install a model, LoRA, VAE, or other asset (with dependency resolution)

Usage: mods install <id> [flags]

Arguments:
  <id>                Model ID from the registry (e.g., flux-dev, realistic-skin-v3)

Flags:
  --variant <value>   Force a specific variant (e.g., fp16, fp8, gguf-q4)
  --dry-run           Show what would be installed without doing it
  --force             Force re-download even if files already exist

mods uninstall

Remove an installed model

Usage: mods uninstall <id> [flags]

Arguments:
  <id>      Model ID to uninstall

Flags:
  --force   Force removal even if other items depend on this

mods list

List installed models

Usage: mods list [flags]

Flags:
  -t, --type <value>   Filter by asset type (checkpoint, lora, vae, text_encoder, etc.)

mods info

Show detailed info about a model

Usage: mods info <id>

Arguments:
  <id>   Model ID to inspect

mods space

Show disk usage breakdown

Usage: mods space

mods doctor

Check for broken symlinks, missing deps, corrupt files

Usage: mods doctor [flags]

Flags:
  --verify-hashes   Also verify SHA256 hashes (slow for large files)

mods config

View or update configuration (e.g., storage.root, gpu.vram_mb)

Usage: mods config [key] [value]

Arguments:
  [key]     Config key to view or set (e.g., storage.root)
  [value]   New value (required when setting a key)

mods gc

Garbage collect — remove unreferenced files from the store

Usage: mods gc
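Conceptually, gc sweeps the store against the set of hashes that installed models still reference. A simplified sketch, with made-up hashes:

```python
import os
import tempfile

def gc(store, referenced):
    """Delete store entries whose hash no installed model references."""
    removed = []
    for name in sorted(os.listdir(store)):
        if name not in referenced:
            os.remove(os.path.join(store, name))
            removed.append(name)
    return removed

# Demo: two blobs in the store, only one still referenced.
store = tempfile.mkdtemp()
for digest in ("aaa111", "bbb222"):
    with open(os.path.join(store, digest), "wb") as f:
        f.write(b"weights")
removed = gc(store, referenced={"aaa111"})
```

Because store entries are shared, a blob is only unreferenced once every model that used it has been uninstalled.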

mods auth

Configure authentication (HuggingFace, Civitai)

Usage: mods auth <provider>

Arguments:
  <provider>   Auth provider: huggingface or civitai

mods update

Fetch latest registry index

Usage: mods update

mods export

Export installed state to a lock file

Usage: mods export

mods import

Import and install from a lock file

Usage: mods import <path>

Arguments:
  <path>   Path to mods.lock file

VRAM Selection

Mods detects your GPU and picks the largest variant that fits. Override with --variant or set a manual VRAM value.

VRAM      Variant   Notes
24GB+     fp16      Full quality, no compromises
12–23GB   fp8       Slight quality reduction, half the VRAM
8–11GB    gguf-q4   Quantized, needs GGUF loader node
< 8GB     gguf-q2   Lower quality, but functional

$ mods config gpu.vram_mb 24576  # Manual override

Config Files

~/.mods/config.yaml

Main configuration: storage root, tool targets, GPU override.

storage:
  root: ~/mods
targets:
  - path: ~/ComfyUI
    type: comfyui
    symlink: true
  - path: ~/stable-diffusion-webui
    type: a1111
    symlink: true
# gpu:
#   vram_mb: 24576

~/.mods/auth.yaml

Authentication tokens for gated model providers.

huggingface:
  token: "hf_..."
civitai:
  api_key: "..."

~/.mods/state.db

SQLite database tracking installed models, symlinks, and dependencies.
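A hypothetical sketch of what such a database might track. The schema below is invented for illustration; the real state.db layout is an internal detail of mods:

```python
import sqlite3

# Invented schema: one row per installed model, plus the symlinks and
# dependency edges that uninstall/doctor/gc would need to consult.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE models   (id TEXT PRIMARY KEY, variant TEXT, sha256 TEXT);
    CREATE TABLE symlinks (model_id TEXT, link_path TEXT);
    CREATE TABLE deps     (model_id TEXT, depends_on TEXT);
""")
db.execute("INSERT INTO models VALUES ('flux-dev', 'fp8', 'abc123')")
db.execute("INSERT INTO deps VALUES ('flux-dev', 'flux-vae')")
variant = db.execute(
    "SELECT variant FROM models WHERE id = ?", ("flux-dev",)
).fetchone()[0]
```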

~/.mods/index.json

Local cache of the registry index. Updated via mods update.

Supported Tools

Mods creates symlinks into the correct folder for each tool. Configure targets during mods init or with mods link.

ComfyUI

Full support. Models placed in models/checkpoints/, LoRAs in models/loras/, etc.

mods link --comfyui ~/ComfyUI

A1111 / SD WebUI

Full support. Models in models/Stable-diffusion/, LoRAs in models/Lora/, etc.

mods link --a1111 ~/stable-diffusion-webui

InvokeAI (planned)

InvokeAI uses its own model database internally. Integration is on the roadmap.

Other tools

Want to add support for another tool? Contributions welcome — see CONTRIBUTING.md for how to add a folder layout mapping.
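A folder layout mapping is essentially an asset-type-to-directory table per tool. This sketch uses the ComfyUI and A1111 paths listed above; the mapping format and lookup helper are hypothetical, not the actual contribution interface:

```python
# Directories are those documented above, relative to each tool's root.
LAYOUTS = {
    "comfyui": {
        "checkpoint": "models/checkpoints",
        "lora": "models/loras",
    },
    "a1111": {
        "checkpoint": "models/Stable-diffusion",
        "lora": "models/Lora",
    },
}

def link_dir(tool, asset_type):
    """Directory (relative to the tool root) where an asset type belongs."""
    return LAYOUTS[tool][asset_type]
```

Supporting a new tool then amounts to contributing one more entry in this table.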

FAQ

What if mods init doesn't detect my ComfyUI?

You can manually link any tool installation with mods link:

$ mods link --comfyui /path/to/ComfyUI

This works for any location, including portable or manually installed setups.

How do I download gated models like Flux Dev?

Some models on HuggingFace require accepting license terms. Mods handles this:

$ mods auth huggingface

This stores your HuggingFace token in ~/.mods/auth.yaml. You'll also need to accept the model's terms on HuggingFace before downloading. Mods will tell you exactly which URL to visit.

Can I override the auto-selected variant?

Yes. Mods picks the largest variant that fits your GPU by default, but you can always override:

$ mods install flux-dev --variant fp8

This is useful if you prefer faster inference over max quality — for example, fp8 on a 24GB card gives roughly 2x speed with minimal quality loss.

Where are my models stored?

Models live in a content-addressed store at ~/mods/store/ by default. Change it with:

$ mods config storage.root /path/to/new/location

Check current disk usage with mods space.

How much disk space do I need?

It depends on which models you install. A typical Flux setup (checkpoint + VAE + text encoders) is ~30GB. Run mods install <id> --dry-run to see download sizes before committing, and mods space to see current usage.

Something seems broken — how do I diagnose it?

Run the health check:

$ mods doctor

This checks for broken symlinks, missing dependencies, and other issues. Add --verify-hashes to also verify file integrity (slower for large files).