---
name: hugging-face-cli
description: "Execute Hugging Face Hub operations using the `hf` CLI. Use when the user needs to download models/datasets/spaces, upload files to Hub repositories, create repos, manage local cache, or run compute jobs on HF infrastructure. Covers authentication, file transfers, repository creation, cache operations, and cloud compute."
source: "https://github.com/huggingface/skills/tree/main/skills/hugging-face-cli"
risk: safe
---

# Hugging Face CLI

The `hf` CLI provides direct terminal access to the Hugging Face Hub for downloading, uploading, and managing repositories, cache, and compute resources.

## When to Use This Skill

Use this skill when:
- Downloading models, datasets, or spaces
- Uploading files to Hub repositories
- Creating Hugging Face repositories
- Managing the local cache
- Running compute jobs on HF infrastructure
- Working with Hugging Face Hub authentication

## Quick Command Reference

| Task | Command |
|------|---------|
| Login | `hf auth login` |
| Download model | `hf download <repo_id>` |
| Download to folder | `hf download <repo_id> --local-dir ./path` |
| Upload folder | `hf upload <repo_id> . .` |
| Create repo | `hf repo create <name>` |
| Create tag | `hf repo tag create <repo_id> <tag>` |
| Delete files | `hf repo-files delete <repo_id> <files>` |
| List cache | `hf cache ls` |
| Remove from cache | `hf cache rm <repo_or_revision>` |
| List models | `hf models ls` |
| Get model info | `hf models info <model_id>` |
| List datasets | `hf datasets ls` |
| Get dataset info | `hf datasets info <dataset_id>` |
| List spaces | `hf spaces ls` |
| Get space info | `hf spaces info <space_id>` |
| List endpoints | `hf endpoints ls` |
| Run GPU job | `hf jobs run --flavor a10g-small <image> <cmd>` |
| Environment info | `hf env` |

## Core Commands

### Authentication

```bash
hf auth login                    # Interactive login
hf auth login --token $HF_TOKEN  # Non-interactive
hf auth whoami                   # Check current user
hf auth list                     # List stored tokens
hf auth switch                   # Switch between tokens
hf auth logout                   # Log out
```

### Download

```bash
hf download <repo_id>                            # Full repo to cache
hf download <repo_id> file.safetensors           # Specific file
hf download <repo_id> --local-dir ./models       # To local directory
hf download <repo_id> --include "*.safetensors"  # Filter by pattern
hf download <repo_id> --repo-type dataset        # Dataset
hf download <repo_id> --revision v1.0            # Specific version
```

### Upload

```bash
hf upload <repo_id> . .                         # Current dir to root
hf upload <repo_id> ./models /weights           # Folder to path
hf upload <repo_id> model.safetensors           # Single file
hf upload <repo_id> . . --repo-type dataset     # Dataset
hf upload <repo_id> . . --create-pr             # Create PR
hf upload <repo_id> . . --commit-message="msg"  # Custom message
```

### Repository Management

```bash
hf repo create <name>                       # Create model repo
hf repo create <name> --repo-type dataset   # Create dataset
hf repo create <name> --private             # Private repo
hf repo create <name> --repo-type space --space_sdk gradio  # Gradio space
hf repo delete <repo_id>                    # Delete repo
hf repo move <from_id> <to_id>              # Move repo to new namespace
hf repo settings <repo_id> --private true   # Update repo settings
hf repo list --repo-type model              # List repos
hf repo branch create <repo_id> release-v1  # Create branch
hf repo branch delete <repo_id> release-v1  # Delete branch
hf repo tag create <repo_id> v1.0           # Create tag
hf repo tag list <repo_id>                  # List tags
hf repo tag delete <repo_id> v1.0           # Delete tag
```

### Delete Files from Repo

```bash
hf repo-files delete <repo_id> folder/  # Delete folder
hf repo-files delete <repo_id> "*.txt"  # Delete with pattern
```

### Cache Management

```bash
hf cache ls                  # List cached repos
hf cache ls --revisions      # Include individual revisions
hf cache rm model/gpt2       # Remove cached repo
hf cache rm <revision_hash>  # Remove cached revision
hf cache prune               # Remove detached revisions
hf cache verify gpt2         # Verify checksums from cache
```

### Browse Hub

```bash
# Models
hf models ls                                        # List top trending models
hf models ls --search "MiniMax" --author MiniMaxAI  # Search models
hf models ls --filter "text-generation" --limit 20  # Filter by task
hf models info MiniMaxAI/MiniMax-M2.1               # Get model info

# Datasets
hf datasets ls                                       # List top trending datasets
hf datasets ls --search "finepdfs" --sort downloads  # Search datasets
hf datasets info HuggingFaceFW/finepdfs              # Get dataset info

# Spaces
hf spaces ls                           # List top trending spaces
hf spaces ls --filter "3d" --limit 10  # Filter by 3D modeling spaces
hf spaces info enzostvs/deepsite       # Get space info
```

### Jobs (Cloud Compute)

```bash
hf jobs run python:3.12 python script.py       # Run on CPU
hf jobs run --flavor a10g-small <image> <cmd>  # Run on GPU
hf jobs run --secrets HF_TOKEN <image> <cmd>   # With HF token
hf jobs ps                                     # List jobs
hf jobs logs <job_id>                          # View logs
hf jobs cancel <job_id>                        # Cancel job
```

### Inference Endpoints

```bash
hf endpoints ls                    # List endpoints
hf endpoints deploy my-endpoint \
  --repo openai/gpt-oss-120b \
  --framework vllm \
  --accelerator gpu \
  --instance-size x4 \
  --instance-type nvidia-a10g \
  --region us-east-1 \
  --vendor aws
hf endpoints describe my-endpoint       # Show endpoint details
hf endpoints pause my-endpoint          # Pause endpoint
hf endpoints resume my-endpoint         # Resume endpoint
hf endpoints scale-to-zero my-endpoint  # Scale to zero
hf endpoints delete my-endpoint --yes   # Delete endpoint
```

**GPU Flavors:** `cpu-basic`, `cpu-upgrade`, `cpu-xl`, `t4-small`, `t4-medium`, `l4x1`, `l4x4`, `l40sx1`, `l40sx4`, `l40sx8`, `a10g-small`, `a10g-large`, `a10g-largex2`, `a10g-largex4`, `a100-large`, `h100`, `h100x8`

## Common Patterns

### Download and Use Model Locally

```bash
# Download to local directory for deployment
hf download meta-llama/Llama-3.2-1B-Instruct --local-dir ./model

# Or use the cache and get the path
MODEL_PATH=$(hf download meta-llama/Llama-3.2-1B-Instruct --quiet)
```

### Publish Model/Dataset

```bash
hf repo create my-username/my-model --private
hf upload my-username/my-model ./output . --commit-message="Initial release"
hf repo tag create my-username/my-model v1.0
```

### Sync Space with Local

```bash
hf upload my-username/my-space . . --repo-type space \
  --exclude="logs/*" --delete="*" --commit-message="Sync"
```

### Check Cache Usage

```bash
hf cache ls             # See all cached repos and sizes
hf cache rm model/gpt2  # Remove a repo from cache
```

## Key Options

- `--repo-type`: `model` (default), `dataset`, `space`
- `--revision`: Branch, tag, or commit hash
- `--token`: Override authentication
- `--quiet`: Output only essential info (paths/URLs)

## References

- **Complete command reference**: See [references/commands.md](references/commands.md)
- **Workflow examples**: See [references/examples.md](references/examples.md)
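## Example: Scripting the Publish Pattern

The "Publish Model/Dataset" pattern above can be wrapped in a small release script. This is a minimal sketch, not part of the CLI itself: the `run` helper, the `DRY_RUN` flag, and the repo name and version are all illustrative placeholders. It assumes `hf` is on `PATH` and that you are logged in before setting `DRY_RUN=0`.

```shell
#!/bin/sh
# Hypothetical release helper combining repo creation, upload, and tagging.
# With DRY_RUN=1 (the default) it only previews the commands it would run.
REPO="my-username/my-model"
VERSION="v1.0"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"   # Preview the command instead of executing it
  else
    "$@"          # Execute the real hf command
  fi
}

run hf repo create "$REPO" --private
run hf upload "$REPO" ./output . --commit-message="Release $VERSION"
run hf repo tag create "$REPO" "$VERSION"
```

The dry-run guard makes the script safe to test without touching the Hub; once the previewed commands look right, rerun it with `DRY_RUN=0`.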