Install: `npx mdskills install huggingface/huggingface-cli`

Comprehensive CLI reference with clear command tables, examples, and common patterns for Hub operations.
---
name: hugging-face-cli
description: Execute Hugging Face Hub operations using the `hf` CLI. Use when the user needs to download models/datasets/spaces, upload files to Hub repositories, create repos, manage local cache, or run compute jobs on HF infrastructure. Covers authentication, file transfers, repository creation, cache operations, and cloud compute.
---

# Hugging Face CLI

The `hf` CLI provides direct terminal access to the Hugging Face Hub for downloading, uploading, and managing repositories, cache, and compute resources.

## Quick Command Reference

| Task | Command |
|------|---------|
| Login | `hf auth login` |
| Download model | `hf download <repo_id>` |
| Download to folder | `hf download <repo_id> --local-dir ./path` |
| Upload folder | `hf upload <repo_id> . .` |
| Create repo | `hf repo create <name>` |
| Create tag | `hf repo tag create <repo_id> <tag>` |
| Delete files | `hf repo-files delete <repo_id> <files>` |
| List cache | `hf cache ls` |
| Remove from cache | `hf cache rm <repo_or_revision>` |
| List models | `hf models ls` |
| Get model info | `hf models info <model_id>` |
| List datasets | `hf datasets ls` |
| Get dataset info | `hf datasets info <dataset_id>` |
| List spaces | `hf spaces ls` |
| Get space info | `hf spaces info <space_id>` |
| List endpoints | `hf endpoints ls` |
| Run GPU job | `hf jobs run --flavor a10g-small <image> <cmd>` |
| Environment info | `hf env` |

## Core Commands

### Authentication
```bash
hf auth login                    # Interactive login
hf auth login --token $HF_TOKEN  # Non-interactive
hf auth whoami                   # Check current user
hf auth list                     # List stored tokens
hf auth switch                   # Switch between tokens
hf auth logout                   # Log out
```

### Download
```bash
hf download <repo_id>                            # Full repo to cache
hf download <repo_id> file.safetensors           # Specific file
hf download <repo_id> --local-dir ./models       # To local directory
hf download <repo_id> --include "*.safetensors"  # Filter by pattern
hf download <repo_id> --repo-type dataset        # Dataset
hf download <repo_id> --revision v1.0            # Specific version
```

### Upload
```bash
hf upload <repo_id> . .                          # Current dir to root
hf upload <repo_id> ./models /weights            # Folder to path
hf upload <repo_id> model.safetensors            # Single file
hf upload <repo_id> . . --repo-type dataset      # Dataset
hf upload <repo_id> . . --create-pr              # Create PR
hf upload <repo_id> . . --commit-message="msg"   # Custom message
```

### Repository Management
```bash
hf repo create <name>                                       # Create model repo
hf repo create <name> --repo-type dataset                   # Create dataset
hf repo create <name> --private                             # Private repo
hf repo create <name> --repo-type space --space_sdk gradio  # Gradio space
hf repo delete <repo_id>                                    # Delete repo
hf repo move <from_id> <to_id>                              # Move repo to new namespace
hf repo settings <repo_id> --private true                   # Update repo settings
hf repo list --repo-type model                              # List repos
hf repo branch create <repo_id> release-v1                  # Create branch
hf repo branch delete <repo_id> release-v1                  # Delete branch
hf repo tag create <repo_id> v1.0                           # Create tag
hf repo tag list <repo_id>                                  # List tags
hf repo tag delete <repo_id> v1.0                           # Delete tag
```

### Delete Files from Repo
```bash
hf repo-files delete <repo_id> folder/   # Delete folder
hf repo-files delete <repo_id> "*.txt"   # Delete with pattern
```

### Cache Management
```bash
hf cache ls                  # List cached repos
hf cache ls --revisions      # Include individual revisions
hf cache rm model/gpt2       # Remove cached repo
hf cache rm <revision_hash>  # Remove cached revision
hf cache prune               # Remove detached revisions
hf cache verify gpt2         # Verify checksums from cache
```

### Browse Hub
```bash
# Models
hf models ls                                         # List top trending models
hf models ls --search "MiniMax" --author MiniMaxAI   # Search models
hf models ls --filter "text-generation" --limit 20   # Filter by task
hf models info MiniMaxAI/MiniMax-M2.1                # Get model info

# Datasets
hf datasets ls                                       # List top trending datasets
hf datasets ls --search "finepdfs" --sort downloads  # Search datasets
hf datasets info HuggingFaceFW/finepdfs              # Get dataset info

# Spaces
hf spaces ls                                         # List top trending spaces
hf spaces ls --filter "3d" --limit 10                # Filter by 3D modeling spaces
hf spaces info enzostvs/deepsite                     # Get space info
```

### Jobs (Cloud Compute)
```bash
hf jobs run python:3.12 python script.py       # Run on CPU
hf jobs run --flavor a10g-small <image> <cmd>  # Run on GPU
hf jobs run --secrets HF_TOKEN <image> <cmd>   # With HF token
hf jobs ps                                     # List jobs
hf jobs logs <job_id>                          # View logs
hf jobs cancel <job_id>                        # Cancel job
```

### Inference Endpoints
```bash
hf endpoints ls                          # List endpoints
hf endpoints deploy my-endpoint \
  --repo openai/gpt-oss-120b \
  --framework vllm \
  --accelerator gpu \
  --instance-size x4 \
  --instance-type nvidia-a10g \
  --region us-east-1 \
  --vendor aws
hf endpoints describe my-endpoint        # Show endpoint details
hf endpoints pause my-endpoint           # Pause endpoint
hf endpoints resume my-endpoint          # Resume endpoint
hf endpoints scale-to-zero my-endpoint   # Scale to zero
hf endpoints delete my-endpoint --yes    # Delete endpoint
```

**Available flavors:** `cpu-basic`, `cpu-upgrade`, `cpu-xl`, `t4-small`, `t4-medium`, `l4x1`, `l4x4`, `l40sx1`, `l40sx4`, `l40sx8`, `a10g-small`, `a10g-large`, `a10g-largex2`, `a10g-largex4`, `a100-large`, `h100`, `h100x8`

## Common Patterns

### Download and Use Model Locally
```bash
# Download to local directory for deployment
hf download meta-llama/Llama-3.2-1B-Instruct --local-dir ./model

# Or use cache and get path
MODEL_PATH=$(hf download meta-llama/Llama-3.2-1B-Instruct --quiet)
```

### Publish Model/Dataset
```bash
hf repo create my-username/my-model --private
hf upload my-username/my-model ./output . --commit-message="Initial release"
hf repo tag create my-username/my-model v1.0
```

### Sync Space with Local
```bash
hf upload my-username/my-space . . --repo-type space \
  --exclude="logs/*" --delete="*" --commit-message="Sync"
```

### Check Cache Usage
```bash
hf cache ls              # See all cached repos and sizes
hf cache rm model/gpt2   # Remove a repo from cache
```

## Key Options

- `--repo-type`: `model` (default), `dataset`, `space`
- `--revision`: Branch, tag, or commit hash
- `--token`: Override authentication
- `--quiet`: Output only essential info (paths/URLs)

## References

- **Complete command reference**: See [references/commands.md](references/commands.md)
- **Workflow examples**: See [references/examples.md](references/examples.md)
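When inspecting the cache directory by hand (the same store that `hf cache ls` reports on), it helps to know that huggingface_hub names each cached repo after its ID using the documented `<type>s--<org>--<name>` layout under `~/.cache/huggingface/hub`. The helper function below is hypothetical (not part of the `hf` CLI); it is only a sketch of that naming scheme:

```shell
# Hypothetical helper (not an hf command): map a repo type and ID to the
# directory name huggingface_hub uses inside ~/.cache/huggingface/hub,
# following the documented "<type>s--<org>--<name>" cache layout.
hf_cache_dir_name() {
  local repo_type="$1"   # model | dataset | space
  local repo_id="$2"     # e.g. gpt2 or HuggingFaceFW/finepdfs
  # Slashes in the repo ID become "--" in the directory name.
  printf '%ss--%s\n' "$repo_type" "${repo_id//\//--}"
}

hf_cache_dir_name model gpt2                      # -> models--gpt2
hf_cache_dir_name dataset HuggingFaceFW/finepdfs  # -> datasets--HuggingFaceFW--finepdfs
```

This is handy when correlating `hf cache rm` targets with what `du -sh ~/.cache/huggingface/hub/*` reports.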
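As an alternative to `hf download` for a single file, the Hub serves model files directly at `https://huggingface.co/<repo_id>/resolve/<revision>/<path>` (dataset repos get an extra `datasets/` prefix). A hypothetical helper, assuming a model repo, shows the scheme:

```shell
# Hypothetical helper (not an hf command): build the direct "resolve" URL
# for a file in a MODEL repo, per the Hub's documented URL scheme
#   https://huggingface.co/<repo_id>/resolve/<revision>/<path>
# Dataset repos would need a "datasets/" prefix before the repo ID.
hf_model_file_url() {
  local repo_id="$1" revision="$2" path="$3"
  printf 'https://huggingface.co/%s/resolve/%s/%s\n' "$repo_id" "$revision" "$path"
}

hf_model_file_url gpt2 main config.json
# -> https://huggingface.co/gpt2/resolve/main/config.json
```

Useful when feeding a URL to `curl`/`wget` in environments where the `hf` CLI is unavailable; note that gated or private repos still require an `Authorization: Bearer $HF_TOKEN` header.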