## `runProcess` renaming/redesign

Recently I renamed the tool to `runProcess` to better reflect that you can run more than just shell commands with it. There are two explicit modes now:

1. `mode=executable`, where you pass `argv`, with `argv[0]` naming the `executable` file and the rest of the array containing its arguments.
2. `mode=shell`, where you pass `command_line` (just like typing into `bash`/`fish`/`pwsh`/etc.), which will use your system's default shell.

I hate APIs that make it ambiguous whether you're executing something via a shell or not. I hate it being a toggle b/c there's way more to running a shell command vs `exec` than just flipping a switch. So I made that explicit in the new tool's parameters.

If you want your model to use specific shell(s) on a system, I would list them in your system prompt. Or maybe in your tool instructions, though models tend to pay better attention to examples in a system prompt.

I've used this new design with `gpt-oss-120b` extensively and it went off without a hitch: no issues switching, as the model doesn't care about the names, nor even the redesigned `mode` parameter; it all seems to "make sense" to gpt-oss.

Let me know if you encounter problems!

## Tools

Tools are for LLMs to request. Claude Sonnet 3.5 intelligently uses `run_process`. And initial testing shows promising results with [Groq Desktop with MCP](https://github.com/groq/groq-desktop-beta) and `llama4` models.

Currently, just one command to rule them all!

- `run_process` - run a command, e.g. 
`hostname` or `ls -al` or `echo "hello world"`, etc.
  - Returns `STDOUT` and `STDERR` as text
  - Optional `stdin` parameter means your LLM can
    - pass scripts over `STDIN` to commands like `fish`, `bash`, `zsh`, `python`
    - create files with `cat >> foo/bar.txt` from the text in `stdin`

> [!WARNING]
> Be careful what you ask this server to run!
> In the Claude Desktop app, use `Approve Once` (not `Allow for This Chat`) so you can review each command, and use `Deny` if you don't trust the command.
> Permissions are dictated by the user that runs the server.
> DO NOT run with `sudo`.

## Video walkthrough

<a href="https://youtu.be/0-VPu1Pc18w"><img src="https://img.youtube.com/vi/0-VPu1Pc18w/maxresdefault.jpg" width="480" alt="YouTube Thumbnail"></a>

## Prompts

Prompts are for users to include in chat history, e.g. via `Zed`'s slash commands (in its AI Chat panel).

- `run_process` - generate a prompt message with the command output

- FYI, this was mostly a learning exercise... I see this as a user-requested tool call. 
That's a fancy way to say it's a template for running a command and passing the output to the model!

## Development

Install dependencies:
```bash
npm install
```

Build the server:
```bash
npm run build
```

For development with auto-rebuild:
```bash
npm run watch
```

## Installation

To use with Claude Desktop, add the server config:

- On macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- On Windows: `%APPDATA%/Claude/claude_desktop_config.json`

Groq Desktop (beta, macOS) uses `~/Library/Application Support/groq-desktop-app/settings.json`

### Use the published npm package

Published to npm as [mcp-server-commands](https://www.npmjs.com/package/mcp-server-commands) using this [workflow](https://github.com/g0t4/mcp-server-commands/actions)

```json
{
  "mcpServers": {
    "mcp-server-commands": {
      "command": "npx",
      "args": ["mcp-server-commands"]
    }
  }
}
```

### Use a local build (repo checkout)

Make sure to run `npm run build`

```json
{
  "mcpServers": {
    "mcp-server-commands": {
      // works b/c of shebang in index.js
      "command": "/path/to/mcp-server-commands/build/index.js"
    }
  }
}
```

## Local Models

- Most models are trained such that they don't think they can run commands for you.
  - Sometimes they use tools w/o hesitation... other times, I have to coax them.
  - Use a system prompt or prompt template to instruct that they should follow user requests. 
Including to use `run_process` without double-checking.
- Ollama is a great way to run a model locally (w/ Open-WebUI)

```sh
# NOTE: make sure to review variants and sizes, so the model fits in your VRAM to perform well!

# Probably the best so far is [OpenHands LM](https://www.all-hands.dev/blog/introducing-openhands-lm-32b----a-strong-open-coding-agent-model)
ollama pull https://huggingface.co/lmstudio-community/openhands-lm-32b-v0.1-GGUF

# https://ollama.com/library/devstral
ollama pull devstral

# Qwen2.5-Coder has tool use but you have to coax it
ollama pull qwen2.5-coder
```

### HTTP / OpenAPI

The server is implemented with the `STDIO` transport.
For `HTTP`, use [`mcpo`](https://github.com/open-webui/mcpo) for an `OpenAPI`-compatible web server interface.
This works with [`Open-WebUI`](https://github.com/open-webui/open-webui).

```bash
uvx mcpo --port 3010 --api-key "supersecret" -- npx mcp-server-commands

# uvx runs mcpo => mcpo runs npx => npx runs mcp-server-commands
# then, mcpo bridges STDIO <=> HTTP
```

> [!WARNING]
> I only briefly used `mcpo` with `open-webui`; make sure to vet it for security concerns.

### Logging

Claude Desktop app writes logs to `~/Library/Logs/Claude/mcp-server-mcp-server-commands.log`

By default, only important messages are logged (i.e. errors).
If you want to see more messages, add `--verbose` to the `args` when configuring the server.

By the way, logs are written to `STDERR` because that is what Claude Desktop routes to the log files.
In the future, I expect well-formatted log messages to be written over the `STDIO` transport to the MCP client (note: not the Claude Desktop app).

### Debugging

Since MCP servers communicate over stdio, debugging can be challenging. 
We recommend using the [MCP Inspector](https://github.com/modelcontextprotocol/inspector), which is available as a package script:

```bash
npm run inspector
```

The Inspector will provide a URL to access debugging tools in your browser.