## runProcess renaming/redesign

Recently I renamed the tool to runProcess to better reflect that you can run more than just shell commands with it. There are two explicit modes now:

1. `mode=executable` - pass `argv`, where `argv[0]` is the executable file and the rest of the array contains its arguments.
2. `mode=shell` - pass `command_line` (just like typing into bash/fish/pwsh/etc), which runs via your system's default shell.

I hate APIs that make it ambiguous whether you're executing something via a shell or not. And I hate it being a toggle, because there's far more to running a shell command vs an exec than just flipping a switch. So I made that explicit in the new tool's parameters.
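As a sketch, tool-call arguments in each mode might look like the following (illustrative values only; the parameter names `mode`, `argv`, and `command_line` come from the description above):

```json
// mode=executable: argv[0] is the program, the rest are its arguments
{ "mode": "executable", "argv": ["ls", "-al", "/tmp"] }

// mode=shell: command_line is interpreted by your system's default shell
{ "mode": "shell", "command_line": "echo \"hello world\" | wc -w" }
```

Note the pipe in the second example: shell features like pipes, globs, and redirects are exactly why the two modes are kept explicit rather than hidden behind a toggle.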
If you want your model to use specific shell(s) on a system, I would list them in your system prompt. Or maybe in your tool instructions, though models tend to pay better attention to examples in a system prompt.
I've used this new design extensively with gpt-oss-120b and it went off without a hitch: the model doesn't care about the new names, nor even the redesigned mode parameter; it all seems to "make sense" to it.
Let me know if you encounter problems!
Tools are for LLMs to request. Claude Sonnet 3.5 intelligently uses run_process. And, initial testing shows promising results with Groq Desktop with MCP and llama4 models.
Currently, just one command to rule them all!
- `run_process` - run a command, i.e. `hostname` or `ls -al` or `echo "hello world"`, etc.
  - Returns STDOUT and STDERR as text
  - The `stdin` parameter means your LLM can pass STDIN to commands like `fish`, `bash`, `zsh`, `python`, or `cat >> foo/bar.txt` from the text in `stdin`

Warning: Be careful what you ask this server to run! In the Claude Desktop app, use `Approve Once` (not `Allow for This Chat`) so you can review each command, and use `Deny` if you don't trust the command. Permissions are dictated by the user that runs the server. DO NOT run with `sudo`.
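The `stdin` parameter above can be sketched as a tool call like this (illustrative arguments only, assuming the shell mode described earlier):

```json
// append a line to a file by feeding the stdin text into `cat`
{
  "mode": "shell",
  "command_line": "cat >> foo/bar.txt",
  "stdin": "hello from the model\n"
}
```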
Prompts are for users to include in chat history, i.e. via Zed's slash commands (in its AI Chat panel)

- `run_process` - generate a prompt message with the command output

Install dependencies:
```shell
npm install
```
Build the server:
```shell
npm run build
```
For development with auto-rebuild:
```shell
npm run watch
```
To use with Claude Desktop, add the server config:

- On macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
- Groq Desktop (beta, macOS) uses `~/Library/Application Support/groq-desktop-app/settings.json`
Published to npm as mcp-server-commands using this workflow
```json
{
  "mcpServers": {
    "mcp-server-commands": {
      "command": "npx",
      "args": ["mcp-server-commands"]
    }
  }
}
```
Make sure to run `npm run build` first:
```json
{
  "mcpServers": {
    "mcp-server-commands": {
      // works b/c of shebang in index.js
      "command": "/path/to/mcp-server-commands/build/index.js"
    }
  }
}
```
Some local models to try with Ollama:

```shell
# NOTE: make sure to review variants and sizes, so the model fits in your VRAM to perform well!

# Probably the best so far is OpenHands LM: https://www.all-hands.dev/blog/introducing-openhands-lm-32b----a-strong-open-coding-agent-model
ollama pull https://huggingface.co/lmstudio-community/openhands-lm-32b-v0.1-GGUF

# https://ollama.com/library/devstral
ollama pull devstral

# Qwen2.5-Coder has tool use but you have to coax it
ollama pull qwen2.5-coder
```
The server is implemented with the STDIO transport.

For HTTP, use mcpo for an OpenAPI-compatible web server interface. This works with Open-WebUI:

```shell
uvx mcpo --port 3010 --api-key "supersecret" -- npx mcp-server-commands
# uvx runs mcpo => mcpo runs npx => npx runs mcp-server-commands
# then, mcpo bridges STDIO <=> HTTP
```
Warning: I've only briefly used `mcpo` with `open-webui`; make sure to vet it for security concerns.
The Claude Desktop app writes logs to `~/Library/Logs/Claude/mcp-server-mcp-server-commands.log`.

By default, only important messages are logged (i.e. errors). If you want to see more messages, add `--verbose` to the `args` when configuring the server.

By the way, logs are written to STDERR because that is what Claude Desktop routes to the log files. In the future, I expect well-formatted log messages to be written over the STDIO transport to the MCP client (note: not the Claude Desktop app).
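Concretely, the verbose flag goes in the server's `args` array; a sketch, assuming the npx-based config shown earlier:

```json
{
  "mcpServers": {
    "mcp-server-commands": {
      "command": "npx",
      "args": ["mcp-server-commands", "--verbose"]
    }
  }
}
```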
Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:
```shell
npm run inspector
```
The Inspector will provide a URL to access debugging tools in your browser.