LLM & AI Agent Skills
AI agent skills for working with large language models. Prompt engineering, API integration, and AI workflow patterns.
208 listings
Conversation Memory
Persistent memory systems for LLM conversations, including short-term, long-term, and entity-based memory. Use when: conversation memory, remember, memory persistence, long-term memory, chat history.
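As a sketch of how a skill like this can combine a rolling short-term buffer with a long-lived entity store (the class, method names, and buffer limit below are illustrative, not taken from any specific listing):

```python
from collections import deque

class ConversationMemory:
    """Minimal sketch: rolling short-term buffer plus a simple entity store."""

    def __init__(self, short_term_limit=6):
        self.short_term = deque(maxlen=short_term_limit)  # keeps only the last N turns
        self.entities = {}  # long-lived facts keyed by entity name

    def add_turn(self, role, text):
        self.short_term.append((role, text))

    def remember_entity(self, name, fact):
        self.entities.setdefault(name, []).append(fact)

    def context(self):
        """Assemble a context block to prepend to the next prompt."""
        facts = [f"{k}: {'; '.join(v)}" for k, v in self.entities.items()]
        turns = [f"{role}: {text}" for role, text in self.short_term]
        return "\n".join(facts + turns)

mem = ConversationMemory(short_term_limit=2)
mem.remember_entity("user", "prefers Python")
mem.add_turn("user", "hi")
mem.add_turn("assistant", "hello")
mem.add_turn("user", "write a sort")  # oldest turn falls out of the buffer
print(mem.context())
```

The entity store survives buffer eviction, which is the essential difference between short-term and long-term memory in these systems.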
HackMD MCP Server
MCP Server: A Model Context Protocol (MCP) server that interfaces with the HackMD API, allowing LLM clients to access and interact with HackMD notes, teams, user profiles, and history data.
- Get user profile information
- Create, read, update, and delete notes
- Manage team notes and collaborate with team members
- Access reading history
- List and manage teams
- Dual transport support: both HTTP and STDIO transports
Data Structure Protocol
Video Editor MCP server
MCP Server: See a demo here: https://www.youtube.com/watch?v=KG6TMLD8GmA. Upload, edit, search, and generate videos from everyone's favorite LLM and Video Jungle. You'll need to sign up for an account at Video Jungle to use this tool and add your API key. The server implements an interface to upload, generate, and edit videos, with a custom vj:// URI scheme for accessing individual videos and projects.
MCP Chess Server
MCP Server: This MCP lets you play chess against any LLM. To use the chess server, add its configuration to your MCP config; the documentation includes examples for playing a game and for finding a position in a PGN for game analysis. The server provides the following tools:
- getboardvisualization(): provides the current state of the chessboard as an image; the board orientation automatically flips based on the user's assigned color.
- getturn(): indicates whose turn it is.
LLM App Patterns
Production-ready patterns for building LLM applications. Covers RAG pipelines, agent architectures, prompt IDEs, and LLMOps monitoring. Use when designing AI applications, implementing RAG, building agents, or setting up LLM observability.
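The core of a RAG pipeline is retrieve-then-generate: rank documents against the query, then prepend the best matches to the prompt. A toy sketch of that flow, using bag-of-words cosine similarity in place of a real embedding model (all names and documents here are illustrative):

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words vector. Real pipelines use an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Invoices are paid within 30 days of receipt.",
    "The office is closed on public holidays.",
]
top = retrieve("when are invoices paid", docs)
# The retrieved passage becomes grounding context for the generation step.
prompt = f"Answer using this context:\n{top[0]}\n\nQ: When are invoices paid?"
```

Production systems swap in dense embeddings, a vector store, and reranking, but the retrieve-then-generate shape stays the same.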
Prompt Caching
Caching strategies for LLM prompts, including Anthropic prompt caching, response caching, and CAG (Cache-Augmented Generation). Use when: prompt caching, cache prompt, response cache, cag, cache augmented.
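Anthropic's prompt caching works server-side on prompt prefixes; response caching, by contrast, can be done entirely client-side. A minimal sketch of the latter, memoizing model outputs by a hash of model name and prompt (the class and the stand-in `llm_fn` are illustrative):

```python
import hashlib

class ResponseCache:
    """Sketch of response caching: memoize LLM outputs keyed by a prompt hash."""

    def __init__(self):
        self.store = {}
        self.calls = 0  # counts actual model invocations

    def _key(self, model, prompt):
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def complete(self, model, prompt, llm_fn):
        key = self._key(model, prompt)
        if key not in self.store:  # cache miss: call the model exactly once
            self.calls += 1
            self.store[key] = llm_fn(prompt)
        return self.store[key]

cache = ResponseCache()
fake_llm = lambda p: p.upper()  # stand-in for a real LLM call
a = cache.complete("demo-model", "summarize this", fake_llm)
b = cache.complete("demo-model", "summarize this", fake_llm)  # served from cache
```

Exact-match caching like this only pays off for repeated prompts; semantic caching relaxes the key to embedding similarity at the cost of possible false hits.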
LLM Evaluation
Plugin: LLM evaluation and testing patterns including prompt testing, hallucination detection, benchmark creation, and quality metrics. Use when testing LLM applications, validating prompt quality, implementing systematic evaluation, or measuring LLM performance.
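The simplest systematic evaluation is a fixed case set with assertion-style checks and a pass rate. A hedged sketch of that loop (the case schema and the `fake_model` stand-in are illustrative, not this plugin's API):

```python
def run_eval(cases, generate):
    """Tiny eval loop: run each prompt, check expectations, report a pass rate."""
    results = []
    for case in cases:
        output = generate(case["prompt"])
        passed = all(s in output for s in case["expect_substrings"])
        results.append({"prompt": case["prompt"], "passed": passed})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return results, pass_rate

cases = [
    {"prompt": "What is 2+2?", "expect_substrings": ["4"]},
    {"prompt": "Capital of France?", "expect_substrings": ["Paris"]},
]
fake_model = lambda p: "4" if "2+2" in p else "Paris is the capital."
results, pass_rate = run_eval(cases, fake_model)
```

Substring checks are the crudest grader; real harnesses add exact-match, regex, and LLM-as-judge scoring, but tracking a pass rate over a fixed case set is the common backbone.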
Address Feedback
Plugin: GenAI Agent Framework, the Pydantic way. Documentation: ai.pydantic.dev. FastAPI revolutionized web development by offering an innovative and ergonomic design, built on the foundation of Pydantic Validation and modern Python features like type hints. Yet despite virtually every Python agent framework and LLM library using Pydantic Validation, existing options fell short when the team began to use LLMs in Pydantic Logfire.
Piston MCP Server
MCP Server: piston-mcp is an MCP server that allows LLMs to connect to and execute code using Piston. You can try out piston-mcp locally without cloning it; you'll need to install uv, plus an MCP client to connect with, such as Claude Desktop. Update the MCP client's configuration to connect to piston-mcp.
Louis030195/toggl MCP
MCP Server: Dead-simple MCP (Model Context Protocol) server for Toggl time tracking. Control your Toggl timer directly from Claude, ChatGPT, or any LLM that supports MCP.
- ⏱️ Start/stop timers
- 📊 View current timer
- 📈 Get today's time entries
- 🗂️ List projects
- 🗑️ Delete time entries
Add it to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json).
LLM Application Dev Prompt Optimize
You are an expert prompt engineer specializing in crafting effective prompts for LLMs through advanced techniques including constitutional AI, chain-of-thought reasoning, and model-specific optimization.
Langchain Architecture
Design LLM applications using the LangChain framework with agents, memory, and tool integration patterns. Use when building LangChain applications, implementing AI agents, or creating complex LLM workflows.
MCP Server Chart
MCP Server: A Model Context Protocol server for generating charts using AntV. We can use this MCP server for chart generation and data analysis. This is a TypeScript-based MCP server that provides chart generation capabilities, allowing you to create various types of charts through MCP tools. You can also use it in Dify.
- ✨ Features
- 🎨 Skill Usage
- 🚰 Run with SSE or Streamable transport
- 🎮 CLI Options
Clarity Gate
Plugin: Pre-ingestion verification for epistemic quality in RAG systems. Ensures documents are properly qualified before entering knowledge bases. Produces CGD (Clarity-Gated Documents) and validates SOT (Source of Truth) files.
CockroachDB MCP Server
MCP Server: The CockroachDB MCP Server is a natural language interface designed for LLMs and agentic applications to manage, monitor, and query data in CockroachDB. It integrates seamlessly with MCP (Model Context Protocol) clients, such as Claude Desktop or Cursor, enabling AI-driven workflows to interact directly with your database.
- Cluster Monitoring
- Database Operations
- Table Management
- Query Engine
iMessage MCP
MCP Server: A Deno monorepo containing packages for iMessage access on macOS: @wyattjoh/imessage, a core library for read-only iMessage database access, and @wyattjoh/imessage-mcp, a Model Context Protocol (MCP) server for LLM integration.
- Search messages by text content, contact, or date range
- Get recent messages
- List all chats/conversations
- Get all contacts/handles
- Retrieve messages from a specific chat
Browserbase MCP Server
MCP Server: The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need. This server provides cloud browser automation capabilities using Browserbase.
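Client-side setup for MCP servers like the ones listed here generally follows one pattern: register the server's launch command in the client's MCP configuration. A hedged sketch of a Claude Desktop-style config (the server name, package, and environment variable below are placeholders, not any specific server's actual values):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["@example/mcp-server"],
      "env": { "EXAMPLE_API_KEY": "<your key>" }
    }
  }
}
```

Each server's README documents its real command, arguments, and required credentials; the structure above is what stays constant.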
Tilt MCP Server
MCP Server: A Model Context Protocol (MCP) server that integrates with Tilt to provide programmatic access to Tilt resources and logs through LLM applications. The key insight is that you no longer need to tell your LLM how to build and deploy your code; instead, you can simply ask it what to build and deploy. Tilt is a powerful tool for working with Docker/Kubernetes workloads.
MKP - Model Kontext Protocol Server for Kubernetes
MKP is a Model Context Protocol (MCP) server for Kubernetes that allows LLM-powered applications to interact with Kubernetes clusters. It provides tools for listing and applying Kubernetes resources through MCP.
- List resources supported by the Kubernetes API server
- List clustered resources
- List namespaced resources
- Get resources and their subresources (including status and scale)
Open Data Model Context Protocol
Connect Open Data to LLMs in minutes! We enable two things. Open Data Access: access many public datasets right from your LLM application (starting with Claude, more to come). Publishing: get community help and a distribution network to distribute your Open Data, and get everyone to use it! How do we do that? Access: set up our MCP servers in your LLM application in two clicks via our CLI tool.
This Project Has Moved!
MCP Server: Migrate to MCP Platform • 💬 Discord Community • Legacy Docs. Zero-configuration deployment of production-ready MCP servers with Docker containers, comprehensive CLI tools, and intelligent caching. Focus on AI integration, not infrastructure setup. That's it! Your MCP server is running at http://localhost:8080. Perfect for AI developers, data scientists, and DevOps teams building with MCP.
Chroma MCP Server
MCP Server: Chroma, the open-source embedding database. The fastest way to build Python or JavaScript LLM apps with memory! The Model Context Protocol (MCP) is an open protocol designed for effortless integration between LLM applications and external data sources or tools, offering a standardized framework to seamlessly provide LLMs with the context they require. This server provides data retrieval capabilities.
Next Type LLM
Rules: Holistic understanding of requirements & stack