A feedback loop for people building AI skills and MCP servers. You're building a skill, an MCP server, or a custom prompt strategy that's supposed to make an AI coding assistant better at a specific job. But how do you know it actually works? How do you know your latest commit made things better, not worse? Pitlane gives you the answer: define the tasks your skill should help with, set up a baseline, and measure how each change affects the results.
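The core idea, reduced to its simplest form, is a before/after comparison over a fixed task set. A minimal conceptual sketch follows; this is not Pitlane's API, and the names (`pass_rate`, the result lists) are hypothetical stand-ins for whatever your eval harness produces.

```python
# Conceptual sketch of the feedback loop -- not Pitlane's actual API.
# Assume each eval run yields a pass/fail bool per task.

def pass_rate(results):
    """Fraction of eval tasks that passed."""
    return sum(results) / len(results)

baseline = [True, True, False, True]   # assistant without the skill
with_skill = [True, True, True, True]  # assistant with the skill installed

delta = pass_rate(with_skill) - pass_rate(baseline)
print(f"pass rate change: {delta:+.0%}")  # prints "pass rate change: +25%"
```

Pitlane automates this loop: it runs the task set against both configurations and reports whether your latest change moved the number up or down.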
Add this skill

A comprehensive eval design guide with actionable setup, assertion strategy, and clear anti-patterns. Install it with:

npx mdskills install pitlane-ai/testing-with-pitlane

Or add it as an MCP server:

claude mcp add testing-with-pitlane -- npx -y pitlane