CC-Meta lets you iterate on your Claude Code prompts without leaving the terminal. Instead of switching to the web client to test and refine prompts, you get instant AI feedback on clarity, specificity, and completeness right in your current workflow. This keeps you in context and speeds up the process of crafting effective prompts.
Under the hood, CC-Meta is an MCP (Model Context Protocol) server that evaluates prompts using AI to provide detailed feedback on clarity, completeness, and effectiveness.
- **Multi-model support** - Use any supported OpenAI or Anthropic model
- **Flexible API keys** - Provide your own API key for each evaluation
- **Two tools available** (see the sketch after this list):
  - `ping` - Test if the server is connected and working
  - `evaluate` - Get AI-powered analysis of your prompts
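For orientation, here is a minimal sketch of how these two tools might be registered, assuming the official TypeScript MCP SDK (`@modelcontextprotocol/sdk`); the handler bodies and the `runEvaluation` helper are illustrative, not the project's actual source:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "prompt-evaluator", version: "1.0.0" });

// ping: no arguments; just proves the server is reachable.
server.tool("ping", async () => ({
  content: [{ type: "text", text: "pong" }],
}));

// evaluate: takes the prompt text and returns the model's feedback.
server.tool("evaluate", { prompt: z.string() }, async ({ prompt }) => ({
  content: [{ type: "text", text: await runEvaluation(prompt) }],
}));

// Stub standing in for the real model call.
async function runEvaluation(prompt: string): Promise<string> {
  return `(evaluation of: ${prompt})`;
}

await server.connect(new StdioServerTransport());
```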
To set up CC-Meta:

1. Install dependencies:

   ```bash
   npm install # or yarn install
   ```

2. Build the project:

   ```bash
   npm run build
   ```

3. Configure your model and API key by editing the `.mcp.json` file:

   ```json
   {
     "mcpServers": {
       "prompt-evaluator": {
         "command": "node",
         "args": ["./prompt-evaluator-mcp/start.js"],
         "env": {
           "PROMPT_EVAL_MODEL": "sonnet-4",
           "PROMPT_EVAL_API_KEY": "your-api-key-here"
         }
       }
     }
   }
   ```

   Set `PROMPT_EVAL_MODEL` to `sonnet-4`, `opus-4`, or `o3`.
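At startup the server can pick these variables up from `process.env`; the snippet below is a hedged sketch of that pattern (the real `start.js` may differ):

```typescript
// Sketch of reading the .mcp.json env block at startup; the default value
// and error message are illustrative, not the project's actual behavior.
const model = process.env.PROMPT_EVAL_MODEL ?? "sonnet-4";
const apiKey = process.env.PROMPT_EVAL_API_KEY;

if (!apiKey) {
  // Fail fast so a misconfigured .mcp.json surfaces immediately.
  throw new Error("PROMPT_EVAL_API_KEY is not set");
}
```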
Once configured, you have multiple ways to evaluate prompts.

Use the `/meta` slash command in Claude Code:

```
/meta Your prompt here without quotes
```

Or call the MCP tools directly:

```
mcp_prompt-evaluator_ping()  # Test connection
mcp_prompt-evaluator_evaluate("Your prompt to evaluate")
```
Supported models:

- **OpenAI:** `o3` (`o3-2025-04-16`)
- **Anthropic:** `opus-4` (`claude-opus-4-20250514`), `sonnet-4` (`claude-sonnet-4-20250514`)
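One plausible way to resolve these short aliases to full model IDs and providers is a small lookup table; this is an assumption about the implementation, not the project's actual code:

```typescript
type Provider = "openai" | "anthropic";

// Alias table mirroring the list above (illustrative structure).
const MODELS: Record<string, { id: string; provider: Provider }> = {
  "o3":       { id: "o3-2025-04-16",            provider: "openai" },
  "opus-4":   { id: "claude-opus-4-20250514",   provider: "anthropic" },
  "sonnet-4": { id: "claude-sonnet-4-20250514", provider: "anthropic" },
};

// Resolve the PROMPT_EVAL_MODEL alias, rejecting unknown names.
function resolveModel(alias: string): { id: string; provider: Provider } {
  const entry = MODELS[alias];
  if (!entry) throw new Error(`Unknown model alias: ${alias}`);
  return entry;
}
```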
The AI evaluation provides:
- Score from 0-10
- Specific strengths of your prompt
- Areas for improvement
- Suggested rewrites when needed
- Analysis of:
  - Clarity of intent
  - Specificity of requirements
  - Context provided
  - Actionability
  - Edge cases considered
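If you want to consume this feedback programmatically, one hypothetical way to type it is the interface below; the server's actual output may well be plain text, so treat this purely as a sketch of the fields listed above:

```typescript
// Hypothetical structure mirroring the evaluation fields above.
interface EvaluationResult {
  score: number;             // 0-10
  strengths: string[];       // what the prompt already does well
  improvements: string[];    // concrete areas to fix
  suggestedRewrite?: string; // present only when a rewrite is warranted
  analysis: {
    clarityOfIntent: string;
    specificity: string;
    context: string;
    actionability: string;
    edgeCases: string;
  };
}
```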
The evaluation prompt is stored in `src/prompt.ts` and can be easily customized:
- Edit the prompt template to change evaluation criteria
- Modify the scoring rubric and weights
- Adjust the output format
- Add domain-specific evaluation rules
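As an illustration, a customized template might look like the following; the rubric weights, wording, and `{{PROMPT}}` placeholder are hypothetical, not the file's real contents:

```typescript
// src/prompt.ts (hypothetical contents)
export const EVALUATION_PROMPT = `
You are a prompt-quality reviewer. Score the prompt from 0 to 10, weighting:
- Clarity of intent (30%)
- Specificity of requirements (25%)
- Context provided (20%)
- Actionability (15%)
- Edge cases considered (10%)

List strengths, areas for improvement, and suggest a rewrite if the
score is below 7.

Prompt to evaluate:
{{PROMPT}}
`;
```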
After making changes, rebuild with `npm run build`.