Chinese Documentation | Contributing | Documentation
Great news for the developer community! In our commitment to democratizing AI agent technology and fostering a vibrant ecosystem of innovation, we're thrilled to announce that Kode has transitioned from AGPLv3 to the Apache 2.0 license.
- ✅ Complete Freedom: Use Kode in any project - personal, commercial, or enterprise
- ✅ Build Without Barriers: Create proprietary solutions without open-sourcing requirements
- ✅ Simple Attribution: Just maintain copyright notices and license info
- ✅ Join a Movement: Be part of accelerating the world's transition to AI-powered development
This change reflects our belief that the future of software development is collaborative, open, and augmented by AI. By removing licensing barriers, we're empowering developers worldwide to build the next generation of AI-assisted tools and workflows. Let's build the future together! 🚀
2025-08-29: We've added Windows support! Windows users can now run Kode through Git Bash, other Unix-like environments, or WSL (Windows Subsystem for Linux).
Kode proudly supports the AGENTS.md standard protocol initiated by OpenAI - a simple, open format for guiding programming agents that's used by 20k+ open source projects.
- ✅ AGENTS.md - Native support for the OpenAI-initiated standard format
- ✅ CLAUDE.md - Full backward compatibility with Claude Code configurations
- ✅ Subagent System - Advanced agent delegation and task orchestration
- ✅ Cross-platform - Works with 20+ AI models and providers
Use `# your documentation request` to generate and maintain your AGENTS.md file automatically, while maintaining full compatibility with existing Claude Code workflows.
Kode is a powerful AI assistant that lives in your terminal. It can understand your codebase, edit files, run commands, and handle entire workflows for you.
⚠️ Security Notice: Kode runs in YOLO mode by default (equivalent to Claude's `--dangerously-skip-permissions` flag), bypassing all permission checks for maximum productivity. YOLO mode is recommended only in trusted, secure environments and on non-critical projects. If you're working with important files or using models of questionable capability, we strongly recommend running `kode --safe` to enable permission checks and manual approval for all operations.

📊 Model Performance: For optimal performance, we recommend newer, more capable models designed for autonomous task completion. Avoid older Q&A-focused models like GPT-4o or Gemini 2.5 Pro, which are optimized for answering questions rather than sustained independent task execution. Choose models specifically trained for agentic workflows and extended reasoning.
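In practice, the two modes look like this (both flags are described above):

```bash
# Default launch: YOLO mode, no permission prompts
kode

# Recommended for important codebases: approve each operation manually
kode --safe
```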
- 🤖 AI-Powered Assistance - Uses advanced AI models to understand and respond to your requests
- 🔄 Multi-Model Collaboration - Flexibly switch and combine multiple AI models to leverage their unique strengths
- 🦜 Expert Model Consultation - Use `@ask-model-name` to consult specific AI models for specialized analysis
- 👤 Intelligent Agent System - Use `@run-agent-name` to delegate tasks to specialized subagents
- 📝 Code Editing - Directly edit files with intelligent suggestions and improvements
- 🔍 Codebase Understanding - Analyzes your project structure and code relationships
- 🚀 Command Execution - Run shell commands and see results in real-time
- 🛠️ Workflow Automation - Handle complex development tasks with simple prompts
Kode's completion system provides fast, intelligent coding assistance:
- Hyphen-Aware Matching - Type `dao` to match `run-agent-dao-qi-harmony-designer`
- Abbreviation Support - `dq` matches `dao-qi`, `nde` matches `node`
- Numeric Suffix Handling - `py3` intelligently matches `python3`
- Multi-Algorithm Fusion - Combines 7+ matching algorithms for the best results
- No @ Required - Type `gp5` directly to match `@ask-gpt-5`
- Auto-Prefix Addition - Tab/Enter automatically adds `@` for agents and models
- Mixed Completion - Seamlessly switch between commands, files, agents, and models
- Smart Prioritization - Results ranked by relevance and usage frequency
- 500+ Common Commands - Curated database of frequently used Unix/Linux commands
- System Intersection - Only shows commands that actually exist on your system (see the sketch after this list)
- Priority Scoring - Common commands appear first (git, npm, docker, etc.)
- Real-time Loading - Dynamic command discovery from system PATH
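To picture what "system intersection" means, here is a rough conceptual sketch in plain bash — not Kode's actual implementation, and `common-commands.txt` is a hypothetical stand-in for the curated command database:

```bash
# Intersect a curated command list with what actually exists on this machine.
# compgen -c lists every command resolvable in the current shell (PATH
# executables, builtins, functions); comm -12 keeps lines present in both.
comm -12 <(sort -u common-commands.txt) <(compgen -c | sort -u)
```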
- 🎨 Interactive UI - Beautiful terminal interface with syntax highlighting
- 🔌 Tool System - Extensible architecture with specialized tools for different tasks
- 💾 Context Management - Smart context handling to maintain conversation continuity
- 📋 AGENTS.md Integration - Use `# documentation requests` to auto-generate and maintain project documentation
npm install -g @shareai-lab/kode
After installation, you can use any of these commands:
- `kode` - Primary command
- `kwa` - Kode With Agent (alternative)
- `kd` - Ultra-short alias
Start an interactive session:
kode
# or
kwa
# or
kd
Get a quick response:
kode -p "explain this function" main.js
# or
kwa -p "explain this function" main.js
Kode supports a powerful @ mention system for intelligent completions:
# Consult specific AI models for expert opinions
@ask-claude-sonnet-4 How should I optimize this React component for performance?
@ask-gpt-5 What are the security implications of this authentication method?
@ask-o1-preview Analyze the complexity of this algorithm
# Delegate tasks to specialized subagents
@run-agent-simplicity-auditor Review this code for over-engineering
@run-agent-architect Design a microservices architecture for this system
@run-agent-test-writer Create comprehensive tests for these modules
# Reference files and directories with auto-completion
@src/components/Button.tsx
@docs/api-reference.md
@.env.example
The @ mention system provides intelligent completions as you type, showing available models, agents, and files.
Use the `#` prefix to generate and maintain your AGENTS.md documentation:
# Generate setup instructions
# How do I set up the development environment?
# Create testing documentation
# What are the testing procedures for this project?
# Document deployment process
# Explain the deployment pipeline and requirements
This mode automatically formats responses as structured documentation and appends them to your AGENTS.md file.
# Clone the repository
git clone https://github.com/shareAI-lab/Kode.git
cd Kode
# Build the image locally
docker build --no-cache -t kode .
# Run in your project directory
cd your-project
docker run -it --rm \
-v $(pwd):/workspace \
-v ~/.kode:/root/.kode \
-v ~/.kode.json:/root/.kode.json \
-w /workspace \
kode
The Docker setup includes:
- Volume Mounts:
  - `$(pwd):/workspace` - Mounts your current project directory
  - `~/.kode:/root/.kode` - Preserves your Kode configuration directory between runs
  - `~/.kode.json:/root/.kode.json` - Preserves your Kode global configuration file between runs
- Working Directory: Set to `/workspace` inside the container
- Interactive Mode: Uses the `-it` flags for interactive terminal access
- Cleanup: The `--rm` flag removes the container after exit
Note: Kode uses both the `~/.kode` directory for additional data (like memory files) and the `~/.kode.json` file for global configuration.
Build the image once with the commands above; subsequent runs reuse the cached image for fast startup.
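If you use the containerized setup often, you can wrap the long invocation in a shell alias — a convenience sketch using only the flags shown above:

```bash
# Hypothetical convenience alias; adjust paths to your setup.
alias kode-docker='docker run -it --rm \
  -v "$(pwd)":/workspace \
  -v ~/.kode:/root/.kode \
  -v ~/.kode.json:/root/.kode.json \
  -w /workspace \
  kode'
```

Because the body is single-quoted, `$(pwd)` is evaluated when the alias runs, so it always mounts whatever directory you call it from.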
You can set up models through the onboarding flow or with the `/model` command. If the models you want aren't on the list, you can add them manually in `/config`. As long as you have an OpenAI-compatible endpoint, it should work.
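For illustration, a manually added profile for a self-hosted OpenAI-compatible endpoint might look like the sketch below. Field names other than `provider`, `model`, and `apiKey` (which appear in the configuration example later in this README) are hypothetical, so treat the `/config` panel as the authoritative interface:

```json
{
  "modelProfiles": {
    "my-local-llm": {
      "provider": "openai",
      "model": "my-model-name",
      "apiKey": "sk-placeholder",
      "baseURL": "http://localhost:8000/v1" // hypothetical field name
    }
  }
}
```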
- `/help` - Show available commands
- `/model` - Change AI model settings
- `/config` - Open the configuration panel
- `/cost` - Show token usage and costs
- `/clear` - Clear conversation history
- `/init` - Initialize project context
Unlike the official Claude Code, which supports only a single model, Kode implements true multi-model collaboration, letting you fully leverage the unique strengths of different AI models.
We designed a unified `ModelManager` system that supports:
- Model Profiles: Each model has an independent configuration file containing API endpoints, authentication, context window size, cost parameters, etc.
- Model Pointers: Users can configure default models for different purposes via the `/model` command:
  - `main`: Default model for the main Agent
  - `task`: Default model for SubAgents
  - `reasoning`: Reserved for future ThinkTool usage
  - `quick`: Fast model for simple NLP tasks (security identification, title generation, etc.)
- Dynamic Model Switching: Support runtime model switching without restarting sessions, maintaining context continuity
Our specially designed `TaskTool` (Architect tool) implements:
- Subagent Mechanism: Can launch multiple sub-agents to process tasks in parallel
- Model Parameter Passing: Users can specify which model SubAgents should use in their requests
- Default Model Configuration: SubAgents use the model configured by the `task` pointer by default
We specially designed the `AskExpertModel` tool:
- Expert Model Invocation: Allows temporarily calling specific expert models to solve difficult problems during conversations
- Model Isolation Execution: Expert model responses are processed independently without affecting the main conversation flow
- Knowledge Integration: Integrates expert model insights into the current task
- Tab Key Quick Switch: Press Tab in the input box to quickly switch the model for the current conversation
- `/model` Command: Use the `/model` command to configure and manage multiple model profiles and set default models for different purposes
- User Control: Users can specify particular models for task processing at any time
Architecture Design Phase
- Use the o3 or GPT-5 model to explore system architecture and formulate clear, well-defined technical solutions
- These models excel at abstract thinking and system design
Solution Refinement Phase
- Use a Gemini model to explore production-environment design details in depth
- Leverage its strong practical engineering experience and balanced reasoning capabilities
Code Implementation Phase
- Use Qwen Coder, Kimi k2, GLM-4.5, or Claude Sonnet 4 for hands-on code writing
- These models perform strongly at code generation, file editing, and engineering implementation
- Support parallel processing of multiple coding tasks through subagents
Problem Solving
- When encountering complex problems, consult expert models such as o3, Claude Opus 4.1, or Grok 4
- Obtain deep technical insights and innovative solutions
# Example 1: Architecture Design
"Use o3 model to help me design a high-concurrency message queue system architecture"
# Example 2: Multi-Model Collaboration
"First use GPT-5 model to analyze the root cause of this performance issue, then use Claude Sonnet 4 model to write optimization code"
# Example 3: Parallel Task Processing
"Use Qwen Coder model as subagent to refactor these three modules simultaneously"
# Example 4: Expert Consultation
"This memory leak issue is tricky, ask Claude Opus 4.1 model separately for solutions"
# Example 5: Code Review
"Have Kimi k2 model review the code quality of this PR"
# Example 6: Complex Reasoning
"Use Grok 4 model to help me derive the time complexity of this algorithm"
# Example 7: Solution Design
"Have GLM-4.5 model design a microservice decomposition plan"
// Example of multi-model configuration support
{
"modelProfiles": {
"o3": { "provider": "openai", "model": "o3", "apiKey": "..." },
"claude4": { "provider": "anthropic", "model": "claude-sonnet-4", "apiKey": "..." },
"qwen": { "provider": "alibaba", "model": "qwen-coder", "apiKey": "..." }
},
"modelPointers": {
"main": "claude4", // Main conversation model
"task": "qwen", // Task execution model
"reasoning": "o3", // Reasoning model
"quick": "glm-4.5" // Quick response model
}
}
- Usage Statistics: Use the `/cost` command to view token usage and costs for each model
- Multi-Model Cost Comparison: Track usage costs of different models in real-time
- History Records: Save cost data for each session
- Context Inheritance: Maintain conversation continuity when switching models
- Context Window Adaptation: Automatically adjust based on different models' context window sizes
- Session State Preservation: Ensure information consistency during multi-model collaboration
- Maximized Efficiency: Each task is handled by the most suitable model
- Cost Optimization: Use lightweight models for simple tasks, powerful models for complex tasks
- Parallel Processing: Multiple models can work on different subtasks simultaneously
- Flexible Switching: Switch models based on task requirements without restarting sessions
- Leveraging Strengths: Combine advantages of different models for optimal overall results
| Feature | Kode | Official Claude Code |
|---|---|---|
| Supported Models | Unlimited; any model can be configured | Single Claude model only |
| Model Switching | ✅ Tab key quick switch | ❌ Requires session restart |
| Parallel Processing | ✅ Multiple SubAgents work in parallel | ❌ Single-threaded processing |
| Cost Tracking | ✅ Per-model cost statistics | ❌ Single-model cost only |
| Task Model Configuration | ✅ Different default models per purpose | ❌ Same model for all tasks |
| Expert Consultation | ✅ AskExpertModel tool | ❌ Not supported |
This multi-model collaboration capability makes Kode a true AI Development Workbench, not just a single AI assistant.
Kode is built with modern tools and requires Bun for development.
# macOS/Linux
curl -fsSL https://bun.sh/install | bash
# Windows
powershell -c "irm bun.sh/install.ps1 | iex"
# Clone the repository
git clone https://github.com/shareAI-lab/kode.git
cd kode
# Install dependencies
bun install
# Run in development mode
bun run dev
# Build the project
bun run build
# Run tests
bun test
# Test the CLI
./cli.js --help
We welcome contributions! Please see our Contributing Guide for details.
Apache 2.0 License - see LICENSE for details.
- Some code from @dnakov's anonkode
- Some UI patterns learned from gemini-cli
- Some system design ideas learned from Claude Code