Mindcraft Community Edition 🧠⛏️

Maintained by @uukelele-scratch, @Sweaterdog, @riqvip, @MrElmida, and the community.


Crafting minds for Minecraft with LLMs and mineflayer!

FAQ | Discord Support | Blog Post | Paper Website | MineCollab

Note

This fork of Mindcraft is maintained by the community and includes features not present in the official repo.

The open-source platform for crafting intelligent, collaborative agents in Minecraft using Large Language Models.

mindcraft vs. mindcraft-ce

| Feature | mindcraft (Original) | mindcraft-ce (Community Edition) |
| --- | --- | --- |
| Development Status | Inactive | Active |
| Minecraft Version | Up to 1.21.1 | Up to 1.21.4 |
| Node.js Version | v14+ | v18+ (v22 recommended) |
| Default Ollama Model | llama3.1 (generic) | Andy-4 (built for Minecraft) |
| Free API Option | No | Yes (Pollinations) |
| Voice Interaction | Basic text-to-speech (TTS) | Advanced TTS & speech-to-text (STT) |
| Vision Mode | Simple on/off toggle | Modes: off, prompted, always |
| Extensibility | None | Plugin system |
| Dataset Tools | No | Yes, built-in tools for data collection |
| Dependencies | Older | Updated (e.g., Mineflayer 4.29.0) |
| Error Handling | Technical error messages, difficult to troubleshoot | Includes a suggested fix for easy troubleshooting |
| Pathfinding | Basic, standard robotic movement | Upgraded movement: can use doors and fence gates, and swims better |

Caution

Do not connect this bot to public servers with coding enabled. This project allows an LLM to write/execute code on your computer. The code is sandboxed, but still vulnerable to injection attacks. Code writing is disabled by default. You can enable it by setting allow_insecure_coding to true in settings.js. Ye be warned.

Requirements

Install and Run

Note

An experimental Windows-only single-click installer + launcher, with extra features like a GUI editor for changing settings, is in development. Another single-click installer + launcher is also available here; it auto-configures everything for you and uses the optimal Andy-4 model for your setup.

  1. Make sure you have the requirements above.

  2. Download this repository's latest release. Unzip it to your Downloads folder.

Note

We recommend using pollinations.ai as it is the easiest to set up. If you're using it, you can skip step 3 below.

  3. Rename keys.example.json to keys.json and fill in your API keys (you only need one). The desired model is set in andy.json or other profiles. For other models, refer to the table below.

  4. In a terminal/command prompt, run npm install from the installed directory. (Note: if naudiodon fails to build and you don't need STT, you can usually proceed.)

  5. Start a Minecraft world and open it to LAN on localhost port 55916.

  6. Run node main.js from the installed directory.
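For reference, keys.json is a flat JSON object mapping the config variables from the table below to your keys. A minimal sketch (assuming the key-value layout of keys.example.json; values are placeholders, and you only need to fill in the one you use):

```json
{
  "OPENAI_API_KEY": "",
  "GEMINI_API_KEY": "",
  "GROQCLOUD_API_KEY": ""
}
```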

If you encounter issues, check the FAQ or ask for support on Discord. If that fails, you can create an issue.

Model Customization

You can configure project details in settings.js; see the file for all options.

You can configure the agent's name, model, and prompts in their profile like andy.json with the model field. For comprehensive details, see Model Specifications.
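For instance, a minimal profile might look like this (a sketch; the name and model fields are the ones described above, and the model string follows the API table below):

```json
{
  "name": "andy",
  "model": "pollinations/openai-large"
}
```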

| API | Config Variable | Example Model name | Docs |
| --- | --- | --- | --- |
| openai | OPENAI_API_KEY | gpt-4.1-mini | docs |
| google | GEMINI_API_KEY | gemini-2.0-flash | docs |
| vertex | GCLOUD AUTHENTICATION | vertex/gemini-2.0-flash | models docs |
| anthropic | ANTHROPIC_API_KEY | claude-3-5-haiku-20241022 | docs |
| xai | XAI_API_KEY | grok-3-mini | docs |
| deepseek | DEEPSEEK_API_KEY | deepseek-chat | docs |
| ollama (local) | n/a | ollama/sweaterdog/andy-4 | docs |
| qwen | QWEN_API_KEY | qwen-max | Intl. / cn |
| doubao | DOUBAO_API_KEY | doubao-1-5-pro-32k-250115 | cn |
| mistral | MISTRAL_API_KEY | mistral-large-latest | docs |
| replicate | REPLICATE_API_KEY | replicate/meta/meta-llama-3-70b-instruct | docs |
| groq (not grok) | GROQCLOUD_API_KEY | groq/mixtral-8x7b-32768 | docs |
| huggingface | HUGGINGFACE_API_KEY | huggingface/mistralai/Mistral-Nemo-Instruct-2407 | docs |
| novita | NOVITA_API_KEY | novita/deepseek/deepseek-r1 | docs |
| openrouter | OPENROUTER_API_KEY | openrouter/anthropic/claude-sonnet-4 | docs |
| glhf.chat | GHLF_API_KEY | glhf/hf:meta-llama/Llama-3.1-405B-Instruct | docs |
| hyperbolic | HYPERBOLIC_API_KEY | hyperbolic/deepseek-ai/DeepSeek-V3 | docs |
| pollinations | n/a | pollinations/openai-large | docs |
| andy | ANDY_API_KEY (optional) | andy/auto (depends on what models are available) | docs |
| vllm | n/a | vllm/llama3 | n/a |

If you use Ollama, to install the models used by default (generation and embedding), execute the following terminal command: ollama pull sweaterdog/andy-4 && ollama pull nomic-embed-text

Additional info about Andy-4...


Andy-4 is a community-made, open-source model created by Sweaterdog to play Minecraft. Because Andy-4 is open-source, you can download the model and play with it offline, for free.

The Andy-4 collection of models has reasoning and non-reasoning modes; sometimes the model will reason automatically without being prompted. To explicitly enable reasoning, use the andy-4-reasoning.json profile. Some Andy-4 models cannot disable reasoning, no matter what profile is used.

Andy-4 comes in several model sizes. For guidance on which size is best for your hardware, check Sweaterdog's Ollama page.

If you have any issues, join the Mindcraft Discord server and ping @Sweaterdog with your issue, or open an issue on the Andy-4 Hugging Face repo.

Bot Profiles

Bot profiles are json files (such as andy.json) that define:

  1. Which backend LLMs the bot uses for talking, coding, and embedding.
  2. Prompts used to influence the bot's behavior.
  3. Examples that help the bot perform tasks.

Model Specifications

LLM models can be specified simply as "model": "gpt-4o". However, you can use different models for chat, coding, and embeddings. You can pass a string or an object for these fields. A model object must specify an api, and optionally a model, url, and additional params.

"model": {
  "api": "openai",
  "model": "gpt-4.1",
  "url": "https://api.openai.com/v1/",
  "params": {
    "max_tokens": 1000,
    "temperature": 1
  }
},
"code_model": {
  "api": "openai",
  "model": "o4-mini",
  "url": "https://api.openai.com/v1/"
},
"vision_model": {
  "api": "openai",
  "model": "gpt-4.1",
  "url": "https://api.openai.com/v1/"
},
"embedding": {
  "api": "openai",
  "url": "https://api.openai.com/v1/",
  "model": "text-embedding-3-large"
},
"speak_model": {
  "api": "pollinations",
  "url": "https://text.pollinations.ai/openai",
  "model": "openai-audio",
  "voice": "echo"
}

model is used for chat, code_model is used for newAction coding, vision_model is used for image interpretation, and embedding is used to embed text for example selection. If code_model or vision_model is not specified, model will be used by default. Not all APIs support embeddings or vision.

All APIs have default models and urls, so those fields are optional. The optional params field specifies additional parameters for the model; it accepts any key-value pairs supported by the API. params is not supported for embedding models.

Embedding Models

Embedding models are used to embed and efficiently select relevant examples for conversation and coding.

Supported Embedding APIs: openai, google, replicate, huggingface, novita, ollama, andy

If you specify an unsupported embedding API, the bot falls back to a simple word-overlap method. Expect reduced performance; we recommend mixing APIs to ensure embedding support.
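The fallback can be pictured as a set-overlap score. Below is a minimal sketch in JavaScript (not the project's actual implementation; the function name and normalization choice are illustrative):

```javascript
// Sketch of a word-overlap similarity: tokenize both texts into word
// sets, then score by the fraction of shared words. Hypothetical
// helper, not mindcraft-ce's real fallback code.
function wordOverlapScore(a, b) {
  const wordsA = new Set(a.toLowerCase().split(/\W+/).filter(Boolean));
  const wordsB = new Set(b.toLowerCase().split(/\W+/).filter(Boolean));
  if (wordsA.size === 0 || wordsB.size === 0) return 0;
  let shared = 0;
  for (const w of wordsA) if (wordsB.has(w)) shared++;
  // Normalize by the smaller set so short queries are not penalized.
  return shared / Math.min(wordsA.size, wordsB.size);
}
```

Examples would then be ranked by a score like this instead of by embedding distance, which is why example selection quality drops.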

Plugins

mindcraft-ce has support for custom plugins! For instructions, check out the plugin documentation.

Online Servers

To connect to online servers, your bot will need an official Microsoft/Minecraft account. You can use your own personal account, but you will need a second account if you also want to connect and play alongside the bot. To connect, change these lines in settings.js:

"host": "111.222.333.444",
"port": 25565,
"auth": "microsoft",

// rest is same...

Important

The bot's name in the profile .json must exactly match the Minecraft profile name! Otherwise the bot will spam talk to itself. Example: if you are signing in with a Microsoft account with the username "Player01", then you need to set the name in the profile to "Player01".

When using a Microsoft account for mindcraft, it will show a link and a code. Open the link in the browser, sign in with the Microsoft account you wish for the bot to use, and follow the on-screen instructions.
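Concretely, if the bot signs in as "Player01", the profile's name field must match (a sketch; the model value is illustrative):

```json
{
  "name": "Player01",
  "model": "pollinations/openai-large"
}
```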

Migrating PRs from the Original Repo

Warning

These steps only work if you have write access to mindcraft-ce.

  1. Clone the fork with the PR (e.g. mindcraft-fork), if you haven't already.
  2. Add mindcraft-ce as a remote:

```sh
git remote add mindcraft-ce https://github.com/mindcraft-ce/mindcraft-ce.git
```

  3. Push the branch to mindcraft-ce, replacing patch-x with your branch's name:

```sh
git push mindcraft-ce patch-x
```

  4. On GitHub, go to mindcraft-ce, switch to patch-x, and create a PR to the desired branch in mindcraft-ce.

Docker Container

If you intend to allow_insecure_coding, it is a good idea to run the app in a docker container to reduce risks of running unknown code. This is strongly recommended before connecting to remote servers.

```sh
docker run -i -t --rm -v $(pwd):/app -w /app -p 3000-3003:3000-3003 node:latest node main.js
```

or simply

```sh
docker-compose up
```

When running in docker, if you want the bot to join your local minecraft server, you have to use a special host address host.docker.internal to call your localhost from inside your docker container. Put this into your settings.js:

"host": "host.docker.internal", // instead of "localhost", to join your local minecraft from inside the docker container

To connect to an unsupported Minecraft version, you can try to use ViaProxy.

STT in Mindcraft

STT allows you to speak to the model if you have a microphone.

STT can be enabled in settings.js under the section that looks like this:

    "stt_transcription": true, // Change this to "true" to enable STT
    "stt_provider": "groq", // STT provider: "groq" or "pollinations"
    "stt_username": "SYSTEM",
    "stt_agent_name": ""

The speech-to-text engine will then begin listening on the system's default input device.

If for some reason STT does not work, install naudiodon manually by running npm install naudiodon.

STT Providers:

  • Groq: You need a GroqCloud API key, as Groq is used for audio transcription.
  • Pollinations: Free STT service, no API key required. Uses the openai-audio model via the Pollinations API.

To use Groq STT, simply set "stt_provider": "groq" in your settings.js file. This provides an alternative to Pollinations for speech-to-text transcription.

Note

Pollinations STT can be buggy! Groq STT is also free, and is far more stable and accurate than Pollinations.

Dataset collection

Mindcraft can collect data from your play sessions with the bots, which can be used to generate training data for fine-tuning models such as Andy-4. To do this, enable logging in settings.js, then navigate to the logs folder.

Inside the logs folder, after installing the dependencies, you will find a file named generate_usernames.py. You need to run it to convert your collected data into a usable dataset: it generates random names to replace your bot's name and your username, both of which improve training performance later on.

To run it: python generate_usernames.py. Generating the maximum number of usernames would take up multiple terabytes of data; if for some reason you want to do this, run it with the --make_all flag.

Next, set up convert.py to include every username that interacted with the bot, as well as the bot's own username, by adding or changing the usernames in the ORIGINAL_USERNAMES list.
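As a sketch, the edited list in convert.py might look like this (the usernames are examples):

```python
# Every username that interacted with the bot, plus the bot's own.
ORIGINAL_USERNAMES = [
    "Player01",  # your Minecraft username (example)
    "Andy",      # the bot's username (example)
]
```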

After this, you are all set up for conversion! Since you might not want to convert all data at once, rename the .csv file(s) you want to convert to Andy_pre1; for additional files, increment the number (Andy_pre2, Andy_pre3, and so on — this value can be as high as you want).

To convert, run python convert.py. If you get a dependency error, ensure you are in a Python virtual environment rather than a global one.

To set up vision datasets, run convert.py with the --vision flag. This performs the same conversion as above but outputs an image-friendly format. Note that the formatted image data is not yet ready for training; we are still working out how to have the data be used effectively by Unsloth.

Andy API - Distributed AI Compute Pool

The Andy API is a distributed compute pool that allows users to share their AI resources and access models from around the world. By connecting to the Andy API network, you can:

  • Contribute Resources: Share your local AI models (Ollama, LM Studio, etc.) or API quotas (OpenAI, Anthropic, etc.) with the community
  • Access Diverse Models: Use models from other contributors without needing to host them locally
  • Scale Dynamically: Automatically distribute workload across available compute resources

Andy API Local Client

The Andy API Local Client is a modern web-based interface that makes it easy to connect any OpenAI-compatible endpoint to the distributed compute pool:

🔗 Repository: https://github.com/mindcraft-ce/Andy-API

Key Features:

  • Universal Compatibility: Works with Ollama, OpenAI API, LM Studio, vLLM, and any OpenAI-compatible endpoint
  • Web Dashboard: Real-time monitoring, model management, and performance analytics
  • Easy Setup: Simple installation with automatic model discovery
  • Resource Sharing: Contribute your compute power or API quotas to help the community

Quick Start:

```sh
# Clone the Andy API Local Client
git clone https://github.com/mindcraft-ce/Andy-API.git
cd Andy-API

# Install and run
pip install -r requirements.txt
python launch.py

# Open http://localhost:5000 in your browser
```

By running the Andy API Local Client alongside mindcraft-ce, you can contribute to the distributed AI ecosystem while using the best available models for your Minecraft agents!

Tasks

Bot performance can be roughly evaluated with Tasks. Tasks automatically initialize bots with a goal to acquire specific items or construct predefined buildings, and remove the bot once the goal is achieved.

To run tasks, you need python, pip, and optionally conda. You can then install dependencies with pip install -r requirements.txt.

Tasks are defined in json files in the tasks folder, and can be run with: python tasks/run_task_file.py --task_path=tasks/example_tasks.json

For full evaluations, you will need to download and install the task suite. Full instructions.

Specifying Profiles via Command Line

By default, the program will use the profiles specified in settings.js. You can specify one or more agent profiles using the --profiles argument: node main.js --profiles ./profiles/andy.json ./profiles/jill.json
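For reference, the default profile list lives in settings.js; a hypothetical excerpt (the path names are examples):

```js
// settings.js (hypothetical excerpt): profiles loaded when no
// --profiles argument is passed on the command line.
"profiles": [
  "./profiles/andy.json"
],
```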

Patches

Some of the node modules we depend on have bugs. To add a patch, change your local node module files and run npx patch-package [package-name].

Citation:

```bibtex
@article{mindcraft2025,
  title   = {Collaborating Action by Action: A Multi-agent LLM Framework for Embodied Reasoning},
  author  = {White*, Isadora and Nottingham*, Kolby and Maniar, Ayush and Robinson, Max and Lillemark, Hansen and Maheshwari, Mehul and Qin, Lianhui and Ammanabrolu, Prithviraj},
  journal = {arXiv preprint arXiv:2504.17950},
  year    = {2025},
  url     = {https://arxiv.org/abs/2504.17950},
}
```
