# llm-mlx-llama


Using MLX on macOS to run Llama 2. Highly experimental.

## Installation

Install this plugin in the same environment as LLM.

```bash
llm install https://github.com/simonw/llm-mlx-llama/archive/refs/heads/main.zip
```

## Usage

Download `Llama-2-7b-chat.npz` and `tokenizer.model` from the mlx-llama/Llama-2-7b-chat-mlx repository on Hugging Face.

Pass paths to those files as options when you run a prompt:

```bash
llm -m mlx-llama \
  'five great reasons to get a pet pelican:' \
  -o model Llama-2-7b-chat.npz \
  -o tokenizer tokenizer.model
```

Chat mode and continuing a conversation are not yet supported.

## Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

```bash
cd llm-mlx-llama
python3 -m venv venv
source venv/bin/activate
```

Now install the dependencies and test dependencies:

```bash
llm install -e '.[test]'
```

To run the tests:

```bash
pytest
```
