This repository contains experiments and example code for using LangChain with Llamafile to run local LLMs from Python.
- Python 3 project with Pipenv for easy dependency management
- All required libraries for LangChain and Llamafile are specified in the included `Pipfile`
- Example code for interacting with a local LLM via LangChain in the `src` directory
- Ready-to-use VSCode workspace for convenient development
- Easily switch between different local LLMs downloaded from HuggingFace, e.g. `Mozilla/Llama-3.2-1B-Instruct-llamafile`
- Python 3.12 (or compatible version)
- pipenv (recommended, but you can use any Python environment manager)
- Clone the repository:

  ```bash
  git clone https://github.com/brakmic/langchain-experiments.git
  cd langchain-experiments
  ```
- Install dependencies:

  ```bash
  pipenv install
  ```

  Or, if you prefer another environment manager, use the `Pipfile` as a reference.
- Download a Llamafile model:
  - Visit HuggingFace or another LLM provider.
  - Download the `.llamafile` model and run it locally (see the model's README for instructions).
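  Once the model process is running, you can verify that its HTTP server responds. A minimal sketch in Python (the URL below is llamafile's default server address; adjust the port if you started the server differently):

  ```python
  import urllib.request

  LLAMAFILE_URL = "http://localhost:8080"  # llamafile's default server address

  try:
      # The server serves its web UI at the root URL, so a plain GET suffices.
      with urllib.request.urlopen(LLAMAFILE_URL, timeout=5) as response:
          print(f"Llamafile server reachable (HTTP {response.status})")
  except OSError as exc:  # urllib.error.URLError is a subclass of OSError
      print(f"Llamafile server not reachable: {exc}")
  ```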
- Set the Llamafile API URL (optional):
  - By default, the code expects the Llamafile server at `http://localhost:8080`.
  - You can override this by setting the `LLAMAFILE_URL` environment variable.
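  A minimal sketch of how the override can be honored in Python, using LangChain's `Llamafile` integration from `langchain_community` (the fallback value matches llamafile's default address):

  ```python
  import os

  from langchain_community.llms.llamafile import Llamafile

  # Prefer the LLAMAFILE_URL environment variable, falling back to the default.
  base_url = os.environ.get("LLAMAFILE_URL", "http://localhost:8080")
  llm = Llamafile(base_url=base_url)
  ```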
- Source code is in the `src` directory. For example, to run a simple prompt:

  ```bash
  python src/llamafile.py
  ```
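  The script itself can be as small as the following sketch (the repository's actual `src/llamafile.py` may differ; it assumes a Llamafile server is already running locally):

  ```python
  """Minimal LangChain + Llamafile example."""
  import os

  from langchain_community.llms.llamafile import Llamafile

  # Honor the optional LLAMAFILE_URL override described above.
  llm = Llamafile(base_url=os.environ.get("LLAMAFILE_URL", "http://localhost:8080"))

  # Send a simple prompt to the local model and print the completion.
  print(llm.invoke("Explain in one sentence what a llamafile is."))
  ```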
- You can open the project in VSCode using the included workspace file:

  ```bash
  code langchain.code-workspace
  ```
- The `.venv` and `local/` directories are excluded via `.gitignore`.
- You can use any Python 3 environment manager if you prefer not to use Pipenv.
- Example models and prompts are provided, but you can adapt them for your own experiments.
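For instance, a custom prompt can be wired to the local model with LangChain's `PromptTemplate` (a sketch; the prompt text and topic are placeholders):

```python
import os

from langchain_community.llms.llamafile import Llamafile
from langchain_core.prompts import PromptTemplate

llm = Llamafile(base_url=os.environ.get("LLAMAFILE_URL", "http://localhost:8080"))

# A reusable prompt with a single placeholder.
prompt = PromptTemplate.from_template(
    "Summarize the following topic in two sentences: {topic}"
)

# Compose prompt and model with LCEL and run the chain.
chain = prompt | llm
print(chain.invoke({"topic": "running local LLMs with llamafile"}))
```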