llama-cpp
Here are 7 public repositories matching this topic...
Runpod-LLM provides ready-to-use container scripts for running large language models (LLMs) easily on RunPod.
Updated May 20, 2025 - Shell
🧠 A comprehensive toolkit for benchmarking, optimizing, and deploying local Large Language Models. Includes performance testing tools, optimized configurations for CPU/GPU/hybrid setups, and detailed guides to maximize LLM performance on your hardware.
Updated Mar 27, 2025 - Shell
Repo to download, save, and run quantised LLM models using Llama.cpp and benchmark the results (private use)
Updated Feb 28, 2024 - Shell
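The quantize-and-benchmark workflow that repo describes can be sketched with llama.cpp's own tools. This is a minimal sketch, not that repo's actual script: the model filenames and the `./build/bin/` path are assumptions (they depend on how and where you built llama.cpp).

```shell
# Hypothetical filenames; assumes llama.cpp has been built locally (e.g. via CMake).
MODEL_F16=./models/model-f16.gguf
MODEL_Q4=./models/model-Q4_K_M.gguf

# Quantise a full-precision GGUF down to 4-bit (Q4_K_M is a common choice):
# ./build/bin/llama-quantize "$MODEL_F16" "$MODEL_Q4" Q4_K_M

# Benchmark prompt processing and generation speed on the quantised model:
# ./build/bin/llama-bench -m "$MODEL_Q4"
echo "quantise $MODEL_F16 -> $MODEL_Q4, then benchmark"
```

The quantize/bench commands are left commented because they require a built llama.cpp tree and a downloaded GGUF model.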
Debian Assistant CLI (private)
Updated Aug 7, 2025 - Shell
Lightweight web UI for llama.cpp with dynamic model switching, chat history, and markdown support. No GPU required. Well suited to local AI development.
Updated Jun 23, 2025 - Shell
A simple plugin for Geany to interact with OpenAI-compatible LLMs (llama.cpp's llama-server)
Updated May 17, 2025 - Shell
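Clients like that plugin talk to llama-server through its OpenAI-compatible HTTP API. A minimal sketch of such a request, assuming a server already running on localhost port 8080 (the port and prompt are placeholders):

```shell
# JSON payload in the OpenAI chat-completions format (hypothetical prompt):
PAYLOAD='{"messages":[{"role":"user","content":"Hello"}]}'

# With llama-server running (e.g. `llama-server -m model.gguf --port 8080`),
# query its OpenAI-compatible endpoint:
# curl -s http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
echo "$PAYLOAD"
```

Because the endpoint follows the OpenAI schema, off-the-shelf OpenAI client libraries can also be pointed at it by overriding the base URL.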