A local API proxy for Qwen AI with dialog-context persistence and session management via a REST API
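Several entries above center on the same pattern: a proxy or UI that keeps per-session conversation context so each request can be replayed with its full history. None of those repos' code is shown here; the following is a minimal, hypothetical sketch of that pattern, assuming a simple in-memory store (real projects typically persist to SQLite or a database):

```python
import uuid

class SessionStore:
    """Minimal in-memory store mapping session IDs to message history."""

    def __init__(self):
        self._sessions = {}

    def create(self):
        # Issue a fresh session ID with an empty history.
        sid = str(uuid.uuid4())
        self._sessions[sid] = []
        return sid

    def append(self, sid, role, content):
        # Record one turn of the conversation under its session.
        self._sessions[sid].append({"role": role, "content": content})

    def history(self, sid):
        # Return a copy so callers cannot mutate stored state.
        return list(self._sessions.get(sid, []))

store = SessionStore()
sid = store.create()
store.append(sid, "user", "Hello")
store.append(sid, "assistant", "Hi! How can I help?")
```

A proxy built this way would pass `store.history(sid)` along with each upstream model call so the backend sees the whole dialog.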
🦙 chat-o-llama: A lightweight, modern web interface for AI conversations with support for both Ollama and llama.cpp backends. Features persistent conversation management, real-time backend switching, intelligent context compression, and a clean responsive UI.
Chatbot User Interaction and Storage
Multi-agent chatroom powered by OpenAI — users configure personas and bring them to life.
Retrieval-Augmented Generation (RAG) Chatbot Using Langchain and FAISS.
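The RAG entries above share one core step: embed the query, rank stored documents by similarity, and prepend the top hits to the prompt. As an illustrative sketch only (a toy bag-of-words "embedding" stands in for the learned embeddings and FAISS/ChromaDB indexes these projects actually use; all document strings are invented):

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; real systems use learned embeddings.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "FAISS builds vector indexes for similarity search",
    "JWT tokens authenticate API requests",
    "ChromaDB stores embeddings for retrieval",
]

def retrieve(query, k=2):
    # Rank documents by cosine similarity to the query, keep top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

context = retrieve("vector similarity search")
prompt = "Answer using this context:\n" + "\n".join(context)
```

Swapping `embed`/`retrieve` for a real embedding model plus a vector index gives the retrieval half of a RAG chatbot; the prompt is then sent to the LLM.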
The FSTT Chatbot Project is a full-stack MEAN application for the Faculty of Sciences and Technologies of Tangier. It uses an advanced Large Language Model (Ollama Llama3) and a Retrieval-Augmented Generation (RAG) approach, with JWT authentication for secure access and a vector database (ChromaDB) for accurate responses.
Export and archive Claude Code conversation history with intelligent deduplication
A direct-messaging platform with basic social-networking features.
My first AI chatbot using Gemini
Lightweight web UI for llama.cpp with dynamic model switching, chat history & markdown support. No GPU required. Perfect for local AI development.
Browse and view your Claude conversation transcripts with a modern web interface
Chat-O-Llama is a user-friendly web interface for managing conversations with Ollama, featuring persistent chat history. Easily set up and start your chat sessions with just a few commands. 🐙💻
Conversation compression method for multi-turn RAG
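Conversation compression keeps a multi-turn history inside the model's context window by folding older turns into a short summary while preserving recent turns verbatim. This is a hypothetical sketch of that idea, not the method from the repo above; naive truncation stands in for the LLM-generated summary a real system would produce:

```python
def compress_history(messages, keep_recent=4, max_summary_chars=200):
    """Keep the newest turns verbatim; fold older ones into a summary stub."""
    if len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Real systems summarize `older` with an LLM; truncation stands in here.
    summary = " ".join(m["content"] for m in older)[:max_summary_chars]
    stub = {"role": "system", "content": "Summary of earlier turns: " + summary}
    return [stub] + recent
```

The compressed list is what gets sent on the next model call, so token cost grows with `keep_recent` rather than with total conversation length.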