Releases: av/harbor
v0.3.19 - Docling
Docling
Docling simplifies document processing, parsing diverse formats — including advanced PDF understanding — and providing seamless integrations with the gen AI ecosystem.
```bash
harbor up docling
```
Misc
- `speaches` - now working again with a newer version of the service + automatic model downloads
- `harbor config` - extra options
- `harbor pull` - fixed a regression after v0.3.16 that prevented pulling custom services
- `harbor open` - proper handling when the system `open` fails
- `harbor ls | ps | ...` - correct handling of `--no-defaults`
- `nbs` - launch example
- bump golang >= 1.24.0 by @genevera in #185
- fix doc generation order
New Contributors
Full Changelog: v0.3.18...v0.3.19
v0.3.18
`boost`
- new `nbs` module
- `llm.emit_artifact` convenience tweaks

Misc
- `harbor pull` - now pulls Ollama models too: `harbor pull gemma3`, `harbor pull webui`
- `harbor ollama <command>` - now launches Ollama if it's not running yet
- routines migrated to TypeScript
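As a quick sketch of the new behaviour (model and service names are taken from the notes above; `list` is a standard Ollama subcommand, and the model must exist in the Ollama registry for the pull to succeed):

```bash
# Pull an Ollama model through the Ollama service
harbor pull gemma3

# Pull Docker images for a service, as before
harbor pull webui

# Run an Ollama CLI command; Ollama is started first if it isn't running
harbor ollama list
```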
Full Changelog: v0.3.16...v0.3.18
v0.3.17
v0.3.16
`boost`
- full docs revamp: env vars, modules references
- `HARBOR_BOOST_MODULES` supports the "all" option
- `discussurl` uses Jina Reader instead of raw content

`parllama`
- moving away from `pkgx`, fixing config location

`vllm`
- support for `vllm.image` config
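A sketch of how the new `vllm.image` option can be set through harbor's generic config commands (the image tag below is only an example):

```bash
# Point the vllm service at a specific upstream image, then verify
harbor config set vllm.image vllm/vllm-openai:latest
harbor config get vllm.image
```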
Full Changelog: v0.3.15...v0.3.16
v0.3.15
`boost`
- `HARBOR_BOOST_PUBLIC_URL` env var lets boost know its public URL, so that artifacts can connect back to it
- fix tool registry for `ws://` connections
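A minimal sketch of the new variable as a config fragment (the host/port below are illustrative; use an address the browser can actually reach):

```bash
# .env — lets boost embed a reachable callback URL into generated artifacts
HARBOR_BOOST_PUBLIC_URL=http://localhost:34131
```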
Full Changelog: v0.3.14...v0.3.15
v0.3.14
`boost`
- `autotemp` mod
- fixes to tool calls and streaming
- mini tool registry, finished local tool call loop
- API-level tests
Full Changelog: v0.3.13...v0.3.14
v0.3.13
v0.3.12
Airweave
Airweave is a one-stop shop for integrating external knowledge for use by agents. The current integration is preliminary, awaiting better configurability for embeddings/LLMs upstream.
Misc
`boost`
- default temp is `0.35`
- `polyglot` - adjusted prompts for smaller LLMs
- `chat.advance` - fixed a regression breaking multiple modules relying on this method (`rcn` and others)
Full Changelog: v0.3.11...v0.3.12
v0.3.11
This is a maintenance release with a few small bugfixes.
`ldr`
- fixing env vars for Ollama/SearXNG for the newer version
- switching to the official prebuilt image

`litellm`
- config pointers
Full Changelog: v0.3.10...v0.3.11
v0.3.10 - Modular MAX
Modular MAX
MAX is a platform for inference from Modular (creators of the Mojo language). You will need an Nvidia GPU to work with this service.
```bash
# Pull the image, start the service, tail logs
# ⚠️ Service will download the configured model automatically
harbor up modularmax --tail
```
Misc
`docs`
- dev pipeline improvements, enabling normalisation of local MD links in the future (to fix dead local links in the Wiki/repo)
- normalising formatting for the service metadata
- modernised `searxng` wiki entry

`harbor help` / `harbor how`
- CLI help updates
Full Changelog: v0.3.9...v0.3.10