Nexa SDK is an on-device inference framework that runs any model on any device, across any backend. It runs on CPUs, GPUs, and NPUs, with backend support for CUDA, Metal, Vulkan, and the Qualcomm NPU. It handles multiple input modalities, including text 📝, image 🖼️, and audio 🎧. The SDK includes an OpenAI-compatible API server with support for JSON schema-based function calling and streaming, and it supports model formats such as GGUF, MLX, and Nexa AI's own .nexa format, enabling efficient quantized inference across diverse platforms.
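Function calling over the OpenAI-compatible server follows the standard OpenAI `tools` request shape. A minimal sketch of such a request body, assuming the standard format — the model name, tool name, and schema below are purely illustrative:

```shell
# Sketch: a JSON schema-based function-calling request body in the
# standard OpenAI tools format. The get_weather tool is illustrative,
# not a built-in; substitute your own function name and schema.
cat > tools_request.json <<'EOF'
{
  "model": "NexaAI/Qwen3-4B-4bit-MLX",
  "messages": [
    {"role": "user", "content": "What is the weather in Paris?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }
  ]
}
EOF
echo "wrote tools_request.json"
```

Send this body to the server's chat completions endpoint once `nexa serve` (see below) is running.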
- Qualcomm NPU support for GGUF models. OmniNeural-4B is the first multimodal AI model built natively for NPUs — handling text, images, and audio in one model.
- Check the model and demos in our Hugging Face repo
- Check our OmniNeural-4B technical blog
- Download our arm64 installer with Qualcomm NPU support and try it!
- ASR & TTS model support in MLX format.
- New `> /mic` mode to transcribe live speech directly in your terminal.
curl -fsSL https://raw.githubusercontent.com/NexaAI/nexa-sdk/main/release/linux/install.sh -o install.sh && chmod +x install.sh && ./install.sh
You can run any compatible GGUF, MLX, or .nexa model from 🤗 Hugging Face by using the `<full repo name>`.
Tip
You need to download the arm64 build with Qualcomm NPU support and make sure your laptop has a Snapdragon® X Elite chip.
🖼️ Run and chat with our multimodal model, OmniNeural-4B:
nexa infer omni-neural
nexa infer NexaAI/OmniNeural-4B
Tip
GGUF runs on macOS, Linux, and Windows.
📝 Run and chat with LLMs, e.g. Qwen3:
nexa infer ggml-org/Qwen3-1.7B-GGUF
🖼️ Run and chat with Multimodal models, e.g. Qwen2.5-Omni:
nexa infer NexaAI/Qwen2.5-Omni-3B-GGUF
Tip
MLX is macOS-only (Apple Silicon). Many MLX models in the Hugging Face mlx-community organization have quality issues and may not run reliably. We recommend starting with models from our curated NexaAI Collection for best results. For example:
📝 Run and chat with LLMs, e.g. Qwen3:
nexa infer NexaAI/Qwen3-4B-4bit-MLX
🖼️ Run and chat with Multimodal models, e.g. Gemma3n:
nexa infer NexaAI/gemma-3n-E4B-it-4bit-MLX
| Essential Command | What it does |
|---|---|
| `nexa -h` | Show all CLI commands |
| `nexa pull <repo>` | Interactive download & cache of a model |
| `nexa infer <repo>` | Local inference |
| `nexa list` | Show all cached models with sizes |
| `nexa remove <repo>` / `nexa clean` | Delete one / all cached models |
| `nexa serve --host 127.0.0.1:8080` | Launch OpenAI-compatible REST server |
| `nexa run <repo>` | Chat with a model via an existing server |
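Once `nexa serve` is running, the server can be queried like any OpenAI-compatible endpoint. A minimal sketch, assuming the standard chat completions route and the default host from the table above (the model name is a placeholder — use any model you have pulled; set `"stream": true` for streaming responses):

```shell
# Sketch: query a local `nexa serve` instance via the standard
# OpenAI-compatible chat completions route. The model name below is a
# placeholder; substitute any model you have cached with `nexa pull`.
cat > request.json <<'EOF'
{
  "model": "NexaAI/Qwen3-4B-4bit-MLX",
  "messages": [
    {"role": "user", "content": "Give me a one-line summary of on-device inference."}
  ],
  "stream": false
}
EOF
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @request.json || echo "server not running"
```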
👉 To interact with multimodal models, you can drag photos or audio clips directly into the CLI — you can even drop multiple images at once!
See CLI Reference for full commands.
We would like to thank the following projects: