Explore, Experience, and Evaluate the Future of On-Device Generative AI with Google AI Edge.
The Google AI Edge Gallery is an experimental app that puts the power of cutting-edge Generative AI models directly into your hands, running entirely on your Android (available now) and iOS (coming soon) devices. Dive into a world of creative and practical AI use cases, all running locally, without needing an internet connection once the model is loaded. Experiment with different models, chat, ask questions with images and audio clips, explore prompts, and more!
Install the app today from Google Play
For users without Google Play access, install the APK from the latest release
Important
You must uninstall all previous versions of the app before installing this one. Past versions are no longer supported and will stop working.
- 📱 Run Locally, Fully Offline: Experience the magic of GenAI without an internet connection. All processing happens directly on your device.
- 🤖 Choose Your Model: Easily switch between different models from Hugging Face and compare their performance.
- 🖼️ Ask Image: Upload images and ask questions about them. Get descriptions, solve problems, or identify objects.
- 🎙️ Audio Scribe: Transcribe an uploaded or recorded audio clip into text or translate it into another language.
- ✍️ Prompt Lab: Summarize, rewrite, generate code, or use freeform prompts to explore single-turn LLM use cases.
- 💬 AI Chat: Engage in multi-turn conversations.
- 📊 Performance Insights: Real-time benchmarks (TTFT, decode speed, latency).
- 🧩 Bring Your Own Model: Test your local LiteRT `.litertlm` models.
- 🔗 Developer Resources: Quick links to model cards and source code.
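The performance metrics above can be derived from three timestamps: when the prompt is submitted, when the first token arrives, and when generation finishes. A minimal sketch (the function name and units are illustrative, not part of the app's API):

```python
def decode_metrics(t_start, t_first_token, t_end, n_tokens):
    """Compute time-to-first-token (TTFT) and decode speed.

    t_start:        time the prompt was submitted (seconds)
    t_first_token:  time the first output token arrived (seconds)
    t_end:          time generation finished (seconds)
    n_tokens:       total output tokens generated
    """
    # TTFT: prefill latency before the first token appears.
    ttft = t_first_token - t_start
    # Decode speed: tokens produced per second after the first token.
    decode_speed = (n_tokens - 1) / (t_end - t_first_token)
    return ttft, decode_speed
```

For example, a run that submits at t=0.0 s, emits its first token at t=0.5 s, and finishes 21 tokens at t=2.5 s has a TTFT of 0.5 s and a decode speed of 10 tokens/s.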
- Check OS Requirement: Android 12 and up
- Download the App:
- Install the app from Google Play.
- For users without Google Play access: install the APK from the latest release
- Install & Explore: For detailed installation instructions (including for corporate devices) and a full user guide, head over to our Project Wiki!
- Google AI Edge: Core APIs and tools for on-device ML.
- LiteRT: Lightweight runtime for optimized model execution.
- LLM Inference API: Powering on-device Large Language Models.
- Hugging Face Integration: For model discovery and download.
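As a rough sketch of how an Android app can drive an on-device LLM through the LLM Inference API: build options pointing at a local model file, create the engine, and generate a response. The model path and token limit below are placeholder assumptions, not values used by the Gallery app.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch: load a local model and run a single-turn prompt.
// "/data/local/tmp/llm/model.litertlm" is a placeholder path.
fun runLocalPrompt(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model.litertlm")
        .setMaxTokens(512) // illustrative output-length cap
        .build()

    // All inference happens on-device; no network is required.
    LlmInference.createFromOptions(context, options).use { llm ->
        return llm.generateResponse(prompt)
    }
}
```

This mirrors the general shape of the MediaPipe LLM Inference API on Android; consult its documentation for the full set of options (sampling parameters, streaming callbacks, and vision/audio modalities).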
Check out the development notes for instructions about how to build the app locally.
This is an experimental Beta release, and your input is crucial!
- 🐞 Found a bug? Report it here!
- 💡 Have an idea? Suggest a feature!
Licensed under the Apache License, Version 2.0. See the LICENSE file for details.