# ai-evaluation-tools

Here is 1 public repository matching this topic.


MindTrial: Evaluate and compare AI language models (LLMs) on text-based tasks with optional file/image attachments. Supports multiple providers (OpenAI, Google, Anthropic, DeepSeek), custom tasks in YAML, and HTML/CSV reports.

  • Updated Jul 30, 2025
  • Go
