# πŸš€ APA – Async Prompt Optimizer


*Transform plain text into powerful AI prompts with async efficiency*

APA is an async, provider-agnostic command-line tool that converts .txt files into structured prompts for leading LLM providers. Built on LiteLLM with enterprise-grade retry logic and clean architecture.

Features β€’ Installation β€’ Usage β€’ Configuration β€’ API β€’ Contributing


## ✨ Features

### πŸ”Œ Universal Provider Support

- OpenAI (GPT-4, o3, o4)
- Anthropic (Claude 3.7, Sonnet 4, Opus 4)
- DeepSeek
- OpenRouter

### ⚑ Performance & Reliability

- Fully async architecture
- Real-time streaming support
- Auto-retry with exponential backoff
- Concurrent request handling

### 🧠 Smart Model Features

- `reasoning_effort` for OpenAI o3/o4
- Extended thinking tokens for Claude
- Auto-detection of model capabilities
- Developer role injection

### πŸ› οΈ Developer Experience

- Clean architecture design
- TOML-based configuration
- Environment variable support
- Library-ready API

πŸ“ Project Structure

apa/
β”œβ”€β”€ πŸ“„ configuration.toml      # Runtime settings
β”œβ”€β”€ πŸ“„ system_prompt.toml      # Customizable system prompt
β”œβ”€β”€ 🐍 __init__.py
β”œβ”€β”€ πŸ”§ config.py               # Configuration loader
β”œβ”€β”€ πŸ“‚ infrastructure/
β”‚   └── πŸ”Œ llm_client.py       # Async LiteLLM wrapper
└── πŸ“‚ services/
    └── 🎯 __init__.py         # Service facade
πŸ“„ main.py                     # CLI entry point
πŸš€ run.sh                      # Environment helper
πŸ“‹ requirements.txt            # Dependencies
πŸ“¦ pyproject.toml              # Project metadata

## πŸš€ Installation

### Prerequisites

- Python 3.13+
- An API key for at least one provider
- uv (recommended) or pip

### Quick Start

```bash
# 1. Clone the repository
git clone https://github.com/yourusername/apa.git
cd apa

# 2. Create a virtual environment
uv venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# 3. Install APA
uv pip install -e .

# 4. Set up your API key
echo "OPENAI_API_KEY=sk-..." > .env
```

## 🎯 Usage

### Command Line

Create a prompt file:

```bash
echo "Explain quantum computing in simple terms" > prompt.txt
```

Run APA:

```bash
# Using the helper script (auto-loads .env)
./run.sh --msg-file prompt.txt

# Direct execution
python main.py --msg-file prompt.txt

# Force streaming mode
python main.py --msg-file prompt.txt --stream

# After installation
apa --msg-file prompt.txt
```

### πŸ“š As a Library

```python
import asyncio

from apa.config import load_settings
from apa.services import acompletion

async def main():
    cfg = load_settings()

    # Non-streaming
    response = await acompletion(
        cfg.system_prompt,
        "Explain the SOLID principles",
        model="gpt-4",
        stream=False,
    )
    print(response)

    # Streaming
    stream = await acompletion(
        cfg.system_prompt,
        "Write a haiku about coding",
        model="claude-3-7-sonnet",
        stream=True,
    )
    async for chunk in stream:
        print(chunk, end="", flush=True)

asyncio.run(main())
```

βš™οΈ Configuration

πŸ“„ apa/configuration.toml

# Model parameters
temperature      = 0.2           # Creativity level (0.0-1.0)
stream           = true          # Enable real-time streaming

# Provider-specific settings
programming_language = "Python"  # Default language injected into system prompt
reasoning_effort = "high"        # OpenAI o3/o4 models only
thinking_tokens  = 16384         # Anthropic Claude models only

# Model selection
provider = "openai"              # openai | anthropic | deepseek | openrouter
model    = "o3"                  # Model identifier

# Fallback configuration (optional)
fallback_provider = "anthropic"  # Provider to use if primary fails
fallback_model = "claude-sonnet-4-20250514"  # Model to use if primary fails

### πŸ€– apa/system_prompt.toml (templated)

Customize the AI assistant's behavior:

```toml
system_prompt = """
## Role
You are an advanced AI programming assistant specializing in $programming_language programming language...

## Task
Your tasks include...
"""
```

The only required template variable is `programming_language`; its value comes from `configuration.toml` and defaults to `"Python"` when omitted, as sketched below.
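
A minimal sketch of the substitution, assuming the loader uses Python's `string.Template` (which matches the `$programming_language` placeholder syntax above; APA's actual loader in `apa/config.py` may differ):

```python
# Hypothetical sketch of the template expansion; not APA's actual loader.
import tomllib
from string import Template

with open("apa/system_prompt.toml", "rb") as f:
    raw = tomllib.load(f)["system_prompt"]

# Fill in $programming_language, defaulting to "Python".
system_prompt = Template(raw).safe_substitute(programming_language="Python")
```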


### πŸ” Environment Variables

Create a `.env` file:
```bash
# Provider API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=anthropic-...
DEEPSEEK_API_KEY=...
OPENROUTER_API_KEY=...
```
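
The `run.sh` helper loads this file before launching APA. When using APA as a library, one way to load it yourself is `python-dotenv` (an assumption here, not a declared APA dependency); LiteLLM then reads the keys from the environment:

```python
# Assumes python-dotenv is installed (pip install python-dotenv).
from dotenv import load_dotenv

load_dotenv()  # reads .env into os.environ so the provider keys are visible
```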

## πŸ”§ Advanced Features

### Model Capabilities

| Feature | Supported Models |
|---------|------------------|
| 🎯 Reasoning Effort | o3, o3-mini, o4, o4-mini |
| 🧠 Extended Thinking | Claude 3.7 Sonnet, Sonnet 4, Opus 4 |
| πŸ‘¨β€πŸ’» Developer Role | o1, o3, o4, gpt-4.1 |
| 🌑️ No Temperature | DeepSeek Reasoner, o1-o4 series |
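
These capabilities are detected automatically. Illustratively, the gating could look like the sketch below; the model sets and the LiteLLM `thinking` parameter shape are assumptions, and the real tables live in `apa/infrastructure/llm_client.py`:

```python
# Illustrative capability gating; model sets are assumptions, not APA's code.
REASONING_EFFORT_MODELS = {"o3", "o3-mini", "o4", "o4-mini"}
EXTENDED_THINKING_MODELS = {"claude-3-7-sonnet", "claude-sonnet-4", "claude-opus-4"}
NO_TEMPERATURE_MODELS = {"deepseek-reasoner", "o1", "o3", "o4"}

def extra_params(model: str, cfg) -> dict:
    params: dict = {}
    if model in REASONING_EFFORT_MODELS:
        params["reasoning_effort"] = cfg.reasoning_effort  # e.g. "high"
    if model in EXTENDED_THINKING_MODELS:
        # LiteLLM's extended-thinking option for Anthropic models.
        params["thinking"] = {"type": "enabled", "budget_tokens": cfg.thinking_tokens}
    if model not in NO_TEMPERATURE_MODELS:
        params["temperature"] = cfg.temperature  # omitted for reasoning models
    return params
```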

### Retry Configuration

APA automatically retries failed requests (see the sketch below):

- 3 attempts maximum
- Exponential backoff: 2-8 seconds
- Smart error handling
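
A minimal sketch of such a policy using tenacity (the Contributing section mentions a `@retry` decorator in `llm_client.py`, but the exact configuration here is assumed):

```python
# Illustrative retry policy; APA's actual decorator may be configured differently.
import litellm
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(
    stop=stop_after_attempt(3),                         # 3 attempts maximum
    wait=wait_exponential(multiplier=2, min=2, max=8),  # backoff clamped to 2-8 s
)
async def _acompletion_with_retry(**kwargs):
    # Delegate to LiteLLM's async completion endpoint.
    return await litellm.acompletion(**kwargs)
```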

### Fallback Mechanism

APA includes an intelligent fallback system that automatically switches providers when the primary fails:

- **Primary attempts**: 3 tries with exponential backoff
- **Automatic switchover**: Seamlessly transitions to the fallback provider
- **Provider hot-swap**: Loads provider-specific settings without a restart
- **Configurable**: Set `fallback_provider` and `fallback_model` in `configuration.toml`

To disable fallback, simply omit these keys from your configuration.

Example configuration:

```toml
# Primary provider
provider = "openai"
model = "gpt-4"

# Fallback provider (activated after 3 primary failures)
fallback_provider = "anthropic"
fallback_model = "claude-sonnet-4-20250514"
```
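
Conceptually, the switchover behaves like the following sketch; APA's real implementation sits in its service layer and hot-swaps provider settings rather than hard-coding models:

```python
# Illustrative only: primary-then-fallback flow described above.
import litellm

async def complete_with_fallback(messages: list[dict]):
    try:
        # Primary provider; in APA this call is wrapped in the retry policy
        # above, so the fallback fires only after the 3 primary attempts fail.
        return await litellm.acompletion(model="gpt-4", messages=messages)
    except Exception:
        # Switch to the configured fallback provider/model.
        return await litellm.acompletion(
            model="claude-sonnet-4-20250514", messages=messages
        )
```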

## 🀝 Contributing

### Adding a New Provider

1. Update `PROVIDER_ENV_MAP` in `apa/config.py` (see the sketch below)
2. Add model capabilities to `llm_client.py`
3. Test with your API key
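
For step 1, the mapping presumably ties each provider name to the environment variable holding its key; the shape below is inferred from this README, and the new entry is hypothetical:

```python
# In apa/config.py -- shape assumed from the README, may differ in the source.
PROVIDER_ENV_MAP = {
    "openai":     "OPENAI_API_KEY",
    "anthropic":  "ANTHROPIC_API_KEY",
    "deepseek":   "DEEPSEEK_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    "acme":       "ACME_API_KEY",  # hypothetical new provider entry
}
```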

### Custom Features

- **Retry Policy**: Modify the `@retry` decorator in `llm_client.py`
- **New Endpoints**: Extend `acompletion` in `services`
- **UI/API**: Build on top of the async service layer

## πŸ“„ License

MIT License - see LICENSE for details.


Built with ❀️ by Kenny Dizi

Report Bug β€’ Request Feature
