A cloud-native workflow automation platform built in Go that enables event-driven workflows with configurable triggers and actions. Designed for Kubernetes deployments, it follows cloud-native principles.
Operion enables you to create automated workflows through:
- Source Providers: Self-contained modules that generate events from external sources (scheduler, webhook, kafka)
- Triggers: Workflow trigger definitions that specify conditions for workflow execution
- Actions: Operations executed in workflows (HTTP requests, file operations, logging, data transformation)
- Context: Data sharing between workflow steps
- Workers: Background processes that execute workflows
- Source Manager: Orchestrates source providers and manages their lifecycle
- Activator: Bridges source events to workflow executions
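To make these concepts concrete, here is a minimal, hypothetical sketch of the core domain shapes; the field names are illustrative assumptions, not the actual types in `pkg/models/`:

```go
package models

// Workflow ties triggers to an ordered set of steps. All field names
// here are assumptions for illustration; see pkg/models/ for the real types.
type Workflow struct {
	ID       string
	Triggers []Trigger // conditions under which the workflow runs
	Steps    []Step    // actions executed step-by-step by workers
}

// Trigger links a workflow to events from a source provider.
type Trigger struct {
	SourceProvider string         // e.g. "scheduler", "webhook", "kafka"
	Configuration  map[string]any // provider-specific settings
}

// Step is one action invocation; its result is shared with later steps
// through the execution context, keyed by UID.
type Step struct {
	UID           string         // key under which the step's result is stored
	Action        string         // e.g. "http_request", "transform", "log"
	Configuration map[string]any // may contain Go template expressions
	OnFailure     string         // UID of the step to route to on failure
}
```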
- Cloud-Native - Stateless, container-first design optimized for Kubernetes
- Event-Driven - Decoupled architecture with pub/sub messaging for scalability
- Extensible - Plugin system with dynamic .so file loading for triggers and actions
- REST API - HTTP interface for managing workflows
- CLI Tools - Command-line interfaces for activator, source manager, and worker services
- Multiple Storage Options - File-based, PostgreSQL, and cloud storage support
- Worker Management - Background execution with proper lifecycle management
- Horizontal Scaling - Support for multiple instances and load balancing
- Observability - Built-in metrics, structured logging, and health checks
The project follows a clean, layered architecture with clear separation of concerns:
- Models (`pkg/models/`) - Core domain models and interfaces
- Business Logic (`pkg/workflow/`) - Workflow execution and management
- Providers (`pkg/providers/`) - Self-contained event generation modules with isolated persistence
- Infrastructure (`pkg/persistence/`, `pkg/event_bus/`) - External integrations and data access
- Extensions (`pkg/registry/`) - Plugin system for actions and triggers with .so file loading
- Interface Layer (`cmd/`) - Entry points (API server, CLI tools, service managers)
- Go 1.24 or higher
```bash
# Clone the repository
git clone https://github.com/dukex/operion.git
cd operion

# Download dependencies
go mod download

# Build all components
make build
```
The API server supports the following environment variables:
```bash
PORT=9091                     # API server port (default: 9091)
DATABASE_URL=./data/workflows # Database connection URL or file path (required)
EVENT_BUS_TYPE=gochannel      # Event bus type: gochannel, kafka (required)
PLUGINS_PATH=./plugins        # Path to plugins directory (default: ./plugins)
LOG_LEVEL=info                # Log level: debug, info, warn, error (default: info)
```
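As a rough sketch of how such variables are typically consumed at startup (illustrative only, not the actual Operion code):

```go
package main

import (
	"log"
	"os"
)

// getenv returns the value of key, or def when the variable is unset.
func getenv(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

func main() {
	port := getenv("PORT", "9091")
	pluginsPath := getenv("PLUGINS_PATH", "./plugins")
	databaseURL := os.Getenv("DATABASE_URL") // required, so no default
	if databaseURL == "" {
		log.Fatal("DATABASE_URL is required")
	}
	log.Printf("starting API server on :%s (plugins: %s)", port, pluginsPath)
}
```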
Operion supports multiple persistence backends:
File-based Storage (default):

```bash
DATABASE_URL=./data/workflows
```

PostgreSQL Database:

```bash
DATABASE_URL=postgres://user:password@localhost:5432/operion
```
The PostgreSQL persistence layer includes:
- Normalized schema with separate tables for workflows, triggers, and steps
- JSONB storage for configuration data
- Automated schema migrations with version tracking
- Soft delete functionality
- Comprehensive indexing for performance
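One way to picture how `DATABASE_URL` selects a backend (a hedged sketch; the constructor names are invented and the real factory lives in `pkg/persistence/`):

```go
package persistence

import (
	"errors"
	"strings"
)

// Repository stands in for the persistence interface assumed by the rest
// of the system; its methods are omitted here.
type Repository interface{}

// NewRepository picks a backend based on the DATABASE_URL scheme.
func NewRepository(databaseURL string) (Repository, error) {
	switch {
	case strings.HasPrefix(databaseURL, "postgres://"):
		return newPostgresRepository(databaseURL) // JSONB config, migrations, soft deletes
	case strings.HasPrefix(databaseURL, "file://"):
		return newFileRepository(strings.TrimPrefix(databaseURL, "file://"))
	default:
		// Plain paths such as ./data/workflows fall back to file storage.
		return newFileRepository(databaseURL)
	}
}

// The constructors below are placeholders for the sketch.
func newPostgresRepository(url string) (Repository, error) { return nil, errors.New("sketch") }
func newFileRepository(path string) (Repository, error)    { return nil, errors.New("sketch") }
```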
```bash
# For development (with live reload)
air

# Or run the built binary
./bin/api
```
The API will be available at http://localhost:3000
```bash
# Navigate to the UI directory
cd ui/operion-editor

# Install dependencies (first time only)
npm install

# Start the development server
npm run dev
```
The visual workflow editor will be available at http://localhost:5173
```bash
# Start the source manager to run source providers (scheduler, webhook, etc.)
./bin/operion-source-manager --database-url file://./data --providers scheduler

# Start with custom configuration
SOURCE_MANAGER_ID=my-manager \
SCHEDULER_PERSISTENCE_URL=file://./data/scheduler \
./bin/operion-source-manager --database-url postgres://user:pass@localhost/db --providers scheduler,webhook

# Validate source provider configurations
./bin/operion-source-manager validate --database-url file://./data
```
```bash
# Start the activator to bridge source events to workflow executions
./bin/operion-activator --database-url file://./data

# Start with a custom activator ID
./bin/operion-activator --activator-id my-activator --database-url postgres://user:pass@localhost/db
```
```bash
# Start workers to execute workflows
./bin/operion-worker --database-url file://./data

# Start workers with a custom worker ID
./bin/operion-worker --worker-id my-worker --database-url postgres://user:pass@localhost/db
```
The system uses a modern event-driven architecture with complete provider isolation:
New Source-Based Architecture:
- Source Providers - Self-contained modules that generate events from external sources:
  - Each provider manages its own persistence and configuration
  - Completely isolated from the core system (only receives workflow definitions)
  - Examples: scheduler provider (`pkg/providers/scheduler/`), webhook provider (future)
- Source Manager Service - Orchestrates source providers:
  - Manages provider lifecycle (Initialize → Configure → Prepare → Start)
  - Passes workflow definitions to providers during configuration
  - Publishes source events to the event bus
- Activator Service - Bridges source events to workflow executions:
  - Listens to source events from the event bus
  - Matches events to workflow triggers
  - Publishes `WorkflowTriggered` events for matched workflows
- Worker Service - Executes workflows step-by-step:
  - Processes `WorkflowTriggered` and `WorkflowStepAvailable` events
  - Publishes granular events: `WorkflowStepFinished`, `WorkflowStepFailed`, `WorkflowFinished`
Legacy Architecture (deprecated):
- Direct workflow triggering - Legacy trigger support (to be removed)
Benefits:
- Complete Isolation: Source providers are self-contained modules
- Pluggable Architecture: Easy to add new event sources without core changes
- Flexible Persistence: Each provider can use different storage (file, database)
- Scalable: Source generation decoupled from workflow execution
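The lifecycle above suggests a provider contract along these lines; this is a hypothetical sketch, and the method signatures are assumptions rather than the project's actual interfaces:

```go
package protocol

import "context"

// Workflow is a placeholder for the real definition in pkg/models/.
type Workflow struct{}

// SourceEvent is a hypothetical envelope for events emitted by providers.
type SourceEvent struct {
	Provider string         // e.g. "scheduler"
	Data     map[string]any // provider-specific payload
}

// SourceProvider sketches the Initialize → Configure → Prepare → Start
// lifecycle managed by the source manager.
type SourceProvider interface {
	// Initialize sets up provider-owned resources (e.g. its own persistence).
	Initialize(ctx context.Context) error
	// Configure receives workflow definitions, the only core data a
	// provider ever sees.
	Configure(ctx context.Context, workflows []Workflow) error
	// Prepare performs final setup before event generation begins.
	Prepare(ctx context.Context) error
	// Start emits source events until the context is cancelled; the
	// source manager publishes them to the event bus.
	Start(ctx context.Context, events chan<- SourceEvent) error
}
```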
```bash
# List all workflows
curl http://localhost:3000/workflows

# Health check
curl http://localhost:3000/
```
See `./examples/data/workflows/bitcoin-price.json` for a complete workflow example that:
- Triggers every minute via a cron schedule (`schedule` trigger)
- Fetches Bitcoin price data from the CoinPaprika API (`http_request` action)
- Processes the data using Go template transformation (`transform` action)
- Posts the processed data to a webhook endpoint (`http_request` action)
- Logs errors if any step fails (`log` action)
Actions now use a standardized contract with:
- Factory Pattern: Actions created via `ActionFactory.Create(config)`
- Execution Context: Access to previous step results via `ExecutionContext.StepResults`
- Template Support: Go template system for dynamic configuration
- Structured Logging: Each action receives a structured logger
- Result Mapping: Step results stored by `uid` for cross-step references
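A minimal sketch of what this contract could look like in Go; the exact interfaces live in `pkg/protocol/` and may differ from the signatures assumed here:

```go
package protocol

import (
	"context"
	"log/slog"
)

// ExecutionContext carries state between steps. StepResults is keyed by
// each step's uid, so later steps (and their templates) can reference
// earlier results.
type ExecutionContext struct {
	StepResults map[string]any
}

// Action is one executable workflow step; the signature is an assumption.
type Action interface {
	Execute(ctx context.Context, ec ExecutionContext, logger *slog.Logger) (any, error)
}

// ActionFactory builds actions from their (possibly templated) configuration,
// mirroring the ActionFactory.Create(config) pattern described above.
type ActionFactory interface {
	Create(config map[string]any) (Action, error)
}
```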
- Scheduler (`pkg/providers/scheduler/`) - Self-contained cron-based scheduler with isolated persistence
  - Supports file-based persistence (`file://./data/scheduler`) or database persistence (future)
  - Manages its own schedule models and lifecycle
  - Configurable via the `SCHEDULER_PERSISTENCE_URL` environment variable
- Schedule (`pkg/triggers/schedule/`) - Cron-based execution using robfig/cron, with a native implementation
- Kafka (`pkg/triggers/kafka/`) - Message-based triggering from Kafka topics, with a native implementation
- Redis Queue (`pkg/triggers/queue/`) - Redis-based queue consumption for task processing
- Webhook (`pkg/triggers/webhook/`) - HTTP endpoint triggers for external integrations
- HTTP Request (`pkg/actions/http_request/`) - Make HTTP calls with retry logic, templating, and JSON/string response handling
- Transform (`pkg/actions/transform/`) - Process data using Go templates
- Log (`pkg/actions/log/`) - Output structured log messages for debugging and monitoring
- Plugin Actions: Custom actions via .so plugins (example in `examples/plugins/actions/log/`)
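Because the transform action (and templated configuration generally) builds on Go's template engine, a small standalone example shows the flavor; the `steps` data shape here is an assumption, not Operion's actual template context:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical result of a previous http_request step, keyed by uid.
	data := map[string]any{
		"steps": map[string]any{
			"fetch_price": map[string]any{"symbol": "BTC", "price_usd": 67000.0},
		},
	}

	// A transform step's configuration might carry a template like this.
	tmpl := template.Must(template.New("transform").Parse(
		"{{ .steps.fetch_price.symbol }}: {{ .steps.fetch_price.price_usd }} USD\n"))

	// Prints: BTC: 67000 USD
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```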
- Dynamic loading of `.so` plugin files from the `./plugins` directory (see the loading sketch below)
- Factory pattern with `ActionFactory` and `TriggerFactory` interfaces
- Protocol-based interfaces in `pkg/protocol/` for type safety
- Example plugins available in `examples/plugins/`
- Native vs Plugin Actions: Core actions are built in for performance; plugins provide extensibility
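The loading side can be pictured with Go's standard `plugin` package. This sketch assumes a plugin path and an `ActionFactory` signature; in the real system both host and plugin import the same `pkg/protocol` types, which is what lets the type assertion below succeed:

```go
package main

import (
	"log"
	"plugin"
)

// ActionFactory must be the exact type the plugin was compiled against;
// in Operion that shared type is protocol.ActionFactory. The signature
// here is an assumption for the sketch.
type ActionFactory interface {
	Create(config map[string]any) (any, error)
}

func main() {
	// Open a compiled plugin from the plugins directory (path assumed).
	p, err := plugin.Open("./plugins/myaction.so")
	if err != nil {
		log.Fatal(err)
	}

	// Plugins export: var Action protocol.ActionFactory = &MyActionFactory{}
	sym, err := p.Lookup("Action")
	if err != nil {
		log.Fatal(err)
	}

	// Exported variables are looked up as pointers to their declared type.
	factory, ok := sym.(*ActionFactory)
	if !ok {
		log.Fatal("symbol Action does not have the expected factory type")
	}

	action, err := (*factory).Create(map[string]any{"message": "hello"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created action: %#v", action)
}
```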
The executor now operates on an event-driven, step-by-step model:
- Execution Context: Maintains state across steps with `ExecutionContext.StepResults`
- Step Isolation: Each step is processed as an individual event for scalability
- Event Publishing: Granular events published for monitoring and debugging
- State Management: Step results stored by `uid` and accessible via Go templates
- Error Handling: Failed steps can route to different next steps via `on_failure` (see the sketch below)
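A hedged sketch of how a worker's step handler could honor `on_failure` routing; the step shape and the `OnSuccess` field are assumptions for illustration:

```go
package worker

import (
	"context"
	"fmt"
)

// Step is a simplified stand-in for the real model.
type Step struct {
	UID       string
	OnSuccess string // uid of the next step on success (assumed field)
	OnFailure string // uid of the step to route to on failure
	Run       func(ctx context.Context, results map[string]any) (any, error)
}

// executeStep runs one step, stores its result under the step's uid, and
// returns the uid of the next step to announce (conceptually, the next
// WorkflowStepAvailable event). An empty uid ends the workflow.
func executeStep(ctx context.Context, step Step, results map[string]any) (string, error) {
	out, err := step.Run(ctx, results)
	if err != nil {
		// Conceptually: publish WorkflowStepFailed, then route via on_failure.
		if step.OnFailure != "" {
			return step.OnFailure, nil
		}
		return "", fmt.Errorf("step %s failed with no on_failure route: %w", step.UID, err)
	}
	// Conceptually: publish WorkflowStepFinished; the stored result is
	// accessible to later steps via Go templates.
	results[step.UID] = out
	return step.OnSuccess, nil
}
```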
```bash
make build          # Build API server for current platform
make build-linux    # Cross-compile for Linux
make clean          # Clean build artifacts
make test           # Run all tests
make test-coverage  # Generate coverage report (coverage.out and coverage.html)
make fmt            # Format Go code
make lint           # Run golangci-lint
```
The project uses GitHub Actions for continuous integration:
- Test and Coverage: Runs on every PR and push to main
  - Tests with Go 1.24
  - Generates coverage reports
  - Uploads coverage to Codecov and Coveralls
  - Runs static analysis (vet, staticcheck, golangci-lint)
  - Checks formatting
  - Builds all binaries
See `.github/workflows/test-and-coverage.yml` for the complete workflow configuration.
```bash
air        # Start development server with live reload
./bin/api  # Run built API server directly
```
```bash
# Build the action plugin example
cd examples/plugins/actions/log
make

# Build a custom plugin:
# Create plugin.go implementing protocol.ActionFactory or protocol.TriggerFactory
# Export symbol: var Action protocol.ActionFactory = &MyActionFactory{}
go build -buildmode=plugin -o plugin.so plugin.go
```
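For a fully self-contained picture, here is a skeleton `plugin.go`. It uses hypothetical local stand-ins for the `pkg/protocol` interfaces; a real plugin must import the project's actual protocol package so its exported symbol matches the host's types:

```go
// plugin.go — illustrative sketch of a custom action plugin.
package main

import "fmt"

// Executable and ActionFactory stand in for the real pkg/protocol
// interfaces; their signatures are assumptions.
type Executable interface {
	Execute() error
}

type ActionFactory interface {
	Create(config map[string]any) (Executable, error)
}

// logAction prints a configured message when executed.
type logAction struct{ message string }

func (a *logAction) Execute() error {
	fmt.Println(a.message)
	return nil
}

// MyActionFactory builds logAction instances from configuration.
type MyActionFactory struct{}

func (f *MyActionFactory) Create(config map[string]any) (Executable, error) {
	msg, _ := config["message"].(string)
	return &logAction{message: msg}, nil
}

// Action is the exported symbol the registry looks up after plugin.Open,
// matching the build note above.
var Action ActionFactory = &MyActionFactory{}

func main() {} // required for package main; never runs in plugin mode
```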
See `TODO.md` for a comprehensive list of planned features organized by priority.
- RabbitMQ Trigger: AMQP message consumption with enterprise features
- AWS SQS Trigger: Native AWS queue integration with FIFO support
- Google Pub/Sub Trigger: Google Cloud messaging integration
- Email Action: SMTP-based notifications for cloud environments
- Slack/Discord Actions: Team communication via webhooks
- Database Actions: Cloud database operations (PostgreSQL, MySQL, MongoDB)
- Kubernetes Integration: Helm charts, HPA support, and service mesh compatibility
- Enhanced Observability: Prometheus metrics, Jaeger tracing, and health checks
- Security Features: OAuth2/OIDC, RBAC, secret management integration
- Multi-tenancy: Organization isolation and resource quotas
- Visual Workflow Editor: React-based browser interface for visualizing and editing workflows
- REST API: Complete workflow management via HTTP endpoints
- CLI Tools: Command-line workflow and service management