KEDA is a Kubernetes-based Event-Driven Autoscaling component. It provides event-driven scaling for any container running in Kubernetes.
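KEDA drives scaling through a `ScaledObject` resource that points at a workload and declares one or more event-source triggers. A minimal sketch, using a RabbitMQ queue trigger (the Deployment name, queue name, and connection string are hypothetical placeholders):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-consumer-scaler
spec:
  scaleTargetRef:
    name: queue-consumer        # hypothetical Deployment to scale
  minReplicaCount: 0            # KEDA can scale to zero when the queue is idle
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        queueName: orders       # hypothetical queue name
        mode: QueueLength
        value: "20"             # target messages per replica
        host: amqp://guest:guest@rabbitmq.default.svc:5672/
```

Applied with `kubectl apply -f`, this lets KEDA manage an HPA on the target workload and activate it from zero when messages arrive.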
Saves up to 90% of AWS EC2 costs by automating the use of spot instances on existing AutoScaling groups. Installs in minutes using CloudFormation or Terraform. Convenient to deploy at scale using StackSets. Uses tagging to avoid launch configuration changes. Automated spot termination handling. Reliable fallback to on-demand instances.
Crane is a FinOps platform for cloud resource analytics and economics in Kubernetes clusters. The goal is not only to help users manage cloud costs more easily, but also to ensure the quality of applications.
Escalator is a batch or job optimized horizontal autoscaler for Kubernetes
General purpose metrics adapter for Kubernetes HPA metrics
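A metrics adapter exposes custom or external metrics through the Kubernetes metrics APIs so a standard `autoscaling/v2` HorizontalPodAutoscaler can consume them. A minimal sketch of the consuming side, assuming an adapter already serves a hypothetical `queue_messages_ready` external metric:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker                    # hypothetical Deployment name
  minReplicas: 1
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          name: queue_messages_ready  # served by the metrics adapter
        target:
          type: AverageValue
          averageValue: "30"          # target backlog per replica
```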
Tortoise: Shell-Shockingly-Good Kubernetes Autoscaling
Add-on for KEDA to scale HTTP workloads
Horizontal Pod Autoscaler built with predictive abilities using statistical models
Custom Pod Autoscaler program and base images, allows creation of Custom Pod Autoscalers
Custom controller that extends the Horizontal Pod Autoscaler
GitHub Actions Runner Manager
A Kubernetes controller for automatically optimizing pod requests based on their continuous usage. VPA alternative that can work with HPA.
Google Cloud Karpenter Provider
An open cloud-native capacity solution that helps you achieve ultimate resource utilization in an intelligent and risk-free way.
Sherpa is a highly available, fast, and flexible horizontal job scaler for HashiCorp Nomad. It can run in a number of different modes to suit different requirements, and can scale based on Nomad resource metrics or external sources.
Kubernetes autoscaler for workers. The custom resource is called WPA. Supported queues: SQS, Beanstalkd.
Alibaba Cloud Karpenter Provider
Tool to build Docker cluster composition for Amazon EC2 Container Service (ECS)
Extensible generative AI platform on Kubernetes with OpenAI-compatible APIs.
A Kubernetes controller that modifies the CPU and/or memory resources of containers based on whether they're starting up, according to the startup/post-startup settings you provide.