Just as ships are built in dry docks, platforms are crafted in DoKa Seca
> ⚠️ **Note:** DoKa Seca is still in relatively early development. At this time, do not use DoKa Seca for critical production systems.
Welcome to DoKa Seca - a comprehensive framework for bootstrapping cloud-native platforms using Kubernetes in Docker (Kind)! The name "DoKa Seca" is a playful Portuguese phrase where "DoKa" incorporates the "K" from Kubernetes (representing the containerized orchestration at the heart of this project), and "Seca" means "dry" - drawing inspiration from the concept of a dry dock.
Just as ships are built, repaired, and maintained in dry docks - controlled, isolated environments where all the necessary infrastructure and tooling are readily available - DoKa Seca provides a "dry dock" for Kubernetes platforms. It creates an isolated, controlled environment where entire cloud-native platforms can be rapidly assembled, configured, and tested before being deployed to production waters.
DoKa Seca provides an opinionated, production-ready framework that automates the entire platform bootstrap process using Kind clusters. Rather than just being a collection of configurations, it's a complete platform engineering solution that provisions infrastructure, installs essential tooling, configures GitOps workflows, and sets up observability - all with a single command, in your local "dry dock" environment.
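To make the "dry dock" idea concrete: everything DoKa Seca provisions ultimately runs inside local kind clusters, i.e. Kubernetes nodes running as Docker containers. The snippet below is not DoKa Seca's entrypoint, just a minimal hand-rolled illustration of creating such a cluster:

```bash
# Minimal illustration: a throwaway multi-node kind cluster defined inline.
# The cluster name "dry-dock" is an arbitrary example.
cat <<EOF | kind create cluster --name dry-dock --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
EOF
```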
This project serves as both a personal learning journey into modern DevOps practices and a comprehensive resource for platform engineers and developers interested in rapidly spinning up production-grade Kubernetes environments. Here you'll find real-world implementations of GitOps workflows, infrastructure as code, observability stacks, and cloud-native security practices - all designed to run efficiently in local development or homelab environments while following enterprise-grade patterns and best practices.
## Prerequisites
### Optional tools
- k9s or freelens (optional, if you'd like to inspect your cluster visually)
- argocd
- kargo
- vcluster
- falcoctl
- karmor
- clusteradm
- cosign
- velero
- vault
- minio client (mc)
Verify the core tools are installed and check their versions:

```bash
$ kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.30.0

$ kind version
kind v0.27.0 go1.23.6 linux/amd64

$ k3d --version
k3d version v5.8.3
k3s version v1.31.5-k3s1 (default)

$ k0s version
v1.32.4+k0s.0

$ helm version
version.BuildInfo{Version:"v3.16.1", GitCommit:"v3.16.1", GitTreeState:"", GoVersion:"go1.22.7"}
```
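Before bootstrapping, it can help to confirm every CLI is actually on your PATH. A minimal sketch (the tool list below is illustrative; trim it to the topology and addons you use):

```bash
#!/usr/bin/env bash
# Fail fast if any required CLI is missing from PATH (illustrative list).
set -euo pipefail
for tool in docker kubectl kind helm terraform jq make; do
  command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool" >&2; exit 1; }
done
echo "all required tools found"
```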
## Deployment Topologies

DoKa Seca supports multiple deployment topologies. Choose the one that best fits your needs:
### Hub-Spoke Topology

This topology deploys a centralized hub cluster that manages multiple spoke clusters. The hub cluster runs ArgoCD and manages addons/workloads for all clusters.
#### Step 1: Deploy the Hub Cluster
```bash
# Deploy control plane cluster
cd terraform/hub-spoke/hub
terraform init
terraform apply -auto-approve
```
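Once the apply completes, point kubectl at the new cluster. kind names its kubeconfig contexts `kind-<cluster>`, so for the `hub-dev` cluster shown later in this page that would be:

```bash
# Switch kubectl to the hub cluster's context and sanity-check the nodes.
kubectl config use-context kind-hub-dev
kubectl get nodes
```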
#### Step 2: Deploy Spoke Clusters (Optional)
```bash
cd terraform/hub-spoke/spoke

# Deploy spoke clusters for different environments
./scripts/terraform.sh spoke dev apply
./scripts/terraform.sh spoke stg apply
./scripts/terraform.sh spoke prod apply
```
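If you're curious what a wrapper like `scripts/terraform.sh` typically does, the sketch below captures a common pattern: one Terraform workspace per environment, with the environment passed in as a variable. The actual script (and its variable names) may differ; treat this as an illustration, not the project's implementation:

```bash
#!/usr/bin/env bash
# Illustrative sketch only -- the real scripts/terraform.sh may differ.
# Pattern: one Terraform workspace per environment, env passed as a variable.
set -euo pipefail
CLUSTER_TYPE="$1"   # e.g. spoke
ENV="$2"            # e.g. dev | stg | prod
ACTION="$3"         # e.g. apply | destroy

terraform init -input=false
terraform workspace select "${ENV}" || terraform workspace new "${ENV}"
terraform "${ACTION}" -var="environment=${ENV}" -auto-approve
```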
#### Step 3: Verify Deployment
```bash
# Check deployed clusters
kind get clusters

# Verify spoke clusters are registered with hub ArgoCD
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster
```
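To list just the registered cluster names, you can read the `cluster_name` label that the GitOps bridge writes onto each cluster secret (shown later in this page):

```bash
# Print only the cluster_name label of each registered cluster secret.
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster \
  -o jsonpath='{range .items[*]}{.metadata.labels.cluster_name}{"\n"}{end}'
```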
### Distributed Topology

In this topology, each cluster manages its own addons and workloads independently. Navigate to the distributed configuration directory:
```bash
cd terraform/distributed

# Deploy clusters for each environment
./deploy.sh dev
./deploy.sh stg
./deploy.sh prod
```
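Since each environment is deployed by the same script, you can also loop over them in one pass:

```bash
# Deploy all three environments sequentially.
for env in dev stg prod; do
  ./deploy.sh "$env"
done
```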
After deployment, you can inspect the deployed clusters:
```bash
# List all kind clusters (Hub-Spoke Topology)
kind get clusters

# Expected output:
# hub-dev
# spoke-dev
# spoke-prod
# spoke-stg
```
Access the ArgoCD UI:
```bash
# Get ArgoCD admin password
make argo-cd-password

# Forward ArgoCD port
make argo-cd-ui

# Access at: http://localhost:8088
```
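If you prefer plain kubectl over the make targets, the equivalent steps look roughly like this. The exact Makefile recipes may differ; the secret and service names below are ArgoCD's defaults:

```bash
# Read the initial admin password from ArgoCD's default secret.
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d; echo

# Forward the ArgoCD server service to localhost:8088.
kubectl -n argocd port-forward svc/argocd-server 8088:80
```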
For detailed deployment options and advanced configurations, see terraform/README.md.
## GitOps Bridge

If you enable the GitOps bridge in terraform.tfvars by setting `enable_gitops_bridge = true`, ArgoCD and all enabled addons will also be installed. Terraform then adds GitOps Bridge metadata to the ArgoCD cluster secret. The annotations contain metadata for the addons' Helm charts and ArgoCD ApplicationSets.
```bash
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster -o json | jq '.items[0].metadata.annotations'
```
The output looks like the following:
```json
{
  "addons_extras_repo_basepath": "stable",
  "addons_extras_repo_revision": "main",
  "addons_extras_repo_url": "https://github.com/thatmlopsguy/helm-charts",
  "addons_repo_basepath": "argocd",
  "addons_repo_path": "appsets",
  "addons_repo_revision": "main",
  "addons_repo_url": "https://github.com/thatmlopsguy/dokaseca-addons",
  "cluster_name": "hub-dev",
  "cluster_repo_basepath": "argocd",
  "cluster_repo_path": "clusters",
  "cluster_repo_revision": "dev",
  "cluster_repo_url": "https://github.com/thatmlopsguy/dokaseca-clusters",
  "environment": "dev",
  "workload_repo_basepath": "argocd",
  "workload_repo_path": "workloads",
  "workload_repo_revision": "dev",
  "workload_repo_url": "https://github.com/thatmlopsguy/dokaseca-workloads"
}
```
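Each annotation can also be read individually; for example, to extract the addons repository URL that the ApplicationSets pull from:

```bash
# Read a single annotation (the addons repo URL) from the hub cluster secret.
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster -o json \
  | jq -r '.items[0].metadata.annotations.addons_repo_url'
```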
The labels offer a straightforward way to enable or disable an addon in ArgoCD for the cluster.
```bash
kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster -o json | jq '.items[0].metadata.labels'
```
The output looks like the following:
```json
{
  "argocd.argoproj.io/secret-type": "cluster",
  "cloud_provider": "local",
  "cluster_name": "hub-dev",
  "enable_alloy": "false",
  "enable_argo_cd": "true",
  "enable_argo_cd_image_updater": "false",
  "enable_argo_events": "false",
  "enable_argo_rollouts": "false",
  "enable_argo_workflows": "false",
  "enable_trivy": "false",
  "enable_vault": "false",
  "enable_vcluster": "false",
  "enable_vector": "false",
  "enable_victoria_metrics_k8s_stack": "true",
  "enable_zipkin": "false",
  "environment": "dev",
  "k8s_cluster_name": "hub-dev",
  "k8s_domain_name": "dokaseca.local",
  "kubernetes_version": "1.31.2"
}
```
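As an illustration, flipping one of these labels toggles the corresponding addon for this cluster, since the addon ApplicationSets typically match on them as selectors. Note that if Terraform owns these labels, the durable change is to set the matching variable (e.g. `enable_trivy`) in terraform.tfvars and re-apply; patching the secret directly, as below, works but may be reverted on the next apply:

```bash
# Enable the Trivy addon by overwriting its label on the cluster secret.
SECRET_NAME=$(kubectl get secret -n argocd -l argocd.argoproj.io/secret-type=cluster \
  -o jsonpath='{.items[0].metadata.name}')
kubectl label secret "$SECRET_NAME" -n argocd enable_trivy=true --overwrite
```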
## Cleanup

To tear down all the resources and the kind cluster(s), run the following command:
```bash
make clean-infra
```
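Under the hood, a clean-up like this usually amounts to destroying the Terraform-managed resources and deleting the kind clusters. A rough equivalent, assuming the hub-spoke layout (the actual Makefile target may differ):

```bash
# Rough equivalent of the clean-up target (check the Makefile for specifics).
terraform -chdir=terraform/hub-spoke/hub destroy -auto-approve
kind delete clusters --all
```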
## Troubleshooting

On Linux hosts, kind cluster creation can fail with an error like the following when the kernel's inotify resource limits are too low:

```
ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"
```

To increase these limits temporarily, run the following commands on the host:
```bash
sudo sysctl fs.inotify.max_user_watches=1048576
sudo sysctl fs.inotify.max_user_instances=8192
```
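To make the change persist across reboots, write the settings to a sysctl drop-in file (the file name below is an arbitrary example):

```bash
# Persist the inotify limits (file name is an arbitrary example).
echo 'fs.inotify.max_user_watches = 1048576' | sudo tee -a /etc/sysctl.d/99-kind.conf
echo 'fs.inotify.max_user_instances = 8192' | sudo tee -a /etc/sysctl.d/99-kind.conf
sudo sysctl --system
```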
Source: Pod errors due to "too many open files"
User documentation can be found on our user docs site.
All contributors are warmly welcome. If you'd like to become a contributor, we'd be delighted! Before you start, please read our contributing guidelines.
Want to know about the features to come? Check out the project roadmap for more information.
DoKa Seca is licensed under the Apache License, Version 2.0.