feat: Add LimitMEMLOCK=infinity to containerd systemd service #2609
Conversation
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Kevinz force-pushed the branch from 0845eea to b8b40c0:
…ervice Signed-off-by: Kevinz <ruoshuidba@gmail.com>
Pull Request Overview
Add the LimitMEMLOCK=infinity directive to the containerd systemd service configuration to enable workloads that require locked memory.

- Insert LimitMEMLOCK=infinity in the static service file
- Update the Go template to emit the same directive and adjust import order
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| pkg/service/containermanager/templates/containerd.service | Add LimitMEMLOCK=infinity under resource limits |
| cmd/kk/pkg/container/templates/containerd_service.go | Reorder imports and include LimitMEMLOCK=infinity in the generated template |
Comments suppressed due to low confidence (2)
cmd/kk/pkg/container/templates/containerd_service.go:45

[nitpick] Consider adding a unit test to verify that the generated service template includes the LimitMEMLOCK=infinity directive.

```
LimitMEMLOCK=infinity
```
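A minimal sketch of such a test, assuming the template renders with text/template; the trimmed local template below is a stand-in, since in this repo the assertion would run against the real template in cmd/kk/pkg/container/templates:

```go
package templates_test

import (
	"bytes"
	"strings"
	"testing"
	"text/template"
)

// Trimmed stand-in for the containerd service template; replace with
// the repo's real template when wiring this into the package.
var containerdService = template.Must(template.New("containerd.service").Parse(`[Service]
LimitNOFILE=1048576
LimitCORE=infinity
LimitMEMLOCK=infinity
TasksMax=infinity
`))

func TestContainerdServiceHasMemlockLimit(t *testing.T) {
	var buf bytes.Buffer
	if err := containerdService.Execute(&buf, nil); err != nil {
		t.Fatalf("rendering containerd.service: %v", err)
	}
	if !strings.Contains(buf.String(), "LimitMEMLOCK=infinity") {
		t.Errorf("rendered unit is missing LimitMEMLOCK=infinity:\n%s", buf.String())
	}
}
```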
cmd/kk/pkg/container/templates/containerd_service.go:22

The import "github.com/lithammer/dedent" is no longer used in this file and will cause a compile error; please remove it or apply it to the template string if needed.

```
"github.com/lithammer/dedent"
```
/lgtm

LGTM label has been added. Git tree hash: eea7411d512c4ede8d4aa45aab0b596e44f720b8
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Kevinz857, pixiake

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
PR Description: Add LimitMEMLOCK=infinity to containerd systemd service
Summary

Add LimitMEMLOCK=infinity configuration to the containerd systemd service file to remove memory-locking limitations and support advanced container workloads.

Background

The current containerd systemd service configuration has a default memory lock limit that can prevent certain workloads from running properly. This limitation affects GPU containers, eBPF-based networking, high-performance applications, and databases that rely on locked memory.
Changes Made
```diff
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
+LimitMEMLOCK=infinity
TasksMax=infinity
OOMScoreAdjust=-999
```
Problem Solved
Before: containers requiring memory locking would fail with mlock/mlockall errors (typically ENOMEM) once the default RLIMIT_MEMLOCK limit was exceeded.
After: containerd can successfully manage containers with memory locking requirements.
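One way to confirm the new limit is actually in effect inside a container is to read RLIMIT_MEMLOCK directly; a minimal sketch using golang.org/x/sys/unix (not part of this PR):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var rlim unix.Rlimit
	// RLIMIT_MEMLOCK caps how many bytes a process may pin with
	// mlock/mlockall; LimitMEMLOCK=infinity lifts this cap.
	if err := unix.Getrlimit(unix.RLIMIT_MEMLOCK, &rlim); err != nil {
		panic(err)
	}
	if rlim.Cur == unix.RLIM_INFINITY {
		fmt.Println("memlock: unlimited")
	} else {
		fmt.Printf("memlock: soft=%d hard=%d bytes\n", rlim.Cur, rlim.Max)
	}
}
```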
Use Cases Enabled
GPU Containers
eBPF-enabled Networking
```
# Cilium, Calico with eBPF dataplane
kubectl apply -f cilium-config.yaml
```
High-Performance Applications
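For example, a latency-sensitive service may pin its entire address space so pages are never swapped out; under the default RLIMIT_MEMLOCK this call fails with ENOMEM, while with LimitMEMLOCK=infinity it succeeds (a sketch using golang.org/x/sys/unix):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Lock all current and future pages into RAM. Fails with ENOMEM
	// when the locked size would exceed RLIMIT_MEMLOCK.
	if err := unix.Mlockall(unix.MCL_CURRENT | unix.MCL_FUTURE); err != nil {
		fmt.Println("mlockall failed:", err)
		return
	}
	fmt.Println("all pages locked into memory")
}
```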
Database Optimization
```
# PostgreSQL with huge pages and memory locking
docker run -e POSTGRES_PASSWORD=pass --ulimit memlock=-1 postgres:15
```
Compatibility
Performance Impact
References
Reviewer Notes
This change aligns containerd with other container runtimes (Docker, CRI-O) that commonly use unlimited memory locking. The configuration is widely adopted in production Kubernetes clusters running GPU workloads.