Labels
area/agent (Cilium agent related), area/ipam (IP address management, including cloud IPAM), kind/bug (This is a bug in the Cilium logic), kind/community-report (This was reported by a user in the Cilium community, e.g. via Slack)
Description
What happened?
Cilium agent pods sporadically hit a segmentation violation while starting up. In most cases, the first or second restart comes up cleanly. We see this during cluster scale-up events: several Cilium agent pods hit the crash and then restart at the same time.
Cilium Version
Client: 1.15.2 7cf57829 2024-03-13T15:34:43+02:00 go version go1.21.8 linux/amd64
Daemon: 1.15.2 7cf57829 2024-03-13T15:34:43+02:00 go version go1.21.8 linux/amd64
Kernel Version
5.10.210-201.852.amzn2.x86_64
Kubernetes Version
Server Version: version.Info{Major:"1", Minor:"28+", GitVersion:"v1.28.9-eks-036c24b", GitCommit:"f75443c988661ca0a6dfa0dc01ea82dd42d31278", GitTreeState:"clean", BuildDate:"2024-04-30T23:54:04Z", GoVersion:"go1.21.9", Compiler:"gc", Platform:"linux/amd64"}
Regression
Did not see this issue with 1.14.x.
Sysdump
cilium-sysdump-20240603-161546.zip
Relevant log output
level=info msg="Memory available for map entries (0.003% of 66326228992B): 165815572B" subsys=config
level=info msg="option bpf-ct-global-tcp-max set by dynamic sizing to 581809" subsys=config
level=info msg="option bpf-ct-global-any-max set by dynamic sizing to 290904" subsys=config
level=info msg="option bpf-nat-global-max set by dynamic sizing to 581809" subsys=config
level=info msg="option bpf-neigh-global-max set by dynamic sizing to 581809" subsys=config
level=info msg="option bpf-sock-rev-map-max set by dynamic sizing to 290904" subsys=config
level=info msg=" --agent-health-port='9879'" subsys=daemon
level=info msg=" --agent-labels=''" subsys=daemon
level=info msg=" --agent-liveness-update-interval='1s'" subsys=daemon
level=info msg=" --agent-not-ready-taint-key='node.cilium.io/agent-not-ready'" subsys=daemon
level=info msg=" --allocator-list-timeout='3m0s'" subsys=daemon
level=info msg=" --allow-icmp-frag-needed='true'" subsys=daemon
level=info msg=" --allow-localhost='auto'" subsys=daemon
level=info msg=" --annotate-k8s-node='false'" subsys=daemon
level=info msg=" --api-rate-limit=''" subsys=daemon
level=info msg=" --arping-refresh-period='30s'" subsys=daemon
level=info msg=" --auto-create-cilium-node-resource='true'" subsys=daemon
level=info msg=" --auto-direct-node-routes='true'" subsys=daemon
level=info msg=" --aws-enable-prefix-delegation='true'" subsys=daemon
level=info msg=" --aws-release-excess-ips='true'" subsys=daemon
level=info msg=" --bgp-announce-lb-ip='false'" subsys=daemon
level=info msg=" --bgp-announce-pod-cidr='false'" subsys=daemon
level=info msg=" --bgp-config-path='/var/lib/cilium/bgp/config.yaml'" subsys=daemon
level=info msg=" --bpf-auth-map-max='524288'" subsys=daemon
level=info msg=" --bpf-ct-global-any-max='262144'" subsys=daemon
level=info msg=" --bpf-ct-global-tcp-max='524288'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-any='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp='2h13m20s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp-fin='10s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-regular-tcp-syn='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-service-any='1m0s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-service-tcp='2h13m20s'" subsys=daemon
level=info msg=" --bpf-ct-timeout-service-tcp-grace='1m0s'" subsys=daemon
level=info msg=" --bpf-filter-priority='1'" subsys=daemon
level=info msg=" --bpf-fragments-map-max='8192'" subsys=daemon
level=info msg=" --bpf-lb-acceleration='disabled'" subsys=daemon
level=info msg=" --bpf-lb-affinity-map-max='0'" subsys=daemon
level=info msg=" --bpf-lb-algorithm='random'" subsys=daemon
level=info msg=" --bpf-lb-dsr-dispatch='opt'" subsys=daemon
level=info msg=" --bpf-lb-dsr-l4-xlate='frontend'" subsys=daemon
level=info msg=" --bpf-lb-external-clusterip='false'" subsys=daemon
level=info msg=" --bpf-lb-maglev-hash-seed='JLfvgnHc2kaSUFaI'" subsys=daemon
level=info msg=" --bpf-lb-maglev-map-max='0'" subsys=daemon
level=info msg=" --bpf-lb-maglev-table-size='16381'" subsys=daemon
level=info msg=" --bpf-lb-map-max='65536'" subsys=daemon
level=info msg=" --bpf-lb-mode='snat'" subsys=daemon
level=info msg=" --bpf-lb-rev-nat-map-max='0'" subsys=daemon
level=info msg=" --bpf-lb-rss-ipv4-src-cidr=''" subsys=daemon
level=info msg=" --bpf-lb-rss-ipv6-src-cidr=''" subsys=daemon
level=info msg=" --bpf-lb-service-backend-map-max='0'" subsys=daemon
level=info msg=" --bpf-lb-service-map-max='0'" subsys=daemon
level=info msg=" --bpf-lb-sock='false'" subsys=daemon
level=info msg=" --bpf-lb-sock-hostns-only='false'" subsys=daemon
level=info msg=" --bpf-lb-source-range-map-max='0'" subsys=daemon
level=info msg=" --bpf-map-dynamic-size-ratio='0.0025'" subsys=daemon
level=info msg=" --bpf-map-event-buffers=''" subsys=daemon
level=info msg=" --bpf-nat-global-max='524288'" subsys=daemon
level=info msg=" --bpf-neigh-global-max='524288'" subsys=daemon
level=info msg=" --bpf-policy-map-full-reconciliation-interval='15m0s'" subsys=daemon
level=info msg=" --bpf-policy-map-max='16384'" subsys=daemon
level=info msg=" --bpf-root='/sys/fs/bpf'" subsys=daemon
level=info msg=" --bpf-sock-rev-map-max='262144'" subsys=daemon
level=info msg=" --bypass-ip-availability-upon-restore='false'" subsys=daemon
level=info msg=" --certificates-directory='/var/run/cilium/certs'" subsys=daemon
level=info msg=" --cflags=''" subsys=daemon
level=info msg=" --cgroup-root='/run/cilium/cgroupv2'" subsys=daemon
level=info msg=" --cilium-endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg=" --cluster-health-port='4240'" subsys=daemon
level=info msg=" --cluster-id='0'" subsys=daemon
level=info msg=" --cluster-name='default'" subsys=daemon
level=info msg=" --clustermesh-config='/var/lib/cilium/clustermesh/'" subsys=daemon
level=info msg=" --clustermesh-ip-identities-sync-timeout='1m0s'" subsys=daemon
level=info msg=" --cmdref=''" subsys=daemon
level=info msg=" --cni-chaining-mode='none'" subsys=daemon
level=info msg=" --cni-chaining-target=''" subsys=daemon
level=info msg=" --cni-exclusive='false'" subsys=daemon
level=info msg=" --cni-external-routing='false'" subsys=daemon
level=info msg=" --cni-log-file='/var/run/cilium/cilium-cni.log'" subsys=daemon
level=info msg=" --config=''" subsys=daemon
level=info msg=" --config-dir='/tmp/cilium/config-map'" subsys=daemon
level=info msg=" --config-sources='config-map:kube-system/cilium-config'" subsys=daemon
level=info msg=" --conntrack-gc-interval='0s'" subsys=daemon
level=info msg=" --conntrack-gc-max-interval='0s'" subsys=daemon
level=info msg=" --controller-group-metrics='write-cni-file,sync-host-ips,sync-lb-maps-with-k8s-services'" subsys=daemon
level=info msg=" --crd-wait-timeout='5m0s'" subsys=daemon
level=info msg=" --custom-cni-conf='true'" subsys=daemon
level=info msg=" --datapath-mode='veth'" subsys=daemon
level=info msg=" --debug='false'" subsys=daemon
level=info msg=" --debug-verbose=''" subsys=daemon
level=info msg=" --derive-masquerade-ip-addr-from-device=''" subsys=daemon
level=info msg=" --devices='eth+'" subsys=daemon
level=info msg=" --direct-routing-device=''" subsys=daemon
level=info msg=" --disable-endpoint-crd='false'" subsys=daemon
level=info msg=" --disable-envoy-version-check='false'" subsys=daemon
level=info msg=" --disable-iptables-feeder-rules=''" subsys=daemon
level=info msg=" --dns-max-ips-per-restored-rule='1000'" subsys=daemon
level=info msg=" --dns-policy-unload-on-shutdown='false'" subsys=daemon
level=info msg=" --dnsproxy-concurrency-limit='0'" subsys=daemon
level=info msg=" --dnsproxy-concurrency-processing-grace-period='0s'" subsys=daemon
level=info msg=" --dnsproxy-enable-transparent-mode='true'" subsys=daemon
level=info msg=" --dnsproxy-lock-count='131'" subsys=daemon
level=info msg=" --dnsproxy-lock-timeout='500ms'" subsys=daemon
level=info msg=" --ec2-api-endpoint=''" subsys=daemon
level=info msg=" --egress-gateway-policy-map-max='16384'" subsys=daemon
level=info msg=" --egress-gateway-reconciliation-trigger-interval='1s'" subsys=daemon
level=info msg=" --egress-masquerade-interfaces=''" subsys=daemon
level=info msg=" --egress-multi-home-ip-rule-compat='false'" subsys=daemon
level=info msg=" --enable-auto-protect-node-port-range='true'" subsys=daemon
level=info msg=" --enable-bandwidth-manager='false'" subsys=daemon
level=info msg=" --enable-bbr='false'" subsys=daemon
level=info msg=" --enable-bgp-control-plane='false'" subsys=daemon
level=info msg=" --enable-bpf-clock-probe='false'" subsys=daemon
level=info msg=" --enable-bpf-masquerade='true'" subsys=daemon
level=info msg=" --enable-bpf-tproxy='true'" subsys=daemon
level=info msg=" --enable-cilium-api-server-access='*'" subsys=daemon
level=info msg=" --enable-cilium-endpoint-slice='false'" subsys=daemon
level=info msg=" --enable-cilium-health-api-server-access='*'" subsys=daemon
level=info msg=" --enable-custom-calls='false'" subsys=daemon
level=info msg=" --enable-encryption-strict-mode='false'" subsys=daemon
level=info msg=" --enable-endpoint-health-checking='true'" subsys=daemon
level=info msg=" --enable-endpoint-routes='true'" subsys=daemon
level=info msg=" --enable-envoy-config='false'" subsys=daemon
level=info msg=" --enable-external-ips='false'" subsys=daemon
level=info msg=" --enable-health-check-loadbalancer-ip='false'" subsys=daemon
level=info msg=" --enable-health-check-nodeport='true'" subsys=daemon
level=info msg=" --enable-health-checking='true'" subsys=daemon
level=info msg=" --enable-high-scale-ipcache='false'" subsys=daemon
level=info msg=" --enable-host-firewall='false'" subsys=daemon
level=info msg=" --enable-host-legacy-routing='false'" subsys=daemon
level=info msg=" --enable-host-port='false'" subsys=daemon
level=info msg=" --enable-hubble='true'" subsys=daemon
level=info msg=" --enable-hubble-open-metrics='true'" subsys=daemon
level=info msg=" --enable-hubble-recorder-api='true'" subsys=daemon
level=info msg=" --enable-icmp-rules='true'" subsys=daemon
level=info msg=" --enable-identity-mark='true'" subsys=daemon
level=info msg=" --enable-ip-masq-agent='false'" subsys=daemon
level=info msg=" --enable-ipsec='false'" subsys=daemon
level=info msg=" --enable-ipsec-key-watcher='true'" subsys=daemon
level=info msg=" --enable-ipv4='true'" subsys=daemon
level=info msg=" --enable-ipv4-big-tcp='false'" subsys=daemon
level=info msg=" --enable-ipv4-egress-gateway='false'" subsys=daemon
level=info msg=" --enable-ipv4-fragment-tracking='true'" subsys=daemon
level=info msg=" --enable-ipv4-masquerade='true'" subsys=daemon
level=info msg=" --enable-ipv6='false'" subsys=daemon
level=info msg=" --enable-ipv6-big-tcp='false'" subsys=daemon
level=info msg=" --enable-ipv6-masquerade='true'" subsys=daemon
level=info msg=" --enable-ipv6-ndp='false'" subsys=daemon
level=info msg=" --enable-k8s='true'" subsys=daemon
level=info msg=" --enable-k8s-api-discovery='false'" subsys=daemon
level=info msg=" --enable-k8s-endpoint-slice='true'" subsys=daemon
level=info msg=" --enable-k8s-networkpolicy='true'" subsys=daemon
level=info msg=" --enable-k8s-terminating-endpoint='true'" subsys=daemon
level=info msg=" --enable-l2-announcements='false'" subsys=daemon
level=info msg=" --enable-l2-neigh-discovery='true'" subsys=daemon
level=info msg=" --enable-l2-pod-announcements='false'" subsys=daemon
level=info msg=" --enable-l7-proxy='true'" subsys=daemon
level=info msg=" --enable-local-node-route='true'" subsys=daemon
level=info msg=" --enable-local-redirect-policy='true'" subsys=daemon
level=info msg=" --enable-masquerade-to-route-source='false'" subsys=daemon
level=info msg=" --enable-metrics='true'" subsys=daemon
level=info msg=" --enable-mke='false'" subsys=daemon
level=info msg=" --enable-monitor='true'" subsys=daemon
level=info msg=" --enable-nat46x64-gateway='false'" subsys=daemon
level=info msg=" --enable-node-port='false'" subsys=daemon
level=info msg=" --enable-pmtu-discovery='false'" subsys=daemon
level=info msg=" --enable-policy='default'" subsys=daemon
level=info msg=" --enable-recorder='false'" subsys=daemon
level=info msg=" --enable-remote-node-identity='true'" subsys=daemon
level=info msg=" --enable-runtime-device-detection='true'" subsys=daemon
level=info msg=" --enable-sctp='false'" subsys=daemon
level=info msg=" --enable-service-topology='false'" subsys=daemon
level=info msg=" --enable-session-affinity='false'" subsys=daemon
level=info msg=" --enable-srv6='false'" subsys=daemon
level=info msg=" --enable-stale-cilium-endpoint-cleanup='true'" subsys=daemon
level=info msg=" --enable-svc-source-range-check='true'" subsys=daemon
level=info msg=" --enable-tracing='false'" subsys=daemon
level=info msg=" --enable-unreachable-routes='false'" subsys=daemon
level=info msg=" --enable-vtep='false'" subsys=daemon
level=info msg=" --enable-well-known-identities='false'" subsys=daemon
level=info msg=" --enable-wireguard='false'" subsys=daemon
level=info msg=" --enable-wireguard-userspace-fallback='false'" subsys=daemon
level=info msg=" --enable-xdp-prefilter='false'" subsys=daemon
level=info msg=" --enable-xt-socket-fallback='true'" subsys=daemon
level=info msg=" --encrypt-interface=''" subsys=daemon
level=info msg=" --encrypt-node='false'" subsys=daemon
level=info msg=" --encryption-strict-mode-allow-remote-node-identities='false'" subsys=daemon
level=info msg=" --encryption-strict-mode-cidr=''" subsys=daemon
level=info msg=" --endpoint-bpf-prog-watchdog-interval='30s'" subsys=daemon
level=info msg=" --endpoint-gc-interval='5m0s'" subsys=daemon
level=info msg=" --endpoint-queue-size='25'" subsys=daemon
level=info msg=" --endpoint-status=''" subsys=daemon
level=info msg=" --eni-tags='{}'" subsys=daemon
level=info msg=" --envoy-config-timeout='2m0s'" subsys=daemon
level=info msg=" --envoy-log=''" subsys=daemon
level=info msg=" --exclude-local-address=''" subsys=daemon
level=info msg=" --external-envoy-proxy='false'" subsys=daemon
level=info msg=" --fixed-identity-mapping=''" subsys=daemon
level=info msg=" --fqdn-regex-compile-lru-size='1024'" subsys=daemon
level=info msg=" --gops-port='9890'" subsys=daemon
level=info msg=" --http-403-msg=''" subsys=daemon
level=info msg=" --http-idle-timeout='0'" subsys=daemon
level=info msg=" --http-max-grpc-timeout='0'" subsys=daemon
level=info msg=" --http-normalize-path='true'" subsys=daemon
level=info msg=" --http-request-timeout='3600'" subsys=daemon
level=info msg=" --http-retry-count='3'" subsys=daemon
level=info msg=" --http-retry-timeout='0'" subsys=daemon
level=info msg=" --hubble-disable-tls='false'" subsys=daemon
level=info msg=" --hubble-event-buffer-capacity='4095'" subsys=daemon
level=info msg=" --hubble-event-queue-size='0'" subsys=daemon
level=info msg=" --hubble-export-allowlist=''" subsys=daemon
level=info msg=" --hubble-export-denylist=''" subsys=daemon
level=info msg=" --hubble-export-fieldmask=''" subsys=daemon
level=info msg=" --hubble-export-file-compress='false'" subsys=daemon
level=info msg=" --hubble-export-file-max-backups='5'" subsys=daemon
level=info msg=" --hubble-export-file-max-size-mb='10'" subsys=daemon
level=info msg=" --hubble-export-file-path=''" subsys=daemon
level=info msg=" --hubble-flowlogs-config-path=''" subsys=daemon
level=info msg=" --hubble-listen-address=':4244'" subsys=daemon
level=info msg=" --hubble-metrics='dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction;sourceContext=workload-name|reserved-identity;destinationContext=workload-name|reserved-identity'" subsys=daemon
level=info msg=" --hubble-metrics-server=':9965'" subsys=daemon
level=info msg=" --hubble-monitor-events=''" subsys=daemon
level=info msg=" --hubble-prefer-ipv6='false'" subsys=daemon
level=info msg=" --hubble-recorder-sink-queue-size='1024'" subsys=daemon
level=info msg=" --hubble-recorder-storage-path='/var/run/cilium/pcaps'" subsys=daemon
level=info msg=" --hubble-redact-enabled='false'" subsys=daemon
level=info msg=" --hubble-redact-http-headers-allow=''" subsys=daemon
level=info msg=" --hubble-redact-http-headers-deny=''" subsys=daemon
level=info msg=" --hubble-redact-http-urlquery='false'" subsys=daemon
level=info msg=" --hubble-redact-http-userinfo='true'" subsys=daemon
level=info msg=" --hubble-redact-kafka-apikey='false'" subsys=daemon
level=info msg=" --hubble-skip-unknown-cgroup-ids='true'" subsys=daemon
level=info msg=" --hubble-socket-path='/var/run/cilium/hubble.sock'" subsys=daemon
level=info msg=" --hubble-tls-cert-file='/var/lib/cilium/tls/hubble/server.crt'" subsys=daemon
level=info msg=" --hubble-tls-client-ca-files='/var/lib/cilium/tls/hubble/client-ca.crt'" subsys=daemon
level=info msg=" --hubble-tls-key-file='/var/lib/cilium/tls/hubble/server.key'" subsys=daemon
level=info msg=" --identity-allocation-mode='crd'" subsys=daemon
level=info msg=" --identity-change-grace-period='5s'" subsys=daemon
level=info msg=" --identity-gc-interval='15m0s'" subsys=daemon
level=info msg=" --identity-heartbeat-timeout='30m0s'" subsys=daemon
level=info msg=" --identity-restore-grace-period='10m0s'" subsys=daemon
level=info msg=" --install-egress-gateway-routes='false'" subsys=daemon
level=info msg=" --install-iptables-rules='true'" subsys=daemon
level=info msg=" --install-no-conntrack-iptables-rules='false'" subsys=daemon
level=info msg=" --ip-allocation-timeout='2m0s'" subsys=daemon
level=info msg=" --ip-masq-agent-config-path='/etc/config/ip-masq-agent'" subsys=daemon
level=info msg=" --ipam='eni'" subsys=daemon
level=info msg=" --ipam-cilium-node-update-rate='15s'" subsys=daemon
level=info msg=" --ipam-default-ip-pool='default'" subsys=daemon
level=info msg=" --ipam-multi-pool-pre-allocation=''" subsys=daemon
level=info msg=" --ipsec-key-file=''" subsys=daemon
level=info msg=" --ipsec-key-rotation-duration='5m0s'" subsys=daemon
level=info msg=" --iptables-lock-timeout='5s'" subsys=daemon
level=info msg=" --iptables-random-fully='false'" subsys=daemon
level=info msg=" --ipv4-native-routing-cidr='100.64.0.0/8'" subsys=daemon
level=info msg=" --ipv4-node='auto'" subsys=daemon
level=info msg=" --ipv4-pod-subnets=''" subsys=daemon
level=info msg=" --ipv4-range='auto'" subsys=daemon
level=info msg=" --ipv4-service-loopback-address='169.254.42.1'" subsys=daemon
level=info msg=" --ipv4-service-range='auto'" subsys=daemon
level=info msg=" --ipv6-cluster-alloc-cidr='f00d::/64'" subsys=daemon
level=info msg=" --ipv6-mcast-device=''" subsys=daemon
level=info msg=" --ipv6-native-routing-cidr=''" subsys=daemon
level=info msg=" --ipv6-node='auto'" subsys=daemon
level=info msg=" --ipv6-pod-subnets=''" subsys=daemon
level=info msg=" --ipv6-range='auto'" subsys=daemon
level=info msg=" --ipv6-service-range='auto'" subsys=daemon
level=info msg=" --join-cluster='false'" subsys=daemon
level=info msg=" --k8s-api-server=''" subsys=daemon
level=info msg=" --k8s-client-burst='10'" subsys=daemon
level=info msg=" --k8s-client-qps='5'" subsys=daemon
level=info msg=" --k8s-heartbeat-timeout='30s'" subsys=daemon
level=info msg=" --k8s-kubeconfig-path=''" subsys=daemon
level=info msg=" --k8s-namespace='kube-system'" subsys=daemon
level=info msg=" --k8s-require-ipv4-pod-cidr='false'" subsys=daemon
level=info msg=" --k8s-require-ipv6-pod-cidr='false'" subsys=daemon
level=info msg=" --k8s-service-cache-size='128'" subsys=daemon
level=info msg=" --k8s-service-proxy-name=''" subsys=daemon
level=info msg=" --k8s-sync-timeout='3m0s'" subsys=daemon
level=info msg=" --k8s-watcher-endpoint-selector='metadata.name!=kube-scheduler,metadata.name!=kube-controller-manager,metadata.name!=etcd-operator,metadata.name!=gcp-controller-manager'" subsys=daemon
level=info msg=" --keep-config='false'" subsys=daemon
level=info msg=" --kube-proxy-replacement='true'" subsys=daemon
level=info msg=" --kube-proxy-replacement-healthz-bind-address=''" subsys=daemon
level=info msg=" --kvstore=''" subsys=daemon
level=info msg=" --kvstore-connectivity-timeout='2m0s'" subsys=daemon
level=info msg=" --kvstore-lease-ttl='15m0s'" subsys=daemon
level=info msg=" --kvstore-max-consecutive-quorum-errors='2'" subsys=daemon
level=info msg=" --kvstore-opt=''" subsys=daemon
level=info msg=" --kvstore-periodic-sync='5m0s'" subsys=daemon
level=info msg=" --l2-announcements-lease-duration='15s'" subsys=daemon
level=info msg=" --l2-announcements-renew-deadline='5s'" subsys=daemon
level=info msg=" --l2-announcements-retry-period='2s'" subsys=daemon
level=info msg=" --l2-pod-announcements-interface=''" subsys=daemon
level=info msg=" --label-prefix-file=''" subsys=daemon
level=info msg=" --labels=''" subsys=daemon
level=info msg=" --legacy-turn-off-k8s-event-handover='false'" subsys=daemon
level=info msg=" --lib-dir='/var/lib/cilium'" subsys=daemon
level=info msg=" --local-max-addr-scope='252'" subsys=daemon
level=info msg=" --local-router-ipv4=''" subsys=daemon
level=info msg=" --local-router-ipv6=''" subsys=daemon
level=info msg=" --log-driver=''" subsys=daemon
level=info msg=" --log-opt=''" subsys=daemon
level=info msg=" --log-system-load='false'" subsys=daemon
level=info msg=" --max-connected-clusters='255'" subsys=daemon
level=info msg=" --max-controller-interval='0'" subsys=daemon
level=info msg=" --max-internal-timer-delay='0s'" subsys=daemon
level=info msg=" --mesh-auth-enabled='true'" subsys=daemon
level=info msg=" --mesh-auth-gc-interval='5m0s'" subsys=daemon
level=info msg=" --mesh-auth-mutual-connect-timeout='5s'" subsys=daemon
level=info msg=" --mesh-auth-mutual-listener-port='0'" subsys=daemon
level=info msg=" --mesh-auth-queue-size='1024'" subsys=daemon
level=info msg=" --mesh-auth-rotated-identities-queue-size='1024'" subsys=daemon
level=info msg=" --mesh-auth-signal-backoff-duration='1s'" subsys=daemon
level=info msg=" --mesh-auth-spiffe-trust-domain='spiffe.cilium'" subsys=daemon
level=info msg=" --mesh-auth-spire-admin-socket=''" subsys=daemon
level=info msg=" --metrics=''" subsys=daemon
level=info msg=" --mke-cgroup-mount=''" subsys=daemon
level=info msg=" --monitor-aggregation='medium'" subsys=daemon
level=info msg=" --monitor-aggregation-flags='all'" subsys=daemon
level=info msg=" --monitor-aggregation-interval='5s'" subsys=daemon
level=info msg=" --monitor-queue-size='0'" subsys=daemon
level=info msg=" --mtu='0'" subsys=daemon
level=info msg=" --node-encryption-opt-out-labels='node-role.kubernetes.io/control-plane'" subsys=daemon
level=info msg=" --node-port-acceleration='disabled'" subsys=daemon
level=info msg=" --node-port-algorithm='random'" subsys=daemon
level=info msg=" --node-port-bind-protection='true'" subsys=daemon
level=info msg=" --node-port-mode='snat'" subsys=daemon
level=info msg=" --node-port-range='30000,32767'" subsys=daemon
level=info msg=" --nodeport-addresses=''" subsys=daemon
level=info msg=" --nodes-gc-interval='5m0s'" subsys=daemon
level=info msg=" --operator-api-serve-addr='127.0.0.1:9234'" subsys=daemon
level=info msg=" --operator-prometheus-serve-addr=':9963'" subsys=daemon
level=info msg=" --policy-audit-mode='false'" subsys=daemon
level=info msg=" --policy-cidr-match-mode=''" subsys=daemon
level=info msg=" --policy-queue-size='100'" subsys=daemon
level=info msg=" --policy-trigger-interval='1s'" subsys=daemon
level=info msg=" --pprof='false'" subsys=daemon
level=info msg=" --pprof-address='localhost'" subsys=daemon
level=info msg=" --pprof-port='6060'" subsys=daemon
level=info msg=" --preallocate-bpf-maps='false'" subsys=daemon
level=info msg=" --prepend-iptables-chains='true'" subsys=daemon
level=info msg=" --procfs='/host/proc'" subsys=daemon
level=info msg=" --prometheus-serve-addr=':9962'" subsys=daemon
level=info msg=" --proxy-connect-timeout='2'" subsys=daemon
level=info msg=" --proxy-gid='1337'" subsys=daemon
level=info msg=" --proxy-idle-timeout-seconds='60'" subsys=daemon
level=info msg=" --proxy-max-connection-duration-seconds='0'" subsys=daemon
level=info msg=" --proxy-max-requests-per-connection='0'" subsys=daemon
level=info msg=" --proxy-prometheus-port='9964'" subsys=daemon
level=info msg=" --read-cni-conf='/tmp/cni-configuration/cni-config'" subsys=daemon
level=info msg=" --remove-cilium-node-taints='true'" subsys=daemon
level=info msg=" --restore='true'" subsys=daemon
level=info msg=" --route-metric='0'" subsys=daemon
level=info msg=" --routing-mode='native'" subsys=daemon
level=info msg=" --service-no-backend-response='reject'" subsys=daemon
level=info msg=" --set-cilium-is-up-condition='true'" subsys=daemon
level=info msg=" --set-cilium-node-taints='true'" subsys=daemon
level=info msg=" --sidecar-istio-proxy-image='cilium/istio_proxy'" subsys=daemon
level=info msg=" --skip-cnp-status-startup-clean='false'" subsys=daemon
level=info msg=" --socket-path='/var/run/cilium/cilium.sock'" subsys=daemon
level=info msg=" --srv6-encap-mode='reduced'" subsys=daemon
level=info msg=" --state-dir='/var/run/cilium'" subsys=daemon
level=info msg=" --synchronize-k8s-nodes='true'" subsys=daemon
level=info msg=" --tofqdns-dns-reject-response-code='refused'" subsys=daemon
level=info msg=" --tofqdns-enable-dns-compression='true'" subsys=daemon
level=info msg=" --tofqdns-endpoint-max-ip-per-hostname='50'" subsys=daemon
level=info msg=" --tofqdns-idle-connection-grace-period='0s'" subsys=daemon
level=info msg=" --tofqdns-max-deferred-connection-deletes='10000'" subsys=daemon
level=info msg=" --tofqdns-min-ttl='0'" subsys=daemon
level=info msg=" --tofqdns-pre-cache=''" subsys=daemon
level=info msg=" --tofqdns-proxy-port='0'" subsys=daemon
level=info msg=" --tofqdns-proxy-response-max-delay='100ms'" subsys=daemon
level=info msg=" --trace-payloadlen='128'" subsys=daemon
level=info msg=" --trace-sock='true'" subsys=daemon
level=info msg=" --tunnel-port='0'" subsys=daemon
level=info msg=" --tunnel-protocol='vxlan'" subsys=daemon
level=info msg=" --unmanaged-pod-watcher-interval='15'" subsys=daemon
level=info msg=" --update-ec2-adapter-limit-via-api='true'" subsys=daemon
level=info msg=" --use-cilium-internal-ip-for-ipsec='false'" subsys=daemon
level=info msg=" --version='false'" subsys=daemon
level=info msg=" --vlan-bpf-bypass=''" subsys=daemon
level=info msg=" --vtep-cidr=''" subsys=daemon
level=info msg=" --vtep-endpoint=''" subsys=daemon
level=info msg=" --vtep-mac=''" subsys=daemon
level=info msg=" --vtep-mask=''" subsys=daemon
level=info msg=" --wireguard-persistent-keepalive='0s'" subsys=daemon
level=info msg=" --write-cni-conf-when-ready='/host/etc/cni/net.d/05-cilium.conflist'" subsys=daemon
level=info msg=" _ _ _" subsys=daemon
level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
level=info msg="| _| | | | | | |" subsys=daemon
level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
level=info msg="Cilium 1.15.2 7cf57829 2024-03-13T15:34:43+02:00 go version go1.21.8 linux/amd64" subsys=daemon
level=info msg="clang (10.0.0) and kernel (5.10.210) versions: OK!" subsys=linux-datapath
level=info msg="Kernel config file not found: if the agent fails to start, check the system requirements at https://docs.cilium.io/en/stable/operations/system_requirements" subsys=probes
level=info msg="Detected mounted BPF filesystem at /sys/fs/bpf" subsys=bpf
level=info msg="Mounted cgroupv2 filesystem at /run/cilium/cgroupv2" subsys=cgroups
level=info msg="Parsing base label prefixes from default label list" subsys=labels-filter
level=info msg="Parsing additional label prefixes from user inputs: []" subsys=labels-filter
level=info msg="Final label prefixes to be used for identity evaluation:" subsys=labels-filter
level=info msg=" - reserved:.*" subsys=labels-filter
level=info msg=" - :io\\.kubernetes\\.pod\\.namespace" subsys=labels-filter
level=info msg=" - :io\\.cilium\\.k8s\\.namespace\\.labels" subsys=labels-filter
level=info msg=" - :app\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:io\\.kubernetes" subsys=labels-filter
level=info msg=" - !:kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:statefulset\\.kubernetes\\.io/pod-name" subsys=labels-filter
level=info msg=" - !:apps\\.kubernetes\\.io/pod-index" subsys=labels-filter
level=info msg=" - !:batch\\.kubernetes\\.io/job-completion-index" subsys=labels-filter
level=info msg=" - !:.*beta\\.kubernetes\\.io" subsys=labels-filter
level=info msg=" - !:k8s\\.io" subsys=labels-filter
level=info msg=" - !:pod-template-generation" subsys=labels-filter
level=info msg=" - !:pod-template-hash" subsys=labels-filter
level=info msg=" - !:controller-revision-hash" subsys=labels-filter
level=info msg=" - !:annotation.*" subsys=labels-filter
level=info msg=" - !:etcd_node" subsys=labels-filter
level=info msg=Invoked duration="735.9µs" function="pprof.glob..func1 (pkg/pprof/cell.go:50)" subsys=hive
level=info msg=Invoked duration="44.484µs" function="gops.registerGopsHooks (pkg/gops/cell.go:38)" subsys=hive
level=info msg=Invoked duration="813.598µs" function="metrics.glob..func1 (pkg/metrics/cell.go:13)" subsys=hive
level=info msg=Invoked duration="29.672µs" function="metricsmap.RegisterCollector (pkg/maps/metricsmap/metricsmap.go:281)" subsys=hive
level=info msg="Spire Delegate API Client is disabled as no socket path is configured" subsys=spire-delegate
level=info msg="Mutual authentication handler is disabled as no port is configured" subsys=auth
level=info msg="Unsupported IPAM mode, disabling PodCIDR advertisements. exportPodCIDR doesn't take effect." subsys=bgp-control-plane
level=info msg=Invoked duration=59.150145ms function="cmd.configureAPIServer (cmd/cells.go:215)" subsys=hive
level=info msg=Invoked duration="14.223µs" function="cmd.unlockAfterAPIServer (cmd/deletion_queue.go:113)" subsys=hive
level=info msg=Invoked duration="36.805µs" function="controller.Init (pkg/controller/cell.go:67)" subsys=hive
level=info msg=Invoked duration="103.089µs" function="endpointcleanup.registerCleanup (pkg/endpointcleanup/cleanup.go:66)" subsys=hive
level=info msg=Invoked duration="11.417µs" function="cmd.glob..func3 (cmd/daemon_main.go:1612)" subsys=hive
level=info msg=Invoked duration="54.222µs" function="cmd.registerEndpointBPFProgWatchdog (cmd/watchdogs.go:57)" subsys=hive
level=info msg=Invoked duration="22.611µs" function="envoy.registerEnvoyVersionCheck (pkg/envoy/cell.go:132)" subsys=hive
level=info msg=Invoked duration="6.354µs" function="bgpv1.glob..func1 (pkg/bgpv1/cell.go:71)" subsys=hive
level=info msg=Invoked duration="44.817µs" function="cmd.registerDeviceReloader (cmd/device-reloader.go:48)" subsys=hive
level=info msg=Invoked duration="12.61µs" function="utime.initUtimeSync (pkg/datapath/linux/utime/cell.go:31)" subsys=hive
level=info msg=Invoked duration="45.902µs" function="agentliveness.newAgentLivenessUpdater (pkg/datapath/agentliveness/agent_liveness.go:43)" subsys=hive
level=info msg=Invoked duration="28.301µs" function="statedb.RegisterTable[...] (pkg/statedb/db.go:121)" subsys=hive
level=info msg=Invoked duration="44.145µs" function="l2responder.NewL2ResponderReconciler (pkg/datapath/l2responder/l2responder.go:72)" subsys=hive
level=info msg=Invoked duration="44.191µs" function="garp.newGARPProcessor (pkg/datapath/garp/processor.go:27)" subsys=hive
level=info msg=Invoked duration="6.164µs" function="bigtcp.glob..func1 (pkg/datapath/linux/bigtcp/bigtcp.go:58)" subsys=hive
level=info msg=Invoked duration="5.81µs" function="linux.glob..func1 (pkg/datapath/linux/devices_controller.go:62)" subsys=hive
level=info msg=Invoked duration="41.487µs" function="ipcache.glob..func3 (pkg/datapath/ipcache/cell.go:25)" subsys=hive
level=info msg=Starting subsys=hive
level=info msg="Started gops server" address="127.0.0.1:9890" subsys=gops
level=info msg="Start hook executed" duration="359.323µs" function="gops.registerGopsHooks.func1 (pkg/gops/cell.go:43)" subsys=hive
level=info msg="Start hook executed" duration="1.339µs" function="metrics.NewRegistry.func1 (pkg/metrics/registry.go:86)" subsys=hive
level=info msg="Establishing connection to apiserver" host="" subsys=k8s-client
level=info msg="Serving prometheus metrics on :9962" subsys=metrics
level=info msg="Connected to apiserver" subsys=k8s-client
level=info msg="Start hook executed" duration=13.943706ms function="client.(*compositeClientset).onStart" subsys=hive
level=info msg="Start hook executed" duration="46.29µs" function="authmap.newAuthMap.func1 (pkg/maps/authmap/cell.go:27)" subsys=hive
level=info msg="Start hook executed" duration="9.722µs" function="configmap.newMap.func1 (pkg/maps/configmap/cell.go:23)" subsys=hive
level=info msg="Start hook executed" duration="73.133µs" function="signalmap.newMap.func1 (pkg/maps/signalmap/cell.go:44)" subsys=hive
level=info msg="Start hook executed" duration="6.495µs" function="nodemap.newNodeMap.func1 (pkg/maps/nodemap/cell.go:23)" subsys=hive
level=info msg="Start hook executed" duration="22.316µs" function="eventsmap.newEventsMap.func1 (pkg/maps/eventsmap/cell.go:35)" subsys=hive
level=info msg="Start hook executed" duration="5.558µs" function="*resource.resource[*v1.Node].Start" subsys=hive
level=info msg="Start hook executed" duration="1.129µs" function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=info msg="Using autogenerated IPv4 allocation range" subsys=node v4Prefix=10.38.0.0/16
level=info msg="Start hook executed" duration=6.845972ms function="node.NewLocalNodeStore.func1 (pkg/node/local_node_store.go:95)" subsys=hive
level=info msg="Start hook executed" duration="3.602µs" function="*statedb.DB.Start" subsys=hive
level=info msg="Start hook executed" duration="9.967µs" function="hive.New.func1.2 (pkg/hive/hive.go:105)" subsys=hive
level=info msg="Start hook executed" duration="3.973µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Devices changed" devices="[eth0 eth1]" subsys=devices-controller
level=info msg="Start hook executed" duration="517.331µs" function="*linux.devicesController.Start" subsys=hive
level=info msg="Node addresses updated" device=eth0 node-addresses="10.199.195.38 (eth0)" subsys=node-address
level=info msg="Node addresses updated" device=eth1 node-addresses="100.68.14.231 (eth1)" subsys=node-address
level=info msg="Start hook executed" duration="63.868µs" function="tables.(*nodeAddressController).register.func1 (pkg/datapath/tables/node_address.go:210)" subsys=hive
level=info msg="Start hook executed" duration="953.251µs" function="*bandwidth.manager.Start" subsys=hive
level=info msg="Start hook executed" duration="134.954µs" function="modules.(*Manager).Start" subsys=hive
level=info msg="Start hook executed" duration="15.485µs" function="*cni.cniConfigManager.Start" subsys=hive
level=info msg="Reading CNI configuration file source from /tmp/cni-configuration/cni-config" subsys=cni-config
level=info msg="Start hook executed" duration=4.204208ms function="*iptables.Manager.Start" subsys=hive
level=info msg="Start hook executed" duration="3.889µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="15.977µs" function="endpointmanager.newDefaultEndpointManager.func1 (pkg/endpointmanager/cell.go:216)" subsys=hive
level=info msg="Start hook executed" duration="9.504µs" function="cmd.newPolicyTrifecta.func1 (cmd/policy.go:130)" subsys=hive
level=info msg="Start hook executed" duration="2.177µs" function="*resource.resource[*cilium.io/v2.CiliumEgressGatewayPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration=954ns function="*resource.resource[*cilium.io/v2.CiliumNode].Start" subsys=hive
level=info msg="Start hook executed" duration=879ns function="*resource.resource[*types.CiliumEndpoint].Start" subsys=hive
level=info msg="Restored 0 node IDs from the BPF map" subsys=linux-datapath
level=info msg="Start hook executed" duration="79.949µs" function="datapath.newDatapath.func1 (pkg/datapath/cells.go:170)" subsys=hive
level=info msg="Start hook executed" duration="3.726µs" function="*resource.resource[*v1.Service].Start" subsys=hive
level=info msg="Start hook executed" duration="1.346µs" function="*resource.resource[*k8s.Endpoints].Start" subsys=hive
level=info msg="Start hook executed" duration=954ns function="*resource.resource[*v1.Pod].Start" subsys=hive
level=info msg="Start hook executed" duration="2.679µs" function="*resource.resource[*v1.Namespace].Start" subsys=hive
level=info msg="Start hook executed" duration="1.251µs" function="*resource.resource[*v1.NetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="1.118µs" function="*resource.resource[*cilium.io/v2.CiliumNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration=980ns function="*resource.resource[*cilium.io/v2.CiliumClusterwideNetworkPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration=922ns function="*resource.resource[*cilium.io/v2alpha1.CiliumCIDRGroup].Start" subsys=hive
level=info msg="Start hook executed" duration=792ns function="*resource.resource[*cilium.io/v2alpha1.CiliumEndpointSlice].Start" subsys=hive
level=info msg="Start hook executed" duration=847ns function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="58.507µs" function="*manager.manager.Start" subsys=hive
level=info msg="Start hook executed" duration=516ns function="k8s.newServiceCache.func1 (pkg/k8s/service_cache.go:144)" subsys=hive
level=info msg="Serving cilium node monitor v1.2 API at unix:///var/run/cilium/monitor1_2.sock" subsys=monitor-agent
level=info msg="Start hook executed" duration="138.046µs" function="agent.newMonitorAgent.func1 (pkg/monitor/agent/cell.go:61)" subsys=hive
level=info msg="Start hook executed" duration="1.424µs" function="*resource.resource[*cilium.io/v2alpha1.CiliumL2AnnouncementPolicy].Start" subsys=hive
level=info msg="Start hook executed" duration="1.189µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="5.149µs" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="25.444µs" function="envoy.newEnvoyAccessLogServer.func1 (pkg/envoy/cell.go:107)" subsys=hive
level=info msg="Start hook executed" duration="13.086µs" function="envoy.newArtifactCopier.func1 (pkg/envoy/cell.go:178)" subsys=hive
level=info msg="Envoy: Starting access log server listening on /var/run/cilium/envoy/sockets/access_log.sock" subsys=envoy-manager
level=info msg="Start hook executed" duration="80.689µs" function="envoy.newEnvoyXDSServer.func1 (pkg/envoy/cell.go:65)" subsys=hive
level=info msg="Start hook executed" duration="1.978µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Envoy: Starting xDS gRPC server listening on /var/run/cilium/envoy/sockets/xds.sock" subsys=envoy-manager
level=info msg="Start hook executed" duration=3.585486ms function="signal.provideSignalManager.func1 (pkg/signal/cell.go:25)" subsys=hive
level=info msg="Datapath signal listener running" subsys=signal
level=info msg="Start hook executed" duration=1.301441ms function="auth.registerAuthManager.func1 (pkg/auth/cell.go:112)" subsys=hive
level=info msg="Start hook executed" duration="4.897µs" function="auth.registerGCJobs.func1 (pkg/auth/cell.go:162)" subsys=hive
level=info msg="Start hook executed" duration="12.261µs" function="*job.group.Start" subsys=hive
level=info msg="Start hook executed" duration="1.477µs" function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="105.689µs" function="bigtcp.newBIGTCP.func1 (pkg/datapath/linux/bigtcp/bigtcp.go:240)" subsys=hive
level=info msg="Start hook executed" duration=841ns function="*cell.reporterHooks.Start" subsys=hive
level=info msg="Start hook executed" duration="63.976µs" function="*ipsec.keyCustodian.Start" subsys=hive
level=info msg="Start hook executed" duration=538ns function="*job.group.Start" subsys=hive
level=info msg="Inheriting MTU from external network interface" device=eth0 ipAddr=10.199.195.38 mtu=9001 subsys=mtu
level=info msg="Start hook executed" duration="322.389µs" function="mtu.newForCell.func1 (pkg/mtu/cell.go:40)" subsys=hive
level=info msg="Auto-enabling \"enable-node-port\", \"enable-external-ips\", \"bpf-lb-sock\", \"enable-host-port\", \"enable-session-affinity\" features" subsys=daemon
level=warning msg="No valid cgroup base path found: socket load-balancing tracing with Hubble will not work.See the kubeproxy-free guide for more details." subsys=cgroup-manager
level=info msg="Removed map pin at /sys/fs/bpf/tc/globals/cilium_ipcache, recreating and re-pinning map cilium_ipcache" file-path=/sys/fs/bpf/tc/globals/cilium_ipcache name=cilium_ipcache subsys=bpf
level=info msg="Restored services from maps" failedServices=0 restoredServices=31 subsys=service
level=info msg="Restored backends from maps" failedBackends=0 restoredBackends=118 skippedBackends=0 subsys=service
level=info msg="Reading old endpoints..." subsys=daemon
level=info msg="No old endpoints found." subsys=daemon
level=info msg="Waiting until all Cilium CRDs are available" subsys=k8s
level=info msg="All Cilium CRDs have been found and are available" subsys=k8s
level=info msg="Retrieved node information from kubernetes node" nodeName=ip-10-199-195-38.us-west-2.compute.internal subsys=daemon
level=info msg="k8s mode: Allowing localhost to reach local endpoints" subsys=daemon
level=info msg="Direct routing device detected" direct-routing-device=eth0 subsys=linux-datapath
level=info msg="Masquerading IP selected for device" device=eth0 ipv4=10.199.195.38 subsys=node
level=info msg="Masquerading IP selected for device" device=eth1 ipv4=100.68.14.231 subsys=node
level=info msg="Enabling k8s event listener" subsys=k8s-watcher
level=info msg="Using discoveryv1.EndpointSlice" subsys=k8s
level=info msg="Policy Add Request" ciliumNetworkPolicy="[&{EndpointSelector:{\"matchLabels\":{\"k8s:app.kubernetes.io/instance\":\"metrics-server\",\"k8s:app.kubernetes.io/name\":\"metrics-server\",\"k8s:io.kubernetes.pod.namespace\":\"kube-system\"}} NodeSelector:{} Ingress:[{IngressCommonRule:{FromEndpoints:[{}] FromRequires:[] FromCIDR: FromCIDRSet:[] FromEntities:[] aggregatedSelectors:[]} ToPorts:[{Ports:[{Port:8443 Protocol:TCP}] TerminatingTLS:<nil> OriginatingTLS:<nil> ServerNames:[] Listener:<nil> Rules:<nil>}] ICMPs:[] Authentication:<nil>}] IngressDeny:[] Egress:[{EgressCommonRule:{ToEndpoints:[{}] ToRequires:[] ToCIDR: ToCIDRSet:[] ToEntities:[] ToServices:[] ToGroups:[] aggregatedSelectors:[]} ToPorts:[] ToFQDNs:[] ICMPs:[] Authentication:<nil>}] EgressDeny:[] Labels:[k8s:io.cilium.k8s.policy.derived-from=NetworkPolicy k8s:io.cilium.k8s.policy.name=metrics-server k8s:io.cilium.k8s.policy.namespace=kube-system k8s:io.cilium.k8s.policy.uid=07cc9954-142a-43ed-a52e-bea9957666d5] Description:}]" policyAddRequest=0ad84dd0-b451-467f-99b8-c7a5fef80714 subsys=daemon
level=info msg="Policy imported via API, recalculating..." policyAddRequest=0ad84dd0-b451-467f-99b8-c7a5fef80714 policyRevision=2 subsys=daemon
level=info msg="NetworkPolicy successfully added" k8sApiVersion= k8sNetworkPolicyName=metrics-server subsys=k8s-watcher
level=info msg="Removing stale endpoint interfaces" subsys=daemon
level=info msg="Skipping kvstore configuration" subsys=daemon
level=info msg="Waiting until local node addressing before starting watchers depending on it" subsys=k8s-watcher
level=info msg="Initializing node addressing" subsys=daemon
level=info msg="Initializing CRD-based IPAM" subsys=ipam
level=info msg="Subscribed to CiliumNode custom resource" name=ip-10-199-195-38.us-west-2.compute.internal subsys=ipam
level=info msg="Creating or updating CiliumNode resource" node=ip-10-199-195-38.us-west-2.compute.internal subsys=nodediscovery
level=info msg="Added CiliumLocalRedirectPolicy" ciliumLocalRedirectPolicyName=node-local-dns-cilium k8sApiVersion=cilium.io/v2 k8sNamespace=kube-system k8sUID=7de53d1d-ee41-451b-a2e1-b265792a4924 subsys=k8s-watcher
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-195-118.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-195-119.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-195-130.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-195-135.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-195-146.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-195-183.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-195-218.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-195-240.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-195-241.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-195-242.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-195-56.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-196-100.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-196-165.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-196-200.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-196-241.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-196-50.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-196-65.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-196-7.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-196-73.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-196-85.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-196-90.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-105.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-108.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-116.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-13.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-137.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-14.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-152.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-46.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-54.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-6.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-75.us-west-2.compute.internal subsys=nodemanager
level=info msg="Node updated" clusterName=default nodeName=ip-10-199-197-77.us-west-2.compute.internal subsys=nodemanager
level=info msg="Waiting for CiliumNode custom resource to become available..." name=ip-10-199-195-38.us-west-2.compute.internal subsys=ipam
level=info msg="Successfully synchronized CiliumNode custom resource" name=ip-10-199-195-38.us-west-2.compute.internal subsys=ipam
level=info msg="Native routing CIDR does not contain VPC CIDR, trying next" ipv4-native-routing-cidr=100.0.0.0/8 subsys=ipam vpc-cidr=10.199.192.0/21
level=info msg="Native routing CIDR contains VPC CIDR, ignoring autodetected VPC CIDRs." ipv4-native-routing-cidr=100.0.0.0/8 subsys=ipam vpc-cidr=100.66.0.0/18
level=info msg="All required IPs are available in CRD-backed allocation pool" available=16 name=ip-10-199-195-38.us-west-2.compute.internal required=8 subsys=ipam
level=info msg="Restoring endpoints..." subsys=daemon
level=info msg="Endpoints restored" failed=0 restored=0 subsys=daemon
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x2d5c1e7]
goroutine 1 [running]:
github.com/cilium/cilium/daemon/cmd.coalesceCIDRs({0xc0011200e0, 0x2, 0x4?})
/go/src/github.com/cilium/cilium/daemon/cmd/ipam.go:189 +0x127
github.com/cilium/cilium/daemon/cmd.(*Daemon).allocateDatapathIPs(0xc001044000, {0x3f51010, 0xc0009f4b50}, {0x0, 0x0, 0x0}, {0x0, 0x0, 0x0})
/go/src/github.com/cilium/cilium/daemon/cmd/ipam.go:281 +0x1ec
github.com/cilium/cilium/daemon/cmd.(*Daemon).allocateRouterIPv4(0x60b3140?, {0x3f51010?, 0xc0009f4b50?}, {0x0?, 0x0?, 0x0?}, {0x0?, 0x0?, 0x2fc7940?})
/go/src/github.com/cilium/cilium/daemon/cmd/ipam.go:165 +0x1e5
github.com/cilium/cilium/daemon/cmd.(*Daemon).allocateIPs(0xc001044000, {0x3f466d8, 0xc000923cc0}, {{0x0, 0x0, 0x0}, {0x0, 0x0, 0x0}, {0x0, ...}, ...})
/go/src/github.com/cilium/cilium/daemon/cmd/ipam.go:475 +0xc5
github.com/cilium/cilium/daemon/cmd.newDaemon({0x3f466d8, 0xc000923cc0}, 0xc000cb96d0, 0xc00215f200)
/go/src/github.com/cilium/cilium/daemon/cmd/daemon.go:895 +0x50b2
github.com/cilium/cilium/daemon/cmd.newDaemonPromise.func1({0x3569a20, 0x496e00})
/go/src/github.com/cilium/cilium/daemon/cmd/daemon_main.go:1686 +0x66
github.com/cilium/cilium/pkg/hive/cell.Hook.Start(...)
/go/src/github.com/cilium/cilium/pkg/hive/cell/lifecycle.go:45
github.com/cilium/cilium/pkg/hive/cell.(*DefaultLifecycle).Start(0xc0004264b0, {0x3f46748?, 0xc0004b4620?})
/go/src/github.com/cilium/cilium/pkg/hive/cell/lifecycle.go:108 +0x337
github.com/cilium/cilium/pkg/hive.(*Hive).Start(0xc000ae5b80, {0x3f46748, 0xc0004b4620})
/go/src/github.com/cilium/cilium/pkg/hive/hive.go:310 +0xf9
github.com/cilium/cilium/pkg/hive.(*Hive).Run(0xc000ae5b80)
/go/src/github.com/cilium/cilium/pkg/hive/hive.go:210 +0x73
github.com/cilium/cilium/daemon/cmd.NewAgentCmd.func1(0xc000940400?, {0x3931281?, 0x4?, 0x39310ed?})
/go/src/github.com/cilium/cilium/daemon/cmd/root.go:39 +0x17b
github.com/spf13/cobra.(*Command).execute(0xc000004300, {0xc000072050, 0x1, 0x1})
/go/src/github.com/cilium/cilium/vendor/github.com/spf13/cobra/command.go:987 +0xaa3
github.com/spf13/cobra.(*Command).ExecuteC(0xc000004300)
/go/src/github.com/cilium/cilium/vendor/github.com/spf13/cobra/command.go:1115 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
/go/src/github.com/cilium/cilium/vendor/github.com/spf13/cobra/command.go:1039
github.com/cilium/cilium/daemon/cmd.Execute(0xc000ae5b80?)
/go/src/github.com/cilium/cilium/daemon/cmd/root.go:79 +0x13
main.main()
/go/src/github.com/cilium/cilium/daemon/main.go:14 +0x57
Anything else?
Not all agents experience this issue. We are seeing this behavior across multiple clusters - typically larger ones with more scaling activity.
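For context on the trace above: the panic fires inside coalesceCIDRs (daemon/cmd/ipam.go:189), called from allocateRouterIPv4 -> allocateDatapathIPs during startup, which suggests a nil entry being dereferenced in the CIDR list it walks. The sketch below is a minimal, hypothetical Go reproduction of that failure class — not the actual Cilium code; the function name and shape are borrowed from the trace only to show how a single nil *net.IPNet in the slice yields exactly this SIGSEGV signature.

```go
package main

import (
	"fmt"
	"net"
)

// coalesceCIDRs is a hypothetical stand-in for the helper named in the stack
// trace (daemon/cmd/ipam.go); the real Cilium implementation differs. It drops
// any CIDR already covered by an earlier one, assuming every entry is non-nil.
func coalesceCIDRs(cidrs []*net.IPNet) []*net.IPNet {
	out := make([]*net.IPNet, 0, len(cidrs))
	for _, c := range cidrs {
		covered := false
		for _, o := range out {
			// Reading c.IP dereferences c: a nil entry panics here with the
			// same "invalid memory address or nil pointer dereference" above.
			if o.Contains(c.IP) {
				covered = true
				break
			}
		}
		if !covered {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	_, vpc, _ := net.ParseCIDR("10.199.192.0/21")
	// An allocation result that was never populated leaves a nil entry:
	cidrs := []*net.IPNet{vpc, nil}
	fmt.Println(coalesceCIDRs(cidrs)) // panics with SIGSEGV
}
```

Under that assumption, a nil guard in the loop (or ensuring the IPAM allocation result is fully populated before coalescing) would avoid the crash; whether that matches the real root cause is for the maintainers to confirm against ipam.go:189.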