TOB-K8S-009: Superficial health check provides false sense of safety #81141

@cji

This issue was reported in the Kubernetes Security Audit Report.

Description
Kubernetes includes many components that can fail for a multitude of reasons. Health checks are an important tool in mitigating unnoticed component failures. However, the kubeadm health checks are superficial, and do not contain actual service checks:

```go
func CheckClusterHealth(client clientset.Interface, ignoreChecksErrors sets.String) error {
    fmt.Println("[upgrade] Making sure the cluster is healthy:")

    healthChecks := []preflight.Checker{
        &healthCheck{
            name:   "APIServerHealth",
            client: client,
            f:      apiServerHealthy,
        },
        &healthCheck{
            name:   "MasterNodesReady",
            client: client,
            f:      masterNodesReady,
        },
        // TODO: Add a check for ComponentStatuses here?
    }

    healthChecks = append(healthChecks, &healthCheck{
        name:   "StaticPodManifest",
        client: client,
        f:      staticPodManifestHealth,
    })

    return preflight.RunChecks(healthChecks, os.Stderr, ignoreChecksErrors)
}
```

Figure 31.2: The CheckClusterHealth check; note specifically the TODO

Such superficial checks may give the appearance of a healthy set of Pods or nodes despite a subtler failure that requires attention.

Exploit Scenario
Alice configures a Kubernetes cluster using the base configuration and distribution. Alice assumes the Kubernetes health check includes all connected control plane components, but it only includes the API server and master nodes, not components such as the scheduler or controller manager.

Recommendation
Short term, ensure that all essential control plane components are included in the preflight health checks.

Long term, consider taking a modular approach for health checks, allowing arbitrary components to be included in the preflight health checks.

Anything else we need to know?:

See #81146 for current status of all issues created from these findings.

The vendor gave this issue an ID of TOB-K8S-009 and it was finding 33 of the report.

The vendor considers this issue Informational Severity.

To view the original finding, begin on page 79 of the Kubernetes Security Review Report

Environment:

  • Kubernetes version: 1.13.4

Metadata

Assignees

No one assigned

Labels

  • area/security
  • kind/bug (Categorizes issue or PR as related to a bug.)
  • sig/cluster-lifecycle (Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.)
  • wg/security-audit (Categorizes an issue or PR as relevant to WG Security Audit.)
