
Upgrade bundled GlusterFS tools #43069

@bootc

Description


I recently tried mounting a GlusterFS volume (running outside Kubernetes) within my K8s cluster, but mounting failed with "Server is operating at an op-version which is not supported" errors. After much investigation, it appears the cause is the old GlusterFS tools bundled in the hyperkube image.
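For context, when a pod references a GlusterFS volume, kubelet shells out to the GlusterFS FUSE client bundled in the image to perform the mount; roughly the following, where the server name and pod path are illustrative:

mount -t glusterfs gluster-server:/glusterfsvol /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~glusterfs/glusterfsvol

It is therefore the client binaries baked into hyperkube, not anything on the host, that negotiate the op-version with the server.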

Kubernetes version (use kubectl version):
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.4+coreos.0", GitCommit:"97c11b097b1a2b194f1eddca8ce5468fcc83331c", GitTreeState:"clean", BuildDate:"2017-03-08T23:54:21Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: self-hosted "bare metal" VMs
  • OS (e.g. from /etc/os-release): CoreOS 1298.5.0
  • Kernel (e.g. uname -a): 4.9.9-coreos-r1
  • Install tools: coreos, matchbox, bootkube

Anything else we need to know:

The underlying issue seems to be that hyperkube is built on a Debian Jessie (stable/8.7) base image, and Debian stable only carries GlusterFS 3.5.2. GlusterFS 3.8.8 is available in Stretch (testing) as well as in jessie-backports, so it should be reasonably straightforward to pull more recent versions of those packages into the hyperkube build.
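As a sketch of what that could look like in the hyperkube image build (the Dockerfile layout here is an assumption on my part; glusterfs-client is the actual Debian package name):

echo 'deb http://deb.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/backports.list
apt-get update
apt-get install -y -t jessie-backports glusterfs-client

The -t jessie-backports switch makes apt prefer the backports candidate for just this install, so the rest of the image stays on stable packages.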

The errors produced go into /var/lib/kubelet/plugins/kubernetes.io/glusterfs/glusterfsvol/glusterfs-glusterfs.log and look like:

[2017-03-10 09:43:33.094004] E [glusterfsd-mgmt.c:1297:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2017-03-10 09:43:33.094087] E [glusterfsd-mgmt.c:1388:mgmt_getspec_cbk] 0-mgmt: Server is operating at an op-version which is not supported
[2017-03-10 09:43:33.126728] E [glusterfsd-mgmt.c:1297:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2017-03-10 09:43:33.126783] E [glusterfsd-mgmt.c:1388:mgmt_getspec_cbk] 0-mgmt: Server is operating at an op-version which is not supported
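To confirm the mismatch, two quick checks (the image tag is taken from my cluster and the paths are GlusterFS defaults, so treat both as assumptions):

docker run --rm quay.io/coreos/hyperkube:v1.5.4_coreos.0 glusterfs --version
grep operating-version /var/lib/glusterd/glusterd.info

The first prints the client version shipped in the hyperkube image; the second, run on the Gluster server itself, shows the op-version the cluster is operating at.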

Please consider upgrading these tools for interoperability with newer Gluster volumes.

I originally reported this as coreos/coreos-kubernetes#849.

    Labels

    lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
    sig/release: Categorizes an issue or PR as relevant to SIG Release.
    sig/storage: Categorizes an issue or PR as relevant to SIG Storage.
