
Podman errors using CNI on RISC-V architecture #3462

@carlosedp

Description

After all the work by @giuseppe towards having Podman on the RISC-V architecture, everything works fine with containers (building, running, etc.), but I'm seeing some problems with CNI (i.e. when not using --net host).

➜ podman version
Version:            1.4.4-dev
RemoteAPI Version:  1
Go Version:         devel +f980a63fcb Fri May 24 20:26:57 2019 +1000
Git Commit:         ffbc4a97801a59a887c49016a17efd0782c1aa77
Built:              Thu Jun 27 10:34:21 2019
OS/Arch:            linux/riscv64

➜ sudo podman info --debug
debug:
compiler: gc
git commit: ffbc4a97801a59a887c49016a17efd0782c1aa77
go version: devel +f980a63fcb Fri May 24 20:26:57 2019 +1000
podman version: 1.4.4-dev
host:
BuildahVersion: 1.9.0
Conmon:
    package: Unknown
    path: /usr/local/bin/conmon
    version: 'conmon version 0.4.1-dev, commit: 2a7ec7b01abd46bc3084571097bd1a949173f245'
Distribution:
    distribution: fedora
    version: "31"
MemFree: 5505134592
MemTotal: 6247546880
OCIRuntime:
    package: Unknown
    path: /usr/local/bin/crun
    version: crun 0.6
SwapFree: 0
SwapTotal: 0
arch: riscv64
cpus: 6
hostname: fedora-riscv
kernel: 5.1.0-06536-gef75bd71c5d3-dirty
os: linux
rootless: false
uptime: 12m 47.92s
registries:
blocked: null
insecure: null
search:
- docker.io
- registry.fedoraproject.org
- registry.access.redhat.com
store:
ConfigFile: /etc/containers/storage.conf
ContainerStore:
    number: 1
GraphDriverName: overlay
GraphOptions: null
GraphRoot: /var/lib/containers/storage
GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
ImageStore:
    number: 1
RunRoot: /var/run/containers/storage
VolumePath: /var/lib/containers/storage/volumes

On Fedora VM:

Linux fedora-riscv 5.1.0-06536-gef75bd71c5d3-dirty #9 SMP Mon Jun 24 18:28:34 -03 2019 riscv64 riscv64 riscv64 GNU/Linux

➜ sudo iptables --version
iptables v1.8.0 (legacy)

Starting a podman container with CNI gives:

➜ sudo podman run -d --name echo -p 8080:8080 carlosedp/echo_on_riscv
Error: unable to start container "echo": error adding firewall rules for container 85a04312ec0a55e5e93d6cd057217a7951d47493ebf9fe35cb892bab66f7e1ed: failed to add the address 10.88.0.4/32 to trusted zone: COMMAND_FAILED: '/usr/sbin/iptables-restore -w -n' failed: iptables-restore: line 9 failed
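The failing step can be poked at outside of Podman. This is only a diagnostic sketch, assuming firewalld is the active firewall backend on the Fedora VM (the "trusted zone" wording in the error suggests the CNI firewall plugin is going through firewalld):

# Is firewalld running, and which zones are active?
sudo firewall-cmd --state
sudo firewall-cmd --get-active-zones

# Sources already added to the trusted zone (the plugin tried to add 10.88.0.4/32 here)
sudo firewall-cmd --zone=trusted --list-sources

# firewalld logs the failing rule set, which should show what "line 9" of the restore input was
sudo journalctl -u firewalld --no-pager | tail -n 50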

After this, deleting the container and starting a new one with host networking, the new container doesn't give an error but the application becomes inaccessible until reboot.

carlosedp in ~ at fedora-riscv
➜ sudo podman run -d --name echo --net host -p 8080:8080 carlosedp/echo_on_riscv
b784d5b9ccc6c95f7867129b0a84211a5db56b4b4fa595d65959dd401f0c6c27

➜ sudo podman ps -a
CONTAINER ID  IMAGE                                     COMMAND      CREATED         STATUS             PORTS  NAMES
b784d5b9ccc6  docker.io/carlosedp/echo_on_riscv:latest  /echo-riscv  29 seconds ago  Up 27 seconds ago         echo

➜ curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: No route to host

➜ sudo netstat -anp |grep 8080
tcp6       0      0 :::8080                 :::*                    LISTEN      1383/echo-riscv
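A listener on the IPv6 wildcard (:::8080) normally still accepts IPv4 connections unless v6-only binding is forced, so the "No route to host" may instead come from a REJECT rule (Fedora's default reject-with icmp-host-prohibited) left behind by the failed firewall setup. A quick check, assuming default sysctls:

# 0 means a :: listener also serves IPv4-mapped connections
sysctl net.ipv6.bindv6only

# Look for REJECT/DROP rules that would produce "No route to host"
sudo iptables -L INPUT -n -v
sudo iptables -L FORWARD -n -v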

With the host network (--net host), it works (start, stop, delete).

On Debian VM:

Linux debian-riscvqemu 5.1.0-06536-gef75bd71c5d3 #6 SMP Sun Jun 9 12:37:11 -03 2019 riscv64 GNU/Linux
➜  ~ sudo iptables --version
iptables v1.8.2 (nf_tables)

The container starts and runs with CNI, but shows an error when it is deleted.

➜  ~ sudo podman ps -a
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
➜  ~ sudo podman images
REPOSITORY                          TAG      IMAGE ID       CREATED      SIZE
docker.io/carlosedp/echo_on_riscv   latest   20d457ffcf56   2 days ago   9.08 MB
➜  ~ sudo podman run -d --name echo -p 8080:8080 carlosedp/echo_on_riscv
    dd905c3e6ae6f8eced9c3e870d0d54ecbb461f3da40d334623dd5cb6f0486694
➜  ~ curl localhost:8080
Hello, World! I'm running on linux/riscv64 inside a container!%
➜  ~ sudo podman rm -f echo
ERRO[0000] Error deleting network: could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -N CNI-DN-eef0b591187e4d05dada4 --wait]: exit status 1: iptables v1.8.2 (nf_tables): Chain already exists
ERRO[0000] Error while removing pod from CNI network "podman": could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -N CNI-DN-eef0b591187e4d05dada4 --wait]: exit status 1: iptables v1.8.2 (nf_tables): Chain already exists
ERRO[0000] unable to cleanup network for container dd905c3e6ae6f8eced9c3e870d0d54ecbb461f3da40d334623dd5cb6f0486694: "error tearing down CNI namespace configuration for container dd905c3e6ae6f8eced9c3e870d0d54ecbb461f3da40d334623dd5cb6f0486694: could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -N CNI-DN-eef0b591187e4d05dada4 --wait]: exit status 1: iptables v1.8.2 (nf_tables): Chain already exists\n"
dd905c3e6ae6f8eced9c3e870d0d54ecbb461f3da40d334623dd5cb6f0486694
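The "Chain already exists" on iptables -t nat -N during teardown looks like the known incompatibility between the CNI plugins of that era and the nf_tables iptables backend (Debian's v1.8.2 here, versus the legacy v1.8.0 on the Fedora VM). As a sketch of a possible workaround, not verified on riscv64, Debian lets you switch to the legacy backend through the alternatives system:

# See what the portmap plugin left behind in the nat table
sudo iptables -t nat -L -n | grep CNI-DN

# Possible workaround: switch to the legacy iptables backend, then reboot so CNI starts clean
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo reboot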

After this, even when starting a new container with the host network, the application becomes inaccessible until reboot.

➜  ~ sudo podman run -d --net host --name echo -p 8080:8080 carlosedp/echo_on_riscv
f5c55947ec0a54bf1e26676f9781c8149967278a7df8f328e712313be8f8752f
➜  ~ curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused

Its port is only bound on the IPv6 stack:

➜  ~ sudo podman ps -a
CONTAINER ID  IMAGE                                     COMMAND      CREATED         STATUS             PORTS  NAMES
f5c55947ec0a  docker.io/carlosedp/echo_on_riscv:latest  /echo-riscv  33 seconds ago  Up 31 seconds ago         echo
➜  ~ sudo netstat -anp |grep 8080
tcp6       0      0 :::8080                 :::*                    LISTEN      1252/echo-riscv
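Since only the IPv6 wildcard socket shows up, a quick way to narrow down the "Connection refused" is to hit each loopback address explicitly (just a diagnostic suggestion):

# IPv4 loopback, which is likely what curl localhost:8080 resolved to first
curl http://127.0.0.1:8080

# IPv6 loopback; -g (--globoff) lets the bracketed literal through
curl -g 'http://[::1]:8080'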

Let me know how I can further help debug this. There is a Debian VM available for download here in case it's needed.
