Understand Container 7: use CNI to set up the network

CNI stands for Container Network Interface. It originated at CoreOS as the network solution for rkt, and beat Docker's CNM to become the network plugin interface adopted by k8s.
In this blog we are going to see how to use CNI, to be specific the bridge plugin, to set up the network for containers spawned by runc, and achieve the same result/topology as we did in the last blog using netns as a hook.

Overview

The caller/user of CNI (e.g. you calling from a shell, or a container runtime/orchestrator such as runc or k8s) interacts with a plugin using two things: a network configuration file and a few environment variables. The configuration file describes the network (or subnet) the container is supposed to connect to; the environment variables tell the plugin where to find the plugin binaries and network configuration files, plus "add/delete which container to/from which network namespace", which could just as well be implemented by passing arguments to the plugin (instead of using environment variables). It's not a big issue, but it looks a little "unusual" to pass arguments through the environment.
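To make the calling convention concrete, here is a minimal sketch of an ADD call (the pid and the config file name are illustrative; the config is fed on stdin and the result comes back on stdout):

CNI_COMMAND=ADD \
CNI_CONTAINERID=c1 \
CNI_NETNS=/proc/12345/ns/net \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge < mynet.conf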
For a more detailed introduction to CNI, see here and here.

Use CNI plugins

build/install plugins

go get github.com/containernetworking/plugins
cd $GOPATH/src/github.com/containernetworking/plugins
./build.sh
mkdir -p /opt/cni/bin
sudo cp bin/* /opt/cni/bin/
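As a quick sanity check, the plugin binaries should now be in place (the exact list varies with the plugins version):

$ ls /opt/cni/bin
bridge  dhcp  flannel  host-local  ipvlan  loopback  macvlan  ptp  ...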

Use CNI

We'll be using the following simple (and dirty) script to exercise CNI with runc. It covers all the essential concepts in one place, which is nice.
$ cat runc_cni.sh
#!/bin/sh

# must be run as root
action=$1   # CNI command: ADD, DEL, or VERSION
cid=$2      # container id, as known to runc
# the pid of the container's init process locates its network namespace
pid=$(runc ps $cid | sed '1d' | awk '{print $2}')
plugin=/opt/cni/bin/bridge

export CNI_PATH=/opt/cni/bin/
export CNI_IFNAME=eth0
export CNI_COMMAND=$action
export CNI_CONTAINERID=$cid
export CNI_NETNS=/proc/$pid/ns/net

$plugin <<EOF
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cnibr0",             
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "172.19.1.0/24",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ],
     "dataDir": "/run/ipam-state"
    },
    "dns": {
      "nameservers": [ "8.8.8.8" ]
    }
}
EOF
It may not be obvious to a newcomer that we are using two plugins here: bridge and host-local. The former sets up the bridge network (as well as the veth pair), and the latter allocates and assigns IPs to the containers (and the bridge gateway), which is called IPAM (IP Address Management), as you might have noticed in the config key.
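As a side note, host-local keeps its allocations as plain files on disk under the dataDir we configured above; the layout below is illustrative and the exact file names vary with the plugin version (roughly, one file per allocated IP, named after the IP and containing the container id):

$ ls /run/ipam-state/mynet
172.19.1.6  last_reserved_ip.0
$ cat /run/ipam-state/mynet/172.19.1.6
c1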
The internal working of the bridge plugin is almost the same as that of the netns hook, so we are not going to repeat it here.
Start a container called c1: sudo runc run c1.
Then, put c1 into the network:
sudo ./runc_cni.sh ADD c1
Below is the output, telling you the IP and gateway of c1, among other things.
{
    "cniVersion": "0.2.0",
    "ip4": {
        "ip": "172.19.1.6/24",
        "gateway": "172.19.1.1",
        "routes": [
            {
                "dst": "0.0.0.0/0",
                "gw": "172.19.1.1"
            }
        ]
    },
    "dns": {
        "nameservers": [
            "8.8.8.8"
        ]
    }
}
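If you inspect the host after the ADD, you should see the bridge and one side of the veth pair attached to it; the interface names and indexes below are illustrative:

$ ip link show cnibr0
4: cnibr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
$ ip link show type veth
5: veth97d8a6b0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> ... master cnibr0 ...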
You can create another container c2 and put it into the same network in a similar way; now we have a subnet with two containers inside. They can talk to each other, and both can ping outside IPs, thanks to the route setting and IP masquerading. However, DNS won't work: the plugin merely returns the DNS information in its result, and it is up to the caller to write it into the container's /etc/resolv.conf.
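A quick check from inside the containers (assuming c2 got 172.19.1.7; your IPs may differ):

# inside c1
ping -c 1 172.19.1.7   # the peer container, via cnibr0
ping -c 1 8.8.8.8      # the outside world, via the default route plus masquerade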
You can also remove a container from the network, after which the container is no longer connected to the bridge.
sudo ./runc_cni.sh DEL c1
However, the IP allocation won't be reclaimed automatically; you have to do that "manually".
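For the host-local plugin used here, that means deleting the allocation file it left behind under the dataDir and network name we configured (the IP below is illustrative):

sudo rm /run/ipam-state/mynet/172.19.1.6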
That is it; as we said, this is a short ride. Have fun with CNI.
