KVM is a virtualization solution for Linux widely used in enterprise environments. It allows you to build scalable overlay networks over a public network by means of Open vSwitch: layer 2 networks encapsulated with VXLAN (carried in UDP packets) or GRE tunnels enable an efficient and versatile use of the network infrastructure.
Docker is an open platform that lets you run applications inside well-isolated containers. Like KVM, it supports overlay networks, using native Linux bridges.
In this context, this article will explain how to share an overlay network between a KVM virtual machine and a Docker container.
The reference architecture is the following:
The diagram above represents one overlay network shared between a virtual machine running on KVM and a Docker container. The overlay network is encapsulated in UDP packets with a particular VNI (VXLAN Network Identifier).
docker-node-01 is a CentOS 7.2 system, while the KVM host runs Ubuntu 16.
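Once everything is in place, this encapsulation can be observed directly on the wire by capturing the VXLAN traffic exchanged between the two hosts. A minimal sketch, assuming the physical interface is called ens160 (as on the KVM host later in this article) and that VXLAN uses its IANA-assigned UDP port 4789, which is the Open vSwitch default:
# capture the VXLAN-encapsulated layer 2 traffic between the two hosts
tcpdump -n -i ens160 udp port 4789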
Let’s start with Docker overlay network configuration.
Docker overlay network configuration
Docker gives the possibility to create an overlay network natively with the following command:
[root@docker-node-01 docker]# docker network create --subnet 10.65.10.0/24 --driver overlay red
249690033b0dab7f42cbe0a0582609024d3b8c672ce488a74afe558dc2b91807
This overlay network uses VXLAN as the encapsulation protocol together with native Linux bridges. I already explained how it works in my article https://www.securityandit.com/network/inside-docker-overlay-network/.
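For reference, the driver and subnet of the network just created can be verified with a quick check:
# show the configuration of the native overlay network created above
docker network inspect red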
The native Docker method is useful only for overlay networks between containers: to integrate with the KVM overlay network it is necessary to link a virtual interface of the container to an Open vSwitch bridge.
To do that, you should first install Open vSwitch on the Docker system. For CentOS you can follow the procedure at this link http://supercomputing.caltech.edu/blog/index.php/2016/05/03/open-vswitch-installation-on-centos-7-2/. After that, a virtual switch can be created:
[root@docker-node-01 ~]#ovs-vsctl add-br ovs-br1
[root@docker-node-01 ~]#ovs-vsctl show
d08d383b-9a9f-4044-a308-195f7892ad05
Bridge "ovs-br1"
Port "ovs-br1"
Interface "ovs-br1"
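If ovs-vsctl fails with a connection error, the Open vSwitch daemons are probably not running yet; a quick check, assuming the systemd unit installed by the CentOS package is called openvswitch as in the Ubuntu section below:
# make sure the Open vSwitch daemons run now and at boot
systemctl enable openvswitch && systemctl start openvswitch
# print the installed Open vSwitch version as a sanity check
ovs-vsctl --version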
Now the Docker container must be created. In my laboratory I used a lightweight image called busybox, a minimal Linux distribution.
[root@docker-node-01 ~]# docker run --name container_01 -itd busybox
[root@docker-node-01 ~]# docker exec -it container_01 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
136: eth0@if137: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:09 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.9/16 scope global eth0
valid_lft forever preferred_lft forever
As you can see, only one interface, used to reach the external world, was created in the container.
Another virtual interface, directly connected to the virtual switch created above, is now added:
[root@docker-node-01 ~]# ovs-docker add-port ovs-br1 eth1 container_01 --ipaddress=10.65.10.2/24
[root@docker-node-01 ~]# docker exec -it container_01 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
136: eth0@if137: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:09 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.9/16 scope global eth0
valid_lft forever preferred_lft forever
138: eth1@if139: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 7e:54:37:b3:3b:26 brd ff:ff:ff:ff:ff:ff
inet 10.65.10.2/24 scope global eth1
valid_lft forever preferred_lft forever
[root@docker-node-01 ~]# ovs-vsctl show
d08d383b-9a9f-4044-a308-195f7892ad05
Bridge "ovs-br1"
Port "ovs-br1"
Interface "ovs-br1"
type: internal
Port "f12b29c5c66c4_l"
Interface "f12b29c5c66c4_l"
ovs_version: "2.5.1"
A new interface (138: eth1) has been created in the Docker container, directly connected to the ovs-br1 switch. The following command shows how the virtual interfaces are connected: the interface f12b29c5c66c4_l on the ovs-br1 switch is the other end of the veth pair whose peer is interface 138: eth1 inside the container.
[root@docker-node-01 ~]# ethtool -S f12b29c5c66c4_l
NIC statistics:
peer_ifindex: 138
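The same pairing can be verified from inside the container as well, without ethtool: the kernel exposes the peer ifindex of a veth device in sysfs, so the command below should print 139 in this example (a small sketch using the container created above):
# print the ifindex of the host-side peer of eth1
docker exec -it container_01 cat /sys/class/net/eth1/iflink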
The last step is to create another port attached to the virtual switch to enable the VXLAN tunneling protocol towards the KVM system:
[root@docker-node-01 ~]# ovs-vsctl add-port ovs-br1 tun0 -- set interface tun0 type=vxlan options:remote_ip=192.168.1.51 options:key=123
[root@docker-node-01 ~]# ovs-vsctl show
d08d383b-9a9f-4044-a308-195f7892ad05
Bridge "ovs-br1"
Port "tun0"
Interface "tun0"
type: vxlan
options: {key="123", remote_ip="192.168.1.51"}
Port "ovs-br1"
Interface "ovs-br1"
type: internal
Port "f12b29c5c66c4_l"
Interface "f12b29c5c66c4_l"
ovs_version: "2.5.1"
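One detail worth keeping in mind is the MTU: VXLAN adds roughly 50 bytes of overhead to every frame, so with a standard 1500-byte underlay either the physical network must allow larger frames or the overlay interfaces should use a smaller MTU. A possible adjustment is sketched below; it is only an illustration, since changing the MTU from inside a container requires the NET_ADMIN capability (e.g. a container started with --cap-add NET_ADMIN), otherwise it has to be done from the host network namespace:
# lower the overlay interface MTU to leave room for the VXLAN header
docker exec -it container_01 ip link set eth1 mtu 1450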
Now let's configure the KVM system.
KVM overlay network configuration
The installation of KVM is very simple: on Ubuntu you should install the following packages:
[root@kvm-node-01 ~]#apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils
After that, you need to install virt-manager, a GUI for managing virtual machines.
[root@kvm-node-01 ~]#apt-get install virt-manager
Virt-manager is a virtualization client that uses the services of libvirtd, the interface daemon to the QEMU-KVM hypervisor. To start it:
[root@kvm-node-01 ~]#systemctl enable libvirtd && systemctl start libvirtd
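Before going on, it can be useful to verify that hardware virtualization is available and that libvirtd answers; a quick check (kvm-ok comes from the cpu-checker package on Ubuntu):
# check that the CPU and BIOS allow KVM hardware acceleration
kvm-ok
# confirm that libvirtd is reachable through the qemu:///system URI
virsh -c qemu:///system version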
The Open vSwitch installation on Ubuntu is simpler than on CentOS:
[root@kvm-node-01 ~]#apt-get install openvswitch-switch
[root@kvm-node-01 ~]#systemctl enable openvswitch-switch && systemctl start openvswitch-switch
The system is now ready for creating the virtual machine. Before creating it, a virtual switch must be created, as was done on the Docker system:
[root@kvm-node-01 ~]# ovs-vsctl add-br ovs-br1
[root@kvm-node-01 ~]# ovs-vsctl show
2a9aea3b-2ea4-40b0-b6ba-c878ef9fffdd
Bridge "ovs-br1"
Port "ens160"
Interface "ens160"
Port "ovs-br1"
Interface "ovs-br1"
type: internal
The virtual interface tun0 is now added to the virtual switch in order to tunnel all the layer 2 traffic towards the virtual switch of the Docker system. The VXLAN VNI is the same as before, 123:
[root@kvm-node-01 ~]# ovs-vsctl add-port ovs-br1 tun0 -- set Interface tun0 type=vxlan options:remote_ip=192.168.1.52 options:key=123
[root@kvm-node-01 ~]# ovs-vsctl show
2a9aea3b-2ea4-40b0-b6ba-c878ef9fffdd
Bridge "ovs-br1"
Port "ens160"
Interface "ens160"
Port "ovs-br1"
Interface "ovs-br1"
type: internal
Port "tun0"
Interface "tun0"
type: vxlan
options: {key="123", remote_ip="192.168.1.52"}
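At this point both ends of the tunnel are configured. As a sanity check, sketched here, verify that the underlay peer is reachable and that Open vSwitch stored the VXLAN options on tun0:
# verify underlay connectivity towards the Docker node
ping -c 2 192.168.1.52
# show the options (remote_ip, key) configured on the tunnel port
ovs-vsctl list interface tun0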
The next step before creating the virtual machine is to define a KVM bridged network attached to the virtual switch ovs-br1 created above.
[root@kvm-node-01 ~]# vi overlay-network.xml
<network>
<name>OverlayNetwork</name>
<forward mode='bridge'/>
<bridge name='ovs-br1'/>
<virtualport type='openvswitch'/>
<portgroup name='novlan' default='yes'>
</portgroup>
</network>
[root@kvm-node-01 ~]# virsh net-define overlay-network.xml
Network OverlayNetwork defined from overlay-network.xml
[root@kvm-node-01 ~]# virsh net-start OverlayNetwork
Network OverlayNetwork started
[root@kvm-node-01 ~]# virsh net-list
Name State Autostart Persistent
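Optionally, the network can also be marked to start automatically at boot:
# start OverlayNetwork automatically when libvirtd starts
virsh net-autostart OverlayNetwork
# list all defined networks with their autostart flag
virsh net-list --all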
Now it's possible to create a virtual machine and add an interface on OverlayNetwork to it. To do that I used the virt-manager GUI. The virtual machine is called vm-01:
[root@kvm-node-01 ~]# virsh -c qemu:///system list
Id Name State
—————————————————-
13 vm-01 running
[root@kvm-node-01 ~]#virsh domiflist vm-01
Interface Type Source Model MAC
——————————————————-
vnet0 bridge default rtl8139 52:54:00:cb:61:09
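The same interface can also be added without the GUI, by editing the domain XML with virsh edit vm-01. A sketch of the relevant fragment, reusing the libvirt network and portgroup defined above (the NIC model may of course differ):
<interface type='network'>
  <source network='OverlayNetwork' portgroup='novlan'/>
  <model type='rtl8139'/>
</interface>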
The virtual interface vnet0 has been added to the virtual switch ovs-br1:
[root@kvm-node-01 ~]# ovs-vsctl show
2a9aea3b-2ea4-40b0-b6ba-c878ef9fffdd
Bridge "ovs-br1"
Port "ens160"
Interface "ens160"
Port "ovs-br1"
Interface "ovs-br1"
type: internal
Port "vnet0"
Interface "vnet0"
Port "tun0"
Interface "tun0"
type: vxlan
options: {key="123", remote_ip="192.168.1.52"}
It's possible to assign the IP address 10.65.10.1/24 to the corresponding interface inside the virtual machine and then ping it from the Docker container.
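Inside the guest the address can be assigned, for example, with iproute2; a small sketch, assuming the interface attached to OverlayNetwork shows up as eth0 in the guest (the name depends on the distribution):
# assign the overlay address to the guest interface and bring it up
ip addr add 10.65.10.1/24 dev eth0
ip link set eth0 up
Then, from the Docker container: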
[root@docker-node-01 ~]#docker exec -it 6276d2565185 ping 10.65.10.1
PING 10.65.10.1 (10.65.10.1): 56 data bytes
64 bytes from 10.65.10.1: seq=0 ttl=64 time=0.475 ms
64 bytes from 10.65.10.1: seq=1 ttl=64 time=0.421 ms
--- 10.65.10.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
The following is the trace captured with Wireshark:
Conclusions
In this article I showed how it is possible to integrate a KVM virtual machine with a Docker container using Open vSwitch and the VXLAN protocol.
This approach can be used to extend layer 2 networks across different data centers, creating a scalable and highly available infrastructure.
It is also possible to use tagged VLANs and extend them through the VXLAN VNI field.
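For example, a virtual machine port can be confined to a VLAN directly on the Open vSwitch bridge, and different VLANs can then be mapped to different VXLAN tunnels with distinct VNIs. A minimal sketch of the tagging part only (VLAN 100 is an arbitrary value):
# tag the traffic of the VM port vnet0 with VLAN 100 on the OVS bridge
ovs-vsctl set port vnet0 tag=100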
Don’t hesitate to contact me for any question or suggestion.