how to enable SR-IOV in docker container

  • how to verify basic NIC information
ethtool -i p1p1
lspci | grep Ethernet
lspci -Dvmm|grep -B 1 -A 4 Ethernet
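The PCI slot address these commands report is needed later when writing to `sriov_numvfs`. As a sketch, the slot can be pulled out of `lspci -Dvmm`-style output with awk (the sample output below is illustrative, not from a specific machine):

```shell
#!/bin/sh
# Sketch: extract PCI slot addresses of Ethernet devices from
# `lspci -Dvmm` output read on stdin.
ethernet_slots() {
  awk '/^Slot:/ { slot=$2 }
       /^Class:.*Ethernet/ { print slot }'
}

# Illustrative sample of `lspci -Dvmm` output:
printf '%s\n' \
  'Slot:   0000:82:00.0' \
  'Class:  Ethernet controller' \
  'Vendor: Intel Corporation' \
  '' \
  'Slot:   0000:00:1f.2' \
  'Class:  SATA controller' | ethernet_slots
# prints: 0000:82:00.0
```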

# if using Intel chipsets, enable IOMMU in grub config file
sudo vi /etc/default/grub
GRUB_CMDLINE_LINUX="... intel_iommu=on ..."

sudo update-grub
sudo reboot

  • how to set the number of VFs to ixgbe kernel driver
sudo vi /etc/modprobe.d/ixgbe.conf
options ixgbe max_vfs=8
# alternative runtime command
echo 2 > /sys/bus/pci/devices/0000:82:00.0/sriov_numvfs

# in some cases, it is required to set ixgbe.max_vfs parameter in grub
sudo vi /etc/default/grub
GRUB_CMDLINE_LINUX="... intel_iommu=on ... ixgbe.max_vfs=8"

sudo update-grub
sudo reboot
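The sysfs write above fails with EINVAL if the requested count exceeds `sriov_totalvfs`, or if `sriov_numvfs` is already non-zero (it must be reset to 0 first). A hedged helper sketch, with the device directory passed in as a parameter (`set_numvfs` is a hypothetical name, not a standard tool):

```shell
#!/bin/sh
# Sketch: safely set the VF count for a PF via sysfs.
# Usage: set_numvfs <device-sysfs-dir> <count>
# e.g.   set_numvfs /sys/bus/pci/devices/0000:82:00.0 2
set_numvfs() {
  dev_dir=$1
  count=$2
  total=$(cat "$dev_dir/sriov_totalvfs")
  if [ "$count" -gt "$total" ]; then
    echo "error: $count VFs requested, device supports at most $total" >&2
    return 1
  fi
  # The kernel rejects changing a non-zero sriov_numvfs directly,
  # so reset to 0 before writing the new count.
  echo 0 > "$dev_dir/sriov_numvfs"
  echo "$count" > "$dev_dir/sriov_numvfs"
}
```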

  • how to verify whether the VFs are created successfully
sudo lspci |grep Ethernet |grep "Virtual Function"
ip link show

  • how to manually enable VF in docker container
ifconfig p1p1_0   # confirm the VF netdev exists on the host
pid=$(docker inspect -f '{{.State.Pid}}' $dName)
sudo mkdir -p /var/run/netns/
sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid
sudo ip link set $host_if netns $pid
sudo ip netns exec $pid ip link list
sudo ip netns exec $pid ip link set $host_if name $guest_if
sudo ip netns exec $pid ip addr add $ip_addr dev $guest_if
sudo ip netns exec $pid ip link set $guest_if up
docker exec -it $dName ifconfig
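The steps above can be collected into one hedged script. The `run` wrapper below only echoes each command (a dry run); remove it, or change it to `sudo`, to execute for real. The function name `attach_vf` is illustrative:

```shell
#!/bin/sh
# Sketch: move a host VF interface into a container's network
# namespace. Dry run: `run` only echoes the commands it is given.
run() { echo "$@"; }

# attach_vf <container-pid> <host-if> <guest-if> <ip/cidr>
# Get the pid with: docker inspect -f '{{.State.Pid}}' $dName
attach_vf() {
  pid=$1; host_if=$2; guest_if=$3; ip_addr=$4
  run mkdir -p /var/run/netns/
  run ln -s "/proc/$pid/ns/net" "/var/run/netns/$pid"
  run ip link set "$host_if" netns "$pid"
  run ip netns exec "$pid" ip link set "$host_if" name "$guest_if"
  run ip netns exec "$pid" ip addr add "$ip_addr" dev "$guest_if"
  run ip netns exec "$pid" ip link set "$guest_if" up
}
```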

  • how to use pipework to enable VF in docker container
sudo wget -O /usr/local/bin/pipework \
sudo chmod +x /usr/local/bin/pipework
docker ps
pipework --direct-phys p1p1_0 -i eth1 $dName $dIP
docker ps
docker exec -it $dName /bin/bash
# pipework
# pipework is a shell script that makes complex container networking easy;
# by default it attaches containers through macvlan subinterfaces on the host NIC.

APACHE=$(docker run --name dApache -d apache /usr/sbin/httpd -D FOREGROUND)
MYSQL=$(docker run --name dMysql -d mysql /usr/sbin/mysqld_safe)
pipework br1 $APACHE
pipework br1 $MYSQL
ip addr add $bridge_addr dev br1       // private subnet on the host bridge
pipework eth1 $CONTAINERID [dhclient | dhcp | dhcp:-f]  // use DHCP client
pipework br1 $CONTAINERID $ip_addr@$gateway   // config gateway
pipework br1 -i $guest_if_name -l $host_if_name ...

pipework eth2 $(docker run -d hipache /usr/sbin/hipache)
pipework eth3 $(docker run -d hipache /usr/sbin/hipache)

# Note that this will use macvlan subinterfaces, so you can actually put multiple
# containers on the same physical interface. If you don't want to virtualize
# the interface, you can use the --direct-phys option to namespace an interface
# exclusively to a container without using a macvlan bridge.

pipework --direct-phys $VFNAME $CONTAINERID

# This is useful for assigning SR-IOV VFs to containers, but be aware of added
# latency when using the NIC to switch packets between containers on the same host

# pipework forked
# The forked pipework is a derivative that adds SR-IOV VF support, though it is
# unclear how it differs from stock pipework's --direct-phys option.

# Integrating pipework with other tools
# @dreamcat4 has built an amazing fork of pipework that can be integrated with
# other tools in the Docker ecosystem, like Compose or Crane. It can be used in
# "one shot," to create a bunch of network connections between containers;
# it can run in the background as a daemon, watching the Docker events API, and
# automatically invoke pipework when containers are started, and it can also
# expose pipework itself through an API.
# VLAN support of pipework
# Virtual LAN (VLAN)
# If you want to attach the container to a specific VLAN, the VLAN ID can be
# specified using the [MAC]@VID notation in the MAC address parameter.

# Note: VLAN attachment is currently only supported for containers to be attached
# to either an Open vSwitch bridge or a physical interface. Linux bridges are
# currently not supported.

pipework ovsbr0 $(docker run -d zerorpcworker) dhcp @10
pipework eth0 $(docker run -d zerorpcworker) dhcp @20

  • how to run docker container over isolated CPUs
vi /etc/default/grub
GRUB_CMDLINE_LINUX="... isolcpus=12-15 ..."

docker run -ti --cpuset-cpus="12-15" ...

To verify the isolation, run cyclictest inside the container while generating load (e.g., a kernel build) outside it.
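Note that `--cpuset-cpus` must name CPUs that actually fall inside the `isolcpus=` range, or the container will share cores with the noisy host workload. A hedged sketch of that sanity check (`expand_range` and `in_isolated` are hypothetical helpers; the ranges are passed in as strings rather than read from /proc/cmdline so the sketch is self-contained):

```shell
#!/bin/sh
# Sketch: expand a CPU range like "12-15" into "12 13 14 15", and
# check a requested --cpuset-cpus range against the isolcpus range.
expand_range() {
  lo=${1%-*}; hi=${1#*-}
  seq "$lo" "$hi" | tr '\n' ' '
}

# in_isolated <requested-range> <isolcpus-range>
# Succeeds only if every requested CPU is in the isolated set.
in_isolated() {
  iso=" $(expand_range "$2")"
  for cpu in $(expand_range "$1"); do
    case "$iso" in *" $cpu "*) ;; *) return 1 ;; esac
  done
}
```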
