- why kubevirt or virtlet?
Both Virtlet and KubeVirt address the needs of development teams that have adopted, or want to adopt, Kubernetes but still have existing virtual-machine-based workloads that can't easily be containerized. They provide a unified platform where developers can build, modify, and deploy applications in both containers and virtual machines in a shared environment.
In recent years, containers have dominated the computing world. Organizations are moving to microservices architectures for scalability and flexibility, but many are bound to existing applications that are not easy to run in containers or that require particular kernel tuning to run efficiently, e.g. databases or IIS. Running virtual machines separately from containers means additional infrastructure and operations costs. Virtlet and KubeVirt both provide an alternative: keep using the existing Kubernetes infrastructure while running non-containerized applications in virtual machines alongside containerized applications.
RancherVM extends the Kubernetes API with CRDs (Custom Resource Definitions), similar to KubeVirt. A VM pod looks like a regular pod; however, the VM instance runs in a container inside each VM pod. In contrast, a KubeVirt VM pod runs its VM natively on the host, handled not by the kubelet but by virt-handler.
I think RancherVM sits somewhere between the CRD-based KubeVirt and the CRI-based Virtlet.
KubeVirt extends Kubernetes by adding resource types for VMs through the Kubernetes Custom Resource Definitions API.
It makes it possible to run VMs alongside containers on existing Kubernetes nodes. VMs run inside regular Kubernetes pods, where they have access to standard pod networking and storage and can be managed with standard Kubernetes tools such as kubectl. Built on mature technology such as KVM, QEMU, and libvirt, KubeVirt can be installed into and removed from an existing Kubernetes cluster. Because VMs run as part of a pod, they can use all the other Kubernetes components such as DNS, RBAC, and network policies. VMs run from images in the QEMU qcow2 format, the same as in OpenStack. With KubeVirt, the VMs themselves still run on bare metal, not inside containers.
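As a sketch of what this looks like in practice, a minimal KubeVirt VirtualMachine custom resource might be written as follows. The field layout follows the kubevirt.io API; the name `testvm` and the Cirros container-disk image are illustrative choices, not anything prescribed by this article:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm            # illustrative name
spec:
  running: false          # the VM is started/stopped by toggling this field (or via virtctl)
  template:
    metadata:
      labels:
        kubevirt.io/vm: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi  # virtual hardware is described on the resource itself
      volumes:
        - name: containerdisk
          containerDisk:
            # a demo qcow2 image packaged as a container image (illustrative)
            image: quay.io/kubevirt/cirros-container-disk-demo
```

Applying this with `kubectl apply -f vm.yaml` creates the VM object; KubeVirt's controllers then create a regular pod to host the running VM instance, which is why standard pod networking and storage apply.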
- comparison between kubevirt and virtlet
KubeVirt uses Custom Resource Definitions (CRDs) from Kubernetes, but it doesn't implement the Container Runtime Interface (CRI). It treats VMs as distinct objects and needs an extra command-line tool, virtctl.
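To give a feel for the extra tooling, these are the kinds of virtctl commands a user would need to learn on top of kubectl (shown against the hypothetical `testvm` VM; this is a usage sketch, not a complete reference):

```shell
# start and stop a VM defined as a VirtualMachine resource
virtctl start testvm
virtctl stop testvm

# attach to the serial console of the running VM instance
virtctl console testvm

# the objects themselves are still visible through plain kubectl
kubectl get vms
kubectl get vmis
```

This is the trade-off the next paragraphs discuss: KubeVirt's VM-specific operations live in a separate CLI, whereas a CRI-based approach keeps everything inside the ordinary pod workflow.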
If you are looking for something that implements the CRI, look at Virtlet, a project developed by Mirantis. The main difference is that Virtlet treats VMs as pods, which enables all the standard deployment mechanisms (ReplicaSet, DaemonSet, Jobs, etc.) that can't be used with KubeVirt, which instead has its own resource, VirtualMachineReplicaSet. You also get liveness/readiness probes and taints and tolerations. Virtlet also supports SR-IOV, which KubeVirt does not. Its greatest limitation is storage: it supports only FlexVolumes, at least until the Container Storage Interface (CSI) matures. KubeVirt has broader support for storage backends.
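Because Virtlet sits behind the CRI, a VM is declared as an ordinary Pod manifest. The sketch below is based on Virtlet's published examples; the annotation, node-selector label, and `virtlet.cloud/` image-prefix convention are Virtlet-specific details, so treat the exact names as illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cirros-vm
  annotations:
    # directs this pod to the Virtlet runtime rather than the default one
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  nodeSelector:
    extraRuntime: virtlet   # label used on Virtlet-enabled nodes in the examples
  containers:
    - name: cirros-vm
      # the virtlet.cloud/ prefix marks the "image" as a URL of a qcow2 VM image
      image: virtlet.cloud/download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
```

Since this is just a Pod, it can be placed inside a ReplicaSet, DaemonSet, or Job template unchanged, which is exactly the property the paragraph above describes.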
KubeVirt takes a different approach by design. It doesn't try to make VMs look like pods/containers; instead it makes VMs a first-class Kubernetes resource type. Just as not all workloads can be managed by a replication controller or ReplicaSet (hence DaemonSets, Jobs, etc.), VMs have very different characteristics, so the KubeVirt team considers making them look like pods the wrong idea.
KubeVirt and Virtlet are implemented in drastically different ways. KubeVirt is a virtual machine management add-on for Kubernetes that provides control of VMs as Kubernetes custom resources. Virtlet, on the other hand, is a CRI (Container Runtime Interface) implementation, which means Kubernetes sees VMs the same way it sees Docker containers.
If you are simply trying to give your Kubernetes users the ability to run traditional VMs without adding extra configuration, KubeVirt might do the job, provided your users are willing to learn the additional commands needed to make use of it.
On the other hand, if you want to treat your VMs identically to your non-VM pods, or particularly if you have a hard-core use case such as NFV, you'll need Virtlet instead.
The Virtlet project is open source and is maintained by Mirantis, which focuses mainly on ease of use and on networking features such as SR-IOV for 5G and NFV use cases. KubeVirt is also an open source project, led by the Kubernetes community itself. It focuses mainly on providing flexibility across the platform for storage and networking, so that it can be easily plugged into any technology.
- reasoning of my choice
It’s about balance, I think. Virtual machines have different semantics than containers. For virtual machines, one needs to describe some aspects of the virtual hardware, and since applications in virtual machines usually mix the application with its data, persistent storage is needed. There may also be expectations such as layer 2 networks, PXE boot for provisioning, image cloning, and live migration.
KubeVirt is designed to maintain that balance by providing virtualization capabilities while keeping the Kubernetes philosophy and semantics. This enables a transition path where virtual machines can behave the same as before while also leveraging Kubernetes infrastructure, tools, and management.