kata-deploy
kata-deploy provides a Dockerfile, which builds an image containing all of the binaries
and artifacts required to run Kata Containers, as well as reference DaemonSets that can
be used to install Kata Containers on a running Kubernetes cluster.
Note that installation through the DaemonSets successfully installs katacontainers.io/kata-runtime on
a node only if it uses either containerd or CRI-O as its CRI shim.
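To check which CRI runtime each node uses, one quick way is to inspect the CONTAINER-RUNTIME column reported by kubectl (the exact output format varies across Kubernetes versions):
$ kubectl get nodes -o wide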
Kubernetes quick start
Install Kata on a running Kubernetes cluster
Installing the latest image
The latest image refers to pre-release and release candidate content. For stable releases, use the stable instructions below.
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
Installing the stable image
The stable image refers to the content of the latest stable release.
Note that if you use a tagged version of the repo, the stable image matches that version. For instance, if you use the 2.2.1 tagged version of the kata-deploy.yaml file, then version 2.2.1 of the Kata runtime will be deployed.
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
For a k3s cluster, run:
$ GO111MODULE=auto go get github.com/kata-containers/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy
$ kubectl apply -k kata-deploy/overlays/k3s
Ensure kata-deploy is ready
$ kubectl -n kube-system wait --timeout=10m --for=condition=Ready -l name=kata-deploy pod
Run a sample workload
Workloads specify the runtime they'd like to utilize by setting the appropriate runtimeClass object within
the Pod specification. The runtimeClass examples provided define a node selector to match the node label katacontainers.io/kata-runtime:"true",
which ensures the workload is only scheduled on a node that has Kata Containers installed.
runtimeClass is a built-in type in Kubernetes. To apply each Kata Containers runtimeClass:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
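For reference, each entry in that file defines a RuntimeClass roughly along these lines (an abridged sketch; the apiVersion may be node.k8s.io/v1 or node.k8s.io/v1beta1 depending on your Kubernetes version, and the applied file above is authoritative):
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu
handler: kata-qemu
scheduling:
  nodeSelector:
    katacontainers.io/kata-runtime: "true"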
The following YAML snippet shows how to specify a workload should use Kata with Cloud Hypervisor:
spec:
template:
spec:
runtimeClassName: kata-clh
The following YAML snippet shows how to specify a workload should use Kata with Firecracker:
spec:
template:
spec:
runtimeClassName: kata-fc
The following YAML snippet shows how to specify a workload should use Kata with QEMU:
spec:
template:
spec:
runtimeClassName: kata-qemu
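Putting it together, a minimal sketch of a complete Pod that runs under Kata with QEMU (the pod name and image are illustrative; in a bare Pod, runtimeClassName sits directly under spec rather than under spec.template.spec):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata        # illustrative name
spec:
  runtimeClassName: kata-qemu
  containers:
  - name: nginx
    image: nginx          # illustrative image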
To run an example with kata-clh:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-clh.yaml
To run an example with kata-fc:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-fc.yaml
To run an example with kata-qemu:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-qemu.yaml
The following removes the test pods:
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-clh.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-fc.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-qemu.yaml
Remove Kata from the Kubernetes cluster
Removing the latest image
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
$ kubectl -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
After ensuring kata-deploy has been deleted, clean up the cluster:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
The cleanup DaemonSet will run a single time, removing the node label; because it runs once and exits, its completion is difficult to check in an automated fashion. This process should take, at most, five minutes.
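To verify manually, one simple check is to list nodes that still carry the label; empty output suggests the cleanup has finished:
$ kubectl get nodes -l katacontainers.io/kata-runtime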
After that, delete the cleanup DaemonSet, the added RBAC, and the runtime classes:
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
Removing the stable image
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
$ kubectl -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
After ensuring kata-deploy has been deleted, clean up the cluster:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stable.yaml
The cleanup DaemonSet will run a single time, removing the node label; because it runs once and exits, its completion is difficult to check in an automated fashion. This process should take, at most, five minutes.
After that, delete the cleanup DaemonSet, the added RBAC, and the runtime classes:
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stable.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
kata-deploy details
Dockerfile
The Dockerfile used to create the container image deployed in the DaemonSet is provided here. This image contains all the necessary artifacts for running Kata Containers, all of which are pulled from the Kata Containers release page.
Host artifacts:
cloud-hypervisor, firecracker, qemu-system-x86_64, and supporting binaries
containerd-shim-kata-v2
kata-collect-data.sh
kata-runtime
Virtual Machine artifacts:
kata-containers.img and kata-containers-initrd.img: pulled from Kata GitHub releases page
vmlinuz.container and vmlinuz-virtiofs.container: pulled from Kata GitHub releases page
DaemonSets and RBAC
Two DaemonSets are introduced for kata-deploy, as well as RBAC rules to facilitate
applying labels to the nodes.
Kata deploy
This DaemonSet installs the necessary Kata binaries, configuration files, and virtual machine artifacts on
the node. Once installed, the DaemonSet adds a node label katacontainers.io/kata-runtime=true and reconfigures
either CRI-O or containerd to register three runtimeClasses: kata-clh (for Cloud Hypervisor isolation), kata-qemu (for QEMU isolation),
and kata-fc (for Firecracker isolation). As a final step, the DaemonSet restarts either CRI-O or containerd. Upon deletion,
the DaemonSet removes the Kata binaries and VM artifacts and updates the node label to katacontainers.io/kata-runtime=cleanup.
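For containerd, this registration conceptually amounts to adding one runtime entry per class to the CRI plugin configuration, along the lines of the following config.toml fragment (a sketch only; the exact plugin path, runtime_type values, and file location vary by containerd and kata-deploy version, and kata-deploy writes the real configuration itself):
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
  runtime_type = "io.containerd.kata-qemu.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-clh]
  runtime_type = "io.containerd.kata-clh.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-fc]
  runtime_type = "io.containerd.kata-fc.v2"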
Kata cleanup
This DaemonSet runs if the node has the label katacontainers.io/kata-runtime=cleanup. It removes
the katacontainers.io/kata-runtime label and restarts either the CRI-O or containerd systemd
daemon. These resets cannot be executed during the preStopHook of the Kata installer DaemonSet,
which necessitated this final cleanup DaemonSet.
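If the cleanup DaemonSet cannot run for some reason, the label can also be removed by hand (a manual fallback, not part of the documented flow); the trailing dash tells kubectl to delete the label:
$ kubectl label node <node-name> katacontainers.io/kata-runtime-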