kata-deploy
kata-deploy provides a Dockerfile, which contains all of the binaries
and artifacts required to run Kata Containers, as well as reference DaemonSets, which can
be utilized to install Kata Containers on a running Kubernetes cluster.
Note: installation through the DaemonSets successfully installs katacontainers.io/kata-runtime on
a node only if that node uses either the containerd or CRI-O CRI shim.
Kubernetes quick start
Install Kata on a running Kubernetes cluster
Installing the latest image
The latest image refers to pre-release and release candidate content. For stable releases, please use the "stable" instructions.
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
Installing the stable image
The stable image refers to the latest stable release's content.
Note that if you use a tagged version of the repository, the stable image does match that version. For instance, if you use the 2.2.1 tagged version of the kata-deploy.yaml file, then version 2.2.1 of the Kata runtime will be deployed.
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
For a k3s cluster, run:
$ GO111MODULE=auto go get github.com/kata-containers/kata-containers
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/tools/packaging/kata-deploy
$ kubectl apply -k kata-deploy/overlays/k3s
Ensure kata-deploy is ready
$ kubectl -n kube-system wait --timeout=10m --for=condition=Ready -l name=kata-deploy pod
Run a sample workload
Workloads specify the runtime they would like to utilize by setting the appropriate runtimeClassName within
the Pod specification. The runtimeClass examples provided define a node selector to match the node label katacontainers.io/kata-runtime: "true",
which ensures the workload is only scheduled on a node that has Kata Containers installed.
runtimeClass is a built-in type in Kubernetes. To apply each Kata Containers runtimeClass:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
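For reference, each entry in that manifest is a standard Kubernetes RuntimeClass object. The kata-qemu entry has roughly the following shape (an illustrative sketch; the manifest in the repository is authoritative and may include additional fields such as pod overhead):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu
# handler must match the runtime name registered with CRI-O or containerd
handler: kata-qemu
scheduling:
  nodeSelector:
    # only schedule on nodes where kata-deploy has installed Kata
    katacontainers.io/kata-runtime: "true"
```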
The following YAML snippet shows how to specify that a workload should use Kata with Cloud Hypervisor:
spec:
  template:
    spec:
      runtimeClassName: kata-clh
The following YAML snippet shows how to specify that a workload should use Kata with Firecracker:
spec:
  template:
    spec:
      runtimeClassName: kata-fc
The following YAML snippet shows how to specify that a workload should use Kata with QEMU:
spec:
  template:
    spec:
      runtimeClassName: kata-qemu
To run an example with kata-clh:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-clh.yaml
To run an example with kata-fc:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-fc.yaml
To run an example with kata-qemu:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-qemu.yaml
The following removes the test pods:
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-clh.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-fc.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/examples/test-deploy-kata-qemu.yaml
Remove Kata from the Kubernetes cluster
Removing the latest image
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml
$ kubectl -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
After ensuring kata-deploy has been deleted, clean up the cluster:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
The cleanup DaemonSet runs a single time, removing the node label; because it runs only once, it is difficult to check in an automated fashion whether it has completed. This process should take, at most, 5 minutes.
After that, delete the cleanup DaemonSet, the added RBAC, and the runtime classes:
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
Removing the stable image
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
$ kubectl -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
After ensuring kata-deploy has been deleted, clean up the cluster:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stable.yaml
The cleanup DaemonSet runs a single time, removing the node label; because it runs only once, it is difficult to check in an automated fashion whether it has completed. This process should take, at most, 5 minutes.
After that, delete the cleanup DaemonSet, the added RBAC, and the runtime classes:
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-cleanup/base/kata-cleanup-stable.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/runtimeclasses/kata-runtimeClasses.yaml
kata-deploy details
Dockerfile
The Dockerfile used to create the container image deployed in the DaemonSet is provided here. This image contains all the necessary artifacts for running Kata Containers, all of which are pulled from the Kata Containers release page.
Host artifacts:
- cloud-hypervisor, firecracker, qemu-system-x86_64, and supporting binaries
- containerd-shim-kata-v2
- kata-collect-data.sh
- kata-runtime
Virtual Machine artifacts:
- kata-containers.img and kata-containers-initrd.img: pulled from the Kata GitHub releases page
- vmlinuz.container and vmlinuz-virtiofs.container: pulled from the Kata GitHub releases page
DaemonSets and RBAC
Two DaemonSets are introduced for kata-deploy, as well as an RBAC to facilitate
applying labels to the nodes.
Kata deploy
This DaemonSet installs the necessary Kata binaries, configuration files, and virtual machine artifacts on
the node. Once installed, the DaemonSet adds a node label katacontainers.io/kata-runtime=true and reconfigures
either CRI-O or containerd to register three runtimeClasses: kata-clh (for Cloud Hypervisor isolation), kata-qemu (for QEMU isolation),
and kata-fc (for Firecracker isolation). As a final step the DaemonSet restarts either CRI-O or containerd. Upon deletion,
the DaemonSet removes the Kata binaries and VM artifacts and updates the node label to katacontainers.io/kata-runtime=cleanup.
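As an illustration of the containerd reconfiguration step, the installer adds runtime entries of roughly the following shape to the containerd configuration (a sketch; the exact section names and shim identifiers are generated by the kata-deploy script):

```toml
# Added to /etc/containerd/config.toml by kata-deploy (illustrative sketch)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
  # dispatches Pods with runtimeClassName kata-qemu to the Kata shim v2
  runtime_type = "io.containerd.kata-qemu.v2"
```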
Kata cleanup
This DaemonSet runs if the node has the label katacontainers.io/kata-runtime=cleanup. It removes
the katacontainers.io/kata-runtime label and restarts either the CRI-O or containerd systemd
daemon. These restarts cannot be executed during the preStopHook of the Kata installer DaemonSet,
which is why this final cleanup DaemonSet is required.