First of all, this is a controversial piece, and I know that.

In this commit we're trying to take a less greedy approach regarding the amount of vCPUs we allocate for the VMM, which will be advantageous mainly when using the `static_sandbox_resource_mgmt` feature, which is used by confidential guests.

The current approach basically does:
* Get the amount of vCPUs set in the config (an integer)
* Get the amount of vCPUs set as limit (an integer)
* Sum those up
* Start / update the VMM to use that total amount of vCPUs

The fact we're dealing with integers is logical, as we cannot request 500m vCPUs from the VMM. However, it leads us, in several cases, to waste one vCPU.

Let's take the example where we know the VMM requires 500m vCPUs to be running, and the workload sets 250m vCPUs as a resource limit. In that case, we'd do:
* Get the amount of vCPUs set in the config: 1
* Get the amount of vCPUs set as limit: ceil(0.25)
* 1 + ceil(0.25) = 1 + 1 = 2 vCPUs
* Start / update the VMM to use 2 vCPUs

With the logic changed here, what we're doing is treating everything as a float until just before we start / update the VMM. So, the flow described above becomes:
* Get the amount of vCPUs set in the config: 0.5
* Get the amount of vCPUs set as limit: 0.25
* ceil(0.5 + 0.25) = 1 vCPU
* Start / update the VMM to use 1 vCPU

In the way I've written this patch we introduce zero regressions, as the default values set are still the same, and those will only be changed for the TEE use cases (although I can see firecracker, or any other user of `static_sandbox_resource_mgmt=true`, taking advantage of this).

There's, though, an implicit assumption in this patch that we'd need to make explicit: the default_vcpus / default_memory is the amount of vCPUs / memory required by the VMM, and absolutely nothing else. Also, the amount set there should be reflected in the podOverhead for the specific runtime class.

One other possible approach, which I am not that much in favour of taking as I think it's **less clear**, is that we could actually get the podOverhead amount, subtract it from the default_vcpus (treating the result as a float), then sum up what the user set as limit (as a float), and finally ceil the result. It could work, but IMHO this is **less clear**, and **less explicit** about what we're actually doing and about how the default_vcpus / default_memory should be used.

Fixes: #6909

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
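As a rough illustration of the sizing change described above, here is a minimal Go sketch. It is not the actual runtime code; `oldVCPUCount`, `newVCPUCount` and their parameters are hypothetical names that only reproduce the arithmetic from the example in the commit message.

```go
package main

import (
	"fmt"
	"math"
)

// oldVCPUCount mirrors the current behaviour: the config value is already
// an integer, the limit is rounded up on its own, and the two are summed.
func oldVCPUCount(configVCPUs uint32, limitVCPUs float64) uint32 {
	return configVCPUs + uint32(math.Ceil(limitVCPUs))
}

// newVCPUCount mirrors the proposed behaviour: keep everything as a float
// and round up only once, just before starting / updating the VMM.
func newVCPUCount(configVCPUs, limitVCPUs float64) uint32 {
	return uint32(math.Ceil(configVCPUs + limitVCPUs))
}

func main() {
	// VMM overhead of 500m vCPUs, workload limit of 250m vCPUs.
	fmt.Println(oldVCPUCount(1, 0.25))   // 1 + ceil(0.25) = 2 vCPUs
	fmt.Println(newVCPUCount(0.5, 0.25)) // ceil(0.5 + 0.25) = 1 vCPU
}
```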
# Howto Guides

## Kubernetes Integration
- Run Kata containers with `crictl`
- Run Kata Containers with Kubernetes
- How to use Kata Containers and Containerd
- How to use Kata Containers and containerd with Kubernetes
- Kata Containers and service mesh for Kubernetes
- How to import Kata Containers logs into Fluentd
## Hypervisors Integration
Currently supported hypervisors with Kata Containers include:
- `qemu`
- `cloud-hypervisor`
- `firecracker`

  In the case of `firecracker`, the use of a block device `snapshotter` is needed for the VM rootfs. Refer to the following guide for additional configuration steps:
- `ACRN`

  While `qemu`, `cloud-hypervisor` and `firecracker` work out of the box with the installation of Kata, some additional configuration is needed in the case of `ACRN`. Refer to the following guides for additional configuration steps:
## Advanced Topics
- How to use Kata Containers with virtio-fs
- Setting Sysctls with Kata
- What Is VMCache and How To Enable It
- What Is VM Templating and How To Enable It
- Privileged Kata Containers
- How to load kernel modules in Kata Containers
- How to use Kata Containers with `virtio-mem`
- How to set sandbox Kata Containers configurations with pod annotations
- How to monitor Kata Containers in K8s
- How to use hotplug memory on arm64 in Kata Containers
- How to set up swap devices in the guest kernel
- How to run rootless vmm
- How to run Docker with Kata Containers
- How to run Kata Containers with `nydus`
- How to run Kata Containers with AMD SEV-SNP
- How to use EROFS to build rootfs in Kata Containers
- How to run Kata Containers with kinds of Block Volumes