Prepare a secure image tarball for running a confidential
container with IBM Z Secure Execution (SE), IBM Z's TEE.
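As a rough sketch of what building such an image involves (the paths
and host key document below are placeholders, and the actual build
steps may differ), the s390x genprotimg tool can assemble a Secure
Execution image:

  genprotimg -i vmlinuz -r initrd.img -p parmfile \
      -k host-key-document.crt -o kata-containers-se.img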
Fixes: #6206
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
In online-kbs attestation the guest is given the location of the
key broker server (KBS) to connect to after launch. This patch appends
the IP:Port of the online KBS to the guest's kernel parameters.
The patch also simplifies the KBS config into "mode" = offline/online,
and updates the SEV config variable names and default values.
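For illustration only (the parameter name and address are assumptions
based on this description, not verbatim from the patch), the appended
kernel parameter could look like:

  agent.aa_kbc_params=online_sev_kbc::10.0.0.1:44444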
Fixes: #5661
Fixes: #5715
Signed-off-by: Jim Cadden <jcadden@ibm.com>
For Kata Containers, the rootfs is used read-only, so EROFS can
noticeably decrease metadata overhead.
On top of adding EROFS support, this introduces a config parameter to
switch the filesystem used for the rootfs.
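As a sketch (the key name below is an assumption based on this
description, not necessarily the one the patch introduces):

  # Filesystem used for the guest rootfs: e.g. "ext4" or "erofs"
  rootfs_type = "erofs"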
Fixes: #6063
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Signed-off-by: yaoyinnan <yaoyinnan@foxmail.com>
As Cloud Hypervisor and QEMU use different rootfs images (the former
with `offline_fs_kbc` as the aa_kbc, the latter with `eaa_kbc`), we
need to differentiate the kernel parameters passed to each of them, as
the `root_hash.txt` file used for measured boot differs according to
the rootfs used.
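For illustration (the parameter names are assumptions; the hash value
comes from each image's `root_hash.txt`), the per-VMM kernel
parameters might carry the root hash like so:

  kernel_params = "rootfs_verity.scheme=dm-verity rootfs_verity.hash=<hash from root_hash.txt>"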
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
It turns out that more work is needed on the Cloud Hypervisor side
before we can fully support EAA_KBC with it.
For now, let's remove the configuration, as the tests are not currently
passing when using it, and stick to `offline_fs_kbc` and its specific
image for the Cloud Hypervisor + TDX case.
Fixes: #5862
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The `qemu-tdx` configuration is tied to using `offline_fs_kbc` as the
aa_kbc, which is something we're moving away from.
With this in mind, let's rename `qemu-tdx-eaa-kbc` to `qemu-tdx` and
reduce the far too large number of configurations that we ship.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Pass the SELinux policy for containers to the agent if
`disable_guest_selinux` is set to `false` in the runtime configuration.
The `container_t` type is applied to the container process inside the
guest by default.
Users can also set a custom SELinux label for the container process
using `guest_selinux_label` in the runtime configuration. This serves
as an alternative to Kubernetes' security context for SELinux, because
users cannot specify the policy for Kata through Kubernetes' security
context. To apply a SELinux policy to the container, the guest rootfs
must be CentOS, created and built with `SELINUX=yes`.
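For illustration, the runtime configuration might look like this (the
label value below is only an example):

  disable_guest_selinux = false
  # Optional custom label; the container_t type is used by default
  guest_selinux_label = "system_u:system_r:container_t"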
Fixes: #4812
Signed-off-by: Manabu Sugimoto <Manabu.Sugimoto@sony.com>
The default vhost-user-fs queue size in QEMU is now 128. Set it to
1024 by default, matching clh, and make this value configurable.
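As a sketch (the key name is an assumption based on this description):

  # Queue size of the virtio-fs device
  virtio_fs_queue_size = 1024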
Fixes: #5694
Signed-off-by: liyuxuan.darfux <liyuxuan.darfux@bytedance.com>
As we're switching TDX to using EAA KBC instead of OfflineFS KBC, let's
add the configuration files needed for testing this before we fully
switch TDX to using such an image.
Fixes: #5563
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This is based on a patch from @niteeshkd that adds a config
parameter to choose between AMD SEV and SEV-SNP VMs as the
confidential guest type when both types are supported. SEV is
the default.
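For illustration (the key name is an assumption, not verbatim from the
patch):

  # Use SEV-SNP instead of SEV when the host supports both
  sev_snp_guest = true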
Signed-off-by: Joana Pecholt <joana.pecholt@aisec.fraunhofer.de>
Adds an initrd configuration option to the configuration.toml that is
generated for the QEMU setup.
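A sketch of what the generated entry might look like (the path is an
assumption):

  initrd = "/usr/share/kata-containers/kata-containers-initrd.img"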
Signed-off-by: Joana Pecholt <joana.pecholt@aisec.fraunhofer.de>
Added a default SEV kata config template.
Added the required default variables in the Makefile.
Fixes: #5012
Fixes: #5008
Signed-off-by: Ryan Savino <ryan.savino@amd.com>
Let's add a new configuration file for using a TDX-capable Cloud
Hypervisor (and all the needed artefacts).
This PR extends the Makefile in order to provide variables, set at
build time, that are needed for the proper configuration of the VMM,
such as:
* Specific kernel parameters to be used with TDX
* Specific kernel features to be used when using TDX
* Paths for the artefacts built to be used with TDX:
  * Kernel
  * TD-Shim
The reason we don't hack into the current Cloud Hypervisor
configuration file is that we want to ship both configurations: one
for the non-TEE use case and one for the TDX use case.
It's important to note that the Cloud Hypervisor used upstream is
already built with TDX support.
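As an illustrative sketch only (the variable names and values are
assumptions, not the ones this PR introduces), the build-time
variables might look like:

  # Hypothetical Makefile variables for the clh-tdx configuration
  CLHTDXKERNELPARAMS := tdx_disable_filter
  CLHTDXKERNELPATH := $(PREFIX)/share/kata-containers/vmlinux-tdx.container
  CLHTDXFIRMWAREPATH := $(PREFIX)/share/td-shim/final.bin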
Fixes: #4831
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
When booting the TDX kernel with `tdx_disable_filter`, as it's been done
for QEMU, VirtioFS can work without any issues.
Whether this will be part of the upstream kernel or not is a different
story, but it could easily make it there, as Cloud Hypervisor relies on
the VIRTIO_F_IOMMU_PLATFORM feature, which forces the guest to use the
DMA API, making these devices compatible with TDX.
See Sebastien Boeuf's explanation of this in the
3c973fa7ce208e7113f69424b7574b83f584885d commit:
"""
By using DMA API, the guest triggers the TDX codepath to share some of
the guest memory, in particular the virtqueues and associated buffers so
that the VMM and vhost-user backends/processes can access this memory.
"""
Fixes: #4977
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
AMD SEV pre-attestation is handled by the runtime before the guest is
launched. The guest VM is started paused, and the runtime communicates
with a remote key broker service (e.g., simple-kbs) to validate the
attestation measurement and to receive a launch secret. Upon
validation, the launch secret is injected into guest memory and the VM
is resumed.
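As a rough sketch of the exchange (assuming QEMU's QMP interface; the
payloads below are placeholders, and the runtime's actual sequence may
differ):

  { "execute": "query-sev-launch-measure" }
    (measurement is validated by the key broker service)
  { "execute": "sev-inject-launch-secret",
    "arguments": { "packet-header": "<base64>", "secret": "<base64>" } }
  { "execute": "cont" }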
Fixes: #4280
Signed-off-by: Jim Cadden <jcadden@ibm.com>
Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
Let's add a new configuration file for using a TDX-capable QEMU (and
all the needed artefacts).
This PR extends the Makefile in order to provide variables, set at
build time, that are needed for the proper configuration of the VMM,
such as:
* Specific kernel parameters to be used with TDX
* Specific kernel features to be used when using TDX
* Paths for the artefacts built to be used with TDX:
  * QEMU
  * Kernel
  * TDVF
The reason we don't hack into the current QEMU configuration file is
that we want to ship both configurations: one for the non-TEE use case
and one for the TDX use case.
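For illustration (the paths and values below are assumptions), the
resulting configuration might contain entries such as:

  confidential_guest = true
  firmware = "/usr/share/tdvf/OVMF.fd"
  kernel_params = "tdx_disable_filter"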
Fixes: #4830
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This configuration will allow users to choose between different
I/O backends for qemu, with the default being io_uring.
This will allow users to fall back to a different I/O mechanism while
running on kernels older than 5.1.
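As a sketch (assuming the option is named as below):

  # AIO mechanism for block devices: "io_uring", "native" or "threads"
  block_device_aio = "io_uring"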
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
Enable the Kata runtime to handle the `disable_selinux` flag properly,
so that the runtime configuration controls whether the runtime applies
the SELinux label to the VMM process.
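For illustration:

  # When false, the runtime applies the SELinux label to the VMM process
  disable_selinux = false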
Fixes: #4599
Signed-off-by: Manabu Sugimoto <Manabu.Sugimoto@sony.com>
For the CC build we need to enable such a flag, and the cleanest way
to do so is to expose it in the Makefile and, later on, make sure its
correct value is passed to the build script.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Expose the newly added `default_maxmemory` to the project's Makefile and
to the configuration files.
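For illustration (the value is just an example; 0 typically means
"use the host's maximum"):

  # Maximum memory a VM can reach via hotplug, in MiB
  default_maxmemory = 16384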
Fixes: #4516
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Enable "-sandbox on" in qemu can introduce another protect layer
on the host, to make the secure container more secure.
The default option is disable because this feature may introduce some
performance cost, even though user can enable
/proc/sys/net/core/bpf_jit_enable to reduce the impact.
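As a sketch (assuming the option mirrors QEMU's own seccomp sandbox
knob; the exact key name may differ):

  # Enable QEMU's seccomp sandbox; disabled by default
  seccompsandbox = "on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny"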
Fixes: #2266
Signed-off-by: Feng Wang <feng.wang@databricks.com>
With everything implemented, let's now expose the disk rate limiter
configuration options in the Cloud Hypervisor configuration file.
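For illustration (the key name and units are assumptions modeled on
Cloud Hypervisor's bits-per-second rate limiters):

  # Disk I/O bandwidth rate limit, in bits per second (0 = unlimited)
  disk_rate_limiter_bw_max_rate = 1000000000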
Fixes: #4139
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
With everything implemented, let's now expose the net rate limiter
configuration options in the Cloud Hypervisor configuration file.
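Similarly, for illustration (the key name is an assumption):

  # Network bandwidth rate limit, in bits per second (0 = unlimited)
  net_rate_limiter_bw_max_rate = 1000000000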
Fixes: #4017
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
virtiofsd's debug logging used to be enabled whenever the hypervisor's
debug was enabled, which generates too many noisy logs from virtiofsd.
Decouple the virtiofsd log level from the hypervisor's; users who want
to see virtiofsd debug logs can set them via:
virtio_fs_extra_args = ["-o", "log_level=debug"]
Fixes: #3303
Signed-off-by: bin <bin@hyper.sh>