Pod annotations (io.katacontainers.*) are not meaningful
for the remote hypervisor. This patch disables pod annotations
in the kata-remote settings of the containerd configuration.
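For illustration, a hedged sketch of what the kata-remote entry could
look like with the annotation passthrough left out (runtime and section
names are assumptions, not the exact patch):
```bash
# Sketch only: a kata-remote runtime entry without the
# io.katacontainers.* pod_annotations passthrough.
cat <<'EOF' >> /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-remote]
  runtime_type = "io.containerd.kata-remote.v2"
  # pod_annotations = ["io.katacontainers.*"]  # intentionally not set
EOF
```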
Fixes: #6345
Signed-off-by: Yohei Ueda <yohei@jp.ibm.com>
This PR removes unwanted whitespace in order to fix the format
of the kata-deploy-binaries script.
Fixes: #6962
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
We have a discrepancy in what's set across the scripts used to build
the Kata Containers artefacts locally.
Some of those were missing a way to easily debug them in case a
failure happens, but one specific one (build-and-upload-payload.sh)
could actually silently fail.
All of those have been changed as part of this commit.
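The usual shape of such hardening, as a sketch (the exact flags per
script may differ):
```bash
# Fail fast instead of failing silently, and allow easy debugging.
set -o errexit   # exit on the first failing command
set -o nounset   # treat unset variables as errors
set -o pipefail  # a failing stage fails the whole pipeline
[ -n "${DEBUG:-}" ] && set -o xtrace  # trace commands when DEBUG is set
```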
Fixes: #6908
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit ae24dc73c1)
We're facing some issues downloading / using the public key provided by
Google for installing Kubernetes as part of the kata-deploy image.
```
The following signatures couldn't be verified because the public key is
not available: NO_PUBKEY B53DC80D13EDEF05
Reading package lists... Done
W: GPG error: https://packages.cloud.google.com/apt kubernetes-xenial
InRelease: The following signatures couldn't be verified because the
public key is not available: NO_PUBKEY B53DC80D13EDEF05 E: The
repository 'https://apt.kubernetes.io kubernetes-xenial InRelease' is
not signed.
N: Updating from such a repository can't be done securely, and is
therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user
configuration details.
```
Let's work around this following the suggestion made by @dims at:
https://github.com/kubernetes/k8s.io/pull/4837#issuecomment-1446426585
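The workaround is along these lines (a sketch; the exact URL and
keyring path come from the linked comment and may differ):
```bash
# Fetch the current signing key ourselves and point the repository
# definition at it, instead of relying on a stale pre-installed key.
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg \
    | gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg]" \
     "https://apt.kubernetes.io/ kubernetes-xenial main" \
    > /etc/apt/sources.list.d/kubernetes.list
apt-get update
```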
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 636539bf0c)
I cannot easily pinpoint which commit dropped it, but my gut feeling is
that it's the result of an erroneous conflict resolution when merging
content from main to the CCv0 branch.
Regardless of when or why it happened, the root_hash logic ended up
being dropped, and workflows that depend on it are now failing.
With all that in mind, let's bring the logic back.
Fixes: #6901
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This reverts commit 3018c9ad51.
As the Jenkins TDX CI is running on a system with a TDX stack called
"2022ww44", we should keep the QEMU / kernel / OVMF versions matching
what's provided in that stack.
The reason we were able to update this on `main` is that the GHA TDX
CI is running on a TDX stack called "2023ww01", but we have decided
NOT to bite the bullet and NOT to update the Jenkins CI, in order to
avoid unexpected breakages.
This regression was introduced as part of the last CCv0 merge to main.
It would've been caught by the CI, and should've been caught by the
reviewer (myself :-)), but the CI was having a hard time even building
the components, and I wrote in the PR, and I'm quoting it here: "I
rather deal with possible breakages on this later on, than block this
PR to get in." ... and here we are. :-)
Fixes: #6884
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This reverts commit f33345c311.
As the Jenkins TDX CI is running on a system with a TDX stack called
"2022ww44", we should keep the QEMU / kernel / OVMF versions matching
what's provided in that stack.
The reason we were able to update this on `main` is that the GHA TDX
CI is running on a TDX stack called "2023ww01", but we have decided
NOT to bite the bullet and NOT to update the Jenkins CI, in order to
avoid unexpected breakages.
This regression was introduced as part of the last CCv0 merge to main.
It would've been caught by the CI, and should've been caught by the
reviewer (myself :-)), but the CI was having a hard time even building
the components, and I wrote in the PR, and I'm quoting it here: "I
rather deal with possible breakages on this later on, than block this
PR to get in." ... and here we are. :-)
Fixes: #6884
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This partially reverts commit 054174d3e6.
As the Jenkins TDX CI is running on a system with a TDX stack called
"2022ww44", we should keep the QEMU / kernel / OVMF versions matching
what's provided in that stack.
The reason we were able to update this on `main` is that the GHA TDX
CI is running on a TDX stack called "2023ww01", but we have decided
NOT to bite the bullet and NOT to update the Jenkins CI, in order to
avoid unexpected breakages.
This regression was introduced as part of the last CCv0 merge to main.
It would've been caught by the CI, and should've been caught by the
reviewer (myself :-)), but the CI was having a hard time even building
the components, and I wrote in the PR, and I'm quoting it here: "I
rather deal with possible breakages on this later on, than block this
PR to get in." ... and here we are. :-)
Fixes: #6884
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
`uname -m` produces `x86_64`, but the container image convention
is to use `amd64`, so update this in the tag.
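A minimal sketch of the normalization (the tag itself is illustrative):
```bash
# Map kernel architecture names to the container image convention.
arch="$(uname -m)"
case "${arch}" in
    x86_64)  arch="amd64" ;;
    aarch64) arch="arm64" ;;
esac
tag="${REGISTRY:-quay.io/example}/kata-deploy:latest-${arch}"  # hypothetical tag
```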
Fixes: #6820
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
(cherry picked from commit 2856d3f23d)
Last but not least, add the containerd shim configuration
pointing to the correct configuration-<shim>.toml.
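Roughly, the entry being added looks like this (a sketch; the NVIDIA
GPU shim is used as an example and the paths are assumptions):
```bash
cat <<'EOF' >> /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu-nvidia-gpu]
  runtime_type = "io.containerd.kata-qemu-nvidia-gpu.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu-nvidia-gpu.options]
    # Point the shim at its own configuration-<shim>.toml
    ConfigPath = "/opt/kata/share/defaults/kata-containers/configuration-qemu-nvidia-gpu.toml"
EOF
```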
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
With each release, make sure we ship a GPU- and TEE-enabled kernel.
This adds tdx-experimental kernel support.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
TDVF caching is not working as the tarball name is incorrect. The
expected result is kata-static-tdvf.tar.xz, but it's looking for
kata-static-tdx.tar.xz.
This happens because logic to convert tdx -> tdvf was added as part of
the build scripts, but I missed doing the same as part of the caching
scripts.
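The missing piece in the caching scripts is a rename along these lines
(variable names are illustrative):
```bash
# Mirror the tdx -> tdvf conversion the build scripts already do.
flavour="tdx"
[ "${flavour}" = "tdx" ] && flavour="tdvf"
tarball_name="kata-static-${flavour}.tar.xz"  # now matches the build output
```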
Fixes: #6669
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
TDX QEMU caching is not working as expected, as we're checking its
version by looking at "assets.hypervisor.${QEMU_FLAVOUR}.version",
which is correct for standard QEMU. However, for TDX QEMU we should be
checking "assets.hypervisor.${QEMU_FLAVOUR}.tag".
Fixes: #6668
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's ensure the node is Ready after the CRI engine restart; otherwise
we may proceed, and scripts may simply fail if they try to deploy a pod
before the CRI engine has finished restarting (and, consequently, while
the node is not Ready).
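One way to express that wait, as a sketch (service name and timeout are
assumptions):
```bash
systemctl restart containerd
# Block until kubelet reports the node Ready again before deploying pods.
kubectl wait --for=condition=Ready "node/$(hostname)" --timeout=120s
```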
Related: #6649
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
A readinessProbe will help us have the kata-deploy pod marked as Ready
only once it finishes all the needed configuration on the node.
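The idea, sketched as the relevant DaemonSet fragment (the probe
command and marker path are assumptions):
```bash
cat <<'EOF'
        readinessProbe:
          exec:
            # Only succeed once the node-setup script has dropped its marker.
            command: ["ls", "/opt/kata-deploy/node-configured"]
          initialDelaySeconds: 5
          periodSeconds: 5
EOF
```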
Related: #6649
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As TEEs cannot hotplug memory / CPU, we *must* consider the default
values for those as part of the podOverhead.
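For instance (numbers purely illustrative), with a 2048Mi / 1 vCPU
default VM size, the runtime class overhead has to include that full
amount up front:
```bash
cat <<'EOF'
overhead:
  podFixed:
    # fixed runtime overhead + default guest memory (2048Mi, illustrative)
    memory: "2208Mi"
    # fixed runtime overhead + default vCPUs (1, illustrative)
    cpu: "1250m"
EOF
```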
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
If conf_guest is set, we need to update CONFIG_LOCALVERSION
to match the suffix created in install_kata, i.e.
-nvidia-gpu-{snp|tdx}; the Linux headers will be named the very
same way if built with make deb-pkg for TDX or SNP.
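A sketch of the wiring (variable names are assumptions; scripts/config
is the upstream kernel tree helper):
```bash
if [ -n "${conf_guest:-}" ]; then
    # Keep the kernel's LOCALVERSION in sync with install_kata's suffix,
    # e.g. -nvidia-gpu-tdx or -nvidia-gpu-snp.
    ./scripts/config --set-str LOCALVERSION "-nvidia-gpu-${conf_guest}"
fi
```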
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Let's make sure we configure containerd for the kata-qemu-tdx handler
and ship the kata-qemu-tdx runtime class for kubernetes.
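The shipped runtime class is essentially the following (overhead and
scheduling fields omitted; the shape is assumed from the other kata-*
classes):
```bash
cat <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu-tdx
handler: kata-qemu-tdx
EOF
```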
Fixes: #6537
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the ability to cache OVMF which, right now, we're only
building and shipping for TDX.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the needed targets and modifications to be able to build
OVMF for TDX as part of the local-build scripts.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's update the OVMF for TDX version to the latest release of the
Intel TDX tools tested with Kata Containers.
This change requires a newer version of `nasm` than the one provided by
the container used to build the project. This change will also be
needed for SEV-SNP and was originally done by Alex Carter (thanks!).
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Alex Carter <Alex.Carter@ibm.com>
As we'll be using this from different places in the near future, let's
create a helper function as part of libs.sh.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make users of the cache_components_main.sh script aware that
they can also cache the kernel-tdx-experimental builds.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the needed targets and modifications to be able to build
kernel-tdx-experimental as part of the local-build scripts.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's create a `install_kernel_helper()` function, as it was already
done for QEMU, and rely on that when calling `install_kernel` and
`install_kernel_dragonball_experimental`.
This helps us to reduce the code duplication by a fair amount.
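A hedged sketch of the resulting shape (signatures and option strings
are assumptions):
```bash
install_kernel_helper() {
    local kernel_version_yaml_path="${1}"
    local kernel_name="${2}"
    local extra_cli="${3:-}"
    # ...shared version lookup, build and install steps live here...
}

install_kernel() {
    install_kernel_helper "assets.kernel.version" "kernel" ""
}

install_kernel_dragonball_experimental() {
    install_kernel_helper "assets.kernel-dragonball-experimental.version" \
        "kernel-dragonball-experimental" "-e -t dragonball"
}
```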
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's update the Kernel TDX version to the latest release of the
Intel TDX tools tested with Kata Containers.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's do what we already did when caching the kernel, and allow passing
a FLAVOUR of the project to build.
By doing this we can re-use the same function used to cache QEMU to also
cache any kind of experimental QEMU that we may happen to have.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add the needed targets and modifications to be able to build
qemu-tdx-experimental as part of the local-build scripts.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make sure the `qemu_suffix` and `qemu_tarball_name` can be
specified. With this we make it really easy to reuse this script for
any additional flavour of an experimental QEMU that ends up having to be
built (specifically looking at the ones for Confidential Containers
here).
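Concretely, something along these lines (defaults are illustrative):
```bash
# Allow callers to override the naming for other experimental flavours.
qemu_suffix="${qemu_suffix:-tdx-experimental}"
qemu_tarball_name="${qemu_tarball_name:-kata-static-qemu-${qemu_suffix}.tar.xz}"
```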
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's update the QEMU TDX version to the latest release of the Intel
TDX tools tested with Kata Containers.
In order to do such an update, we had to relax the checks on the QEMU
version for some of the configuration options, as those were removed
right after the development window for 7.1.0 was opened (thus the
7.0.50 check).
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As the dragonball kernel is shipped as part of our releases, it must be
added to the `all` target.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
In order to make it easier to read, let's just rename the
install_dragonball_experimental_kernel and install_experimental_kernel
to install_kernel_dragonball_experimental and
install_kernel_experimental, respectively.
This allows us to quickly get to those functions when looking for
`install_kernel`.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>