As we only support one stable branch, it'll be used from stable-3.2
onwards.
Fixes: #7518
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This requires the GITHUB_UPLOAD_TOKEN. While we're here, let's also fix
the name of the action and remove the "-tarball" suffix, as it's not
really a tarball.
Fixes: #7497
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
`stage` has been added, but only hooked up to the amd64 logic, leaving
arm64 and s390x behind.
Let's fix this right now, and make sure no error occurs when passing
this down to the yaml files.
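Roughly, the shape of the fix is sketched below; the file and input
names are illustrative and may not match the tree exactly:
```
# Sketch only: forward the `stage` input to every per-arch reusable
# workflow, not just the amd64 one.
on:
  workflow_call:
    inputs:
      stage:
        required: false
        type: string
        default: test

jobs:
  build-assets-amd64:
    uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml
    with:
      stage: ${{ inputs.stage }}
  build-assets-arm64:
    uses: ./.github/workflows/build-kata-static-tarball-arm64.yaml
    with:
      stage: ${{ inputs.stage }}   # previously left out
  build-assets-s390x:
    uses: ./.github/workflows/build-kata-static-tarball-s390x.yaml
    with:
      stage: ${{ inputs.stage }}   # previously left out
```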
Fixes: #7497
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This reverts commit 7c857d38c1.
I misunderstood the error given by GitHub Actions; let's fix this in
the next commit.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Otherwise we'll face the following error as part of our GHA:
```
The workflow is not valid.
kata-containers/kata-containers/.github/workflows/release-$foo.yaml
(Line: 13, Col: 14): Invalid input, stage is not defined in the
referenced workflow.
```
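The fix, in rough terms, is to declare the input on the called
workflow's side; a minimal sketch (the default value is an assumption):
```
# Sketch: a `with:` input is only valid if the referenced (called)
# workflow declares it under workflow_call.inputs.
on:
  workflow_call:
    inputs:
      stage:
        required: false
        type: string
        default: test    # default value is an assumption
```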
Fixes: #7497
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make sure we can rely on the tests passing down whether they want
to be tested using Node Feature Discovery or not.
Right now, only the TDX job has this option set to "true", all the other
jobs have this option set to "false".
We can and have to merge this one before merging the NFD-related
patches as:
1) Exporting this environment variable without using it causes no harm
2) It will allow us to test the NFD changes after this one is merged, as
   changes in the yaml file, in the case of the pull_request_target
   event, are not taken into consideration before they're merged
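For illustration only, the plumbing could look something like the
sketch below; the variable name, runner label and script path are
assumptions:
```
# Hypothetical sketch: each job exports whether it wants Node Feature
# Discovery, and the test scripts read that variable.
jobs:
  run-k8s-tests-on-tdx:
    runs-on: tdx                 # runner label assumed
    env:
      KATA_HYPERVISOR: qemu-tdx
      USING_NFD: "true"          # only the TDX job sets this to "true"
    steps:
      - name: Run tests
        run: bash tests/integration/kubernetes/gha-run.sh run-tests
```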
Fixes: #7495
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This is needed so we don't lose track of what's been created and what's
been added here and there.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Static checks for dragonball are landing on any of the self-hosted
runners, because "self-hosted" was the label selector used.
Let's use "dragonball" instead, as the machine has that label as well.
Fixes: #7464
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This splits deploying Kata and running the tests into separate commands
to make it possible to rerun tests locally without having to redeploy
Kata each time.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
This job will run on a nested virt capable Azure VM (improving test
concurrency). This is just a placeholder while we adapt the test to GHA.
Fixes: #6555
Signed-off-by: Jeremi Piotrowski <jpiotrowski@microsoft.com>
This will trigger the nydus tests, but as they currently are they'll just
return "okay" without actually executing.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This newly added GHA does nothing, is not even triggered, and it's just
a placeholder that we'll grow in the next commits / PRs, so we can
actually start running the nydus tests as part of our CI.
Fixes: #6543
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make it simpler to figure out which version of Kata Containers
has been deployed, and also which artefacts come with it.
This will help us immensely in the future, for the TEEs use case, so we
can easily know whether we can deploy a specific guest kernel for a
specific host kernel.
Fixes: #7394
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Although this file is far from being an SBOM, it'll help folks easily
visualise which components are part of a release, and even have
SBOMs generated from that.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We don't need to export KUBECONFIG there. Let's just make sure we have
the server correctly set up and avoid doing that.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Now that we have a new TDX machine plugged into our CI, let's re-enable
the TDX tests.
Fixes: #7368
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As we'll be testing against the LTS and the Active versions of
containerd, let's add those entries to the versions.yaml file and make
sure we export what we want to use for the tests as an env var.
The approach taken should not break the current way of getting the
containerd version.
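A rough idea of the shape this could take in versions.yaml; the field
names and version numbers below are placeholders, not the actual schema
or pins:
```
# Hypothetical versions.yaml excerpt: keep the existing `version` field so
# current consumers keep working, and add LTS / Active entries next to it.
externals:
  containerd:
    url: "https://github.com/containerd/containerd"
    version: "v1.7.0"      # placeholder
    lts: "v1.6.21"         # placeholder
    active: "v1.7.2"       # placeholder
```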
LTS and Active versions of containerd can be found at:
https://containerd.io/releases/#support-horizon
Fixes: #6543
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make sure this is exported, as it'll be needed to install
`yq`, which will be used to get the versions of the dependencies to be
installed.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make sure we install the needed dependencies for running the
`cri-containerd` tests.
Right now this commit basically adds a placeholder; later on, when
we're actually able to test the job, we'll add the logic for installing
the needed dependencies.
The obvious dependencies we've spotted so far are:
* From the OS
  * jq
  * curl (already present)
* From our repo
  * yq (using the install_yq script)
* From GitHub
  * cri-containerd
  * cri-tools
  * cni plugins
We may need a few more packages, but we will only figure this out as
part of the actual work.
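A sketch of what the install step might eventually look like, following
the list above; paths, versions and download locations are assumptions:
```
# Hypothetical install step; the exact fetch logic will come later.
- name: Install dependencies
  run: |
    # From the OS
    sudo apt-get update && sudo apt-get -y install jq
    # From our repo (helper script location assumed)
    ./ci/install_yq.sh
    # From GitHub: cri-containerd, cri-tools and the cni plugins would be
    # downloaded from their release pages here, pinned via versions.yaml
```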
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This PR builds the foundation for us to start migrating the
cri-containerd tests from Jenkins to GitHub Actions.
Right now the test does nothing and should always finish successfully.
The coming PRs will actually introduce logic to the `gha-run.sh` script
where we'll be able to run the tests and make sure those pass before
having them actually merged.
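A minimal sketch of the kind of job this adds, assuming the script
lives under tests/integration/cri-containerd/:
```
# Minimal sketch of the placeholder job: it only calls gha-run.sh, which
# for now does nothing and exits successfully.
jobs:
  run-cri-containerd-tests:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - name: Run cri-containerd tests
        run: bash tests/integration/cri-containerd/gha-run.sh run   # subcommand assumed
```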
Fixes: #6543
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
TDX tests need to be temporarily disabled as the current machine
allocated for this will be off for some time, and a new machine will
only be added next week.
Fixes: #7307
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This is a very nice suggestion from Steve Horsman, as it lets us
manually trigger the workflow anytime we need to test it, instead of
waiting a full day for it to be retriggered via the `schedule` event.
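In practice this amounts to adding a `workflow_dispatch` trigger next
to the existing `schedule` one; a sketch (the cron expression is an
assumption):
```
# Sketch: keep the daily schedule, and also allow manual runs from the
# GitHub UI / API via workflow_dispatch.
on:
  schedule:
    - cron: "0 0 * * *"   # exact cron expression is an assumption
  workflow_dispatch:
```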
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Passing the commit hash as the "pr-number" has proven problematic, as
it would make the AKS cluster name longer than what's accepted by AKS.
One easy way to solve this is just passing "nightly" as the PR number,
as that's only used to create the cluster.
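A sketch of the call site, with workflow and input names assumed for
illustration:
```
# Sketch: a short fixed string keeps the generated AKS cluster name within
# the allowed length.
jobs:
  run-k8s-tests:
    uses: ./.github/workflows/run-k8s-tests-on-aks.yaml   # name assumed
    with:
      pr-number: nightly
      commit-hash: ${{ github.sha }}
```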
Fixes: #7247
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We should not go through the trouble of running all our tests on AKS /
Azure / baremetal machines when a PR only changes our documentation.
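One way to express this, sketched below with assumed path filters:
```
# Sketch: skip the expensive jobs when only documentation changes
# (the exact filters are assumptions).
on:
  pull_request_target:
    paths-ignore:
      - 'docs/**'
      - '**/*.md'
```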
Fixes: #7258
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As we need to pass down the commit sha to the jobs that will be
triggered from the `push` event, we must be careful about what exactly
we're using there.
At first we were using ${{ github.ref }}, but this turns out to be the
**branch name**, rather than the commit hash. In order to actually get
the commit hash, let's use ${{ github.sha }} instead.
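Sketched out, with the workflow name assumed:
```
# Sketch: on the `push` event, github.sha is the pushed commit, while
# github.ref is only the branch ref (e.g. refs/heads/main).
jobs:
  kata-containers-ci-on-push:
    uses: ./.github/workflows/ci.yaml   # workflow name assumed
    with:
      commit-hash: ${{ github.sha }}    # not ${{ github.ref }}
```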
Fixes: #7247
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As we need to pass down the commit sha to the jobs that will be
triggered from the `schedule` event, we must be careful about what
exactly we're using there.
At first we were using ${{ github.ref }}, but this turns out to be the
**branch name**, rather than the commit hash. In order to actually get
the commit hash, let's use ${{ github.sha }} instead, as described by
https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#
Fixes: #7247
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
On cc3993d860 we introduced a regression,
where we started passing inputs.commit-hash, instead of
github.event.pull_request.head.sha. However, we have been setting
commit-hash to github.event.pull_request.sha, meaning that we're
missing a `.head.` there.
github.event.pull_request.sha is empty for the pull_request_target
event, leading the CI to pull the content from `main` instead of the
content from the PR.
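The corrected plumbing, sketched with assumed workflow and input names:
```
# Sketch: for pull_request_target the PR's code is referenced by
# github.event.pull_request.head.sha; github.event.pull_request.sha does
# not exist, so it expands to an empty string and checkout falls back to
# the target branch.
jobs:
  ci:
    uses: ./.github/workflows/ci.yaml   # workflow name assumed
    with:
      commit-hash: ${{ github.event.pull_request.head.sha }}   # note .head.
```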
Fixes: #7247
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The latter workflow is breaking as it doesn't recognise ${GITHUB_REF};
the former would most likely break as well, but it hasn't been triggered
yet.
The error we're facing is:
```
Determining the checkout info
/usr/bin/git branch --list --remote origin/${GITHUB_REF}
/usr/bin/git tag --list ${GITHUB_REF}
Error: A branch or tag with the name '${GITHUB_REF}' could not be found
```
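The fix boils down to handing checkout a workflow expression rather
than a shell variable; a minimal sketch:
```
# Sketch: actions/checkout does not expand shell variables in `ref`; use a
# workflow expression (or a plain ref string) instead.
- uses: actions/checkout@v3
  with:
    ref: ${{ github.ref }}   # not the literal string ${GITHUB_REF}
```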
Fixes: #7247
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We have to do this, otherwise we cannot log into Azure.
This is a regression introduced by
106e305717.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Instead of passing "-${{ inputs.tag }}-amd64", we must only pass
"-${{ inputs.tag }}".
This is a regression introduced by
106e305717.
Fixes: #7247
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
de83cd9de7 tried to solve an issue, but it clearly seems I was using
env wrongly, as what ended up being passed as input was the literal
string "$VAR", instead of the content of the VAR variable.
As we can simply avoid using those variables here, let's do that and
save ourselves a headache.
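For illustration, the contrast looks roughly like this; the workflow,
input and variable names are assumptions:
```
# Sketch: `with:` values are not run through a shell, so "$SOME_VAR" is
# passed as a literal string; use an expression instead.
jobs:
  build:
    uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml   # assumed
    with:
      # wrong: tarball-suffix: "$SOME_VAR"
      tarball-suffix: ${{ inputs.tarball-suffix }}   # names are assumptions
```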
Fixes: #7247
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
It has to have steps declared, and we need to make it a dependency for
the nightly kata-containers-ci-on-push job.
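A minimal sketch of what that amounts to; the placeholder job name and
step are assumptions:
```
# Sketch: GitHub Actions rejects a job with no steps, so give the new job a
# trivial one, and wire it in via `needs:`.
jobs:
  placeholder-job:
    runs-on: ubuntu-22.04
    steps:
      - run: echo "nothing to do yet"
  kata-containers-ci-on-push:
    needs: placeholder-job
    uses: ./.github/workflows/ci.yaml   # workflow name assumed
    with:
      commit-hash: ${{ github.sha }}
```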
Fixes: #7247
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>