For the CC build we need to enable such a flag, and the cleaner way to
do so is to expose it in the Makefile and, later on, make sure its
correct value is passed to the build script.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Currently the Kata shim only supports FIFO stdout/stderr from
containerd/CRI-O, but shim v2 supports logging plugins, and nerdctl
uses the binary scheme for logs by default.
This commit adds the other types of log plugins:
- file
- binary
In the case of binary, the Kata shim will receive a stdout/stderr URI like:
binary:///nerdctl?_NERDCTL_INTERNAL_LOGGING=/var/lib/nerdctl/1935db59
That means the nerdctl process will handle the logs (stdout/stderr).
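For illustration, a minimal sketch (with a hypothetical helper name, not the
actual shim code) of how the stdout URI scheme could be dispatched to the
matching log driver:

    package main

    import (
        "fmt"
        "net/url"
    )

    // logDriver picks the log plugin based on the scheme of the stdout URI
    // handed over by the container engine.
    func logDriver(stdout string) (string, error) {
        u, err := url.Parse(stdout)
        if err != nil {
            return "", err
        }
        switch u.Scheme {
        case "", "fifo":
            return "fifo", nil // plain FIFO path from containerd/CRI-O
        case "file":
            return "file", nil // append the logs to a regular file
        case "binary":
            return "binary", nil // spawn the logging binary (e.g. nerdctl) and pipe the logs to it
        default:
            return "", fmt.Errorf("unsupported log driver scheme: %q", u.Scheme)
        }
    }

    func main() {
        d, _ := logDriver("binary:///nerdctl?_NERDCTL_INTERNAL_LOGGING=/var/lib/nerdctl/1935db59")
        fmt.Println(d) // binary
    }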
Fixes: #4420
Signed-off-by: Bin Liu <bin@hyper.sh>
Before, we maintained almost identical structures between our persist
API and what we keep for our devices, with the persist API being a
slight subset of device structures.
Let's deduplicate this, now that persist imports the device package.
JSON unmarshaling of the prior persist structure will work fine, since it
was an exact subset of fields.
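As a quick illustration (with hypothetical types, not the actual Kata
structures), encoding/json simply fills the matching fields and leaves the new
ones at their zero values, which is why the old persisted JSON still loads
cleanly into the richer device structure:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // OldPersistState mirrors what was previously written to disk.
    type OldPersistState struct {
        ID   string `json:"id"`
        Type string `json:"type"`
    }

    // DeviceState stands in for the richer structure now shared with the device package.
    type DeviceState struct {
        ID       string `json:"id"`
        Type     string `json:"type"`
        Attached bool   `json:"attached"` // absent in the old JSON, defaults to false
    }

    func main() {
        old, _ := json.Marshal(OldPersistState{ID: "blk0", Type: "block"})

        var dev DeviceState
        if err := json.Unmarshal(old, &dev); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", dev) // {ID:blk0 Type:block Attached:false}
    }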
Fixes: #4468
Signed-off-by: Eric Ernst <eric_ernst@apple.com>
Rather than have device package depend on persist, let's define the
(almost duplicate) structures within device itself, and have the Kata
Container's persist pkg import these.
This'll help avoid unnecessary dependencies within our core packages.
Signed-off-by: Eric Ernst <eric_ernst@apple.com>
Similar to network, we can use multiple queues for virtio-block
devices. This can help improve storage performance.
This commit changes the number of queues for block devices to
the number of CPUs for Cloud Hypervisor and QEMU.
Today the default number of CPUs a VM starts with is 1, hence a single
queue will be used. This change will help improve performance when the
number of cold-plugged CPUs, set in the config file, is greater than one.
It may also help when we use the sandboxing feature with k8s, which
passes the sum of the required resources down to Kata.
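A minimal sketch (hypothetical helper, not the actual Kata code) of deriving
the virtio-block queue count from the cold-plugged vCPU count:

    package main

    import "fmt"

    // blockQueues returns one virtio-block queue per vCPU, mirroring what is
    // already done for virtio-net multiqueue, and never goes below one queue.
    func blockQueues(numVCPUs uint32) uint32 {
        if numVCPUs < 1 {
            return 1
        }
        return numVCPUs
    }

    func main() {
        fmt.Println(blockQueues(4)) // 4 queues when the VM cold-plugs 4 vCPUs
    }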
Fixes: #4502
Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
Enable "-sandbox on" in qemu can introduce another protect layer
on the host, to make the secure container more secure.
The default option is disable because this feature may introduce some
performance cost, even though user can enable
/proc/sys/net/core/bpf_jit_enable to reduce the impact.
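For reference, a minimal sketch (illustrative only, not the actual Kata/govmm
code) of the kind of argument the QEMU command line gains when the seccomp
sandbox is enabled:

    package main

    import "fmt"

    // sandboxArgs returns the extra QEMU arguments when the seccomp sandbox is
    // enabled; by default nothing is added, to avoid the potential performance cost.
    func sandboxArgs(enable bool) []string {
        if !enable {
            return nil
        }
        return []string{
            "-sandbox",
            "on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny",
        }
    }

    func main() {
        fmt.Println(sandboxArgs(true))
    }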
Fixes: #2266
Signed-off-by: Feng Wang <feng.wang@databricks.com>
Remove space from root span name to follow camel casing of other tracing
span names in the runtime and to make parsing easier in testing.
Fixes: #4483
Signed-off-by: Chelsea Mafrica <chelsea.e.mafrica@intel.com>
Compare the content of the old URL and the new URL to ensure that they
are consistent and do not contain ambiguities.
Fixes: #4454
Signed-off-by: Binbin Zhang <binbin36520@gmail.com>
Let's improve the log so we make it clear that we're only *actually*
adding the net device to the Cloud Hypervisor configuration when calling
our own version of VmAddNetPut().
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We want to have the file descriptors of the opened tuntap device to pass
them down to the VMMs, so the VMMs don't have to explicitly open a new
tuntap device themselves, as the `container_kvm_t` label does not allow
such a thing.
With this change we ensure that what's currently done when using QEMU as
the hypervisor can be easily replicated with other VMMs, even if they
don't support multiqueue.
As a side effect of this, we need to close the received file descriptors
in the code of the VMMs which are not going to use them.
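A minimal sketch (assuming the fds are held as *os.File, with field names
borrowed from this series) of closing the unused fds in a hypervisor driver
that still opens the tap by name:

    package hypervisor

    import "os"

    // closeUnusedTapFds releases the tap fds received from the network code in
    // VMM drivers that are not going to pass them to the hypervisor.
    func closeUnusedTapFds(vmFds []*os.File) {
        for _, f := range vmFds {
            _ = f.Close()
        }
    }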
Fixes: #3533
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Adding IFF_NO_PI to the netlink flags causes no harm to the supported
and tested hypervisors, as when opening the device by its name Cloud
Hypervisor[0], Firecracker[1], and QEMU[2] do set the flag already.
However, when receiving the file descriptor of an already opened tuntap
device, Cloud Hypervisor is not able to set the flag, leaving the guest
without connectivity.
To avoid such an issue, let's simply add the IFF_NO_PI flag to the
netlink flags and ensure, from our side, that the VMMs don't have to set
it on their side when dealing with an already opened tuntap device.
Note that there's a PR opened[3] just for testing that this change
doesn't cause any breakage.
[0]: e52175c2ab/net_util/src/tap.rs (L129)
[1]: b6d6f71213/src/devices/src/virtio/net/tap.rs (L126)
[2]: 3757b0d08b/net/tap-linux.c (L54)
[3]: https://github.com/kata-containers/kata-containers/pull/4292
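A minimal sketch (using the vishvananda/netlink package; the helper name and
exact flag set are illustrative, not the verbatim Kata code) of creating the
tap with IFF_NO_PI already set, so an already opened fd handed to the VMM
carries the flag:

    package network

    import "github.com/vishvananda/netlink"

    // createTap creates a tap device with IFF_NO_PI (TUNTAP_NO_PI) set at creation
    // time, so VMMs receiving the opened fds don't have to set the flag themselves.
    func createTap(name string, queues int) (*netlink.Tuntap, error) {
        tap := &netlink.Tuntap{
            LinkAttrs: netlink.LinkAttrs{Name: name},
            Mode:      netlink.TUNTAP_MODE_TAP,
            Queues:    queues,
            Flags:     netlink.TUNTAP_VNET_HDR | netlink.TUNTAP_NO_PI,
        }
        if err := netlink.LinkAdd(tap); err != nil {
            return nil, err
        }
        // tap.Fds (populated by LinkAdd when queues are requested) can then be
        // handed to the VMM instead of the device name.
        return tap, nil
    }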
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This is basically a no-op right now, as:
* netPair.TapInterface.VMFds is nil
* the tap name is still passed to Cloud Hypervisor, which is the Cloud
Hypervisor's first choice when opening a tap device.
In the very near future we'll stop passing the tap name to Cloud
Hypervisor, and start passing the file descriptors of the opened tap
instead.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Knowing that VmAddNetPut works as expected, let's switch to manually
building the request and writing it to the appropriate socket.
Doing this gives us more flexibility to, later on, pass the file
descriptor of the tuntap device to Cloud Hypervisor, as OpenAPI doesn't
support such an operation (it has no notion of SCM rights).
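A minimal sketch (not the verbatim Kata implementation) of hand-building the
PUT request and writing it to Cloud Hypervisor's API socket together with the
tap fds as SCM_RIGHTS ancillary data, which is exactly what the generated
OpenAPI client cannot do:

    package clh

    import (
        "fmt"
        "net"
        "syscall"
    )

    // putAddNet writes a hand-crafted HTTP PUT to the vm.add-net endpoint and
    // attaches the opened tap fds as SCM_RIGHTS ancillary data.
    func putAddNet(apiSocket string, body []byte, tapFds []int) error {
        conn, err := net.DialUnix("unix", nil, &net.UnixAddr{Name: apiSocket, Net: "unix"})
        if err != nil {
            return err
        }
        defer conn.Close()

        req := fmt.Sprintf("PUT /api/v1/vm.add-net HTTP/1.1\r\n"+
            "Host: localhost\r\nContent-Type: application/json\r\n"+
            "Content-Length: %d\r\n\r\n%s", len(body), body)

        oob := syscall.UnixRights(tapFds...)
        _, _, err = conn.WriteMsgUnix([]byte(req), oob, nil)
        return err
    }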
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Instead of creating the VM with the network device already plugged in,
let's actually add the network device *after* the VM is created, but
*before* the VM is actually booted.
Although there doesn't seem to be any functional difference between
what was done in the past and what this commit introduces, this will be
used to work around a limitation in OpenAPI when it comes to passing the
network device's file descriptor down to Cloud Hypervisor, so Cloud
Hypervisor can use it instead of opening the device by its name on the
VMM side.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
VmAddNetPut is the API provided by the Cloud Hypervisor client (auto
generated) code to hotplug a new network device to the VM.
Let's expose it now as it'll be used as part of this series, mostly to
guide the reviewer through the process of what we have to do, as later
on, spoiler alert, it'll end up being removed.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
So far this has been done for x86_64. Now that the support for building
and testing has been added for all arches, let's do the second part of
the switch.
We're still not done for powerpc, as a virtiofsd crash in the rust
version has been found by the maintainer.
Fixes: #4258, #4260
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Changed the bit size for parsing functions to 64-bit in order to avoid
parsing errors.
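A small illustration of why the bit size matters for strconv's parsing
functions:

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        s := "4294967296" // 2^32, too large for a 32-bit parse

        _, err := strconv.ParseUint(s, 10, 32)
        fmt.Println(err) // value out of range

        v, err := strconv.ParseUint(s, 10, 64)
        fmt.Println(v, err) // 4294967296 <nil>
    }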
Fixes: #4435
Signed-off-by: Alexandru Matei <alexandru.matei@uipath.com>
Add more detail to the `kata-monitor` doc to allow an admin to make a
more informed decision about where and how to run the daemon.
Fixes: #4416.
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
GetOOMEvent is a blocking call that will fail if the container exits;
in this case, it's not an error or a warning.
Lowering the log level used when the GetOOMEvent call fails will reduce
log noise in a large cluster that creates and deletes pods frequently.
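A minimal sketch (hypothetical helper and logger names) of how the level can
be picked when the call fails:

    package oom

    import "github.com/sirupsen/logrus"

    // logOOMEventError logs at debug level when the container has already exited,
    // since that is expected, and keeps a warning for any other failure.
    func logOOMEventError(logger *logrus.Entry, err error, containerExited bool) {
        if containerExited {
            logger.WithError(err).Debug("GetOOMEvent returned after the container exited")
            return
        }
        logger.WithError(err).Warn("failed to get OOM event")
    }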
Fixes: #4376
Signed-off-by: Bin Liu <bin@hyper.sh>
Since #902 the `io.katacontainers.config.hypervisor` pod annotations
have only been permitted if explicitly allowed in the global
configuration. The default global configuration allows no such
annotations. That's important because several of those annotations
would cause Kata to execute arbitrary binaries, and so were wildly
unsafe.
However, this is inconvenient for the
`io.katacontainers.config.hypervisor.enable_iommu` annotation
specifically, which controls whether the sandbox VM includes a vIOMMU.
A guest side vIOMMU is necessary to implement VFIO passthrough devices
with `vfio_mode = vfio`, so enabling that mode of operation currently
requires a global configuration change, and can't just be enabled
per-pod.
Unlike some of the other hypervisor annotations, the `enable_iommu`
annotation is quite safe. By default the vIOMMU is not present, so
allowing a user to override it for a pod only improves their
facilities for isolation. Even if the global default were changed to
enable the vIOMMU, that doesn't compel the guest kernel to use it, so
allowing a user to disable the vIOMMU doesn't materially affect
isolation either.
Therefore, allow the io.katacontainers.config.hypervisor.enable_iommu
annotation to work in the default configurations.
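A minimal sketch (hypothetical helper, not the actual Kata code) of the kind
of allow-list check this amounts to, with enable_iommu permitted even when the
configured list is empty:

    package annotations

    import "strings"

    const hypervisorPrefix = "io.katacontainers.config.hypervisor."

    // hypervisorAnnotationAllowed reports whether a hypervisor annotation may be
    // honoured, given the list enabled in the global configuration.
    func hypervisorAnnotationAllowed(key string, enabled []string) bool {
        if !strings.HasPrefix(key, hypervisorPrefix) {
            return true // not a hypervisor annotation, not restricted here
        }
        name := strings.TrimPrefix(key, hypervisorPrefix)
        if name == "enable_iommu" {
            return true // safe by default: it only adds a vIOMMU to the sandbox VM
        }
        for _, e := range enabled {
            if e == name {
                return true
            }
        }
        return false
    }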
Fixes: #4330
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Set the stop container force flag to true so that the container state is
always set to "StateStopped" after the container wait goroutine has
finished. This is necessary for the following delete container step to
succeed.
Fixes: #4359
Signed-off-by: Feng Wang <feng.wang@databricks.com>
In Linux 5.14 (and hopefully some backports), core scheduling allows
processes to be co-scheduled within the same domain on SMT enabled
systems.
The containerd implementation sets the core sched domain when launching a
shim. This provides a clean way for each shim (container/pod) to be in
its own domain, with any additional containers (v2 pods) launched within
the same domain, as well as any exec'd process added to the container.
kernel docs: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/core-scheduling.html
For Kata specifically, we will look for the SCHED_CORE environment
variable to be set to indicate we should create a new core scheduling
domain.
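A minimal sketch (mirroring what the containerd shim does; constants assumed
from golang.org/x/sys/unix) of creating the core scheduling domain when
SCHED_CORE is set:

    package main

    import (
        "os"

        "golang.org/x/sys/unix"
    )

    const pidTypeGroup = 2 // PIDTYPE_PGID: apply the cookie to the whole process group

    // setupCoreSched creates a new core scheduling cookie for this process group,
    // which children (the container/pod processes) then inherit.
    func setupCoreSched() error {
        if os.Getenv("SCHED_CORE") == "" {
            return nil // core scheduling not requested
        }
        return unix.Prctl(unix.PR_SCHED_CORE, unix.PR_SCHED_CORE_CREATE, 0, pidTypeGroup, 0)
    }

    func main() {
        if err := setupCoreSched(); err != nil {
            panic(err)
        }
    }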
This is equivalent to the containerd shim's PR: e48bbe8394
Fixes: #4309
Signed-off-by: Eric Ernst <eric_ernst@apple.com>
Signed-off-by: Michael Crosby <michael@thepasture.io>
While end users can connect directly to the shim, let's provide a way to
easily get/set iptables from kata-runtime itself.
Fixes: #4080
Signed-off-by: Eric Ernst <eric_ernst@apple.com>
Without this, potential errors are silently dropped. Let's ensure we
return the error code as well as potential data from the response.
Signed-off-by: Eric Ernst <eric_ernst@apple.com>