mirror of
https://github.com/aljazceru/kata-containers.git
synced 2026-01-23 08:14:35 +01:00
Merge pull request #488 from jodh-intel/doc-fixes
docs: Fix typos and formatting
@@ -33,7 +33,7 @@
 * [Run Kata Containers with Kubernetes](#run-kata-containers-with-kubernetes)
 * [Install a CRI implementation](#install-a-cri-implementation)
 * [CRI-O](#cri-o)
-* [containerd with cri plugin](#containerd-with-cri-plugin)
+* [containerd with CRI plugin](#containerd-with-cri-plugin)
 * [Install Kubernetes](#install-kubernetes)
 * [Configure for CRI-O](#configure-for-cri-o)
 * [Configure for containerd](#configure-for-containerd)
@@ -133,7 +133,7 @@ $ sudo sed -i 's/^\(initrd =.*\)/# \1/g' /etc/kata-containers/configuration.toml
 ```
 The rootfs image is created as shown in the [create a rootfs image](#create-a-rootfs-image) section.
 
-One of the `initrd` and `image` options in kata runtime config file **MUST** be set but **not both**.
+One of the `initrd` and `image` options in Kata runtime config file **MUST** be set but **not both**.
 The main difference between the options is that the size of `initrd`(10MB+) is significantly smaller than
 rootfs `image`(100MB+).
 
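For context on the hunk above: the mutual exclusivity of `initrd` and `image` is expressed in `configuration.toml` by leaving exactly one of the two options uncommented. A hypothetical sketch (paths are illustrative, not authoritative):

```toml
# Sketch of /etc/kata-containers/configuration.toml (illustrative paths)
[hypervisor.qemu]
# Exactly one of the next two options may be active at a time.
# initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
image = "/usr/share/kata-containers/kata-containers.img"
```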
@@ -297,7 +297,7 @@ $ sudo rm -rf ${ROOTFS_DIR}
 $ cd $GOPATH/src/github.com/kata-containers/osbuilder/rootfs-builder
 $ script -fec 'sudo -E GOPATH=$GOPATH AGENT_INIT=yes USE_DOCKER=true SECCOMP=no ./rootfs.sh ${distro}'
 ```
-`AGENT_INIT` controls if the guest image uses kata agent as the guest `init` process. When you create an initrd image,
+`AGENT_INIT` controls if the guest image uses the Kata agent as the guest `init` process. When you create an initrd image,
 always set `AGENT_INIT` to `yes`. By default `seccomp` packages are not included in the initrd image. Set `SECCOMP` to `yes` to include them.
 
 You MUST choose one of `alpine`, `centos`, `clearlinux`, `euleros`, and `fedora` for `${distro}`.
@@ -348,8 +348,8 @@ $ curl -LOk ${kernel_url}
 $ tar -xf ${kernel_tar_file}
 $ mv .config "linux-${kernel_version}"
 $ pushd "linux-${kernel_version}"
-$ curl -L https://raw.githubusercontent.com/kata-containers/packaging/master/kernel/patches/4.19.x/0001-NO-UPSTREAM-9P-always-use-cached-inode-to-fill-in-v9.patch | patch -p1
-$ curl -L https://raw.githubusercontent.com/kata-containers/packaging/master/kernel/patches/4.19.x/0002-Compile-in-evged-always.patch | patch -p1
+$ curl -L https://raw.githubusercontent.com/kata-containers/packaging/master/kernel/patches/4.19.x/0003-NO-UPSTREAM-9P-always-use-cached-inode-to-fill-in-v9.patch | patch -p1
+$ curl -L https://raw.githubusercontent.com/kata-containers/packaging/master/kernel/patches/4.19.x/0004-Compile-in-evged-always.patch | patch -p1
 $ make ARCH=${kernel_dir} -j$(nproc)
 $ kata_kernel_dir="/usr/share/kata-containers"
 $ kata_vmlinuz="${kata_kernel_dir}/kata-vmlinuz-${kernel_version}.container"
@@ -371,7 +371,7 @@ When setting up Kata using a [packaged installation method](https://github.com/k
 
 ## Build a custom QEMU
 
-Your qemu directory need to be prepared with source code. Alternatively, you can use the [Kata containers QEMU](https://github.com/kata-containers/qemu/tree/master) and checkout the recommended branch:
+Your QEMU directory need to be prepared with source code. Alternatively, you can use the [Kata containers QEMU](https://github.com/kata-containers/qemu/tree/master) and checkout the recommended branch:
 
 ```
 $ go get -d github.com/kata-containers/qemu
@@ -397,7 +397,7 @@ $ sudo -E make install
 >
 > - You should only do this step if you are on aarch64/arm64.
 > - You should include [Eric Auger's latest PCDIMM/NVDIMM patches](https://patchwork.kernel.org/cover/10647305/) which are
-> under upstream review for supporting nvdimm on aarch64.
+> under upstream review for supporting NVDIMM on aarch64.
 >
 You could build the custom `qemu-system-aarch64` as required with the following command:
 ```
@@ -508,7 +508,7 @@ Restart CRI-O to take changes into account
 $ sudo systemctl restart crio
 ```
 
-### containerd with cri plugin
+### containerd with CRI plugin
 
 If you select containerd with `cri` plugin, follow the "Getting Started for Developers"
 instructions [here](https://github.com/containerd/cri#getting-started-for-developers)
@@ -522,12 +522,11 @@ To customize containerd to select Kata Containers runtime, follow our
 
 Depending on what your needs are and what you expect to do with Kubernetes,
 please refer to the following
-[documentation](https://kubernetes.io/docs/setup/pick-right-solution/) to
-install it correctly.
+[documentation](https://kubernetes.io/docs/setup/) to install it correctly.
 
 Kubernetes talks with CRI implementations through a `container-runtime-endpoint`,
 also called CRI socket. This socket path is different depending on which CRI
-implementation you chose, and the kubelet service has to be updated accordingly.
+implementation you chose, and the Kubelet service has to be updated accordingly.
 
 ### Configure for CRI-O
 
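The CRI-socket update mentioned above is usually done with a systemd drop-in for the kubelet. A hypothetical sketch for the containerd case (the drop-in filename and socket path are assumptions and vary by distribution):

```ini
# Hypothetical drop-in: /etc/systemd/system/kubelet.service.d/0-cri.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```

For CRI-O, the endpoint would instead point at CRI-O's own socket.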
@@ -549,8 +548,8 @@ documentation [here](https://github.com/kata-containers/documentation/blob/maste
 
 ## Run a Kubernetes pod with Kata Containers
 
-After you update your kubelet service based on the CRI implementation you
-are using, reload and restart kubelet. Then, start your cluster:
+After you update your Kubelet service based on the CRI implementation you
+are using, reload and restart Kubelet. Then, start your cluster:
 ```bash
 $ sudo systemctl daemon-reload
 $ sudo systemctl restart kubelet
@@ -564,11 +563,11 @@ $ sudo kubeadm init --skip-preflight-checks --cri-socket /run/containerd/contain
 $ export KUBECONFIG=/etc/kubernetes/admin.conf
 ```
 
-You can force kubelet to use Kata Containers by adding some _untrusted_
+You can force Kubelet to use Kata Containers by adding some `untrusted`
 annotation to your pod configuration. In our case, this ensures Kata
 Containers is the selected runtime to run the described workload.
 
-_nginx-untrusted.yaml_
+`nginx-untrusted.yaml`
 ```yaml
 apiVersion: v1
 kind: Pod
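The hunk above only shows the first two lines of the pod manifest. A hypothetical completion of `nginx-untrusted.yaml`, assuming the untrusted-workload annotation key used by the containerd `cri` plugin of that era:

```yaml
# Hypothetical completion; the annotation key is an assumption based on the
# containerd cri plugin's untrusted-workload convention.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
  - name: nginx
    image: nginx
```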
@@ -594,7 +593,7 @@ If you are unable to create a Kata Container first ensure you have
 before attempting to create a container. Then run the
 [`kata-collect-data.sh`](https://github.com/kata-containers/runtime/blob/master/data/kata-collect-data.sh.in)
 script and paste its output directly into a
-[github issue](https://github.com/kata-containers/kata-containers/issues/new).
+[GitHub issue](https://github.com/kata-containers/kata-containers/issues/new).
 
 > **Note:**
 >
 
@@ -53,7 +53,7 @@ the concept is referred to using a link).
 Important information that is not part of the main document flow should be
 added as a Note in bold with all content contained within a block quote:
 
-> **Note:** This is areally important point!
+> **Note:** This is a really important point!
 >
 > This particular note also spans multiple lines. The entire note should be
 > included inside the quoted block.
@@ -118,7 +118,7 @@ utility.
 in a *bash code block* with every command line prefixed with `$ ` to denote
 a shell prompt:
 
-```
+<pre>
 
 ```bash
 $ echo "Hi - I am some bash code"
@@ -126,7 +126,7 @@ utility.
 $ [ $? -eq 0 ] && echo "success"
 ```
 
-```
+<pre>
 
 - If a command needs to be run as the `root` user, it must be run using
 `sudo(8)`.
@@ -142,7 +142,7 @@ utility.
 - In the unusual case that you need to display command *output*, use an
 unadorned code block (\`\`\`):
 
-```
+<pre>
 
 The output of the `ls(1)` command is expected to be:
 
@@ -150,7 +150,7 @@ utility.
 ls: cannot access '/foo': No such file or directory
 ```
 
-```
+<pre>
 
 - Long lines should not span across multiple lines by using the '`\`'
 continuation character.
 
@@ -13,7 +13,7 @@
 * [docker run and shared memory](#docker-run-and-shared-memory)
 * [docker run and sysctl](#docker-run-and-sysctl)
 * [Docker daemon features](#docker-daemon-features)
-* [selinux support](#selinux-support)
+* [SELinux support](#selinux-support)
 * [Architectural limitations](#architectural-limitations)
 * [Networking limitations](#networking-limitations)
 * [Support for joining an existing VM network](#support-for-joining-an-existing-vm-network)
@@ -64,10 +64,10 @@ spec and the non-standard extensions provided by `runc`.
 
 # Scope
 
-Each known limitation is captured in a separate github issue that contains
+Each known limitation is captured in a separate GitHub issue that contains
 detailed information about the issue. These issues are tagged with the
 `limitation` label. This document is a curated summary of important known
-limitations and provides links to the relevant github issues.
+limitations and provides links to the relevant GitHub issues.
 
 The following link shows the latest list of limitations:
 
@@ -76,7 +76,7 @@ The following link shows the latest list of limitations:
 # Contributing
 
 If you would like to work on resolving a limitation, please refer to the
-[contributers guide](https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md).
+[contributors guide](https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md).
 If you wish to raise an issue for a new limitation, either
 [raise an issue directly on the runtime](https://github.com/kata-containers/runtime/issues/new)
 or see the
@@ -136,7 +136,7 @@ these commands is potentially challenging.
 See issue https://github.com/clearcontainers/runtime/issues/341 and [the constraints challenge](#the-constraints-challenge) for more information.
 
 For CPUs resource management see
-[cpu-constraints](design/cpu-constraints.md).
+[CPU constraints](design/cpu-constraints.md).
 
 ### docker run and shared memory
 
@@ -156,10 +156,10 @@ See issue https://github.com/kata-containers/runtime/issues/185 for more informa
 ## Docker daemon features
 
 Some features enabled or implemented via the
-[dockerd daemon](https://docs.docker.com/config/daemon/) configuration are not yet
+[`dockerd` daemon](https://docs.docker.com/config/daemon/) configuration are not yet
 implemented.
 
-### selinux support
+### SELinux support
 
 The `dockerd` configuration option `"selinux-enabled": true` is not presently implemented
 in Kata Containers. Enabling this option causes an OCI runtime error.
@@ -168,7 +168,7 @@ See issue https://github.com/kata-containers/runtime/issues/784 for more informa
 
 The consequence of this is that the [Docker --security-opt is only partially supported](#docker---security-opt-option-partially-supported).
 
-Kubernetes [selinux labels](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#assign-selinux-labels-to-a-container) will also not be applied.
+Kubernetes [SELinux labels](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#assign-selinux-labels-to-a-container) will also not be applied.
 
 # Architectural limitations
 
@@ -244,7 +244,7 @@ Note: The `--security-opt apparmor=your_profile` is not yet supported. See https
 
 ## The constraints challenge
 
-Applying resource constraints such as cgroup, cpu, memory, and storage to a workload is not always straightforward with a VM based system. A Kata Container runs in an isolated environment inside a virtual machine. This, coupled with the architecture of Kata Containers, offers many more possibilities than are available to traditional Linux containers due to the various layers and contexts.
+Applying resource constraints such as cgroup, CPU, memory, and storage to a workload is not always straightforward with a VM based system. A Kata Container runs in an isolated environment inside a virtual machine. This, coupled with the architecture of Kata Containers, offers many more possibilities than are available to traditional Linux containers due to the various layers and contexts.
 
 In some cases it might be necessary to apply the constraints to multiple levels. In other cases, the hardware isolated VM provides equivalent functionality to the the requested constraint.
 
@@ -2,7 +2,7 @@
 
 This document lists the tasks required to create a Kata Release.
 
-It should be pasted directly into a github issue and each item checked off as it is completed.
+It should be pasted directly into a GitHub issue and each item checked off as it is completed.
 
 - [ ] Raise PRs to update the `VERSION` file in the following repositories:
 - [ ] [agent][agent]
@@ -32,17 +32,17 @@ It should be pasted directly into a github issue and each item checked off as it
 - [ ] [image][image]
 - [ ] [initrd][initrd]
 - [ ] [proxy][proxy]
-- [ ] [qemu-lite][qemu-lite]
+- [ ] [`qemu-lite`][qemu-lite]
 - [ ] [runtime][runtime]
 - [ ] [shim][shim]
 - [ ] [throttler][throttler]
 
 - [ ] Generate snap packages based on `HEAD`
 - [ ] Push snap packages via snapcraft tool
-- [ ] Pubish snap packages in the snapcraft store
+- [ ] Publish snap packages in the snapcraft store
 
 - [ ] Installation tests (must be done for major releases):
-- [ ] Centos
+- [ ] CentOS
 - [ ] Fedora
 - [ ] Ubuntu
 
 
@@ -17,7 +17,7 @@ providing only bug and security fixes.
 Kata Containers will maintain two stable release branches in addition to the master branch.
 Once a new MAJOR or MINOR release is created from master, a new stable branch is created for
 the prior MAJOR or MINOR release and the older stable branch is no longer maintained. End of
-maintainence for a branch is announced on the Kata Containers mailing list. Users can determine
+maintenance for a branch is announced on the Kata Containers mailing list. Users can determine
 the version currently installed by running `kata-runtime kata-env`. It is recommended to use the
 latest stable branch available.
 
@@ -81,7 +81,7 @@ stable and master. While this is not in place currently, it should be considered
 ### Patch releases
 
 Releases are normally made every other week for patch releases, which include a GitHub release as
-well as binary packages. These patch releases are made for both stable branches, and a 'release candidate'
+well as binary packages. These patch releases are made for both stable branches, and a "release candidate"
 for the next `MAJOR` or `MINOR` is created from master. If there are no changes across all the repositories, no
 release is created and an announcement is made on the developer mailing list to highlight this.
 If a release is being made, each repository is tagged for this release, regardless
@@ -103,8 +103,8 @@ Kata guarantees compatibility between components that are within one minor relea
 
 This is critical for dependencies which cross between host (runtime, shim, proxy) and
 the guest (hypervisor, rootfs and agent). For example, consider a cluster with a long-running
-deployment, workload-never-dies, all on kata version 1.1.3 components. If the operator updates
+deployment, workload-never-dies, all on Kata version 1.1.3 components. If the operator updates
 the Kata components to the next new minor release (i.e. 1.2.0), we need to guarantee that the 1.2.0
 runtime still communicates with 1.1.3 agent within workload-never-dies.
 
-Handling live-update is out of the scope of this document. See this [kata-runtime issue](https://github.com/kata-containers/runtime/issues/492) for details.
+Handling live-update is out of the scope of this document. See this [`kata-runtime` issue](https://github.com/kata-containers/runtime/issues/492) for details.
 
@@ -157,7 +157,7 @@ new package versions are published.
 
 The `kata-linux-container` package contains a Linux\* kernel based on the
 latest vanilla version of the
-[longterm kernel](https://www.kernel.org/)
+[long-term kernel](https://www.kernel.org/)
 plus a small number of
 [patches](https://github.com/kata-containers/packaging/tree/master/kernel).
 
@@ -165,7 +165,7 @@ The `Longterm` branch is only updated with
 [important bug fixes](https://www.kernel.org/category/releases.html)
 meaning this package is only updated when necessary.
 
-The guest kernel package is updated when a new longterm kernel is released
+The guest kernel package is updated when a new long-term kernel is released
 and when any patch updates are required.
 
 ### Image
 
@@ -1,10 +1,10 @@
-# Kata Containers and vsocks
+# Kata Containers and VSOCKs
 
 - [Introduction](#introduction)
 - [proxy communication diagram](#proxy-communication-diagram)
-- [vsock communication diagram](#vsock-communication-diagram)
+- [VSOCK communication diagram](#vsock-communication-diagram)
 - [System requirements](#system-requirements)
-- [Advantages of using vsocks](#advantages-of-using-vsocks)
+- [Advantages of using VSOCKs](#advantages-of-using-vsocks)
 - [High density](#high-density)
 - [Reliability](#reliability)
 
@@ -13,12 +13,12 @@
 There are two different ways processes in the virtual machine can communicate
 with processes in the host. The first one is by using serial ports, where the
 processes in the virtual machine can read/write data from/to a serial port
-device and the processes in the host can read/write data from/to a unix socket.
+device and the processes in the host can read/write data from/to a Unix socket.
 Most GNU/Linux distributions have support for serial ports, making it the most
 portable solution. However, the serial link limits read/write access to one
 process at a time. To deal with this limitation the resources (serial port and
-unix socket) must be multiplexed. In Kata Containers those resources are
-multiplexed by using [kata-proxy][2] and [yamux][3], the following diagram shows
+Unix socket) must be multiplexed. In Kata Containers those resources are
+multiplexed by using [`kata-proxy`][2] and [Yamux][3], the following diagram shows
 how it's implemented.
 
 
@@ -52,12 +52,12 @@ how it's implemented.
 `----------------------'
 ```
 
-A newer, simpler method is [vsocks][4], which can accept connections from
-multiple clients and does not require multiplexers ([kata-proxy][2] and
-[yamux][3]). The following diagram shows how it's implemented in Kata Containers.
+A newer, simpler method is [VSOCKs][4], which can accept connections from
+multiple clients and does not require multiplexers ([`kata-proxy`][2] and
+[Yamux][3]). The following diagram shows how it's implemented in Kata Containers.
 
-### vsock communication diagram
+### VSOCK communication diagram
 
 ```
 .----------------------.
@@ -84,7 +84,7 @@ multiple clients and does not require multiplexers ([kata-proxy][2] and
 ## System requirements
 
 The host Linux kernel version must be greater than or equal to v4.8, and the
-`vhost_vsock` module must be loaded or built-in (CONFIG_VHOST_VSOCK=y). To
+`vhost_vsock` module must be loaded or built-in (`CONFIG_VHOST_VSOCK=y`). To
 load the module run the following command:
 
 ```
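After loading the module as described above, it can help to verify that the host actually exposes the VSOCK device. A minimal sketch, assuming the canonical `/dev/vhost-vsock` character device path:

```shell
#!/bin/sh
# Sketch: report whether the host can offer VSOCK to Kata Containers.
# Assumes the canonical /dev/vhost-vsock device node created by vhost_vsock.
vsock_ready() {
    if [ -c /dev/vhost-vsock ]; then
        echo "yes"
    else
        echo "no"
    fi
}
vsock_ready
```

On hosts without the module loaded this prints `no`; load `vhost_vsock` (or rebuild with `CONFIG_VHOST_VSOCK=y`) and re-run.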
@@ -95,14 +95,14 @@ The Kata Containers version must be greater than or equal to 1.2.0 and `use_vsoc
 must be set to `true` in the runtime [configuration file][1].
 
 ### With VMWare guest
-To use Kata Containers with vsocks in a VMWare guest environment, first stop the vmware-tools service and unload the VMWare Linux kernel module.
+To use Kata Containers with VSOCKs in a VMWare guest environment, first stop the `vmware-tools` service and unload the VMWare Linux kernel module.
 ```
 sudo systemctl stop vmware-tools
 sudo modprobe -r vmw_vsock_vmci_transport
 sudo modprobe -i vhost_vsock
 ```
 
-## Advantages of using vsocks
+## Advantages of using VSOCKs
 
 ### High density
 
@@ -111,18 +111,18 @@ Using a proxy for multiplexing the connections between the VM and the host uses
 memory that could have been used to host more PODs. When we talk about density
 each kilobyte matters and it might be the decisive factor between run another
 POD or not. For example if you have 500 PODs running in a server, the same
-amount of [kata-proxy][2] processes will be running and consuming for around
-2250MB of RAM. Before making the decision not to use vsocks, you should ask
+amount of [`kata-proxy`][2] processes will be running and consuming for around
+2250MB of RAM. Before making the decision not to use VSOCKs, you should ask
 yourself, how many more containers can run with the memory RAM consumed by the
-kata-proxies?.
+Kata proxies?
 
 ### Reliability
 
-[kata-proxy][2] is in charge of multiplexing the connections between virtual
+[`kata-proxy`][2] is in charge of multiplexing the connections between virtual
 machine and host processes, if it dies all connections get broken. For example
-if you have a [POD][5] with 10 containers running, if kata-proxy dies it would
+if you have a [POD][5] with 10 containers running, if `kata-proxy` dies it would
 be impossible to contact your containers, though they would still be running.
-Since communication via vsocks is direct, the only way to lose communication
+Since communication via VSOCKs is direct, the only way to lose communication
 with the containers is if the VM itself or the [shim][6] dies, if this happens
 the containers are removed automatically.
 
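The density figures above imply a per-proxy cost worth making explicit: 2250 MB spread over 500 proxies is about 4.5 MB each. A small arithmetic sketch (the 500/2250 numbers come from the text; everything else is illustrative):

```shell
#!/bin/sh
# Back-of-the-envelope check of the density claim: if 500 kata-proxy
# processes consume ~2250 MB in total, each proxy costs ~4.5 MB of RAM.
pods=500
total_mb=2250
# POSIX shell arithmetic is integer-only, so compute in tenths of a MB.
per_proxy_tenths=$(( total_mb * 10 / pods ))
echo "${per_proxy_tenths}" # tenths of a MB per proxy, i.e. 45 -> 4.5 MB
```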
@@ -34,7 +34,7 @@ This is an architectural overview of Kata Containers, based on the 1.5.0 release
 The two primary deliverables of the Kata Containers project are a container runtime
 and a CRI friendly shim. There is also a CRI friendly library API behind them.
 
-The [Kata Containers runtime (kata-runtime)](https://github.com/kata-containers/runtime)
+The [Kata Containers runtime (`kata-runtime`)](https://github.com/kata-containers/runtime)
 is compatible with the [OCI](https://github.com/opencontainers) [runtime specification](https://github.com/opencontainers/runtime-spec)
 and therefore works seamlessly with the
 [Docker\* Engine](https://www.docker.com/products/docker-engine) pluggable runtime
@@ -45,32 +45,32 @@ select between the [default Docker and CRI shim runtime (runc)](https://github.c
 and `kata-runtime`.
 
 `kata-runtime` creates a QEMU\*/KVM virtual machine for each container or pod,
-the Docker engine or Kubernetes' `kubelet` creates respectively.
+the Docker engine or `kubelet` (Kubernetes) creates respectively.
 
 
 
 The [`containerd-shim-kata-v2` (shown as `shimv2` from this point onwards)](https://github.com/kata-containers/runtime/tree/master/containerd-shim-v2)
 is another Kata Containers entrypoint, which
 implements the [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2) for Kata.
-With `shimv2`, kubernetes can launch Pod and OCI compatible containers with one shim (the `shimv2`) per Pod instead
+With `shimv2`, Kubernetes can launch Pod and OCI compatible containers with one shim (the `shimv2`) per Pod instead
 of `2N+1` shims (a `containerd-shim` and a `kata-shim` for each container and the Pod sandbox itself), and no standalone
-`kata-proxy` process even if no vsock is available.
+`kata-proxy` process even if no VSOCK is available.
 
-
+
 
 The container process is then spawned by
 [agent](https://github.com/kata-containers/agent), an agent process running
-as a daemon inside the virtual machine. kata-agent runs a gRPC server in
-the guest using a virtio serial or vsock interface which QEMU exposes as a socket
-file on the host. kata-runtime uses a gRPC protocol to communicate with
+as a daemon inside the virtual machine. `kata-agent` runs a gRPC server in
+the guest using a VIRTIO serial or VSOCK interface which QEMU exposes as a socket
+file on the host. `kata-runtime` uses a gRPC protocol to communicate with
 the agent. This protocol allows the runtime to send container management
 commands to the agent. The protocol is also used to carry the I/O streams (stdout,
 stderr, stdin) between the containers and the manage engines (e.g. Docker Engine).
 
 For any given container, both the init process and all potentially executed
 commands within that container, together with their related I/O streams, need
-to go through the virtio serial or vsock interface exported by QEMU.
-In the virtio serial case, a [Kata Containers
+to go through the VIRTIO serial or VSOCK interface exported by QEMU.
+In the VIRTIO serial case, a [Kata Containers
 proxy (`kata-proxy`)](https://github.com/kata-containers/proxy) instance is
 launched for each virtual machine to handle multiplexing and demultiplexing
 those commands and streams.
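The `2N+1` shim count mentioned above is easy to make concrete. A small illustrative sketch (the pod size of 10 is hypothetical):

```shell
#!/bin/sh
# Illustration of the shim-count claim: with the legacy split (a
# containerd-shim plus a kata-shim per container), a pod of N containers
# needs 2N+1 host processes, while shimv2 needs a single shim per pod.
n=10                      # hypothetical pod with 10 containers
legacy=$(( 2 * n + 1 ))
shimv2=1
echo "legacy=${legacy} shimv2=${shimv2}"
```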
@@ -96,7 +96,7 @@ As a result, there will not be any of the additional processes previously listed
 
 The container workload, that is, the actual OCI bundle rootfs, is exported from the
 host to the virtual machine. In the case where a block-based graph driver is
-configured, virtio-scsi will be used. In all other cases a 9pfs virtio mount point
+configured, `virtio-scsi` will be used. In all other cases a 9pfs VIRTIO mount point
 will be used. `kata-agent` uses this mount point as the root filesystem for the
 container processes.
 
@@ -111,8 +111,8 @@ to create virtual machines where containers will run:
 ### QEMU/KVM
 
 Depending on the host architecture, Kata Containers supports various machine types,
-for example `pc` and `q35` on x86 systems, `virt` on ARM systems and `pseries` on IBM Power systems. Kata Containers'
-default machine type is `pc`. The default machine type and its [`Machine accelerators`](#machine-accelerators) can
+for example `pc` and `q35` on x86 systems, `virt` on ARM systems and `pseries` on IBM Power systems. The default Kata Containers
+machine type is `pc`. The default machine type and its [`Machine accelerators`](#machine-accelerators) can
 be changed by editing the runtime [`configuration`](#configuration) file.
 
 The following QEMU features are used in Kata Containers to manage resource constraints, improve
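The machine-type selection described above lives in the runtime configuration file. A hypothetical fragment (the section and key names are assumptions following Kata's `configuration.toml` conventions):

```toml
# Sketch of the relevant configuration.toml section (illustrative).
[hypervisor.qemu]
machine_type = "pc"   # e.g. "q35" on x86, "virt" on ARM, "pseries" on IBM Power
```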
@@ -129,7 +129,7 @@ Machine accelerators are architecture specific and can be used to improve the pe
 and enable specific features of the machine types. The following machine accelerators
 are used in Kata Containers:
 
-- nvdimm: This machine accelerator is x86 specific and only supported by `pc` and
+- NVDIMM: This machine accelerator is x86 specific and only supported by `pc` and
 `q35` machine types. `nvdimm` is used to provide the root filesystem as a persistent
 memory device to the Virtual Machine.
 
@@ -277,7 +277,7 @@ spawns the container process inside the VM, leveraging the `libcontainer` packag
 At this point the container process is running inside of the VM, and it is represented
 on the host system by the `kata-shim` process.
 
-
+
 
 #### `start`
 
@@ -290,7 +290,7 @@ With traditional containers, [`start`](https://github.com/kata-containers/runtim
 2. Call into the post-start hooks. Usually, this is a no-op since nothing is provided
 (this needs clarification)
 
-
+
 
 #### `exec`
 
@@ -303,9 +303,9 @@ container. In Kata Containers, this is handled as follows:
 original `kata-shim` representing the container process. This new `kata-shim` is
 used for the new exec process.
 
-Now the `exec`'ed process is running within the VM, sharing `uts`, `pid`, `mnt` and `ipc` namespaces with the container process.
+Now the process started with `exec` is running within the VM, sharing `uts`, `pid`, `mnt` and `ipc` namespaces with the container process.
 
-
+
 
 #### `kill`
 
@@ -342,7 +342,7 @@ cannot be deleted unless the OCI runtime is explicitly being asked to, by using
 
 If the sandbox is not stopped, but the particular container process returned on
 its own already, the `kata-runtime` will first go through most of the steps a `kill`
-would go through for a termination signal. After this process, or if the sandboxID was already stopped to begin with, then `kata-runtime` will:
+would go through for a termination signal. After this process, or if the `sandboxID` was already stopped to begin with, then `kata-runtime` will:
 
 1. Remove container resources. Every file kept under `/var/{lib,run}/virtcontainers/sandboxes/<sandboxID>/<containerID>`.
 2. Remove sandbox resources. Every file kept under `/var/{lib,run}/virtcontainers/sandboxes/<sandboxID>`.
@@ -380,13 +380,13 @@ is used, the I/O streams associated with each process needs to be multiplexed an
|
||||
to multiple `kata-shim` and `kata-runtime` clients associated with the VM. Its
|
||||
main role is to route the I/O streams and signals between each `kata-shim`
|
||||
instance and the `kata-agent`.
|
||||
`kata-proxy` connects to `kata-agent` on a unix domain socket that `kata-runtime` provides
|
||||
`kata-proxy` connects to `kata-agent` on a Unix domain socket that `kata-runtime` provides
|
||||
while spawning `kata-proxy`.
|
||||
`kata-proxy` uses [`yamux`](https://github.com/hashicorp/yamux) to multiplex gRPC
|
||||
requests on its connection to the `kata-agent`.
|
||||
|
||||
When proxy type is configured as "proxyBuiltIn", we do not spawn a separate
|
||||
process to proxy grpc connections. Instead a built-in yamux grpc dialer is used to connect
|
||||
When proxy type is configured as `proxyBuiltIn`, we do not spawn a separate
|
||||
process to proxy gRPC connections. Instead a built-in Yamux gRPC dialer is used to connect
|
||||
directly to `kata-agent`. This is used by CRI container runtime server `frakti` which
|
||||
calls directly into `kata-runtime`.

@@ -411,11 +411,11 @@ reaper and the `kata-agent`. `kata-shim`:
`containerID` and `execID`. The `containerID` and `execID` are used to identify
the true container process that the shim process will be shadowing or representing.
- Forwards the standard input stream from the container process reaper into
`kata-proxy` using grpc `WriteStdin` gRPC API.
`kata-proxy` using gRPC `WriteStdin` gRPC API.
- Reads the standard output/error from the container process.
- Forwards signals it receives from the container process reaper to `kata-proxy`
using `SignalProcessRequest` API.
- Monitors terminal changes and forwards them to `kata-proxy` using grpc `TtyWinResize`
- Monitors terminal changes and forwards them to `kata-proxy` using gRPC `TtyWinResize`
API.

@@ -426,9 +426,9 @@ At some point in a container lifecycle, container engines will set up that names
to add the container to a network which is isolated from the host network, but
which is shared between containers

In order to do so, container engines will usually add one end of a `virtual ethernet
(veth)` pair into the container networking namespace. The other end of the `veth`
pair is added to the container network.
In order to do so, container engines will usually add one end of a virtual
ethernet (`veth`) pair into the container networking namespace. The other end of
the `veth` pair is added to the container network.

This is a very namespace-centric approach as many hypervisors (in particular QEMU)
cannot handle `veth` interfaces. Typically, `TAP` interfaces are created for VM
@@ -450,25 +450,25 @@ and [CNI](https://github.com/containernetworking/cni) for networking management.

__CNM lifecycle__

1. RequestPool
1. `RequestPool`

2. CreateNetwork
2. `CreateNetwork`

3. RequestAddress
3. `RequestAddress`

4. CreateEndPoint
4. `CreateEndPoint`

5. CreateContainer
5. `CreateContainer`

6. Create `config.json`

7. Create PID and network namespace

8. ProcessExternalKey
8. `ProcessExternalKey`

9. JoinEndPoint
9. `JoinEndPoint`

10. LaunchContainer
10. `LaunchContainer`

11. Launch

@@ -482,7 +482,7 @@ __Runtime network setup with CNM__

2. Create the network namespace

3. Call the prestart hook (from inside the netns)
3. Call the `prestart` hook (from inside the netns)

4. Scan network interfaces inside netns and get the name of the interface
created by prestart hook
@@ -538,21 +538,21 @@ such as networking, storage, mount, PID, etc. is called a
[Pod](https://kubernetes.io/docs/user-guide/pods/).
A node can have multiple pods, but at a minimum, a node within a Kubernetes cluster
only needs to run a container runtime and a container agent (called a
[kubelet](https://kubernetes.io/docs/admin/kubelet/)).
[Kubelet](https://kubernetes.io/docs/admin/kubelet/)).

A Kubernetes cluster runs a control plane where a scheduler (typically running on a
dedicated master node) calls into a compute kubelet. This kubelet instance is
dedicated master node) calls into a compute Kubelet. This Kubelet instance is
responsible for managing the lifecycle of pods within the nodes and eventually relies
on a container runtime to handle execution. The kubelet architecture decouples
on a container runtime to handle execution. The Kubelet architecture decouples
lifecycle management from container execution through the dedicated
`gRPC` based [Container Runtime Interface (CRI)](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/container-runtime-interface-v1.md).

In other words, a kubelet is a CRI client and expects a CRI implementation to
In other words, a Kubelet is a CRI client and expects a CRI implementation to
handle the server side of the interface.
[CRI-O\*](https://github.com/kubernetes-incubator/cri-o) and [Containerd CRI Plugin\*](https://github.com/containerd/cri) are CRI implementations that rely on [OCI](https://github.com/opencontainers/runtime-spec)
compatible runtimes for managing container instances.

Kata Containers is an officially supported CRI-O and Containerd CRI Plugin runtime. It is OCI compatible and therefore aligns with each projects' architecture and requirements.
Kata Containers is an officially supported CRI-O and Containerd CRI Plugin runtime. It is OCI compatible and therefore aligns with project's architecture and requirements.
However, due to the fact that Kubernetes execution units are sets of containers (also
known as pods) rather than single containers, the Kata Containers runtime needs to
get extra information to seamlessly integrate with Kubernetes.
@@ -562,7 +562,7 @@ get extra information to seamlessly integrate with Kubernetes.
The Kubernetes\* execution unit is a pod that has specifications detailing constraints
such as namespaces, groups, hardware resources, security contents, *etc* shared by all
the containers within that pod.
By default the kubelet will send a container creation request to its CRI runtime for
By default the Kubelet will send a container creation request to its CRI runtime for
each pod and container creation. Without additional metadata from the CRI runtime,
the Kata Containers runtime will thus create one virtual machine for each pod and for
each containers within a pod. However the task of providing the Kubernetes pod semantics
@@ -579,7 +579,7 @@ pod creation request from a container one.

As of Kata Containers 1.5, using `shimv2` with containerd 1.2.0 or above is the preferred
way to run Kata Containers with Kubernetes ([see the howto](https://github.com/kata-containers/documentation/blob/master/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md#configure-containerd-to-use-kata-containers)).
The CRI-O will catch up soon ([kubernetes-sigs/cri-o#2024](https://github.com/kubernetes-sigs/cri-o/issues/2024)).
The CRI-O will catch up soon ([`kubernetes-sigs/cri-o#2024`](https://github.com/kubernetes-sigs/cri-o/issues/2024)).

Refer to the following how-to guides:

@@ -597,7 +597,7 @@ specific annotations to the OCI configuration file (`config.json`) which is pass
the OCI compatible runtime.

Before calling its runtime, CRI-O will always add a `io.kubernetes.cri-o.ContainerType`
annotation to the `config.json` configuration file it produces from the kubelet CRI
annotation to the `config.json` configuration file it produces from the Kubelet CRI
request. The `io.kubernetes.cri-o.ContainerType` annotation can either be set to `sandbox`
or `container`. Kata Containers will then use this annotation to decide if it needs to
respectively create a virtual machine or a container inside a virtual machine associated
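For illustration, the relevant fragment of a `config.json` that CRI-O produces for a pod sandbox could look like the following (an abridged sketch, not a complete OCI configuration):

```json
{
  "annotations": {
    "io.kubernetes.cri-o.ContainerType": "sandbox"
  }
}
```

A container running inside an existing sandbox would instead carry the value `container`.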
@@ -647,7 +647,7 @@ developers applications would be `untrusted` by default. Developers workloads ca
be buggy, unstable or even include malicious code and thus from a security perspective
it makes sense to tag them as `untrusted`. A CRI-O and Kata Containers based
Kubernetes cluster handles this use case transparently as long as the deployed
containers are properly tagged. All `untrusted` containers will be handled by kata Containers and thus run in a hardware virtualized secure sandbox while `runc`, for
containers are properly tagged. All `untrusted` containers will be handled by Kata Containers and thus run in a hardware virtualized secure sandbox while `runc`, for
example, could handle the `trusted` ones.

CRI-O's default behavior is to trust all pods, except when they're annotated with

@@ -669,7 +669,7 @@ a pod is **not** `Privileged` the runtime selection is done as follows:

Kata Containers utilizes the Linux kernel DAX [(Direct Access filesystem)](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/dax.txt)
feature to efficiently map some host-side files into the guest VM space.
In particular, Kata Containers uses the QEMU nvdimm feature to provide a
In particular, Kata Containers uses the QEMU NVDIMM feature to provide a
memory-mapped virtual device that can be used to DAX map the virtual machine's
root filesystem into the guest memory address space.

@@ -677,7 +677,7 @@ Mapping files using DAX provides a number of benefits over more traditional VM
file and device mapping mechanisms:

- Mapping as a direct access devices allows the guest to directly access
the host memory pages (such as via eXicute In Place (XIP)), bypassing the guest
the host memory pages (such as via Execute In Place (XIP)), bypassing the guest
page cache. This provides both time and space optimizations.
- Mapping as a direct access device inside the VM allows pages from the
host to be demand loaded using page faults, rather than having to make requests
@@ -687,12 +687,12 @@ file and device mapping mechanisms:
share pages.

Kata Containers uses the following steps to set up the DAX mappings:
1. QEMU is configured with an nvdimm memory device, with a memory file
backend to map in the host-side file into the virtual nvdimm space.
2. The guest kernel command line mounts this nvdimm device with the DAX
1. QEMU is configured with an NVDIMM memory device, with a memory file
backend to map in the host-side file into the virtual NVDIMM space.
2. The guest kernel command line mounts this NVDIMM device with the DAX
feature enabled, allowing direct page mapping and access, thus bypassing the
guest page cache.
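Sketched as a QEMU command-line fragment, step 1 might look roughly like this (a configuration sketch only: the machine type, image path, sizes, and device IDs are illustrative assumptions, not the exact options Kata generates):

```sh
qemu-system-x86_64 \
    -machine pc,nvdimm=on \
    -m 2048M,slots=2,maxmem=32G \
    -object memory-backend-file,id=mem0,mem-path=/usr/share/kata-containers/kata-containers.img,size=128M \
    -device nvdimm,id=nvdimm0,memdev=mem0
```

The guest kernel then mounts the resulting `pmem` device with DAX enabled (step 2) so reads bypass the guest page cache.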

![DAX](arch-images/DAX.png)

Information on the use of nvdimm via QEMU is available in the [QEMU source code](http://git.qemu-project.org/?p=qemu.git;a=blob;f=docs/nvdimm.txt;hb=HEAD)
Information on the use of NVDIMM via QEMU is available in the [QEMU source code](http://git.qemu-project.org/?p=qemu.git;a=blob;f=docs/nvdimm.txt;hb=HEAD)

@@ -23,7 +23,7 @@ Be aware that increasing this value negatively impacts the virtual machine's
boot time and memory footprint.
In general, we recommend that you do not edit this variable, unless you know
what are you doing. If your container needs more than one vCPU, use
[docker `--cpus`][1], [docker update][4], or [kubernetes `cpu` limits][2] to
[docker `--cpus`][1], [docker update][4], or [Kubernetes `cpu` limits][2] to
assign more resources.

*Docker*
@@ -86,7 +86,7 @@ constraints with each container trying to consume 100% of vCPU, the resources
divide in two parts, 50% of vCPU for each container because your virtual
machine does not have enough resources to satisfy containers needs. If you want
to give access to a greater or lesser portion of vCPUs to a specific container,
use [docker --cpu-shares][1] or [Kubernetes `cpu` requests][2].
use [`docker --cpu-shares`][1] or [Kubernetes `cpu` requests][2].

*Docker*

@@ -176,8 +176,8 @@ docker run --cpus 4 -ti debian bash -c "nproc; cat /sys/fs/cgroup/cpu,cpuacct/cp
Kata Containers runs over two layers of cgroups, the first layer is in the guest where
only the workload is placed, the second layer is in the host that is more complex and
might contain more than one process and task (thread) depending of the number of
containers per POD and vCPUs per container. The following diagram represents a nginx container
created with `docker` with the default number of vcpus.
containers per POD and vCPUs per container. The following diagram represents a Nginx container
created with `docker` with the default number of vCPUs.

```
@@ -185,7 +185,7 @@ $ docker run -dt --runtime=kata-runtime nginx

.-------.
| nginx |
| Nginx |
.--'-------'---. .------------.
| Guest Cgroup | | Kata agent |
.-'--------------'--'------------'. .-----------.
@@ -202,13 +202,13 @@ vCPUs are constrained.

### cgroups in the guest

Only the workload process including all its threads are placed into cpu cgroups, this means
Only the workload process including all its threads are placed into CPU cgroups, this means
that `kata-agent` and `systemd` run without constraints in the guest.

#### CPU pinning

Kata Containers tries to apply and honor the cgroups but sometimes that is not possible.
An example of this occurs with cpu cgroups when the number of virtual CPUs (in the guest)
An example of this occurs with CPU cgroups when the number of virtual CPUs (in the guest)
does not match the actual number of physical host CPUs.
In Kata Containers to have a good performance and small memory footprint, the resources are
hot added when they are needed, therefore the number of virtual resources is not the same
@@ -1,5 +1,5 @@
# Kata API Design
To fulfill the [kata design requirements](kata-design-requirements.md), and based on the disscusion on [Virtcontainers API extentions](https://docs.google.com/presentation/d/1dbGrD1h9cpuqAPooiEgtiwWDGCYhVPdatq7owsKHDEQ), the kata runtime library features the following APIs:
To fulfill the [Kata design requirements](kata-design-requirements.md), and based on the discussion on [Virtcontainers API extensions](https://docs.google.com/presentation/d/1dbGrD1h9cpuqAPooiEgtiwWDGCYhVPdatq7owsKHDEQ), the Kata runtime library features the following APIs:
- Sandbox based top API
- Storage and network hotplug API
- Plugin frameworks for external proprietary Kata runtime extensions
@@ -7,43 +7,45 @@ To fulfill the [kata design requirements](kata-design-requirements.md), and base

## Sandbox Based API
### Sandbox Management API

|Name|Description|
|---|---|
|CreateSandbox(SandboxConfig)| Create and start a sandbox, and return the sandbox structure.|
|FetchSandbox(ID)| Connect to an existing sandbox and return the sandbox structure.|
|ListSandboxes()| List all existing standboxes with status. |
|`CreateSandbox(SandboxConfig)`| Create and start a sandbox, and return the sandbox structure.|
|`FetchSandbox(ID)`| Connect to an existing sandbox and return the sandbox structure.|
|`ListSandboxes()`| List all existing sandboxes with status. |

### Sandbox Operation API

|Name|Description|
|---|---|
|sandbox.Pause()| Pause the sandbox.|
|sandbox.Resume()| Resume the paused sandbox.|
|sandbox.Release()| Release a sandbox data structure, close connections to the agent, and quit any goroutines associated with the sandbox. Mostly used for daemon restart.|
|sandbox.Delete()| Destroy the sandbox and remove all persistent metadata.|
|sandbox.Status()| Get the status of the sandbox and containers.|
|sandbox.Monitor()| Return a context handler for caller to monitor sandbox callbacks such as error termination.|
|sandbox.CreateContainer()| Create new container in the sandbox.|
|sandbox.DeleteContainer()| Delete a container from the sandbox.|
|sandbox.StartContainer()| Start a container in the sandbox.|
|sandbox.StatusContainer()| Get the status of a container in the sandbox.|
|sandbox.EnterContainer()| Run a new process in a container.|
|sandbox.WaitProcess()| Wait on a process to terminate.|
|`sandbox.Pause()`| Pause the sandbox.|
|`sandbox.Resume()`| Resume the paused sandbox.|
|`sandbox.Release()`| Release a sandbox data structure, close connections to the agent, and quit any goroutines associated with the sandbox. Mostly used for daemon restart.|
|`sandbox.Delete()`| Destroy the sandbox and remove all persistent metadata.|
|`sandbox.Status()`| Get the status of the sandbox and containers.|
|`sandbox.Monitor()`| Return a context handler for caller to monitor sandbox callbacks such as error termination.|
|`sandbox.CreateContainer()`| Create new container in the sandbox.|
|`sandbox.DeleteContainer()`| Delete a container from the sandbox.|
|`sandbox.StartContainer()`| Start a container in the sandbox.|
|`sandbox.StatusContainer()`| Get the status of a container in the sandbox.|
|`sandbox.EnterContainer()`| Run a new process in a container.|
|`sandbox.WaitProcess()`| Wait on a process to terminate.|
### Sandbox Hotplug API
|Name|Description|
|---|---|
|sandbox.AddDevice()| Add new storage device to the sandbox.|
|sandbox.AddInterface()| Add new nic to the sandbox.|
|sandbox.RemoveInterface()| Remove a nic from the sandbox.|
|sandbox.ListInterfaces()| List all nics and their configurations in the sandbox.|
|sandbox.UpdateRoutes()| Update the sandbox route table (e.g. for portmapping support).|
|sandbox.ListRoutes()| List the sandbox route table.|
|`sandbox.AddDevice()`| Add new storage device to the sandbox.|
|`sandbox.AddInterface()`| Add new NIC to the sandbox.|
|`sandbox.RemoveInterface()`| Remove a NIC from the sandbox.|
|`sandbox.ListInterfaces()`| List all NICs and their configurations in the sandbox.|
|`sandbox.UpdateRoutes()`| Update the sandbox route table (e.g. for portmapping support).|
|`sandbox.ListRoutes()`| List the sandbox route table.|

### Sandbox Relay API
|Name|Description|
|---|---|
|sandbox.WinsizeProcess(containerID, processID, Height, Width)|Relay TTY resize request to a process.|
|sandbox.SignalProcess(containerID, processID, signalID, signalALL)| Relay a signal to a process or all processes in a container.|
|sandbox.IOStream(containerID, processID)| Relay a process stdio. Return stdin/stdout/stderr pipes to the process stdin/stdout/stderr streams.|
|`sandbox.WinsizeProcess(containerID, processID, Height, Width)`|Relay TTY resize request to a process.|
|`sandbox.SignalProcess(containerID, processID, signalID, signalALL)`| Relay a signal to a process or all processes in a container.|
|`sandbox.IOStream(containerID, processID)`| Relay a process stdio. Return stdin/stdout/stderr pipes to the process stdin/stdout/stderr streams.|

## Plugin framework for external proprietary Kata runtime extensions
### Hypervisor plugin
@@ -55,9 +57,9 @@ All metadata storage plugins must implement the following API:

|Name|Description|
|---|---|
|storage.Save(key, value)| Save a record.|
|storage.Load(key)| Load a record.|
|storage.Delete(key)| Delete a record.|
|`storage.Save(key, value)`| Save a record.|
|`storage.Load(key)`| Load a record.|
|`storage.Delete(key)`| Delete a record.|

Built-in implementations include:
- Filesystem storage
@@ -69,15 +71,15 @@ All VM factory plugins must implement following API:

|Name|Description|
|---|---|
|VMFactory.NewVM(HypervisorConfig)|Create a new VM based on `HypervisorConfig`.|
|`VMFactory.NewVM(HypervisorConfig)`|Create a new VM based on `HypervisorConfig`.|

Built-in implementations include:

|Name|Description|
|---|---|
|CreateNew()| Create brand new VM based on 'HypervisorConfig'.|
|CreateFromTemplate()| Create new VM from template.|
|CreateFromCache()| Create new VM from VM caches.|
|`CreateNew()`| Create brand new VM based on `HypervisorConfig`.|
|`CreateFromTemplate()`| Create new VM from template.|
|`CreateFromCache()`| Create new VM from VM caches.|

### Sandbox Creation Plugin Workflow
![Sandbox Creation Plugin Workflow](arch-images/sandbox-creation-flow.svg)
@@ -91,19 +93,19 @@ Built-in implementations include:

|Name|Description|
|---|---|
|noopshim|Do not start any shim process.|
|ccshim| Start the cc-shim binary.|
|katashim| Start the kata-shim binary.|
|katashimbuiltin|No standalone shim process but shim functionality APIs are exported.|
|`noopshim`|Do not start any shim process.|
|`ccshim`| Start the cc-shim binary.|
|`katashim`| Start the `kata-shim` binary.|
|`katashimbuiltin`|No standalone shim process but shim functionality APIs are exported.|
- Supported proxy configurations:

|Name|Description|
|---|---|
|noopProxy| a dummy proxy implementation of the proxy interface, only used for testing purpose.|
|noProxy|generic implementation for any case where no actual proxy is needed.|
|ccProxy|run ccProxy to proxy between runtime and agent.|
|kataProxy|run kata-proxy to translate yamux connections between runtime and kata agent. |
|kataProxyBuiltin| no standalone proxy process and connect to kata agent with internal yamux translation.|
|`noopProxy`| a dummy proxy implementation of the proxy interface, only used for testing purpose.|
|`noProxy`|generic implementation for any case where no actual proxy is needed.|
|`ccProxy`|run `ccProxy` to proxy between runtime and agent.|
|`kataProxy`|run `kata-proxy` to translate Yamux connections between runtime and Kata agent. |
|`kataProxyBuiltin`| no standalone proxy process and connect to Kata agent with internal Yamux translation.|

### Built-in Shim Capability
Built-in shim capability is implemented by removing standalone shim process, and
@@ -111,5 +113,5 @@ supporting the shim related APIs.

### Built-in Proxy Capability
Built-in proxy capability is achieved by removing standalone proxy process, and
connecting to kata agent with a custom grpc dialer that is internal yamux translation.
connecting to Kata agent with a custom gRPC dialer that is internal Yamux translation.
The behavior is enabled when proxy is configured as `kataProxyBuiltin`.
@@ -1,22 +1,22 @@
# How to use Kata Containers and Containerd

- [Concepts](#concepts)
  - [Kubernetes RuntimeClass](#kubernetes-runtimeclass)
  - [Kubernetes `RuntimeClass`](#kubernetes-runtimeclass)
  - [Containerd Runtime V2 API: Shim V2 API](#containerd-runtime-v2-api-shim-v2-api)
- [Install](#install)
  - [Install Kata Containers](#install-kata-containers)
  - [Install containerd with cri plugin](#install-containerd-with-cri-plugin)
  - [Install containerd with CRI plugin](#install-containerd-with-cri-plugin)
  - [Install CNI plugins](#install-cni-plugins)
  - [Install cri-tools](#install-cri-tools)
  - [Install `cri-tools`](#install-cri-tools)
- [Configuration](#configuration)
  - [Configure containerd to use Kata Containers](#configure-containerd-to-use-kata-containers)
    - [Kata Containers as a RuntimeClass](#kata-containers-as-a-runtimeclass)
    - [Kata Containers as a `RuntimeClass`](#kata-containers-as-a-runtimeclass)
    - [Kata Containers as the runtime for untrusted workload](#kata-containers-as-the-runtime-for-untrusted-workload)
    - [Kata Containers as the default runtime](#kata-containers-as-the-default-runtime)
  - [Configuration for cri-tools](#configuration-for-cri-tools)
  - [Configuration for `cri-tools`](#configuration-for-cri-tools)
- [Run](#run)
  - [Launch containers with ctr command line](#launch-containers-with-ctr-command-line)
  - [Launch Pods with crictl command line](#launch-pods-with-crictl-command-line)
  - [Launch containers with `ctr` command line](#launch-containers-with-ctr-command-line)
  - [Launch Pods with `crictl` command line](#launch-pods-with-crictl-command-line)

This document covers the installation and configuration of [containerd](https://containerd.io/)
and [Kata Containers](https://katacontainers.io). The containerd provides not only the `ctr`
@@ -28,18 +28,18 @@ Previous versions are addressed here, but we suggest users upgrade to the newer

## Concepts

### Kubernetes RuntimeClass
### Kubernetes `RuntimeClass`

[RuntimeClass](https://kubernetes.io/docs/concepts/containers/runtime-class/) is a Kubernetes feature first
[`RuntimeClass`](https://kubernetes.io/docs/concepts/containers/runtime-class/) is a Kubernetes feature first
introduced in Kubernetes 1.12 as alpha. It is the feature for selecting the container runtime configuration to
use to run a pod’s containers. This feature is supported in `containerd` since [v1.2.0](https://github.com/containerd/containerd/releases/tag/v1.2.0).

Before the RuntimeClass was introduced, Kubernetes was not aware of the difference of runtimes on the node. `kubelet`
Before the `RuntimeClass` was introduced, Kubernetes was not aware of the difference of runtimes on the node. `kubelet`
creates Pod sandboxes and containers through CRI implementations, and treats all the Pods equally. However, there
are requirements to run trusted Pods (i.e. kubernetes plugin) in a native container like runC, and to run untrusted
are requirements to run trusted Pods (i.e. Kubernetes plugin) in a native container like runc, and to run untrusted
workloads with isolated sandboxes (i.e. Kata Containers).

As a result, the CRI implementations extended their semanitics for the requirements:
As a result, the CRI implementations extended their semantics for the requirements:

- At the beginning, [frakti](https://github.com/kubernetes/frakti) checks the network configuration of a Pod, and
treat Pod with `host` network as trusted, while others are treated as untrusted.
@@ -52,20 +52,20 @@ As a result, the CRI implementations extended their semanitics for the requireme

To eliminate the complexity of user configuration introduced by the non-standardized annotations and provide
extensibility, `RuntimeClass` was introduced. This gives users the ability to affect the runtime behavior
through 'RuntimeClass' without the knowledge of the CRI daemons. We suggest that users with multiple runtimes
use 'RuntimeClass' instead of the deprecated annotations.
through `RuntimeClass` without the knowledge of the CRI daemons. We suggest that users with multiple runtimes
use `RuntimeClass` instead of the deprecated annotations.
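For example, a `RuntimeClass` object for Kata might look like the following (a sketch assuming a cluster recent enough to serve the `node.k8s.io/v1beta1` API; clusters around 1.12 use `v1alpha1` with a slightly different schema):

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
```

The `handler` value must match a runtime handler configured in the CRI implementation.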

### Containerd Runtime V2 API: Shim V2 API

The [`containerd-shim-kata-v2` (short as `shimv2` in this documentation)](https://github.com/kata-containers/runtime/tree/master/containerd-shim-v2)
implements the [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2) for Kata.
With `shimv2`, kubernetes can launch Pod and OCI-compatible containers with one shim per Pod. Prior to `shimv2`, `2N+1`
With `shimv2`, Kubernetes can launch Pod and OCI-compatible containers with one shim per Pod. Prior to `shimv2`, `2N+1`
shims (i.e. a `containerd-shim` and a `kata-shim` for each container and the Pod sandbox itself) and no standalone `kata-proxy`
process were used, even with vsock not available.
process were used, even with VSOCK not available.

![shimv2](arch-images/shimv2.svg)
![shimv2](arch-images/shimv2.svg)

The shim v2 is introduced in containerd [v1.2.0](https://github.com/containerd/containerd/releases/tag/v1.2.0) and kata `shimv2`
The shim v2 is introduced in containerd [v1.2.0](https://github.com/containerd/containerd/releases/tag/v1.2.0) and Kata `shimv2`
is implemented in Kata Containers v1.5.0.
## Install

@@ -74,7 +74,7 @@ is implemented in Kata Containers v1.5.0.

Follow the instructions to [install Kata Containers](https://github.com/kata-containers/documentation/blob/master/install/README.md).

### Install containerd with cri plugin
### Install containerd with CRI plugin

> **Note:** `cri` is a native plugin of containerd 1.1 and above. It is built into containerd and enabled by default.
> You do not need to install `cri` if you have containerd 1.1 or above. Just remove the `cri` plugin from the list of
@@ -104,10 +104,10 @@ $ sudo cp -r bin /opt/cni/
$ popd
```

### Install cri-tools
### Install `cri-tools`

> **Note:** `cri-tools` is a set of tools for CRI used for development and testing. Users who only want
> to use containerd with kubernetes can skip the `cri-tools`.
> to use containerd with Kubernetes can skip the `cri-tools`.

You can install the `cri-tools` from source code:

@@ -140,20 +140,20 @@ By default, the configuration of containerd is located at `/etc/containerd/confi

The following sections outline how to add Kata Containers to the configurations.

#### Kata Containers as a RuntimeClass
#### Kata Containers as a `RuntimeClass`

For
- Kata Containers v1.5.0 or above (including 1.5.0-rc)
- Containerd v1.2.0 or above
- Kubernetes v1.12.0 or above

The RuntimeClass is suggested.
The `RuntimeClass` is suggested.

The following configuration includes three runtime classes:
- `plugins.cri.containerd.runtimes.runc`: the runC, and it is the default runtime.
- `plugins.cri.containerd.runtimes.kata`: The function in containerd (reference [the document here](https://github.com/containerd/containerd/tree/master/runtime/v2#binary-naming))
where the dot-connected string `io.containerd.kata.v2` is translated to `containerd-shim-kata-v2` (i.e. the
binary name of the kata implementation of [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2)).
binary name of the Kata implementation of [Containerd Runtime V2 (Shim API)](https://github.com/containerd/containerd/tree/master/runtime/v2)).
- `plugins.cri.containerd.runtimes.katacli`: the `containerd-shim-runc-v1` calls `kata-runtime`, which is the legacy process.

```toml
@@ -216,7 +216,7 @@ Name it as `/usr/local/bin/containerd-shim-katafc-v2` and reference it in the co

#### Kata Containers as the runtime for untrusted workload

For cases without RuntimeClass support, we can use the legacy annotation method to support using Kata Containers
For cases without `RuntimeClass` support, we can use the legacy annotation method to support using Kata Containers
for an untrusted workload. With the following configuration, you can run trusted workloads with a runtime such as `runc`
and then, run an untrusted workload with Kata Containers:

@@ -274,9 +274,9 @@ Alternatively, for the earlier versions of Kata Containers and containerd that d
runtime_engine = "/usr/bin/kata-runtime"
```
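Once containerd is configured with an untrusted workload runtime, a pod is routed to it via the legacy annotation; a sketch of the Kubernetes-side metadata (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-demo
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
  - name: app
    image: busybox
```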
|
||||
|
||||
### Configuration for cri-tools
|
||||
### Configuration for `cri-tools`
|
||||
|
||||
> **Note:** If you skipped the [Install cri-tools](#install-cri-tools) section, you can skip this section too.
|
||||
> **Note:** If you skipped the [Install `cri-tools`](#install-cri-tools) section, you can skip this section too.
|
||||
|
||||
First, add the CNI configuration in the containerd configuration.
|
||||
|
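The CNI settings live under the `cri` plugin section of the containerd configuration; a minimal sketch, assuming the conventional CNI paths (adjust to your installation):

```toml
# Assumed conventional CNI locations for the containerd cri plugin
[plugins.cri.cni]
  bin_dir = "/opt/cni/bin"
  conf_dir = "/etc/cni/net.d"
```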
@@ -321,7 +321,7 @@ debug: true

## Run

### Launch containers with `ctr` command line

To run a container with Kata Containers through the containerd command line, you can run the following:
@@ -329,9 +329,9 @@ To run a container with Kata Containers through the containerd command line, you

```bash
$ sudo ctr run --runtime io.containerd.run.kata.v2 -t --rm docker.io/library/busybox:latest hello sh
```

This launches a BusyBox container named `hello`, and it will be removed by `--rm` after it quits.
### Launch Pods with `crictl` command line

With the `crictl` command line of `cri-tools`, you can specify the runtime class with the `-r` or `--runtime` flag.
Use the following to launch a Pod with the `kata` runtime class, using the pod definition in [the example](https://github.com/kubernetes-sigs/cri-tools/tree/master/docs/examples)
@@ -356,10 +356,10 @@ $ sudo crictl start 1aab7585530e6

```
1aab7585530e6
```

In Kubernetes, you need to create a `RuntimeClass` resource and add the `RuntimeClass` field in the Pod Spec
(see this [document](https://kubernetes.io/docs/concepts/containers/runtime-class/) for more information).
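A sketch of what that pair of objects looks like (the API version assumes the `node.k8s.io/v1beta1` group of that Kubernetes era; names, image, and handler are illustrative):

```yaml
# RuntimeClass whose handler name matches the containerd runtime class ("kata")
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# Pod that selects it via runtimeClassName
apiVersion: v1
kind: Pod
metadata:
  name: kata-demo
spec:
  runtimeClassName: kata
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```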
If `RuntimeClass` is not supported, you can use the following annotation in a Kubernetes pod to identify it as an untrusted workload:

```yaml
annotations:
```
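The annotation block above is truncated in this view; a hedged sketch of the annotation as commonly used with containerd's `cri` plugin:

```yaml
annotations:
  io.kubernetes.cri.untrusted-workload: "true"
```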
@@ -23,13 +23,13 @@ to launch Kata Containers. For the previous version of Kata Containers, the Pods

## Requirements

- Kubernetes, Kubelet, `kubeadm`
- containerd with `cri` plug-in
- Kata Containers

> **Note:** For information about the supported versions of these components,
> see the Kata Containers
> [`versions.yaml`](https://github.com/kata-containers/runtime/blob/master/versions.yaml)
> file.

## Install and configure containerd
@@ -52,7 +52,7 @@ Then, make sure the containerd works with the [examples in it](containerd-kata.m

### Configure Kubelet to use containerd

In order to allow Kubelet to use containerd (using the CRI interface), configure the service to point to the `containerd` socket.

- Configure Kubernetes to use `containerd`
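One common way to do this is a systemd drop-in for the kubelet; a sketch, assuming the default containerd socket path and the kubelet flag names of that Kubernetes era:

```
# e.g. /etc/systemd/system/kubelet.service.d/0-containerd.conf (hypothetical path)
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --container-runtime=remote \
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

After adding the drop-in, reload systemd and restart the kubelet for it to take effect.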
@@ -72,7 +72,7 @@ In order to allow kubelet to use containerd (using the CRI interface), configure

### Configure HTTP proxy - OPTIONAL

If you are behind a proxy, use the following script to configure your proxy for Docker, Kubelet, and containerd:

```bash
$ services="
```
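For systemd-managed services, such a script typically writes a proxy drop-in per service; a sketch of the generated unit fragment (the proxy URL is a placeholder):

```
# e.g. /etc/systemd/system/docker.service.d/proxy.conf (hypothetical path)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"
```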
@@ -59,7 +59,7 @@ For additional documentation on setting sysctls with Docker please refer to [Doc

Kubernetes considers certain sysctls as safe and others as unsafe. For detailed
information about which sysctls are considered unsafe, please refer to the [Kubernetes sysctl docs](https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/).
To use unsafe sysctls, the cluster admin needs to allow them, as in:

```
$ kubelet --allowed-unsafe-sysctls 'kernel.msg*,net.ipv4.route.min_pmtu' ...
```

@@ -77,12 +77,12 @@ nodeRegistration:

```
...
```

The above YAML can then be passed to `kubeadm init` as:
```
$ sudo -E kubeadm init --config=kubeadm.yaml
```

Both safe and unsafe sysctls can be enabled in the same way in the Pod YAML:

```yaml
apiVersion: v1
```
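The Pod spec above is truncated in this view; a fuller sketch (names, image, and sysctl values are illustrative; the unsafe sysctl must also be allowed on the kubelet):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # considered safe
      value: "1"
    - name: net.ipv4.route.min_pmtu  # unsafe; needs --allowed-unsafe-sysctls
      value: "1000"
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```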
@@ -7,11 +7,11 @@

## Introduction

Container deployments utilize explicit or implicit file sharing between the host filesystem and containers. From a trust perspective, avoiding a shared file-system between the trusted host and untrusted container is recommended. This is not always feasible. In Kata Containers, block-based volumes are preferred as they allow usage of either device pass-through or `virtio-blk` for access within the virtual machine.

As of the 1.7 release of Kata Containers, [9pfs](https://www.kernel.org/doc/Documentation/filesystems/9p.txt) is the default filesystem sharing mechanism. While this does allow for workload compatibility, it does so with degraded performance and potential POSIX compliance limitations.

To help address these limitations, [virtio-fs](https://virtio-fs.gitlab.io/) has been developed. virtio-fs is a shared file system that lets virtual machines access a directory tree on the host. In Kata Containers, virtio-fs can be used to share container volumes, secrets, config-maps, configuration files (hostname, hosts, `resolv.conf`) and the container rootfs on the host with the guest. virtio-fs provides significant performance and POSIX compliance improvements compared to 9pfs.

Enabling virtio-fs requires changes in the guest kernel as well as the VMM. For Kata Containers, experimental virtio-fs support is enabled through the [NEMU VMM](https://github.com/intel/nemu).
@@ -25,24 +25,24 @@ This document describes how to get Kata Containers to work with virtio-fs.

## Install Kata Containers with virtio-fs support

The Kata Containers NEMU configuration, the NEMU VMM and the `virtiofs` daemon are available in the [Kata Containers release](https://github.com/kata-containers/runtime/releases) artifacts starting with the 1.7 release. While the feature is experimental, distribution packages are not supported, but installation is available through [`kata-deploy`](https://github.com/kata-containers/packaging/tree/master/kata-deploy).

Install the latest release of Kata as follows:
```
docker run --runtime=runc -v /opt/kata:/opt/kata -v /var/run/dbus:/var/run/dbus -v /run/systemd:/run/systemd -v /etc/docker:/etc/docker -it katadocker/kata-deploy kata-deploy-docker install
```

This will place the Kata release artifacts in `/opt/kata`, and update Docker's configuration to include a runtime target, `kata-nemu`. Learn more about `kata-deploy` and how to use it in Kubernetes [here](https://github.com/kata-containers/packaging/tree/master/kata-deploy#kubernetes-quick-start).

## Run a Kata Container utilizing virtio-fs

Once installed, start a new container, utilizing NEMU + `virtiofs`:
```bash
$ docker run --runtime=kata-nemu -it busybox
```

Verify the new container is running with the NEMU hypervisor as well as using `virtiofsd`. To do this, look for the hypervisor path and the `virtiofs` daemon process on the host:
```bash
$ ps -aux | grep virtiofs
root ... /home/foo/build-x86_64_virt/x86_64_virt-softmmu/qemu-system-x86_64_virt
```
@@ -130,24 +130,24 @@ by each project.

### Sidecar Istio

Istio provides a [`bookinfo`](https://istio.io/docs/examples/bookinfo/)
sample, which you can rely on to inject their `envoy` proxy as a
sidecar.

You need to use their tool called `istioctl kube-inject` to inject
your YAML file. We use their `bookinfo` sample as an example:
```
$ istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml -o bookinfo-injected.yaml
```

### Sidecar Linkerd

Linkerd provides an [`emojivoto`](https://linkerd.io/2/getting-started/index.html)
sample, which you can rely on to inject their `linkerd` proxy as a
sidecar.

You need to use their tool called `linkerd inject` to inject your YAML
file. We use their `emojivoto` sample as an example:
```
$ wget https://raw.githubusercontent.com/runconduit/conduit-examples/master/emojivoto/emojivoto.yml
$ linkerd inject emojivoto.yml > emojivoto-injected.yaml
```
@@ -10,13 +10,13 @@
VMCache is a new function that creates VMs as caches before they are used.
It helps speed up new container creation.
The function consists of a server and some clients communicating
through a Unix socket. The protocol is gRPC in [`protocols/cache/cache.proto`](https://github.com/kata-containers/runtime/blob/master/protocols/cache/cache.proto).
The VMCache server creates some VMs and caches them via the factory cache.
It converts a VM to the gRPC format and transports it when it is
requested by a client.
Factory `grpccache` is the VMCache client. It requests a gRPC-format
VM and converts it back to a VM. If the VMCache function is enabled,
`kata-runtime` requests a VM from factory `grpccache` when it creates
a new sandbox.

### How is this different from VM templating

@@ -40,7 +40,7 @@ Then you can create a VM templating for later usage by calling:
```
$ sudo kata-runtime factory init
```
and purge it by pressing `Ctrl-C`.

### Limitations
* Cannot work with VM templating.
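Enabling VMCache is done in the runtime configuration; a minimal sketch of the relevant `configuration.toml` fragment (treat the key name and value as assumptions and check your installed configuration):

```toml
[factory]
# Number of VMs the VMCache server keeps cached; 0 disables VMCache (assumed key)
vm_cache_number = 3
```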
@@ -39,10 +39,11 @@ in a system configured to run Kata Containers.
> updated as new [releases](../Releases.md) are made available.

### Automatic Installation

[Use `kata-manager`](installing-with-kata-manager.md) to automatically install Kata packages.

### Scripted Installation
[Use `kata-doc-to-script`](installing-with-kata-doc-to-script.md) to generate installation scripts that can be reviewed before they are executed.

### Manual Installation
Manual installation instructions are available for [these distributions](#supported-distributions) and document how to:

@@ -80,7 +81,7 @@ Manual installation instructions are available for [these distributions](#suppor

[](https://snapcraft.io/kata-containers)

[Use snap](snap-installation-guide.md) to install Kata Containers from https://snapcraft.io.

#### Supported Distributions
|Distro specific installation instructions | Versions |
@@ -104,7 +104,7 @@ This command will produce output similar to the following:
```
]
```

Launch the EC2 instance and pick up the `INSTANCEID`:

```bash
$ aws ec2 run-instances --image-id ami-03d5270fcb641f79b --count 1 --instance-type i3.metal --key-name MyKeyPair --associate-public-ip-address > /tmp/aws.json
```
@@ -27,9 +27,9 @@

2. Configure Docker to use Kata Containers by default with ONE of the following methods:

a. `sysVinit`

- with `sysVinit`, the Docker config is stored in `/etc/default/docker`; edit the options similar to the following:

```sh
$ sudo sh -c "echo '# specify docker runtime for kata-containers
```

@@ -64,7 +64,7 @@ c. Docker `daemon.json`

3. Restart the Docker systemd service with one of the following (depending on init choice):

a. `sysVinit`

```sh
$ sudo /etc/init.d/docker stop
```
@@ -18,17 +18,19 @@ $ gcloud info || { echo "ERROR: no Google Cloud SDK"; exit 1; }

VM images on GCE are grouped into families under projects. Officially supported images are automatically discoverable with `gcloud compute images list`. That command produces a list similar to the following (likely with different image names):

```bash
$ gcloud compute images list
NAME                              PROJECT          FAMILY           DEPRECATED  STATUS
centos-7-v20180523                centos-cloud     centos-7                     READY
coreos-stable-1745-5-0-v20180531  coreos-cloud     coreos-stable                READY
cos-beta-67-10575-45-0            cos-cloud        cos-beta                     READY
cos-stable-66-10452-89-0          cos-cloud        cos-stable                   READY
debian-9-stretch-v20180510        debian-cloud     debian-9                     READY
rhel-7-v20180522                  rhel-cloud       rhel-7                       READY
sles-11-sp4-v20180523             suse-cloud       sles-11                      READY
ubuntu-1604-xenial-v20180522      ubuntu-os-cloud  ubuntu-1604-lts              READY
ubuntu-1804-bionic-v20180522      ubuntu-os-cloud  ubuntu-1804-lts              READY
```

Each distribution has its own project, and each project can host images for multiple versions of the distribution, typically grouped into families. We recommend you select images by project and family, rather than by name. This ensures any scripts or other automation always works with a non-deprecated image, including security updates, updates to GCE-specific scripts, etc.
@@ -50,21 +52,23 @@ $ gcloud compute images create \

If successful, `gcloud` reports that the image was created. Verify that the image has the nested virtualization license with `gcloud compute images describe $IMAGE_NAME`. This produces output like the following (some fields have been removed for clarity and to redact personal info):

```yaml
diskSizeGb: '10'
kind: compute#image
licenseCodes:
- '1002001'
- '5926592092274602096'
licenses:
- https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx
- https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/licenses/ubuntu-1804-lts
name: ubuntu-1804-lts-nested
sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20180522
sourceImageId: '3280575157699667619'
sourceType: RAW
status: READY
```

The primary criterion of interest here is the presence of the `enable-vmx` license. Without that license, Kata does not work. The presence of that license instructs the Google Compute Engine hypervisor to enable Intel's VT-x instructions in virtual machines created from the image. Note that nested virtualization is only available in VMs running on Intel Haswell or later CPU micro-architectures.

### Verify VMX is Available
@@ -112,17 +116,18 @@ $ gcloud compute images create \

The result is an image that includes any changes made to the `kata-testing` instance as well as the `enable-vmx` flag. Verify this with `gcloud compute images describe kata-base`. The result, which omits some fields for clarity, should be similar to the following:

```yaml
diskSizeGb: '10'
kind: compute#image
licenseCodes:
- '1002001'
- '5926592092274602096'
licenses:
- https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx
- https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/licenses/ubuntu-1804-lts
name: kata-base
selfLink: https://www.googleapis.com/compute/v1/projects/my-kata-project/global/images/kata-base
sourceDisk: https://www.googleapis.com/compute/v1/projects/my-kata-project/zones/us-west1-a/disks/kata-testing
sourceType: RAW
status: READY
```
@@ -55,7 +55,7 @@ line.
## Install and configure Kata Containers

To use this feature, you need Kata version 1.3.0 or above.
Follow the [Kata Containers setup instructions](https://github.com/kata-containers/documentation/blob/master/install/README.md)
to install the latest version of Kata.

In order to pass a GPU to a Kata Container, you need to enable the `hotplug_vfio_on_root_bus`

@@ -66,7 +66,7 @@ $ sudo sed -i -e 's/^# *\(hotplug_vfio_on_root_bus\).*=.*$/\1 = true/g' /usr/sha

Make sure you are using the `pc` machine type by verifying `machine_type = "pc"` is
set in the `configuration.toml`.

## Build Kata Containers kernel with GPU support
@@ -97,7 +97,7 @@ Use the following steps to pass an Intel Graphics device in GVT-d mode with Kata

Run the previous command to determine the BDF for the GPU device on the host.<br/>
From the previous output, PCI address `0000:00:02.0` is assigned to the hardware GPU device.<br/>
This BDF is used later to unbind the GPU device from the host.<br/>
`8086 1616` is the device ID of the hardware GPU device. It is used later to
rebind the GPU device to the `vfio-pci` driver.

@@ -219,13 +219,13 @@ Use the following steps to pass an Intel Graphics device in GVT-g mode to a Kata

3. Create a VGPU:

* Generate a UUID:

```
$ gpu_uuid=$(uuid)
```

* Write the UUID to the `create` file under the chosen `mdev` type:

```
$ echo ${gpu_uuid} | sudo tee /sys/devices/pci0000:00/0000:00:02.0/mdev_supported_types/i915-GVTg_V4_8/create
```
@@ -20,7 +20,7 @@ SR-IOV for network based devices.
To create a network with associated VFs, which can be passed to
Kata Containers, you must install an SR-IOV Docker plugin. The
created network is based on a physical function (PF) device. The network can
create `n` containers, where `n` is the number of VFs associated with the
Physical Function (PF).

To install the plugin, follow the [plugin installation instructions](https://github.com/clearcontainers/sriov).

@@ -242,7 +242,7 @@ set the number of VFs for a physical device just once.
```
63
```
The previous commands show how many VFs you can create. The `sriov_totalvfs`
file under `sysfs` for a PCI device specifies the total number of VFs that you
can create.

4. Create the VFs:
@@ -293,7 +293,7 @@ The following example launches a Kata Containers container using SR-IOV:
```
ee2e5a594f9e4d3796eda972f3b46e52342aea04cbae8e5eac9b2dd6ff37b067
```

The previous commands create the required SR-IOV Docker network, subnet, `vlanid`,
and physical interface.

3. Start containers and test their connectivity:

@@ -304,7 +304,7 @@ The following example launches a Kata Containers container using SR-IOV:

The previous example starts a container making use of SR-IOV.
If two machines with SR-IOV enabled NICs are connected back-to-back and each
has a network with a matching `vlanid` created, use the following two commands
to test the connectivity:

Machine 1:
@@ -6,19 +6,19 @@ extensible framework that provides out-of-the-box production quality
switch and router functionality. VPP is a high-performance packet-processing
stack that can run on commodity CPUs. Enabling VPP with DPDK support can
yield significant performance improvements over a Linux\* bridge providing a
switch with DPDK VHOST-USER ports.

For more information about VPP visit their [wiki](https://wiki.fd.io/view/VPP).

## Install and configure Kata Containers

Follow the [Kata Containers setup instructions](https://github.com/kata-containers/documentation/wiki/Developer-Guide).

In order to make use of VHOST-USER based interfaces, the container needs to be backed
by huge pages. `HugePages` support is required for the large memory pool allocation used for
DPDK packet buffers. This is a feature which must be configured within the Linux kernel. See
[the DPDK documentation](https://doc.dpdk.org/guides/linux_gsg/sys_reqs.html#use-of-hugepages-in-the-linux-environment)
for details on how to enable it for the host. After enabling huge pages support on the host system,
update the Kata configuration to enable huge page support in the guest kernel:
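A minimal sketch of the relevant `configuration.toml` fragment (the key name reflects the Kata runtime option for guest huge pages; treat it as an assumption and check your installed configuration):

```toml
# Enable huge pages for the guest VM memory (assumed key name)
enable_hugepages = true
```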
@@ -15,8 +15,8 @@ Currently, the instructions are based on the following links:

## Install Git to use with DevStack

```sh
$ sudo apt install git
```

## Setup OpenStack DevStack
@@ -24,31 +24,32 @@ The following commands will sync DevStack from GitHub, create your
`local.conf` file, assign your host IP to this file, enable Clear
Containers, start DevStack, and set the environment variables to use
`zun` on the command line.

```sh
$ sudo mkdir -p /opt/stack
$ sudo chown $USER /opt/stack
$ git clone https://github.com/openstack-dev/devstack /opt/stack/devstack
$ HOST_IP="$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/')"
$ git clone https://github.com/openstack/zun /opt/stack/zun
$ cat /opt/stack/zun/devstack/local.conf.sample \
    | sed "s/HOST_IP=.*/HOST_IP=$HOST_IP/" \
    > /opt/stack/devstack/local.conf
$ sed -i "s/KURYR_CAPABILITY_SCOPE=.*/KURYR_CAPABILITY_SCOPE=local/" /opt/stack/devstack/local.conf
$ echo "ENABLE_CLEAR_CONTAINER=true" >> /opt/stack/devstack/local.conf
$ echo "enable_plugin zun-ui https://git.openstack.org/openstack/zun-ui" >> /opt/stack/devstack/local.conf
$ /opt/stack/devstack/stack.sh
$ source /opt/stack/devstack/openrc admin admin
```
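The `HOST_IP` pipeline above is easy to sanity-check against canned `ip addr` output; a small sketch (the interface name and address are made up for illustration):

```shell
# Hypothetical `ip addr` output for one interface in state UP
sample='2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP group default
    link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.10/24 brd 192.168.1.255 scope global eth0'

# Same pipeline as in the DevStack setup: take the line two below the match,
# grab its second field, and strip the /24 prefix length
HOST_IP="$(printf '%s\n' "$sample" | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/')"
echo "$HOST_IP"
```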
The previous commands start OpenStack DevStack with Zun support. To make sure
everything installed correctly and is working, you can test it using `runc`
as shown by the following commands:

```sh
$ zun run --name test cirros ping -c 4 8.8.8.8
$ zun list
$ zun logs test
$ zun delete test
```
## Install Kata Containers

@@ -61,29 +62,28 @@ to install the Kata Containers components.

The following commands replace the Clear Containers 2.x runtime, set up with
DevStack, with Kata Containers:

```sh
$ sudo sed -i 's/"cor"/"kata-runtime"/' /etc/docker/daemon.json
$ sudo sed -i 's/"\/usr\/bin\/cc-oci-runtime"/"\/usr\/bin\/kata-runtime"/' /etc/docker/daemon.json
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
## Test that everything works in both Docker and OpenStack Zun

```sh
$ sudo docker run -ti --runtime kata-runtime busybox sh
$ zun run --name kata --runtime kata-runtime cirros ping -c 4 8.8.8.8
$ zun list
$ zun logs kata
$ zun delete kata
```
## Stop DevStack and clean up system (Optional)

```sh
$ /opt/stack/devstack/unstack.sh
$ /opt/stack/devstack/clean.sh
```
## Restart DevStack and reset CC 2.x runtime to `kata-runtime`

@@ -91,33 +91,33 @@ zun delete kata

Run the following commands if you already set up Kata Containers and want to
restart DevStack:

```sh
$ /opt/stack/devstack/unstack.sh
$ /opt/stack/devstack/clean.sh
$ /opt/stack/devstack/stack.sh
$ source /opt/stack/devstack/openrc admin admin
$ sudo sed -i 's/"cor"/"kata-runtime"/' /etc/docker/daemon.json
$ sudo sed -i 's/"\/usr\/bin\/cc-oci-runtime"/"\/usr\/bin\/kata-runtime"/' /etc/docker/daemon.json
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
![alt text](images/zun-images-kata-1.png "zun images kata")

Figure 1: Create a BusyBox container image

![alt text](images/zun-create-container-kata-2.png "zun run kata")

Figure 2: Select `kata-runtime` to use

![alt text](images/zun-list-containers-kata-3.png "zun list kata")

Figure 3: Two BusyBox containers successfully launched

![alt text](images/zun-one-container-kata-4.png "zun kata ping")

Figure 4: Test connectivity between Kata Containers

![alt text](images/zun-cli-kata-5.png "zun cli kata")

Figure 5: CLI for Zun