Merge pull request #7205 from stevenhorsman/CCv0-merge-28th-june

CCv0: Merge main into CCv0 branch
This commit is contained in:
Steve Horsman
2023-07-03 19:17:15 +01:00
committed by GitHub
583 changed files with 246252 additions and 382 deletions

View File

@@ -3,14 +3,25 @@ on:
pull_request_target:
branches:
- 'main'
types:
# Adding 'labeled' to the list of activity types that trigger this event
# (default: opened, synchronize, reopened) so that we can run this
# workflow when the 'ok-to-test' label is added.
# Reference: https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target
- opened
- synchronize
- reopened
- labeled
jobs:
build-kata-static-tarball-amd64:
if: ${{ contains(github.event.pull_request.labels.*.name, 'ok-to-test') }}
uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml
with:
tarball-suffix: -${{ github.event.pull_request.number}}-${{ github.event.pull_request.head.sha }}
publish-kata-deploy-payload-amd64:
if: ${{ contains(github.event.pull_request.labels.*.name, 'ok-to-test') }}
needs: build-kata-static-tarball-amd64
uses: ./.github/workflows/publish-kata-deploy-payload-amd64.yaml
with:
@@ -21,6 +32,7 @@ jobs:
secrets: inherit
run-k8s-tests-on-aks:
if: ${{ contains(github.event.pull_request.labels.*.name, 'ok-to-test') }}
needs: publish-kata-deploy-payload-amd64
uses: ./.github/workflows/run-k8s-tests-on-aks.yaml
with:
@@ -30,6 +42,7 @@ jobs:
secrets: inherit
run-k8s-tests-on-sev:
if: ${{ contains(github.event.pull_request.labels.*.name, 'ok-to-test') }}
needs: publish-kata-deploy-payload-amd64
uses: ./.github/workflows/run-k8s-tests-on-sev.yaml
with:
@@ -38,6 +51,7 @@ jobs:
tag: ${{ github.event.pull_request.number }}-${{ github.event.pull_request.head.sha }}-amd64
run-k8s-tests-on-snp:
if: ${{ contains(github.event.pull_request.labels.*.name, 'ok-to-test') }}
needs: publish-kata-deploy-payload-amd64
uses: ./.github/workflows/run-k8s-tests-on-snp.yaml
with:
@@ -46,6 +60,7 @@ jobs:
tag: ${{ github.event.pull_request.number }}-${{ github.event.pull_request.head.sha }}-amd64
run-k8s-tests-on-tdx:
if: ${{ contains(github.event.pull_request.labels.*.name, 'ok-to-test') }}
needs: publish-kata-deploy-payload-amd64
uses: ./.github/workflows/run-k8s-tests-on-tdx.yaml
with:
@@ -54,5 +69,8 @@ jobs:
tag: ${{ github.event.pull_request.number }}-${{ github.event.pull_request.head.sha }}-amd64
run-metrics-tests:
if: ${{ contains(github.event.pull_request.labels.*.name, 'ok-to-test') }}
needs: build-kata-static-tarball-amd64
uses: ./.github/workflows/run-launchtimes-metrics.yaml
uses: ./.github/workflows/run-metrics.yaml
with:
tarball-suffix: -${{ github.event.pull_request.number}}-${{ github.event.pull_request.head.sha }}

View File

@@ -1,9 +1,13 @@
name: CI | Run launch-times metrics
name: CI | Run test metrics
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
jobs:
launch-times-tests:
run-metrics:
runs-on: metrics
env:
GOPATH: ${{ github.workspace }}
@@ -12,6 +16,15 @@ jobs:
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: get-kata-tarball
uses: actions/download-artifact@v3
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/metrics/gha-run.sh install-kata kata-artifacts
- name: run launch times on qemu
run: bash tests/metrics/gha-run.sh run-test-launchtimes-qemu

View File

@@ -147,6 +147,10 @@ The table below lists the remaining parts of the project:
Kata Containers is now
[available natively for most distributions](docs/install/README.md#packaged-installation-methods).
## Metrics tests
See the [metrics documentation](tests/metrics/README.md).
## Glossary of Terms
See the [glossary of terms](https://github.com/kata-containers/kata-containers/wiki/Glossary) related to Kata Containers.

View File

@@ -20,13 +20,13 @@ The JSON file `mountinfo.json` placed in a sub-path `/kubelet/kata-test-vol-001/
And the full path looks like: `/run/kata-containers/shared/direct-volumes/kubelet/kata-test-vol-001/volume001`. However, for security reasons, it is
encoded as `/run/kata-containers/shared/direct-volumes/L2t1YmVsZXQva2F0YS10ZXN0LXZvbC0wMDEvdm9sdW1lMDAx`.
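The encoded directory name is simply the standard base64 encoding of the volume's source path. A minimal std-only sketch (the encoder here is illustrative, not the runtime's actual code):

```rust
// The encoded directory name under `direct-volumes` is the base64 of the
// volume's source path. Minimal base64 encoder, standard alphabet.
const TABLE: &[u8; 64] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

fn base64_encode(data: &[u8]) -> String {
    let mut out = String::new();
    for chunk in data.chunks(3) {
        let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
        let n = (u32::from(b[0]) << 16) | (u32::from(b[1]) << 8) | u32::from(b[2]);
        out.push(TABLE[(n >> 18) as usize & 63] as char);
        out.push(TABLE[(n >> 12) as usize & 63] as char);
        out.push(if chunk.len() > 1 { TABLE[(n >> 6) as usize & 63] as char } else { '=' });
        out.push(if chunk.len() > 2 { TABLE[(n & 63) as usize] as char } else { '=' });
    }
    out
}

fn main() {
    // Maps the source path to the encoded directory name shown above.
    println!("{}", base64_encode(b"/kubelet/kata-test-vol-001/volume001"));
}
```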
Finally, when running a Kata Containers witch `ctr run --mount type=X, src=Y, dst=Z,,options=rbind:rw`, the `type=X` should be specified a proprietary type specifically designed for some kind of volume.
Finally, when running a Kata Container with `ctr run --mount type=X,src=Y,dst=Z,options=rbind:rw`, `type=X` should be specified as a proprietary type designed for the given kind of volume.
Now, supported types:
- `directvol` for direct volume
- `spdkvol` for SPDK volume (TBD)
- `vfiovol` for VFIO device based volume (TBD)
- `vfiovol` for VFIO device based volume
- `spdkvol` for SPDK/vhost-user based volume
## Setup Device and Run Kata Containers
@@ -55,7 +55,7 @@ $ sudo mkfs.ext4 /tmp/stor/rawdisk01.20g
```
```bash
$ sudo ./kata-ctl direct-volume add /kubelet/kata-direct-vol-002/directvol002 "{\"device\": \"/tmp/stor/rawdisk01.20g\", \"volume_type\": \"directvol\", \"fs_type\": \"ext4\", \"metadata\":"{}", \"options\": []}"
$ sudo kata-ctl direct-volume add /kubelet/kata-direct-vol-002/directvol002 "{\"device\": \"/tmp/stor/rawdisk01.20g\", \"volume_type\": \"directvol\", \"fs_type\": \"ext4\", \"metadata\":"{}", \"options\": []}"
$# /kubelet/kata-direct-vol-002/directvol002 <==> /run/kata-containers/shared/direct-volumes/W1lMa2F0ZXQva2F0YS10a2F0DAxvbC0wMDEvdm9sdW1lMDAx
$ cat W1lMa2F0ZXQva2F0YS10a2F0DAxvbC0wMDEvdm9sdW1lMDAx/mountInfo.json
{"volume_type":"directvol","device":"/tmp/stor/rawdisk01.20g","fs_type":"ext4","metadata":{},"options":[]}
@@ -65,14 +65,162 @@ $ cat W1lMa2F0ZXQva2F0YS10a2F0DAxvbC0wMDEvdm9sdW1lMDAx/mountInfo.json
```bash
$ # type=directvol,src=/kubelet/kata-direct-vol-002/directvol002,dst=/disk002,options=rbind:rw
$sudo ctr run -t --rm --runtime io.containerd.kata.v2 --mount type=directvol,src=/kubelet/kata-direct-vol-002/directvol002,dst=/disk002,options=rbind:rw "$image" kata-direct-vol-xx05302045 /bin/bash
$ sudo ctr run -t --rm --runtime io.containerd.kata.v2 --mount type=directvol,src=/kubelet/kata-direct-vol-002/directvol002,dst=/disk002,options=rbind:rw "$image" kata-direct-vol-xx05302045 /bin/bash
```
### SPDK Device Based Volume
### VFIO Device Based Block Volume
TBD
#### create VFIO device based backend storage
### VFIO Device Based Volume
> **Tip:** It only supports `vfio-pci` based PCI device passthrough mode.
TBD
In this scenario, the device's host kernel driver is replaced by `vfio-pci`, and an IOMMU group ID is generated.
Either the device's BDF or its VFIO IOMMU group ID under `/dev/vfio/` may be used as the "device" value in `mountinfo.json`.
```bash
$ lspci -nn -k -s 45:00.1
45:00.1 SCSI storage controller
...
Kernel driver in use: vfio-pci
...
$ ls /dev/vfio/110
/dev/vfio/110
$ ls /sys/kernel/iommu_groups/110/devices/
0000:45:00.1
```
#### setup VFIO device for kata-containers
First, configure the `mountinfo.json`, as below:
- (1) device with `BB:DD:F`
```json
{
"device": "45:00.1",
"volume_type": "vfiovol",
"fs_type": "ext4",
"metadata":"{}",
"options": []
}
```
- (2) device with `DDDD:BB:DD:F`
```json
{
"device": "0000:45:00.1",
"volume_type": "vfiovol",
"fs_type": "ext4",
"metadata":"{}",
"options": []
}
```
- (3) device with `/dev/vfio/X`
```json
{
"device": "/dev/vfio/110",
"volume_type": "vfiovol",
"fs_type": "ext4",
"metadata":"{}",
"options": []
}
```
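The three accepted `device` forms above differ only in how the PCI address is written. A sketch of how a short BDF might be normalized to the fully qualified `DDDD:BB:DD.F` form (the runtime has a `normalize_device_bdf` helper; this implementation is an assumption for illustration):

```rust
// Normalize a PCI BDF: "BB:DD.F" gains the default "0000" domain,
// while an already fully qualified "DDDD:BB:DD.F" is returned unchanged.
fn normalize_device_bdf(bdf: &str) -> String {
    if bdf.matches(':').count() == 1 {
        format!("0000:{}", bdf)
    } else {
        bdf.to_string()
    }
}

fn main() {
    println!("{}", normalize_device_bdf("45:00.1"));      // 0000:45:00.1
    println!("{}", normalize_device_bdf("0000:45:00.1")); // 0000:45:00.1
}
```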
Second, run Kata Containers, taking device `/dev/vfio/110` as an example:
```bash
$ sudo kata-ctl direct-volume add /kubelet/kata-vfio-vol-003/vfiovol003 "{\"device\": \"/dev/vfio/110\", \"volume_type\": \"vfiovol\", \"fs_type\": \"ext4\", \"metadata\":"{}", \"options\": []}"
$ # /kubelet/kata-vfio-vol-003/vfiovol003 <==> /run/kata-containers/shared/direct-volumes/F0va22F0ZvaS12F0YS10a2F0DAxvbC0F0ZXvdm9sdF0Z0YSx
$ cat F0va22F0ZvaS12F0YS10a2F0DAxvbC0F0ZXvdm9sdF0Z0YSx/mountInfo.json
{"volume_type":"vfiovol","device":"/dev/vfio/110","fs_type":"ext4","metadata":{},"options":[]}
```
#### Run a Kata container with VFIO block device based volume
```bash
$ # type=vfiovol,src=/kubelet/kata-vfio-vol-003/vfiovol003,dst=/disk003,options=rbind:rw
$ sudo ctr run -t --rm --runtime io.containerd.kata.v2 --mount type=vfiovol,src=/kubelet/kata-vfio-vol-003/vfiovol003,dst=/disk003,options=rbind:rw "$image" kata-vfio-vol-xx05302245 /bin/bash
```
### SPDK Device Based Block Volume
For SPDK vhost-user devices in runtime-rs, unlike the Go runtime, there is no need to `mknod` a device node under `/dev/` any more.
Using `kata-ctl direct-volume add ..` to create a mount info config is enough.
#### Run SPDK vhost target and Expose vhost block device
As an example, run an SPDK vhost target and obtain a vhost-user block controller.
First, run the SPDK vhost target:
> **Tip:** If the `vfio-pci` driver is supported, you can run SPDK with `DRIVER_OVERRIDE=vfio-pci`.
> Otherwise, just run without it: `sudo HUGEMEM=4096 ./scripts/setup.sh`.
```bash
$ SPDK_DEVEL=/xx/spdk
$ VHU_UDS_PATH=/tmp/vhu-targets
$ RAW_DISKS=/xx/rawdisks
$ # Reset first
$ ${SPDK_DEVEL}/scripts/setup.sh reset
$ sudo sysctl -w vm.nr_hugepages=2048
$ #4G Huge Memory for spdk
$ sudo HUGEMEM=4096 DRIVER_OVERRIDE=vfio-pci ${SPDK_DEVEL}/scripts/setup.sh
$ sudo ${SPDK_DEVEL}/build/bin/spdk_tgt -S $VHU_UDS_PATH -s 1024 -m 0x3 &
```
Second, create a vhost controller:
```bash
$ sudo dd if=/dev/zero of=${RAW_DISKS}/rawdisk01.20g bs=1M count=20480
$ sudo ${SPDK_DEVEL}/scripts/rpc.py bdev_aio_create ${RAW_DISKS}/rawdisk01.20g vhu-rawdisk01.20g 512
$ sudo ${SPDK_DEVEL}/scripts/rpc.py vhost_create_blk_controller vhost-blk-rawdisk01.sock vhu-rawdisk01.20g
```
Here, a vhost controller `vhost-blk-rawdisk01.sock` is created; it will
be passed to the hypervisor, such as Dragonball, Cloud Hypervisor, Firecracker or QEMU.
#### setup vhost-user block device for kata-containers
First, `mkdir` a sub-path `kubelet/kata-test-vol-001/` under `/run/kata-containers/shared/direct-volumes/`.
Second, fill in the fields of `mountinfo.json`, as below:
```json
{
"device": "/tmp/vhu-targets/vhost-blk-rawdisk01.sock",
"volume_type": "spdkvol",
"fs_type": "ext4",
"metadata":"{}",
"options": []
}
```
Third, use `kata-ctl direct-volume add` to generate the `mountinfo.json`, then run a Kata container with `--mount`.
```bash
$ # kata-ctl direct-volume add
$ sudo kata-ctl direct-volume add /kubelet/kata-test-vol-001/volume001 "{\"device\": \"/tmp/vhu-targets/vhost-blk-rawdisk01.sock\", \"volume_type\":\"spdkvol\", \"fs_type\": \"ext4\", \"metadata\":"{}", \"options\": []}"
$ # /kubelet/kata-test-vol-001/volume001 <==> /run/kata-containers/shared/direct-volumes/L2t1YmVsZXQva2F0YS10ZXN0LXZvbC0wMDEvdm9sdW1lMDAx
$ cat L2t1YmVsZXQva2F0YS10ZXN0LXZvbC0wMDEvdm9sdW1lMDAx/mountInfo.json
$ {"volume_type":"spdkvol","device":"/tmp/vhu-targets/vhost-blk-rawdisk01.sock","fs_type":"ext4","metadata":{},"options":[]}
```
As `/run/kata-containers/shared/direct-volumes/` is a fixed path, we can run a Kata pod with `--mount`, setting
`src` to the sub-path. The `--mount` argument looks like: `--mount type=spdkvol,src=/kubelet/kata-test-vol-001/volume001,dst=/disk001`.
#### Run a Kata container with SPDK vhost-user block device
In this case, with `ctr run --mount type=X,src=source,dst=dest`, `X` is set to `spdkvol`, a proprietary type specifically designed for SPDK volumes.
```bash
$ # ctr run with --mount type=spdkvol,src=/kubelet/kata-test-vol-001/volume001,dst=/disk001
$ sudo ctr run -t --rm --runtime io.containerd.kata.v2 --mount type=spdkvol,src=/kubelet/kata-test-vol-001/volume001,dst=/disk001,options=rbind:rw "$image" kata-spdk-vol-xx0530 /bin/bash
```

View File

@@ -114,6 +114,8 @@ pub enum BlockDeviceType {
/// SPOOL is a reliable NVMe virtualization system for the cloud environment.
/// You can learn more about SPOOL here: https://www.usenix.org/conference/atc20/presentation/xue
Spool,
/// The standard vhost-user-blk based device such as Spdk device.
Spdk,
/// Local disk/file based low level device.
RawBlock,
}
@@ -124,6 +126,8 @@ impl BlockDeviceType {
// SPOOL path should be started with "spool", e.g. "spool:/device1"
if path.starts_with("spool:/") {
BlockDeviceType::Spool
} else if path.starts_with("spdk:/") {
BlockDeviceType::Spdk
} else {
BlockDeviceType::RawBlock
}
@@ -400,6 +404,10 @@ impl BlockDeviceMgr {
BlockDeviceError::DeviceManager(e)
})
}
BlockDeviceType::Spool | BlockDeviceType::Spdk => {
// TBD
todo!()
}
_ => Err(BlockDeviceError::InvalidBlockDeviceType),
}
}
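The prefix-based dispatch above can be exercised standalone; a self-contained sketch mirroring the `starts_with` logic:

```rust
// The device path's scheme prefix selects the backend type,
// falling back to a raw block device for plain paths.
#[derive(Debug, PartialEq)]
enum BlockDeviceType {
    Spool,
    Spdk,
    RawBlock,
}

fn block_device_type(path: &str) -> BlockDeviceType {
    if path.starts_with("spool:/") {
        BlockDeviceType::Spool
    } else if path.starts_with("spdk:/") {
        BlockDeviceType::Spdk
    } else {
        BlockDeviceType::RawBlock
    }
}

fn main() {
    assert_eq!(block_device_type("spdk:/device1"), BlockDeviceType::Spdk);
    assert_eq!(block_device_type("/dev/vda"), BlockDeviceType::RawBlock);
    println!("ok");
}
```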

View File

@@ -79,16 +79,17 @@ impl ConsoleManager {
.unwrap()
.set_output_stream(Some(Box::new(std::io::stdout())));
let stdin_handle = std::io::stdin();
stdin_handle
.lock()
.set_raw_mode()
.map_err(|e| DeviceMgrError::ConsoleManager(ConsoleManagerError::StdinHandle(e)))?;
stdin_handle
.lock()
.set_non_block(true)
.map_err(ConsoleManagerError::StdinHandle)
.map_err(DeviceMgrError::ConsoleManager)?;
{
let guard = stdin_handle.lock();
guard
.set_raw_mode()
.map_err(ConsoleManagerError::StdinHandle)
.map_err(DeviceMgrError::ConsoleManager)?;
guard
.set_non_block(true)
.map_err(ConsoleManagerError::StdinHandle)
.map_err(DeviceMgrError::ConsoleManager)?;
}
let handler = ConsoleEpollHandler::new(device, Some(stdin_handle), None, &self.logger);
self.subscriber_id = Some(self.epoll_mgr.add_subscriber(Box::new(handler)));
self.backend = Some(Backend::StdinHandle(std::io::stdin()));

View File

@@ -1354,9 +1354,11 @@ dependencies = [
"go-flag",
"kata-sys-util",
"kata-types",
"lazy_static",
"libc",
"logging",
"nix 0.24.3",
"path-clean",
"persist",
"rand 0.8.5",
"rust-ini",
@@ -2124,6 +2126,12 @@ version = "1.0.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d01a5bd0424d00070b0098dd17ebca6f961a959dead1dbcbbbc1d1cd8d3deeba"
[[package]]
name = "path-clean"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "17359afc20d7ab31fdb42bb844c8b3bb1dabd7dcf7e68428492da7f16966fcef"
[[package]]
name = "percent-encoding"
version = "2.2.0"

View File

@@ -26,6 +26,8 @@ thiserror = "1.0"
tokio = { version = "1.28.1", features = ["sync", "fs"] }
vmm-sys-util = "0.11.0"
rand = "0.8.4"
path-clean = "1.0.1"
lazy_static = "1.4"
kata-sys-util = { path = "../../../libs/kata-sys-util" }
kata-types = { path = "../../../libs/kata-types" }

View File

@@ -10,33 +10,49 @@ use anyhow::{anyhow, Context, Result};
use kata_sys_util::rand::RandomBytes;
use tokio::sync::{Mutex, RwLock};
use super::{
util::{get_host_path, get_virt_drive_name},
Device, DeviceConfig, DeviceType,
};
use crate::{
BlockConfig, BlockDevice, Hypervisor, KATA_BLK_DEV_TYPE, KATA_MMIO_BLK_DEV_TYPE,
VIRTIO_BLOCK_MMIO, VIRTIO_BLOCK_PCI,
device::VhostUserBlkDevice, BlockConfig, BlockDevice, Hypervisor, VfioDevice, VhostUserConfig,
KATA_BLK_DEV_TYPE, KATA_MMIO_BLK_DEV_TYPE, VIRTIO_BLOCK_MMIO, VIRTIO_BLOCK_PCI,
};
use super::{
util::{get_host_path, get_virt_drive_name, DEVICE_TYPE_BLOCK},
Device, DeviceConfig, DeviceType,
};
pub type ArcMutexDevice = Arc<Mutex<dyn Device>>;
const DEVICE_TYPE_BLOCK: &str = "b";
/// block_index and released_block_index are used to search for an available block index
/// in the Sandbox.
///
/// @block_driver: the block driver to be used for block devices;
/// @block_index: defaults to 1, i.e. <vdb>;
/// @released_block_index: indexes released when block devices are removed.
#[derive(Clone, Debug, Default)]
struct SharedInfo {
block_driver: String,
block_index: u64,
released_block_index: Vec<u64>,
}
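`get_virt_drive_name` is assumed to follow the usual virtio drive naming, where index 0 maps to `vda` and index 1 to `vdb` (matching the comment above), with index 26 wrapping to `vdaa`. A hypothetical re-implementation of that mapping:

```rust
// Map a block index to a virtio drive name: 0 -> "vda", 1 -> "vdb",
// 25 -> "vdz", 26 -> "vdaa" (base-26 with a spreadsheet-style carry).
fn virt_drive_name(index: u64) -> String {
    let mut i = index as i64;
    let mut suffix = String::new();
    loop {
        suffix.insert(0, (b'a' + (i % 26) as u8) as char);
        i = i / 26 - 1;
        if i < 0 {
            break;
        }
    }
    format!("vd{}", suffix)
}

fn main() {
    assert_eq!(virt_drive_name(1), "vdb");
    println!("ok");
}
```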
impl SharedInfo {
fn new() -> Self {
async fn new(hypervisor: Arc<dyn Hypervisor>) -> Self {
// get hypervisor block driver
let block_driver = match hypervisor
.hypervisor_config()
.await
.blockdev_info
.block_device_driver
.as_str()
{
// convert the block driver to kata type
VIRTIO_BLOCK_MMIO => KATA_MMIO_BLK_DEV_TYPE.to_string(),
VIRTIO_BLOCK_PCI => KATA_BLK_DEV_TYPE.to_string(),
_ => "".to_string(),
};
SharedInfo {
block_driver,
block_index: 1,
released_block_index: vec![],
}
@@ -61,6 +77,7 @@ impl SharedInfo {
}
// Device manager will manage the lifecycle of sandbox device
#[derive(Debug)]
pub struct DeviceManager {
devices: HashMap<String, ArcMutexDevice>,
hypervisor: Arc<dyn Hypervisor>,
@@ -68,33 +85,50 @@ pub struct DeviceManager {
}
impl DeviceManager {
pub fn new(hypervisor: Arc<dyn Hypervisor>) -> Result<Self> {
pub async fn new(hypervisor: Arc<dyn Hypervisor>) -> Result<Self> {
let devices = HashMap::<String, ArcMutexDevice>::new();
Ok(DeviceManager {
devices,
hypervisor,
shared_info: SharedInfo::new(),
hypervisor: hypervisor.clone(),
shared_info: SharedInfo::new(hypervisor.clone()).await,
})
}
async fn try_add_device(&mut self, device_id: &str) -> Result<()> {
pub async fn try_add_device(&mut self, device_id: &str) -> Result<()> {
// find the device
let device = self
.devices
.get(device_id)
.context("failed to find device")?;
// attach device
let mut device_guard = device.lock().await;
// attach device
let result = device_guard.attach(self.hypervisor.as_ref()).await;
// handle attach error
if let Err(e) = result {
if let DeviceType::Block(device) = device_guard.get_device_info().await {
self.shared_info.release_device_index(device.config.index);
};
match device_guard.get_device_info().await {
DeviceType::Block(device) => {
self.shared_info.release_device_index(device.config.index);
}
DeviceType::Vfio(device) => {
// safe here:
// Only when the vfio dev_type is `b` is virt_path guaranteed to be Some(X),
// and only then must release_device_index run; otherwise, let it go.
if device.config.dev_type == DEVICE_TYPE_BLOCK {
self.shared_info
.release_device_index(device.config.virt_path.unwrap().0);
}
}
DeviceType::VhostUserBlk(device) => {
self.shared_info.release_device_index(device.config.index);
}
_ => {
debug!(sl!(), "no need to do release device index.");
}
}
drop(device_guard);
self.devices.remove(device_id);
return Err(e);
}
@@ -149,6 +183,16 @@ impl DeviceManager {
return Some(device_id.to_string());
}
}
DeviceType::Vfio(device) => {
if device.config.host_path == host_path {
return Some(device_id.to_string());
}
}
DeviceType::VhostUserBlk(device) => {
if device.config.socket_path == host_path {
return Some(device_id.to_string());
}
}
_ => {
// TODO: support find other device type
continue;
@@ -168,7 +212,7 @@ impl DeviceManager {
Some((current_index, virt_path_name))
} else {
// only dev_type is block, otherwise, it's useless.
// only dev_type is block, otherwise, it's None.
None
};
@@ -181,21 +225,47 @@ impl DeviceManager {
let device_id = self.new_device_id()?;
let dev: ArcMutexDevice = match device_config {
DeviceConfig::BlockCfg(config) => {
// try to find the device; if found, just return its id.
if let Some(device_matched_id) = self.find_device(config.path_on_host.clone()).await
{
return Ok(device_matched_id);
}
self.create_block_device(config, device_id.clone())
.await
.context("failed to create device")?
}
DeviceConfig::VfioCfg(config) => {
let mut vfio_dev_config = config.clone();
let dev_host_path = vfio_dev_config.host_path.clone();
if let Some(device_matched_id) = self.find_device(dev_host_path).await {
return Ok(device_matched_id);
}
let virt_path = self.get_dev_virt_path(vfio_dev_config.dev_type.as_str())?;
vfio_dev_config.virt_path = virt_path;
Arc::new(Mutex::new(VfioDevice::new(
device_id.clone(),
&vfio_dev_config,
)))
}
DeviceConfig::VhostUserBlkCfg(config) => {
// try to find the device; if found, just return its id.
if let Some(dev_id_matched) = self.find_device(config.path_on_host.clone()).await {
if let Some(dev_id_matched) = self.find_device(config.socket_path.clone()).await {
info!(
sl!(),
"device with host path:{:?} found. just return device id: {:?}",
config.path_on_host.clone(),
"vhost blk device with path:{:?} found. just return device id: {:?}",
config.socket_path.clone(),
dev_id_matched
);
return Ok(dev_id_matched);
}
self.create_block_device(config, device_id.clone())
self.create_vhost_blk_device(config, device_id.clone())
.await
.context("failed to create device")?
.context("failed to create vhost blk device")?
}
_ => {
return Err(anyhow!("invalid device type"));
@@ -208,30 +278,36 @@ impl DeviceManager {
Ok(device_id)
}
async fn create_vhost_blk_device(
&mut self,
config: &VhostUserConfig,
device_id: String,
) -> Result<ArcMutexDevice> {
let mut vhu_blk_config = config.clone();
vhu_blk_config.driver_option = self.shared_info.block_driver.clone();
// generate block device index and virt path
// safe here, Block device always has virt_path.
if let Some(virt_path) = self.get_dev_virt_path(DEVICE_TYPE_BLOCK)? {
vhu_blk_config.index = virt_path.0;
vhu_blk_config.virt_path = virt_path.1;
}
Ok(Arc::new(Mutex::new(VhostUserBlkDevice::new(
device_id,
vhu_blk_config,
))))
}
async fn create_block_device(
&mut self,
config: &BlockConfig,
device_id: String,
) -> Result<ArcMutexDevice> {
let mut block_config = config.clone();
// get hypervisor block driver
let block_driver = match self
.hypervisor
.hypervisor_config()
.await
.blockdev_info
.block_device_driver
.as_str()
{
// convert the block driver to kata type
VIRTIO_BLOCK_MMIO => KATA_MMIO_BLK_DEV_TYPE.to_string(),
VIRTIO_BLOCK_PCI => KATA_BLK_DEV_TYPE.to_string(),
_ => "".to_string(),
};
block_config.driver_option = block_driver;
block_config.driver_option = self.shared_info.block_driver.clone();
// generate block device index and virt path
// safe here, Block device always has virt_path.
// generate virt path
if let Some(virt_path) = self.get_dev_virt_path(DEVICE_TYPE_BLOCK)? {
block_config.index = virt_path.0;
block_config.virt_path = virt_path.1;
@@ -239,10 +315,10 @@ impl DeviceManager {
// if the path on host is empty, we need to get device host path from the device major and minor number
// Otherwise, it might be rawfile based block device, the host path is already passed from the runtime,
// so we don't need to do anything here
// so we don't need to do anything here.
if block_config.path_on_host.is_empty() {
block_config.path_on_host =
get_host_path(DEVICE_TYPE_BLOCK.to_owned(), config.major, config.minor)
get_host_path(DEVICE_TYPE_BLOCK, config.major, config.minor)
.context("failed to get host path")?;
}

View File

@@ -1,23 +1,216 @@
// Copyright (c) 2019-2022 Alibaba Cloud
// Copyright (c) 2019-2022 Ant Group
// Copyright (c) 2019-2023 Alibaba Cloud
// Copyright (c) 2019-2023 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
mod vfio;
mod vhost_user;
mod virtio_blk;
mod virtio_fs;
mod virtio_net;
mod virtio_vsock;
pub use vfio::{
bind_device_to_host, bind_device_to_vfio, get_host_guest_map, get_vfio_device, HostDevice,
VfioBusMode, VfioConfig, VfioDevice,
};
pub use virtio_blk::{
BlockConfig, BlockDevice, KATA_BLK_DEV_TYPE, KATA_MMIO_BLK_DEV_TYPE, VIRTIO_BLOCK_MMIO,
VIRTIO_BLOCK_PCI,
};
mod virtio_net;
pub use virtio_net::{Address, NetworkConfig, NetworkDevice};
mod vfio;
pub use vfio::{bind_device_to_host, bind_device_to_vfio, VfioBusMode, VfioConfig, VfioDevice};
mod virtio_fs;
pub use virtio_fs::{
ShareFsDevice, ShareFsDeviceConfig, ShareFsMountConfig, ShareFsMountDevice, ShareFsMountType,
ShareFsOperation,
};
mod virtio_vsock;
pub use virtio_net::{Address, NetworkConfig, NetworkDevice};
pub use virtio_vsock::{HybridVsockConfig, HybridVsockDevice, VsockConfig, VsockDevice};
pub mod vhost_user_blk;
pub use vhost_user::{VhostUserConfig, VhostUserDevice, VhostUserType};
use anyhow::{anyhow, Context, Result};
// Note:
// `PciSlot` and `PciPath` are re-implementations in Rust of `pcipath.go`:
//
// The PCI spec reserves 5 bits for the slot number (a.k.a. device
// number), giving slots 0..31
const PCI_SLOT_BITS: u32 = 5;
const MAX_PCI_SLOTS: u32 = (1 << PCI_SLOT_BITS) - 1;
// A PciSlot describes where a PCI device sits on a single bus
//
// This encapsulates the PCI slot number a.k.a device number, which is
// limited to a 5 bit value [0x00..0x1f] by the PCI specification
//
// To support multifunction devices, this will need to be extended
// to include the 3-bit PCI function number as well.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct PciSlot(pub u8);
impl PciSlot {
pub fn convert_from_string(s: &str) -> Result<PciSlot> {
if s.is_empty() || s.len() > 2 {
return Err(anyhow!("string given is invalid."));
}
let base = 16;
let n = u64::from_str_radix(s, base).context("convert string to number failed")?;
if n >> PCI_SLOT_BITS > 0 {
return Err(anyhow!(
"number {:?} exceeds MAX:{:?}, failed.",
n,
MAX_PCI_SLOTS
));
}
Ok(PciSlot(n as u8))
}
pub fn convert_from_u32(v: u32) -> Result<PciSlot> {
if v > MAX_PCI_SLOTS {
return Err(anyhow!("value {:?} exceeds MAX: {:?}", v, MAX_PCI_SLOTS));
}
Ok(PciSlot(v as u8))
}
pub fn convert_to_string(&self) -> String {
format!("{:02x}", self.0)
}
}
// A PciPath describes where a PCI sits in a PCI hierarchy.
//
// Consists of a list of PCI slots, giving the slot of each bridge
// that must be traversed from the PCI root to reach the device,
// followed by the slot of the device itself.
//
// When formatted into a string is written as "xx/.../yy/zz". Here,
// zz is the slot of the device on its PCI bridge, yy is the slot of
// the bridge on its parent bridge and so forth until xx is the slot
// of the "most upstream" bridge on the root bus.
//
// If a device is directly connected to the root bus, as is the case in
// lightweight hypervisors such as dragonball/firecracker/clh,
// its PciPath.slots will contain only one PciSlot.
#[derive(Clone, Debug, Default, PartialEq)]
pub struct PciPath {
// list of PCI slots
slots: Vec<PciSlot>,
}
impl PciPath {
// method to format the PciPath into a string
pub fn convert_to_string(&self) -> String {
self.slots
.iter()
.map(|pci_slot| format!("{:02x}", pci_slot.0))
.collect::<Vec<String>>()
.join("/")
}
// method to parse a PciPath from a string
pub fn convert_from_string(path: &str) -> Result<PciPath> {
if path.is_empty() {
return Err(anyhow!("path given is empty."));
}
let mut pci_slots: Vec<PciSlot> = Vec::new();
let slots: Vec<&str> = path.split('/').collect();
for slot in slots {
match PciSlot::convert_from_string(slot) {
Ok(s) => pci_slots.push(s),
Err(e) => return Err(anyhow!("slot is invalid with: {:?}", e)),
}
}
Ok(PciPath { slots: pci_slots })
}
pub fn from_pci_slots(slots: Vec<PciSlot>) -> Option<PciPath> {
if slots.is_empty() {
return None;
}
Some(PciPath { slots })
}
// device_slot to get the slot of the device on its PCI bridge
pub fn get_device_slot(&self) -> Option<PciSlot> {
self.slots.last().cloned()
}
// root_slot to get the slot of the "most upstream" bridge on the root bus
pub fn get_root_slot(&self) -> Option<PciSlot> {
self.slots.first().cloned()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_pci_slot() {
// min
let pci_slot_01 = PciSlot::convert_from_string("00");
assert!(pci_slot_01.is_ok());
// max
let pci_slot_02 = PciSlot::convert_from_string("1f");
assert!(pci_slot_02.is_ok());
// exceed
let pci_slot_03 = PciSlot::convert_from_string("20");
assert!(pci_slot_03.is_err());
// valid number
let pci_slot_04 = PciSlot::convert_from_u32(1_u32);
assert!(pci_slot_04.is_ok());
assert_eq!(pci_slot_04.as_ref().unwrap().0, 1_u8);
let pci_slot_str = pci_slot_04.as_ref().unwrap().convert_to_string();
assert_eq!(pci_slot_str, format!("{:02x}", pci_slot_04.unwrap().0));
// max number
let pci_slot_05 = PciSlot::convert_from_u32(31_u32);
assert!(pci_slot_05.is_ok());
assert_eq!(pci_slot_05.unwrap().0, 31_u8);
// exceed and error
let pci_slot_06 = PciSlot::convert_from_u32(32_u32);
assert!(pci_slot_06.is_err());
}
#[test]
fn test_pci_path() {
let pci_path_0 = PciPath::convert_from_string("01/0a/05");
assert!(pci_path_0.is_ok());
let pci_path_unwrap = pci_path_0.unwrap();
assert_eq!(pci_path_unwrap.slots[0].0, 1);
assert_eq!(pci_path_unwrap.slots[1].0, 10);
assert_eq!(pci_path_unwrap.slots[2].0, 5);
let pci_path_01 = PciPath::from_pci_slots(vec![PciSlot(1), PciSlot(10), PciSlot(5)]);
assert!(pci_path_01.is_some());
let pci_path = pci_path_01.unwrap();
let pci_path_02 = pci_path.convert_to_string();
assert_eq!(pci_path_02, "01/0a/05".to_string());
let dev_slot = pci_path.get_device_slot();
assert!(dev_slot.is_some());
assert_eq!(dev_slot.unwrap().0, 5);
let root_slot = pci_path.get_root_slot();
assert!(root_slot.is_some());
assert_eq!(root_slot.unwrap().0, 1);
}
#[test]
fn test_get_host_guest_map() {
// test unwrap is fine, no panic occurs.
let hg_map = get_host_guest_map("".to_owned());
assert!(hg_map.is_none());
}
}

View File

@@ -1,18 +1,100 @@
// Copyright (c) 2019-2022 Alibaba Cloud
// Copyright (c) 2019-2022 Ant Group
// Copyright (c) 2022-2023 Alibaba Cloud
// Copyright (c) 2022-2023 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
use std::{fs, path::Path, process::Command};
use std::{
collections::HashMap,
fs,
path::{Path, PathBuf},
process::Command,
sync::{
atomic::{AtomicU8, Ordering},
Arc, RwLock,
},
};
use crate::device::Device;
use crate::device::DeviceType;
use crate::Hypervisor as hypervisor;
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
use anyhow::anyhow;
use anyhow::{Context, Result};
use anyhow::{anyhow, Context, Result};
use async_trait::async_trait;
use lazy_static::lazy_static;
use path_clean::PathClean;
use crate::{
device::{hypervisor, Device, DeviceType},
PciPath, PciSlot,
};
use kata_sys_util::fs::get_base_name;
pub const SYS_BUS_PCI_DRIVER_PROBE: &str = "/sys/bus/pci/drivers_probe";
pub const SYS_BUS_PCI_DEVICES: &str = "/sys/bus/pci/devices";
pub const SYS_KERN_IOMMU_GROUPS: &str = "/sys/kernel/iommu_groups";
pub const VFIO_PCI_DRIVER: &str = "vfio-pci";
pub const DRIVER_MMIO_BLK_TYPE: &str = "mmioblk";
pub const DRIVER_VFIO_PCI_TYPE: &str = "vfio-pci";
pub const MAX_DEV_ID_SIZE: usize = 31;
const VFIO_PCI_DRIVER_NEW_ID: &str = "/sys/bus/pci/drivers/vfio-pci/new_id";
const VFIO_PCI_DRIVER_UNBIND: &str = "/sys/bus/pci/drivers/vfio-pci/unbind";
const SYS_CLASS_IOMMU: &str = "/sys/class/iommu";
const INTEL_IOMMU_PREFIX: &str = "dmar";
const AMD_IOMMU_PREFIX: &str = "ivhd";
const ARM_IOMMU_PREFIX: &str = "smmu";
lazy_static! {
static ref GUEST_DEVICE_ID: Arc<AtomicU8> = Arc::new(AtomicU8::new(0_u8));
static ref HOST_GUEST_MAP: Arc<RwLock<HashMap<String, String>>> =
Arc::new(RwLock::new(HashMap::new()));
}
// Map the host BDF to a guest BDF, save the mapping into `HOST_GUEST_MAP`,
// and return the PciPath.
pub fn generate_guest_pci_path(bdf: String) -> Result<PciPath> {
let hg_map = HOST_GUEST_MAP.clone();
let current_id = GUEST_DEVICE_ID.clone();
current_id.fetch_add(1, Ordering::SeqCst);
let slot = current_id.load(Ordering::SeqCst);
// In some hypervisors (dragonball, cloud-hypervisor or firecracker),
// the device is directly connected to the bus without an intermediary bus.
// FIXME: Qemu's pci path needs to be implemented;
let host_bdf = normalize_device_bdf(bdf.as_str());
let guest_bdf = format!("0000:00:{:02x}.0", slot);
// safe, just do unwrap as `HOST_GUEST_MAP` is always valid.
hg_map.write().unwrap().insert(host_bdf, guest_bdf);
Ok(PciPath {
slots: vec![PciSlot::convert_from_u32(slot.into()).context("pci slot convert failed.")?],
})
}
// get host/guest mapping for info
pub fn get_host_guest_map(host_bdf: String) -> Option<String> {
// safe, just do unwrap as `HOST_GUEST_MAP` is always valid.
HOST_GUEST_MAP.read().unwrap().get(&host_bdf).cloned()
}
pub fn do_check_iommu_on() -> Result<bool> {
let element = std::fs::read_dir(SYS_CLASS_IOMMU)?
.filter_map(|e| e.ok())
.last();
if element.is_none() {
return Err(anyhow!("iommu is not enabled"));
}
// safe here, the result of map is always Some(true) or Some(false).
Ok(element
.map(|e| {
let x = e.file_name().to_string_lossy().into_owned();
x.starts_with(INTEL_IOMMU_PREFIX)
|| x.starts_with(AMD_IOMMU_PREFIX)
|| x.starts_with(ARM_IOMMU_PREFIX)
})
.unwrap())
}
fn override_driver(bdf: &str, driver: &str) -> Result<()> {
let driver_override = format!("/sys/bus/pci/devices/{}/driver_override", bdf);
@@ -22,56 +104,470 @@ fn override_driver(bdf: &str, driver: &str) -> Result<()> {
Ok(())
}
const SYS_BUS_PCI_DEVICES: &str = "/sys/bus/pci/devices";
const SYS_BUS_PCI_DRIVER_PROBE: &str = "/sys/bus/pci/drivers_probe";
const VFIO_PCI_DRIVER_NEW_ID: &str = "/sys/bus/pci/drivers/vfio-pci/new_id";
const VFIO_PCI_DRIVER_UNBIND: &str = "/sys/bus/pci/drivers/vfio-pci/unbind";
pub const VFIO_PCI_DRIVER: &str = "vfio-pci";
#[derive(Clone, Debug, Default, PartialEq)]
pub enum VfioBusMode {
    #[default]
    MMIO,
    PCI,
}
impl VfioBusMode {
    pub fn new(mode: &str) -> Self {
        match mode {
            "mmio" => VfioBusMode::MMIO,
            _ => VfioBusMode::PCI,
        }
    }
pub fn to_string(mode: VfioBusMode) -> String {
match mode {
VfioBusMode::MMIO => "mmio".to_owned(),
_ => "pci".to_owned(),
}
}
// driver_type used for kata-agent
// (1) vfio-pci for add device handler,
// (2) mmioblk for add storage handler,
pub fn driver_type(mode: &str) -> &str {
match mode {
"b" => DRIVER_MMIO_BLK_TYPE,
_ => DRIVER_VFIO_PCI_TYPE,
}
}
}
#[derive(Clone, Debug, Default)]
pub enum VfioDeviceType {
/// error type of VFIO device
Error,
/// normal VFIO device type
#[default]
Normal,
/// mediated VFIO device type
Mediated,
}
// DeviceVendor represents a PCI device's device id and vendor id
// DeviceVendor: (device, vendor)
#[derive(Clone, Debug)]
pub struct DeviceVendor(String, String);
impl DeviceVendor {
pub fn get_device_vendor(&self) -> Result<(u32, u32)> {
// default value is 0 when vendor_id or device_id is empty
if self.0.is_empty() || self.1.is_empty() {
return Ok((0, 0));
}
let do_convert = |id: &String| {
u32::from_str_radix(
id.trim_start_matches("0x")
.trim_matches(char::is_whitespace),
16,
)
.with_context(|| anyhow!("invalid id {:?}", id))
};
let device = do_convert(&self.0).context("convert device failed")?;
let vendor = do_convert(&self.1).context("convert vendor failed")?;
Ok((device, vendor))
}
pub fn get_device_vendor_id(&self) -> Result<u32> {
let (device, vendor) = self
.get_device_vendor()
.context("get device and vendor failed")?;
Ok(((device & 0xffff) << 16) | (vendor & 0xffff))
}
}
// HostDevice represents a VFIO drive used to hotplug
#[derive(Clone, Debug, Default)]
pub struct HostDevice {
/// unique identifier of the device
pub hostdev_id: String,
/// Sysfs path for mdev bus type device
pub sysfs_path: String,
    /// PCI device information (BDF): "bus:slot:function"
pub bus_slot_func: String,
/// Bus Mode, PCI or MMIO
pub mode: VfioBusMode,
/// device_vendor: device id and vendor id
pub device_vendor: Option<DeviceVendor>,
/// type of vfio device
pub vfio_type: VfioDeviceType,
/// guest PCI path of device
pub guest_pci_path: Option<PciPath>,
/// vfio_vendor for vendor's some special cases.
#[cfg(feature = "enable-vendor")]
pub vfio_vendor: VfioVendor,
}
// VfioConfig represents a VFIO drive used for hotplugging
#[derive(Clone, Debug, Default)]
pub struct VfioConfig {
/// usually host path will be /dev/vfio/N
pub host_path: String,
/// device as block or char
pub dev_type: String,
/// hostdev_prefix for devices, such as:
    /// (1) physical endpoint: "physical_nic_"
/// (2) vfio mdev: "vfio_mdev_"
/// (3) vfio pci: "vfio_device_"
/// (4) vfio volume: "vfio_vol_"
/// (5) vfio nvme: "vfio_nvme_"
pub hostdev_prefix: String,
    /// the path at which the device appears inside the VM,
    /// outside of the container mount namespace
    /// virt_path: Option<(index, virt_path_name)>
pub virt_path: Option<(u64, String)>,
}
#[derive(Clone, Debug, Default)]
pub struct VfioDevice {
    /// Unique identifier of the device
    pub device_id: String,
pub attach_count: u64,
/// Bus Mode, PCI or MMIO
pub bus_mode: VfioBusMode,
/// driver type
pub driver_type: String,
    /// vfio config from the upper layer
    pub config: VfioConfig,
    // host device with multi-functions
pub devices: Vec<HostDevice>,
// options for vfio pci handler in kata-agent
pub device_options: Vec<String>,
}
impl VfioDevice {
// new with VfioConfig
pub fn new(device_id: String, dev_info: &VfioConfig) -> Self {
// devices and device_options are in a 1-1 mapping, used in
// vfio-pci handler for kata-agent.
let devices: Vec<HostDevice> = Vec::with_capacity(MAX_DEV_ID_SIZE);
let device_options: Vec<String> = Vec::with_capacity(MAX_DEV_ID_SIZE);
// get bus mode and driver type based on the device type
let dev_type = dev_info.dev_type.as_str();
let driver_type = VfioBusMode::driver_type(dev_type).to_owned();
Self {
device_id,
attach_count: 0,
bus_mode: VfioBusMode::PCI,
driver_type,
config: dev_info.clone(),
devices,
device_options,
}
}
fn get_host_path(&self) -> String {
self.config.host_path.clone()
}
fn get_vfio_prefix(&self) -> String {
self.config.hostdev_prefix.clone()
}
    // normal VFIO BDF: 0000:04:00.0
// mediated VFIO BDF: 83b8f4f2-509f-382f-3c1e-e6bfe0fa1001
fn get_vfio_device_type(&self, device_sys_path: String) -> Result<VfioDeviceType> {
let mut tokens: Vec<&str> = device_sys_path.as_str().split(':').collect();
let vfio_type = match tokens.len() {
3 => VfioDeviceType::Normal,
_ => {
tokens = device_sys_path.split('-').collect();
if tokens.len() == 5 {
VfioDeviceType::Mediated
} else {
VfioDeviceType::Error
}
}
};
Ok(vfio_type)
}
// get_sysfs_device returns the sysfsdev of mediated device
// expected input string format is absolute path to the sysfs dev node
// eg. /sys/kernel/iommu_groups/0/devices/f79944e4-5a3d-11e8-99ce-479cbab002e4
fn get_sysfs_device(&self, sysfs_dev_path: PathBuf) -> Result<String> {
let mut buf =
fs::canonicalize(sysfs_dev_path.clone()).context("sysfs device path not exist")?;
let mut resolved = false;
// resolve symbolic links until there's no more to resolve
while buf.symlink_metadata()?.file_type().is_symlink() {
let link = fs::read_link(&buf)?;
buf.pop();
buf.push(link);
resolved = true;
}
// If a symbolic link was resolved, the resulting path may be relative to the original path
if resolved {
// If the original path is relative and the resolved path is not, the resolved path
// should be returned as absolute.
if sysfs_dev_path.is_relative() && buf.is_absolute() {
buf = fs::canonicalize(&buf)?;
}
}
Ok(buf.clean().display().to_string())
}
// vfio device details: (device BDF, device SysfsDev, vfio Device Type)
fn get_vfio_device_details(
&self,
dev_file_name: String,
iommu_dev_path: PathBuf,
) -> Result<(Option<String>, String, VfioDeviceType)> {
let vfio_type = self.get_vfio_device_type(dev_file_name.clone())?;
match vfio_type {
VfioDeviceType::Normal => {
let dev_bdf = get_device_bdf(dev_file_name.clone());
let dev_sys = [SYS_BUS_PCI_DEVICES, dev_file_name.as_str()].join("/");
Ok((dev_bdf, dev_sys, vfio_type))
}
VfioDeviceType::Mediated => {
// sysfsdev eg. /sys/devices/pci0000:00/0000:00:02.0/f79944e4-5a3d-11e8-99ce-479cbab002e4
let sysfs_dev = Path::new(&iommu_dev_path).join(dev_file_name);
let dev_sys = self
.get_sysfs_device(sysfs_dev)
.context("get sysfs device failed")?;
let dev_bdf = if let Some(dev_s) = get_mediated_device_bdf(dev_sys.clone()) {
get_device_bdf(dev_s)
} else {
None
};
Ok((dev_bdf, dev_sys, vfio_type))
}
_ => Err(anyhow!("unsupported vfio type : {:?}", vfio_type)),
}
}
    // read device and vendor from /sys/bus/pci/devices/BDF/X
fn get_vfio_device_vendor(&self, bdf: &str) -> Result<DeviceVendor> {
let device =
get_device_property(bdf, "device").context("get device from syspath failed")?;
let vendor =
get_device_property(bdf, "vendor").context("get vendor from syspath failed")?;
Ok(DeviceVendor(device, vendor))
}
async fn set_vfio_config(
&mut self,
iommu_devs_path: PathBuf,
device_name: &str,
) -> Result<HostDevice> {
let vfio_dev_details = self
.get_vfio_device_details(device_name.to_owned(), iommu_devs_path)
.context("get vfio device details failed")?;
// It's safe as BDF really exists.
let dev_bdf = vfio_dev_details.0.unwrap();
let dev_vendor = self
.get_vfio_device_vendor(&dev_bdf)
.context("get property device and vendor failed")?;
let mut vfio_dev = HostDevice {
bus_slot_func: dev_bdf.clone(),
device_vendor: Some(dev_vendor),
sysfs_path: vfio_dev_details.1,
vfio_type: vfio_dev_details.2,
..Default::default()
};
// when vfio pci, kata-agent handles with device_options, and its
// format: "DDDD:BB:DD.F=<pcipath>"
// DDDD:BB:DD.F is the device's PCI address on host
// <pcipath> is the device's PCI path in the guest
if self.bus_mode == VfioBusMode::PCI {
let pci_path =
generate_guest_pci_path(dev_bdf.clone()).context("generate pci path failed")?;
vfio_dev.guest_pci_path = Some(pci_path.clone());
self.device_options
.push(format!("0000:{}={}", dev_bdf, pci_path.convert_to_string()));
}
Ok(vfio_dev)
}
// filter Host or PCI Bridges that are in the same IOMMU group as the
// passed-through devices. One CANNOT pass-through a PCI bridge or Host
// bridge. Class 0x0604 is PCI bridge, 0x0600 is Host bridge
fn filter_bridge_device(&self, bdf: &str, bitmask: u64) -> Option<u64> {
let device_class = match get_device_property(bdf, "class") {
Ok(dev_class) => dev_class,
Err(_) => "".to_string(),
};
if device_class.is_empty() {
return None;
}
        // sysfs `class` is a hex string like "0x060400"
        match u32::from_str_radix(device_class.trim_start_matches("0x"), 16) {
            Ok(cid_u32) => {
                // class code is the upper 16 bits; drop the low byte (prog-if)
                let class_code = u64::from(cid_u32) >> 8;
if class_code & bitmask == bitmask {
Some(class_code)
} else {
None
}
}
_ => None,
}
}
}
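The bridge filter above hinges on one bit trick: shifting the 24-bit sysfs class code right by 8 leaves the 16-bit class, and both host bridges (0x0600) and PCI bridges (0x0604) match the 0x0600 bitmask. A standalone sketch (the `is_bridge` helper is hypothetical, re-implemented locally):

```rust
// Mirror of filter_bridge_device's class check: parse the sysfs
// `class` hex string, drop the prog-if byte, test the 0x0600 mask.
fn is_bridge(class_sysfs: &str) -> bool {
    let bitmask: u64 = 0x0600;
    u32::from_str_radix(class_sysfs.trim_start_matches("0x"), 16)
        .map(|c| (u64::from(c) >> 8) & bitmask == bitmask)
        .unwrap_or(false)
}

fn main() {
    assert!(is_bridge("0x060400"));  // PCI-to-PCI bridge
    assert!(is_bridge("0x060000"));  // host bridge
    assert!(!is_bridge("0x020000")); // ethernet controller
    println!("ok");
}
```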
#[async_trait]
impl Device for VfioDevice {
async fn attach(&mut self, h: &dyn hypervisor) -> Result<()> {
// host path: /dev/vfio/X
let host_path = self.get_host_path();
// vfio group: X
let vfio_group = get_base_name(host_path.clone())?
.into_string()
.map_err(|e| anyhow!("failed to get base name {:?}", e))?;
// /sys/kernel/iommu_groups/X/devices
let iommu_devs_path = Path::new(SYS_KERN_IOMMU_GROUPS)
.join(vfio_group.as_str())
.join("devices");
// /sys/kernel/iommu_groups/X/devices
// DDDD:BB:DD.F0 DDDD:BB:DD.F1
let iommu_devices = fs::read_dir(iommu_devs_path.clone())?
.filter_map(|e| {
let x = e.ok()?.file_name().to_string_lossy().into_owned();
Some(x)
})
.collect::<Vec<String>>();
if iommu_devices.len() > 1 {
warn!(sl!(), "vfio device {} with multi-function", host_path);
}
// pass all devices in iommu group, and use index to identify device.
for (index, device) in iommu_devices.iter().enumerate() {
// filter host or PCI bridge
if self.filter_bridge_device(device, 0x0600).is_some() {
continue;
}
let mut hostdev: HostDevice = self
.set_vfio_config(iommu_devs_path.clone(), device)
.await
.context("set vfio config failed")?;
let dev_prefix = self.get_vfio_prefix();
hostdev.hostdev_id = make_device_nameid(&dev_prefix, index, MAX_DEV_ID_SIZE);
self.devices.push(hostdev);
}
if self
.increase_attach_count()
.await
.context("failed to increase attach count")?
{
            return Err(anyhow!("vfio device was already attached"));
}
        // do add device for vfio device
if let Err(e) = h.add_device(DeviceType::Vfio(self.clone())).await {
self.decrease_attach_count().await?;
return Err(e);
}
Ok(())
}
async fn detach(&mut self, h: &dyn hypervisor) -> Result<Option<u64>> {
if self
.decrease_attach_count()
.await
.context("failed to decrease attach count")?
{
return Ok(None);
}
if let Err(e) = h.remove_device(DeviceType::Vfio(self.clone())).await {
self.increase_attach_count().await?;
return Err(e);
}
        // a device index exists only when virt_path is Some
        let device_index = self.config.virt_path.clone().map(|virt_path| virt_path.0);
Ok(device_index)
}
async fn increase_attach_count(&mut self) -> Result<bool> {
match self.attach_count {
0 => {
// do real attach
self.attach_count += 1;
Ok(false)
}
std::u64::MAX => Err(anyhow!("device was attached too many times")),
_ => {
self.attach_count += 1;
Ok(true)
}
}
}
async fn decrease_attach_count(&mut self) -> Result<bool> {
match self.attach_count {
0 => Err(anyhow!("detaching a device that wasn't attached")),
1 => {
                // do real work
self.attach_count -= 1;
Ok(false)
}
_ => {
self.attach_count -= 1;
Ok(true)
}
}
}
async fn get_device_info(&self) -> DeviceType {
DeviceType::Vfio(self.clone())
}
}
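The `increase_attach_count`/`decrease_attach_count` pair above implements a small reference-count protocol: the boolean tells the caller whether to skip the real hotplug because another user already holds the device. A minimal synchronous sketch of the increase side (helper name hypothetical):

```rust
// Attach refcount: Ok(false) means "first user, do the real attach";
// Ok(true) means "already attached, skip"; saturation is an error.
fn increase(count: &mut u64) -> Result<bool, String> {
    match *count {
        u64::MAX => Err("device was attached too many times".into()),
        0 => {
            *count += 1;
            Ok(false) // first user: perform the real attach
        }
        _ => {
            *count += 1;
            Ok(true) // already attached: no-op for the caller
        }
    }
}

fn main() {
    let mut c = 0u64;
    assert_eq!(increase(&mut c), Ok(false)); // real attach happens here
    assert_eq!(increase(&mut c), Ok(true));  // second attach is skipped
    println!("count = {}", c);
}
```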
// binds the device to vfio driver after unbinding from host.
// Will be called by a network interface or a generic pcie device.
pub fn bind_device_to_vfio(bdf: &str, host_driver: &str, _vendor_device_id: &str) -> Result<()> {
// modprobe vfio-pci
    if !Path::new(VFIO_PCI_DRIVER_NEW_ID).exists() {
        Command::new("modprobe")
            .arg(VFIO_PCI_DRIVER)
.output()
.expect("Failed to run modprobe vfio-pci");
}
}
}
if !do_check_iommu_on().context("check iommu on failed")? {
return Err(anyhow!("IOMMU not enabled yet."));
}
// if it's already bound to vfio
    if is_equal_driver(bdf, VFIO_PCI_DRIVER) {
info!(sl!(), "bdf : {} was already bound to vfio-pci", bdf);
return Ok(());
}
info!(sl!(), "host driver : {}", host_driver);
    override_driver(bdf, VFIO_PCI_DRIVER).context("override driver")?;
let unbind_path = format!("/sys/bus/pci/devices/{}/driver/unbind", bdf);
// echo bdf > /sys/bus/pci/drivers/virtio-pci/unbind"
fs::write(&unbind_path, bdf)
.with_context(|| format!("Failed to echo {} > {}", bdf, &unbind_path))?;
info!(sl!(), "{} is unbound from {}", bdf, host_driver);
// echo bdf > /sys/bus/pci/drivers_probe
    fs::write(SYS_BUS_PCI_DRIVER_PROBE, bdf)
        .with_context(|| format!("Failed to echo {} > {}", bdf, SYS_BUS_PCI_DRIVER_PROBE))?;
info!(sl!(), "echo {} > /sys/bus/pci/drivers_probe", bdf);
Ok(())
}
pub fn is_equal_driver(bdf: &str, host_driver: &str) -> bool {
    let sys_pci_devices_path = Path::new(SYS_BUS_PCI_DEVICES);
let driver_file = sys_pci_devices_path.join(bdf).join("driver");
if driver_file.exists() {
false
}
// bind_device_to_host binds the device to the host driver after unbinding from vfio-pci.
pub fn bind_device_to_host(bdf: &str, host_driver: &str, _vendor_device_id: &str) -> Result<()> {
// Unbind from vfio-pci driver to the original host driver
info!(sl!(), "bind {} to {}", bdf, host_driver);
// if it's already bound to host_driver
override_driver(bdf, host_driver).context("override driver")?;
// echo bdf > /sys/bus/pci/drivers/vfio-pci/unbind"
std::fs::write(VFIO_PCI_DRIVER_UNBIND, bdf)
.with_context(|| format!("echo {}> {}", bdf, VFIO_PCI_DRIVER_UNBIND))?;
info!(sl!(), "echo {} > {}", bdf, VFIO_PCI_DRIVER_UNBIND);
// echo bdf > /sys/bus/pci/drivers_probe
std::fs::write(SYS_BUS_PCI_DRIVER_PROBE, bdf)
.with_context(|| format!("echo {} > {}", bdf, SYS_BUS_PCI_DRIVER_PROBE))?;
info!(sl!(), "echo {} > {}", bdf, SYS_BUS_PCI_DRIVER_PROBE);
Ok(())
}
// get_device_bdf returns the BDF of pci device
// expected format <bus>:<slot>.<func> eg. 02:10.0
fn get_device_bdf(dev_sys_str: String) -> Option<String> {
    let dev_sys = dev_sys_str;
    if !dev_sys.starts_with("0000:") {
        return Some(dev_sys);
    }
    let parts: Vec<&str> = dev_sys.as_str().splitn(2, ':').collect();
    if parts.len() < 2 {
        return None;
    }
    parts.get(1).copied().map(|bdf| bdf.to_owned())
}
// expected format <domain>:<bus>:<slot>.<func> eg. 0000:02:10.0
fn normalize_device_bdf(bdf: &str) -> String {
if !bdf.starts_with("0000") {
format!("0000:{}", bdf)
} else {
bdf.to_string()
}
}
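`normalize_device_bdf` and `get_device_bdf` are inverses around the `0000:` domain prefix: one adds it, the other strips it back off. A self-contained round-trip sketch (both helpers re-implemented locally for illustration):

```rust
// Add the "0000:" PCI domain when missing (normalize_device_bdf).
fn normalize(bdf: &str) -> String {
    if bdf.starts_with("0000") {
        bdf.to_string()
    } else {
        format!("0000:{}", bdf)
    }
}

// Strip the domain back off (get_device_bdf): split once on ':'.
fn strip_domain(dev_sys: &str) -> Option<String> {
    if !dev_sys.starts_with("0000:") {
        return Some(dev_sys.to_string());
    }
    dev_sys.splitn(2, ':').nth(1).map(str::to_owned)
}

fn main() {
    assert_eq!(normalize("02:10.0"), "0000:02:10.0");
    assert_eq!(strip_domain("0000:02:10.0").unwrap(), "02:10.0");
    println!("ok");
}
```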
// make_device_nameid: generate an ID for the hypervisor command line
fn make_device_nameid(name_type: &str, id: usize, max_len: usize) -> String {
let name_id = format!("{}_{}", name_type, id);
if name_id.len() > max_len {
name_id[0..max_len].to_string()
} else {
name_id
}
}
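`make_device_nameid` simply concatenates prefix and index and truncates to the length cap. A runnable copy of the same logic (the 31-character cap used below stands in for `MAX_DEV_ID_SIZE`, whose real value is not shown in this chunk):

```rust
// Same logic as make_device_nameid above: "<prefix>_<id>",
// truncated to max_len bytes when it overflows.
fn make_device_nameid(name_type: &str, id: usize, max_len: usize) -> String {
    let name_id = format!("{}_{}", name_type, id);
    if name_id.len() > max_len {
        name_id[0..max_len].to_string()
    } else {
        name_id
    }
}

fn main() {
    assert_eq!(make_device_nameid("vfio_device", 0, 31), "vfio_device_0");
    // a tight cap truncates the generated name
    assert_eq!(make_device_nameid("vfio_device", 0, 8), "vfio_dev");
    println!("ok");
}
```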
// get_mediated_device_bdf returns the MDEV BDF
// expected input string /sys/devices/pci0000:d7/BDF0/BDF1/.../MDEVBDF/UUID
fn get_mediated_device_bdf(dev_sys_str: String) -> Option<String> {
let dev_sys = dev_sys_str;
let parts: Vec<&str> = dev_sys.as_str().split('/').collect();
if parts.len() < 4 {
return None;
}
parts
.get(parts.len() - 2)
.copied()
.map(|bdf| bdf.to_owned())
}
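For a mediated device, the sysfs path ends in `<parent BDF>/<mdev UUID>`, so the BDF is the second-to-last path component. A standalone sketch of that extraction (helper re-implemented locally):

```rust
// Mirror of get_mediated_device_bdf: split the sysfs path on '/',
// the mdev parent's BDF is the second-to-last component.
fn mediated_bdf(dev_sys: &str) -> Option<String> {
    let parts: Vec<&str> = dev_sys.split('/').collect();
    if parts.len() < 4 {
        return None;
    }
    parts.get(parts.len() - 2).copied().map(str::to_owned)
}

fn main() {
    let p = "/sys/devices/pci0000:d7/0000:d7:01.0/0000:d8:00.0/f79944e4-5a3d-11e8-99ce-479cbab002e4";
    assert_eq!(mediated_bdf(p).unwrap(), "0000:d8:00.0");
    println!("ok");
}
```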
// dev_sys_path: /sys/bus/pci/devices/DDDD:BB:DD.F
// cfg_path: : /sys/bus/pci/devices/DDDD:BB:DD.F/xxx
fn get_device_property(bdf: &str, property: &str) -> Result<String> {
let device_name = normalize_device_bdf(bdf);
let dev_sys_path = Path::new(SYS_BUS_PCI_DEVICES).join(device_name);
let cfg_path = fs::read_to_string(dev_sys_path.join(property)).with_context(|| {
format!(
"failed to read {}",
dev_sys_path.join(property).to_str().unwrap()
)
})?;
Ok(cfg_path.as_str().trim_end_matches('\n').to_string())
}
pub fn get_vfio_iommu_group(bdf: String) -> Result<String> {
// /sys/bus/pci/devices/DDDD:BB:DD.F/iommu_group
let dbdf = normalize_device_bdf(bdf.as_str());
let iommugrp_path = Path::new(SYS_BUS_PCI_DEVICES)
.join(dbdf.as_str())
.join("iommu_group");
if !iommugrp_path.exists() {
warn!(
sl!(),
"IOMMU group path: {:?} not found, do bind device to vfio first.", iommugrp_path
);
return Err(anyhow!("please do bind device to vfio"));
}
// iommu group symlink: ../../../../../../kernel/iommu_groups/X
let iommugrp_symlink = fs::read_link(&iommugrp_path)
.map_err(|e| anyhow!("read iommu group symlink failed {:?}", e))?;
// get base name from iommu group symlink: X
let iommu_group = get_base_name(iommugrp_symlink)?
.into_string()
.map_err(|e| anyhow!("failed to get iommu group {:?}", e))?;
    // verify the path to ensure it does exist.
if !Path::new(SYS_KERN_IOMMU_GROUPS)
.join(&iommu_group)
.join("devices")
.join(dbdf.as_str())
.exists()
{
return Err(anyhow!(
            "device dbdf {:?} doesn't exist in {}/{}/devices.",
dbdf.as_str(),
SYS_KERN_IOMMU_GROUPS,
iommu_group
));
}
Ok(format!("/dev/vfio/{}", iommu_group))
}
pub fn get_vfio_device(device: String) -> Result<String> {
    // support both /dev/vfio/X and BDF<DDDD:BB:DD.F> or BDF<BB:DD.F>
let mut vfio_device = device;
let bdf_vec: Vec<&str> = vfio_device.as_str().split(&[':', '.'][..]).collect();
if bdf_vec.len() >= 3 && bdf_vec.len() < 5 {
// DDDD:BB:DD.F -> /dev/vfio/X
vfio_device =
get_vfio_iommu_group(vfio_device.clone()).context("get vfio iommu group failed")?;
}
Ok(vfio_device)
}
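The split on `[':', '.']` above distinguishes a BDF (3 or 4 components) from a `/dev/vfio/X` path (1 component). That heuristic can be sketched on its own (the `looks_like_bdf` helper is hypothetical):

```rust
// A BDF like "0000:02:10.0" splits into 4 parts on ':' and '.',
// "02:10.0" into 3; a /dev/vfio/X path stays in one piece.
fn looks_like_bdf(device: &str) -> bool {
    let n = device.split(&[':', '.'][..]).count();
    (3..5).contains(&n)
}

fn main() {
    assert!(looks_like_bdf("0000:02:10.0")); // 4 parts
    assert!(looks_like_bdf("02:10.0"));      // 3 parts
    assert!(!looks_like_bdf("/dev/vfio/12")); // 1 part
    println!("ok");
}
```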

// Copyright (c) 2022-2023 Alibaba Cloud
// Copyright (c) 2022-2023 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
use crate::device::Device;
use crate::device::DeviceType;
use crate::Hypervisor as hypervisor;
use anyhow::Result;
use async_trait::async_trait;
#[derive(Debug, Clone)]
pub enum VhostUserType {
/// Blk - represents a block vhostuser device type
/// "vhost-user-blk-pci"
Blk(String),
/// SCSI - represents SCSI based vhost-user type
/// "vhost-user-scsi-pci"
SCSI(String),
/// Net - represents Net based vhost-user type
/// "virtio-net-pci"
Net(String),
/// FS - represents a virtio-fs vhostuser device type
/// "vhost-user-fs-pci"
FS(String),
}
impl Default for VhostUserType {
fn default() -> Self {
VhostUserType::Blk("vhost-user-blk-pci".to_owned())
}
}
#[derive(Debug, Clone, Default)]
/// VhostUserConfig represents data shared by most vhost-user devices
pub struct VhostUserConfig {
    /// device id
    pub dev_id: String,
    /// socket path
    pub socket_path: String,
    /// mac_address is only meaningful for vhost user net device
    pub mac_address: String,
    /// tag is only meaningful for vhost-user-fs device
    pub tag: String,
    /// vhost-user-fs cache mode
    pub cache_mode: String,
    /// vhost-user-fs cache size in MB
    pub cache_size: u32,
    /// vhost user device type
    pub device_type: VhostUserType,
    /// guest block driver
    pub driver_option: String,
    /// pci_addr is the PCI address used to identify the slot at which the drive is attached.
    pub pci_addr: Option<String>,
    /// Block index of the device if assigned
    pub index: u64,
    /// Virtio queue size. Size: byte
    pub queue_size: u32,
    /// Block device multi-queue
    pub num_queues: usize,
/// device path in guest
pub virt_path: String,
}
#[derive(Debug, Clone, Default)]
pub struct VhostUserDevice {
pub device_id: String,
pub config: VhostUserConfig,
}

// Copyright (c) 2023 Alibaba Cloud
// Copyright (c) 2023 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
use anyhow::{anyhow, Context, Result};
use async_trait::async_trait;
use super::VhostUserConfig;
use crate::{
device::{Device, DeviceType},
Hypervisor as hypervisor,
};
#[derive(Debug, Clone, Default)]
pub struct VhostUserBlkDevice {
pub device_id: String,
/// If set to true, the drive is opened in read-only mode. Otherwise, the
/// drive is opened as read-write.
pub is_readonly: bool,
/// Don't close `path_on_host` file when dropping the device.
pub no_drop: bool,
/// driver type for block device
pub driver_option: String,
pub attach_count: u64,
pub config: VhostUserConfig,
}
impl VhostUserBlkDevice {
// new creates a new VhostUserBlkDevice
pub fn new(device_id: String, config: VhostUserConfig) -> Self {
VhostUserBlkDevice {
device_id,
attach_count: 0,
config,
..Default::default()
}
}
}
#[async_trait]
impl Device for VhostUserBlkDevice {
async fn attach(&mut self, h: &dyn hypervisor) -> Result<()> {
        // increase attach count; skip attaching if the device is already attached
if self
.increase_attach_count()
.await
.context("failed to increase attach count")?
{
return Ok(());
}
if let Err(e) = h.add_device(DeviceType::VhostUserBlk(self.clone())).await {
self.decrease_attach_count().await?;
return Err(e);
}
        Ok(())
}
async fn detach(&mut self, h: &dyn hypervisor) -> Result<Option<u64>> {
        // decrease attach count; only do the real detach when it drops to zero
if self
.decrease_attach_count()
.await
.context("failed to decrease attach count")?
{
return Ok(None);
}
if let Err(e) = h
.remove_device(DeviceType::VhostUserBlk(self.clone()))
.await
{
self.increase_attach_count().await?;
return Err(e);
}
Ok(Some(self.config.index))
}
async fn get_device_info(&self) -> DeviceType {
DeviceType::VhostUserBlk(self.clone())
}
async fn increase_attach_count(&mut self) -> Result<bool> {
match self.attach_count {
0 => {
// do real attach
self.attach_count += 1;
Ok(false)
}
std::u64::MAX => Err(anyhow!("device was attached too many times")),
_ => {
self.attach_count += 1;
Ok(true)
}
}
}
async fn decrease_attach_count(&mut self) -> Result<bool> {
match self.attach_count {
0 => Err(anyhow!("detaching a device that wasn't attached")),
1 => {
                // do real work
self.attach_count -= 1;
Ok(false)
}
_ => {
self.attach_count -= 1;
Ok(true)
}
}
}
}

// Copyright (c) 2022-2023 Alibaba Cloud
// Copyright (c) 2022-2023 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
use crate::device::Device;
use crate::device::DeviceType;
use crate::Hypervisor as hypervisor;
use anyhow::{anyhow, Context, Result};
use async_trait::async_trait;
/// VIRTIO_BLOCK_PCI indicates block driver is virtio-pci based
pub const VIRTIO_BLOCK_PCI: &str = "virtio-blk-pci";
pub const VIRTIO_BLOCK_MMIO: &str = "virtio-blk-mmio";
pub const KATA_MMIO_BLK_DEV_TYPE: &str = "mmioblk";
pub const KATA_BLK_DEV_TYPE: &str = "blk";

use std::fmt;
use crate::device::driver::vhost_user_blk::VhostUserBlkDevice;
use crate::{
BlockConfig, BlockDevice, HybridVsockConfig, HybridVsockDevice, Hypervisor as hypervisor,
NetworkConfig, NetworkDevice, ShareFsDevice, ShareFsDeviceConfig, ShareFsMountConfig,
ShareFsMountDevice, VfioConfig, VfioDevice, VsockConfig, VsockDevice,
ShareFsMountDevice, VfioConfig, VfioDevice, VhostUserConfig, VsockConfig, VsockDevice,
};
use anyhow::Result;
use async_trait::async_trait;
pub mod util;
#[derive(Debug)]
pub enum DeviceConfig {
BlockCfg(BlockConfig),
VhostUserBlkCfg(VhostUserConfig),
NetworkCfg(NetworkConfig),
ShareFsCfg(ShareFsDeviceConfig),
VfioCfg(VfioConfig),
}
#[derive(Debug)]
pub enum DeviceType {
Block(BlockDevice),
VhostUserBlk(VhostUserBlkDevice),
Vfio(VfioDevice),
Network(NetworkDevice),
ShareFs(ShareFsDevice),
}
impl fmt::Display for DeviceType {
}
#[async_trait]
pub trait Device: Send + Sync {
pub trait Device: std::fmt::Debug + Send + Sync {
// attach is to plug device into VM
async fn attach(&mut self, h: &dyn hypervisor) -> Result<()>;
// detach is to unplug device from VM

use ini::Ini;
const SYS_DEV_PREFIX: &str = "/sys/dev";
pub const DEVICE_TYPE_BLOCK: &str = "b";
pub const DEVICE_TYPE_CHAR: &str = "c";
// get_host_path is used to fetch the host path for the device.
// The path passed in the spec refers to the path that should appear inside the container.
// We need to find the actual device path on the host based on the major-minor numbers of the device.
pub fn get_host_path(dev_type: String, major: i64, minor: i64) -> Result<String> {
let path_comp = match dev_type.as_str() {
pub fn get_host_path(dev_type: &str, major: i64, minor: i64) -> Result<String> {
let path_comp = match dev_type {
"c" | "u" => "char",
"b" => "block",
// for device type p will return an empty string

use std::{collections::HashSet, fs::create_dir_all, path::PathBuf};
const DRAGONBALL_KERNEL: &str = "vmlinux";
const DRAGONBALL_ROOT_FS: &str = "rootfs";
#[derive(Debug)]
pub struct DragonballInner {
/// sandbox id
pub(crate) id: String,

use std::path::PathBuf;
use anyhow::{anyhow, Context, Result};
use dbs_utils::net::MacAddr;
use dragonball::{
api::v1::{
BlockDeviceConfigInfo, FsDeviceConfigInfo, FsMountConfigInfo, VirtioNetDeviceConfigInfo,
VsockDeviceConfigInfo,
},
device_manager::blk_dev_mgr::BlockDeviceType,
};
use super::DragonballInner;
use crate::{
device::DeviceType, HybridVsockConfig, NetworkConfig, ShareFsDeviceConfig, ShareFsMountConfig,
ShareFsMountType, ShareFsOperation, VfioBusMode, VfioDevice, VmmState,
};
const MB_TO_B: u32 = 1024 * 1024;
impl DragonballInner {
DeviceType::Network(network) => self
.add_net_device(&network.config, network.id)
.context("add net device"),
DeviceType::Vfio(hostdev) => self.add_vfio_device(&hostdev).context("add vfio device"),
DeviceType::Block(block) => self
.add_block_device(
block.config.path_on_host.as_str(),
block.config.no_drop,
)
.context("add block device"),
DeviceType::VhostUserBlk(block) => self
.add_block_device(
block.config.socket_path.as_str(),
block.device_id.as_str(),
block.is_readonly,
block.no_drop,
)
.context("add vhost user based block device"),
DeviceType::HybridVsock(hvsock) => self.add_hvsock(&hvsock.config).context("add vsock"),
DeviceType::ShareFs(sharefs) => self
.add_share_fs_device(&sharefs.config)
self.remove_block_drive(drive_id.as_str())
.context("remove block drive")
}
DeviceType::Vfio(hostdev) => {
let primary_device = hostdev.devices.first().unwrap().clone();
let hostdev_id = primary_device.hostdev_id;
self.remove_vfio_device(hostdev_id)
}
_ => Err(anyhow!("unsupported device {:?}", device)),
}
}
fn add_vfio_device(&mut self, device: &VfioDevice) -> Result<()> {
let vfio_device = device.clone();
        // FIXME:
        // For a device with multiple functions, or an IOMMU group with
        // more than one device, a primary device is selected to be passed
        // to the VM; the first one is treated as the primary device.
        // safe here, devices is not empty.
let primary_device = vfio_device.devices.first().unwrap().clone();
let vendor_device_id = if let Some(vd) = primary_device.device_vendor {
vd.get_device_vendor_id()?
} else {
0
};
let guest_dev_id = if let Some(pci_path) = primary_device.guest_pci_path {
// safe here, dragonball's pci device directly connects to root bus.
// usually, it has been assigned in vfio device manager.
pci_path.get_device_slot().unwrap().0
} else {
0
};
let bus_mode = VfioBusMode::to_string(vfio_device.bus_mode);
info!(
sl!(),
" Mock for dragonball insert host device.
host device id: {:?},
bus_slot_func: {:?},
            bus mode: {:?},
guest device id: {:?},
vendor/device id: {:?}",
primary_device.hostdev_id,
primary_device.bus_slot_func,
bus_mode,
guest_dev_id,
vendor_device_id,
);
// FIXME:
// interface implementation to be done when dragonball supports
// self.vmm_instance.insert_host_device(host_cfg)?;
Ok(())
}
fn remove_vfio_device(&mut self, hostdev_id: String) -> Result<()> {
info!(
sl!(),
"Mock for dragonball remove host_device with hostdev id {:?}", hostdev_id
);
// FIXME:
// interface implementation to be done when dragonball supports
// self.vmm_instance.remove_host_device(hostdev_id)?;
Ok(())
}
fn add_block_device(
&mut self,
path: &str,
let blk_cfg = BlockDeviceConfigInfo {
drive_id: id.to_string(),
device_type: BlockDeviceType::get_type(path),
path_on_host: PathBuf::from(jailed_drive),
is_direct: self.config.blockdev_info.block_device_cache_direct,
no_drop,

use tokio::sync::RwLock;
use crate::{DeviceType, Hypervisor, VcpuThreadIds};
#[derive(Debug)]
pub struct Dragonball {
inner: Arc<RwLock<DragonballInner>>,
}

const DRAGONBALL_VERSION: &str = env!("CARGO_PKG_VERSION");
const REQUEST_RETRY: u32 = 500;
const KVM_DEVICE: &str = "/dev/kvm";
#[derive(Debug)]
pub struct VmmInstance {
/// VMM instance info directly accessible from runtime
vmm_shared_info: Arc<RwLock<InstanceInfo>>,

pub struct VcpuThreadIds {
}
#[async_trait]
pub trait Hypervisor: std::fmt::Debug + Send + Sync {
// vm manager
async fn prepare_vm(&self, id: &str, netns: Option<String>) -> Result<()>;
async fn start_vm(&self, timeout: i32) -> Result<()>;

use kata_types::capabilities::{Capabilities, CapabilityBits};
const VSOCK_SCHEME: &str = "vsock";
const VSOCK_AGENT_CID: u32 = 3;
const VSOCK_AGENT_PORT: u32 = 1024;
#[derive(Debug)]
pub struct QemuInner {
config: HypervisorConfig,
}


@@ -18,6 +18,7 @@ use async_trait::async_trait;
use std::sync::Arc;
use tokio::sync::RwLock;
#[derive(Debug)]
pub struct Qemu {
inner: Arc<RwLock<QemuInner>>,
}


@@ -35,19 +35,16 @@ pub struct ResourceManager {
}
impl ResourceManager {
pub fn new(
pub async fn new(
sid: &str,
agent: Arc<dyn Agent>,
hypervisor: Arc<dyn Hypervisor>,
toml_config: Arc<TomlConfig>,
) -> Result<Self> {
Ok(Self {
inner: Arc::new(RwLock::new(ResourceManagerInner::new(
sid,
agent,
hypervisor,
toml_config,
)?)),
inner: Arc::new(RwLock::new(
ResourceManagerInner::new(sid, agent, hypervisor, toml_config).await?,
)),
})
}


@@ -12,9 +12,10 @@ use async_trait::async_trait;
use hypervisor::{
device::{
device_manager::{do_handle_device, DeviceManager},
util::{get_host_path, DEVICE_TYPE_CHAR},
DeviceConfig, DeviceType,
},
BlockConfig, Hypervisor,
BlockConfig, Hypervisor, VfioConfig,
};
use kata_types::config::TomlConfig;
use kata_types::mount::Mount;
@@ -50,15 +51,16 @@ pub(crate) struct ResourceManagerInner {
}
impl ResourceManagerInner {
pub(crate) fn new(
pub(crate) async fn new(
sid: &str,
agent: Arc<dyn Agent>,
hypervisor: Arc<dyn Hypervisor>,
toml_config: Arc<TomlConfig>,
) -> Result<Self> {
// create device manager
let dev_manager =
DeviceManager::new(hypervisor.clone()).context("failed to create device manager")?;
let dev_manager = DeviceManager::new(hypervisor.clone())
.await
.context("failed to create device manager")?;
let cgroups_resource = CgroupsResource::new(sid, &toml_config)?;
let cpu_resource = CpuResource::new(toml_config.clone())?;
@@ -140,10 +142,11 @@ impl ResourceManagerInner {
// The solution is to block on the future from the current thread: spawn an OS thread,
// create a tokio runtime on it, and block on the task there.
let hypervisor = self.hypervisor.clone();
let device_manager = self.device_manager.clone();
let network = thread::spawn(move || -> Result<Arc<dyn Network>> {
let rt = runtime::Builder::new_current_thread().enable_io().build()?;
let d = rt
.block_on(network::new(&network_config))
.block_on(network::new(&network_config, device_manager))
.context("new network")?;
rt.block_on(d.setup(hypervisor.as_ref()))
.context("setup network")?;
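The comment at the top of this hunk describes the pattern used here: block on an async task from a synchronous context by spawning a dedicated OS thread that owns its own runtime. A minimal std-only sketch of that pattern (a hand-rolled `block_on` stands in for the tokio current-thread runtime; all names here are illustrative, not from the runtime):

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Minimal block_on: polls the future on this thread, parking until woken.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    // Mirror the pattern above: a dedicated OS thread drives the future to
    // completion, so the caller never blocks an async executor thread.
    let result = thread::spawn(|| block_on(async { 40 + 2 }))
        .join()
        .expect("worker thread panicked");
    assert_eq!(result, 42);
    println!("{result}");
}
```

Only the dedicated thread parks while the future is pending; the caller's executor threads stay free, which is the point of the workaround in the hunk.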
@@ -277,14 +280,15 @@ impl ResourceManagerInner {
..Default::default()
});
let device_info = do_handle_device(&self.device_manager, &dev_info)
let device_info = do_handle_device(&self.device_manager.clone(), &dev_info)
.await
.context("do handle device")?;
// create agent device
// create block device for kata agent,
// if the driver is virtio-blk-pci, the id will be the PCI address.
if let DeviceType::Block(device) = device_info {
let agent_device = Device {
id: device.device_id.clone(),
id: device.config.virt_path.clone(),
container_path: d.path.clone(),
field_type: device.config.driver_option,
vm_path: device.config.virt_path,
@@ -293,6 +297,45 @@ impl ResourceManagerInner {
devices.push(agent_device);
}
}
"c" => {
let host_path = get_host_path(DEVICE_TYPE_CHAR, d.major, d.minor)
.context("get host path failed")?;
// First of all, filter vfio devices.
if !host_path.starts_with("/dev/vfio") {
continue;
}
let dev_info = DeviceConfig::VfioCfg(VfioConfig {
host_path,
dev_type: "c".to_string(),
hostdev_prefix: "vfio_device".to_owned(),
..Default::default()
});
let device_info = do_handle_device(&self.device_manager.clone(), &dev_info)
.await
.context("do handle device")?;
// vfio mode: vfio-pci and vfio-pci-gk for x86_64
// - vfio-pci, devices appear as VFIO character devices under /dev/vfio in container.
// - vfio-pci-gk, devices are managed by the appropriate driver in the guest kernel.
let vfio_mode = match self.toml_config.runtime.vfio_mode.as_str() {
"vfio" => "vfio-pci".to_string(),
_ => "vfio-pci-gk".to_string(),
};
// create agent device
if let DeviceType::Vfio(device) = device_info {
let agent_device = Device {
id: device.device_id, // just for kata-agent
container_path: d.path.clone(),
field_type: vfio_mode,
options: device.device_options,
..Default::default()
};
devices.push(agent_device);
}
}
_ => {
// TODO enable other devices type
continue;
@@ -432,7 +475,9 @@ impl Persist for ResourceManagerInner {
sid: resource_args.sid,
agent: resource_args.agent,
hypervisor: resource_args.hypervisor.clone(),
device_manager: Arc::new(RwLock::new(DeviceManager::new(resource_args.hypervisor)?)),
device_manager: Arc::new(RwLock::new(
DeviceManager::new(resource_args.hypervisor).await?,
)),
network: None,
share_fs: None,
rootfs_resource: RootFsResource::new(),
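As a side note, the `vfio_mode` mapping earlier in this file's diff can be sketched stand-alone (the helper name is hypothetical; the two driver strings come from the hunk above):

```rust
// "vfio" exposes the device to the container as a VFIO character device
// under /dev/vfio (vfio-pci); any other value leaves the device to the
// guest kernel driver (vfio-pci-gk).
fn vfio_driver_for(mode: &str) -> &'static str {
    match mode {
        "vfio" => "vfio-pci",
        _ => "vfio-pci-gk",
    }
}

fn main() {
    assert_eq!(vfio_driver_for("vfio"), "vfio-pci");
    assert_eq!(vfio_driver_for("guest-kernel"), "vfio-pci-gk");
    println!("ok");
}
```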


@@ -5,12 +5,15 @@
//
use std::path::Path;
use std::sync::Arc;
use anyhow::{anyhow, Context, Result};
use async_trait::async_trait;
use hypervisor::device::DeviceType;
use hypervisor::device::device_manager::{do_handle_device, DeviceManager};
use hypervisor::device::DeviceConfig;
use hypervisor::{device::driver, Hypervisor};
use hypervisor::{VfioConfig, VfioDevice};
use hypervisor::{get_vfio_device, VfioConfig};
use tokio::sync::RwLock;
use super::endpoint_persist::{EndpointState, PhysicalEndpointState};
use super::Endpoint;
@@ -50,10 +53,11 @@ pub struct PhysicalEndpoint {
bdf: String,
driver: String,
vendor_device_id: VendorDevice,
d: Arc<RwLock<DeviceManager>>,
}
impl PhysicalEndpoint {
pub fn new(name: &str, hardware_addr: &[u8]) -> Result<Self> {
pub fn new(name: &str, hardware_addr: &[u8], d: Arc<RwLock<DeviceManager>>) -> Result<Self> {
let driver_info = link::get_driver_info(name).context("get driver info")?;
let bdf = driver_info.bus_info;
let sys_pci_devices_path = Path::new(SYS_PCI_DEVICES_PATH);
@@ -80,6 +84,7 @@ impl PhysicalEndpoint {
.context("new vendor device")?,
driver,
bdf,
d,
})
}
}
@@ -94,7 +99,7 @@ impl Endpoint for PhysicalEndpoint {
self.hard_addr.clone()
}
async fn attach(&self, hypervisor: &dyn Hypervisor) -> Result<()> {
async fn attach(&self, _hypervisor: &dyn Hypervisor) -> Result<()> {
// bind physical interface from host driver and bind to vfio
driver::bind_device_to_vfio(
&self.bdf,
@@ -103,23 +108,19 @@ impl Endpoint for PhysicalEndpoint {
)
.with_context(|| format!("bind physical endpoint from {} to vfio", &self.driver))?;
// set vfio's bus type, pci or mmio. Mostly use pci by default.
let mode = match self.driver.as_str() {
"virtio-pci" => "mmio",
_ => "pci",
let vfio_device = get_vfio_device(self.bdf.clone()).context("get vfio device failed.")?;
let vfio_dev_config = &mut VfioConfig {
host_path: vfio_device.clone(),
dev_type: "pci".to_string(),
hostdev_prefix: "physical_nic_".to_owned(),
..Default::default()
};
// add vfio device
let d = DeviceType::Vfio(VfioDevice {
id: format!("physical_nic_{}", self.name().await),
config: VfioConfig {
sysfs_path: "".to_string(),
bus_slot_func: self.bdf.clone(),
mode: driver::VfioBusMode::new(mode)
.with_context(|| format!("new vfio bus mode {:?}", mode))?,
},
});
hypervisor.add_device(d).await.context("add device")?;
// create and insert VFIO device into Kata VM
do_handle_device(&self.d, &DeviceConfig::VfioCfg(vfio_dev_config.clone()))
.await
.context("do handle device failed.")?;
Ok(())
}


@@ -5,6 +5,8 @@
//
mod endpoint;
use std::sync::Arc;
pub use endpoint::endpoint_persist::EndpointState;
pub use endpoint::Endpoint;
mod network_entity;
@@ -20,11 +22,11 @@ use network_pair::NetworkPair;
mod utils;
pub use utils::netns::{generate_netns_name, NetnsGuard};
use std::sync::Arc;
use tokio::sync::RwLock;
use anyhow::{Context, Result};
use async_trait::async_trait;
use hypervisor::Hypervisor;
use hypervisor::{device::device_manager::DeviceManager, Hypervisor};
#[derive(Debug)]
pub enum NetworkConfig {
@@ -41,10 +43,13 @@ pub trait Network: Send + Sync {
async fn remove(&self, h: &dyn Hypervisor) -> Result<()>;
}
pub async fn new(config: &NetworkConfig) -> Result<Arc<dyn Network>> {
pub async fn new(
config: &NetworkConfig,
d: Arc<RwLock<DeviceManager>>,
) -> Result<Arc<dyn Network>> {
match config {
NetworkConfig::NetworkResourceWithNetNs(c) => Ok(Arc::new(
NetworkWithNetns::new(c)
NetworkWithNetns::new(c, d)
.await
.context("new network with netns")?,
)),


@@ -16,7 +16,7 @@ use super::endpoint::endpoint_persist::EndpointState;
use anyhow::{anyhow, Context, Result};
use async_trait::async_trait;
use futures::stream::TryStreamExt;
use hypervisor::Hypervisor;
use hypervisor::{device::device_manager::DeviceManager, Hypervisor};
use netns_rs::get_from_path;
use scopeguard::defer;
use tokio::sync::RwLock;
@@ -47,13 +47,13 @@ struct NetworkWithNetnsInner {
}
impl NetworkWithNetnsInner {
async fn new(config: &NetworkWithNetNsConfig) -> Result<Self> {
async fn new(config: &NetworkWithNetNsConfig, d: Arc<RwLock<DeviceManager>>) -> Result<Self> {
let entity_list = if config.netns_path.is_empty() {
warn!(sl!(), "skip to scan for empty netns");
vec![]
} else {
// get endpoint
get_entity_from_netns(config)
get_entity_from_netns(config, d)
.await
.context("get entity from netns")?
};
@@ -70,9 +70,12 @@ pub(crate) struct NetworkWithNetns {
}
impl NetworkWithNetns {
pub(crate) async fn new(config: &NetworkWithNetNsConfig) -> Result<Self> {
pub(crate) async fn new(
config: &NetworkWithNetNsConfig,
d: Arc<RwLock<DeviceManager>>,
) -> Result<Self> {
Ok(Self {
inner: Arc::new(RwLock::new(NetworkWithNetnsInner::new(config).await?)),
inner: Arc::new(RwLock::new(NetworkWithNetnsInner::new(config, d).await?)),
})
}
}
@@ -149,10 +152,13 @@ impl Network for NetworkWithNetns {
}
}
async fn get_entity_from_netns(config: &NetworkWithNetNsConfig) -> Result<Vec<NetworkEntity>> {
async fn get_entity_from_netns(
config: &NetworkWithNetNsConfig,
d: Arc<RwLock<DeviceManager>>,
) -> Result<Vec<NetworkEntity>> {
info!(
sl!(),
"get network entity for config {:?} tid {:?}",
"get network entity from config {:?} tid {:?}",
config,
nix::unistd::gettid()
);
@@ -178,9 +184,10 @@ async fn get_entity_from_netns(config: &NetworkWithNetNsConfig) -> Result<Vec<Ne
}
let idx = idx.fetch_add(1, Ordering::Relaxed);
let (endpoint, network_info) = create_endpoint(&handle, link.as_ref(), idx, config)
.await
.context("create endpoint")?;
let (endpoint, network_info) =
create_endpoint(&handle, link.as_ref(), idx, config, d.clone())
.await
.context("create endpoint")?;
entity_list.push(NetworkEntity::new(endpoint, network_info));
}
@@ -193,6 +200,7 @@ async fn create_endpoint(
link: &dyn link::Link,
idx: u32,
config: &NetworkWithNetNsConfig,
d: Arc<RwLock<DeviceManager>>,
) -> Result<(Arc<dyn Endpoint>, Arc<dyn NetworkInfo>)> {
let _netns_guard = netns::NetnsGuard::new(&config.netns_path)
.context("net netns guard")
@@ -206,7 +214,7 @@ async fn create_endpoint(
&attrs.name,
nix::unistd::gettid()
);
let t = PhysicalEndpoint::new(&attrs.name, &attrs.hardware_addr)
let t = PhysicalEndpoint::new(&attrs.name, &attrs.hardware_addr, d)
.context("new physical endpoint")?;
Arc::new(t)
} else {


@@ -11,8 +11,8 @@ use tokio::sync::RwLock;
use super::Volume;
use crate::volume::utils::{
generate_shared_path, volume_mount_info, DEFAULT_VOLUME_FS_TYPE, KATA_DIRECT_VOLUME_TYPE,
KATA_MOUNT_BIND_TYPE,
generate_shared_path, get_direct_volume_path, volume_mount_info, DEFAULT_VOLUME_FS_TYPE,
KATA_DIRECT_VOLUME_TYPE, KATA_MOUNT_BIND_TYPE,
};
use hypervisor::{
device::{
@@ -182,8 +182,14 @@ pub(crate) fn is_block_volume(m: &oci::Mount) -> Result<bool> {
return Ok(false);
}
let source = if m.r#type.as_str() == KATA_DIRECT_VOLUME_TYPE {
get_direct_volume_path(&m.source).context("get direct volume path failed")?
} else {
m.source.clone()
};
let fstat =
stat::stat(m.source.as_str()).context(format!("stat mount source {} failed.", m.source))?;
stat::stat(source.as_str()).context(format!("stat mount source {} failed.", source))?;
let s_flag = SFlag::from_bits_truncate(fstat.st_mode);
match m.r#type.as_str() {
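The stat-based check in `is_block_volume` above can be sketched with the standard library alone (the runtime uses `nix::sys::stat` and `SFlag`; the helper name here is illustrative):

```rust
use std::os::unix::fs::FileTypeExt;

// Illustrative helper: stat the mount source and report whether it is a
// block device, as is_block_volume does above via SFlag on st_mode.
fn is_block_device(path: &str) -> std::io::Result<bool> {
    Ok(std::fs::metadata(path)?.file_type().is_block_device())
}

fn main() {
    // A regular file is not a block device.
    let f = std::env::temp_dir().join("not-a-block-dev");
    std::fs::write(&f, b"x").expect("write");
    assert!(!is_block_device(f.to_str().unwrap()).expect("stat"));
    // A missing path surfaces the stat error instead of a bool,
    // matching the .context("stat mount source ... failed.") above.
    assert!(is_block_device("/no/such/path").is_err());
    println!("ok");
}
```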


@@ -11,6 +11,12 @@ mod share_fs_volume;
mod shm_volume;
pub mod utils;
pub mod vfio_volume;
use vfio_volume::is_vfio_volume;
pub mod spdk_volume;
use spdk_volume::is_spdk_volume;
use std::{sync::Arc, vec::Vec};
use anyhow::{Context, Result};
@@ -75,6 +81,18 @@ impl VolumeResource {
.await
.with_context(|| format!("new share fs volume {:?}", m))?,
)
} else if is_vfio_volume(m) {
Arc::new(
vfio_volume::VfioVolume::new(d, m, read_only, cid, sid)
.await
.with_context(|| format!("new vfio volume {:?}", m))?,
)
} else if is_spdk_volume(m) {
Arc::new(
spdk_volume::SPDKVolume::new(d, m, read_only, cid, sid)
.await
.with_context(|| format!("create spdk volume {:?}", m))?,
)
} else if let Some(options) =
get_huge_page_option(m).context("failed to check huge page")?
{


@@ -0,0 +1,189 @@
// Copyright (c) 2023 Alibaba Cloud
// Copyright (c) 2023 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
use anyhow::{anyhow, Context, Result};
use async_trait::async_trait;
use nix::sys::{stat, stat::SFlag};
use tokio::sync::RwLock;
use super::Volume;
use crate::volume::utils::{
generate_shared_path, volume_mount_info, DEFAULT_VOLUME_FS_TYPE, KATA_SPDK_VOLUME_TYPE,
KATA_SPOOL_VOLUME_TYPE,
};
use hypervisor::{
device::{
device_manager::{do_handle_device, DeviceManager},
DeviceConfig, DeviceType,
},
VhostUserConfig, VhostUserType,
};
/// SPDKVolume: spdk block device volume
#[derive(Clone)]
pub(crate) struct SPDKVolume {
storage: Option<agent::Storage>,
mount: oci::Mount,
device_id: String,
}
impl SPDKVolume {
pub(crate) async fn new(
d: &RwLock<DeviceManager>,
m: &oci::Mount,
read_only: bool,
cid: &str,
sid: &str,
) -> Result<Self> {
let mnt_src: &str = &m.source;
// deserialize information from mountinfo.json
let v = volume_mount_info(mnt_src).context("deserialize information from mountinfo.json")?;
let device = match v.volume_type.as_str() {
KATA_SPDK_VOLUME_TYPE => {
if v.device.starts_with("spdk://") {
v.device.clone()
} else {
format!("spdk://{}", v.device.as_str())
}
}
KATA_SPOOL_VOLUME_TYPE => {
if v.device.starts_with("spool://") {
v.device.clone()
} else {
format!("spool://{}", v.device.as_str())
}
}
_ => return Err(anyhow!("mountinfo.json is invalid")),
};
// device format: X:///x/y/z.sock, so splitting on "://" cannot fail.
// if the file is not S_IFSOCK, return an error.
{
// device tokens: (Type, Socket)
let device_tokens = device.split_once("://").unwrap();
let fstat = stat::stat(device_tokens.1).context("stat socket failed")?;
let s_flag = SFlag::from_bits_truncate(fstat.st_mode);
if s_flag != SFlag::S_IFSOCK {
return Err(anyhow!("device {:?} is not valid", device));
}
}
let mut vhu_blk_config = &mut VhostUserConfig {
socket_path: device,
device_type: VhostUserType::Blk("vhost-user-blk-pci".to_owned()),
..Default::default()
};
if let Some(num) = v.metadata.get("num_queues") {
vhu_blk_config.num_queues = num
.parse::<usize>()
.context("num queues parse usize failed.")?;
}
if let Some(size) = v.metadata.get("queue_size") {
vhu_blk_config.queue_size = size
.parse::<u32>()
.context("queue size parse u32 failed.")?;
}
// create and insert block device into Kata VM
let device_info =
do_handle_device(d, &DeviceConfig::VhostUserBlkCfg(vhu_blk_config.clone()))
.await
.context("do handle device failed.")?;
// generate host guest shared path
let guest_path = generate_shared_path(m.destination.clone(), read_only, cid, sid)
.await
.context("generate host-guest shared path failed")?;
// storage
let mut storage = agent::Storage {
mount_point: guest_path.clone(),
..Default::default()
};
storage.options = if read_only {
vec!["ro".to_string()]
} else {
Vec::new()
};
let mut device_id = String::new();
if let DeviceType::VhostUserBlk(device) = device_info {
// blk, mmioblk
storage.driver = device.config.driver_option;
// /dev/vdX
storage.source = device.config.virt_path;
device_id = device.device_id;
}
if m.r#type != "bind" {
storage.fs_type = v.fs_type.clone();
} else {
storage.fs_type = DEFAULT_VOLUME_FS_TYPE.to_string();
}
if m.destination.clone().starts_with("/dev") {
storage.fs_type = "bind".to_string();
storage.options.append(&mut m.options.clone());
}
storage.fs_group = None;
let mount = oci::Mount {
destination: m.destination.clone(),
r#type: storage.fs_type.clone(),
source: guest_path,
options: m.options.clone(),
};
Ok(Self {
storage: Some(storage),
mount,
device_id,
})
}
}
#[async_trait]
impl Volume for SPDKVolume {
fn get_volume_mount(&self) -> Result<Vec<oci::Mount>> {
Ok(vec![self.mount.clone()])
}
fn get_storage(&self) -> Result<Vec<agent::Storage>> {
let s = if let Some(s) = self.storage.as_ref() {
vec![s.clone()]
} else {
vec![]
};
Ok(s)
}
async fn cleanup(&self, device_manager: &RwLock<DeviceManager>) -> Result<()> {
device_manager
.write()
.await
.try_remove_device(&self.device_id)
.await
}
fn get_device_id(&self) -> Result<Option<String>> {
Ok(Some(self.device_id.clone()))
}
}
pub(crate) fn is_spdk_volume(m: &oci::Mount) -> bool {
// spdkvol or spoolvol will share the same implementation
let vol_types = vec![KATA_SPDK_VOLUME_TYPE, KATA_SPOOL_VOLUME_TYPE];
if vol_types.contains(&m.r#type.as_str()) {
return true;
}
false
}
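The `S_IFSOCK` validation in `SPDKVolume::new` above can be sketched stand-alone (hypothetical helper name; std-only, using `FileTypeExt` where the runtime uses `nix`):

```rust
use std::os::unix::fs::FileTypeExt;
use std::os::unix::net::UnixListener;

// Illustrative helper mirroring the validation above: split the
// "scheme://path" device string and verify the path is a unix socket.
fn validate_socket_device(device: &str) -> Result<(), String> {
    let (_scheme, path) = device
        .split_once("://")
        .ok_or_else(|| format!("device {:?} has no scheme", device))?;
    let meta = std::fs::metadata(path).map_err(|e| format!("stat {} failed: {}", path, e))?;
    if !meta.file_type().is_socket() {
        return Err(format!("device {:?} is not a valid socket", device));
    }
    Ok(())
}

fn main() {
    let sock = std::env::temp_dir().join("spdk-demo.sock");
    let _ = std::fs::remove_file(&sock);
    // Bind a real unix socket so the happy path can be exercised.
    let _listener = UnixListener::bind(&sock).expect("bind unix socket");
    assert!(validate_socket_device(&format!("spdk://{}", sock.display())).is_ok());
    assert!(validate_socket_device("spdk:///definitely/missing.sock").is_err());
    let _ = std::fs::remove_file(&sock);
    println!("ok");
}
```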


@@ -13,19 +13,30 @@ use crate::{
volume::share_fs_volume::generate_mount_path,
};
use kata_sys_util::eother;
use kata_types::mount::{get_volume_mount_info, DirectVolumeMountInfo};
use kata_types::mount::{
get_volume_mount_info, join_path, DirectVolumeMountInfo, KATA_DIRECT_VOLUME_ROOT_PATH,
};
pub const DEFAULT_VOLUME_FS_TYPE: &str = "ext4";
pub const KATA_MOUNT_BIND_TYPE: &str = "bind";
pub const KATA_DIRECT_VOLUME_TYPE: &str = "directvol";
pub const KATA_VFIO_VOLUME_TYPE: &str = "vfiovol";
pub const KATA_SPDK_VOLUME_TYPE: &str = "spdkvol";
pub const KATA_SPOOL_VOLUME_TYPE: &str = "spoolvol";
// volume_mount_info loads information from mountinfo.json
pub fn volume_mount_info(volume_path: &str) -> Result<DirectVolumeMountInfo> {
get_volume_mount_info(volume_path)
}
// get the direct volume path; its volume_path component is base64-encoded
pub fn get_direct_volume_path(volume_path: &str) -> Result<String> {
let volume_full_path =
join_path(KATA_DIRECT_VOLUME_ROOT_PATH, volume_path).context("failed to join path.")?;
Ok(volume_full_path.display().to_string())
}
pub fn get_file_name<P: AsRef<Path>>(src: P) -> Result<String> {
let file_name = src
.as_ref()


@@ -0,0 +1,141 @@
// Copyright (c) 2023 Alibaba Cloud
// Copyright (c) 2023 Ant Group
//
// SPDX-License-Identifier: Apache-2.0
//
use anyhow::{anyhow, Context, Result};
use async_trait::async_trait;
use tokio::sync::RwLock;
use super::Volume;
use crate::volume::utils::{
generate_shared_path, volume_mount_info, DEFAULT_VOLUME_FS_TYPE, KATA_VFIO_VOLUME_TYPE,
};
use hypervisor::{
device::{
device_manager::{do_handle_device, DeviceManager},
DeviceConfig, DeviceType,
},
get_vfio_device, VfioConfig,
};
pub(crate) struct VfioVolume {
storage: Option<agent::Storage>,
mount: oci::Mount,
device_id: String,
}
// VfioVolume: vfio device based block volume
impl VfioVolume {
pub(crate) async fn new(
d: &RwLock<DeviceManager>,
m: &oci::Mount,
read_only: bool,
cid: &str,
sid: &str,
) -> Result<Self> {
let mnt_src: &str = &m.source;
// deserialize information from mountinfo.json
let v = volume_mount_info(mnt_src).context("deserialize information from mountinfo.json")?;
if v.volume_type != KATA_VFIO_VOLUME_TYPE {
return Err(anyhow!("volume type is invalid"));
}
// supports /dev/vfio/X as well as BDF<DDDD:BB:DD.F> or BDF<BB:DD.F>
let vfio_device = get_vfio_device(v.device).context("get vfio device failed.")?;
let vfio_dev_config = &mut VfioConfig {
host_path: vfio_device.clone(),
dev_type: "b".to_string(),
hostdev_prefix: "vfio_vol".to_owned(),
..Default::default()
};
// create and insert block device into Kata VM
let device_info = do_handle_device(d, &DeviceConfig::VfioCfg(vfio_dev_config.clone()))
.await
.context("do handle device failed.")?;
// generate host guest shared path
let guest_path = generate_shared_path(m.destination.clone(), read_only, cid, sid)
.await
.context("generate host-guest shared path failed")?;
let storage_options = if read_only {
vec!["ro".to_string()]
} else {
Vec::new()
};
let mut storage = agent::Storage {
options: storage_options,
mount_point: guest_path.clone(),
..Default::default()
};
let mut device_id = String::new();
if let DeviceType::Vfio(device) = device_info {
device_id = device.device_id;
storage.driver = device.driver_type;
// safe to unwrap here: device_info is known to be correct.
storage.source = device.config.virt_path.unwrap().1;
}
if m.r#type != "bind" {
storage.fs_type = v.fs_type.clone();
} else {
storage.fs_type = DEFAULT_VOLUME_FS_TYPE.to_string();
}
let mount = oci::Mount {
destination: m.destination.clone(),
r#type: v.fs_type,
source: guest_path,
options: m.options.clone(),
};
Ok(Self {
storage: Some(storage),
mount,
device_id,
})
}
}
#[async_trait]
impl Volume for VfioVolume {
fn get_volume_mount(&self) -> Result<Vec<oci::Mount>> {
Ok(vec![self.mount.clone()])
}
fn get_storage(&self) -> Result<Vec<agent::Storage>> {
let s = if let Some(s) = self.storage.as_ref() {
vec![s.clone()]
} else {
vec![]
};
Ok(s)
}
async fn cleanup(&self, device_manager: &RwLock<DeviceManager>) -> Result<()> {
device_manager
.write()
.await
.try_remove_device(&self.device_id)
.await
}
fn get_device_id(&self) -> Result<Option<String>> {
Ok(Some(self.device_id.clone()))
}
}
pub(crate) fn is_vfio_volume(m: &oci::Mount) -> bool {
if m.r#type == KATA_VFIO_VOLUME_TYPE {
return true;
}
false
}


@@ -74,12 +74,8 @@ impl RuntimeHandler for VirtContainer {
// get uds from hypervisor and get config from toml_config
let agent = new_agent(&config).context("new agent")?;
let resource_manager = Arc::new(ResourceManager::new(
sid,
agent.clone(),
hypervisor.clone(),
config,
)?);
let resource_manager =
Arc::new(ResourceManager::new(sid, agent.clone(), hypervisor.clone(), config).await?);
let pid = std::process::id();
let sandbox = sandbox::VirtSandbox::new(

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Binary image changed (104 KiB before and after).


@@ -347,7 +347,7 @@ ifneq (,$(QEMUCMD))
CONFIG_PATHS += $(CONFIG_PATH_QEMU_SEV)
SYSCONFIG_QEMU_SEV = $(abspath $(SYSCONFDIR)/$(CONFIG_FILE_QEMU_SEV))
SYSCONFIG_PATHS += $(SYSCONFIG_QEMU_SEV)
SYSCONFIG_PATHS_SEV += $(SYSCONFIG_QEMU_SEV)
CONFIGS += $(CONFIG_QEMU_SEV)
@@ -371,7 +371,7 @@ ifneq (,$(QEMUCMD))
CONFIG_PATHS += $(CONFIG_PATH_QEMU_SNP)
SYSCONFIG_QEMU_SNP = $(abspath $(SYSCONFDIR)/$(CONFIG_FILE_QEMU_SNP))
SYSCONFIG_PATHS += $(SYSCONFIG_QEMU_SNP)
SYSCONFIG_PATHS_SNP += $(SYSCONFIG_QEMU_SNP)
CONFIGS += $(CONFIG_QEMU_SNP)


@@ -80,6 +80,11 @@ mod arch_specific {
Some(CHECK_LIST)
}
pub fn host_is_vmcontainer_capable() -> Result<bool> {
// TODO: Not implemented
Ok(true)
}
#[allow(dead_code)]
// Guest protection is not supported on ARM64.
pub fn available_guest_protection() -> Result<check::GuestProtection, check::ProtectionError> {


@@ -33,6 +33,11 @@ mod arch_specific {
// to the goloang implementation of function getCPUDetails()
}
pub fn host_is_vmcontainer_capable() -> Result<bool> {
// TODO: Not implemented
Ok(true)
}
pub fn available_guest_protection() -> Result<check::GuestProtection, check::ProtectionError> {
if !Uid::effective().is_root() {
return Err(check::ProtectionError::NoPerms);


@@ -78,6 +78,21 @@ mod arch_specific {
Some(CHECK_LIST)
}
pub fn host_is_vmcontainer_capable() -> Result<bool> {
let mut count = 0;
if check_cpu().is_err() {
count += 1;
};
// TODO: Add additional checks for kernel modules
if count == 0 {
return Ok(true);
};
Err(anyhow!("System is not capable of running a VM"))
}
#[allow(dead_code)]
fn retrieve_cpu_facilities() -> Result<HashMap<i32, bool>> {
let f = std::fs::File::open(check::PROC_CPUINFO)?;


@@ -59,17 +59,29 @@ mod arch_specific {
static MODULE_LIST: &[KernelModule] = &[
KernelModule {
name: "kvm",
parameter: KernelParam {
params: &[KernelParam {
name: "kvmclock_periodic_sync",
value: KernelParamType::Simple("Y"),
},
}],
},
KernelModule {
name: "kvm_intel",
parameter: KernelParam {
params: &[KernelParam {
name: "unrestricted_guest",
value: KernelParamType::Predicate(unrestricted_guest_param_check),
},
}],
},
KernelModule {
name: "vhost",
params: &[],
},
KernelModule {
name: "vhost_net",
params: &[],
},
KernelModule {
name: "vhost_vsock",
params: &[],
},
];
@@ -226,13 +238,9 @@ mod arch_specific {
let running_on_vmm_alt = running_on_vmm()?;
// Kernel param "unrestricted_guest" is not required when running under a hypervisor
if running_on_vmm_alt {
let msg = format!("You are running in a VM, where the kernel module '{}' parameter '{:}' has a value '{:}'. This causes conflict when running kata.",
module,
param_name,
param_value_host
);
return Err(anyhow!(msg));
return Ok(());
}
if param_value_host == expected_param_value.to_string() {
@@ -253,6 +261,38 @@ mod arch_specific {
}
}
fn check_kernel_params(kernel_module: &KernelModule) -> Result<()> {
const MODULES_PATH: &str = "/sys/module";
for param in kernel_module.params {
let module_param_path = format!(
"{}/{}/parameters/{}",
MODULES_PATH, kernel_module.name, param.name
);
// Here the currently loaded kernel parameter value
// is retrieved and returned on success
let param_value_host = std::fs::read_to_string(module_param_path)
.map(|val| val.replace('\n', ""))
.map_err(|_err| {
anyhow!(
"'{:}' kernel module parameter `{:}` not found.",
kernel_module.name,
param.name
)
})?;
check_kernel_param(
kernel_module.name,
param.name,
&param_value_host,
param.value.clone(),
)
.map_err(|e| anyhow!(e.to_string()))?;
}
Ok(())
}
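The sysfs lookup in `check_kernel_params` above reads `/sys/module/<module>/parameters/<param>`. A minimal sketch of that lookup, run against a fake sysfs tree so it is self-contained (the helper name and root parameter are illustrative; on a real host the root is `/sys/module`):

```rust
use std::fs;
use std::path::Path;

// Illustrative helper: a loaded kernel module exposes its parameters as
// files under <root>/<module>/parameters/<param>; read one and strip the
// trailing newline, as check_kernel_params does above.
fn read_module_param(root: &str, module: &str, param: &str) -> Result<String, String> {
    let path = Path::new(root).join(module).join("parameters").join(param);
    fs::read_to_string(&path)
        .map(|v| v.replace('\n', ""))
        .map_err(|_| format!("'{}' kernel module parameter `{}` not found.", module, param))
}

fn main() {
    // Build a fake /sys/module tree under a temp dir.
    let root = std::env::temp_dir().join("fake-sys-module");
    let params = root.join("kvm").join("parameters");
    fs::create_dir_all(&params).expect("mkdir");
    fs::write(params.join("kvmclock_periodic_sync"), "Y\n").expect("write");
    let root = root.to_str().expect("utf-8 path");
    assert_eq!(
        read_module_param(root, "kvm", "kvmclock_periodic_sync").expect("read"),
        "Y"
    );
    assert!(read_module_param(root, "kvm", "missing").is_err());
    println!("ok");
}
```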
fn check_kernel_param(
module: &str,
param_name: &str,
@@ -282,19 +322,12 @@ mod arch_specific {
info!(sl!(), "check kernel modules for: x86_64");
for module in MODULE_LIST {
let module_loaded =
check::check_kernel_module_loaded(module.name, module.parameter.name);
let module_loaded = check::check_kernel_module_loaded(module);
match module_loaded {
Ok(param_value_host) => {
let parameter_check = check_kernel_param(
module.name,
module.parameter.name,
&param_value_host,
module.parameter.value.clone(),
);
match parameter_check {
Ok(_) => {
let check = check_kernel_params(module);
match check {
Ok(_v) => info!(sl!(), "{} Ok", module.name),
Err(e) => return Err(e),
}
@@ -306,6 +339,23 @@ mod arch_specific {
}
Ok(())
}
pub fn host_is_vmcontainer_capable() -> Result<bool> {
let mut count = 0;
if check_cpu("check_cpu").is_err() {
count += 1;
};
if check_kernel_modules("check_modules").is_err() {
count += 1;
};
if count == 0 {
return Ok(true);
};
Err(anyhow!("System is not capable of running a VM"))
}
}
#[cfg(target_arch = "x86_64")]


@@ -5,6 +5,9 @@
// Contains checks that are not architecture-specific
#[cfg(any(target_arch = "x86_64"))]
use crate::types::KernelModule;
use anyhow::{anyhow, Result};
use nix::fcntl::{open, OFlag};
use nix::sys::stat::Mode;
@@ -324,17 +327,16 @@ pub fn check_official_releases() -> Result<()> {
}
#[cfg(any(target_arch = "x86_64"))]
pub fn check_kernel_module_loaded(module: &str, parameter: &str) -> Result<String, String> {
pub fn check_kernel_module_loaded(kernel_module: &KernelModule) -> Result<(), String> {
const MODPROBE_PARAMETERS_DRY_RUN: &str = "--dry-run";
const MODPROBE_PARAMETERS_FIRST_TIME: &str = "--first-time";
const MODULES_PATH: &str = "/sys/module";
let status_modinfo_success;
// Partial check w/ modinfo
// verifies that the module exists
match Command::new(MODINFO_PATH)
.arg(module)
.arg(kernel_module.name)
.stdout(Stdio::piped())
.output()
{
@@ -361,7 +363,7 @@ pub fn check_kernel_module_loaded(module: &str, parameter: &str) -> Result<Strin
match Command::new(MODPROBE_PATH)
.arg(MODPROBE_PARAMETERS_DRY_RUN)
.arg(MODPROBE_PARAMETERS_FIRST_TIME)
.arg(module)
.arg(kernel_module.name)
.stdout(Stdio::piped())
.output()
{
@@ -371,8 +373,8 @@ pub fn check_kernel_module_loaded(module: &str, parameter: &str) -> Result<Strin
if status_modprobe_success && status_modinfo_success {
// This condition is true in the case that the module exist, but is not already loaded
let msg = format!("The kernel module `{:}` exist but is not already loaded. Try reloading it using 'modprobe {:}=Y'",
module, module
let msg = format!("The kernel module `{:}` exists but is not loaded. Try loading it using 'modprobe {:}'",
kernel_module.name, kernel_module.name
);
return Err(msg);
}
@@ -386,27 +388,15 @@ pub fn check_kernel_module_loaded(module: &str, parameter: &str) -> Result<Strin
return Err(msg);
}
}
let module_path = format!("{}/{}/parameters/{}", MODULES_PATH, module, parameter);
// Here the currently loaded kernel parameter value
// is retrieved and returned on success
match read_file_contents(&module_path) {
Ok(result) => Ok(result.replace('\n', "")),
Err(_e) => {
let msg = format!(
"'{:}' kernel module parameter `{:}` not found.",
module, parameter
);
Err(msg)
}
}
Ok(())
}
#[cfg(any(target_arch = "s390x", target_arch = "x86_64"))]
#[cfg(test)]
mod tests {
use super::*;
#[cfg(any(target_arch = "x86_64"))]
use crate::types::{KernelModule, KernelParam, KernelParamType};
use semver::Version;
use slog::warn;
use std::fs;
@@ -612,12 +602,13 @@ mod tests {
#[test]
fn check_module_loaded() {
#[allow(dead_code)]
#[derive(Debug)]
struct TestData<'a> {
module_name: &'a str,
param_name: &'a str,
kernel_module: &'a KernelModule<'a>,
param_value: &'a str,
result: Result<String>,
result: Result<()>,
}
let tests = &[
@@ -625,45 +616,58 @@ mod tests {
TestData {
module_name: "",
param_name: "",
kernel_module: &KernelModule {
name: "",
params: &[KernelParam {
name: "",
value: KernelParamType::Simple("Y"),
}],
},
param_value: "",
result: Err(anyhow!("modinfo: ERROR: Module {} not found.", "")),
},
TestData {
module_name: "kvm",
param_name: "",
param_value: "",
result: Err(anyhow!(
"'{:}' kernel module parameter `{:}` not found.",
"kvm",
""
)),
},
// Success scenarios
TestData {
module_name: "kvm",
param_name: "",
kernel_module: &KernelModule {
name: "kvm",
params: &[KernelParam {
name: "nonexistentparam",
value: KernelParamType::Simple("Y"),
}],
},
param_value: "",
result: Ok(()),
},
TestData {
module_name: "kvm",
param_name: "kvmclock_periodic_sync",
kernel_module: &KernelModule {
name: "kvm",
params: &[KernelParam {
name: "kvmclock_periodic_sync",
value: KernelParamType::Simple("Y"),
}],
},
param_value: "Y",
result: Ok("Y".to_string()),
result: Ok(()),
},
];
for (i, d) in tests.iter().enumerate() {
let msg = format!("test[{}]: {:?}", i, d);
let result = check_kernel_module_loaded(d.module_name, d.param_name);
let msg = format!("test[{}]", i);
let result = check_kernel_module_loaded(d.kernel_module);
let msg = format!("{}, result: {:?}", msg, result);
if d.result.is_ok() {
assert_eq!(
result.as_ref().unwrap(),
d.result.as_ref().unwrap(),
"{}",
msg
);
assert_eq!(result, Ok(()));
continue;
}
let expected_error = format!("{}", &d.result.as_ref().unwrap_err());
let actual_error = result.unwrap_err().to_string();
println!("testing for {}", d.module_name);
assert!(actual_error == expected_error, "{}", msg);
}
}


@@ -255,6 +255,12 @@ fn get_host_info() -> Result<HostInfo> {
let guest_protection = guest_protection.to_string();
let mut vm_container_capable = true;
if arch_specific::host_is_vmcontainer_capable().is_err() {
vm_container_capable = false;
}
let support_vsocks = utils::supports_vsocks(utils::VHOST_VSOCK_DEVICE)?;
Ok(HostInfo {
@@ -264,8 +270,7 @@ fn get_host_info() -> Result<HostInfo> {
cpu: host_cpu,
memory: memory_info,
available_guest_protection: guest_protection,
// TODO: See https://github.com/kata-containers/kata-containers/issues/6727
vm_container_capable: true,
vm_container_capable,
support_vsocks,
})
}


@@ -69,5 +69,5 @@ pub struct KernelParam<'a> {
#[allow(dead_code)]
pub struct KernelModule<'a> {
pub name: &'a str,
pub parameter: KernelParam<'a>,
pub params: &'a [KernelParam<'a>],
}


@@ -7,6 +7,22 @@
# This file contains common functions that
# are being used by our metrics and integration tests
# Kata tests directory used for storing various test-related artifacts.
KATA_TESTS_BASEDIR="${KATA_TESTS_BASEDIR:-/var/log/kata-tests}"
# Directory that can be used for storing test logs.
KATA_TESTS_LOGDIR="${KATA_TESTS_LOGDIR:-${KATA_TESTS_BASEDIR}/logs}"
# Directory that can be used for storing test data.
KATA_TESTS_DATADIR="${KATA_TESTS_DATADIR:-${KATA_TESTS_BASEDIR}/data}"
# Directory that can be used for storing cache kata components
KATA_TESTS_CACHEDIR="${KATA_TESTS_CACHEDIR:-${KATA_TESTS_BASEDIR}/cache}"
KATA_HYPERVISOR="${KATA_HYPERVISOR:-qemu}"
RUNTIME="${RUNTIME:-containerd-shim-kata-v2}"
die() {
local msg="$*"
echo -e "[$(basename $0):${BASH_LINENO[0]}] ERROR: $msg" >&2
@@ -45,3 +61,182 @@ waitForProcess() {
done
return 1
}
# Check if the $1 argument is the name of a 'known'
# Kata runtime. Of course, the end user can choose any name they
# want in reality, but this function knows the default and
# recommended Kata docker runtime install names.
is_a_kata_runtime() {
if [ "$1" = "containerd-shim-kata-v2" ] || [ "$1" = "io.containerd.kata.v2" ]; then
echo "1"
else
echo "0"
fi
}
# Gets versions and paths of all the components
# listed in kata-env
extract_kata_env() {
RUNTIME_CONFIG_PATH=$(kata-runtime kata-env --json | jq -r .Runtime.Config.Path)
RUNTIME_VERSION=$(kata-runtime kata-env --json | jq -r .Runtime.Version | grep Semver | cut -d'"' -f4)
RUNTIME_COMMIT=$(kata-runtime kata-env --json | jq -r .Runtime.Version | grep Commit | cut -d'"' -f4)
RUNTIME_PATH=$(kata-runtime kata-env --json | jq -r .Runtime.Path)
# Shimv2 path is being affected by https://github.com/kata-containers/kata-containers/issues/1151
SHIM_PATH=$(readlink $(command -v containerd-shim-kata-v2))
SHIM_VERSION=${RUNTIME_VERSION}
HYPERVISOR_PATH=$(kata-runtime kata-env --json | jq -r .Hypervisor.Path)
# TODO: there is currently no rust version of kata-runtime
if [ "${KATA_HYPERVISOR}" != "dragonball" ]; then
HYPERVISOR_VERSION=$(sudo -E ${HYPERVISOR_PATH} --version | head -n1)
fi
VIRTIOFSD_PATH=$(kata-runtime kata-env --json | jq -r .Hypervisor.VirtioFSDaemon)
INITRD_PATH=$(kata-runtime kata-env --json | jq -r .Initrd.Path)
}
# Checks that processes are not running
check_processes() {
extract_kata_env
# Only check the kata-env if we have managed to find the kata executable...
if [ -x "$RUNTIME_PATH" ]; then
local vsock_configured=$($RUNTIME_PATH kata-env | awk '/UseVSock/ {print $3}')
local vsock_supported=$($RUNTIME_PATH kata-env | awk '/SupportVSock/ {print $3}')
else
local vsock_configured="false"
local vsock_supported="false"
fi
general_processes=( ${HYPERVISOR_PATH} ${SHIM_PATH} )
for i in "${general_processes[@]}"; do
if pgrep -f "$i"; then
die "Found unexpected ${i} present"
fi
done
}
# Clean environment: this function will try to remove all
# stopped/running containers.
clean_env()
{
# If the timeout has not been set, default it to 30s
# Docker has a built in 10s default timeout, so make ours
# longer than that.
KATA_DOCKER_TIMEOUT=${KATA_DOCKER_TIMEOUT:-30}
containers_running=$(sudo timeout ${KATA_DOCKER_TIMEOUT} docker ps -q)
if [ ! -z "$containers_running" ]; then
# First stop all containers that are running
# Use kill, as the containers are generally benign, and most
# of the time our 'stop' request ends up doing a `kill` anyway
sudo timeout ${KATA_DOCKER_TIMEOUT} docker kill $containers_running
# Remove all containers
sudo timeout ${KATA_DOCKER_TIMEOUT} docker rm -f $(docker ps -qa)
fi
}
clean_env_ctr()
{
local count_running="$(sudo ctr c list -q | wc -l)"
local remaining_attempts=10
declare -a running_tasks=()
local count_tasks=0
local sleep_time=1
local time_out=10
[ "$count_running" -eq "0" ] && return 0
readarray -t running_tasks < <(sudo ctr t list -q)
info "Wait until the containers get removed"
for task_id in "${running_tasks[@]}"; do
sudo ctr t kill -a -s SIGTERM ${task_id} >/dev/null 2>&1
sleep 0.5
done
# do not stop if the command fails, it will be evaluated by waitForProcess
local cmd="[[ $(sudo ctr tasks list | grep -c "STOPPED") == "$count_running" ]]" || true
local res="ok"
waitForProcess "${time_out}" "${sleep_time}" "$cmd" || res="fail"
[ "$res" == "ok" ] || sudo systemctl restart containerd
while (( remaining_attempts > 0 )); do
[ "${RUNTIME}" == "runc" ] && sudo ctr tasks rm -f $(sudo ctr task list -q)
sudo ctr c rm $(sudo ctr c list -q) >/dev/null 2>&1
count_running="$(sudo ctr c list -q | wc -l)"
[ "$count_running" -eq 0 ] && break
remaining_attempts=$((remaining_attempts-1))
done
count_tasks="$(sudo ctr t list -q | wc -l)"
if (( count_tasks > 0 )); then
die "Can't remove running containers."
fi
}
# Restarts a systemd service while ensuring the start-limit-burst is set to 0.
# Outputs warnings to stdout if something has gone wrong.
#
# Returns 0 on success, 1 otherwise
restart_systemd_service_with_no_burst_limit() {
local service=$1
info "restart $service service"
local active=$(systemctl show "$service.service" -p ActiveState | cut -d'=' -f2)
[ "$active" == "active" ] || warn "Service $service is not active"
local start_burst=$(systemctl show "$service".service -p StartLimitBurst | cut -d'=' -f2)
if [ "$start_burst" -ne 0 ]
then
local unit_file=$(systemctl show "$service.service" -p FragmentPath | cut -d'=' -f2)
[ -f "$unit_file" ] || { warn "Can't find $service's unit file: $unit_file"; return 1; }
local start_burst_set=$(sudo grep StartLimitBurst $unit_file | wc -l)
if [ "$start_burst_set" -eq 0 ]
then
sudo sed -i '/\[Service\]/a StartLimitBurst=0' "$unit_file"
else
sudo sed -i 's/StartLimitBurst.*$/StartLimitBurst=0/g' "$unit_file"
fi
sudo systemctl daemon-reload
fi
sudo systemctl restart "$service"
local state=$(systemctl show "$service.service" -p SubState | cut -d'=' -f2)
[ "$state" == "running" ] || { warn "Can't restart the $service service"; return 1; }
start_burst=$(systemctl show "$service.service" -p StartLimitBurst | cut -d'=' -f2)
[ "$start_burst" -eq 0 ] || { warn "Can't set start burst limit for $service service"; return 1; }
return 0
}
restart_containerd_service() {
restart_systemd_service_with_no_burst_limit containerd || return 1
local retries=5
local counter=0
until [ "$counter" -ge "$retries" ] || sudo ctr --connect-timeout 1s version > /dev/null 2>&1
do
info "Waiting for containerd socket..."
((counter++))
done
[ "$counter" -ge "$retries" ] && { warn "Can't connect to containerd socket"; return 1; }
clean_env_ctr
return 0
}

tests/metrics/README.md

@@ -0,0 +1,198 @@
# Kata Containers metrics
> **_Warning:_** Migration of metrics tests is WIP and you may not find all tests available here, but you can take a look at the [tests repo](https://github.com/kata-containers/tests/tree/main/metrics).
This directory contains the metrics tests for Kata Containers.
The tests within this directory have a number of potential use cases:
- CI checks for regressions on PRs
- CI data gathering for main branch merges
- Developer use for pre-checking code changes before raising a PR
## Goals
This section details some of the goals for the potential use cases.
### PR regression checks
The goal for the PR CI regression checking is to provide a relatively quick
CI metrics check and feedback directly back to the GitHub PR.
Due to the relatively fast feedback requirement, there is generally a compromise
that has to be made with the metrics - precision vs time.
### Developer pre-checking
The PR regression check scripts can be executed "by hand", and thus are available
for developers to use as a "pre-check" before submitting a PR. It might be prudent for
developers to follow this procedure particularly for large architectural or version changes
of components.
## Requirements
To maintain the quality of the metrics data gathered and the accuracy of the CI
regression checking, we try to define and stick to some "quality measures" for our metrics.
## Categories
Kata Container metrics tend to fall into a set of categories, and we organise the tests
within this folder as such.
Each sub-folder contains its own `README` detailing its own tests.
### Time (Speed)
Generally tests that measure the "speed" of the runtime itself, such as time to
boot into a workload or kill a container.
This directory does *not* contain "speed" tests that measure network or storage
for instance.
For further details see the [time tests documentation](time).
### Density
Tests that measure the size and overheads of the runtime. Generally this is looking at
memory footprint sizes, but could also cover disk space or even CPU consumption.
### Networking
Tests relating to networking. General items could include:
- bandwidth
- latency
- jitter
- parallel bandwidth
- write and read percentiles
### Storage
Tests relating to the storage (graph, volume) drivers.
### Disk
Tests that measure disk read and write performance against clusters.
### Machine Learning
Tests relating to TensorFlow and PyTorch implementations of several popular
convolutional models.
## Saving Results
In order to ensure continuity, and thus testing and historical tracking of results,
we provide a bash API to aid storing results in a uniform manner.
### JSON API
The preferred API to store results is through the provided JSON API.
The API provides the following groups of functions:
- A set of functions to init/save the data and add "top level" JSON fragments.
- A set of functions to construct arrays of JSON fragments, which are then added as a top level fragment when complete.
- A set of functions to construct elements of an array from sub-fragments, and then finalize that element when all fragments are added.
Construction of JSON data under bash can be relatively complex. This API does not pretend
to support all possible data constructs or features, and individual tests may find they need
to do some JSON handling themselves before injecting their JSON into the API.
> If you find a common use case that many tests are implementing themselves, then please
> factor out that functionality and consider extending this API.
#### `metrics_json_init()`
Initialise the API. Must be called before all other JSON API calls.
Should be matched by a final call to `metrics_json_save`.
Relies upon the `TEST_NAME` variable to derive the file name the final JSON
data is stored in (under the `metrics/results` directory). If your test generates
multiple `TEST_NAME` sets of data then:
- Ensure you have a matching JSON init/save call pair for each of those sets.
- These sets could be a hangover from a previous CSV based test - consider using a single JSON file if possible to store all the results.
This function may add system level information to the results file as a top level
fragment, for example:
- `env` - A fragment containing system level environment information
- `time` - A fragment containing a nanosecond timestamp of when the test was executed
Consider these top level JSON section names to be reserved by the API.
#### `metrics_json_save()`
This function saves all registered JSON fragments out to the JSON results file.
> Note: this function will not save any part-registered array fragments. They will
> be lost.
#### `metrics_json_add_fragment(json)`
Add a JSON formatted fragment at the top level.
| Arg | Description |
| ------ | ----------- |
| `json` | A fully formed JSON fragment |
#### `metrics_json_start_array()`
Initialise the JSON array API subsystem, ready to accept JSON fragments via
`metrics_json_add_array_element`.
This JSON array API subset allows accumulation of multiple entries into a
JSON `[]` array, to later be added as a top level fragment.
#### `metrics_json_add_array_element(json)`
Add a fully formed JSON fragment to the JSON array store.
| Arg | Description |
| ------ | ----------- |
| `json` | A fully formed JSON fragment |
#### `metrics_json_add_array_fragment(json)`
Add a fully formed JSON fragment to the current array element.
| Arg | Description |
| ------ | ----------- |
| `json` | A fully formed JSON fragment |
#### `metrics_json_close_array_element()`
Finalize (close) the current array element. This incorporates
any array_fragment parts into the current array element, closes that
array element, and resets the in-flight array_fragment store.
#### `metrics_json_end_array(name)`
Save the stored JSON array store as a top level fragment, with the
name `name`.
| Arg | Description |
| ------ | ----------- |
| `name` | The name to be given to the generated top level fragment array |
## Preserving results
The JSON library contains a hook that enables results to be injected to a
data store at the same time they are saved to the results files.
The hook supports transmission via [`curl`](https://curl.haxx.se/) or
[`socat`](http://www.dest-unreach.org/socat/). Configuration is via environment
variables.
| Variable | Description |
| -------- | ----------- |
| JSON_HOST | Destination host path for use with `socat` |
| JSON_SOCKET | Destination socket number for use with `socat` |
| JSON_URL | Destination URL for use with `curl` |
| JSON_TX_ONELINE | If set, the JSON will be sent as a single line (CR and tabs stripped) |
`socat` transmission will only happen if `JSON_HOST` is set. `curl` transmission will only
happen if `JSON_URL` is set. The settings are not mutually exclusive, and both can be
set if necessary.
`JSON_TX_ONELINE` applies to both types of transmission.
## `checkmetrics`
`checkmetrics` is a CLI tool to check a metrics CI results file. For further reference see the [`checkmetrics`](cmd/checkmetrics).


@@ -0,0 +1,19 @@
# Copyright (c) 2023 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
TARGET := checkmetrics
PREFIX := /usr
BINDIR := $(PREFIX)/bin
DESTTARGET := $(BINDIR)/$(TARGET)
all:
go build -ldflags "-X main.sysBaseFile=$(DESTBASE)" -o $(TARGET)
install:
install -D $(TARGET) $(DESTTARGET)
clean:
rm -f $(TARGET)
.PHONY: all install clean


@@ -0,0 +1,171 @@
# `checkmetrics`
## Overview
The `checkmetrics` tool is used to check the metrics results files in
JSON format. Results files are checked against configs stored in a TOML
file that contains baseline expectations for the results.
`checkmetrics` checks for a matching results file for each entry in the
TOML file with an appropriate `json` file extension. Failure to find a matching
file is classified as a failure for that individual TOML entry.
`checkmetrics` continues to process all entries in the TOML file and prints its
final results in a summary table to stdout.
`checkmetrics` exits with a failure code if any of the TOML entries did not
complete successfully.
### JSON file format
JSON results files need only be valid JSON and contain some form of numeric results
that can be extracted, as a string or list of numeric values, using the `jq` JSON query tool.
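For instance, the boot-time baseline fixture in this repository uses the `jq` query `.Results | .[] | ."to-workload".Result`; a minimal, hypothetical results file that such a query could extract numbers from might look like:

```json
{
  "Results": [
    { "to-workload": { "Result": 1.34 } },
    { "to-workload": { "Result": 1.38 } }
  ]
}
```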
## Baseline TOML layout
The baseline TOML file is composed of one `[[metric]]` section per result that is processed.
Each section contains a number of parameters, some optional:
```
|name | type | description |
|----------------------------------------------------------------------------
|`name` | string | Filename containing results (minus .json ext.)|
|`type` | string | json (optional, json is the default) |
|`description` | string | Description of test (optional) |
|`checkvar` | string | jq query string to extract results from JSON |
|`checktype` | string | Property to check ("mean", "max" etc.) |
|`minval` | float | Minimum value the checked property should be |
|`maxval` | float | Maximum value the checked property should be |
|`midval` | float | Middle value used for percentage range check |
|`minpercent` | float | Minimum percentage from midval check boundary |
|`maxpercent` | float | Maximum percentage from midval check boundary |
```
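For example, an entry that range-checks the mean of the boot-time results (matching the baseline fixture used in the tool's unit tests) looks like:

```toml
[[metric]]
name = "boot-times"
type = "json"
description = "measure container lifecycle timings"
# jq query used to extract the results from the JSON file
checkvar = ".Results | .[] | .\"to-workload\".Result"
checktype = "mean"
minval = 1.3
maxval = 1.5
```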
### Supported file types
At this time only JSON formatted results files are supported.
### Supported `checktypes`
The following `checktypes` are supported. All are tested to fall within the bounds set by the `minval`
and `maxval`. That is:
> `minval <= Result <= maxval`
```
|check |description |
|-----------------------------------------------------------------|
|mean |the mean of all the results extracted by the jq query |
|min |the minimum (smallest) result |
|max |the maximum (largest) result |
|sd |the standard deviation of the results |
|cov  |the coefficient of variation (relative standard deviation)|
```
## Options
`checkmetrics` takes a number of options. Some are mandatory.
### TOML base file path (mandatory)
```
--basefile value path to baseline TOML metrics file
```
### Debug mode
```
--debug enable debug output in the log
```
### Log file path
```
--log value set the log file path
```
### Metrics results directory path (mandatory)
```
--metricsdir value directory containing results files
```
### Percentage presentation mode
```
--percentage present results as percentage differences
```
### Help
```
--help, -h show help
```
### Version
```
--version, -v print the version
```
## Output
The `checkmetrics` tool outputs a summary table after processing all metrics sections, and returns
a non-zero return code if any of the metrics checks fail.
Example output:
```
Report Summary:
+-----+----------------------+-----------+-----------+-----------+-------+-----------+-----------+------+------+-----+
| P/F | NAME | FLR | MEAN | CEIL | GAP | MIN | MAX | RNG | COV | ITS |
+-----+----------------------+-----------+-----------+-----------+-------+-----------+-----------+------+------+-----+
| F | boot-times | 0.50 | 1.36 | 0.70 | 40.0% | 1.34 | 1.38 | 2.7% | 1.3% | 2 |
| F | memory-footprint | 100000.00 | 284570.56 | 110000.00 | 10.0% | 284570.56 | 284570.56 | 0.0% | 0.0% | 1 |
| P | memory-footprint-ksm | 100000.00 | 101770.22 | 110000.00 | 10.0% | 101770.22 | 101770.22 | 0.0% | 0.0% | 1 |
+-----+----------------------+-----------+-----------+-----------+-------+-----------+-----------+------+------+-----+
Fails: 2, Passes 1
```
Example percentage mode output:
```
Report Summary:
+-----+----------------------+-------+--------+--------+-------+--------+--------+------+------+-----+
| P/F | NAME | FLR | MEAN | CEIL | GAP | MIN | MAX | RNG | COV | ITS |
+-----+----------------------+-------+--------+--------+-------+--------+--------+------+------+-----+
| *F* | boot-times | 83.3% | 226.8% | 116.7% | 33.3% | 223.8% | 229.8% | 2.7% | 1.3% | 2 |
| *F* | memory-footprint | 95.2% | 271.0% | 104.8% | 9.5% | 271.0% | 271.0% | 0.0% | 0.0% | 1 |
| P | memory-footprint-ksm | 92.7% | 99.3% | 107.3% | 14.6% | 99.3% | 99.3% | 0.0% | 0.0% | 1 |
+-----+----------------------+-------+--------+--------+-------+--------+--------+------+------+-----+
Fails: 2, Passes 1
```
### Output Columns
```
|name |description |
|-----------------------------------------------------------------------|
|P/F |Pass/Fail |
|NAME |Name of the test/check |
|FLR |Floor - the minval to check against |
|MEAN |The mean of the results |
|CEIL |Ceiling - the maxval to check against |
|GAP |The range (gap) between the minval and maxval, as a % of minval|
|MIN |The minimum result in the data set |
|MAX |The maximum result in the data set |
|RNG |The % range (spread) between the min and max result, WRT min |
|COV |The coefficient of variation of the results |
|ITS |The number of results (iterations) |
```
## Example invocation
For example, to invoke the `checkmetrics` tool, enter the following:
```
BASEFILE=`pwd`/../../metrics/baseline/baseline.toml
METRICSDIR=`pwd`/../../metrics/results
./checkmetrics --basefile ${BASEFILE} --metricsdir ${METRICSDIR}
```


@@ -0,0 +1,43 @@
// Copyright (c) 2023 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
package main
import (
"fmt"
"os"
"github.com/BurntSushi/toml"
log "github.com/sirupsen/logrus"
)
type baseFile struct {
// Metric is the slice of metrics imported from the TOML config file
Metric []metrics
}
// newBasefile imports the TOML file passed from the path passed in the file
// argument and returns the baseFile slice containing the import if successful
func newBasefile(file string) (*baseFile, error) {
if file == "" {
log.Error("Missing basefile argument")
return nil, fmt.Errorf("missing baseline reference file")
}
configuration, err := os.ReadFile(file)
if err != nil {
return nil, err
}
var basefile baseFile
if err := toml.Unmarshal(configuration, &basefile); err != nil {
return nil, err
}
if len(basefile.Metric) == 0 {
log.Warningf("No entries found in basefile [%s]\n", file)
}
return &basefile, nil
}


@@ -0,0 +1,89 @@
// Copyright (c) 2023 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
package main
import (
"os"
"testing"
"github.com/stretchr/testify/assert"
)
const badFileContents = `
this is not a valid toml file
`
func createBadFile(filename string) error {
return os.WriteFile(filename, []byte(badFileContents), os.FileMode(0640))
}
const goodFileContents = `
# This file contains baseline expectations
# for checked results by checkmetrics tool.
[[metric]]
# The name of the metrics test, must match
# that of the generated CSV file
name = "boot-times"
type = "json"
description = "measure container lifecycle timings"
# Min and Max values to set a 'range' that
# the median of the CSV Results data must fall
# within (inclusive)
checkvar = ".Results | .[] | .\"to-workload\".Result"
checktype = "mean"
minval = 1.3
maxval = 1.5
# ... repeat this for each metric ...
`
func createGoodFile(filename string) error {
return os.WriteFile(filename, []byte(goodFileContents), os.FileMode(0640))
}
func TestNewBasefile(t *testing.T) {
assert := assert.New(t)
tmpdir, err := os.MkdirTemp("", "cm-")
assert.NoError(err)
defer os.RemoveAll(tmpdir)
// Should fail to load a nil filename
_, err = newBasefile("")
assert.NotNil(err, "Did not error on empty filename")
// Should fail to load a file that does not exist
_, err = newBasefile("/some/file/that/does/not/exist")
assert.NotNil(err, "Did not error on non-existent file")
// Check a badly formed toml file
badFileName := tmpdir + "/badFile.toml"
err = createBadFile(badFileName)
assert.NoError(err)
_, err = newBasefile(badFileName)
assert.NotNil(err, "Did not error on bad file contents")
// Check a well formed toml file
goodFileName := tmpdir + "/goodFile.toml"
err = createGoodFile(goodFileName)
assert.NoError(err)
bf, err := newBasefile(goodFileName)
assert.Nil(err, "Error'd on good file contents")
// Now check we did load what we expected from the toml
t.Logf("Entry.Name: %v", bf.Metric[0].Name)
m := bf.Metric[0]
assert.Equal("boot-times", m.Name, "data loaded should match")
assert.Equal("measure container lifecycle timings", m.Description, "data loaded should match")
assert.Equal("json", m.Type, "data loaded should match")
assert.Equal("mean", m.CheckType, "data loaded should match")
assert.Equal(".Results | .[] | .\"to-workload\".Result", m.CheckVar, "data loaded should match")
assert.Equal(1.3, m.MinVal, "data loaded should match")
assert.Equal(1.5, m.MaxVal, "data loaded should match")
// Gap has not been calculated yet...
assert.Equal(0.0, m.Gap, "data loaded should match")
}


@@ -0,0 +1,202 @@
// Copyright (c) 2023 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
package main
import (
"errors"
"strconv"
log "github.com/sirupsen/logrus"
)
// metricsCheck is a placeholder struct for us to attach the methods to and make
// it clear they belong to this grouping. Maybe there is a better way?
type metricsCheck struct {
}
// reportTitleSlice returns the report table title row as a slice of strings
func (mc *metricsCheck) reportTitleSlice() []string {
// FIXME - now we don't only check the mean, let's re-arrange the order
// to make a little more sense.
// Also, CoV is so much more useful than SD - let's stop printing out
// the SD, and add instead the % gap between the Min and Max Results
return []string{"P/F",
"Name",
// This is the check boundary, not the smallest value in Results
"Flr",
"Mean",
// This is the check boundary, not the largest value in Results
"Ceil",
"Gap",
"Min",
"Max",
"Rng",
"Cov",
"Its"}
}
// genSummaryLine takes in all the relevant report arguments and returns
// a string slice formatted appropriately for the summary table generation
func (mc *metricsCheck) genSummaryLine(
passed bool,
name string,
minval string,
mean string,
maxval string,
gap string,
min string,
max string,
rnge string,
cov string,
iterations string) (summary []string) {
if passed {
summary = append(summary, "P")
} else {
summary = append(summary, "*F*")
}
summary = append(summary,
name,
minval,
mean,
maxval,
gap,
min,
max,
rnge,
cov,
iterations)
return
}
// genErrorLine takes a number of error argument strings and a pass/fail bool
// and returns a string slice formatted appropriately for the summary report.
// It exists to hide some of the inner details of just how the slice is meant
// to be formatted, such as the exact number of columns
func (mc *metricsCheck) genErrorLine(
passed bool,
error1 string,
error2 string,
error3 string) (summary []string) {
summary = mc.genSummaryLine(passed, error1, error2, error3,
"", "", "", "", "", "", "")
return
}
// checkstats takes a basefile metric record with a filled out stats struct and checks
// if the file metrics pass the metrics comparison checks.
// checkstats returns a string slice containing the results of the check.
// The err return will be non-nil if the check fails.
func (mc *metricsCheck) checkstats(m metrics) (summary []string, err error) {
var pass = true
var val float64
log.Debugf("Compare check for [%s]", m.Name)
log.Debugf("Checking value [%s]", m.CheckType)
//Pick out the value we are range checking depending on the
// config. Default if not set is the "mean"
switch m.CheckType {
case "min":
val = m.stats.Min
case "max":
val = m.stats.Max
case "cov":
val = m.stats.CoV
case "sd":
val = m.stats.SD
case "mean":
fallthrough
default:
val = m.stats.Mean
}
log.Debugf(" Check minval (%f < %f)", m.MinVal, val)
if val < m.MinVal {
log.Warnf("Failed Minval (%7f > %7f) for [%s]",
m.MinVal, val,
m.Name)
pass = false
} else {
log.Debug("Passed")
}
log.Debugf(" Check maxval (%f > %f)", m.MaxVal, val)
if val > m.MaxVal {
log.Warnf("Failed Maxval (%7f < %7f) for [%s]",
m.MaxVal, val,
m.Name)
pass = false
} else {
log.Debug("Passed")
}
if !pass {
err = errors.New("Failed")
}
// Note - choosing the precision for the fields is tricky without
// knowledge of the actual metrics tests results. For now set
// precision to 'probably big enough', and later we may want to
// add an annotation to the TOML baselines to give an indication of
// expected values - or, maybe we can derive it from the min/max values
// Are we presenting as a percentage based difference
if showPercentage {
// Work out what our midpoint baseline 'goal' is.
midpoint := (m.MinVal + m.MaxVal) / 2
// Calculate our values as a % based off the mid-point
// of the acceptable range.
floorpc := (m.MinVal / midpoint) * 100.0
ceilpc := (m.MaxVal / midpoint) * 100.0
meanpc := (m.stats.Mean / midpoint) * 100.0
minpc := (m.stats.Min / midpoint) * 100.0
maxpc := (m.stats.Max / midpoint) * 100.0
// Or present as physical values
summary = append(summary, mc.genSummaryLine(
pass,
m.Name,
// Note this is the check boundary, not the smallest Result seen
strconv.FormatFloat(floorpc, 'f', 1, 64)+"%",
strconv.FormatFloat(meanpc, 'f', 1, 64)+"%",
// Note this is the check boundary, not the largest Result seen
strconv.FormatFloat(ceilpc, 'f', 1, 64)+"%",
strconv.FormatFloat(m.Gap, 'f', 1, 64)+"%",
strconv.FormatFloat(minpc, 'f', 1, 64)+"%",
strconv.FormatFloat(maxpc, 'f', 1, 64)+"%",
strconv.FormatFloat(m.stats.RangeSpread, 'f', 1, 64)+"%",
strconv.FormatFloat(m.stats.CoV, 'f', 1, 64)+"%",
strconv.Itoa(m.stats.Iterations))...)
} else {
// Or present as physical values
summary = append(summary, mc.genSummaryLine(
pass,
m.Name,
// Note this is the check boundary, not the smallest Result seen
strconv.FormatFloat(m.MinVal, 'f', 2, 64),
strconv.FormatFloat(m.stats.Mean, 'f', 2, 64),
// Note this is the check boundary, not the largest Result seen
strconv.FormatFloat(m.MaxVal, 'f', 2, 64),
strconv.FormatFloat(m.Gap, 'f', 1, 64)+"%",
strconv.FormatFloat(m.stats.Min, 'f', 2, 64),
strconv.FormatFloat(m.stats.Max, 'f', 2, 64),
strconv.FormatFloat(m.stats.RangeSpread, 'f', 1, 64)+"%",
strconv.FormatFloat(m.stats.CoV, 'f', 1, 64)+"%",
strconv.Itoa(m.stats.Iterations))...)
}
return
}


@@ -0,0 +1,312 @@
// Copyright (c) 2023 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
package main
import (
"math"
"testing"
"github.com/stretchr/testify/assert"
)
// Pre-filled out metrics (apart from the calculated stats)
// This should **pass** the "mean" metrics checks by default
var exampleM = metrics{
Name: "name",
Description: "desc",
Type: "type",
CheckType: "json",
CheckVar: "Results",
MinVal: 0.9,
MaxVal: 3.1,
Gap: 0,
stats: statistics{
Results: []float64{1.0, 2.0, 3.0},
Iterations: 3,
Mean: 0.0,
Min: 0.0,
Max: 0.0,
Range: 0.0,
RangeSpread: 0.0,
SD: 0.0,
CoV: 0.0}}
func TestGenSummaryLine(t *testing.T) {
assert := assert.New(t)
var args = []string{
"name",
"minval",
"mean",
"maxval",
"gap",
"min",
"max",
"rnge",
"cov",
"iterations"}
// Check for the 'passed' case
s := (&metricsCheck{}).genSummaryLine(
true, //passed
args[0], //name
args[1], //minval
args[2], //mean
args[3], //maxval
args[4], //gap
args[5], //min
args[6], //max
args[7], //rnge
args[8], //cov
args[9]) //iterations
for n, i := range s {
if n == 0 {
assert.Equal("P", i, "Should be equal")
} else {
assert.Equal(args[n-1], i, "Should be equal")
}
}
// Check for the 'failed' case
s = (&metricsCheck{}).genSummaryLine(
false, //passed
args[0], //name
args[1], //minval
args[2], //mean
args[3], //maxval
args[4], //gap
args[5], //min
args[6], //max
args[7], //rnge
args[8], //cov
args[9]) //iterations
for n, i := range s {
if n == 0 {
assert.Equal("*F*", i, "Should be equal")
} else {
assert.Equal(args[n-1], i, "Should be equal")
}
}
}
func TestCheckStats(t *testing.T) {
assert := assert.New(t)
var m = exampleM
m.Name = "CheckStats"
//Check before we have done the calculations - should fail
_, err := (&metricsCheck{}).checkstats(m)
assert.Error(err)
m.calculate()
// Constants here calculated from info coded in struct above
// Funky rounding of Gap, as float imprecision actually gives us
// 110.00000000000001 - check to within 0.1% then...
roundedGap := math.Round(m.Gap/0.001) * 0.001
assert.Equal(110.0, roundedGap, "Should be equal")
assert.Equal(2.0, m.stats.Mean, "Should be equal")
assert.Equal(1.0, m.stats.Min, "Should be equal")
assert.Equal(3.0, m.stats.Max, "Should be equal")
assert.Equal(2.0, m.stats.Range, "Should be equal")
assert.Equal(200.0, m.stats.RangeSpread, "Should be equal")
assert.Equal(0.816496580927726, m.stats.SD, "Should be equal")
assert.Equal(40.8248290463863, m.stats.CoV, "Should be equal")
s, err := (&metricsCheck{}).checkstats(m)
assert.NoError(err)
assert.Equal("P", s[0], "Should be equal") // Pass
assert.Equal("CheckStats", s[1], "Should be equal") // test name
assert.Equal("0.90", s[2], "Should be equal") // Floor
assert.Equal("2.00", s[3], "Should be equal") // Mean
assert.Equal("3.10", s[4], "Should be equal") // Ceiling
assert.Equal("110.0%", s[5], "Should be equal") // Gap
assert.Equal("1.00", s[6], "Should be equal") // Min
assert.Equal("3.00", s[7], "Should be equal") // Max
assert.Equal("200.0%", s[8], "Should be equal") // Range %
assert.Equal("40.8%", s[9], "Should be equal") // CoV
assert.Equal("3", s[10], "Should be equal") // Iterations
// And check in percentage presentation mode
showPercentage = true
s, err = (&metricsCheck{}).checkstats(m)
assert.NoError(err)
assert.Equal("P", s[0], "Should be equal") // Pass
assert.Equal("CheckStats", s[1], "Should be equal") // test name
assert.Equal("45.0%", s[2], "Should be equal") // Floor
assert.Equal("100.0%", s[3], "Should be equal") // Mean
assert.Equal("155.0%", s[4], "Should be equal") // Ceiling
assert.Equal("110.0%", s[5], "Should be equal") // Gap
assert.Equal("50.0%", s[6], "Should be equal") // Min
assert.Equal("150.0%", s[7], "Should be equal") // Max
assert.Equal("200.0%", s[8], "Should be equal") // Range %
assert.Equal("40.8%", s[9], "Should be equal") // CoV
assert.Equal("3", s[10], "Should be equal") // Iterations
// And put the default back
showPercentage = false
// Funcs called with a Min that fails and a Max that fails
// Presumption is that unmodified metrics should pass
// FIXME - we don't test the actual < vs <= boundary conditions
// Mean is 2.0
CheckMean(assert, 3.0, 1.0)
// Min is 1.0
CheckMin(assert, 3.0, 0.5)
// Max is 3.0
CheckMax(assert, 4.0, 1.0)
// CoV is 40.8
CheckCoV(assert, 50.0, 1.0)
// SD is 0.8165
CheckSD(assert, 1.0, 0.5)
}
func CheckMean(assert *assert.Assertions, badmin float64, badmax float64) {
m := exampleM
m.CheckType = "mean"
m.Name = "CheckMean"
// Do the stats
m.calculate()
// Defaults should pass
_, err := (&metricsCheck{}).checkstats(m)
assert.NoError(err)
// badmin should fail
old := m.MinVal
m.MinVal = badmin
_, err = (&metricsCheck{}).checkstats(m)
assert.Error(err)
m.MinVal = old
// badmax should fail
m.MaxVal = badmax
_, err = (&metricsCheck{}).checkstats(m)
assert.Error(err)
}
func CheckMin(assert *assert.Assertions, badmin float64, badmax float64) {
m := exampleM
m.CheckType = "min"
m.Name = "CheckMin"
// Do the stats
m.calculate()
// Defaults should pass
_, err := (&metricsCheck{}).checkstats(m)
assert.NoError(err)
// badmin should fail
old := m.MinVal
m.MinVal = badmin
_, err = (&metricsCheck{}).checkstats(m)
assert.Error(err)
m.MinVal = old
// badmax should fail
m.MaxVal = badmax
_, err = (&metricsCheck{}).checkstats(m)
assert.Error(err)
}
func CheckMax(assert *assert.Assertions, badmin float64, badmax float64) {
m := exampleM
m.CheckType = "max"
m.Name = "CheckMax"
// Do the stats
m.calculate()
// Defaults should pass
_, err := (&metricsCheck{}).checkstats(m)
assert.NoError(err)
// badmin should fail
old := m.MinVal
m.MinVal = badmin
_, err = (&metricsCheck{}).checkstats(m)
assert.Error(err)
m.MinVal = old
// badmax should fail
m.MaxVal = badmax
_, err = (&metricsCheck{}).checkstats(m)
assert.Error(err)
}
func CheckSD(assert *assert.Assertions, badmin float64, badmax float64) {
m := exampleM
m.CheckType = "sd"
m.Name = "CheckSD"
// Do the stats
m.calculate()
// Set it up to pass by default
m.MinVal = 0.9 * m.stats.SD
m.MaxVal = 1.1 * m.stats.SD
oldMin := m.MinVal
oldMax := m.MaxVal
// Defaults should pass
_, err := (&metricsCheck{}).checkstats(m)
assert.NoError(err)
// badmin should fail
m.MinVal = badmin
_, err = (&metricsCheck{}).checkstats(m)
assert.Error(err)
m.MinVal = oldMin
// badmax should fail
m.MaxVal = badmax
_, err = (&metricsCheck{}).checkstats(m)
assert.Error(err)
m.MaxVal = oldMax
}
func CheckCoV(assert *assert.Assertions, badmin float64, badmax float64) {
m := exampleM
m.CheckType = "cov"
m.Name = "CheckCoV"
// Do the stats
m.calculate()
// Set it up to pass by default
m.MinVal = 0.9 * m.stats.CoV
m.MaxVal = 1.1 * m.stats.CoV
oldMin := m.MinVal
oldMax := m.MaxVal
// Defaults should pass
_, err := (&metricsCheck{}).checkstats(m)
assert.NoError(err)
// badmin should fail
m.MinVal = badmin
_, err = (&metricsCheck{}).checkstats(m)
assert.Error(err)
m.MinVal = oldMin
// badmax should fail
m.MaxVal = badmax
_, err = (&metricsCheck{}).checkstats(m)
assert.Error(err)
m.MaxVal = oldMax
}

View File

@@ -0,0 +1,22 @@
module example.com/m
go 1.19
require (
github.com/BurntSushi/toml v1.3.2
github.com/montanaflynn/stats v0.7.1
github.com/olekukonko/tablewriter v0.0.5
github.com/sirupsen/logrus v1.9.3
github.com/stretchr/testify v1.8.4
github.com/urfave/cli v1.22.14
)
require (
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

View File

@@ -0,0 +1,37 @@
github.com/BurntSushi/toml v1.3.2 h1:o7IhLm0Msx3BaB+n3Ag7L8EVlByGnpq14C4YWiu/gL8=
github.com/BurntSushi/toml v1.3.2/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/mattn/go-runewidth v0.0.9 h1:Lm995f3rfxdpd6TSmuVCHVb/QhupuXlYr8sCI/QdE+0=
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec=
github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/urfave/cli v1.22.14 h1:ebbhrRiGK2i4naQJr+1Xj92HXZCrK7MsyTS/ob3HnAk=
github.com/urfave/cli v1.22.14/go.mod h1:X0eDS6pD6Exaclxm99NJ3FiCDRED7vIHpx2mDOHLvkA=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8 h1:0A+M6Uqn+Eje4kHMK80dtF3JCXC4ykBgQG4Fe06QRhQ=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -0,0 +1,97 @@
// Copyright (c) 2023 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
package main
import (
"bufio"
"bytes"
"io"
"os/exec"
"strconv"
log "github.com/sirupsen/logrus"
)
// jsonRecord has no data - the data is loaded and processed and stored
// back into the metrics structure passed in.
type jsonRecord struct {
}
// load reads a JSON 'Metrics' results file from the given file path and
// parses out the actual results data using the 'jq' query found in the
// respective TOML entry.
func (c *jsonRecord) load(filepath string, metric *metrics) error {
var err error
log.Debugf("in json load of [%s]", filepath)
log.Debugf(" Run jq '%v' %s", metric.CheckVar, filepath)
out, err := exec.Command("jq", metric.CheckVar, filepath).Output()
if err != nil {
log.Warnf("Failed to run [jq %v %v][%v]", metric.CheckVar, filepath, err)
return err
}
log.Debugf(" Got result [%v]", out)
// Try to parse the results as floats first...
floats, err := readFloats(bytes.NewReader(out))
if err != nil {
// And if they are not floats, check if they are ints...
ints, err := readInts(bytes.NewReader(out))
if err != nil {
log.Warnf("Failed to decode [%v]", out)
return err
}
// Always store the internal data as floats
floats = []float64{}
for _, i := range ints {
floats = append(floats, float64(i))
}
}
log.Debugf(" and got output [%v]", floats)
// Store the results back 'up'
metric.stats.Results = floats
// And do the stats on them
metric.calculate()
return nil
}
// readInts parses a string of whitespace-separated ASCII integers into a slice of ints
func readInts(r io.Reader) ([]int, error) {
scanner := bufio.NewScanner(r)
scanner.Split(bufio.ScanWords)
var result []int
for scanner.Scan() {
i, err := strconv.Atoi(scanner.Text())
if err != nil {
return result, err
}
result = append(result, i)
}
return result, scanner.Err()
}
// readFloats parses a string of whitespace-separated ASCII floats into a slice of floats
func readFloats(r io.Reader) ([]float64, error) {
scanner := bufio.NewScanner(r)
scanner.Split(bufio.ScanWords)
var result []float64
for scanner.Scan() {
f, err := strconv.ParseFloat(scanner.Text(), 64)
if err != nil {
return result, err
}
result = append(result, f)
}
return result, scanner.Err()
}
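The two word-scanners above share the same shape; as a quick self-contained sketch (re-implemented here for illustration, not part of the file itself), the float variant behaves like this:

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// parseFloats mirrors readFloats above: split the input on
// whitespace and convert each word with strconv.ParseFloat.
func parseFloats(s string) ([]float64, error) {
	scanner := bufio.NewScanner(strings.NewReader(s))
	scanner.Split(bufio.ScanWords)
	var result []float64
	for scanner.Scan() {
		f, err := strconv.ParseFloat(scanner.Text(), 64)
		if err != nil {
			return result, err
		}
		result = append(result, f)
	}
	return result, scanner.Err()
}

func main() {
	// Typical 'jq' output: one number per word.
	floats, err := parseFloats("10.56 20.56 30.56")
	fmt.Println(floats, err)
}
```

Note the helper returns the partial slice together with the error, matching the behaviour of readInts/readFloats when a non-numeric word is hit.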

View File

@@ -0,0 +1,200 @@
// Copyright (c) 2023 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
package main
import (
"bytes"
"os"
"testing"
"github.com/stretchr/testify/assert"
)
const BadFileContents = `
this is not a valid json file
`
func CreateBadFile(filename string) error {
return os.WriteFile(filename, []byte(BadFileContents), os.FileMode(0640))
}
const GoodFileContents = `
{
"env" : {
"Runtime": "/usr/share/defaults/kata-containers/configuration.toml",
"RuntimeVersion": "0.1.0",
"Hypervisor": "/usr/bin/qemu-lite-system-x86_64",
"HypervisorVersion": " QEMU emulator version 2.7.0, Copyright (c) 2003-2016 Fabrice Bellard and the QEMU Project developers",
"Shim": "/usr/local/bin/containerd-shim-kata-v2",
"ShimVersion": " kata-shim version 2.4.0-rc0"
},
"date" : {
"ns": 1522162042326099526,
"Date": "2018-03-27 15:47:22 +0100"
},
"Config": [
{
"containers": 20,
"ksm": 0,
"auto": "",
"waittime": 5,
"image": "busybox",
"command": "sh"
}
],
"Results": [
{
"average": {
"Result": 10.56,
"Units" : "KB"
},
"qemus": {
"Result": 1.95,
"Units" : "KB"
},
"shims": {
"Result": 2.40,
"Units" : "KB"
},
"proxys": {
"Result": 3.21,
"Units" : "KB"
}
},
{
"average": {
"Result": 20.56,
"Units" : "KB"
},
"qemus": {
"Result": 4.95,
"Units" : "KB"
},
"shims": {
"Result": 5.40,
"Units" : "KB"
},
"proxys": {
"Result": 6.21,
"Units" : "KB"
}
},
{
"average": {
"Result": 30.56,
"Units" : "KB"
},
"qemus": {
"Result": 7.95,
"Units" : "KB"
},
"shims": {
"Result": 8.40,
"Units" : "KB"
},
"proxys": {
"Result": 9.21,
"Units" : "KB"
}
}
]
}
`
func CreateFile(filename string, contents string) error {
return os.WriteFile(filename, []byte(contents), os.FileMode(0640))
}
func TestLoad(t *testing.T) {
assert := assert.New(t)
// Set up and create a json results file
tmpdir, err := os.MkdirTemp("", "cm-")
assert.NoError(err)
defer os.RemoveAll(tmpdir)
// Check a badly formed JSON file
badFileName := tmpdir + "/badFile.json"
err = CreateBadFile(badFileName)
assert.NoError(err)
// Set up our basic metrics struct
var m = metrics{
Name: "name",
Description: "desc",
Type: "type",
CheckType: "json",
CheckVar: ".Results | .[] | .average.Result",
MinVal: 1.9,
MaxVal: 2.1,
Gap: 0,
stats: statistics{
Results: []float64{1.0, 2.0, 3.0},
Iterations: 0,
Mean: 0.0,
Min: 0.0,
Max: 0.0,
Range: 0.0,
RangeSpread: 0.0,
SD: 0.0,
CoV: 0.0}}
err = (&jsonRecord{}).load(badFileName, &m)
assert.Error(err, "Did not error on bad file contents")
// Check the well formed file
goodFileName := tmpdir + "/goodFile.json"
err = CreateFile(goodFileName, GoodFileContents)
assert.NoError(err)
err = (&jsonRecord{}).load(goodFileName, &m)
assert.NoError(err, "Error'd on good file contents")
t.Logf("m now %+v", m)
// And check some of the values we get from that JSON read
assert.Equal(3, m.stats.Iterations, "Should be equal")
assert.Equal(10.56, m.stats.Min, "Should be equal")
assert.Equal(30.56, m.stats.Max, "Should be equal")
// Check we default to json type
m2 := m
m2.CheckType = ""
err = (&jsonRecord{}).load(goodFileName, &m2)
assert.NoError(err, "Error'd on no type file contents")
}
func TestReadInts(t *testing.T) {
assert := assert.New(t)
good := bytes.NewReader([]byte("1 2 3"))
bad := bytes.NewReader([]byte("1 2 3.0"))
_, err := readInts(bad)
assert.Error(err, "Should fail")
ints, err := readInts(good)
assert.NoError(err, "Should not fail")
assert.Equal(1, ints[0], "Should be equal")
assert.Equal(2, ints[1], "Should be equal")
assert.Equal(3, ints[2], "Should be equal")
}
func TestReadFloats(t *testing.T) {
assert := assert.New(t)
good := bytes.NewReader([]byte("1.0 2.0 3.0"))
bad := bytes.NewReader([]byte("1.0 2.0 blah"))
_, err := readFloats(bad)
assert.Error(err, "Should fail")
floats, err := readFloats(good)
assert.NoError(err, "Should not fail")
assert.Equal(1.0, floats[0], "Should be equal")
assert.Equal(2.0, floats[1], "Should be equal")
assert.Equal(3.0, floats[2], "Should be equal")
}

View File

@@ -0,0 +1,216 @@
// Copyright (c) 2023 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
/*
Program checkmetrics compares the results from a set of metrics
results, stored in JSON files, against a set of baseline metrics
'expectations', defined in a TOML file.
It returns non-zero if any of the TOML metrics are not met.
It prints out a tabulated report summary at the end of the run.
*/
package main
import (
"errors"
"fmt"
"os"
"path"
"github.com/olekukonko/tablewriter"
log "github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
// name is the name of the program.
const name = "checkmetrics"
// usage is the usage of the program.
const usage = name + ` checks JSON metrics results against a TOML baseline`
var (
// The TOML basefile
ciBasefile *baseFile
// If set then we show results as a relative percentage (to the baseline)
showPercentage = false
// System default path for baseline file
// the value will be set by Makefile
sysBaseFile string
)
// processMetricsBaseline locates the files matching each entry in the TOML
// baseline, loads and processes it, and checks if the metrics were in range.
// Finally it generates a summary report
func processMetricsBaseline(context *cli.Context) (err error) {
var report [][]string // summary report table
var passes int
var fails int
var summary []string
log.Debug("processMetricsBaseline")
// Process each Metrics TOML entry one at a time
// FIXME - this is not structured to be testable - if you need to add a unit
// test here then *please* re-structure these funcs etc.
for _, m := range ciBasefile.Metric {
log.Debugf("Processing %s", m.Name)
fullpath := path.Join(context.GlobalString("metricsdir"), m.Name)
switch m.Type {
case "":
log.Debugf("No Type, default to JSON for [%s]", m.Name)
fallthrough
case "json":
{
var thisJSON jsonRecord
log.Debug("Process a JSON")
fullpath = fullpath + ".json"
log.Debugf("Fullpath %s", fullpath)
err = thisJSON.load(fullpath, &m)
if err != nil {
log.Warnf("[%s][%v]", fullpath, err)
// Record that this one did not complete successfully
fails++
// Make some sort of note in the summary table that this failed
summary = (&metricsCheck{}).genErrorLine(false, m.Name, "Failed to load JSON", fmt.Sprintf("%s", err))
// Not a fatal error - continue to process any remaining files
break
}
summary, err = (&metricsCheck{}).checkstats(m)
if err != nil {
log.Warnf("Check for [%s] failed [%v]", m.Name, err)
log.Warnf(" with [%s]", summary)
fails++
} else {
log.Debugf("Check for [%s] passed", m.Name)
log.Debugf(" with [%s]", summary)
passes++
}
}
default:
{
log.Warnf("Unknown type [%s] for metric [%s]", m.Type, m.Name)
summary = (&metricsCheck{}).genErrorLine(false, m.Name, "Unsupported Type", fmt.Sprint(m.Type))
fails++
}
}
report = append(report, summary)
log.Debugf("Done %s", m.Name)
}
if fails != 0 {
log.Warn("Overall we failed")
}
fmt.Printf("\n")
// We need a better way to report tests that failed to even get into the
// table - such as JSON file parse failures.
// Now that file failures are reported into the table as well, we should not
// see this - but it is a useful sanity check to keep.
if len(report) < fails+passes {
fmt.Printf("Warning: some tests (%d) failed to report\n", (fails+passes)-len(report))
}
// Note - not logging here - the summary goes to stdout
fmt.Println("Report Summary:")
table := tablewriter.NewWriter(os.Stdout)
table.SetHeader((&metricsCheck{}).reportTitleSlice())
for _, s := range report {
table.Append(s)
}
table.Render()
fmt.Printf("Fails: %d, Passes %d\n", fails, passes)
// Did we see any failures during the run?
if fails != 0 {
err = errors.New("Failed")
} else {
err = nil
}
return
}
// checkmetrics main entry point.
// Do the command line processing, load the TOML file, and do the processing
// against the data files
func main() {
app := cli.NewApp()
app.Name = name
app.Usage = usage
app.Flags = []cli.Flag{
cli.StringFlag{
Name: "basefile",
Usage: "path to baseline TOML metrics file",
},
cli.BoolFlag{
Name: "debug",
Usage: "enable debug output in the log",
},
cli.StringFlag{
Name: "log",
Usage: "set the log file path",
},
cli.StringFlag{
Name: "metricsdir",
Usage: "directory containing metrics results files",
},
cli.BoolFlag{
Name: "percentage",
Usage: "present results as percentage differences",
Destination: &showPercentage,
},
}
app.Before = func(context *cli.Context) error {
var err error
var baseFilePath string
if path := context.GlobalString("log"); path != "" {
f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND|os.O_SYNC, 0640)
if err != nil {
return err
}
log.SetOutput(f)
}
if context.GlobalBool("debug") {
log.SetLevel(log.DebugLevel)
}
if context.GlobalString("metricsdir") == "" {
log.Error("Must supply metricsdir argument")
return errors.New("Must supply metricsdir argument")
}
baseFilePath = context.GlobalString("basefile")
if baseFilePath == "" {
baseFilePath = sysBaseFile
}
ciBasefile, err = newBasefile(baseFilePath)
return err
}
app.Action = func(context *cli.Context) error {
return processMetricsBaseline(context)
}
if err := app.Run(os.Args); err != nil {
fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}

View File

@@ -0,0 +1,108 @@
// Copyright (c) 2023 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
package main
import (
"github.com/montanaflynn/stats"
log "github.com/sirupsen/logrus"
)
type statistics struct {
Results []float64 // Result array converted to floats
Iterations int // How many results did we gather
Mean float64 // The 'average'
Min float64 // Smallest value we saw
Max float64 // Largest value we saw
Range float64 // Max - Min
RangeSpread float64 // (Range/Min) * 100
SD float64 // Standard Deviation
CoV float64 // Coefficient of Variation
}
// metrics represents the repository under test
// The members are Public so the toml reflection can see them, but I quite
// like the lower case toml naming, hence we use the annotation strings to
// get the parser to look for lower case.
type metrics struct {
// Generic to JSON files
// Generally mandatory
Name string `toml:"name"` //Used to locate the results file
Description string `toml:"description"`
// Optional config entries
Type string `toml:"type"` //Default is JSON
// Processing related entries
CheckType string `toml:"checktype"` //Which stat to check: mean, min, max, sd, cov
// default: mean
CheckVar string `toml:"checkvar"` //JSON: which var to (extract and) calculate on
// is a 'jq' query
stats statistics // collection of our stats data, calculated from Results
// For setting 'bounds', you can either set a min/max value pair,
// or you can set a mid-range value and a 'percentage gap'.
// Set one or the other: if both are set, the percentage pair takes
// precedence and overwrites the min/max values (see calculate()).
// The range we expect the processed result to fall within
// (MinVal <= Result <= MaxVal) == pass
MinVal float64 `toml:"minval"`
MaxVal float64 `toml:"maxval"`
// If we are doing a percentage range check then you need to set
// both a mid-value and a percentage range to check.
MidVal float64 `toml:"midval"`
MinPercent float64 `toml:"minpercent"`
MaxPercent float64 `toml:"maxpercent"`
// Vars that are not in the toml file, but are filled out later
// dynamically
Gap float64 // What is the % gap between the Min and Max vals
}
// Calculate the statistics from the stored Results data
// Although the calculations can fail, we don't fail the function
func (m *metrics) calculate() {
// First we check/calculate some non-stats values to fill out
// our base data.
// We should either have a Min/Max value pair or a percentage/MidVal
// set. If we find a non-0 percentage set, then calculate the Min/Max
// values from them, as the rest of the code base works off the Min/Max
// values.
if (m.MinPercent + m.MaxPercent) != 0 {
m.MinVal = m.MidVal * (1 - (m.MinPercent / 100))
m.MaxVal = m.MidVal * (1 + (m.MaxPercent / 100))
// The rest of the system works off the Min/Max value
// pair - so, if your min/max percentage values are not equal
// then **the values you see in the results table will not look
// like the ones you put in the toml file**, because they are
// based off the mid-value calculation below.
// This is unfortunate, but it keeps the code simpler overall.
}
// the gap is the % swing around the midpoint.
midpoint := (m.MinVal + m.MaxVal) / 2
m.Gap = (((m.MaxVal / midpoint) - 1) * 2) * 100
// And now we work out the actual stats
m.stats.Iterations = len(m.stats.Results)
m.stats.Mean, _ = stats.Mean(m.stats.Results)
m.stats.Min, _ = stats.Min(m.stats.Results)
m.stats.Max, _ = stats.Max(m.stats.Results)
m.stats.Range = m.stats.Max - m.stats.Min
m.stats.RangeSpread = (m.stats.Range / m.stats.Min) * 100.0
m.stats.SD, _ = stats.StandardDeviation(m.stats.Results)
m.stats.CoV = (m.stats.SD / m.stats.Mean) * 100.0
log.Debugf(" Iters is %d", m.stats.Iterations)
log.Debugf(" Min is %f", m.stats.Min)
log.Debugf(" Max is %f", m.stats.Max)
log.Debugf(" Mean is %f", m.stats.Mean)
log.Debugf(" SD is %f", m.stats.SD)
log.Debugf(" CoV is %.2f", m.stats.CoV)
}
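Given the toml annotations on the metrics struct above, a baseline entry could look like the following sketch (the `[[metric]]` table name is assumed from the `ciBasefile.Metric` usage in main.go; the field names come from the struct tags, but the values are purely illustrative):

```toml
[[metric]]
name = "boot-times"
description = "check the mean boot time stays in range"
type = "json"
checktype = "mean"
checkvar = ".Results | .[] | .average.Result"
# Either a fixed min/max pair...
minval = 1.9
maxval = 2.1
# ...or a midpoint plus percentage bounds, which take
# precedence over minval/maxval if both are set:
# midval = 2.0
# minpercent = 20
# maxpercent = 25
```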

View File

@@ -0,0 +1,97 @@
// Copyright (c) 2023 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
package main
import (
"math"
"testing"
"github.com/stretchr/testify/assert"
)
func TestCalculate(t *testing.T) {
assert := assert.New(t)
var m = metrics{
Name: "name",
Description: "desc",
Type: "type",
CheckType: "json",
CheckVar: "Results",
MinVal: 1.9,
MaxVal: 2.1,
Gap: 0,
stats: statistics{
Results: []float64{1.0, 2.0, 3.0},
Iterations: 3,
Mean: 0.0,
Min: 0.0,
Max: 0.0,
Range: 0.0,
RangeSpread: 0.0,
SD: 0.0,
CoV: 0.0}}
m.calculate()
// Constants here calculated from info coded in struct above
// We round Gap to three decimal places - the raw floating point
// math gave us 10.000000000000009 ...
roundedGap := math.Round(m.Gap/0.001) * 0.001
assert.Equal(10.0, roundedGap, "Should be equal")
assert.Equal(2.0, m.stats.Mean, "Should be equal")
assert.Equal(1.0, m.stats.Min, "Should be equal")
assert.Equal(3.0, m.stats.Max, "Should be equal")
assert.Equal(2.0, m.stats.Range, "Should be equal")
assert.Equal(200.0, m.stats.RangeSpread, "Should be equal")
assert.Equal(0.816496580927726, m.stats.SD, "Should be equal")
assert.Equal(40.8248290463863, m.stats.CoV, "Should be equal")
}
// Test that only setting a % range works
func TestCalculate2(t *testing.T) {
assert := assert.New(t)
var m = metrics{
Name: "name",
Description: "desc",
Type: "type",
CheckType: "json",
CheckVar: "Results",
//MinVal: 1.9,
//MaxVal: 2.1,
MinPercent: 20,
MaxPercent: 25,
MidVal: 2.0,
Gap: 0,
stats: statistics{
Results: []float64{1.0, 2.0, 3.0},
Iterations: 3,
Mean: 0.0,
Min: 0.0,
Max: 0.0,
Range: 0.0,
RangeSpread: 0.0,
SD: 0.0,
CoV: 0.0}}
m.calculate()
// Constants here calculated from info coded in struct above
// We round Gap to three decimal places, as the raw floating point
// result carries rounding noise.
roundedGap := math.Round(m.Gap/0.001) * 0.001
// This is not simply (20+25) = 45%, as the asymmetric percentages shift the midpoint away from the 'midval'.
assert.Equal(43.902, roundedGap, "Should be equal")
assert.Equal(2.0, m.stats.Mean, "Should be equal")
assert.Equal(1.0, m.stats.Min, "Should be equal")
assert.Equal(3.0, m.stats.Max, "Should be equal")
assert.Equal(2.0, m.stats.Range, "Should be equal")
assert.Equal(200.0, m.stats.RangeSpread, "Should be equal")
assert.Equal(0.816496580927726, m.stats.SD, "Should be equal")
assert.Equal(40.8248290463863, m.stats.CoV, "Should be equal")
}

View File

@@ -0,0 +1,2 @@
/toml.test
/toml-test

View File

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2013 TOML authors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@@ -0,0 +1,120 @@
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml` packages.
Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).
Documentation: https://godocs.io/github.com/BurntSushi/toml
See the [releases page](https://github.com/BurntSushi/toml/releases) for a
changelog; this information is also in the git tag annotations (e.g. `git show
v0.4.0`).
This library requires Go 1.13 or newer; add it to your go.mod with:
% go get github.com/BurntSushi/toml@latest
It also comes with a TOML validator CLI tool:
% go install github.com/BurntSushi/toml/cmd/tomlv@latest
% tomlv some-toml-file.toml
### Examples
For the simplest example, consider some TOML file as just a list of keys and
values:
```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which can be decoded with:
```go
type Config struct {
Age int
Cats []string
Pi float64
Perfection []int
DOB time.Time
}
var conf Config
_, err := toml.Decode(tomlData, &conf)
```
You can also use struct tags if your struct field name doesn't map to a TOML key
value directly:
```toml
some_key_NAME = "wat"
```
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
}
```
Beware that like other decoders **only exported fields** are considered when
encoding and decoding; private fields are silently ignored.
### Using the `Marshaler` and `encoding.TextUnmarshaler` interfaces
Here's an example that automatically parses values in a `mail.Address`:
```toml
contacts = [
"Donald Duck <donald@duckburg.com>",
"Scrooge McDuck <scrooge@duckburg.com>",
]
```
Can be decoded with:
```go
// Create address type which satisfies the encoding.TextUnmarshaler interface.
type address struct {
*mail.Address
}
func (a *address) UnmarshalText(text []byte) error {
var err error
a.Address, err = mail.ParseAddress(string(text))
return err
}
// Decode it.
func decode() {
blob := `
contacts = [
"Donald Duck <donald@duckburg.com>",
"Scrooge McDuck <scrooge@duckburg.com>",
]
`
var contacts struct {
Contacts []address
}
_, err := toml.Decode(blob, &contacts)
if err != nil {
log.Fatal(err)
}
for _, c := range contacts.Contacts {
fmt.Printf("%#v\n", c.Address)
}
// Output:
// &mail.Address{Name:"Donald Duck", Address:"donald@duckburg.com"}
// &mail.Address{Name:"Scrooge McDuck", Address:"scrooge@duckburg.com"}
}
```
To target TOML specifically you can implement the `UnmarshalTOML` interface in
a similar way.
### More complex usage
See the [`_example/`](/_example) directory for a more complex example.

View File

@@ -0,0 +1,602 @@
package toml
import (
"bytes"
"encoding"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"math"
"os"
"reflect"
"strconv"
"strings"
"time"
)
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of data in TOML format into a pointer v.
//
// See [Decoder] for a description of the decoding process.
func Unmarshal(data []byte, v interface{}) error {
_, err := NewDecoder(bytes.NewReader(data)).Decode(v)
return err
}
// Decode the TOML data in to the pointer v.
//
// See [Decoder] for a description of the decoding process.
func Decode(data string, v interface{}) (MetaData, error) {
return NewDecoder(strings.NewReader(data)).Decode(v)
}
// DecodeFile reads the contents of a file and decodes it with [Decode].
func DecodeFile(path string, v interface{}) (MetaData, error) {
fp, err := os.Open(path)
if err != nil {
return MetaData{}, err
}
defer fp.Close()
return NewDecoder(fp).Decode(v)
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
//
// This type can be used for any value, which will cause decoding to be delayed.
// You can use [PrimitiveDecode] to "manually" decode these values.
//
// NOTE: The underlying representation of a `Primitive` value is subject to
// change. Do not rely on it.
//
// NOTE: Primitive values are still parsed, so using them will only avoid the
// overhead of reflection. They can be useful when you don't know the exact type
// of TOML data until runtime.
type Primitive struct {
undecoded interface{}
context Key
}
// The significand precision for float32 and float64 is 24 and 53 bits; this is
// the range a natural number can be stored in a float without loss of data.
const (
maxSafeFloat32Int = 16777215 // 2^24-1
maxSafeFloat64Int = int64(9007199254740991) // 2^53-1
)
// Decoder decodes TOML data.
//
// TOML tables correspond to Go structs or maps; they can be used
// interchangeably, but structs offer better type safety.
//
// TOML table arrays correspond to either a slice of structs or a slice of maps.
//
// TOML datetimes correspond to [time.Time]. Local datetimes are parsed in the
// local timezone.
//
// [time.Duration] types are treated as nanoseconds if the TOML value is an
// integer, or they're parsed with time.ParseDuration() if they're strings.
//
// All other TOML types (float, string, int, bool and array) correspond to the
// obvious Go types.
//
// An exception to the above rules is if a type implements the TextUnmarshaler
// interface, in which case any primitive TOML value (floats, strings, integers,
// booleans, datetimes) will be converted to a []byte and given to the value's
// UnmarshalText method. See the Unmarshaler example for a demonstration with
// email addresses.
//
// # Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go struct.
// The special `toml` struct tag can be used to map TOML keys to struct fields
// that don't match the key name exactly (see the example). A case insensitive
// match to struct names will be tried if an exact match can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there may
// exist TOML values that cannot be placed into your representation, and there
// may be parts of your representation that do not correspond to TOML values.
// This loose mapping can be made stricter by using the IsDefined and/or
// Undecoded methods on the MetaData returned.
//
// This decoder does not handle cyclic types. Decode will not terminate if a
// cyclic type is passed.
type Decoder struct {
r io.Reader
}
// NewDecoder creates a new Decoder.
func NewDecoder(r io.Reader) *Decoder {
return &Decoder{r: r}
}
var (
unmarshalToml = reflect.TypeOf((*Unmarshaler)(nil)).Elem()
unmarshalText = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
primitiveType = reflect.TypeOf((*Primitive)(nil)).Elem()
)
// Decode TOML data in to the pointer `v`.
func (dec *Decoder) Decode(v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
s := "%q"
if reflect.TypeOf(v) == nil {
s = "%v"
}
return MetaData{}, fmt.Errorf("toml: cannot decode to non-pointer "+s, reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, fmt.Errorf("toml: cannot decode to nil value of %q", reflect.TypeOf(v))
}
// Check if this is a supported type: struct, map, interface{}, or something
// that implements UnmarshalTOML or UnmarshalText.
rv = indirect(rv)
rt := rv.Type()
if rv.Kind() != reflect.Struct && rv.Kind() != reflect.Map &&
!(rv.Kind() == reflect.Interface && rv.NumMethod() == 0) &&
!rt.Implements(unmarshalToml) && !rt.Implements(unmarshalText) {
return MetaData{}, fmt.Errorf("toml: cannot decode to type %s", rt)
}
// TODO: parser should read from io.Reader? Or at the very least, make it
// read from []byte rather than string
data, err := ioutil.ReadAll(dec.r)
if err != nil {
return MetaData{}, err
}
p, err := parse(string(data))
if err != nil {
return MetaData{}, err
}
md := MetaData{
mapping: p.mapping,
keyInfo: p.keyInfo,
keys: p.ordered,
decoded: make(map[string]struct{}, len(p.ordered)),
context: nil,
data: data,
}
return md, md.unify(p.mapping, rv)
}
// PrimitiveDecode is just like the other Decode* functions, except it decodes a
// TOML value that has already been parsed. Valid primitive values can *only* be
// obtained from values filled by the decoder functions, including this method.
// (i.e., v may contain more [Primitive] values.)
//
// Meta data for primitive values is included in the meta data returned by the
// Decode* functions with one exception: keys returned by the Undecoded method
// will only reflect keys that were decoded. Namely, any keys hidden behind a
// Primitive will be considered undecoded. Executing this method will update the
// undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
// TODO: #76 would make this superfluous after implemented.
if rv.Type() == primitiveType {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
rvi := rv.Interface()
if v, ok := rvi.(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
if v, ok := rvi.(encoding.TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// TODO:
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML hash or
// array. In particular, the unmarshaler should only be applied to primitive
// TOML values. But at this point, it will be applied to all kinds of values
// and produce an incorrect error whenever those values are hashes or arrays
// (including arrays of tables).
k := rv.Kind()
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
if rv.NumMethod() > 0 { /// Only empty interfaces are supported.
return md.e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32, reflect.Float64:
return md.unifyFloat64(data, rv)
}
return md.e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return md.e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = struct{}{}
md.context = append(md.context, key)
err := md.unify(datum, subv)
if err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
return md.e("cannot write unexported field %s.%s", rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
keyType := rv.Type().Key().Kind()
if keyType != reflect.String && keyType != reflect.Interface {
return fmt.Errorf("toml: cannot decode to a map with non-string key type (%s in %q)",
keyType, rv.Type())
}
tmap, ok := mapping.(map[string]interface{})
if !ok {
if tmap == nil {
return nil
}
return md.badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = struct{}{}
md.context = append(md.context, k)
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
err := md.unify(v, indirect(rvval))
if err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey := indirect(reflect.New(rv.Type().Key()))
switch keyType {
case reflect.Interface:
rvkey.Set(reflect.ValueOf(k))
case reflect.String:
rvkey.SetString(k)
}
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return md.badtype("slice", data)
}
if l := datav.Len(); l != rv.Len() {
return md.e("expected array length %d; got TOML array of length %d", rv.Len(), l)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return md.badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
l := data.Len()
for i := 0; i < l; i++ {
err := md.unify(data.Index(i).Interface(), indirect(rv.Index(i)))
if err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
_, ok := rv.Interface().(json.Number)
if ok {
if i, ok := data.(int64); ok {
rv.SetString(strconv.FormatInt(i, 10))
} else if f, ok := data.(float64); ok {
rv.SetString(strconv.FormatFloat(f, 'f', -1, 64))
} else {
return md.badtype("string", data)
}
return nil
}
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return md.badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
rvk := rv.Kind()
if num, ok := data.(float64); ok {
switch rvk {
case reflect.Float32:
if num < -math.MaxFloat32 || num > math.MaxFloat32 {
return md.parseErr(errParseRange{i: num, size: rvk.String()})
}
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
if num, ok := data.(int64); ok {
if (rvk == reflect.Float32 && (num < -maxSafeFloat32Int || num > maxSafeFloat32Int)) ||
(rvk == reflect.Float64 && (num < -maxSafeFloat64Int || num > maxSafeFloat64Int)) {
return md.parseErr(errParseRange{i: num, size: rvk.String()})
}
rv.SetFloat(float64(num))
return nil
}
return md.badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
_, ok := rv.Interface().(time.Duration)
if ok {
// Parse as a string duration, and fall back to regular integer parsing
// (as nanoseconds) if this is not a string.
if s, ok := data.(string); ok {
dur, err := time.ParseDuration(s)
if err != nil {
return md.parseErr(errParseDuration{s})
}
rv.SetInt(int64(dur))
return nil
}
}
num, ok := data.(int64)
if !ok {
return md.badtype("integer", data)
}
rvk := rv.Kind()
switch {
case rvk >= reflect.Int && rvk <= reflect.Int64:
if (rvk == reflect.Int8 && (num < math.MinInt8 || num > math.MaxInt8)) ||
(rvk == reflect.Int16 && (num < math.MinInt16 || num > math.MaxInt16)) ||
(rvk == reflect.Int32 && (num < math.MinInt32 || num > math.MaxInt32)) {
return md.parseErr(errParseRange{i: num, size: rvk.String()})
}
rv.SetInt(num)
case rvk >= reflect.Uint && rvk <= reflect.Uint64:
unum := uint64(num)
if rvk == reflect.Uint8 && (num < 0 || unum > math.MaxUint8) ||
rvk == reflect.Uint16 && (num < 0 || unum > math.MaxUint16) ||
rvk == reflect.Uint32 && (num < 0 || unum > math.MaxUint32) {
return md.parseErr(errParseRange{i: num, size: rvk.String()})
}
rv.SetUint(unum)
default:
panic("unreachable")
}
return nil
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return md.badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v encoding.TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case Marshaler:
text, err := sdata.MarshalTOML()
if err != nil {
return err
}
s = string(text)
case encoding.TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return md.badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
func (md *MetaData) badtype(dst string, data interface{}) error {
return md.e("incompatible types: TOML value has type %T; destination has type %s", data, dst)
}
func (md *MetaData) parseErr(err error) error {
k := md.context.String()
return ParseError{
LastKey: k,
Position: md.keyInfo[k].pos,
Line: md.keyInfo[k].pos.Line,
err: err,
input: string(md.data),
}
}
func (md *MetaData) e(format string, args ...interface{}) error {
f := "toml: "
if len(md.context) > 0 {
f = fmt.Sprintf("toml: (last key %q): ", md.context)
p := md.keyInfo[md.context.String()].pos
if p.Line > 0 {
f = fmt.Sprintf("toml: line %d (last key %q): ", p.Line, md.context)
}
}
return fmt.Errorf(f+format, args...)
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
//
// Pointers are followed until the value is not a pointer. New values are
// allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of interest
// to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
pvi := pv.Interface()
if _, ok := pvi.(encoding.TextUnmarshaler); ok {
return pv
}
if _, ok := pvi.(Unmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
rvi := rv.Interface()
if _, ok := rvi.(encoding.TextUnmarshaler); ok {
return true
}
if _, ok := rvi.(Unmarshaler); ok {
return true
}
return false
}

View File

@@ -0,0 +1,19 @@
//go:build go1.16
// +build go1.16
package toml
import (
"io/fs"
)
// DecodeFS reads the contents of a file from [fs.FS] and decodes it with
// [Decode].
func DecodeFS(fsys fs.FS, path string, v interface{}) (MetaData, error) {
fp, err := fsys.Open(path)
if err != nil {
return MetaData{}, err
}
defer fp.Close()
return NewDecoder(fp).Decode(v)
}

View File

@@ -0,0 +1,29 @@
package toml
import (
"encoding"
"io"
)
// TextMarshaler is an alias for encoding.TextMarshaler.
//
// Deprecated: use encoding.TextMarshaler
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is an alias for encoding.TextUnmarshaler.
//
// Deprecated: use encoding.TextUnmarshaler
type TextUnmarshaler encoding.TextUnmarshaler
// PrimitiveDecode is an alias for MetaData.PrimitiveDecode().
//
// Deprecated: use MetaData.PrimitiveDecode.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]struct{})}
return md.unify(primValue.undecoded, rvalue(v))
}
// DecodeReader is an alias for NewDecoder(r).Decode(v).
//
// Deprecated: use NewDecoder(reader).Decode(&value).
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) { return NewDecoder(r).Decode(v) }

View File

@@ -0,0 +1,11 @@
// Package toml implements decoding and encoding of TOML files.
//
// This package supports TOML v1.0.0, as specified at https://toml.io
//
// There is also support for delaying decoding with the Primitive type, and
// querying the set of keys in a TOML document with the MetaData type.
//
// The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
// and can be used to verify whether a TOML document is valid. It can also be used to
// print the type of each key.
package toml

View File

@@ -0,0 +1,759 @@
package toml
import (
"bufio"
"encoding"
"encoding/json"
"errors"
"fmt"
"io"
"math"
"reflect"
"sort"
"strconv"
"strings"
"time"
"github.com/BurntSushi/toml/internal"
)
type tomlEncodeError struct{ error }
var (
errArrayNilElement = errors.New("toml: cannot encode array with nil element")
errNonString = errors.New("toml: cannot encode a map with non-string key type")
errNoKey = errors.New("toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var dblQuotedReplacer = strings.NewReplacer(
"\"", "\\\"",
"\\", "\\\\",
"\x00", `\u0000`,
"\x01", `\u0001`,
"\x02", `\u0002`,
"\x03", `\u0003`,
"\x04", `\u0004`,
"\x05", `\u0005`,
"\x06", `\u0006`,
"\x07", `\u0007`,
"\b", `\b`,
"\t", `\t`,
"\n", `\n`,
"\x0b", `\u000b`,
"\f", `\f`,
"\r", `\r`,
"\x0e", `\u000e`,
"\x0f", `\u000f`,
"\x10", `\u0010`,
"\x11", `\u0011`,
"\x12", `\u0012`,
"\x13", `\u0013`,
"\x14", `\u0014`,
"\x15", `\u0015`,
"\x16", `\u0016`,
"\x17", `\u0017`,
"\x18", `\u0018`,
"\x19", `\u0019`,
"\x1a", `\u001a`,
"\x1b", `\u001b`,
"\x1c", `\u001c`,
"\x1d", `\u001d`,
"\x1e", `\u001e`,
"\x1f", `\u001f`,
"\x7f", `\u007f`,
)
var (
marshalToml = reflect.TypeOf((*Marshaler)(nil)).Elem()
marshalText = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
timeType = reflect.TypeOf((*time.Time)(nil)).Elem()
)
// Marshaler is the interface implemented by types that can marshal themselves
// into valid TOML.
type Marshaler interface {
MarshalTOML() ([]byte, error)
}
// Encoder encodes a Go value to a TOML document.
//
// The mapping between Go values and TOML values should be precisely the same as
// for [Decode].
//
// time.Time is encoded as an RFC 3339 string, and time.Duration as its string
// representation.
//
// The [Marshaler] and [encoding.TextMarshaler] interfaces are supported for
// encoding the value as custom TOML.
//
// If you want to write arbitrary binary data then you will need to use
// something like base64 since TOML does not have any binary types.
//
// When encoding TOML hashes (Go maps or structs), keys without any sub-hashes
// are encoded first.
//
// Go maps will be sorted alphabetically by key for deterministic output.
//
// The toml struct tag can be used to provide the key name; if omitted the
// struct field name will be used. If the "omitempty" option is present, the
// value will be skipped if it is one of the following:
//
// - arrays, slices, maps, and string with len of 0
// - struct with all zero values
// - bool false
//
// If omitzero is given all int and float types with a value of 0 will be
// skipped.
//
// Encoding Go values without a corresponding TOML representation will return an
// error. Examples of this includes maps with non-string keys, slices with nil
// elements, embedded non-struct types, and nested slices containing maps or
// structs. (e.g. [][]map[string]string is not allowed but []map[string]string
// is okay, as is []map[string][]string).
//
// NOTE: only exported keys are encoded due to the use of reflection. Unexported
// keys are silently discarded.
type Encoder struct {
// String to use for a single indentation level; default is two spaces.
Indent string
w *bufio.Writer
hasWritten bool // written any output to w yet?
}
// NewEncoder creates a new Encoder.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
Indent: " ",
}
}
// Encode writes a TOML representation of the Go value to the [Encoder]'s writer.
//
// An error is returned if the value given cannot be encoded to a valid TOML
// document.
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
err := enc.safeEncode(Key([]string{}), rv)
if err != nil {
return err
}
return enc.w.Flush()
}
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
defer func() {
if r := recover(); r != nil {
if terr, ok := r.(tomlEncodeError); ok {
err = terr.error
return
}
panic(r)
}
}()
enc.encode(key, rv)
return nil
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// If we can marshal the type to text, then we use that. This prevents the
// encoder from handling these types as generic structs (or whatever the
// underlying type of a TextMarshaler is).
switch {
case isMarshaler(rv):
enc.writeKeyValue(key, rv, false)
return
case rv.Type() == primitiveType: // TODO: #76 would make this superfluous after implemented.
enc.encode(key, reflect.ValueOf(rv.Interface().(Primitive).undecoded))
return
}
k := rv.Kind()
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.writeKeyValue(key, rv, false)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.writeKeyValue(key, rv, false)
}
case reflect.Interface:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Map:
if rv.IsNil() {
return
}
enc.eTable(key, rv)
case reflect.Ptr:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Struct:
enc.eTable(key, rv)
default:
encPanic(fmt.Errorf("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element.
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time: // Using TextMarshaler adds extra quotes, which we don't want.
format := time.RFC3339Nano
switch v.Location() {
case internal.LocalDatetime:
format = "2006-01-02T15:04:05.999999999"
case internal.LocalDate:
format = "2006-01-02"
case internal.LocalTime:
format = "15:04:05.999999999"
}
switch v.Location() {
default:
enc.wf(v.Format(format))
case internal.LocalDatetime, internal.LocalDate, internal.LocalTime:
enc.wf(v.In(time.UTC).Format(format))
}
return
case Marshaler:
s, err := v.MarshalTOML()
if err != nil {
encPanic(err)
}
if s == nil {
encPanic(errors.New("MarshalTOML returned nil and no error"))
}
enc.w.Write(s)
return
case encoding.TextMarshaler:
s, err := v.MarshalText()
if err != nil {
encPanic(err)
}
if s == nil {
encPanic(errors.New("MarshalText returned nil and no error"))
}
enc.writeQuoted(string(s))
return
case time.Duration:
enc.writeQuoted(v.String())
return
case json.Number:
n, _ := rv.Interface().(json.Number)
if n == "" { /// Useful zero value.
enc.w.WriteByte('0')
return
} else if v, err := n.Int64(); err == nil {
enc.eElement(reflect.ValueOf(v))
return
} else if v, err := n.Float64(); err == nil {
enc.eElement(reflect.ValueOf(v))
return
}
encPanic(fmt.Errorf("unable to convert %q to int64 or float64", n))
}
switch rv.Kind() {
case reflect.Ptr:
enc.eElement(rv.Elem())
return
case reflect.String:
enc.writeQuoted(rv.String())
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
f := rv.Float()
if math.IsNaN(f) {
enc.wf("nan")
} else if math.IsInf(f, 0) {
enc.wf("%cinf", map[bool]byte{true: '-', false: '+'}[math.Signbit(f)])
} else {
enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 32)))
}
case reflect.Float64:
f := rv.Float()
if math.IsNaN(f) {
enc.wf("nan")
} else if math.IsInf(f, 0) {
enc.wf("%cinf", map[bool]byte{true: '-', false: '+'}[math.Signbit(f)])
} else {
enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 64)))
}
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Struct:
enc.eStruct(nil, rv, true)
case reflect.Map:
enc.eMap(nil, rv, true)
case reflect.Interface:
enc.eElement(rv.Elem())
default:
encPanic(fmt.Errorf("unexpected type: %T", rv.Interface()))
}
}
// By the TOML spec, all floats must have a decimal with at least one number on
// either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
}
return fstr
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", dblQuotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := eindirect(rv.Index(i))
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
}
}
enc.wf("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := eindirect(rv.Index(i))
if isNil(trv) {
continue
}
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key)
enc.newline()
enc.eMapOrStruct(key, trv, false)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
if len(key) == 1 {
// Output an extra newline between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key)
enc.newline()
}
enc.eMapOrStruct(key, rv, false)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value, inline bool) {
switch rv.Kind() {
case reflect.Map:
enc.eMap(key, rv, inline)
case reflect.Struct:
enc.eStruct(key, rv, inline)
default:
// Should never happen?
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
}
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsTable(tomlTypeOfGo(eindirect(rv.MapIndex(mapKey)))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string, trailC bool) {
sort.Strings(mapKeys)
for i, mapKey := range mapKeys {
val := eindirect(rv.MapIndex(reflect.ValueOf(mapKey)))
if isNil(val) {
continue
}
if inline {
enc.writeKeyValue(Key{mapKey}, val, true)
if trailC || i != len(mapKeys)-1 {
enc.wf(", ")
}
} else {
enc.encode(key.add(mapKey), val)
}
}
}
if inline {
enc.wf("{")
}
writeMapKeys(mapKeysDirect, len(mapKeysSub) > 0)
writeMapKeys(mapKeysSub, false)
if inline {
enc.wf("}")
}
}
const is32Bit = (32 << (^uint(0) >> 63)) == 32
func pointerTo(t reflect.Type) reflect.Type {
if t.Kind() == reflect.Ptr {
return pointerTo(t.Elem())
}
return t
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table then all keys under it will be in that
// table (not the one we're writing here).
//
// Fields is a [][]int: for fieldsDirect this always has one entry (the
// struct index). For fieldsSub it contains two entries: the parent field
// index from tv, and the field indexes for the fields of the sub.
var (
rt = rv.Type()
fieldsDirect, fieldsSub [][]int
addFields func(rt reflect.Type, rv reflect.Value, start []int)
)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
isEmbed := f.Anonymous && pointerTo(f.Type).Kind() == reflect.Struct
if f.PkgPath != "" && !isEmbed { /// Skip unexported fields.
continue
}
opts := getOptions(f.Tag)
if opts.skip {
continue
}
frv := eindirect(rv.Field(i))
if is32Bit {
// Copy so it works correctly on 32-bit archs; not clear why this
// is needed. See #314, and https://www.reddit.com/r/golang/comments/pnx8v4
// This also works fine on 64-bit, but 32-bit archs are somewhat
// rare and this is a wee bit faster.
copyStart := make([]int, len(start))
copy(copyStart, start)
start = copyStart
}
// Treat anonymous struct fields with tag names as though they are
// not anonymous, like encoding/json does.
//
// Non-struct anonymous fields use the normal encoding logic.
if isEmbed {
if getOptions(f.Tag).name == "" && frv.Kind() == reflect.Struct {
addFields(frv.Type(), frv, append(start, f.Index...))
continue
}
}
if typeIsTable(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
addFields(rt, rv, nil)
writeFields := func(fields [][]int) {
for _, fieldIndex := range fields {
fieldType := rt.FieldByIndex(fieldIndex)
fieldVal := rv.FieldByIndex(fieldIndex)
opts := getOptions(fieldType.Tag)
if opts.skip {
continue
}
if opts.omitempty && isEmpty(fieldVal) {
continue
}
fieldVal = eindirect(fieldVal)
if isNil(fieldVal) { /// Don't write anything for nil fields.
continue
}
keyName := fieldType.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitzero && isZero(fieldVal) {
continue
}
if inline {
enc.writeKeyValue(Key{keyName}, fieldVal, true)
if fieldIndex[0] != len(fields)-1 {
enc.wf(", ")
}
} else {
enc.encode(key.add(keyName), fieldVal)
}
}
}
if inline {
enc.wf("{")
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
if inline {
enc.wf("}")
}
}
// tomlTypeOfGo returns the TOML type name of the Go value's type.
//
// It is used to determine whether the types of array elements are mixed (which
// is forbidden); a nil Go value is never a legal array element.
//
// The returned type may be nil, which means no concrete TOML type could be
// found.
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
if rv.Kind() == reflect.Struct {
if rv.Type() == timeType {
return tomlDatetime
}
if isMarshaler(rv) {
return tomlString
}
return tomlHash
}
if isMarshaler(rv) {
return tomlString
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64:
return tomlInteger
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if isTableArray(rv) {
return tomlArrayHash
}
return tomlArray
case reflect.Ptr, reflect.Interface:
return tomlTypeOfGo(rv.Elem())
case reflect.String:
return tomlString
case reflect.Map:
return tomlHash
default:
encPanic(errors.New("unsupported type: " + rv.Kind().String()))
panic("unreachable")
}
}
func isMarshaler(rv reflect.Value) bool {
return rv.Type().Implements(marshalText) || rv.Type().Implements(marshalToml)
}
// isTableArray reports if all entries in the array or slice are a table.
func isTableArray(arr reflect.Value) bool {
if isNil(arr) || !arr.IsValid() || arr.Len() == 0 {
return false
}
ret := true
for i := 0; i < arr.Len(); i++ {
tt := tomlTypeOfGo(eindirect(arr.Index(i)))
// Don't allow nil.
if tt == nil {
encPanic(errArrayNilElement)
}
if ret && !typeEqual(tomlHash, tt) {
ret = false
}
}
return ret
}
type tagOptions struct {
skip bool // "-"
name string
omitempty bool
omitzero bool
}
func getOptions(tag reflect.StructTag) tagOptions {
t := tag.Get("toml")
if t == "-" {
return tagOptions{skip: true}
}
var opts tagOptions
parts := strings.Split(t, ",")
opts.name = parts[0]
for _, s := range parts[1:] {
switch s {
case "omitempty":
opts.omitempty = true
case "omitzero":
opts.omitzero = true
}
}
return opts
}
func isZero(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint() == 0
case reflect.Float32, reflect.Float64:
return rv.Float() == 0.0
}
return false
}
func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Struct:
if rv.Type().Comparable() {
return reflect.Zero(rv.Type()).Interface() == rv.Interface()
}
// Need to also check if all the fields are empty, otherwise something
// like this with uncomparable types will always return true:
//
// type a struct{ field b }
// type b struct{ s []string }
// s := a{field: b{s: []string{"AAA"}}}
for i := 0; i < rv.NumField(); i++ {
if !isEmpty(rv.Field(i)) {
return false
}
}
return true
case reflect.Bool:
return !rv.Bool()
case reflect.Ptr:
return rv.IsNil()
}
return false
}
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
}
}
// Write a key/value pair:
//
// key = <any value>
//
// This is also used for "k = v" in inline tables; so something like this will
// be written in three calls:
//
// ┌───────────────────┐
// │ ┌───┐ ┌────┐│
// v v v v vv
// key = {k = 1, k2 = 2}
func (enc *Encoder) writeKeyValue(key Key, val reflect.Value, inline bool) {
/// Marshaler used on top-level document; call eElement() to just call
/// Marshal{TOML,Text}.
if len(key) == 0 {
enc.eElement(val)
return
}
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
if !inline {
enc.newline()
}
}
func (enc *Encoder) wf(format string, v ...interface{}) {
_, err := fmt.Fprintf(enc.w, format, v...)
if err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) indentStr(key Key) string {
return strings.Repeat(enc.Indent, len(key)-1)
}
func encPanic(err error) {
panic(tomlEncodeError{err})
}
// Resolve any level of pointers to the actual value (e.g. **string → string).
func eindirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr && v.Kind() != reflect.Interface {
if isMarshaler(v) {
return v
}
if v.CanAddr() { /// Special case for marshalers; see #358.
if pv := v.Addr(); isMarshaler(pv) {
return pv
}
}
return v
}
if v.IsNil() {
return v
}
return eindirect(v.Elem())
}
func isNil(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}

View File

@@ -0,0 +1,279 @@
package toml
import (
"fmt"
"strings"
)
// ParseError is returned when there is an error parsing the TOML syntax such as
// invalid syntax, duplicate keys, etc.
//
// In addition to the error message itself, you can also print detailed location
// information with context by using [ErrorWithPosition]:
//
// toml: error: Key 'fruit' was already created and cannot be used as an array.
//
// At line 4, column 2-7:
//
// 2 | fruit = []
// 3 |
// 4 | [[fruit]] # Not allowed
// ^^^^^
//
// [ErrorWithUsage] can be used to print the above with some more detailed usage
// guidance:
//
// toml: error: newlines not allowed within inline tables
//
// At line 1, column 18:
//
// 1 | x = [{ key = 42 #
// ^
//
// Error help:
//
// Inline tables must always be on a single line:
//
// table = {key = 42, second = 43}
//
// It is invalid to split them over multiple lines like so:
//
// # INVALID
// table = {
// key = 42,
// second = 43
// }
//
// Use regular tables for this:
//
// [table]
// key = 42
// second = 43
type ParseError struct {
Message string // Short technical message.
Usage string // Longer message with usage guidance; may be blank.
Position Position // Position of the error
LastKey string // Last parsed key, may be blank.
// Line the error occurred.
//
// Deprecated: use [Position].
Line int
err error
input string
}
// Position of an error.
type Position struct {
Line int // Line number, starting at 1.
Start int // Start of error, as byte offset starting at 0.
Len int // Length in bytes.
}
func (pe ParseError) Error() string {
msg := pe.Message
if msg == "" { // Error from errorf()
msg = pe.err.Error()
}
if pe.LastKey == "" {
return fmt.Sprintf("toml: line %d: %s", pe.Position.Line, msg)
}
return fmt.Sprintf("toml: line %d (last key %q): %s",
pe.Position.Line, pe.LastKey, msg)
}
// ErrorWithPosition returns the error with detailed location context.
//
// See the documentation on [ParseError].
func (pe ParseError) ErrorWithPosition() string {
if pe.input == "" { // Should never happen, but just in case.
return pe.Error()
}
var (
lines = strings.Split(pe.input, "\n")
col = pe.column(lines)
b = new(strings.Builder)
)
msg := pe.Message
if msg == "" {
msg = pe.err.Error()
}
// TODO: don't show control characters as literals? This may not show up
// well everywhere.
if pe.Position.Len == 1 {
fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d:\n\n",
msg, pe.Position.Line, col+1)
} else {
fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d-%d:\n\n",
msg, pe.Position.Line, col, col+pe.Position.Len)
}
if pe.Position.Line > 2 {
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-2, lines[pe.Position.Line-3])
}
if pe.Position.Line > 1 {
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-1, lines[pe.Position.Line-2])
}
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line, lines[pe.Position.Line-1])
fmt.Fprintf(b, "% 10s%s%s\n", "", strings.Repeat(" ", col), strings.Repeat("^", pe.Position.Len))
return b.String()
}
// ErrorWithUsage returns the error with detailed location context and usage
// guidance.
//
// See the documentation on [ParseError].
func (pe ParseError) ErrorWithUsage() string {
m := pe.ErrorWithPosition()
if u, ok := pe.err.(interface{ Usage() string }); ok && u.Usage() != "" {
lines := strings.Split(strings.TrimSpace(u.Usage()), "\n")
for i := range lines {
if lines[i] != "" {
lines[i] = " " + lines[i]
}
}
return m + "Error help:\n\n" + strings.Join(lines, "\n") + "\n"
}
return m
}
func (pe ParseError) column(lines []string) int {
var pos, col int
for i := range lines {
ll := len(lines[i]) + 1 // +1 for the removed newline
if pos+ll >= pe.Position.Start {
col = pe.Position.Start - pos
if col < 0 { // Should never happen, but just in case.
col = 0
}
break
}
pos += ll
}
return col
}
type (
errLexControl struct{ r rune }
errLexEscape struct{ r rune }
errLexUTF8 struct{ b byte }
errLexInvalidNum struct{ v string }
errLexInvalidDate struct{ v string }
errLexInlineTableNL struct{}
errLexStringNL struct{}
errParseRange struct {
i interface{} // int or float
size string // "int64", "uint16", etc.
}
errParseDuration struct{ d string }
)
func (e errLexControl) Error() string {
return fmt.Sprintf("TOML files cannot contain control characters: '0x%02x'", e.r)
}
func (e errLexControl) Usage() string { return "" }
func (e errLexEscape) Error() string { return fmt.Sprintf(`invalid escape in string '\%c'`, e.r) }
func (e errLexEscape) Usage() string { return usageEscape }
func (e errLexUTF8) Error() string { return fmt.Sprintf("invalid UTF-8 byte: 0x%02x", e.b) }
func (e errLexUTF8) Usage() string { return "" }
func (e errLexInvalidNum) Error() string { return fmt.Sprintf("invalid number: %q", e.v) }
func (e errLexInvalidNum) Usage() string { return "" }
func (e errLexInvalidDate) Error() string { return fmt.Sprintf("invalid date: %q", e.v) }
func (e errLexInvalidDate) Usage() string { return "" }
func (e errLexInlineTableNL) Error() string { return "newlines not allowed within inline tables" }
func (e errLexInlineTableNL) Usage() string { return usageInlineNewline }
func (e errLexStringNL) Error() string { return "strings cannot contain newlines" }
func (e errLexStringNL) Usage() string { return usageStringNewline }
func (e errParseRange) Error() string { return fmt.Sprintf("%v is out of range for %s", e.i, e.size) }
func (e errParseRange) Usage() string { return usageIntOverflow }
func (e errParseDuration) Error() string { return fmt.Sprintf("invalid duration: %q", e.d) }
func (e errParseDuration) Usage() string { return usageDuration }
const usageEscape = `
A '\' inside a "-delimited string is interpreted as an escape character.
The following escape sequences are supported:
\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX
To prevent a '\' from being recognized as an escape character, use either:
- a ' or '''-delimited string; escape characters aren't processed in them; or
- write two backslashes to get a single backslash: '\\'.
If you're trying to add a Windows path (e.g. "C:\Users\martin") then using '/'
instead of '\' will usually also work: "C:/Users/martin".
`
const usageInlineNewline = `
Inline tables must always be on a single line:
table = {key = 42, second = 43}
It is invalid to split them over multiple lines like so:
# INVALID
table = {
key = 42,
second = 43
}
Use regular tables for this:
[table]
key = 42
second = 43
`
const usageStringNewline = `
Strings must always be on a single line, and cannot span more than one line:
# INVALID
string = "Hello,
world!"
Instead use """ or ''' to split strings over multiple lines:
string = """Hello,
world!"""
`
const usageIntOverflow = `
This number is too large; this may be an error in the TOML, but it can also be a
bug in the program that uses too small an integer type.
The maximum and minimum values are:
size │ lowest │ highest
───────┼────────────────┼──────────
int8 │ -128 │ 127
int16 │ -32,768 │ 32,767
int32 │ -2,147,483,648 │ 2,147,483,647
int64 │ -9.2 × 10¹⁷ │ 9.2 × 10¹⁷
uint8 │ 0 │ 255
uint16 │ 0 │ 65535
uint32 │ 0 │ 4294967295
uint64 │ 0 │ 1.8 × 10¹⁸
int refers to int32 on 32-bit systems and int64 on 64-bit systems.
`
const usageDuration = `
A duration must be written as "number<unit>", without any spaces. Valid units are:
ns nanoseconds (billionth of a second)
us, µs microseconds (millionth of a second)
ms milliseconds (thousandths of a second)
s seconds
m minutes
h hours
You can combine multiple units; for example "5m10s" for 5 minutes and 10
seconds.
`


@@ -0,0 +1,36 @@
package internal
import "time"
// Timezones used for local datetime, date, and time TOML types.
//
// The exact way times and dates without a timezone should be interpreted is not
// well-defined in the TOML specification and left to the implementation. These
// default to the current local timezone offset of the computer, but this can be
// changed by changing these variables before decoding.
//
// TODO:
// Ideally we'd like to offer people the ability to configure the used timezone
// by setting Decoder.Timezone and Encoder.Timezone; however, this is a bit
// tricky: the reason we use three different variables for this is to support
// round-tripping: without these specific TZ names we wouldn't know which
// format to use.
//
// There isn't a good way to encode this right now though, and passing this sort
// of information also ties in to various related issues such as string format
// encoding, encoding of comments, etc.
//
// So, for the time being, just put this in internal until we can write a good
// comprehensive API for doing all of this.
//
// They're exported because they're referred to from e.g. internal/tag.
//
// Note that this behaviour is valid according to the TOML spec as the exact
// behaviour is left up to implementations.
var (
localOffset = func() int { _, o := time.Now().Zone(); return o }()
LocalDatetime = time.FixedZone("datetime-local", localOffset)
LocalDate = time.FixedZone("date-local", localOffset)
LocalTime = time.FixedZone("time-local", localOffset)
)

File diff suppressed because it is too large


@@ -0,0 +1,121 @@
package toml
import (
"strings"
)
// MetaData allows access to meta information about TOML data that's not
// accessible otherwise.
//
// It allows checking if a key is defined in the TOML data, whether any keys
// were undecoded, and the TOML type of a key.
type MetaData struct {
context Key // Used only during decoding.
keyInfo map[string]keyInfo
mapping map[string]interface{}
keys []Key
decoded map[string]struct{}
data []byte // Input file; for errors.
}
// IsDefined reports if the key exists in the TOML data.
//
// The key should be specified hierarchically, for example to access the TOML
// key "a.b.c" you would use IsDefined("a", "b", "c"). Keys are case sensitive.
//
// Returns false for an empty key.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var (
hash map[string]interface{}
ok bool
hashOrVal interface{} = md.mapping
)
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that does
// not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
if ki, ok := md.keyInfo[Key(key).String()]; ok {
return ki.tomlType.typeString()
}
return ""
}
// Keys returns a slice of every key in the TOML data, including key groups.
//
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific. The list will have the same
// order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a [Primitive] value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if _, ok := md.decoded[key.String()]; !ok {
undecoded = append(undecoded, key)
}
}
return undecoded
}
// Key represents any TOML key, including key groups. Use [MetaData.Keys] to get
// values of this type.
type Key []string
func (k Key) String() string {
ss := make([]string, len(k))
for i := range k {
ss[i] = k.maybeQuoted(i)
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
if k[i] == "" {
return `""`
}
for _, c := range k[i] {
if !isBareKeyChar(c, false) {
return `"` + dblQuotedReplacer.Replace(k[i]) + `"`
}
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
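`Key.String` joins the parts with `.`, quoting any part that is not a bare key via `maybeQuoted`. A standalone sketch of the same quoting rule (`isBareKeyByte` and `quoteDotted` are my names; `isBareKeyByte` approximates the unexported `isBareKeyChar` using the TOML 1.0 bare-key alphabet, and the simple `\"` escaping stands in for `dblQuotedReplacer`):

```go
package main

import (
	"fmt"
	"strings"
)

// isBareKeyByte reports whether r may appear in a TOML 1.0 bare key:
// A-Z, a-z, 0-9, '-' and '_'.
func isBareKeyByte(r rune) bool {
	return r >= 'A' && r <= 'Z' || r >= 'a' && r <= 'z' ||
		r >= '0' && r <= '9' || r == '-' || r == '_'
}

// quoteDotted joins key parts with '.', quoting any part that is empty or
// contains non-bare characters.
func quoteDotted(parts []string) string {
	out := make([]string, len(parts))
	for i, p := range parts {
		quote := p == ""
		for _, r := range p {
			if !isBareKeyByte(r) {
				quote = true
				break
			}
		}
		if quote {
			out[i] = `"` + strings.ReplaceAll(p, `"`, `\"`) + `"`
		} else {
			out[i] = p
		}
	}
	return strings.Join(out, ".")
}

func main() {
	fmt.Println(quoteDotted([]string{"fruit", "name with space", ""}))
	// fruit."name with space".""
}
```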


@@ -0,0 +1,811 @@
package toml
import (
"fmt"
"os"
"strconv"
"strings"
"time"
"unicode/utf8"
"github.com/BurntSushi/toml/internal"
)
type parser struct {
lx *lexer
context Key // Full key for the current hash in scope.
currentKey string // Base key name for everything except hashes.
pos Position // Current position in the TOML file.
tomlNext bool
ordered []Key // List of keys in the order that they appear in the TOML data.
keyInfo map[string]keyInfo // Map keyname → info about the TOML key.
mapping map[string]interface{} // Map keyname → key value.
implicits map[string]struct{} // Record implicit keys (e.g. "key.group.names").
}
type keyInfo struct {
pos Position
tomlType tomlType
}
func parse(data string) (p *parser, err error) {
_, tomlNext := os.LookupEnv("BURNTSUSHI_TOML_110")
defer func() {
if r := recover(); r != nil {
if pErr, ok := r.(ParseError); ok {
pErr.input = data
err = pErr
return
}
panic(r)
}
}()
// Read over BOM; do this here as the lexer calls utf8.DecodeRuneInString()
// which mangles stuff. UTF-16 BOM isn't strictly valid, but some tools add
// it anyway.
if strings.HasPrefix(data, "\xff\xfe") || strings.HasPrefix(data, "\xfe\xff") { // UTF-16
data = data[2:]
} else if strings.HasPrefix(data, "\xef\xbb\xbf") { // UTF-8
data = data[3:]
}
// Examine first few bytes for NULL bytes; this probably means it's a UTF-16
// file (second byte in surrogate pair being NULL). Again, do this here to
// avoid having to deal with UTF-8/16 stuff in the lexer.
ex := 6
if len(data) < 6 {
ex = len(data)
}
if i := strings.IndexRune(data[:ex], 0); i > -1 {
return nil, ParseError{
Message: "files cannot contain NULL bytes; probably using UTF-16; TOML files must be UTF-8",
Position: Position{Line: 1, Start: i, Len: 1},
Line: 1,
input: data,
}
}
p = &parser{
keyInfo: make(map[string]keyInfo),
mapping: make(map[string]interface{}),
lx: lex(data, tomlNext),
ordered: make([]Key, 0),
implicits: make(map[string]struct{}),
tomlNext: tomlNext,
}
for {
item := p.next()
if item.typ == itemEOF {
break
}
p.topLevel(item)
}
return p, nil
}
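The BOM and NUL-byte handling at the top of `parse` can be isolated into a small pre-scan. A stdlib-only sketch of those two checks (`preScan` is my name; the logic mirrors the function above):

```go
package main

import (
	"fmt"
	"strings"
)

// preScan strips a UTF-8 or UTF-16 BOM, then rejects input whose first few
// bytes contain a NUL byte, which almost always means the file is UTF-16
// rather than UTF-8.
func preScan(data string) (string, error) {
	if strings.HasPrefix(data, "\xff\xfe") || strings.HasPrefix(data, "\xfe\xff") { // UTF-16 BOM
		data = data[2:]
	} else if strings.HasPrefix(data, "\xef\xbb\xbf") { // UTF-8 BOM
		data = data[3:]
	}
	ex := 6
	if len(data) < ex {
		ex = len(data)
	}
	if i := strings.IndexRune(data[:ex], 0); i > -1 {
		return "", fmt.Errorf("NULL byte at offset %d; probably UTF-16; TOML files must be UTF-8", i)
	}
	return data, nil
}

func main() {
	out, err := preScan("\xef\xbb\xbfkey = 1")
	fmt.Println(out, err) // key = 1 <nil>

	_, err = preScan("k\x00e\x00y\x00") // UTF-16LE-looking input
	fmt.Println(err != nil)             // true
}
```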
func (p *parser) panicErr(it item, err error) {
panic(ParseError{
err: err,
Position: it.pos,
Line: it.pos.Len,
LastKey: p.current(),
})
}
func (p *parser) panicItemf(it item, format string, v ...interface{}) {
panic(ParseError{
Message: fmt.Sprintf(format, v...),
Position: it.pos,
Line: it.pos.Len,
LastKey: p.current(),
})
}
func (p *parser) panicf(format string, v ...interface{}) {
panic(ParseError{
Message: fmt.Sprintf(format, v...),
Position: p.pos,
Line: p.pos.Line,
LastKey: p.current(),
})
}
func (p *parser) next() item {
it := p.lx.nextItem()
//fmt.Printf("ITEM %-18s line %-3d │ %q\n", it.typ, it.pos.Line, it.val)
if it.typ == itemError {
if it.err != nil {
panic(ParseError{
Position: it.pos,
Line: it.pos.Line,
LastKey: p.current(),
err: it.err,
})
}
p.panicItemf(it, "%s", it.val)
}
return it
}
func (p *parser) nextPos() item {
it := p.next()
p.pos = it.pos
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
func (p *parser) expect(typ itemType) item {
it := p.next()
p.assertEqual(typ, it.typ)
return it
}
func (p *parser) assertEqual(expected, got itemType) {
if expected != got {
p.bug("Expected '%s' but got '%s'.", expected, got)
}
}
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart: // # ..
p.expect(itemText)
case itemTableStart: // [ .. ]
name := p.nextPos()
var key Key
for ; name.typ != itemTableEnd && name.typ != itemEOF; name = p.next() {
key = append(key, p.keyString(name))
}
p.assertEqual(itemTableEnd, name.typ)
p.addContext(key, false)
p.setType("", tomlHash, item.pos)
p.ordered = append(p.ordered, key)
case itemArrayTableStart: // [[ .. ]]
name := p.nextPos()
var key Key
for ; name.typ != itemArrayTableEnd && name.typ != itemEOF; name = p.next() {
key = append(key, p.keyString(name))
}
p.assertEqual(itemArrayTableEnd, name.typ)
p.addContext(key, true)
p.setType("", tomlArrayHash, item.pos)
p.ordered = append(p.ordered, key)
case itemKeyStart: // key = ..
outerContext := p.context
/// Read all the key parts (e.g. 'a' and 'b' in 'a.b')
k := p.nextPos()
var key Key
for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
key = append(key, p.keyString(k))
}
p.assertEqual(itemKeyEnd, k.typ)
/// The current key is the last part.
p.currentKey = key[len(key)-1]
/// All the other parts (if any) are the context; need to set each part
/// as implicit.
context := key[:len(key)-1]
for i := range context {
p.addImplicitContext(append(p.context, context[i:i+1]...))
}
p.ordered = append(p.ordered, p.context.add(p.currentKey))
/// Set value.
vItem := p.next()
val, typ := p.value(vItem, false)
p.set(p.currentKey, val, typ, vItem.pos)
/// Remove the context we added (preserving any context from [tbl] lines).
p.context = outerContext
p.currentKey = ""
default:
p.bug("Unexpected type at top level: %s", item.typ)
}
}
// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
switch it.typ {
case itemText:
return it.val
case itemString, itemMultilineString,
itemRawString, itemRawMultilineString:
s, _ := p.value(it, false)
return s.(string)
default:
p.bug("Unexpected key type: %s", it.typ)
}
panic("unreachable")
}
var datetimeRepl = strings.NewReplacer(
"z", "Z",
"t", "T",
" ", "T")
// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item, parentIsArray bool) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it, it.val), p.typeOfPrimitive(it)
case itemMultilineString:
return p.replaceEscapes(it, p.stripEscapedNewlines(stripFirstNewline(it.val))), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
case itemInteger:
return p.valueInteger(it)
case itemFloat:
return p.valueFloat(it)
case itemBool:
switch it.val {
case "true":
return true, p.typeOfPrimitive(it)
case "false":
return false, p.typeOfPrimitive(it)
default:
p.bug("Expected boolean value, but got '%s'.", it.val)
}
case itemDatetime:
return p.valueDatetime(it)
case itemArray:
return p.valueArray(it)
case itemInlineTableStart:
return p.valueInlineTable(it, parentIsArray)
default:
p.bug("Unexpected value type: %s", it.typ)
}
panic("unreachable")
}
func (p *parser) valueInteger(it item) (interface{}, tomlType) {
if !numUnderscoresOK(it.val) {
p.panicItemf(it, "Invalid integer %q: underscores must be surrounded by digits", it.val)
}
if numHasLeadingZero(it.val) {
p.panicItemf(it, "Invalid integer %q: cannot have leading zeroes", it.val)
}
num, err := strconv.ParseInt(it.val, 0, 64)
if err != nil {
// Distinguish integer values. Normally, it'd be a bug if the lexer
// provides an invalid integer, but it's possible that the number is
// out of range of valid values (which the lexer cannot determine).
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
p.panicErr(it, errParseRange{i: it.val, size: "int64"})
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
}
return num, p.typeOfPrimitive(it)
}
func (p *parser) valueFloat(it item) (interface{}, tomlType) {
parts := strings.FieldsFunc(it.val, func(r rune) bool {
switch r {
case '.', 'e', 'E':
return true
}
return false
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicItemf(it, "Invalid float %q: underscores must be surrounded by digits", it.val)
}
}
if len(parts) > 0 && numHasLeadingZero(parts[0]) {
p.panicItemf(it, "Invalid float %q: cannot have leading zeroes", it.val)
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicItemf(it, "Invalid float %q: '.' must be followed by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
if val == "+nan" || val == "-nan" { // Go doesn't support this, but TOML spec does.
val = "nan"
}
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
p.panicErr(it, errParseRange{i: it.val, size: "float64"})
} else {
p.panicItemf(it, "Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
}
var dtTypes = []struct {
fmt string
zone *time.Location
next bool
}{
{time.RFC3339Nano, time.Local, false},
{"2006-01-02T15:04:05.999999999", internal.LocalDatetime, false},
{"2006-01-02", internal.LocalDate, false},
{"15:04:05.999999999", internal.LocalTime, false},
// tomlNext
{"2006-01-02T15:04Z07:00", time.Local, true},
{"2006-01-02T15:04", internal.LocalDatetime, true},
{"15:04", internal.LocalTime, true},
}
func (p *parser) valueDatetime(it item) (interface{}, tomlType) {
it.val = datetimeRepl.Replace(it.val)
var (
t time.Time
ok bool
err error
)
for _, dt := range dtTypes {
if dt.next && !p.tomlNext {
continue
}
t, err = time.ParseInLocation(dt.fmt, it.val, dt.zone)
if err == nil {
ok = true
break
}
}
if !ok {
p.panicItemf(it, "Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
}
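`valueDatetime` tries each layout in `dtTypes` in order and keeps the first that parses. A standalone sketch of that fallback loop (layout strings copied from the non-tomlNext entries above; I substitute `time.UTC` for the per-type zones to keep the sketch self-contained):

```go
package main

import (
	"fmt"
	"time"
)

var layouts = []string{
	time.RFC3339Nano,
	"2006-01-02T15:04:05.999999999", // datetime-local
	"2006-01-02",                    // date-local
	"15:04:05.999999999",            // time-local
}

// parseFirst returns the result of the first layout that parses s.
func parseFirst(s string) (time.Time, error) {
	var lastErr error
	for _, l := range layouts {
		t, err := time.ParseInLocation(l, s, time.UTC)
		if err == nil {
			return t, nil
		}
		lastErr = err
	}
	return time.Time{}, lastErr
}

func main() {
	t, err := parseFirst("1979-05-27") // matches the date-local layout
	fmt.Println(t.Year(), err)         // 1979 <nil>
}
```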
func (p *parser) valueArray(it item) (interface{}, tomlType) {
p.setType(p.currentKey, tomlArray, it.pos)
var (
types []tomlType
// Initialize to a non-nil empty slice. This makes it consistent with
// how S = [] decodes into a non-nil slice inside something like struct
// { S []string }. See #338
array = []interface{}{}
)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
val, typ := p.value(it, true)
array = append(array, val)
types = append(types, typ)
// XXX: types isn't used here, we need it to record the accurate type
// information.
//
// Not entirely sure how to best store this; could use "key[0]",
// "key[1]" notation, or maybe store it on the Array type?
_ = types
}
return array, tomlArray
}
func (p *parser) valueInlineTable(it item, parentIsArray bool) (interface{}, tomlType) {
var (
hash = make(map[string]interface{})
outerContext = p.context
outerKey = p.currentKey
)
p.context = append(p.context, p.currentKey)
prevContext := p.context
p.currentKey = ""
p.addImplicit(p.context)
p.addContext(p.context, parentIsArray)
/// Loop over all table key/value pairs.
for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
/// Read all key parts.
k := p.nextPos()
var key Key
for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
key = append(key, p.keyString(k))
}
p.assertEqual(itemKeyEnd, k.typ)
/// The current key is the last part.
p.currentKey = key[len(key)-1]
/// All the other parts (if any) are the context; need to set each part
/// as implicit.
context := key[:len(key)-1]
for i := range context {
p.addImplicitContext(append(p.context, context[i:i+1]...))
}
p.ordered = append(p.ordered, p.context.add(p.currentKey))
/// Set the value.
val, typ := p.value(p.next(), false)
p.set(p.currentKey, val, typ, it.pos)
hash[p.currentKey] = val
/// Restore context.
p.context = prevContext
}
p.context = outerContext
p.currentKey = outerKey
return hash, tomlHash
}
// numHasLeadingZero checks if this number has leading zeroes, allowing for '0',
// +/- signs, and base prefixes.
func numHasLeadingZero(s string) bool {
if len(s) > 1 && s[0] == '0' && !(s[1] == 'b' || s[1] == 'o' || s[1] == 'x') { // Allow 0b, 0o, 0x
return true
}
if len(s) > 2 && (s[0] == '-' || s[0] == '+') && s[1] == '0' {
return true
}
return false
}
// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
switch s {
case "nan", "+nan", "-nan", "inf", "-inf", "+inf":
return true
}
accept := false
for _, r := range s {
if r == '_' {
if !accept {
return false
}
}
// isHexadecimal is a superset of all the permissible characters
// surrounding an underscore.
accept = isHexadecimal(r)
}
return accept
}
// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
period := false
for _, r := range s {
if period && !isDigit(r) {
return false
}
period = r == '.'
}
return !period
}
// Set the current context of the parser, where the context is either a hash or
// an array of hashes, depending on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
func (p *parser) addContext(key Key, array bool) {
var ok bool
// Always start at the top level and drill down for our context.
hashContext := p.mapping
keyContext := make(Key, 0)
// We only need implicit hashes for key[0:-1]
for _, k := range key[0 : len(key)-1] {
_, ok = hashContext[k]
keyContext = append(keyContext, k)
// No key? Make an implicit hash and move on.
if !ok {
p.addImplicit(keyContext)
hashContext[k] = make(map[string]interface{})
}
// If the hash context is actually an array of tables, then set
// the hash context to the last element in that array.
//
// Otherwise, it better be a table, since this MUST be a key group (by
// virtue of it not being the last element in a key).
switch t := hashContext[k].(type) {
case []map[string]interface{}:
hashContext = t[len(t)-1]
case map[string]interface{}:
hashContext = t
default:
p.panicf("Key '%s' was already created as a hash.", keyContext)
}
}
p.context = keyContext
if array {
// If this is the first element for this array, then allocate a new
// list of tables for it.
k := key[len(key)-1]
if _, ok := hashContext[k]; !ok {
hashContext[k] = make([]map[string]interface{}, 0, 4)
}
// Add a new table. But make sure the key hasn't already been used
// for something else.
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as an array.", key)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
}
p.context = append(p.context, key[len(key)-1])
}
// set calls setValue and setType.
func (p *parser) set(key string, val interface{}, typ tomlType, pos Position) {
p.setValue(key, val)
p.setType(key, typ, pos)
}
// setValue sets the given key to the given value in the current context.
// It will make sure that the key hasn't already been defined, accounting for
// implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
var (
tmpHash interface{}
ok bool
hash = p.mapping
keyContext Key
)
for _, k := range p.context {
keyContext = append(keyContext, k)
if tmpHash, ok = hash[k]; !ok {
p.bug("Context for key '%s' has not been established.", keyContext)
}
switch t := tmpHash.(type) {
case []map[string]interface{}:
// The context is a table of hashes. Pick the most recent table
// defined as the current hash.
hash = t[len(t)-1]
case map[string]interface{}:
hash = t
default:
p.panicf("Key '%s' has already been defined.", keyContext)
}
}
keyContext = append(keyContext, key)
if _, ok := hash[key]; ok {
// Normally redefining keys isn't allowed, but the key could have been
// defined implicitly and it's allowed to be redefined concretely. (See
// the `valid/implicit-and-explicit-after.toml` in toml-test)
//
// But we have to make sure to stop marking it as implicit. (So that
// another redefinition provokes an error.)
//
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isArray(keyContext) {
p.removeImplicit(keyContext)
hash[key] = value
return
}
if p.isImplicit(keyContext) {
p.removeImplicit(keyContext)
return
}
// Otherwise, we have a concrete key trying to override a previous
// key, which is *always* wrong.
p.panicf("Key '%s' has already been defined.", keyContext)
}
hash[key] = value
}
// setType sets the type of a particular value at a given key. It should be
// called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType, pos Position) {
keyContext := make(Key, 0, len(p.context)+1)
keyContext = append(keyContext, p.context...)
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
// Special case to make empty keys ("" = 1) work.
// Without it it will set "" rather than `""`.
// TODO: why is this needed? And why is this only needed here?
if len(keyContext) == 0 {
keyContext = Key{""}
}
p.keyInfo[keyContext.String()] = keyInfo{tomlType: typ, pos: pos}
}
// Implicit keys need to be created when tables are implied in "a.b.c.d = 1" and
// "[a.b.c]" (the "a", "b", and "c" hashes are never created explicitly).
func (p *parser) addImplicit(key Key) { p.implicits[key.String()] = struct{}{} }
func (p *parser) removeImplicit(key Key) { delete(p.implicits, key.String()) }
func (p *parser) isImplicit(key Key) bool { _, ok := p.implicits[key.String()]; return ok }
func (p *parser) isArray(key Key) bool { return p.keyInfo[key.String()].tomlType == tomlArray }
func (p *parser) addImplicitContext(key Key) { p.addImplicit(key); p.addContext(key, false) }
// current returns the full key name of the current context.
func (p *parser) current() string {
if len(p.currentKey) == 0 {
return p.context.String()
}
if len(p.context) == 0 {
return p.currentKey
}
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}
func stripFirstNewline(s string) string {
if len(s) > 0 && s[0] == '\n' {
return s[1:]
}
if len(s) > 1 && s[0] == '\r' && s[1] == '\n' {
return s[2:]
}
return s
}
// stripEscapedNewlines removes whitespace after line-ending backslashes in
// multiline strings.
//
// A line-ending backslash is an unescaped \ followed only by whitespace until
// the next newline. After a line-ending backslash, all whitespace is removed
// until the next non-whitespace character.
func (p *parser) stripEscapedNewlines(s string) string {
var b strings.Builder
var i int
for {
ix := strings.Index(s[i:], `\`)
if ix < 0 {
b.WriteString(s)
return b.String()
}
i += ix
if len(s) > i+1 && s[i+1] == '\\' {
// Escaped backslash.
i += 2
continue
}
// Scan until the next non-whitespace.
j := i + 1
whitespaceLoop:
for ; j < len(s); j++ {
switch s[j] {
case ' ', '\t', '\r', '\n':
default:
break whitespaceLoop
}
}
if j == i+1 {
// Not a whitespace escape.
i++
continue
}
if !strings.Contains(s[i:j], "\n") {
// This is not a line-ending backslash.
// (It's a bad escape sequence, but we can let
// replaceEscapes catch it.)
i++
continue
}
b.WriteString(s[:i])
s = s[j:]
i = 0
}
}
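`stripEscapedNewlines` removes a line-ending backslash together with the whitespace after it, while leaving escaped backslashes and other escapes untouched for `replaceEscapes`. A simplified standalone version covering the same three cases (`stripLineEndingBackslashes` is my name):

```go
package main

import (
	"fmt"
	"strings"
)

func stripLineEndingBackslashes(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if s[i] != '\\' {
			b.WriteByte(s[i])
			i++
			continue
		}
		if i+1 < len(s) && s[i+1] == '\\' { // escaped backslash: keep both
			b.WriteString(`\\`)
			i += 2
			continue
		}
		// Scan the whitespace run after the backslash.
		j := i + 1
		for j < len(s) && (s[j] == ' ' || s[j] == '\t' || s[j] == '\r' || s[j] == '\n') {
			j++
		}
		if j > i+1 && strings.Contains(s[i+1:j], "\n") {
			i = j // line-ending backslash: drop it and the whitespace
			continue
		}
		b.WriteByte(s[i]) // some other escape; leave it for later processing
		i++
	}
	return b.String()
}

func main() {
	fmt.Printf("%q\n", stripLineEndingBackslashes("a \\\n   b")) // "a b"
}
```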
func (p *parser) replaceEscapes(it item, str string) string {
replaced := make([]rune, 0, len(str))
s := []byte(str)
r := 0
for r < len(s) {
if s[r] != '\\' {
c, size := utf8.DecodeRune(s[r:])
r += size
replaced = append(replaced, c)
continue
}
r += 1
if r >= len(s) {
p.bug("Escape sequence at end of string.")
return ""
}
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
case ' ', '\t':
p.panicItemf(it, "invalid escape: '\\%c'", s[r])
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
case 't':
replaced = append(replaced, rune(0x0009))
r += 1
case 'n':
replaced = append(replaced, rune(0x000A))
r += 1
case 'f':
replaced = append(replaced, rune(0x000C))
r += 1
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case 'e':
if p.tomlNext {
replaced = append(replaced, rune(0x001B))
r += 1
}
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'x':
if p.tomlNext {
escaped := p.asciiEscapeToUnicode(it, s[r+1:r+3])
replaced = append(replaced, escaped)
r += 3
}
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(it, s[r+1:r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(it, s[r+1:r+9])
replaced = append(replaced, escaped)
r += 9
}
}
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(it item, bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicItemf(it, "Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
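The hex parsing and rune validation done by `asciiEscapeToUnicode` can be exercised standalone. A small sketch (the `decodeUnicodeEscape` name is ours, not the parser's):

```go
package main

import (
	"fmt"
	"strconv"
	"unicode/utf8"
)

// decodeUnicodeEscape mirrors asciiEscapeToUnicode above: parse the hex
// digits of a \uXXXX or \UXXXXXXXX escape and reject values that are not
// valid Unicode scalar values, such as lone surrogate halves.
func decodeUnicodeEscape(hexDigits string) (rune, error) {
	n, err := strconv.ParseUint(hexDigits, 16, 32)
	if err != nil {
		return 0, err
	}
	if !utf8.ValidRune(rune(n)) {
		return 0, fmt.Errorf("\\u%s is not a valid Unicode scalar value", hexDigits)
	}
	return rune(n), nil
}

func main() {
	r, err := decodeUnicodeEscape("00E9")
	fmt.Println(string(r), err) // é <nil>

	_, err = decodeUnicodeEscape("D800") // lone surrogate half
	fmt.Println(err != nil)              // true
}
```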


@@ -0,0 +1,242 @@
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include: the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
var count map[reflect.Type]int
var nextCount map[reflect.Type]int
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}
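The locking pattern in `cachedTypeFields` above (read-locked fast path, compute with no lock held, write-lock to publish) can be sketched generically; duplicated computation is accepted in exchange for never holding the lock during the expensive step. A minimal sketch, with names of our own choosing:

```go
package main

import (
	"fmt"
	"sync"
)

// cache demonstrates the cachedTypeFields lock discipline: RLock for the
// fast path, compute outside any lock, then Lock to store the result.
type cache struct {
	mu sync.RWMutex
	m  map[string]int
}

func (c *cache) get(key string, compute func(string) int) int {
	c.mu.RLock()
	v, ok := c.m[key]
	c.mu.RUnlock()
	if ok {
		return v
	}
	v = compute(key) // may run concurrently in several goroutines
	c.mu.Lock()
	if c.m == nil {
		c.m = map[string]int{}
	}
	c.m[key] = v
	c.mu.Unlock()
	return v
}

func main() {
	c := &cache{}
	fmt.Println(c.get("go", func(s string) int { return len(s) })) // 2
	fmt.Println(c.get("go", func(s string) int { return -1 }))     // cached: 2
}
```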


@@ -0,0 +1,70 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be moving
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsTable(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}


@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 Brian Goff
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,14 @@
package md2man
import (
"github.com/russross/blackfriday/v2"
)
// Render converts a markdown document into a roff formatted document.
func Render(doc []byte) []byte {
renderer := NewRoffRenderer()
return blackfriday.Run(doc,
[]blackfriday.Option{blackfriday.WithRenderer(renderer),
blackfriday.WithExtensions(renderer.GetExtensions())}...)
}


@@ -0,0 +1,336 @@
package md2man
import (
"fmt"
"io"
"os"
"strings"
"github.com/russross/blackfriday/v2"
)
// roffRenderer implements the blackfriday.Renderer interface for creating
// roff format (manpages) from markdown text
type roffRenderer struct {
extensions blackfriday.Extensions
listCounters []int
firstHeader bool
firstDD bool
listDepth int
}
const (
titleHeader = ".TH "
topLevelHeader = "\n\n.SH "
secondLevelHdr = "\n.SH "
otherHeader = "\n.SS "
crTag = "\n"
emphTag = "\\fI"
emphCloseTag = "\\fP"
strongTag = "\\fB"
strongCloseTag = "\\fP"
breakTag = "\n.br\n"
paraTag = "\n.PP\n"
hruleTag = "\n.ti 0\n\\l'\\n(.lu'\n"
linkTag = "\n\\[la]"
linkCloseTag = "\\[ra]"
codespanTag = "\\fB\\fC"
codespanCloseTag = "\\fR"
codeTag = "\n.PP\n.RS\n\n.nf\n"
codeCloseTag = "\n.fi\n.RE\n"
quoteTag = "\n.PP\n.RS\n"
quoteCloseTag = "\n.RE\n"
listTag = "\n.RS\n"
listCloseTag = "\n.RE\n"
dtTag = "\n.TP\n"
dd2Tag = "\n"
tableStart = "\n.TS\nallbox;\n"
tableEnd = ".TE\n"
tableCellStart = "T{\n"
tableCellEnd = "\nT}\n"
)
// NewRoffRenderer creates a new blackfriday Renderer for generating roff documents
// from markdown
func NewRoffRenderer() *roffRenderer { // nolint: golint
var extensions blackfriday.Extensions
extensions |= blackfriday.NoIntraEmphasis
extensions |= blackfriday.Tables
extensions |= blackfriday.FencedCode
extensions |= blackfriday.SpaceHeadings
extensions |= blackfriday.Footnotes
extensions |= blackfriday.Titleblock
extensions |= blackfriday.DefinitionLists
return &roffRenderer{
extensions: extensions,
}
}
// GetExtensions returns the list of extensions used by this renderer implementation
func (r *roffRenderer) GetExtensions() blackfriday.Extensions {
return r.extensions
}
// RenderHeader handles outputting the header at document start
func (r *roffRenderer) RenderHeader(w io.Writer, ast *blackfriday.Node) {
// disable hyphenation
out(w, ".nh\n")
}
// RenderFooter handles outputting the footer at the document end; the roff
// renderer has no footer information
func (r *roffRenderer) RenderFooter(w io.Writer, ast *blackfriday.Node) {
}
// RenderNode is called for each node in a markdown document; based on the node
// type the equivalent roff output is sent to the writer
func (r *roffRenderer) RenderNode(w io.Writer, node *blackfriday.Node, entering bool) blackfriday.WalkStatus {
var walkAction = blackfriday.GoToNext
switch node.Type {
case blackfriday.Text:
escapeSpecialChars(w, node.Literal)
case blackfriday.Softbreak:
out(w, crTag)
case blackfriday.Hardbreak:
out(w, breakTag)
case blackfriday.Emph:
if entering {
out(w, emphTag)
} else {
out(w, emphCloseTag)
}
case blackfriday.Strong:
if entering {
out(w, strongTag)
} else {
out(w, strongCloseTag)
}
case blackfriday.Link:
if !entering {
out(w, linkTag+string(node.LinkData.Destination)+linkCloseTag)
}
case blackfriday.Image:
// ignore images
walkAction = blackfriday.SkipChildren
case blackfriday.Code:
out(w, codespanTag)
escapeSpecialChars(w, node.Literal)
out(w, codespanCloseTag)
case blackfriday.Document:
break
case blackfriday.Paragraph:
// roff .PP markers break lists
if r.listDepth > 0 {
return blackfriday.GoToNext
}
if entering {
out(w, paraTag)
} else {
out(w, crTag)
}
case blackfriday.BlockQuote:
if entering {
out(w, quoteTag)
} else {
out(w, quoteCloseTag)
}
case blackfriday.Heading:
r.handleHeading(w, node, entering)
case blackfriday.HorizontalRule:
out(w, hruleTag)
case blackfriday.List:
r.handleList(w, node, entering)
case blackfriday.Item:
r.handleItem(w, node, entering)
case blackfriday.CodeBlock:
out(w, codeTag)
escapeSpecialChars(w, node.Literal)
out(w, codeCloseTag)
case blackfriday.Table:
r.handleTable(w, node, entering)
case blackfriday.TableHead:
case blackfriday.TableBody:
case blackfriday.TableRow:
// no action as cell entries do all the nroff formatting
return blackfriday.GoToNext
case blackfriday.TableCell:
r.handleTableCell(w, node, entering)
case blackfriday.HTMLSpan:
// ignore other HTML tags
default:
fmt.Fprintln(os.Stderr, "WARNING: go-md2man does not handle node type "+node.Type.String())
}
return walkAction
}
func (r *roffRenderer) handleHeading(w io.Writer, node *blackfriday.Node, entering bool) {
if entering {
switch node.Level {
case 1:
if !r.firstHeader {
out(w, titleHeader)
r.firstHeader = true
break
}
out(w, topLevelHeader)
case 2:
out(w, secondLevelHdr)
default:
out(w, otherHeader)
}
}
}
func (r *roffRenderer) handleList(w io.Writer, node *blackfriday.Node, entering bool) {
openTag := listTag
closeTag := listCloseTag
if node.ListFlags&blackfriday.ListTypeDefinition != 0 {
// tags for definition lists handled within Item node
openTag = ""
closeTag = ""
}
if entering {
r.listDepth++
if node.ListFlags&blackfriday.ListTypeOrdered != 0 {
r.listCounters = append(r.listCounters, 1)
}
out(w, openTag)
} else {
if node.ListFlags&blackfriday.ListTypeOrdered != 0 {
r.listCounters = r.listCounters[:len(r.listCounters)-1]
}
out(w, closeTag)
r.listDepth--
}
}
func (r *roffRenderer) handleItem(w io.Writer, node *blackfriday.Node, entering bool) {
if entering {
if node.ListFlags&blackfriday.ListTypeOrdered != 0 {
out(w, fmt.Sprintf(".IP \"%3d.\" 5\n", r.listCounters[len(r.listCounters)-1]))
r.listCounters[len(r.listCounters)-1]++
} else if node.ListFlags&blackfriday.ListTypeTerm != 0 {
// DT (definition term): line just before DD (see below).
out(w, dtTag)
r.firstDD = true
} else if node.ListFlags&blackfriday.ListTypeDefinition != 0 {
// DD (definition description): line that starts with ": ".
//
// We have to distinguish between the first DD and the
// subsequent ones, as there should be no vertical
// whitespace between the DT and the first DD.
if r.firstDD {
r.firstDD = false
} else {
out(w, dd2Tag)
}
} else {
out(w, ".IP \\(bu 2\n")
}
} else {
out(w, "\n")
}
}
func (r *roffRenderer) handleTable(w io.Writer, node *blackfriday.Node, entering bool) {
if entering {
out(w, tableStart)
// call walker to count cells (and rows?) so format section can be produced
columns := countColumns(node)
out(w, strings.Repeat("l ", columns)+"\n")
out(w, strings.Repeat("l ", columns)+".\n")
} else {
out(w, tableEnd)
}
}
func (r *roffRenderer) handleTableCell(w io.Writer, node *blackfriday.Node, entering bool) {
if entering {
var start string
if node.Prev != nil && node.Prev.Type == blackfriday.TableCell {
start = "\t"
}
if node.IsHeader {
start += codespanTag
} else if nodeLiteralSize(node) > 30 {
start += tableCellStart
}
out(w, start)
} else {
var end string
if node.IsHeader {
end = codespanCloseTag
} else if nodeLiteralSize(node) > 30 {
end = tableCellEnd
}
if node.Next == nil && end != tableCellEnd {
// Last cell: need to carriage return if we are at the end of the
// header row and content isn't wrapped in a "tablecell"
end += crTag
}
out(w, end)
}
}
func nodeLiteralSize(node *blackfriday.Node) int {
total := 0
for n := node.FirstChild; n != nil; n = n.FirstChild {
total += len(n.Literal)
}
return total
}
// Because the roff format requires knowing the column count before outputting
// any table data, we need to walk the table tree and count the columns.
func countColumns(node *blackfriday.Node) int {
var columns int
node.Walk(func(node *blackfriday.Node, entering bool) blackfriday.WalkStatus {
switch node.Type {
case blackfriday.TableRow:
if !entering {
return blackfriday.Terminate
}
case blackfriday.TableCell:
if entering {
columns++
}
default:
}
return blackfriday.GoToNext
})
return columns
}
func out(w io.Writer, output string) {
io.WriteString(w, output) // nolint: errcheck
}
func escapeSpecialChars(w io.Writer, text []byte) {
for i := 0; i < len(text); i++ {
// escape initial apostrophe or period
if len(text) >= 1 && (text[0] == '\'' || text[0] == '.') {
out(w, "\\&")
}
// directly copy normal characters
org := i
for i < len(text) && text[i] != '\\' {
i++
}
if i > org {
w.Write(text[org:i]) // nolint: errcheck
}
// escape a character
if i >= len(text) {
break
}
w.Write([]byte{'\\', text[i]}) // nolint: errcheck
}
}


@@ -0,0 +1,15 @@
ISC License
Copyright (c) 2012-2016 Dave Collins <dave@davec.name>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.


@@ -0,0 +1,145 @@
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled
// when the code is not running on Google App Engine, compiled by GopherJS, and
// "-tags safe" is not added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// Go versions prior to 1.4 are disabled because they use a different layout
// for interfaces which makes the implementation of unsafeReflectValue more complex.
// +build !js,!appengine,!safe,!disableunsafe,go1.4
package spew
import (
"reflect"
"unsafe"
)
const (
// UnsafeDisabled is a build-time constant which specifies whether or
// not access to the unsafe package is available.
UnsafeDisabled = false
// ptrSize is the size of a pointer on the current arch.
ptrSize = unsafe.Sizeof((*byte)(nil))
)
type flag uintptr
var (
// flagRO indicates whether the value field of a reflect.Value
// is read-only.
flagRO flag
// flagAddr indicates whether the address of the reflect.Value's
// value may be taken.
flagAddr flag
)
// flagKindMask holds the bits that make up the kind
// part of the flags field. In all the supported versions,
// it is in the lower 5 bits.
const flagKindMask = flag(0x1f)
// Different versions of Go have used different
// bit layouts for the flags type. This table
// records the known combinations.
var okFlags = []struct {
ro, addr flag
}{{
// From Go 1.4 to 1.5
ro: 1 << 5,
addr: 1 << 7,
}, {
// Up to Go tip.
ro: 1<<5 | 1<<6,
addr: 1 << 8,
}}
var flagValOffset = func() uintptr {
field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
if !ok {
panic("reflect.Value has no flag field")
}
return field.Offset
}()
// flagField returns a pointer to the flag field of a reflect.Value.
func flagField(v *reflect.Value) *flag {
return (*flag)(unsafe.Pointer(uintptr(unsafe.Pointer(v)) + flagValOffset))
}
// unsafeReflectValue converts the passed reflect.Value into one that bypasses
// the typical safety restrictions preventing access to unaddressable and
// unexported data. It works by digging the raw pointer to the underlying
// value out of the protected value and generating a new unprotected (unsafe)
// reflect.Value to it.
//
// This allows us to check for implementations of the Stringer and error
// interfaces to be used for pretty printing ordinarily unaddressable and
// inaccessible values such as unexported struct fields.
func unsafeReflectValue(v reflect.Value) reflect.Value {
if !v.IsValid() || (v.CanInterface() && v.CanAddr()) {
return v
}
flagFieldPtr := flagField(&v)
*flagFieldPtr &^= flagRO
*flagFieldPtr |= flagAddr
return v
}
// Sanity checks against future reflect package changes
// to the type or semantics of the Value.flag field.
func init() {
field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
if !ok {
panic("reflect.Value has no flag field")
}
if field.Type.Kind() != reflect.TypeOf(flag(0)).Kind() {
panic("reflect.Value flag field has changed kind")
}
type t0 int
var t struct {
A t0
// t0 will have flagEmbedRO set.
t0
// a will have flagStickyRO set
a t0
}
vA := reflect.ValueOf(t).FieldByName("A")
va := reflect.ValueOf(t).FieldByName("a")
vt0 := reflect.ValueOf(t).FieldByName("t0")
// Infer flagRO from the difference between the flags
// for the (otherwise identical) fields in t.
flagPublic := *flagField(&vA)
flagWithRO := *flagField(&va) | *flagField(&vt0)
flagRO = flagPublic ^ flagWithRO
// Infer flagAddr from the difference between a value
// taken from a pointer and not.
vPtrA := reflect.ValueOf(&t).Elem().FieldByName("A")
flagNoPtr := *flagField(&vA)
flagPtr := *flagField(&vPtrA)
flagAddr = flagNoPtr ^ flagPtr
// Check that the inferred flags tally with one of the known versions.
for _, f := range okFlags {
if flagRO == f.ro && flagAddr == f.addr {
return
}
}
panic("reflect.Value read-only flag has changed semantics")
}


@@ -0,0 +1,38 @@
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
// NOTE: Due to the following build constraints, this file will only be compiled
// when the code is running on Google App Engine, compiled by GopherJS, or
// "-tags safe" is added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// +build js appengine safe disableunsafe !go1.4
package spew
import "reflect"
const (
// UnsafeDisabled is a build-time constant which specifies whether or
// not access to the unsafe package is available.
UnsafeDisabled = true
)
// unsafeReflectValue typically converts the passed reflect.Value into one
// that bypasses the typical safety restrictions preventing access to
// unaddressable and unexported data. However, doing this relies on access to
// the unsafe package. This is a stub version which simply returns the passed
// reflect.Value when the unsafe package is not available.
func unsafeReflectValue(v reflect.Value) reflect.Value {
return v
}


@@ -0,0 +1,341 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"io"
"reflect"
"sort"
"strconv"
)
// Some constants in the form of bytes to avoid string overhead. This mirrors
// the technique used in the fmt package.
var (
panicBytes = []byte("(PANIC=")
plusBytes = []byte("+")
iBytes = []byte("i")
trueBytes = []byte("true")
falseBytes = []byte("false")
interfaceBytes = []byte("(interface {})")
commaNewlineBytes = []byte(",\n")
newlineBytes = []byte("\n")
openBraceBytes = []byte("{")
openBraceNewlineBytes = []byte("{\n")
closeBraceBytes = []byte("}")
asteriskBytes = []byte("*")
colonBytes = []byte(":")
colonSpaceBytes = []byte(": ")
openParenBytes = []byte("(")
closeParenBytes = []byte(")")
spaceBytes = []byte(" ")
pointerChainBytes = []byte("->")
nilAngleBytes = []byte("<nil>")
maxNewlineBytes = []byte("<max depth reached>\n")
maxShortBytes = []byte("<max>")
circularBytes = []byte("<already shown>")
circularShortBytes = []byte("<shown>")
invalidAngleBytes = []byte("<invalid>")
openBracketBytes = []byte("[")
closeBracketBytes = []byte("]")
percentBytes = []byte("%")
precisionBytes = []byte(".")
openAngleBytes = []byte("<")
closeAngleBytes = []byte(">")
openMapBytes = []byte("map[")
closeMapBytes = []byte("]")
lenEqualsBytes = []byte("len=")
capEqualsBytes = []byte("cap=")
)
// hexDigits is used to map a decimal value to a hex digit.
var hexDigits = "0123456789abcdef"
// catchPanic handles any panics that might occur during the handleMethods
// calls.
func catchPanic(w io.Writer, v reflect.Value) {
if err := recover(); err != nil {
w.Write(panicBytes)
fmt.Fprintf(w, "%v", err)
w.Write(closeParenBytes)
}
}
// handleMethods attempts to call the Error and String methods on the underlying
// type the passed reflect.Value represents and outputs the result to Writer w.
//
// It handles panics in any called methods by catching and displaying the error
// as the formatted value.
func handleMethods(cs *ConfigState, w io.Writer, v reflect.Value) (handled bool) {
// We need an interface to check if the type implements the error or
// Stringer interface. However, the reflect package won't give us an
// interface on certain things like unexported struct fields in order
// to enforce visibility rules. We use unsafe, when it's available,
// to bypass these restrictions since this package does not mutate the
// values.
if !v.CanInterface() {
if UnsafeDisabled {
return false
}
v = unsafeReflectValue(v)
}
// Choose whether or not to do error and Stringer interface lookups against
// the base type or a pointer to the base type depending on settings.
// Technically calling one of these methods with a pointer receiver can
// mutate the value; however, types which choose to satisfy an error or
// Stringer interface with a pointer receiver should not be mutating their
// state inside these interface methods.
if !cs.DisablePointerMethods && !UnsafeDisabled && !v.CanAddr() {
v = unsafeReflectValue(v)
}
if v.CanAddr() {
v = v.Addr()
}
// Is it an error or Stringer?
switch iface := v.Interface().(type) {
case error:
defer catchPanic(w, v)
if cs.ContinueOnMethod {
w.Write(openParenBytes)
w.Write([]byte(iface.Error()))
w.Write(closeParenBytes)
w.Write(spaceBytes)
return false
}
w.Write([]byte(iface.Error()))
return true
case fmt.Stringer:
defer catchPanic(w, v)
if cs.ContinueOnMethod {
w.Write(openParenBytes)
w.Write([]byte(iface.String()))
w.Write(closeParenBytes)
w.Write(spaceBytes)
return false
}
w.Write([]byte(iface.String()))
return true
}
return false
}
// printBool outputs a boolean value as true or false to Writer w.
func printBool(w io.Writer, val bool) {
if val {
w.Write(trueBytes)
} else {
w.Write(falseBytes)
}
}
// printInt outputs a signed integer value to Writer w.
func printInt(w io.Writer, val int64, base int) {
w.Write([]byte(strconv.FormatInt(val, base)))
}
// printUint outputs an unsigned integer value to Writer w.
func printUint(w io.Writer, val uint64, base int) {
w.Write([]byte(strconv.FormatUint(val, base)))
}
// printFloat outputs a floating point value using the specified precision,
// which is expected to be 32 or 64bit, to Writer w.
func printFloat(w io.Writer, val float64, precision int) {
w.Write([]byte(strconv.FormatFloat(val, 'g', -1, precision)))
}
// printComplex outputs a complex value using the specified float precision
// for the real and imaginary parts to Writer w.
func printComplex(w io.Writer, c complex128, floatPrecision int) {
r := real(c)
w.Write(openParenBytes)
w.Write([]byte(strconv.FormatFloat(r, 'g', -1, floatPrecision)))
i := imag(c)
if i >= 0 {
w.Write(plusBytes)
}
w.Write([]byte(strconv.FormatFloat(i, 'g', -1, floatPrecision)))
w.Write(iBytes)
w.Write(closeParenBytes)
}
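The `(real+imagi)` layout produced by `printComplex` can be reproduced with string concatenation; the plus sign is emitted only for non-negative imaginary parts because FormatFloat already prefixes negative values with a minus. A sketch returning a string instead of writing to an io.Writer (the `formatComplex` name is ours):

```go
package main

import (
	"fmt"
	"strconv"
)

// formatComplex mirrors printComplex above: "(real", signed imaginary, "i)".
func formatComplex(c complex128) string {
	r := strconv.FormatFloat(real(c), 'g', -1, 64)
	i := strconv.FormatFloat(imag(c), 'g', -1, 64)
	if imag(c) >= 0 {
		i = "+" + i // negative parts already carry their own '-'
	}
	return "(" + r + i + "i)"
}

func main() {
	fmt.Println(formatComplex(complex(1.5, -2))) // (1.5-2i)
	fmt.Println(formatComplex(complex(0, 3)))    // (0+3i)
}
```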
// printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x'
// prefix to Writer w.
func printHexPtr(w io.Writer, p uintptr) {
// Null pointer.
num := uint64(p)
if num == 0 {
w.Write(nilAngleBytes)
return
}
// Max uint64 is 16 bytes in hex + 2 bytes for '0x' prefix
buf := make([]byte, 18)
// It's simpler to construct the hex string right to left.
base := uint64(16)
i := len(buf) - 1
for num >= base {
buf[i] = hexDigits[num%base]
num /= base
i--
}
buf[i] = hexDigits[num]
// Add '0x' prefix.
i--
buf[i] = 'x'
i--
buf[i] = '0'
// Strip unused leading bytes.
buf = buf[i:]
w.Write(buf)
}
// valuesSorter implements sort.Interface to allow a slice of reflect.Value
// elements to be sorted.
type valuesSorter struct {
values []reflect.Value
strings []string // either nil or same len as values
cs *ConfigState
}
// newValuesSorter initializes a valuesSorter instance, which holds a set of
// surrogate keys on which the data should be sorted. It uses flags in
// ConfigState to decide if and how to populate those surrogate keys.
func newValuesSorter(values []reflect.Value, cs *ConfigState) sort.Interface {
vs := &valuesSorter{values: values, cs: cs}
if canSortSimply(vs.values[0].Kind()) {
return vs
}
if !cs.DisableMethods {
vs.strings = make([]string, len(values))
for i := range vs.values {
b := bytes.Buffer{}
if !handleMethods(cs, &b, vs.values[i]) {
vs.strings = nil
break
}
vs.strings[i] = b.String()
}
}
if vs.strings == nil && cs.SpewKeys {
vs.strings = make([]string, len(values))
for i := range vs.values {
vs.strings[i] = Sprintf("%#v", vs.values[i].Interface())
}
}
return vs
}
// canSortSimply tests whether a reflect.Kind is a primitive that can be sorted
// directly, or whether it should be considered for sorting by surrogate keys
// (if the ConfigState allows it).
func canSortSimply(kind reflect.Kind) bool {
// This switch parallels valueSortLess, except for the default case.
switch kind {
case reflect.Bool:
return true
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
return true
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
return true
case reflect.Float32, reflect.Float64:
return true
case reflect.String:
return true
case reflect.Uintptr:
return true
case reflect.Array:
return true
}
return false
}
// Len returns the number of values in the slice. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Len() int {
return len(s.values)
}
// Swap swaps the values at the passed indices. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Swap(i, j int) {
s.values[i], s.values[j] = s.values[j], s.values[i]
if s.strings != nil {
s.strings[i], s.strings[j] = s.strings[j], s.strings[i]
}
}
// valueSortLess returns whether the first value should sort before the second
// value. It is used by valueSorter.Less as part of the sort.Interface
// implementation.
func valueSortLess(a, b reflect.Value) bool {
switch a.Kind() {
case reflect.Bool:
return !a.Bool() && b.Bool()
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
return a.Int() < b.Int()
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
return a.Uint() < b.Uint()
case reflect.Float32, reflect.Float64:
return a.Float() < b.Float()
case reflect.String:
return a.String() < b.String()
case reflect.Uintptr:
return a.Uint() < b.Uint()
case reflect.Array:
// Compare the contents of both arrays.
l := a.Len()
for i := 0; i < l; i++ {
av := a.Index(i)
bv := b.Index(i)
if av.Interface() == bv.Interface() {
continue
}
return valueSortLess(av, bv)
}
}
return a.String() < b.String()
}
// Less returns whether the value at index i should sort before the
// value at index j. It is part of the sort.Interface implementation.
func (s *valuesSorter) Less(i, j int) bool {
if s.strings == nil {
return valueSortLess(s.values[i], s.values[j])
}
return s.strings[i] < s.strings[j]
}
// sortValues is a sort function that handles both native types and any type that
// can be converted to error or Stringer. Other inputs are sorted according to
// their Value.String() value to ensure display stability.
func sortValues(values []reflect.Value, cs *ConfigState) {
if len(values) == 0 {
return
}
sort.Sort(newValuesSorter(values, cs))
}

View File

@@ -0,0 +1,306 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"io"
"os"
)
// ConfigState houses the configuration options used by spew to format and
// display values. There is a global instance, Config, that is used to control
// all top-level Formatter and Dump functionality. Each ConfigState instance
// provides methods equivalent to the top-level functions.
//
// The zero value for ConfigState provides no indentation. You would typically
// want to set it to a space or a tab.
//
// Alternatively, you can use NewDefaultConfig to get a ConfigState instance
// with default settings. See the documentation of NewDefaultConfig for default
// values.
type ConfigState struct {
// Indent specifies the string to use for each indentation level. The
// global config instance that all top-level functions use set this to a
// single space by default. If you would like more indentation, you might
// set this to a tab with "\t" or perhaps two spaces with " ".
Indent string
// MaxDepth controls the maximum number of levels to descend into nested
// data structures. The default, 0, means there is no limit.
//
// NOTE: Circular data structures are properly detected, so it is not
// necessary to set this value unless you specifically want to limit deeply
// nested data structures.
MaxDepth int
// DisableMethods specifies whether or not error and Stringer interfaces are
// invoked for types that implement them.
DisableMethods bool
// DisablePointerMethods specifies whether or not to check for and invoke
// error and Stringer interfaces on types which only accept a pointer
// receiver when the current type is not a pointer.
//
// NOTE: This might be an unsafe action since calling one of these methods
// with a pointer receiver could technically mutate the value, however,
// in practice, types which choose to satisfy an error or Stringer
// interface with a pointer receiver should not be mutating their state
// inside these interface methods. As a result, this option relies on
// access to the unsafe package, so it will not have any effect when
// running in environments without access to the unsafe package such as
// Google App Engine or with the "safe" build tag specified.
DisablePointerMethods bool
// DisablePointerAddresses specifies whether to disable the printing of
// pointer addresses. This is useful when diffing data structures in tests.
DisablePointerAddresses bool
// DisableCapacities specifies whether to disable the printing of capacities
// for arrays, slices, maps and channels. This is useful when diffing
// data structures in tests.
DisableCapacities bool
// ContinueOnMethod specifies whether or not recursion should continue once
// a custom error or Stringer interface is invoked. The default, false,
// means it will print the results of invoking the custom error or Stringer
// interface and return immediately instead of continuing to recurse into
// the internals of the data type.
//
// NOTE: This flag does not have any effect if method invocation is disabled
// via the DisableMethods or DisablePointerMethods options.
ContinueOnMethod bool
// SortKeys specifies map keys should be sorted before being printed. Use
// this to have a more deterministic, diffable output. Note that only
// native types (bool, int, uint, floats, uintptr and string) and types
// that support the error or Stringer interfaces (if methods are
// enabled) are supported, with other types sorted according to the
// reflect.Value.String() output which guarantees display stability.
SortKeys bool
// SpewKeys specifies that, as a last resort attempt, map keys should
// be spewed to strings and sorted by those strings. This is only
// considered if SortKeys is true.
SpewKeys bool
}
// Config is the active configuration of the top-level functions.
// The configuration can be changed by modifying the contents of spew.Config.
var Config = ConfigState{Indent: " "}
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the formatted string as a value that satisfies error. See NewFormatter
// for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Errorf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Errorf(format string, a ...interface{}) (err error) {
return fmt.Errorf(format, c.convertArgs(a)...)
}
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprint(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprint(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprint(w, c.convertArgs(a)...)
}
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintf(w, format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
return fmt.Fprintf(w, format, c.convertArgs(a)...)
}
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
// were passed with a Formatter interface returned by c.NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintln(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprintln(w, c.convertArgs(a)...)
}
// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Print(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Print(a ...interface{}) (n int, err error) {
return fmt.Print(c.convertArgs(a)...)
}
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Printf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Printf(format string, a ...interface{}) (n int, err error) {
return fmt.Printf(format, c.convertArgs(a)...)
}
// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Println(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Println(a ...interface{}) (n int, err error) {
return fmt.Println(c.convertArgs(a)...)
}
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprint(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprint(a ...interface{}) string {
return fmt.Sprint(c.convertArgs(a)...)
}
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprintf(format string, a ...interface{}) string {
return fmt.Sprintf(format, c.convertArgs(a)...)
}
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
// were passed with a Formatter interface returned by c.NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintln(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprintln(a ...interface{}) string {
return fmt.Sprintln(c.convertArgs(a)...)
}
/*
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
interface. As a result, it integrates cleanly with standard fmt package
printing functions. The formatter is useful for inline printing of smaller data
types similar to the standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), and %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Typically this function shouldn't be called directly. It is much easier to make
use of the custom formatter by calling one of the convenience functions such as
c.Printf, c.Println, or c.Fprintf.
*/
func (c *ConfigState) NewFormatter(v interface{}) fmt.Formatter {
return newFormatter(c, v)
}
// Fdump formats and displays the passed arguments to io.Writer w. It formats
// exactly the same as Dump.
func (c *ConfigState) Fdump(w io.Writer, a ...interface{}) {
fdump(c, w, a...)
}
/*
Dump displays the passed parameters to standard out with newlines, customizable
indentation, and additional debug information such as complete types and all
pointer addresses used to indirect to the final value. It provides the
following features over the built-in printing facilities provided by the fmt
package:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output
The configuration options are controlled by modifying the public members
of c. See ConfigState for options documentation.
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
get the formatted result as a string.
*/
func (c *ConfigState) Dump(a ...interface{}) {
fdump(c, os.Stdout, a...)
}
// Sdump returns a string with the passed arguments formatted exactly the same
// as Dump.
func (c *ConfigState) Sdump(a ...interface{}) string {
var buf bytes.Buffer
fdump(c, &buf, a...)
return buf.String()
}
// convertArgs accepts a slice of arguments and returns a slice of the same
// length with each argument converted to a spew Formatter interface using
// the ConfigState associated with s.
func (c *ConfigState) convertArgs(args []interface{}) (formatters []interface{}) {
formatters = make([]interface{}, len(args))
for index, arg := range args {
formatters[index] = newFormatter(c, arg)
}
return formatters
}
// NewDefaultConfig returns a ConfigState with the following default settings.
//
// Indent: " "
// MaxDepth: 0
// DisableMethods: false
// DisablePointerMethods: false
// DisablePointerAddresses: false
// DisableCapacities: false
// ContinueOnMethod: false
// SortKeys: false
// SpewKeys: false
func NewDefaultConfig() *ConfigState {
return &ConfigState{Indent: " "}
}

View File

@@ -0,0 +1,211 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
/*
Package spew implements a deep pretty printer for Go data structures to aid in
debugging.
A quick overview of the additional features spew provides over the built-in
printing facilities for Go data types is as follows:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output (only when using
Dump style)
There are two different approaches spew allows for dumping Go data structures:
* Dump style which prints with newlines, customizable indentation,
and additional debug information such as types and all pointer addresses
used to indirect to the final value
* A custom Formatter interface that integrates cleanly with the standard fmt
package and replaces %v, %+v, %#v, and %#+v to provide inline printing
similar to the default %v while providing the additional functionality
outlined above and passing unsupported format verbs such as %x and %q
along to fmt
Quick Start
This section demonstrates how to quickly get started with spew. See the
sections below for further details on formatting and configuration options.
To dump a variable with full newlines, indentation, type, and pointer
information use Dump, Fdump, or Sdump:
spew.Dump(myVar1, myVar2, ...)
spew.Fdump(someWriter, myVar1, myVar2, ...)
str := spew.Sdump(myVar1, myVar2, ...)
Alternatively, if you would prefer to use format strings with a compacted inline
printing style, use the convenience wrappers Printf, Fprintf, etc with
%v (most compact), %+v (adds pointer addresses), %#v (adds types), or
%#+v (adds types and pointer addresses):
spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
spew.Fprintf(someWriter, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Fprintf(someWriter, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
Configuration Options
Configuration of spew is handled by fields in the ConfigState type. For
convenience, all of the top-level functions use a global state available
via the spew.Config global.
It is also possible to create a ConfigState instance that provides methods
equivalent to the top-level functions. This allows concurrent configuration
options. See the ConfigState documentation for more details.
The following configuration options are available:
* Indent
String to use for each indentation level for Dump functions.
It is a single space by default. A popular alternative is "\t".
* MaxDepth
Maximum number of levels to descend into nested data structures.
There is no limit by default.
* DisableMethods
Disables invocation of error and Stringer interface methods.
Method invocation is enabled by default.
* DisablePointerMethods
Disables invocation of error and Stringer interface methods on types
which only accept pointer receivers from non-pointer variables.
Pointer method invocation is enabled by default.
* DisablePointerAddresses
DisablePointerAddresses specifies whether to disable the printing of
pointer addresses. This is useful when diffing data structures in tests.
* DisableCapacities
DisableCapacities specifies whether to disable the printing of
capacities for arrays, slices, maps and channels. This is useful when
diffing data structures in tests.
* ContinueOnMethod
Enables recursion into types after invoking error and Stringer interface
methods. Recursion after method invocation is disabled by default.
* SortKeys
Specifies map keys should be sorted before being printed. Use
this to have a more deterministic, diffable output. Note that
only native types (bool, int, uint, floats, uintptr and string)
and types which implement error or Stringer interfaces are
supported, with other types sorted according to the
reflect.Value.String() output which guarantees display
stability. Natural map order is used by default.
* SpewKeys
Specifies that, as a last resort attempt, map keys should be
spewed to strings and sorted by those strings. This is only
considered if SortKeys is true.
Dump Usage
Simply call spew.Dump with a list of variables you want to dump:
spew.Dump(myVar1, myVar2, ...)
You may also call spew.Fdump if you would prefer to output to an arbitrary
io.Writer. For example, to dump to standard error:
spew.Fdump(os.Stderr, myVar1, myVar2, ...)
A third option is to call spew.Sdump to get the formatted output as a string:
str := spew.Sdump(myVar1, myVar2, ...)
Sample Dump Output
See the Dump example for details on the setup of the types and variables being
shown here.
(main.Foo) {
unexportedField: (*main.Bar)(0xf84002e210)({
flag: (main.Flag) flagTwo,
data: (uintptr) <nil>
}),
ExportedField: (map[interface {}]interface {}) (len=1) {
(string) (len=3) "one": (bool) true
}
}
Byte (and uint8) arrays and slices are displayed uniquely like the hexdump -C
command as shown.
([]uint8) (len=32 cap=32) {
00000000 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20 |............... |
00000010 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30 |!"#$%&'()*+,-./0|
00000020 31 32 |12|
}
Custom Formatter
Spew provides a custom formatter that implements the fmt.Formatter interface
so that it integrates cleanly with standard fmt package printing functions. The
formatter is useful for inline printing of smaller data types similar to the
standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Custom Formatter Usage
The simplest way to make use of the spew custom formatter is to call one of the
convenience functions such as spew.Printf, spew.Println, or spew.Fprintf. The
functions have syntax you are most likely already familiar with:
spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
spew.Println(myVar, myVar2)
spew.Fprintf(os.Stderr, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
spew.Fprintf(os.Stderr, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
See the Index for the full list of convenience functions.
Sample Formatter Output
Double pointer to a uint8:
%v: <**>5
%+v: <**>(0xf8400420d0->0xf8400420c8)5
%#v: (**uint8)5
%#+v: (**uint8)(0xf8400420d0->0xf8400420c8)5
Pointer to circular struct with a uint8 field and a pointer to itself:
%v: <*>{1 <*><shown>}
%+v: <*>(0xf84003e260){ui8:1 c:<*>(0xf84003e260)<shown>}
%#v: (*main.circular){ui8:(uint8)1 c:(*main.circular)<shown>}
%#+v: (*main.circular)(0xf84003e260){ui8:(uint8)1 c:(*main.circular)(0xf84003e260)<shown>}
See the Printf example for details on the setup of variables being shown
here.
Errors
Since it is possible for custom Stringer/error interfaces to panic, spew
detects them and handles them internally by printing the panic information
inline with the output. Since spew is intended to provide deep pretty printing
capabilities on structures, it intentionally does not return any errors.
*/
package spew

View File

@@ -0,0 +1,509 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"encoding/hex"
"fmt"
"io"
"os"
"reflect"
"regexp"
"strconv"
"strings"
)
var (
// uint8Type is a reflect.Type representing a uint8. It is used to
// convert cgo types to uint8 slices for hexdumping.
uint8Type = reflect.TypeOf(uint8(0))
// cCharRE is a regular expression that matches a cgo char.
// It is used to detect character arrays to hexdump them.
cCharRE = regexp.MustCompile(`^.*\._Ctype_char$`)
// cUnsignedCharRE is a regular expression that matches a cgo unsigned
// char. It is used to detect unsigned character arrays to hexdump
// them.
cUnsignedCharRE = regexp.MustCompile(`^.*\._Ctype_unsignedchar$`)
// cUint8tCharRE is a regular expression that matches a cgo uint8_t.
// It is used to detect uint8_t arrays to hexdump them.
cUint8tCharRE = regexp.MustCompile(`^.*\._Ctype_uint8_t$`)
)
// dumpState contains information about the state of a dump operation.
type dumpState struct {
w io.Writer
depth int
pointers map[uintptr]int
ignoreNextType bool
ignoreNextIndent bool
cs *ConfigState
}
// indent performs indentation according to the depth level and cs.Indent
// option.
func (d *dumpState) indent() {
if d.ignoreNextIndent {
d.ignoreNextIndent = false
return
}
d.w.Write(bytes.Repeat([]byte(d.cs.Indent), d.depth))
}
// unpackValue returns values inside of non-nil interfaces when possible.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (d *dumpState) unpackValue(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Interface && !v.IsNil() {
v = v.Elem()
}
return v
}
// dumpPtr handles formatting of pointers by indirecting them as necessary.
func (d *dumpState) dumpPtr(v reflect.Value) {
// Remove pointers at or below the current depth from map used to detect
// circular refs.
for k, depth := range d.pointers {
if depth >= d.depth {
delete(d.pointers, k)
}
}
// Keep list of all dereferenced pointers to show later.
pointerChain := make([]uintptr, 0)
// Figure out how many levels of indirection there are by dereferencing
// pointers and unpacking interfaces down the chain while detecting circular
// references.
nilFound := false
cycleFound := false
indirects := 0
ve := v
for ve.Kind() == reflect.Ptr {
if ve.IsNil() {
nilFound = true
break
}
indirects++
addr := ve.Pointer()
pointerChain = append(pointerChain, addr)
if pd, ok := d.pointers[addr]; ok && pd < d.depth {
cycleFound = true
indirects--
break
}
d.pointers[addr] = d.depth
ve = ve.Elem()
if ve.Kind() == reflect.Interface {
if ve.IsNil() {
nilFound = true
break
}
ve = ve.Elem()
}
}
// Display type information.
d.w.Write(openParenBytes)
d.w.Write(bytes.Repeat(asteriskBytes, indirects))
d.w.Write([]byte(ve.Type().String()))
d.w.Write(closeParenBytes)
// Display pointer information.
if !d.cs.DisablePointerAddresses && len(pointerChain) > 0 {
d.w.Write(openParenBytes)
for i, addr := range pointerChain {
if i > 0 {
d.w.Write(pointerChainBytes)
}
printHexPtr(d.w, addr)
}
d.w.Write(closeParenBytes)
}
// Display dereferenced value.
d.w.Write(openParenBytes)
switch {
case nilFound:
d.w.Write(nilAngleBytes)
case cycleFound:
d.w.Write(circularBytes)
default:
d.ignoreNextType = true
d.dump(ve)
}
d.w.Write(closeParenBytes)
}
// dumpSlice handles formatting of arrays and slices. Byte (uint8 under
// reflection) arrays and slices are dumped in hexdump -C fashion.
func (d *dumpState) dumpSlice(v reflect.Value) {
// Determine whether this type should be hex dumped or not. Also,
// for types which should be hexdumped, try to use the underlying data
// first, then fall back to trying to convert them to a uint8 slice.
var buf []uint8
doConvert := false
doHexDump := false
numEntries := v.Len()
if numEntries > 0 {
vt := v.Index(0).Type()
vts := vt.String()
switch {
// C types that need to be converted.
case cCharRE.MatchString(vts):
fallthrough
case cUnsignedCharRE.MatchString(vts):
fallthrough
case cUint8tCharRE.MatchString(vts):
doConvert = true
// Try to use existing uint8 slices and fall back to converting
// and copying if that fails.
case vt.Kind() == reflect.Uint8:
// We need an addressable interface to convert the type
// to a byte slice. However, the reflect package won't
// give us an interface on certain things like
// unexported struct fields in order to enforce
// visibility rules. We use unsafe, when available, to
// bypass these restrictions since this package does not
// mutate the values.
vs := v
if !vs.CanInterface() || !vs.CanAddr() {
vs = unsafeReflectValue(vs)
}
if !UnsafeDisabled {
vs = vs.Slice(0, numEntries)
// Use the existing uint8 slice if it can be
// type asserted.
iface := vs.Interface()
if slice, ok := iface.([]uint8); ok {
buf = slice
doHexDump = true
break
}
}
// The underlying data needs to be converted if it can't
// be type asserted to a uint8 slice.
doConvert = true
}
// Copy and convert the underlying type if needed.
if doConvert && vt.ConvertibleTo(uint8Type) {
// Convert and copy each element into a uint8 byte
// slice.
buf = make([]uint8, numEntries)
for i := 0; i < numEntries; i++ {
vv := v.Index(i)
buf[i] = uint8(vv.Convert(uint8Type).Uint())
}
doHexDump = true
}
}
// Hexdump the entire slice as needed.
if doHexDump {
indent := strings.Repeat(d.cs.Indent, d.depth)
str := indent + hex.Dump(buf)
str = strings.Replace(str, "\n", "\n"+indent, -1)
str = strings.TrimRight(str, d.cs.Indent)
d.w.Write([]byte(str))
return
}
// Recursively call dump for each item.
for i := 0; i < numEntries; i++ {
d.dump(d.unpackValue(v.Index(i)))
if i < (numEntries - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
// dump is the main workhorse for dumping a value. It uses the passed reflect
// value to figure out what kind of object we are dealing with and formats it
// appropriately. It is a recursive function, however circular data structures
// are detected and handled properly.
func (d *dumpState) dump(v reflect.Value) {
// Handle invalid reflect values immediately.
kind := v.Kind()
if kind == reflect.Invalid {
d.w.Write(invalidAngleBytes)
return
}
// Handle pointers specially.
if kind == reflect.Ptr {
d.indent()
d.dumpPtr(v)
return
}
// Print type information unless already handled elsewhere.
if !d.ignoreNextType {
d.indent()
d.w.Write(openParenBytes)
d.w.Write([]byte(v.Type().String()))
d.w.Write(closeParenBytes)
d.w.Write(spaceBytes)
}
d.ignoreNextType = false
// Display length and capacity if the built-in len and cap functions
// work with the value's kind and the len/cap itself is non-zero.
valueLen, valueCap := 0, 0
switch v.Kind() {
case reflect.Array, reflect.Slice, reflect.Chan:
valueLen, valueCap = v.Len(), v.Cap()
case reflect.Map, reflect.String:
valueLen = v.Len()
}
if valueLen != 0 || !d.cs.DisableCapacities && valueCap != 0 {
d.w.Write(openParenBytes)
if valueLen != 0 {
d.w.Write(lenEqualsBytes)
printInt(d.w, int64(valueLen), 10)
}
if !d.cs.DisableCapacities && valueCap != 0 {
if valueLen != 0 {
d.w.Write(spaceBytes)
}
d.w.Write(capEqualsBytes)
printInt(d.w, int64(valueCap), 10)
}
d.w.Write(closeParenBytes)
d.w.Write(spaceBytes)
}
// Call Stringer/error interfaces if they exist and the handle methods flag
// is enabled
if !d.cs.DisableMethods {
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
if handled := handleMethods(d.cs, d.w, v); handled {
return
}
}
}
switch kind {
case reflect.Invalid:
// Do nothing. We should never get here since invalid has already
// been handled above.
case reflect.Bool:
printBool(d.w, v.Bool())
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
printInt(d.w, v.Int(), 10)
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
printUint(d.w, v.Uint(), 10)
case reflect.Float32:
printFloat(d.w, v.Float(), 32)
case reflect.Float64:
printFloat(d.w, v.Float(), 64)
case reflect.Complex64:
printComplex(d.w, v.Complex(), 32)
case reflect.Complex128:
printComplex(d.w, v.Complex(), 64)
case reflect.Slice:
if v.IsNil() {
d.w.Write(nilAngleBytes)
break
}
fallthrough
case reflect.Array:
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
d.dumpSlice(v)
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.String:
d.w.Write([]byte(strconv.Quote(v.String())))
case reflect.Interface:
// The only time we should get here is for nil interfaces due to
// unpackValue calls.
if v.IsNil() {
d.w.Write(nilAngleBytes)
}
case reflect.Ptr:
// Do nothing. We should never get here since pointers have already
// been handled above.
case reflect.Map:
// nil maps should be indicated as different than empty maps
if v.IsNil() {
d.w.Write(nilAngleBytes)
break
}
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
numEntries := v.Len()
keys := v.MapKeys()
if d.cs.SortKeys {
sortValues(keys, d.cs)
}
for i, key := range keys {
d.dump(d.unpackValue(key))
d.w.Write(colonSpaceBytes)
d.ignoreNextIndent = true
d.dump(d.unpackValue(v.MapIndex(key)))
if i < (numEntries - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.Struct:
d.w.Write(openBraceNewlineBytes)
d.depth++
if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
d.indent()
d.w.Write(maxNewlineBytes)
} else {
vt := v.Type()
numFields := v.NumField()
for i := 0; i < numFields; i++ {
d.indent()
vtf := vt.Field(i)
d.w.Write([]byte(vtf.Name))
d.w.Write(colonSpaceBytes)
d.ignoreNextIndent = true
d.dump(d.unpackValue(v.Field(i)))
if i < (numFields - 1) {
d.w.Write(commaNewlineBytes)
} else {
d.w.Write(newlineBytes)
}
}
}
d.depth--
d.indent()
d.w.Write(closeBraceBytes)
case reflect.Uintptr:
printHexPtr(d.w, uintptr(v.Uint()))
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
printHexPtr(d.w, v.Pointer())
// There were not any other types at the time this code was written, but
// fall back to letting the default fmt package handle it in case any new
// types are added.
default:
if v.CanInterface() {
fmt.Fprintf(d.w, "%v", v.Interface())
} else {
fmt.Fprintf(d.w, "%v", v.String())
}
}
}
// fdump is a helper function to consolidate the logic from the various public
// methods which take varying writers and config states.
func fdump(cs *ConfigState, w io.Writer, a ...interface{}) {
for _, arg := range a {
if arg == nil {
w.Write(interfaceBytes)
w.Write(spaceBytes)
w.Write(nilAngleBytes)
w.Write(newlineBytes)
continue
}
d := dumpState{w: w, cs: cs}
d.pointers = make(map[uintptr]int)
d.dump(reflect.ValueOf(arg))
d.w.Write(newlineBytes)
}
}
// Fdump formats and displays the passed arguments to io.Writer w. It formats
// exactly the same as Dump.
func Fdump(w io.Writer, a ...interface{}) {
fdump(&Config, w, a...)
}
// Sdump returns a string with the passed arguments formatted exactly the same
// as Dump.
func Sdump(a ...interface{}) string {
var buf bytes.Buffer
fdump(&Config, &buf, a...)
return buf.String()
}
/*
Dump displays the passed parameters to standard out with newlines, customizable
indentation, and additional debug information such as complete types and all
pointer addresses used to indirect to the final value. It provides the
following features over the built-in printing facilities provided by the fmt
package:
* Pointers are dereferenced and followed
* Circular data structures are detected and handled properly
* Custom Stringer/error interfaces are optionally invoked, including
on unexported types
* Custom types which only implement the Stringer/error interfaces via
a pointer receiver are optionally invoked when passing non-pointer
variables
* Byte arrays and slices are dumped like the hexdump -C command which
includes offsets, byte values in hex, and ASCII output
The configuration options are controlled by an exported package global,
spew.Config. See ConfigState for options documentation.
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
get the formatted result as a string.
*/
func Dump(a ...interface{}) {
fdump(&Config, os.Stdout, a...)
}


@@ -0,0 +1,419 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"bytes"
"fmt"
"reflect"
"strconv"
"strings"
)
// supportedFlags is a list of all the character flags supported by fmt package.
const supportedFlags = "0-+# "
// formatState implements the fmt.Formatter interface and contains information
// about the state of a formatting operation. The NewFormatter function can
// be used to get a new Formatter which can be used directly as arguments
// in standard fmt package printing calls.
type formatState struct {
value interface{}
fs fmt.State
depth int
pointers map[uintptr]int
ignoreNextType bool
cs *ConfigState
}
// buildDefaultFormat recreates the original format string without precision
// and width information to pass in to fmt.Sprintf in the case of an
// unrecognized type. Unless new types are added to the language, this
// function won't ever be called.
func (f *formatState) buildDefaultFormat() (format string) {
buf := bytes.NewBuffer(percentBytes)
for _, flag := range supportedFlags {
if f.fs.Flag(int(flag)) {
buf.WriteRune(flag)
}
}
buf.WriteRune('v')
format = buf.String()
return format
}
// constructOrigFormat recreates the original format string including precision
// and width information to pass along to the standard fmt package. This allows
// automatic deferral of all format strings this package doesn't support.
func (f *formatState) constructOrigFormat(verb rune) (format string) {
buf := bytes.NewBuffer(percentBytes)
for _, flag := range supportedFlags {
if f.fs.Flag(int(flag)) {
buf.WriteRune(flag)
}
}
if width, ok := f.fs.Width(); ok {
buf.WriteString(strconv.Itoa(width))
}
if precision, ok := f.fs.Precision(); ok {
buf.Write(precisionBytes)
buf.WriteString(strconv.Itoa(precision))
}
buf.WriteRune(verb)
format = buf.String()
return format
}
// unpackValue returns values inside of non-nil interfaces when possible and
// ensures that types for values which have been unpacked from an interface
// are displayed when the show types flag is also set.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (f *formatState) unpackValue(v reflect.Value) reflect.Value {
if v.Kind() == reflect.Interface {
f.ignoreNextType = false
if !v.IsNil() {
v = v.Elem()
}
}
return v
}
// formatPtr handles formatting of pointers by indirecting them as necessary.
func (f *formatState) formatPtr(v reflect.Value) {
// Display nil if top level pointer is nil.
showTypes := f.fs.Flag('#')
if v.IsNil() && (!showTypes || f.ignoreNextType) {
f.fs.Write(nilAngleBytes)
return
}
// Remove pointers at or below the current depth from map used to detect
// circular refs.
for k, depth := range f.pointers {
if depth >= f.depth {
delete(f.pointers, k)
}
}
// Keep list of all dereferenced pointers to possibly show later.
pointerChain := make([]uintptr, 0)
// Figure out how many levels of indirection there are by dereferencing
// pointers and unpacking interfaces down the chain while detecting circular
// references.
nilFound := false
cycleFound := false
indirects := 0
ve := v
for ve.Kind() == reflect.Ptr {
if ve.IsNil() {
nilFound = true
break
}
indirects++
addr := ve.Pointer()
pointerChain = append(pointerChain, addr)
if pd, ok := f.pointers[addr]; ok && pd < f.depth {
cycleFound = true
indirects--
break
}
f.pointers[addr] = f.depth
ve = ve.Elem()
if ve.Kind() == reflect.Interface {
if ve.IsNil() {
nilFound = true
break
}
ve = ve.Elem()
}
}
// Display type or indirection level depending on flags.
if showTypes && !f.ignoreNextType {
f.fs.Write(openParenBytes)
f.fs.Write(bytes.Repeat(asteriskBytes, indirects))
f.fs.Write([]byte(ve.Type().String()))
f.fs.Write(closeParenBytes)
} else {
if nilFound || cycleFound {
indirects += strings.Count(ve.Type().String(), "*")
}
f.fs.Write(openAngleBytes)
f.fs.Write([]byte(strings.Repeat("*", indirects)))
f.fs.Write(closeAngleBytes)
}
// Display pointer information depending on flags.
if f.fs.Flag('+') && (len(pointerChain) > 0) {
f.fs.Write(openParenBytes)
for i, addr := range pointerChain {
if i > 0 {
f.fs.Write(pointerChainBytes)
}
printHexPtr(f.fs, addr)
}
f.fs.Write(closeParenBytes)
}
// Display dereferenced value.
switch {
case nilFound:
f.fs.Write(nilAngleBytes)
case cycleFound:
f.fs.Write(circularShortBytes)
default:
f.ignoreNextType = true
f.format(ve)
}
}
// format is the main workhorse for providing the Formatter interface. It
// uses the passed reflect value to figure out what kind of object we are
// dealing with and formats it appropriately. It is a recursive function,
// however circular data structures are detected and handled properly.
func (f *formatState) format(v reflect.Value) {
// Handle invalid reflect values immediately.
kind := v.Kind()
if kind == reflect.Invalid {
f.fs.Write(invalidAngleBytes)
return
}
// Handle pointers specially.
if kind == reflect.Ptr {
f.formatPtr(v)
return
}
// Print type information unless already handled elsewhere.
if !f.ignoreNextType && f.fs.Flag('#') {
f.fs.Write(openParenBytes)
f.fs.Write([]byte(v.Type().String()))
f.fs.Write(closeParenBytes)
}
f.ignoreNextType = false
// Call Stringer/error interfaces if they exist and the handle methods
// flag is enabled.
if !f.cs.DisableMethods {
if (kind != reflect.Invalid) && (kind != reflect.Interface) {
if handled := handleMethods(f.cs, f.fs, v); handled {
return
}
}
}
switch kind {
case reflect.Invalid:
// Do nothing. We should never get here since invalid has already
// been handled above.
case reflect.Bool:
printBool(f.fs, v.Bool())
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
printInt(f.fs, v.Int(), 10)
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
printUint(f.fs, v.Uint(), 10)
case reflect.Float32:
printFloat(f.fs, v.Float(), 32)
case reflect.Float64:
printFloat(f.fs, v.Float(), 64)
case reflect.Complex64:
printComplex(f.fs, v.Complex(), 32)
case reflect.Complex128:
printComplex(f.fs, v.Complex(), 64)
case reflect.Slice:
if v.IsNil() {
f.fs.Write(nilAngleBytes)
break
}
fallthrough
case reflect.Array:
f.fs.Write(openBracketBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
numEntries := v.Len()
for i := 0; i < numEntries; i++ {
if i > 0 {
f.fs.Write(spaceBytes)
}
f.ignoreNextType = true
f.format(f.unpackValue(v.Index(i)))
}
}
f.depth--
f.fs.Write(closeBracketBytes)
case reflect.String:
f.fs.Write([]byte(v.String()))
case reflect.Interface:
// The only time we should get here is for nil interfaces due to
// unpackValue calls.
if v.IsNil() {
f.fs.Write(nilAngleBytes)
}
case reflect.Ptr:
// Do nothing. We should never get here since pointers have already
// been handled above.
case reflect.Map:
// nil maps should be indicated as different than empty maps
if v.IsNil() {
f.fs.Write(nilAngleBytes)
break
}
f.fs.Write(openMapBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
keys := v.MapKeys()
if f.cs.SortKeys {
sortValues(keys, f.cs)
}
for i, key := range keys {
if i > 0 {
f.fs.Write(spaceBytes)
}
f.ignoreNextType = true
f.format(f.unpackValue(key))
f.fs.Write(colonBytes)
f.ignoreNextType = true
f.format(f.unpackValue(v.MapIndex(key)))
}
}
f.depth--
f.fs.Write(closeMapBytes)
case reflect.Struct:
numFields := v.NumField()
f.fs.Write(openBraceBytes)
f.depth++
if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
f.fs.Write(maxShortBytes)
} else {
vt := v.Type()
for i := 0; i < numFields; i++ {
if i > 0 {
f.fs.Write(spaceBytes)
}
vtf := vt.Field(i)
if f.fs.Flag('+') || f.fs.Flag('#') {
f.fs.Write([]byte(vtf.Name))
f.fs.Write(colonBytes)
}
f.format(f.unpackValue(v.Field(i)))
}
}
f.depth--
f.fs.Write(closeBraceBytes)
case reflect.Uintptr:
printHexPtr(f.fs, uintptr(v.Uint()))
case reflect.UnsafePointer, reflect.Chan, reflect.Func:
printHexPtr(f.fs, v.Pointer())
// There were not any other types at the time this code was written, but
// fall back to letting the default fmt package handle it if any get added.
default:
format := f.buildDefaultFormat()
if v.CanInterface() {
fmt.Fprintf(f.fs, format, v.Interface())
} else {
fmt.Fprintf(f.fs, format, v.String())
}
}
}
// Format satisfies the fmt.Formatter interface. See NewFormatter for usage
// details.
func (f *formatState) Format(fs fmt.State, verb rune) {
f.fs = fs
// Use standard formatting for verbs that are not v.
if verb != 'v' {
format := f.constructOrigFormat(verb)
fmt.Fprintf(fs, format, f.value)
return
}
if f.value == nil {
if fs.Flag('#') {
fs.Write(interfaceBytes)
}
fs.Write(nilAngleBytes)
return
}
f.format(reflect.ValueOf(f.value))
}
// newFormatter is a helper function to consolidate the logic from the various
// public methods which take varying config states.
func newFormatter(cs *ConfigState, v interface{}) fmt.Formatter {
fs := &formatState{value: v, cs: cs}
fs.pointers = make(map[uintptr]int)
return fs
}
/*
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
interface. As a result, it integrates cleanly with standard fmt package
printing functions. The formatter is useful for inline printing of smaller data
types similar to the standard %v format specifier.
The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).
Typically this function shouldn't be called directly. It is much easier to make
use of the custom formatter by calling one of the convenience functions such as
Printf, Println, or Fprintf.
*/
func NewFormatter(v interface{}) fmt.Formatter {
return newFormatter(&Config, v)
}


@@ -0,0 +1,148 @@
/*
* Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package spew
import (
"fmt"
"io"
)
// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the formatted string as a value that satisfies error. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Errorf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Errorf(format string, a ...interface{}) (err error) {
return fmt.Errorf(format, convertArgs(a)...)
}
// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprint(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprint(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprint(w, convertArgs(a)...)
}
// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintf(w, format, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
return fmt.Fprintf(w, format, convertArgs(a)...)
}
// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
// were passed with a default Formatter interface returned by NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintln(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
return fmt.Fprintln(w, convertArgs(a)...)
}
// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Print(spew.NewFormatter(a), spew.NewFormatter(b))
func Print(a ...interface{}) (n int, err error) {
return fmt.Print(convertArgs(a)...)
}
// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Printf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Printf(format string, a ...interface{}) (n int, err error) {
return fmt.Printf(format, convertArgs(a)...)
}
// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Println(spew.NewFormatter(a), spew.NewFormatter(b))
func Println(a ...interface{}) (n int, err error) {
return fmt.Println(convertArgs(a)...)
}
// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprint(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprint(a ...interface{}) string {
return fmt.Sprint(convertArgs(a)...)
}
// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintf(format string, a ...interface{}) string {
return fmt.Sprintf(format, convertArgs(a)...)
}
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
// were passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprintln(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintln(a ...interface{}) string {
return fmt.Sprintln(convertArgs(a)...)
}
// convertArgs accepts a slice of arguments and returns a slice of the same
// length with each argument converted to a default spew Formatter interface.
func convertArgs(args []interface{}) (formatters []interface{}) {
formatters = make([]interface{}, len(args))
for index, arg := range args {
formatters[index] = NewFormatter(arg)
}
return formatters
}


@@ -0,0 +1,16 @@
language: go
sudo: false
go:
- 1.13.x
- tip
before_install:
- go get -t -v ./...
script:
- go generate
- git diff --cached --exit-code
- ./go.test.sh
after_success:
- bash <(curl -s https://codecov.io/bash)


@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2016 Yasuhiro Matsumoto
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,27 @@
go-runewidth
============
[![Build Status](https://travis-ci.org/mattn/go-runewidth.png?branch=master)](https://travis-ci.org/mattn/go-runewidth)
[![Codecov](https://codecov.io/gh/mattn/go-runewidth/branch/master/graph/badge.svg)](https://codecov.io/gh/mattn/go-runewidth)
[![GoDoc](https://godoc.org/github.com/mattn/go-runewidth?status.svg)](http://godoc.org/github.com/mattn/go-runewidth)
[![Go Report Card](https://goreportcard.com/badge/github.com/mattn/go-runewidth)](https://goreportcard.com/report/github.com/mattn/go-runewidth)
Provides functions to get the fixed width of a character or string.
Usage
-----
```go
runewidth.StringWidth("つのだ☆HIRO") == 12
```
Author
------
Yasuhiro Matsumoto
License
-------
under the MIT License: http://mattn.mit-license.org/2013


@@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -e
echo "" > coverage.txt
for d in $(go list ./... | grep -v vendor); do
go test -race -coverprofile=profile.out -covermode=atomic "$d"
if [ -f profile.out ]; then
cat profile.out >> coverage.txt
rm profile.out
fi
done


@@ -0,0 +1,257 @@
package runewidth
import (
"os"
)
//go:generate go run script/generate.go
var (
// EastAsianWidth will be set to true if the current locale is CJK
EastAsianWidth bool
// ZeroWidthJoiner is a flag that enables UTR#51 ZWJ handling
ZeroWidthJoiner bool
// DefaultCondition is the Condition for the current locale
DefaultCondition = &Condition{}
)
func init() {
handleEnv()
}
func handleEnv() {
env := os.Getenv("RUNEWIDTH_EASTASIAN")
if env == "" {
EastAsianWidth = IsEastAsian()
} else {
EastAsianWidth = env == "1"
}
// update DefaultCondition
DefaultCondition.EastAsianWidth = EastAsianWidth
DefaultCondition.ZeroWidthJoiner = ZeroWidthJoiner
}
type interval struct {
first rune
last rune
}
type table []interval
func inTables(r rune, ts ...table) bool {
for _, t := range ts {
if inTable(r, t) {
return true
}
}
return false
}
func inTable(r rune, t table) bool {
if r < t[0].first {
return false
}
bot := 0
top := len(t) - 1
for top >= bot {
mid := (bot + top) >> 1
switch {
case t[mid].last < r:
bot = mid + 1
case t[mid].first > r:
top = mid - 1
default:
return true
}
}
return false
}
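`inTable` performs a standard binary search over the interval table, which works because each generated table is sorted by `first` and its intervals do not overlap. The same lookup can be sketched self-contained with a toy table (the `span`/`contains` names and the three sample intervals are illustrative):

```go
package main

import "fmt"

// span mirrors the interval type above: an inclusive rune range.
type span struct{ first, last rune }

// contains does the same binary search as inTable. The table must be
// sorted by first and non-overlapping for the search to be valid.
func contains(r rune, t []span) bool {
	bot, top := 0, len(t)-1
	for top >= bot {
		mid := (bot + top) / 2
		switch {
		case t[mid].last < r: // r is past this interval: search right half
			bot = mid + 1
		case t[mid].first > r: // r is before this interval: search left half
			top = mid - 1
		default: // first <= r <= last
			return true
		}
	}
	return false
}

func main() {
	// Three intervals borrowed from the doublewidth table for illustration.
	wide := []span{{0x1100, 0x115F}, {0x2E80, 0x2E99}, {0xAC00, 0xD7A3}}
	fmt.Println(contains('가', wide)) // U+AC00 is in the last interval: true
	fmt.Println(contains('A', wide)) // false
}
```

The early `r < t[0].first` check in `inTable` is just a fast path for the common ASCII case, skipping the search entirely for runes below the table's first interval.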
var private = table{
{0x00E000, 0x00F8FF}, {0x0F0000, 0x0FFFFD}, {0x100000, 0x10FFFD},
}
var nonprint = table{
{0x0000, 0x001F}, {0x007F, 0x009F}, {0x00AD, 0x00AD},
{0x070F, 0x070F}, {0x180B, 0x180E}, {0x200B, 0x200F},
{0x2028, 0x202E}, {0x206A, 0x206F}, {0xD800, 0xDFFF},
{0xFEFF, 0xFEFF}, {0xFFF9, 0xFFFB}, {0xFFFE, 0xFFFF},
}
// Condition has the flag EastAsianWidth, which indicates whether the current locale is CJK.
type Condition struct {
EastAsianWidth bool
ZeroWidthJoiner bool
}
// NewCondition returns a new Condition instance based on the current locale.
func NewCondition() *Condition {
return &Condition{
EastAsianWidth: EastAsianWidth,
ZeroWidthJoiner: ZeroWidthJoiner,
}
}
// RuneWidth returns the number of cells in r.
// See http://www.unicode.org/reports/tr11/
func (c *Condition) RuneWidth(r rune) int {
switch {
case r < 0 || r > 0x10FFFF || inTables(r, nonprint, combining, notassigned):
return 0
case (c.EastAsianWidth && IsAmbiguousWidth(r)) || inTables(r, doublewidth):
return 2
default:
return 1
}
}
func (c *Condition) stringWidth(s string) (width int) {
for _, r := range []rune(s) {
width += c.RuneWidth(r)
}
return width
}
func (c *Condition) stringWidthZeroJoiner(s string) (width int) {
r1, r2 := rune(0), rune(0)
for _, r := range []rune(s) {
if r == 0xFE0E || r == 0xFE0F {
continue
}
w := c.RuneWidth(r)
if r2 == 0x200D && inTables(r, emoji) && inTables(r1, emoji) {
if width < w {
width = w
}
} else {
width += w
}
r1, r2 = r2, r
}
return width
}
// StringWidth returns the display width of s in cells
func (c *Condition) StringWidth(s string) (width int) {
if c.ZeroWidthJoiner {
return c.stringWidthZeroJoiner(s)
}
return c.stringWidth(s)
}
// Truncate returns s truncated to fit in w cells, appending tail when truncation occurs
func (c *Condition) Truncate(s string, w int, tail string) string {
if c.StringWidth(s) <= w {
return s
}
r := []rune(s)
tw := c.StringWidth(tail)
w -= tw
width := 0
i := 0
for ; i < len(r); i++ {
cw := c.RuneWidth(r[i])
if width+cw > w {
break
}
width += cw
}
return string(r[0:i]) + tail
}
// Wrap returns s wrapped at w cells per line
func (c *Condition) Wrap(s string, w int) string {
width := 0
out := ""
for _, r := range []rune(s) {
cw := c.RuneWidth(r)
if r == '\n' {
out += string(r)
width = 0
continue
} else if width+cw > w {
out += "\n"
width = 0
out += string(r)
width += cw
continue
}
out += string(r)
width += cw
}
return out
}
// FillLeft returns s left-padded with spaces to w cells
func (c *Condition) FillLeft(s string, w int) string {
width := c.StringWidth(s)
count := w - width
if count > 0 {
b := make([]byte, count)
for i := range b {
b[i] = ' '
}
return string(b) + s
}
return s
}
// FillRight returns s right-padded with spaces to w cells
func (c *Condition) FillRight(s string, w int) string {
width := c.StringWidth(s)
count := w - width
if count > 0 {
b := make([]byte, count)
for i := range b {
b[i] = ' '
}
return s + string(b)
}
return s
}
// RuneWidth returns the number of cells in r.
// See http://www.unicode.org/reports/tr11/
func RuneWidth(r rune) int {
return DefaultCondition.RuneWidth(r)
}
// IsAmbiguousWidth returns whether r has ambiguous width.
func IsAmbiguousWidth(r rune) bool {
return inTables(r, private, ambiguous)
}
// IsNeutralWidth returns whether r has neutral width.
func IsNeutralWidth(r rune) bool {
return inTable(r, neutral)
}
// StringWidth returns the display width of s in cells
func StringWidth(s string) (width int) {
return DefaultCondition.StringWidth(s)
}
// Truncate returns s truncated to fit in w cells, appending tail when truncation occurs
func Truncate(s string, w int, tail string) string {
return DefaultCondition.Truncate(s, w, tail)
}
// Wrap returns s wrapped at w cells per line
func Wrap(s string, w int) string {
return DefaultCondition.Wrap(s, w)
}
// FillLeft returns s left-padded with spaces to w cells
func FillLeft(s string, w int) string {
return DefaultCondition.FillLeft(s, w)
}
// FillRight returns s right-padded with spaces to w cells
func FillRight(s string, w int) string {
return DefaultCondition.FillRight(s, w)
}


@@ -0,0 +1,8 @@
// +build appengine
package runewidth
// IsEastAsian returns true if the current locale is CJK
func IsEastAsian() bool {
return false
}


@@ -0,0 +1,9 @@
// +build js
// +build !appengine
package runewidth
func IsEastAsian() bool {
// TODO: Implement this for the web. Detect East Asian locales in a compatible way, and return true.
return false
}


@@ -0,0 +1,82 @@
// +build !windows
// +build !js
// +build !appengine
package runewidth
import (
"os"
"regexp"
"strings"
)
var reLoc = regexp.MustCompile(`^[a-z][a-z][a-z]?(?:_[A-Z][A-Z])?\.(.+)`)
var mblenTable = map[string]int{
"utf-8": 6,
"utf8": 6,
"jis": 8,
"eucjp": 3,
"euckr": 2,
"euccn": 2,
"sjis": 2,
"cp932": 2,
"cp51932": 2,
"cp936": 2,
"cp949": 2,
"cp950": 2,
"big5": 2,
"gbk": 2,
"gb2312": 2,
}
func isEastAsian(locale string) bool {
charset := strings.ToLower(locale)
r := reLoc.FindStringSubmatch(locale)
if len(r) == 2 {
charset = strings.ToLower(r[1])
}
if strings.HasSuffix(charset, "@cjk_narrow") {
return false
}
for pos, b := range []byte(charset) {
if b == '@' {
charset = charset[:pos]
break
}
}
max := 1
if m, ok := mblenTable[charset]; ok {
max = m
}
if max > 1 && (charset[0] != 'u' ||
strings.HasPrefix(locale, "ja") ||
strings.HasPrefix(locale, "ko") ||
strings.HasPrefix(locale, "zh")) {
return true
}
return false
}
// IsEastAsian returns true if the current locale is CJK
func IsEastAsian() bool {
locale := os.Getenv("LC_ALL")
if locale == "" {
locale = os.Getenv("LC_CTYPE")
}
if locale == "" {
locale = os.Getenv("LANG")
}
// ignore C locale
if locale == "POSIX" || locale == "C" {
return false
}
if len(locale) > 1 && locale[0] == 'C' && (locale[1] == '.' || locale[1] == '-') {
return false
}
return isEastAsian(locale)
}


@@ -0,0 +1,437 @@
// Code generated by script/generate.go. DO NOT EDIT.
package runewidth
var combining = table{
{0x0300, 0x036F}, {0x0483, 0x0489}, {0x07EB, 0x07F3},
{0x0C00, 0x0C00}, {0x0C04, 0x0C04}, {0x0D00, 0x0D01},
{0x135D, 0x135F}, {0x1A7F, 0x1A7F}, {0x1AB0, 0x1AC0},
{0x1B6B, 0x1B73}, {0x1DC0, 0x1DF9}, {0x1DFB, 0x1DFF},
{0x20D0, 0x20F0}, {0x2CEF, 0x2CF1}, {0x2DE0, 0x2DFF},
{0x3099, 0x309A}, {0xA66F, 0xA672}, {0xA674, 0xA67D},
{0xA69E, 0xA69F}, {0xA6F0, 0xA6F1}, {0xA8E0, 0xA8F1},
{0xFE20, 0xFE2F}, {0x101FD, 0x101FD}, {0x10376, 0x1037A},
{0x10EAB, 0x10EAC}, {0x10F46, 0x10F50}, {0x11300, 0x11301},
{0x1133B, 0x1133C}, {0x11366, 0x1136C}, {0x11370, 0x11374},
{0x16AF0, 0x16AF4}, {0x1D165, 0x1D169}, {0x1D16D, 0x1D172},
{0x1D17B, 0x1D182}, {0x1D185, 0x1D18B}, {0x1D1AA, 0x1D1AD},
{0x1D242, 0x1D244}, {0x1E000, 0x1E006}, {0x1E008, 0x1E018},
{0x1E01B, 0x1E021}, {0x1E023, 0x1E024}, {0x1E026, 0x1E02A},
{0x1E8D0, 0x1E8D6},
}
var doublewidth = table{
{0x1100, 0x115F}, {0x231A, 0x231B}, {0x2329, 0x232A},
{0x23E9, 0x23EC}, {0x23F0, 0x23F0}, {0x23F3, 0x23F3},
{0x25FD, 0x25FE}, {0x2614, 0x2615}, {0x2648, 0x2653},
{0x267F, 0x267F}, {0x2693, 0x2693}, {0x26A1, 0x26A1},
{0x26AA, 0x26AB}, {0x26BD, 0x26BE}, {0x26C4, 0x26C5},
{0x26CE, 0x26CE}, {0x26D4, 0x26D4}, {0x26EA, 0x26EA},
{0x26F2, 0x26F3}, {0x26F5, 0x26F5}, {0x26FA, 0x26FA},
{0x26FD, 0x26FD}, {0x2705, 0x2705}, {0x270A, 0x270B},
{0x2728, 0x2728}, {0x274C, 0x274C}, {0x274E, 0x274E},
{0x2753, 0x2755}, {0x2757, 0x2757}, {0x2795, 0x2797},
{0x27B0, 0x27B0}, {0x27BF, 0x27BF}, {0x2B1B, 0x2B1C},
{0x2B50, 0x2B50}, {0x2B55, 0x2B55}, {0x2E80, 0x2E99},
{0x2E9B, 0x2EF3}, {0x2F00, 0x2FD5}, {0x2FF0, 0x2FFB},
{0x3000, 0x303E}, {0x3041, 0x3096}, {0x3099, 0x30FF},
{0x3105, 0x312F}, {0x3131, 0x318E}, {0x3190, 0x31E3},
{0x31F0, 0x321E}, {0x3220, 0x3247}, {0x3250, 0x4DBF},
{0x4E00, 0xA48C}, {0xA490, 0xA4C6}, {0xA960, 0xA97C},
{0xAC00, 0xD7A3}, {0xF900, 0xFAFF}, {0xFE10, 0xFE19},
{0xFE30, 0xFE52}, {0xFE54, 0xFE66}, {0xFE68, 0xFE6B},
{0xFF01, 0xFF60}, {0xFFE0, 0xFFE6}, {0x16FE0, 0x16FE4},
{0x16FF0, 0x16FF1}, {0x17000, 0x187F7}, {0x18800, 0x18CD5},
{0x18D00, 0x18D08}, {0x1B000, 0x1B11E}, {0x1B150, 0x1B152},
{0x1B164, 0x1B167}, {0x1B170, 0x1B2FB}, {0x1F004, 0x1F004},
{0x1F0CF, 0x1F0CF}, {0x1F18E, 0x1F18E}, {0x1F191, 0x1F19A},
{0x1F200, 0x1F202}, {0x1F210, 0x1F23B}, {0x1F240, 0x1F248},
{0x1F250, 0x1F251}, {0x1F260, 0x1F265}, {0x1F300, 0x1F320},
{0x1F32D, 0x1F335}, {0x1F337, 0x1F37C}, {0x1F37E, 0x1F393},
{0x1F3A0, 0x1F3CA}, {0x1F3CF, 0x1F3D3}, {0x1F3E0, 0x1F3F0},
{0x1F3F4, 0x1F3F4}, {0x1F3F8, 0x1F43E}, {0x1F440, 0x1F440},
{0x1F442, 0x1F4FC}, {0x1F4FF, 0x1F53D}, {0x1F54B, 0x1F54E},
{0x1F550, 0x1F567}, {0x1F57A, 0x1F57A}, {0x1F595, 0x1F596},
{0x1F5A4, 0x1F5A4}, {0x1F5FB, 0x1F64F}, {0x1F680, 0x1F6C5},
{0x1F6CC, 0x1F6CC}, {0x1F6D0, 0x1F6D2}, {0x1F6D5, 0x1F6D7},
{0x1F6EB, 0x1F6EC}, {0x1F6F4, 0x1F6FC}, {0x1F7E0, 0x1F7EB},
{0x1F90C, 0x1F93A}, {0x1F93C, 0x1F945}, {0x1F947, 0x1F978},
{0x1F97A, 0x1F9CB}, {0x1F9CD, 0x1F9FF}, {0x1FA70, 0x1FA74},
{0x1FA78, 0x1FA7A}, {0x1FA80, 0x1FA86}, {0x1FA90, 0x1FAA8},
{0x1FAB0, 0x1FAB6}, {0x1FAC0, 0x1FAC2}, {0x1FAD0, 0x1FAD6},
{0x20000, 0x2FFFD}, {0x30000, 0x3FFFD},
}
var ambiguous = table{
{0x00A1, 0x00A1}, {0x00A4, 0x00A4}, {0x00A7, 0x00A8},
{0x00AA, 0x00AA}, {0x00AD, 0x00AE}, {0x00B0, 0x00B4},
{0x00B6, 0x00BA}, {0x00BC, 0x00BF}, {0x00C6, 0x00C6},
{0x00D0, 0x00D0}, {0x00D7, 0x00D8}, {0x00DE, 0x00E1},
{0x00E6, 0x00E6}, {0x00E8, 0x00EA}, {0x00EC, 0x00ED},
{0x00F0, 0x00F0}, {0x00F2, 0x00F3}, {0x00F7, 0x00FA},
{0x00FC, 0x00FC}, {0x00FE, 0x00FE}, {0x0101, 0x0101},
{0x0111, 0x0111}, {0x0113, 0x0113}, {0x011B, 0x011B},
{0x0126, 0x0127}, {0x012B, 0x012B}, {0x0131, 0x0133},
{0x0138, 0x0138}, {0x013F, 0x0142}, {0x0144, 0x0144},
{0x0148, 0x014B}, {0x014D, 0x014D}, {0x0152, 0x0153},
{0x0166, 0x0167}, {0x016B, 0x016B}, {0x01CE, 0x01CE},
{0x01D0, 0x01D0}, {0x01D2, 0x01D2}, {0x01D4, 0x01D4},
{0x01D6, 0x01D6}, {0x01D8, 0x01D8}, {0x01DA, 0x01DA},
{0x01DC, 0x01DC}, {0x0251, 0x0251}, {0x0261, 0x0261},
{0x02C4, 0x02C4}, {0x02C7, 0x02C7}, {0x02C9, 0x02CB},
{0x02CD, 0x02CD}, {0x02D0, 0x02D0}, {0x02D8, 0x02DB},
{0x02DD, 0x02DD}, {0x02DF, 0x02DF}, {0x0300, 0x036F},
{0x0391, 0x03A1}, {0x03A3, 0x03A9}, {0x03B1, 0x03C1},
{0x03C3, 0x03C9}, {0x0401, 0x0401}, {0x0410, 0x044F},
{0x0451, 0x0451}, {0x2010, 0x2010}, {0x2013, 0x2016},
{0x2018, 0x2019}, {0x201C, 0x201D}, {0x2020, 0x2022},
{0x2024, 0x2027}, {0x2030, 0x2030}, {0x2032, 0x2033},
{0x2035, 0x2035}, {0x203B, 0x203B}, {0x203E, 0x203E},
{0x2074, 0x2074}, {0x207F, 0x207F}, {0x2081, 0x2084},
{0x20AC, 0x20AC}, {0x2103, 0x2103}, {0x2105, 0x2105},
{0x2109, 0x2109}, {0x2113, 0x2113}, {0x2116, 0x2116},
{0x2121, 0x2122}, {0x2126, 0x2126}, {0x212B, 0x212B},
{0x2153, 0x2154}, {0x215B, 0x215E}, {0x2160, 0x216B},
{0x2170, 0x2179}, {0x2189, 0x2189}, {0x2190, 0x2199},
{0x21B8, 0x21B9}, {0x21D2, 0x21D2}, {0x21D4, 0x21D4},
{0x21E7, 0x21E7}, {0x2200, 0x2200}, {0x2202, 0x2203},
{0x2207, 0x2208}, {0x220B, 0x220B}, {0x220F, 0x220F},
{0x2211, 0x2211}, {0x2215, 0x2215}, {0x221A, 0x221A},
{0x221D, 0x2220}, {0x2223, 0x2223}, {0x2225, 0x2225},
{0x2227, 0x222C}, {0x222E, 0x222E}, {0x2234, 0x2237},
{0x223C, 0x223D}, {0x2248, 0x2248}, {0x224C, 0x224C},
{0x2252, 0x2252}, {0x2260, 0x2261}, {0x2264, 0x2267},
{0x226A, 0x226B}, {0x226E, 0x226F}, {0x2282, 0x2283},
{0x2286, 0x2287}, {0x2295, 0x2295}, {0x2299, 0x2299},
{0x22A5, 0x22A5}, {0x22BF, 0x22BF}, {0x2312, 0x2312},
{0x2460, 0x24E9}, {0x24EB, 0x254B}, {0x2550, 0x2573},
{0x2580, 0x258F}, {0x2592, 0x2595}, {0x25A0, 0x25A1},
{0x25A3, 0x25A9}, {0x25B2, 0x25B3}, {0x25B6, 0x25B7},
{0x25BC, 0x25BD}, {0x25C0, 0x25C1}, {0x25C6, 0x25C8},
{0x25CB, 0x25CB}, {0x25CE, 0x25D1}, {0x25E2, 0x25E5},
{0x25EF, 0x25EF}, {0x2605, 0x2606}, {0x2609, 0x2609},
{0x260E, 0x260F}, {0x261C, 0x261C}, {0x261E, 0x261E},
{0x2640, 0x2640}, {0x2642, 0x2642}, {0x2660, 0x2661},
{0x2663, 0x2665}, {0x2667, 0x266A}, {0x266C, 0x266D},
{0x266F, 0x266F}, {0x269E, 0x269F}, {0x26BF, 0x26BF},
{0x26C6, 0x26CD}, {0x26CF, 0x26D3}, {0x26D5, 0x26E1},
{0x26E3, 0x26E3}, {0x26E8, 0x26E9}, {0x26EB, 0x26F1},
{0x26F4, 0x26F4}, {0x26F6, 0x26F9}, {0x26FB, 0x26FC},
{0x26FE, 0x26FF}, {0x273D, 0x273D}, {0x2776, 0x277F},
{0x2B56, 0x2B59}, {0x3248, 0x324F}, {0xE000, 0xF8FF},
{0xFE00, 0xFE0F}, {0xFFFD, 0xFFFD}, {0x1F100, 0x1F10A},
{0x1F110, 0x1F12D}, {0x1F130, 0x1F169}, {0x1F170, 0x1F18D},
{0x1F18F, 0x1F190}, {0x1F19B, 0x1F1AC}, {0xE0100, 0xE01EF},
{0xF0000, 0xFFFFD}, {0x100000, 0x10FFFD},
}
var notassigned = table{
{0x27E6, 0x27ED}, {0x2985, 0x2986},
}
var neutral = table{
{0x0000, 0x001F}, {0x007F, 0x00A0}, {0x00A9, 0x00A9},
{0x00AB, 0x00AB}, {0x00B5, 0x00B5}, {0x00BB, 0x00BB},
{0x00C0, 0x00C5}, {0x00C7, 0x00CF}, {0x00D1, 0x00D6},
{0x00D9, 0x00DD}, {0x00E2, 0x00E5}, {0x00E7, 0x00E7},
{0x00EB, 0x00EB}, {0x00EE, 0x00EF}, {0x00F1, 0x00F1},
{0x00F4, 0x00F6}, {0x00FB, 0x00FB}, {0x00FD, 0x00FD},
{0x00FF, 0x0100}, {0x0102, 0x0110}, {0x0112, 0x0112},
{0x0114, 0x011A}, {0x011C, 0x0125}, {0x0128, 0x012A},
{0x012C, 0x0130}, {0x0134, 0x0137}, {0x0139, 0x013E},
{0x0143, 0x0143}, {0x0145, 0x0147}, {0x014C, 0x014C},
{0x014E, 0x0151}, {0x0154, 0x0165}, {0x0168, 0x016A},
{0x016C, 0x01CD}, {0x01CF, 0x01CF}, {0x01D1, 0x01D1},
{0x01D3, 0x01D3}, {0x01D5, 0x01D5}, {0x01D7, 0x01D7},
{0x01D9, 0x01D9}, {0x01DB, 0x01DB}, {0x01DD, 0x0250},
{0x0252, 0x0260}, {0x0262, 0x02C3}, {0x02C5, 0x02C6},
{0x02C8, 0x02C8}, {0x02CC, 0x02CC}, {0x02CE, 0x02CF},
{0x02D1, 0x02D7}, {0x02DC, 0x02DC}, {0x02DE, 0x02DE},
{0x02E0, 0x02FF}, {0x0370, 0x0377}, {0x037A, 0x037F},
{0x0384, 0x038A}, {0x038C, 0x038C}, {0x038E, 0x0390},
{0x03AA, 0x03B0}, {0x03C2, 0x03C2}, {0x03CA, 0x0400},
{0x0402, 0x040F}, {0x0450, 0x0450}, {0x0452, 0x052F},
{0x0531, 0x0556}, {0x0559, 0x058A}, {0x058D, 0x058F},
{0x0591, 0x05C7}, {0x05D0, 0x05EA}, {0x05EF, 0x05F4},
{0x0600, 0x061C}, {0x061E, 0x070D}, {0x070F, 0x074A},
{0x074D, 0x07B1}, {0x07C0, 0x07FA}, {0x07FD, 0x082D},
{0x0830, 0x083E}, {0x0840, 0x085B}, {0x085E, 0x085E},
{0x0860, 0x086A}, {0x08A0, 0x08B4}, {0x08B6, 0x08C7},
{0x08D3, 0x0983}, {0x0985, 0x098C}, {0x098F, 0x0990},
{0x0993, 0x09A8}, {0x09AA, 0x09B0}, {0x09B2, 0x09B2},
{0x09B6, 0x09B9}, {0x09BC, 0x09C4}, {0x09C7, 0x09C8},
{0x09CB, 0x09CE}, {0x09D7, 0x09D7}, {0x09DC, 0x09DD},
{0x09DF, 0x09E3}, {0x09E6, 0x09FE}, {0x0A01, 0x0A03},
{0x0A05, 0x0A0A}, {0x0A0F, 0x0A10}, {0x0A13, 0x0A28},
{0x0A2A, 0x0A30}, {0x0A32, 0x0A33}, {0x0A35, 0x0A36},
{0x0A38, 0x0A39}, {0x0A3C, 0x0A3C}, {0x0A3E, 0x0A42},
{0x0A47, 0x0A48}, {0x0A4B, 0x0A4D}, {0x0A51, 0x0A51},
{0x0A59, 0x0A5C}, {0x0A5E, 0x0A5E}, {0x0A66, 0x0A76},
{0x0A81, 0x0A83}, {0x0A85, 0x0A8D}, {0x0A8F, 0x0A91},
{0x0A93, 0x0AA8}, {0x0AAA, 0x0AB0}, {0x0AB2, 0x0AB3},
{0x0AB5, 0x0AB9}, {0x0ABC, 0x0AC5}, {0x0AC7, 0x0AC9},
{0x0ACB, 0x0ACD}, {0x0AD0, 0x0AD0}, {0x0AE0, 0x0AE3},
{0x0AE6, 0x0AF1}, {0x0AF9, 0x0AFF}, {0x0B01, 0x0B03},
{0x0B05, 0x0B0C}, {0x0B0F, 0x0B10}, {0x0B13, 0x0B28},
{0x0B2A, 0x0B30}, {0x0B32, 0x0B33}, {0x0B35, 0x0B39},
{0x0B3C, 0x0B44}, {0x0B47, 0x0B48}, {0x0B4B, 0x0B4D},
{0x0B55, 0x0B57}, {0x0B5C, 0x0B5D}, {0x0B5F, 0x0B63},
{0x0B66, 0x0B77}, {0x0B82, 0x0B83}, {0x0B85, 0x0B8A},
{0x0B8E, 0x0B90}, {0x0B92, 0x0B95}, {0x0B99, 0x0B9A},
{0x0B9C, 0x0B9C}, {0x0B9E, 0x0B9F}, {0x0BA3, 0x0BA4},
{0x0BA8, 0x0BAA}, {0x0BAE, 0x0BB9}, {0x0BBE, 0x0BC2},
{0x0BC6, 0x0BC8}, {0x0BCA, 0x0BCD}, {0x0BD0, 0x0BD0},
{0x0BD7, 0x0BD7}, {0x0BE6, 0x0BFA}, {0x0C00, 0x0C0C},
{0x0C0E, 0x0C10}, {0x0C12, 0x0C28}, {0x0C2A, 0x0C39},
{0x0C3D, 0x0C44}, {0x0C46, 0x0C48}, {0x0C4A, 0x0C4D},
{0x0C55, 0x0C56}, {0x0C58, 0x0C5A}, {0x0C60, 0x0C63},
{0x0C66, 0x0C6F}, {0x0C77, 0x0C8C}, {0x0C8E, 0x0C90},
{0x0C92, 0x0CA8}, {0x0CAA, 0x0CB3}, {0x0CB5, 0x0CB9},
{0x0CBC, 0x0CC4}, {0x0CC6, 0x0CC8}, {0x0CCA, 0x0CCD},
{0x0CD5, 0x0CD6}, {0x0CDE, 0x0CDE}, {0x0CE0, 0x0CE3},
{0x0CE6, 0x0CEF}, {0x0CF1, 0x0CF2}, {0x0D00, 0x0D0C},
{0x0D0E, 0x0D10}, {0x0D12, 0x0D44}, {0x0D46, 0x0D48},
{0x0D4A, 0x0D4F}, {0x0D54, 0x0D63}, {0x0D66, 0x0D7F},
{0x0D81, 0x0D83}, {0x0D85, 0x0D96}, {0x0D9A, 0x0DB1},
{0x0DB3, 0x0DBB}, {0x0DBD, 0x0DBD}, {0x0DC0, 0x0DC6},
{0x0DCA, 0x0DCA}, {0x0DCF, 0x0DD4}, {0x0DD6, 0x0DD6},
{0x0DD8, 0x0DDF}, {0x0DE6, 0x0DEF}, {0x0DF2, 0x0DF4},
{0x0E01, 0x0E3A}, {0x0E3F, 0x0E5B}, {0x0E81, 0x0E82},
{0x0E84, 0x0E84}, {0x0E86, 0x0E8A}, {0x0E8C, 0x0EA3},
{0x0EA5, 0x0EA5}, {0x0EA7, 0x0EBD}, {0x0EC0, 0x0EC4},
{0x0EC6, 0x0EC6}, {0x0EC8, 0x0ECD}, {0x0ED0, 0x0ED9},
{0x0EDC, 0x0EDF}, {0x0F00, 0x0F47}, {0x0F49, 0x0F6C},
{0x0F71, 0x0F97}, {0x0F99, 0x0FBC}, {0x0FBE, 0x0FCC},
{0x0FCE, 0x0FDA}, {0x1000, 0x10C5}, {0x10C7, 0x10C7},
{0x10CD, 0x10CD}, {0x10D0, 0x10FF}, {0x1160, 0x1248},
{0x124A, 0x124D}, {0x1250, 0x1256}, {0x1258, 0x1258},
{0x125A, 0x125D}, {0x1260, 0x1288}, {0x128A, 0x128D},
{0x1290, 0x12B0}, {0x12B2, 0x12B5}, {0x12B8, 0x12BE},
{0x12C0, 0x12C0}, {0x12C2, 0x12C5}, {0x12C8, 0x12D6},
{0x12D8, 0x1310}, {0x1312, 0x1315}, {0x1318, 0x135A},
{0x135D, 0x137C}, {0x1380, 0x1399}, {0x13A0, 0x13F5},
{0x13F8, 0x13FD}, {0x1400, 0x169C}, {0x16A0, 0x16F8},
{0x1700, 0x170C}, {0x170E, 0x1714}, {0x1720, 0x1736},
{0x1740, 0x1753}, {0x1760, 0x176C}, {0x176E, 0x1770},
{0x1772, 0x1773}, {0x1780, 0x17DD}, {0x17E0, 0x17E9},
{0x17F0, 0x17F9}, {0x1800, 0x180E}, {0x1810, 0x1819},
{0x1820, 0x1878}, {0x1880, 0x18AA}, {0x18B0, 0x18F5},
{0x1900, 0x191E}, {0x1920, 0x192B}, {0x1930, 0x193B},
{0x1940, 0x1940}, {0x1944, 0x196D}, {0x1970, 0x1974},
{0x1980, 0x19AB}, {0x19B0, 0x19C9}, {0x19D0, 0x19DA},
{0x19DE, 0x1A1B}, {0x1A1E, 0x1A5E}, {0x1A60, 0x1A7C},
{0x1A7F, 0x1A89}, {0x1A90, 0x1A99}, {0x1AA0, 0x1AAD},
{0x1AB0, 0x1AC0}, {0x1B00, 0x1B4B}, {0x1B50, 0x1B7C},
{0x1B80, 0x1BF3}, {0x1BFC, 0x1C37}, {0x1C3B, 0x1C49},
{0x1C4D, 0x1C88}, {0x1C90, 0x1CBA}, {0x1CBD, 0x1CC7},
{0x1CD0, 0x1CFA}, {0x1D00, 0x1DF9}, {0x1DFB, 0x1F15},
{0x1F18, 0x1F1D}, {0x1F20, 0x1F45}, {0x1F48, 0x1F4D},
{0x1F50, 0x1F57}, {0x1F59, 0x1F59}, {0x1F5B, 0x1F5B},
{0x1F5D, 0x1F5D}, {0x1F5F, 0x1F7D}, {0x1F80, 0x1FB4},
{0x1FB6, 0x1FC4}, {0x1FC6, 0x1FD3}, {0x1FD6, 0x1FDB},
{0x1FDD, 0x1FEF}, {0x1FF2, 0x1FF4}, {0x1FF6, 0x1FFE},
{0x2000, 0x200F}, {0x2011, 0x2012}, {0x2017, 0x2017},
{0x201A, 0x201B}, {0x201E, 0x201F}, {0x2023, 0x2023},
{0x2028, 0x202F}, {0x2031, 0x2031}, {0x2034, 0x2034},
{0x2036, 0x203A}, {0x203C, 0x203D}, {0x203F, 0x2064},
{0x2066, 0x2071}, {0x2075, 0x207E}, {0x2080, 0x2080},
{0x2085, 0x208E}, {0x2090, 0x209C}, {0x20A0, 0x20A8},
{0x20AA, 0x20AB}, {0x20AD, 0x20BF}, {0x20D0, 0x20F0},
{0x2100, 0x2102}, {0x2104, 0x2104}, {0x2106, 0x2108},
{0x210A, 0x2112}, {0x2114, 0x2115}, {0x2117, 0x2120},
{0x2123, 0x2125}, {0x2127, 0x212A}, {0x212C, 0x2152},
{0x2155, 0x215A}, {0x215F, 0x215F}, {0x216C, 0x216F},
{0x217A, 0x2188}, {0x218A, 0x218B}, {0x219A, 0x21B7},
{0x21BA, 0x21D1}, {0x21D3, 0x21D3}, {0x21D5, 0x21E6},
{0x21E8, 0x21FF}, {0x2201, 0x2201}, {0x2204, 0x2206},
{0x2209, 0x220A}, {0x220C, 0x220E}, {0x2210, 0x2210},
{0x2212, 0x2214}, {0x2216, 0x2219}, {0x221B, 0x221C},
{0x2221, 0x2222}, {0x2224, 0x2224}, {0x2226, 0x2226},
{0x222D, 0x222D}, {0x222F, 0x2233}, {0x2238, 0x223B},
{0x223E, 0x2247}, {0x2249, 0x224B}, {0x224D, 0x2251},
{0x2253, 0x225F}, {0x2262, 0x2263}, {0x2268, 0x2269},
{0x226C, 0x226D}, {0x2270, 0x2281}, {0x2284, 0x2285},
{0x2288, 0x2294}, {0x2296, 0x2298}, {0x229A, 0x22A4},
{0x22A6, 0x22BE}, {0x22C0, 0x2311}, {0x2313, 0x2319},
{0x231C, 0x2328}, {0x232B, 0x23E8}, {0x23ED, 0x23EF},
{0x23F1, 0x23F2}, {0x23F4, 0x2426}, {0x2440, 0x244A},
{0x24EA, 0x24EA}, {0x254C, 0x254F}, {0x2574, 0x257F},
{0x2590, 0x2591}, {0x2596, 0x259F}, {0x25A2, 0x25A2},
{0x25AA, 0x25B1}, {0x25B4, 0x25B5}, {0x25B8, 0x25BB},
{0x25BE, 0x25BF}, {0x25C2, 0x25C5}, {0x25C9, 0x25CA},
{0x25CC, 0x25CD}, {0x25D2, 0x25E1}, {0x25E6, 0x25EE},
{0x25F0, 0x25FC}, {0x25FF, 0x2604}, {0x2607, 0x2608},
{0x260A, 0x260D}, {0x2610, 0x2613}, {0x2616, 0x261B},
{0x261D, 0x261D}, {0x261F, 0x263F}, {0x2641, 0x2641},
{0x2643, 0x2647}, {0x2654, 0x265F}, {0x2662, 0x2662},
{0x2666, 0x2666}, {0x266B, 0x266B}, {0x266E, 0x266E},
{0x2670, 0x267E}, {0x2680, 0x2692}, {0x2694, 0x269D},
{0x26A0, 0x26A0}, {0x26A2, 0x26A9}, {0x26AC, 0x26BC},
{0x26C0, 0x26C3}, {0x26E2, 0x26E2}, {0x26E4, 0x26E7},
{0x2700, 0x2704}, {0x2706, 0x2709}, {0x270C, 0x2727},
{0x2729, 0x273C}, {0x273E, 0x274B}, {0x274D, 0x274D},
{0x274F, 0x2752}, {0x2756, 0x2756}, {0x2758, 0x2775},
{0x2780, 0x2794}, {0x2798, 0x27AF}, {0x27B1, 0x27BE},
{0x27C0, 0x27E5}, {0x27EE, 0x2984}, {0x2987, 0x2B1A},
{0x2B1D, 0x2B4F}, {0x2B51, 0x2B54}, {0x2B5A, 0x2B73},
{0x2B76, 0x2B95}, {0x2B97, 0x2C2E}, {0x2C30, 0x2C5E},
{0x2C60, 0x2CF3}, {0x2CF9, 0x2D25}, {0x2D27, 0x2D27},
{0x2D2D, 0x2D2D}, {0x2D30, 0x2D67}, {0x2D6F, 0x2D70},
{0x2D7F, 0x2D96}, {0x2DA0, 0x2DA6}, {0x2DA8, 0x2DAE},
{0x2DB0, 0x2DB6}, {0x2DB8, 0x2DBE}, {0x2DC0, 0x2DC6},
{0x2DC8, 0x2DCE}, {0x2DD0, 0x2DD6}, {0x2DD8, 0x2DDE},
{0x2DE0, 0x2E52}, {0x303F, 0x303F}, {0x4DC0, 0x4DFF},
{0xA4D0, 0xA62B}, {0xA640, 0xA6F7}, {0xA700, 0xA7BF},
{0xA7C2, 0xA7CA}, {0xA7F5, 0xA82C}, {0xA830, 0xA839},
{0xA840, 0xA877}, {0xA880, 0xA8C5}, {0xA8CE, 0xA8D9},
{0xA8E0, 0xA953}, {0xA95F, 0xA95F}, {0xA980, 0xA9CD},
{0xA9CF, 0xA9D9}, {0xA9DE, 0xA9FE}, {0xAA00, 0xAA36},
{0xAA40, 0xAA4D}, {0xAA50, 0xAA59}, {0xAA5C, 0xAAC2},
{0xAADB, 0xAAF6}, {0xAB01, 0xAB06}, {0xAB09, 0xAB0E},
{0xAB11, 0xAB16}, {0xAB20, 0xAB26}, {0xAB28, 0xAB2E},
{0xAB30, 0xAB6B}, {0xAB70, 0xABED}, {0xABF0, 0xABF9},
{0xD7B0, 0xD7C6}, {0xD7CB, 0xD7FB}, {0xD800, 0xDFFF},
{0xFB00, 0xFB06}, {0xFB13, 0xFB17}, {0xFB1D, 0xFB36},
{0xFB38, 0xFB3C}, {0xFB3E, 0xFB3E}, {0xFB40, 0xFB41},
{0xFB43, 0xFB44}, {0xFB46, 0xFBC1}, {0xFBD3, 0xFD3F},
{0xFD50, 0xFD8F}, {0xFD92, 0xFDC7}, {0xFDF0, 0xFDFD},
{0xFE20, 0xFE2F}, {0xFE70, 0xFE74}, {0xFE76, 0xFEFC},
{0xFEFF, 0xFEFF}, {0xFFF9, 0xFFFC}, {0x10000, 0x1000B},
{0x1000D, 0x10026}, {0x10028, 0x1003A}, {0x1003C, 0x1003D},
{0x1003F, 0x1004D}, {0x10050, 0x1005D}, {0x10080, 0x100FA},
{0x10100, 0x10102}, {0x10107, 0x10133}, {0x10137, 0x1018E},
{0x10190, 0x1019C}, {0x101A0, 0x101A0}, {0x101D0, 0x101FD},
{0x10280, 0x1029C}, {0x102A0, 0x102D0}, {0x102E0, 0x102FB},
{0x10300, 0x10323}, {0x1032D, 0x1034A}, {0x10350, 0x1037A},
{0x10380, 0x1039D}, {0x1039F, 0x103C3}, {0x103C8, 0x103D5},
{0x10400, 0x1049D}, {0x104A0, 0x104A9}, {0x104B0, 0x104D3},
{0x104D8, 0x104FB}, {0x10500, 0x10527}, {0x10530, 0x10563},
{0x1056F, 0x1056F}, {0x10600, 0x10736}, {0x10740, 0x10755},
{0x10760, 0x10767}, {0x10800, 0x10805}, {0x10808, 0x10808},
{0x1080A, 0x10835}, {0x10837, 0x10838}, {0x1083C, 0x1083C},
{0x1083F, 0x10855}, {0x10857, 0x1089E}, {0x108A7, 0x108AF},
{0x108E0, 0x108F2}, {0x108F4, 0x108F5}, {0x108FB, 0x1091B},
{0x1091F, 0x10939}, {0x1093F, 0x1093F}, {0x10980, 0x109B7},
{0x109BC, 0x109CF}, {0x109D2, 0x10A03}, {0x10A05, 0x10A06},
{0x10A0C, 0x10A13}, {0x10A15, 0x10A17}, {0x10A19, 0x10A35},
{0x10A38, 0x10A3A}, {0x10A3F, 0x10A48}, {0x10A50, 0x10A58},
{0x10A60, 0x10A9F}, {0x10AC0, 0x10AE6}, {0x10AEB, 0x10AF6},
{0x10B00, 0x10B35}, {0x10B39, 0x10B55}, {0x10B58, 0x10B72},
{0x10B78, 0x10B91}, {0x10B99, 0x10B9C}, {0x10BA9, 0x10BAF},
{0x10C00, 0x10C48}, {0x10C80, 0x10CB2}, {0x10CC0, 0x10CF2},
{0x10CFA, 0x10D27}, {0x10D30, 0x10D39}, {0x10E60, 0x10E7E},
{0x10E80, 0x10EA9}, {0x10EAB, 0x10EAD}, {0x10EB0, 0x10EB1},
{0x10F00, 0x10F27}, {0x10F30, 0x10F59}, {0x10FB0, 0x10FCB},
{0x10FE0, 0x10FF6}, {0x11000, 0x1104D}, {0x11052, 0x1106F},
{0x1107F, 0x110C1}, {0x110CD, 0x110CD}, {0x110D0, 0x110E8},
{0x110F0, 0x110F9}, {0x11100, 0x11134}, {0x11136, 0x11147},
{0x11150, 0x11176}, {0x11180, 0x111DF}, {0x111E1, 0x111F4},
{0x11200, 0x11211}, {0x11213, 0x1123E}, {0x11280, 0x11286},
{0x11288, 0x11288}, {0x1128A, 0x1128D}, {0x1128F, 0x1129D},
{0x1129F, 0x112A9}, {0x112B0, 0x112EA}, {0x112F0, 0x112F9},
{0x11300, 0x11303}, {0x11305, 0x1130C}, {0x1130F, 0x11310},
{0x11313, 0x11328}, {0x1132A, 0x11330}, {0x11332, 0x11333},
{0x11335, 0x11339}, {0x1133B, 0x11344}, {0x11347, 0x11348},
{0x1134B, 0x1134D}, {0x11350, 0x11350}, {0x11357, 0x11357},
{0x1135D, 0x11363}, {0x11366, 0x1136C}, {0x11370, 0x11374},
{0x11400, 0x1145B}, {0x1145D, 0x11461}, {0x11480, 0x114C7},
{0x114D0, 0x114D9}, {0x11580, 0x115B5}, {0x115B8, 0x115DD},
{0x11600, 0x11644}, {0x11650, 0x11659}, {0x11660, 0x1166C},
{0x11680, 0x116B8}, {0x116C0, 0x116C9}, {0x11700, 0x1171A},
{0x1171D, 0x1172B}, {0x11730, 0x1173F}, {0x11800, 0x1183B},
{0x118A0, 0x118F2}, {0x118FF, 0x11906}, {0x11909, 0x11909},
{0x1190C, 0x11913}, {0x11915, 0x11916}, {0x11918, 0x11935},
{0x11937, 0x11938}, {0x1193B, 0x11946}, {0x11950, 0x11959},
{0x119A0, 0x119A7}, {0x119AA, 0x119D7}, {0x119DA, 0x119E4},
{0x11A00, 0x11A47}, {0x11A50, 0x11AA2}, {0x11AC0, 0x11AF8},
{0x11C00, 0x11C08}, {0x11C0A, 0x11C36}, {0x11C38, 0x11C45},
{0x11C50, 0x11C6C}, {0x11C70, 0x11C8F}, {0x11C92, 0x11CA7},
{0x11CA9, 0x11CB6}, {0x11D00, 0x11D06}, {0x11D08, 0x11D09},
{0x11D0B, 0x11D36}, {0x11D3A, 0x11D3A}, {0x11D3C, 0x11D3D},
{0x11D3F, 0x11D47}, {0x11D50, 0x11D59}, {0x11D60, 0x11D65},
{0x11D67, 0x11D68}, {0x11D6A, 0x11D8E}, {0x11D90, 0x11D91},
{0x11D93, 0x11D98}, {0x11DA0, 0x11DA9}, {0x11EE0, 0x11EF8},
{0x11FB0, 0x11FB0}, {0x11FC0, 0x11FF1}, {0x11FFF, 0x12399},
{0x12400, 0x1246E}, {0x12470, 0x12474}, {0x12480, 0x12543},
{0x13000, 0x1342E}, {0x13430, 0x13438}, {0x14400, 0x14646},
{0x16800, 0x16A38}, {0x16A40, 0x16A5E}, {0x16A60, 0x16A69},
{0x16A6E, 0x16A6F}, {0x16AD0, 0x16AED}, {0x16AF0, 0x16AF5},
{0x16B00, 0x16B45}, {0x16B50, 0x16B59}, {0x16B5B, 0x16B61},
{0x16B63, 0x16B77}, {0x16B7D, 0x16B8F}, {0x16E40, 0x16E9A},
{0x16F00, 0x16F4A}, {0x16F4F, 0x16F87}, {0x16F8F, 0x16F9F},
{0x1BC00, 0x1BC6A}, {0x1BC70, 0x1BC7C}, {0x1BC80, 0x1BC88},
{0x1BC90, 0x1BC99}, {0x1BC9C, 0x1BCA3}, {0x1D000, 0x1D0F5},
{0x1D100, 0x1D126}, {0x1D129, 0x1D1E8}, {0x1D200, 0x1D245},
{0x1D2E0, 0x1D2F3}, {0x1D300, 0x1D356}, {0x1D360, 0x1D378},
{0x1D400, 0x1D454}, {0x1D456, 0x1D49C}, {0x1D49E, 0x1D49F},
{0x1D4A2, 0x1D4A2}, {0x1D4A5, 0x1D4A6}, {0x1D4A9, 0x1D4AC},
{0x1D4AE, 0x1D4B9}, {0x1D4BB, 0x1D4BB}, {0x1D4BD, 0x1D4C3},
{0x1D4C5, 0x1D505}, {0x1D507, 0x1D50A}, {0x1D50D, 0x1D514},
{0x1D516, 0x1D51C}, {0x1D51E, 0x1D539}, {0x1D53B, 0x1D53E},
{0x1D540, 0x1D544}, {0x1D546, 0x1D546}, {0x1D54A, 0x1D550},
{0x1D552, 0x1D6A5}, {0x1D6A8, 0x1D7CB}, {0x1D7CE, 0x1DA8B},
{0x1DA9B, 0x1DA9F}, {0x1DAA1, 0x1DAAF}, {0x1E000, 0x1E006},
{0x1E008, 0x1E018}, {0x1E01B, 0x1E021}, {0x1E023, 0x1E024},
{0x1E026, 0x1E02A}, {0x1E100, 0x1E12C}, {0x1E130, 0x1E13D},
{0x1E140, 0x1E149}, {0x1E14E, 0x1E14F}, {0x1E2C0, 0x1E2F9},
{0x1E2FF, 0x1E2FF}, {0x1E800, 0x1E8C4}, {0x1E8C7, 0x1E8D6},
{0x1E900, 0x1E94B}, {0x1E950, 0x1E959}, {0x1E95E, 0x1E95F},
{0x1EC71, 0x1ECB4}, {0x1ED01, 0x1ED3D}, {0x1EE00, 0x1EE03},
{0x1EE05, 0x1EE1F}, {0x1EE21, 0x1EE22}, {0x1EE24, 0x1EE24},
{0x1EE27, 0x1EE27}, {0x1EE29, 0x1EE32}, {0x1EE34, 0x1EE37},
{0x1EE39, 0x1EE39}, {0x1EE3B, 0x1EE3B}, {0x1EE42, 0x1EE42},
{0x1EE47, 0x1EE47}, {0x1EE49, 0x1EE49}, {0x1EE4B, 0x1EE4B},
{0x1EE4D, 0x1EE4F}, {0x1EE51, 0x1EE52}, {0x1EE54, 0x1EE54},
{0x1EE57, 0x1EE57}, {0x1EE59, 0x1EE59}, {0x1EE5B, 0x1EE5B},
{0x1EE5D, 0x1EE5D}, {0x1EE5F, 0x1EE5F}, {0x1EE61, 0x1EE62},
{0x1EE64, 0x1EE64}, {0x1EE67, 0x1EE6A}, {0x1EE6C, 0x1EE72},
{0x1EE74, 0x1EE77}, {0x1EE79, 0x1EE7C}, {0x1EE7E, 0x1EE7E},
{0x1EE80, 0x1EE89}, {0x1EE8B, 0x1EE9B}, {0x1EEA1, 0x1EEA3},
{0x1EEA5, 0x1EEA9}, {0x1EEAB, 0x1EEBB}, {0x1EEF0, 0x1EEF1},
{0x1F000, 0x1F003}, {0x1F005, 0x1F02B}, {0x1F030, 0x1F093},
{0x1F0A0, 0x1F0AE}, {0x1F0B1, 0x1F0BF}, {0x1F0C1, 0x1F0CE},
{0x1F0D1, 0x1F0F5}, {0x1F10B, 0x1F10F}, {0x1F12E, 0x1F12F},
{0x1F16A, 0x1F16F}, {0x1F1AD, 0x1F1AD}, {0x1F1E6, 0x1F1FF},
{0x1F321, 0x1F32C}, {0x1F336, 0x1F336}, {0x1F37D, 0x1F37D},
{0x1F394, 0x1F39F}, {0x1F3CB, 0x1F3CE}, {0x1F3D4, 0x1F3DF},
{0x1F3F1, 0x1F3F3}, {0x1F3F5, 0x1F3F7}, {0x1F43F, 0x1F43F},
{0x1F441, 0x1F441}, {0x1F4FD, 0x1F4FE}, {0x1F53E, 0x1F54A},
{0x1F54F, 0x1F54F}, {0x1F568, 0x1F579}, {0x1F57B, 0x1F594},
{0x1F597, 0x1F5A3}, {0x1F5A5, 0x1F5FA}, {0x1F650, 0x1F67F},
{0x1F6C6, 0x1F6CB}, {0x1F6CD, 0x1F6CF}, {0x1F6D3, 0x1F6D4},
{0x1F6E0, 0x1F6EA}, {0x1F6F0, 0x1F6F3}, {0x1F700, 0x1F773},
{0x1F780, 0x1F7D8}, {0x1F800, 0x1F80B}, {0x1F810, 0x1F847},
{0x1F850, 0x1F859}, {0x1F860, 0x1F887}, {0x1F890, 0x1F8AD},
{0x1F8B0, 0x1F8B1}, {0x1F900, 0x1F90B}, {0x1F93B, 0x1F93B},
{0x1F946, 0x1F946}, {0x1FA00, 0x1FA53}, {0x1FA60, 0x1FA6D},
{0x1FB00, 0x1FB92}, {0x1FB94, 0x1FBCA}, {0x1FBF0, 0x1FBF9},
{0xE0001, 0xE0001}, {0xE0020, 0xE007F},
}
var emoji = table{
{0x203C, 0x203C}, {0x2049, 0x2049}, {0x2122, 0x2122},
{0x2139, 0x2139}, {0x2194, 0x2199}, {0x21A9, 0x21AA},
{0x231A, 0x231B}, {0x2328, 0x2328}, {0x2388, 0x2388},
{0x23CF, 0x23CF}, {0x23E9, 0x23F3}, {0x23F8, 0x23FA},
{0x24C2, 0x24C2}, {0x25AA, 0x25AB}, {0x25B6, 0x25B6},
{0x25C0, 0x25C0}, {0x25FB, 0x25FE}, {0x2600, 0x2605},
{0x2607, 0x2612}, {0x2614, 0x2685}, {0x2690, 0x2705},
{0x2708, 0x2712}, {0x2714, 0x2714}, {0x2716, 0x2716},
{0x271D, 0x271D}, {0x2721, 0x2721}, {0x2728, 0x2728},
{0x2733, 0x2734}, {0x2744, 0x2744}, {0x2747, 0x2747},
{0x274C, 0x274C}, {0x274E, 0x274E}, {0x2753, 0x2755},
{0x2757, 0x2757}, {0x2763, 0x2767}, {0x2795, 0x2797},
{0x27A1, 0x27A1}, {0x27B0, 0x27B0}, {0x27BF, 0x27BF},
{0x2934, 0x2935}, {0x2B05, 0x2B07}, {0x2B1B, 0x2B1C},
{0x2B50, 0x2B50}, {0x2B55, 0x2B55}, {0x3030, 0x3030},
{0x303D, 0x303D}, {0x3297, 0x3297}, {0x3299, 0x3299},
{0x1F000, 0x1F0FF}, {0x1F10D, 0x1F10F}, {0x1F12F, 0x1F12F},
{0x1F16C, 0x1F171}, {0x1F17E, 0x1F17F}, {0x1F18E, 0x1F18E},
{0x1F191, 0x1F19A}, {0x1F1AD, 0x1F1E5}, {0x1F201, 0x1F20F},
{0x1F21A, 0x1F21A}, {0x1F22F, 0x1F22F}, {0x1F232, 0x1F23A},
{0x1F23C, 0x1F23F}, {0x1F249, 0x1F3FA}, {0x1F400, 0x1F53D},
{0x1F546, 0x1F64F}, {0x1F680, 0x1F6FF}, {0x1F774, 0x1F77F},
{0x1F7D5, 0x1F7FF}, {0x1F80C, 0x1F80F}, {0x1F848, 0x1F84F},
{0x1F85A, 0x1F85F}, {0x1F888, 0x1F88F}, {0x1F8AE, 0x1F8FF},
{0x1F90C, 0x1F93A}, {0x1F93C, 0x1F945}, {0x1F947, 0x1FAFF},
{0x1FC00, 0x1FFFD},
}
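The generated tables above are sorted, non-overlapping, inclusive rune ranges, so a width lookup reduces to a binary search over them. The following is a minimal, self-contained sketch of that lookup, not go-runewidth's actual code; the names `interval`, `inTable`, and `sampleDoublewidth` are illustrative, and the sample table is a tiny excerpt of the `doublewidth` ranges above.

```go
package main

import "fmt"

// interval is one inclusive rune range, mirroring the {first, last} pairs
// in the generated tables.
type interval struct {
	first, last rune
}

type table []interval

// inTable reports whether r falls inside any range of t, using binary
// search over the sorted, non-overlapping intervals.
func inTable(r rune, t table) bool {
	if len(t) == 0 || r < t[0].first {
		return false
	}
	lo, hi := 0, len(t)-1
	for lo <= hi {
		mid := (lo + hi) / 2
		switch {
		case r < t[mid].first:
			hi = mid - 1
		case r > t[mid].last:
			lo = mid + 1
		default:
			return true
		}
	}
	return false
}

// sampleDoublewidth is a small excerpt of the doublewidth table above.
var sampleDoublewidth = table{
	{0x1100, 0x115F}, {0x3000, 0x303E}, {0x4E00, 0xA48C}, {0xAC00, 0xD7A3},
}

func main() {
	fmt.Println(inTable('界', sampleDoublewidth)) // U+754C, inside 0x4E00..0xA48C
	fmt.Println(inTable('A', sampleDoublewidth))  // U+0041, outside every range
}
```

Because the real tables hold hundreds of ranges, the binary search keeps each width query at O(log n) range comparisons rather than a linear scan.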


@@ -0,0 +1,28 @@
// +build windows
// +build !appengine
package runewidth
import (
"syscall"
)
var (
kernel32 = syscall.NewLazyDLL("kernel32")
procGetConsoleOutputCP = kernel32.NewProc("GetConsoleOutputCP")
)
// IsEastAsian returns true if the current locale is CJK
func IsEastAsian() bool {
r1, _, _ := procGetConsoleOutputCP.Call()
if r1 == 0 {
return false
}
switch int(r1) {
case 932, 51932, 936, 949, 950:
return true
}
return false
}


@@ -0,0 +1,7 @@
coverage.out
coverage.txt
release-notes.txt
.directory
.chglog
.vscode
.DS_Store


@@ -0,0 +1,534 @@
<a name="unreleased"></a>
## [Unreleased]
<a name="v0.7.1"></a>
## [v0.7.1] - 2023-05-11
### Add
- Add describe functions ([#77](https://github.com/montanaflynn/stats/issues/77))
### Update
- Update .gitignore
- Update README.md, LICENSE and DOCUMENTATION.md files
- Update github action go workflow to run on push
<a name="v0.7.0"></a>
## [v0.7.0] - 2023-01-08
### Add
- Add geometric distribution functions ([#75](https://github.com/montanaflynn/stats/issues/75))
- Add GitHub action go workflow
### Remove
- Remove travis CI config
### Update
- Update changelog with v0.7.0 changes
- Update changelog with v0.7.0 changes
- Update github action go workflow
- Update geometric distribution tests
<a name="v0.6.6"></a>
## [v0.6.6] - 2021-04-26
### Add
- Add support for string and io.Reader in LoadRawData (pr [#68](https://github.com/montanaflynn/stats/issues/68))
- Add latest versions of Go to test against
### Update
- Update changelog with v0.6.6 changes
### Use
- Use math.Sqrt in StandardDeviation (PR [#64](https://github.com/montanaflynn/stats/issues/64))
<a name="v0.6.5"></a>
## [v0.6.5] - 2021-02-21
### Add
- Add Float64Data.Quartiles documentation
- Add Quartiles method to Float64Data type (issue [#60](https://github.com/montanaflynn/stats/issues/60))
### Fix
- Fix make release changelog command and add changelog history
### Update
- Update changelog with v0.6.5 changes
- Update changelog with v0.6.4 changes
- Update README.md links to CHANGELOG.md and DOCUMENTATION.md
- Update README.md and Makefile with new release commands
<a name="v0.6.4"></a>
## [v0.6.4] - 2021-01-13
### Fix
- Fix failing tests due to precision errors on arm64 ([#58](https://github.com/montanaflynn/stats/issues/58))
### Update
- Update changelog with v0.6.4 changes
- Update examples directory to include a README.md used for synopsis
- Update go.mod to include go version where modules are enabled by default
- Update changelog with v0.6.3 changes
<a name="v0.6.3"></a>
## [v0.6.3] - 2020-02-18
### Add
- Add creating and committing changelog to Makefile release directive
- Add release-notes.txt and .chglog directory to .gitignore
### Update
- Update exported tests to use import for better example documentation
- Update documentation using godoc2md
- Update changelog with v0.6.2 release
<a name="v0.6.2"></a>
## [v0.6.2] - 2020-02-18
### Fix
- Fix linting errcheck warnings in go benchmarks
### Update
- Update Makefile release directive to use correct release name
<a name="v0.6.1"></a>
## [v0.6.1] - 2020-02-18
### Add
- Add StableSample function signature to readme
### Fix
- Fix linting warnings for normal distribution functions formatting and tests
### Update
- Update documentation links and rename DOC.md to DOCUMENTATION.md
- Update README with link to pkg.go.dev reference and release section
- Update Makefile with new changelog, docs, and release directives
- Update DOC.md links to GitHub source code
- Update doc.go comment and add DOC.md package reference file
- Update changelog using git-chglog
<a name="v0.6.0"></a>
## [v0.6.0] - 2020-02-17
### Add
- Add Normal Distribution Functions ([#56](https://github.com/montanaflynn/stats/issues/56))
- Add previous versions of Go to travis CI config
- Add check for distinct values in Mode function ([#51](https://github.com/montanaflynn/stats/issues/51))
- Add StableSample function ([#48](https://github.com/montanaflynn/stats/issues/48))
- Add doc.go file to show description and usage on godoc.org
- Add comments to new error and legacy error variables
- Add ExampleRound function to tests
- Add go.mod file for module support
- Add Sigmoid, SoftMax and Entropy methods and tests
- Add Entropy documentation, example and benchmarks
- Add Entropy function ([#44](https://github.com/montanaflynn/stats/issues/44))
### Fix
- Fix percentile when only one element ([#47](https://github.com/montanaflynn/stats/issues/47))
- Fix AutoCorrelation name in comments and remove unneeded Sprintf
### Improve
- Improve documentation section with command comments
### Remove
- Remove very old versions of Go in travis CI config
- Remove boolean comparison to get rid of gometalinter warning
### Update
- Update license dates
- Update Distance functions signatures to use Float64Data
- Update Sigmoid examples
- Update error names with backward compatibility
### Use
- Use relative link to examples/main.go
- Use a single var block for exported errors
<a name="v0.5.0"></a>
## [v0.5.0] - 2019-01-16
### Add
- Add Sigmoid and Softmax functions
### Fix
- Fix syntax highlighting and add CumulativeSum func
<a name="v0.4.0"></a>
## [v0.4.0] - 2019-01-14
### Add
- Add goreport badge and documentation section to README.md
- Add Examples to test files
- Add AutoCorrelation and nist tests
- Add String method to statsErr type
- Add Y coordinate error for ExponentialRegression
- Add syntax highlighting ([#43](https://github.com/montanaflynn/stats/issues/43))
- Add CumulativeSum ([#40](https://github.com/montanaflynn/stats/issues/40))
- Add more tests and rename distance files
- Add coverage and benchmarks to azure pipeline
- Add go tests to azure pipeline
### Change
- Change travis tip alias to master
- Change codecov to coveralls for code coverage
### Fix
- Fix a few lint warnings
- Fix example error
### Improve
- Improve test coverage of distance functions
### Only
- Only run travis on stable and tip versions
- Only check code coverage on tip
### Remove
- Remove azure CI pipeline
- Remove unnecessary type conversions
### Return
- Return EmptyInputErr instead of EmptyInput
### Set
- Set up CI with Azure Pipelines
<a name="0.3.0"></a>
## [0.3.0] - 2017-12-02
### Add
- Add Chebyshev, Manhattan, Euclidean and Minkowski distance functions ([#35](https://github.com/montanaflynn/stats/issues/35))
- Add function for computing chebyshev distance. ([#34](https://github.com/montanaflynn/stats/issues/34))
- Add support for time.Duration
- Add LoadRawData to docs and examples
- Add unit test for edge case that wasn't covered
- Add unit tests for edge cases that weren't covered
- Add pearson alias delegating to correlation
- Add CovariancePopulation to Float64Data
- Add pearson product-moment correlation coefficient
- Add population covariance
- Add random slice benchmarks
- Add all applicable functions as methods to Float64Data type
- Add MIT license badge
- Add link to examples/methods.go
- Add Protips for usage and documentation sections
- Add tests for rounding up
- Add webdoc target and remove linting from test target
- Add example usage and consolidate contributing information
### Added
- Added MedianAbsoluteDeviation
### Annotation
- Annotation spelling error
### Auto
- auto commit
- auto commit
### Calculate
- Calculate correlation with sdev and covp
### Clean
- Clean up README.md and add info for offline docs
### Consolidated
- Consolidated all error values.
### Fix
- Fix Percentile logic
- Fix InterQuartileRange method test
- Fix zero percent bug and add test
- Fix usage example output typos
### Improve
- Improve bounds checking in Percentile
- Improve error log messaging
### Imput
- Imput -> Input
### Include
- Include alternative way to set Float64Data in example
### Make
- Make various changes to README.md
### Merge
- Merge branch 'master' of github.com:montanaflynn/stats
- Merge master
### Mode
- Mode calculation fix and tests
### Realized
- Realized the obvious efficiency gains of ignoring the unique numbers at the beginning of the slice. Benchmark joy ensued.
### Refactor
- Refactor testing of Round()
- Refactor setting Coordinate y field using Exp in place of Pow
- Refactor Makefile and add docs target
### Remove
- Remove deep links to types and functions
### Rename
- Rename file from types to data
### Retrieve
- Retrieve InterQuartileRange for the Float64Data.
### Split
- Split up stats.go into separate files
### Support
- Support more types on LoadRawData() ([#36](https://github.com/montanaflynn/stats/issues/36))
### Switch
- Switch default and check targets
### Update
- Update Readme
- Update example methods and some text
- Update README and include Float64Data type method examples
### Pull Requests
- Merge pull request [#32](https://github.com/montanaflynn/stats/issues/32) from a-robinson/percentile
- Merge pull request [#30](https://github.com/montanaflynn/stats/issues/30) from montanaflynn/fix-test
- Merge pull request [#29](https://github.com/montanaflynn/stats/issues/29) from edupsousa/master
- Merge pull request [#27](https://github.com/montanaflynn/stats/issues/27) from andrey-yantsen/fix-percentile-out-of-bounds
- Merge pull request [#25](https://github.com/montanaflynn/stats/issues/25) from kazhuravlev/patch-1
- Merge pull request [#22](https://github.com/montanaflynn/stats/issues/22) from JanBerktold/time-duration
- Merge pull request [#24](https://github.com/montanaflynn/stats/issues/24) from alouche/master
- Merge pull request [#21](https://github.com/montanaflynn/stats/issues/21) from brydavis/master
- Merge pull request [#19](https://github.com/montanaflynn/stats/issues/19) from ginodeis/mode-bug
- Merge pull request [#17](https://github.com/montanaflynn/stats/issues/17) from Kunde21/master
- Merge pull request [#3](https://github.com/montanaflynn/stats/issues/3) from montanaflynn/master
- Merge pull request [#2](https://github.com/montanaflynn/stats/issues/2) from montanaflynn/master
- Merge pull request [#13](https://github.com/montanaflynn/stats/issues/13) from toashd/pearson
- Merge pull request [#12](https://github.com/montanaflynn/stats/issues/12) from alixaxel/MAD
- Merge pull request [#1](https://github.com/montanaflynn/stats/issues/1) from montanaflynn/master
- Merge pull request [#11](https://github.com/montanaflynn/stats/issues/11) from Kunde21/modeMemReduce
- Merge pull request [#10](https://github.com/montanaflynn/stats/issues/10) from Kunde21/ModeRewrite
<a name="0.2.0"></a>
## [0.2.0] - 2015-10-14
### Add
- Add Makefile with gometalinter, testing, benchmarking and coverage report targets
- Add comments describing functions and structs
- Add Correlation func
- Add Covariance func
- Add tests for new function shortcuts
- Add StandardDeviation function as a shortcut to StandardDeviationPopulation
- Add Float64Data and Series types
### Change
- Change Sample to return a standard []float64 type
### Fix
- Fix broken link to Makefile
- Fix broken link and simplify code coverage reporting command
- Fix go vet warning about printf type placeholder
- Fix failing codecov test coverage reporting
- Fix link to CHANGELOG.md
- Fixed typographical error, changed accomdate to accommodate in README.
### Include
- Include Variance and StandardDeviation shortcuts
### Pass
- Pass gometalinter
### Refactor
- Refactor Variance function to be the same as population variance
### Release
- Release version 0.2.0
### Remove
- Remove unneeded do packages and update cover URL
- Remove sudo from pip install
### Reorder
- Reorder functions and sections
### Revert
- Revert to legacy containers to preserve go1.1 testing
### Switch
- Switch from legacy to container-based CI infrastructure
### Update
- Update contributing instructions and mention Makefile
### Pull Requests
- Merge pull request [#5](https://github.com/montanaflynn/stats/issues/5) from orthographic-pedant/spell_check/accommodate
<a name="0.1.0"></a>
## [0.1.0] - 2015-08-19
### Add
- Add CONTRIBUTING.md
### Rename
- Rename functions while preserving backwards compatibility
<a name="0.0.9"></a>
## 0.0.9 - 2015-08-18
### Add
- Add HarmonicMean func
- Add GeometricMean func
- Add .gitignore to avoid committing test coverage report
- Add Outliers struct and QuantileOutliers func
- Add Interquartile Range, Midhinge and Trimean examples
- Add Trimean
- Add Midhinge
- Add Interquartile Range
- Add a unit test to check for an empty slice error
- Add Quantiles struct and Quantile func
- Add more tests and fix a typo
- Add Golang 1.5 to build tests
- Add a standard MIT license file
- Add basic benchmarking
- Add regression models
- Add codecov token
- Add codecov
- Add check for slices with a single item
- Add coverage tests
- Add back previous Go versions to Travis CI
- Add Travis CI
- Add GoDoc badge
- Add Percentile and Float64ToInt functions
- Add another rounding test for whole numbers
- Add build status badge
- Add code coverage badge
- Add test for NaN, achieving 100% code coverage
- Add round function
- Add standard deviation function
- Add sum function
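Several of the additions above (Interquartile Range, Midhinge, Trimean) are simple combinations of the three quartiles. A hedged sketch in Go — the quartile convention here (median of the lower/upper half, excluding the overall median for odd lengths) is one common choice and may differ from the library's:

```go
package main

import (
	"fmt"
	"sort"
)

// medianSorted returns the median of an already-sorted, non-empty slice.
func medianSorted(s []float64) float64 {
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

// quartiles sorts a copy of the data and takes the median of each half;
// for odd n the overall median is excluded from both halves.
func quartiles(data []float64) (q1, q2, q3 float64) {
	s := append([]float64(nil), data...)
	sort.Float64s(s)
	n := len(s)
	q2 = medianSorted(s)
	if n%2 == 1 {
		q1 = medianSorted(s[:n/2])
		q3 = medianSorted(s[n/2+1:])
	} else {
		q1 = medianSorted(s[:n/2])
		q3 = medianSorted(s[n/2:])
	}
	return q1, q2, q3
}

func main() {
	data := []float64{1, 2, 3, 4, 5, 6, 7, 8}
	q1, q2, q3 := quartiles(data)
	fmt.Println(q3 - q1)              // interquartile range: 4
	fmt.Println((q1 + q3) / 2)        // midhinge: 4.5
	fmt.Println((q1 + 2*q2 + q3) / 4) // trimean: 4.5
}
```

The trimean weights the median twice as heavily as the hinges, which is why it lands between the midhinge and the median.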
- Add tests for sample
- Add sample
### Added
- Added sample and population variance and deviation functions
- Added README
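The sample and population variance functions mentioned above differ only in the divisor: population variance divides by n, sample variance by n−1 (Bessel's correction). A minimal standalone Go illustration, not the library's code:

```go
package main

import "fmt"

// variance computes the population variance (divide by n) or, when
// sample is true, the sample variance (divide by n-1, Bessel's correction).
func variance(data []float64, sample bool) float64 {
	n := float64(len(data))
	sum := 0.0
	for _, v := range data {
		sum += v
	}
	m := sum / n
	sq := 0.0
	for _, v := range data {
		sq += (v - m) * (v - m)
	}
	if sample {
		return sq / (n - 1)
	}
	return sq / n
}

func main() {
	data := []float64{2, 4, 4, 4, 5, 5, 7, 9}
	fmt.Println(variance(data, false)) // population variance: 4
	fmt.Println(variance(data, true))  // sample variance: 32/7 ≈ 4.571
}
```

The standard deviation functions are the square roots of these two quantities.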
### Adjust
- Adjust API ordering
### Avoid
- Avoid unintended consequence of using sort
### Better
- Better performing min/max
- Better description
### Change
- Change package path to potentially fix a bug in earlier versions of Go
### Clean
- Clean up README and add some more information
- Clean up test error
### Consistent
- Consistent empty slice error messages
- Consistent var naming
- Consistent func declaration
### Convert
- Convert ints to floats
### Duplicate
- Duplicate packages for all versions
### Export
- Export Coordinate struct fields
### First
- First commit
### Fix
- Fix copy-paste mistake testing the wrong function
- Fix error message
- Fix usage output and edit API doc section
- Fix testing edge case where a map was in the wrong order
- Fix usage example
- Fix usage examples
### Include
- Include the Nearest Rank method of calculating percentiles
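The Nearest Rank method noted above picks the value at ordinal rank ⌈(P/100)·N⌉ in the sorted data rather than interpolating between neighbors. A small sketch under that convention (hypothetical function name, not the library's API):

```go
package main

import (
	"errors"
	"fmt"
	"math"
	"sort"
)

// percentileNearestRank returns the element at 1-indexed ordinal rank
// ceil(p/100 * N) of the sorted data, with no interpolation.
func percentileNearestRank(data []float64, p float64) (float64, error) {
	if len(data) == 0 {
		return 0, errors.New("empty input")
	}
	if p <= 0 || p > 100 {
		return 0, errors.New("percent must be in (0, 100]")
	}
	s := append([]float64(nil), data...)
	sort.Float64s(s)
	rank := int(math.Ceil(p / 100 * float64(len(s))))
	return s[rank-1], nil
}

func main() {
	data := []float64{15, 20, 35, 40, 50}
	v, _ := percentileNearestRank(data, 30)
	fmt.Println(v) // 20: rank = ceil(0.3*5) = 2
	v, _ = percentileNearestRank(data, 100)
	fmt.Println(v) // 50: the 100th percentile is the maximum
}
```

Unlike interpolated percentiles, this method always returns an element that actually occurs in the data.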
### More
- More commenting
### Move
- Move GoDoc link to top
### Redirect
- Redirect kills newer versions of Go
### Refactor
- Refactor code and error checking
### Remove
- Remove unnecessary typecasting in sum func
- Remove cover since it doesn't work for later versions of go
- Remove golint and gocoveralls
### Rename
- Rename StandardDev to StdDev
- Rename StandardDev to StdDev
### Return
- Return errors for all functions
### Run
- Run go fmt to clean up formatting
### Simplify
- Simplify min/max function
### Start
- Start with minimal tests
### Switch
- Switch wercker to travis and update todos
### Table
- Table-driven testing style
### Update
- Update README and move the example main.go into its own file
- Update TODO list
- Update README
- Update usage examples and todos
### Use
- Use codecov the recommended way
- Use correct string formatting types
### Pull Requests
- Merge pull request [#4](https://github.com/montanaflynn/stats/issues/4) from saromanov/sample
[Unreleased]: https://github.com/montanaflynn/stats/compare/v0.7.1...HEAD
[v0.7.1]: https://github.com/montanaflynn/stats/compare/v0.7.0...v0.7.1
[v0.7.0]: https://github.com/montanaflynn/stats/compare/v0.6.6...v0.7.0
[v0.6.6]: https://github.com/montanaflynn/stats/compare/v0.6.5...v0.6.6
[v0.6.5]: https://github.com/montanaflynn/stats/compare/v0.6.4...v0.6.5
[v0.6.4]: https://github.com/montanaflynn/stats/compare/v0.6.3...v0.6.4
[v0.6.3]: https://github.com/montanaflynn/stats/compare/v0.6.2...v0.6.3
[v0.6.2]: https://github.com/montanaflynn/stats/compare/v0.6.1...v0.6.2
[v0.6.1]: https://github.com/montanaflynn/stats/compare/v0.6.0...v0.6.1
[v0.6.0]: https://github.com/montanaflynn/stats/compare/v0.5.0...v0.6.0
[v0.5.0]: https://github.com/montanaflynn/stats/compare/v0.4.0...v0.5.0
[v0.4.0]: https://github.com/montanaflynn/stats/compare/0.3.0...v0.4.0
[0.3.0]: https://github.com/montanaflynn/stats/compare/0.2.0...0.3.0
[0.2.0]: https://github.com/montanaflynn/stats/compare/0.1.0...0.2.0
[0.1.0]: https://github.com/montanaflynn/stats/compare/0.0.9...0.1.0


@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014-2023 Montana Flynn (https://montanaflynn.com)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
