Added helpers section

Authored by ss-9984 on 2024-01-22 20:34:01 +05:30, committed by dmc
parent 33e2260149
commit 3fb2208123
6 changed files with 875 additions and 0 deletions

helpers/compact-encoding.md Normal file

@@ -0,0 +1,168 @@
# Compact Encoding
A series of binary encoders/decoders for building small and fast parsers and serializers.
> [Github (Compact-Encoding)](https://github.com/compact-encoding/compact-encoding)
* [Compact-Encoding](compact-encoding.md#installation)
* Methods
* [state()](compact-encoding.md#state)
* [enc.preencode(state, val)](compact-encoding.md#encpreencodestate-val)
* [enc.encode(state, val)](compact-encoding.md#encencodestate-val)
* [enc.decode(state)](compact-encoding.md#val--encdecodestate)
* [Helpers](compact-encoding.md#helpers)
* [Bundled Encodings](compact-encoding.md#bundled-encodings)
### Installation
Install with [npm](https://www.npmjs.com/):
```bash
npm install compact-encoding
```
### Encoder API
#### **`state()`**
An object with the keys `{ start, end, buffer, cache }`.
| Keys | Description |
| -------- | --------------------------------------------- |
| `start` | Byte offset to start encoding/decoding at. |
| `end` | Byte offset indicating the end of the buffer. |
| `buffer` | Either a Node.js Buffer or Uint8Array. |
| `cache`  | Used internally by codecs, starts as `null`.  |
> Users can also get a blank state object using `cenc.state()`.
```javascript
const cenc = require('compact-encoding')
const state = cenc.state()
```
#### **`enc.preencode(state, val)`**
Performs a fast preencode dry-run that only sets `state.end`. Use this to figure out how big a buffer is needed.
```javascript
const cenc = require('compact-encoding')
const state = cenc.state()
// use preencode to figure out how big a buffer is needed
cenc.uint.preencode(state, 42)
cenc.string.preencode(state, 'hi')
console.log(state) // { start: 0, end: 4, buffer: null, cache: null }
```
#### **`enc.encode(state, val)`**
Encodes `val` into `state.buffer` at position `state.start` and updates `state.start` to point after the encoded value when done.
```javascript
state.buffer = Buffer.allocUnsafe(state.end)
// then use encode to actually encode it to the buffer
cenc.uint.encode(state, 42)
cenc.string.encode(state, 'hi')
```
#### **`val = enc.decode(state)`**
Decodes a value from `state.buffer` at position `state.start` and updates `state.start` to point after the decoded value in the buffer when done.
```javascript
// to decode it simply use decode instead
state.start = 0
cenc.uint.decode(state) // 42
cenc.string.decode(state) // 'hi'
```
### Helpers
To encode to a buffer or decode from one, use the `encode` and `decode` helpers to reduce your boilerplate.
```javascript
const buf = cenc.encode(cenc.bool, true)
const bool = cenc.decode(cenc.bool, buf)
```
### Bundled encodings
The following encodings are bundled as they are primitives that can be used to build others on top.
> Feel free to make a PR to add more encodings that are missing.
| Encodings | Description |
| --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `cenc.raw` | Pass through encodes a buffer, i.e., a basic copy. |
| `cenc.uint` | Encodes a uint using [compact-uint](https://github.com/mafintosh/compact-uint). |
| `cenc.uint8` | Encodes a fixed size uint8. |
| `cenc.uint16` | Encodes a fixed size uint16. Useful for things like ports. |
| `cenc.uint24` | Encodes a fixed size uint24. Useful for message framing. |
| `cenc.uint32` | Encodes a fixed size uint32. Useful for very large message framing. |
| `cenc.uint40` | Encodes a fixed size uint40. |
| `cenc.uint48` | Encodes a fixed size uint48. |
| `cenc.uint56` | Encodes a fixed size uint56. |
| `cenc.uint64` | Encodes a fixed size uint64. |
| `cenc.int` | Encodes an int using `cenc.uint` with ZigZag encoding. |
| `cenc.int8` | Encodes a fixed size int8 using `cenc.uint8` with ZigZag encoding. |
| `cenc.int16` | Encodes a fixed size int16 using `cenc.uint16` with ZigZag encoding. |
| `cenc.int24` | Encodes a fixed size int24 using `cenc.uint24` with ZigZag encoding. |
| `cenc.int32` | Encodes a fixed size int32 using `cenc.uint32` with ZigZag encoding. |
| `cenc.int40` | Encodes a fixed size int40 using `cenc.uint40` with ZigZag encoding. |
| `cenc.int48` | Encodes a fixed size int48 using `cenc.uint48` with ZigZag encoding. |
| `cenc.int56` | Encodes a fixed size int56 using `cenc.uint56` with ZigZag encoding. |
| `cenc.int64` | Encodes a fixed size int64 using `cenc.uint64` with ZigZag encoding. |
| `cenc.lexint` | Encodes an int using [lexicographic-integer](https://github.com/substack/lexicographic-integer) encoding so that encoded values are lexicographically sorted in ascending numerical order. |
| `cenc.float32` | Encodes a fixed size float32. |
| `cenc.float64` | Encodes a fixed size float64. |
| `cenc.buffer` | Encodes a buffer with its length uint prefixed. When decoding an empty buffer, `null` is returned. |
| `cenc.raw.buffer` | Encodes a buffer without a length prefixed. |
| `cenc.uint8array` | Encodes a uint8array with its element length uint prefixed. |
| `cenc.raw.uint8array` | Encodes a uint8array without a length prefixed. |
| `cenc.uint16array` | Encodes a uint16array with its element length uint prefixed. |
| `cenc.raw.uint16array` | Encodes a uint16array without a length prefixed. |
| `cenc.uint32array` | Encodes a uint32array with its element length uint prefixed. |
| `cenc.raw.uint32array` | Encodes a uint32array without a length prefixed. |
| `cenc.int8array` | Encodes an int8array with its element length uint prefixed. |
| `cenc.raw.int8array` | Encodes an int8array without a length prefixed. |
| `cenc.int16array` | Encodes an int16array with its element length uint prefixed. |
| `cenc.raw.int16array` | Encodes an int16array without a length prefixed. |
| `cenc.int32array` | Encodes an int32array with its element length uint prefixed. |
| `cenc.raw.int32array` | Encodes an int32array without a length prefixed. |
| `cenc.float32array` | Encodes a float32array with its element length uint prefixed. |
| `cenc.raw.float32array` | Encodes a float32array without a length prefixed. |
| `cenc.float64array` | Encodes a float64array with its element length uint prefixed. |
| `cenc.raw.float64array` | Encodes a float64array without a length prefixed. |
| `cenc.bool` | Encodes a boolean as 1 or 0. |
| `cenc.string`, `cenc.utf8` | Encodes a utf-8 string, similar to buffer. |
| `cenc.raw.string`, `cenc.raw.utf8` | Encodes a utf-8 string without a length prefixed. |
| `cenc.string.fixed(n)`, `cenc.utf8.fixed(n)` | Encodes a fixed size utf-8 string. |
| `cenc.ascii` | Encodes an ascii string. |
| `cenc.raw.ascii` | Encodes an ascii string without a length prefixed. |
| `cenc.ascii.fixed(n)` | Encodes a fixed size ascii string. |
| `cenc.hex` | Encodes a hex string. |
| `cenc.raw.hex` | Encodes a hex string without a length prefixed. |
| `cenc.hex.fixed(n)` | Encodes a fixed size hex string. |
| `cenc.base64` | Encodes a base64 string. |
| `cenc.raw.base64` | Encodes a base64 string without a length prefixed. |
| `cenc.base64.fixed(n)` | Encodes a fixed size base64 string. |
| `cenc.utf16le`, `cenc.ucs2` | Encodes a utf16le string. |
| `cenc.raw.utf16le`, `cenc.raw.ucs2` | Encodes a utf16le string without a length prefixed. |
| `cenc.utf16le.fixed(n)`, `cenc.ucs2.fixed(n)` | Encodes a fixed size utf16le string. |
| `cenc.fixed32` | Encodes a fixed 32 byte buffer. |
| `cenc.fixed64` | Encodes a fixed 64 byte buffer. |
| `cenc.fixed(n)` | Makes a fixed size encoder. |
| `cenc.array(enc)` | Makes an array encoder from another encoder. Arrays are uint prefixed with their length. |
| `cenc.raw.array(enc)` | Makes an array encoder from another encoder, without a length prefixed. |
| `cenc.json` | Encodes a JSON value as utf-8. |
| `cenc.raw.json` | Encodes a JSON value as utf-8 without a length prefixed. |
| `cenc.ndjson` | Encodes a JSON value as newline delimited utf-8. |
| `cenc.raw.ndjson` | Encodes a JSON value as newline delimited utf-8 without a length prefixed. |
| `cenc.from(enc)` | Makes a compact encoder from a [codec](https://github.com/mafintosh/codecs) or [abstract-encoding](https://github.com/mafintosh/abstract-encoding). |
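As a quick round-trip illustration, the bundled encodings compose with the `encode`/`decode` helpers described above. A minimal sketch (output values shown in comments):
```javascript
const cenc = require('compact-encoding')

// Compound encoder: a uint-prefixed array of uints
const uintArray = cenc.array(cenc.uint)

// The encode/decode helpers manage the state object internally
const buf = cenc.encode(uintArray, [1, 2, 300])
console.log(cenc.decode(uintArray, buf)) // => [ 1, 2, 300 ]

// JSON values are encoded as utf-8
const jsonBuf = cenc.encode(cenc.json, { hello: 'world' })
console.log(cenc.decode(cenc.json, jsonBuf)) // => { hello: 'world' }
```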

helpers/corestore.md Normal file

@@ -0,0 +1,136 @@
# Corestore
<mark style="background-color:green;">**stable**</mark>
Corestore is a Hypercore factory that makes it easier to manage large collections of named Hypercores. It is designed to efficiently store and replicate multiple sets of interlinked [hypercore.md](../building-blocks/hypercore.md "mention")(s), such as those used by [hyperdrive.md](../building-blocks/hyperdrive.md "mention"), removing the responsibility of managing custom storage/replication code from these higher-level modules.
> [Github (Corestore)](https://github.com/holepunchto/corestore)
* [Corestore](corestore.md#installation)
* [Create a new instance](corestore.md#const-store--new-corestorestorage-options)
* Basic:
* Methods:
* [store.get(key | { key, name, exclusive, \[options\] })](corestore.md#const-core--storegetkey---key-name-exclusive-options)
* [store.replicate(options|stream)](corestore.md#const-stream--storereplicateoptionsstream)
* [store.namespace(name)](corestore.md#const-store--storenamespacename)
* [store.session(\[options\])](corestore.md#const-session--storesessionoptions)
### Installation
Install with [npm](https://www.npmjs.com/):
```bash
npm install corestore
```
### API
#### **`const store = new Corestore(storage, [options])`**
Creates a new Corestore instance.
`storage` can be either a random-access-storage module, a string, or a function that takes a path and returns a random-access-storage instance.
```javascript
const Corestore = require('corestore')
const store = new Corestore('./my-storage')
```
`options` can include:
| Property | Description | Type | Default |
| ---------------- | -------------------------------------------------------- | ------ | --------------------------------------------------------- |
| **`primaryKey`** | The primary key used to generate new Hypercore key pairs | Buffer | Randomly generated and persisted in the storage directory |
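For example, a minimal sketch that passes an explicit `primaryKey` (the 32-byte size and the use of Node's `crypto` module are assumptions for illustration):
```javascript
const crypto = require('crypto')
const Corestore = require('corestore')

// Assumption: primaryKey is a 32-byte buffer. Supplying it explicitly keeps
// name -> key-pair derivation stable instead of persisting a random key.
const primaryKey = crypto.randomBytes(32)
const store = new Corestore('./my-storage', { primaryKey })
```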
#### **`const core = store.get(key | { key, name, exclusive, [options] })`**
Loads a Hypercore, either by name (if the `name` option is provided), or from the provided key (if the first argument is a Buffer or String with hex/z32 key, or if the `key` option is set).
If that Hypercore has previously been loaded, subsequent calls to `get` will return a new Hypercore session on the existing core.
If the `exclusive` option is set and a writable session is opened, it will wait for all other exclusive writable sessions to close before opening the Hypercore. In other words, any operation on the core will wait until the session is exclusive.
All other options besides `name`, `key`, and `exclusive` are forwarded to the Hypercore constructor.
```javascript
// assuming store is a Corestore instance
const core1 = store.get({ name: 'my-core-1' })
const core2 = store.get({ name: 'my-core-2' })
// awaiting ready so that we can access core1.key
await core1.ready()
const core3 = store.get({ key: core1.key }) // will open another session on core1
// assuming otherKey is the key to a non-writable core
// these are equivalent and will both return sessions on that same non-writable core
const core4 = store.get({ key: otherKey })
const core5 = store.get(otherKey)
```
> The names you provide are only relevant **locally**, in that they are used to deterministically generate key pairs. Whenever you load a core by name, that core will be writable. Names are not shared with remote peers.
#### **`const stream = store.replicate(options|stream)`**
Creates a replication stream that's capable of replicating all Hypercores that are managed by the Corestore, assuming the remote peer has the correct capabilities.
`options` will be forwarded to Hypercore's `replicate` function.
Corestore replicates in an 'all-to-all' fashion, meaning that when replication begins, it will attempt to replicate every Hypercore that's currently loaded and in memory. These attempts will fail if the remote side doesn't have a Hypercore's capability -- Corestore replication does not exchange Hypercore keys.
If the remote side dynamically adds a new Hypercore to the replication stream (by opening that core with a `get` on their Corestore, for example), Corestore will load and replicate that core if possible.
Using [hyperswarm.md](../building-blocks/hyperswarm.md "mention") one can replicate Corestores as follows:
```javascript
const swarm = new Hyperswarm()
// join the relevant topic
swarm.join(...)
// simply pass the connection stream to corestore
swarm.on('connection', conn => store.replicate(conn))
```
As with Hypercore, users can also create new protocol streams by treating `options` as the `isInitiator` boolean and then replicate these streams over a transport layer of their choosing:
```javascript
// assuming store1 and store2 are corestore instances
const s1 = store1.replicate(true)
const s2 = store2.replicate(false)
s1.pipe(s2).pipe(s1)
```
#### **`const store = store.namespace(name)`**
Creates a new namespaced Corestore. Namespacing is useful for sharing a single Corestore instance between many applications or components, as it prevents name collisions.
Namespaces can be chained:
```javascript
const ns1 = store.namespace('a')
const ns2 = ns1.namespace('b')
const core1 = ns1.get({ name: 'main' }) // These will load different Hypercores
const core2 = ns2.get({ name: 'main' })
```
Namespacing is particularly useful if your application needs to create many different data structures, such as [hyperdrive.md](../building-blocks/hyperdrive.md "mention")s, that all share a common storage location:
```javascript
const store = new Corestore('./my-storage-dir')
// Neither drive1 nor drive2 care that they're being passed a namespaced store.
// But the top-level application can safely reuse my-storage-dir between both.
const drive1 = new Hyperdrive(store.namespace('drive-a'))
const drive2 = new Hyperdrive(store.namespace('drive-b'))
```
#### `const session = store.session([options])`
Creates a new Corestore that shares resources with the original, such as the cache, cores, replication streams, and storage, while optionally resetting the namespace or overriding the `primaryKey`. Useful when an application wants to accept an optional Corestore but needs to maintain a predictable key derivation.
`options` are the same as the constructor options:
| Property | Description | Type | Default |
| ---------------- | --------------------------------------------------------------------------------------- | ------ | -------------------------------- |
| **`primaryKey`** | Overrides the default `primaryKey` for this session | Buffer | The store's current `primaryKey` |
| **`namespace`** | Overrides the namespace for this session. If `null`, the default namespace will be used. | Buffer | The store's current namespace. |
| **`detach`**     | If disabled, closing the session will also close the store that created it.              | Boolean | `true`                           |
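A minimal sketch of accepting an externally provided store while keeping key derivation predictable (`externalStore` is a placeholder; option names are from the table above):
```javascript
// Reset the namespace so name -> key derivation stays predictable,
// no matter how the caller namespaced the store they handed us.
const session = externalStore.session({ namespace: null })
const core = session.get({ name: 'my-module-state' })
```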

helpers/localdrive.md Normal file

@@ -0,0 +1,203 @@
# Localdrive
A file system API that is similar to [hyperdrive.md](../building-blocks/hyperdrive.md "mention"). This tool comes in handy when mirroring files from the user's filesystem to a drive, and vice versa.
> [Github (Localdrive)](https://github.com/holepunchto/localdrive)
* [Installation](localdrive.md#installation)
* [Usage](localdrive.md#usage)
* [API](localdrive.md#api)
* [Examples](localdrive.md#examples)
### Installation
Install with [npm](https://www.npmjs.com/):
```bash
npm install localdrive
```
### Usage
```javascript
const Localdrive = require('localdrive')
const drive = new Localdrive('./my-project')
await drive.put('/blob.txt', Buffer.from('example'))
await drive.put('/images/logo.png', Buffer.from('..'))
await drive.put('/images/old-logo.png', Buffer.from('..'))
const buffer = await drive.get('/blob.txt')
console.log(buffer) // => <Buffer ..> 'example'
const entry = await drive.entry('/blob.txt')
console.log(entry) // => { key, value: { executable, linkname, blob, metadata } }
await drive.del('/images/old-logo.png')
await drive.symlink('/images/logo.shortcut', '/images/logo.png')
for await (const file of drive.list('/images')) {
console.log('list', file) // => { key, value }
}
const rs = drive.createReadStream('/blob.txt')
for await (const chunk of rs) {
console.log('rs', chunk) // => <Buffer ..>
}
const ws = drive.createWriteStream('/blob.txt')
ws.write('new example')
ws.end()
ws.once('close', () => console.log('file saved'))
```
### API
**`const drive = new Localdrive(root, [options])`**
Creates a drive based on a `root` directory. `root` can be relative or absolute.
`options` include:
| Property | Description | Type | Default |
|-------------------|----------------------------------------------------------|---------|----------|
| **`followLinks`** | If enabled, `entry(key)` will follow the `linkname`.      | Boolean | `false`  |
| **`metadata`**    | Hook functions (`get`, `put`, `del`) for storing metadata. | Object  | `null`   |
| **`atomic`** | Enables atomicity for file writing (tmp file and rename). | Boolean | `false` |
| **`roots`** | For mapping key prefixes to different roots. | Object | `{}` |
> The metadata hook `del()` could be called with non-existing metadata keys.
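For instance, a small sketch passing a couple of these options (values are illustrative):
```javascript
const Localdrive = require('localdrive')

// atomic: writes go to a tmp file first and are then renamed into place
// followLinks: entry(key) will follow the linkname of symlinks
const drive = new Localdrive('./my-project', { atomic: true, followLinks: true })
```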
**`drive.root`**
String with the resolved (absolute) drive path.
**`drive.supportsMetadata`**
Boolean that indicates whether the drive handles metadata. Defaults to `false`.
If you pass `options.metadata` hooks, then `supportsMetadata` becomes `true`.
**`await drive.put(key, buffer, [options])`**
Creates a file at `key` path in the drive. `options` are the same as in `createWriteStream`.
**`const buffer = await drive.get(key)`**
Returns the blob at `key` path in the drive. If no blob exists, returns null.
> It also returns null for symbolic links.
**`const entry = await drive.entry(key, [options])`**
Returns the entry at `key` path in the drive. It looks like this:
```javascript
{
key: String,
value: {
executable: Boolean,
linkname: null,
blob: {
byteOffset: Number,
blockOffset: Number,
blockLength: Number,
byteLength: Number
},
metadata: null
},
mtime: Number
}
```
Available `options`:
```js
{
follow: false // Follow symlinks, 16 max or throws an error
}
```
**`await drive.del(key)`**
Deletes the file at `key` path from the drive.
**`await drive.symlink(key, linkname)`**
Creates an entry in drive at `key` path that points to the entry at `linkname`.
> If a blob entry currently exists at `key` path then it will be overwritten and `drive.get(key)` will return null, while `drive.entry(key)` will return the entry with symlink information.
#### **`const comparison = drive.compare(entryA, entryB)`**
Returns `0` if entries are the same, `1` if `entryA` is older, and `-1` if `entryB` is older.
**`const iterator = drive.list([folder])`**
Returns a stream of all entries in the drive inside the specified `folder`.
**`const iterator = drive.readdir([folder])`**
Returns a stream of all subpaths of the entries in the drive stored at paths prefixed by `folder`.
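For example, a sketch of iterating both (assuming the `drive` from the usage section; the values shown in comments are illustrative):
```javascript
// list() yields full entries, readdir() yields the subpaths under the folder
for await (const entry of drive.list('/images')) {
  console.log(entry.key) // e.g. '/images/logo.png'
}
for await (const name of drive.readdir('/images')) {
  console.log(name) // e.g. 'logo.png'
}
```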
**`const mirror = drive.mirror(out, [options])`**
Mirrors this drive into another. Returns a [mirrordrive.md](../helpers/mirrordrive.md "mention") instance constructed with `options`.
Call [`await mirror.done()`](../helpers/mirrordrive.md#await-mirrordone) to wait for the mirroring to finish.
**`const rs = drive.createReadStream(key, [options])`**
Returns a stream to read out the blob stored in the drive at `key` path.
`options` include:
| Property | Description | Type | Default |
| ------------ | -------------------------------------------------- | ------- | ---------- |
| **`start`** | Starting offset of the desired readstream interval | Integer | **`null`** |
| **`end`** | Ending offset of the desired readstream interval | Integer | **`null`** |
| **`length`** | Length of the desired readstream interval | Integer | **`null`** |
> `start` and `end` are inclusive.
>
> `length` overrides `end`, they're not meant to be used together.
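A short sketch reading a slice of a blob (offsets are illustrative):
```javascript
// Read 4 bytes of /blob.txt starting at byte 0
const rs = drive.createReadStream('/blob.txt', { start: 0, length: 4 })
for await (const chunk of rs) {
  console.log(chunk) // => <Buffer ..>
}
```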
**`const ws = drive.createWriteStream(key, [options])`**
Streams a blob into the drive at `key` path.
`options` include:
| Property | Description | Type | Default |
| ---------------- | ------------------------------------- | ------- | ------- |
| **`executable`** | whether the blob is executable or not | Boolean | `true` |
### Examples
#### Metadata hooks
Metadata backed by `Map`:
```javascript
const meta = new Map()
const metadata = {
get: (key) => meta.has(key) ? meta.get(key) : null,
put: (key, value) => meta.set(key, value),
del: (key) => meta.delete(key)
}
const drive = new Localdrive('./my-app', { metadata })
// ...
```
> `metadata.del()` will also be called when metadata is `null`.
```javascript
await drive.put('/file.txt', Buffer.from('a')) // Default metadata is null
```

helpers/mirrordrive.md Normal file

@@ -0,0 +1,80 @@
# MirrorDrive
Mirrors a [hyperdrive.md](../building-blocks/hyperdrive.md "mention") or a [localdrive.md](../helpers/localdrive.md "mention") into another one.
> [Github (Mirrordrive)](https://github.com/holepunchto/mirror-drive)
* [Installation](./mirrordrive.md#installation)
* [Basic usage](mirrordrive.md#basic-usage)
* [API](mirrordrive.md#api)
### Installation
Install with [npm](https://www.npmjs.com/):
```bash
npm install mirror-drive
```
### Basic usage
```javascript
import MirrorDrive from 'mirror-drive'
const src = new Localdrive('./src')
const dst = new Hyperdrive(store)
const mirror = new MirrorDrive(src, dst)
console.log(mirror.count) // => { files: 0, add: 0, remove: 0, change: 0 }
for await (const diff of mirror) {
console.log(diff) /* {
op: 'add',
key: '/new-file.txt',
bytesRemoved: 0,
bytesAdded: 4
}*/
}
console.log(mirror.count) // => { files: 1, add: 1, remove: 0, change: 0 }
```
### API
#### **`const mirror = new MirrorDrive(src, dst, [options])`**
Creates a mirror instance that mirrors the `src` drive into the `dst` drive.
`options` include:
| Property | Type | Default |
| -------------------- | -------- | --------------------------------------- |
| **`prefix`** | String | `'/'` |
| **`dryRun`** | Boolean | `false` |
| **`prune`** | Boolean | `true` |
| **`includeEquals`** | Boolean | `false` |
| **`filter`** | Function | `(key) => true` |
| **`metadataEquals`** | Function | `(srcMetadata, dstMetadata) => { ... }` |
| **`batch`** | Boolean | `false` |
| **`entries`** | Array | `null` |
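As an example, a sketch of a dry run restricted to a prefix (assuming the `src` and `dst` drives from the basic usage; the inline comments describe commonly assumed meanings of these options, not documented behavior):
```javascript
const mirror = new MirrorDrive(src, dst, {
  prefix: '/images',                      // only consider keys under this prefix
  dryRun: true,                           // report diffs without writing to dst
  filter: (key) => !key.endsWith('.tmp')  // skip keys for which this returns false
})
await mirror.done()
console.log(mirror.count)
```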
#### **`mirror.count`**
Tracks the number of files processed, added, removed, and changed.
Default: `{ files: 0, add: 0, remove: 0, change: 0 }`
```javascript
const mirror = new MirrorDrive(src, dst)
console.log(mirror.count) // => { files: 0, add: 0, remove: 0, change: 0 }
```
#### **`await mirror.done()`**
Starts processing all the diffing until it is done.
```javascript
const mirror = new MirrorDrive(src, dst)
await mirror.done()
console.log(mirror.count) // => { files: 1, add: 1, remove: 0, change: 0 }
```

helpers/protomux.md Normal file

@@ -0,0 +1,183 @@
# Protomux
Multiplex multiple message-oriented protocols over a stream.
> [Github (Protomux)](https://github.com/mafintosh/protomux)
* [Installation](protomux.md#installation)
* [Basic usage](protomux.md#basic-usage)
* [API](protomux.md#api)
### Installation
Install with [npm](https://www.npmjs.com/):
```bash
npm install protomux
```
### Basic usage
```javascript
const Protomux = require('protomux')
const c = require('compact-encoding')
// By framed stream, we mean a stream that preserves message boundaries, i.e. something that length-prefixes messages,
// like @hyperswarm/secret-stream
const mux = new Protomux(aStreamThatFrames)
// Now add some protocol channels
const cool = mux.createChannel({
protocol: 'cool-protocol',
id: Buffer.from('optional binary id'),
onopen () {
console.log('the other side opened this protocol!')
},
onclose () {
console.log('either side closed the protocol')
}
})
// And add some messages
const one = cool.addMessage({
encoding: c.string,
onmessage (m) {
console.log('recv message (1)', m)
}
})
const two = cool.addMessage({
encoding: c.bool,
onmessage (m) {
console.log('recv message (2)', m)
}
})
// open the channel
cool.open()
// And send some data
one.send('a string')
two.send(true)
```
### API
#### **`mux = new Protomux(stream, [options])`**
Makes a new instance. `stream` should be a framed stream, preserving the messages written.
`options` include:
```javascript
{
// Called when the muxer wants to allocate a message that is written, defaults to Buffer.allocUnsafe.
alloc (size) {}
}
```
#### **`mux = Protomux.from(stream | muxer, [options])`**
Helper to accept either an existing muxer instance or a stream (which creates a new one).
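For instance, a minimal sketch (assuming `stream` is a framed stream such as a @hyperswarm/secret-stream connection):
```javascript
// Accepts an existing muxer as-is, or wraps the framed stream in a new one
const mux = Protomux.from(stream)
const channel = mux.createChannel({ protocol: 'my-protocol' })
```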
**`const channel = mux.createChannel([options])`**
Adds a new protocol channel.
`options` include:
```javascript
{
// Used to match the protocol
protocol: 'name of the protocol',
// Optional additional binary id to identify this channel
id: buffer,
// Optional encoding for a handshake
handshake: encoding,
// Optional array of message types you want to send/receive.
messages: [],
// Called when the remote side adds this protocol.
// Errors here are caught and forwarded to stream.destroy
async onopen (handshake) {},
// Called when the channel closes - ie the remote side closes or rejects this protocol or we closed it.
// Errors here are caught and forwarded to stream.destroy
async onclose () {},
// Called after onclose when all pending promises have been resolved.
async ondestroy () {}
}
```
Sessions are paired based on a queue: a local channel is paired with the first remote channel that has the same `protocol` and `id`.
> `mux.createChannel` returns `null` if the channel should not be opened, i.e. it is a duplicate channel or the remote has already closed this one. To have multiple sessions with the same `protocol` and `id`, set `unique: false` as an option.
#### **`const opened = mux.opened({ protocol, id })`**
Boolean that indicates whether the channel is open.
#### **`mux.pair({ protocol, id }, callback)`**
Registers a callback to be called every time a new channel with the given `protocol` and `id` is requested.
#### **`mux.unpair({ protocol, id })`**
Unregisters the pair callback.
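A sketch of the common pattern of opening a channel lazily, only when the remote side requests it (the callback arguments, if any, are not relied on here):
```javascript
mux.pair({ protocol: 'cool-protocol' }, () => {
  // createChannel returns null if this would be a duplicate channel
  const channel = mux.createChannel({ protocol: 'cool-protocol' })
  if (channel) channel.open()
})
```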
#### **`channel.open([handshake])`**
Opens the channel.
#### **`const m = channel.addMessage([options])`**
Adds/registers a message type for a specific encoding. Options include:
```javascript
{
// compact-encoding specifying how to encode/decode this message
encoding: c.binary,
// Called when the remote side sends a message.
// Errors here are caught and forwarded to stream.destroy
async onmessage (message) { }
}
```
#### **`m.send(data)`**
Sends a message.
#### **`m.onmessage`**
The function that is called when a message arrives.
#### **`m.encoding`**
The encoding for this message.
#### **`channel.close()`**
Closes the protocol channel.
#### **`channel.cork()`**
Corks the protocol channel, making it buffer messages and send them all in one batch when it uncorks.
#### **`channel.uncork()`**
Uncorks the channel and sends the batch.
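For example, a small sketch batching several messages (using the `cool` channel and `one`/`two` messages from the basic usage above):
```javascript
// Buffer the sends and flush them as a single batch on the wire
cool.cork()
one.send('first')
one.send('second')
two.send(true)
cool.uncork()
```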
#### **`mux.cork()`**
Same as `channel.cork` but on the muxer instance.
#### **`mux.uncork()`**
Same as `channel.uncork` but on the muxer instance.
#### **`for (const channel of muxer) { ... }`**
The muxer instance is iterable, so you can iterate over all the channels.

helpers/secretstream.md Normal file

@@ -0,0 +1,105 @@
# SecretStream
SecretStream is used to securely create connections between two peers in Hyperswarm. It is powered by Noise and libsodium's SecretStream. SecretStream can be used as a standalone module to provide encrypted communication between two parties.
A SecretStream instance is a Duplex stream that can be used like a normal stream for standard read/write operations, except its payloads are encrypted with libsodium's SecretStream for secure transmission.
> [Github (SecretStream)](https://github.com/holepunchto/hyperswarm-secret-stream)
* [SecretStream](secretstream.md#installation)
* [Create a new instance](secretstream.md#const-s--new-secretstreamisinitiator-rawstream-options)
* Basic:
* Properties:
* [s.publicKey](secretstream.md#spublickey)
* [s.remotePublicKey](secretstream.md#sremotepublickey)
* [s.handshakeHash](secretstream.md#shandshakehash)
* Methods:
* [s.start(rawStream, \[options\])](secretstream.md#sstartrawstream-options)
* [s.setTimeout(ms)](secretstream.md#ssettimeoutms)
* [s.setKeepAlive(ms)](secretstream.md#ssetkeepalivems)
* [SecretStream.keyPair(\[seed\])](secretstream.md#const-keypair--secretstreamkeypairseed)
* Events:
* [connect](secretstream.md#sonconnect-onconnecthandler)
### Installation
Install with [npm](https://www.npmjs.com/):
```bash
npm install @hyperswarm/secret-stream
```
### API
#### **`const s = new SecretStream(isInitiator, [rawStream], [options])`**
Makes a new stream.
`isInitiator` is a boolean indicating whether you are the client or the server.
`rawStream` can be set to an underlying transport stream to run the noise stream over.
`options` include:
| Property | Description | Type |
| :-------------------: | -------------------------------------------------------------------------- | ----------------------------------------------------- |
| **`pattern`**         | The Noise handshake pattern to use                                          | String                                                 |
| **`remotePublicKey`** | PublicKey of the other party | String |
| **`keyPair`** | Combination of PublicKey and SecretKey | { publicKey, secretKey } |
| **`handshake`** | To use a handshake performed elsewhere, pass it here | { tx, rx, handshakeHash, publicKey, remotePublicKey } |
The returned SecretStream is a Duplex stream that can be read from and written to like a normal stream, except its payloads are encrypted using the libsodium secretstream.
> By default, the above process uses ed25519 for the handshakes.
To load the key pair asynchronously, SecretStream also supports passing a promise that resolves to `{ publicKey, secretKey }` instead of the key pair itself. The stream lifecycle will wait for the promise to resolve and auto-destroy the stream if the promise rejects.
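As a rough illustration, a sketch pairing two ends locally. It assumes the instance exposes its underlying transport as `rawStream` when none is passed in, which is not documented here:
```javascript
const SecretStream = require('@hyperswarm/secret-stream')

const a = new SecretStream(true)  // initiator
const b = new SecretStream(false) // responder

// Assumption: pipe the underlying raw transports together via .rawStream
a.rawStream.pipe(b.rawStream).pipe(a.rawStream)

a.write(Buffer.from('hello encrypted!'))
b.on('data', (data) => console.log(data.toString())) // 'hello encrypted!'
```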
#### Properties
#### **`s.publicKey`**
Gets the local public key.
#### **`s.remotePublicKey`**
Gets the remote's public key. Populated after `open` is emitted.
#### **`s.handshakeHash`**
Gets the unique hash of this handshake. Populated after `open` is emitted.
#### Methods
#### **`s.start(rawStream, [options])`**
Starts a SecretStream from a rawStream asynchronously.
```javascript
const s = new SecretStream({
autoStart: false // call start manually
})
// ... do async stuff or destroy the stream
s.start(rawStream, {
... options from above
})
```
#### **`s.setTimeout(ms)`**
Sets the stream timeout. If no data is received within a `ms` window, the stream is auto-destroyed.
#### **`s.setKeepAlive(ms)`**
Sends a heartbeat (empty message) every time the socket is idle for `ms` milliseconds.
#### **`const keyPair = SecretStream.keyPair([seed])`**
Generates an ed25519 key pair.
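For example, a sketch deriving a deterministic key pair from a seed and passing it to the constructor (`rawStream` is a placeholder transport; the 32-byte seed derivation is illustrative):
```javascript
const crypto = require('crypto')

// Assumption: the seed is a 32-byte buffer; the same seed yields the same key pair
const seed = crypto.createHash('sha256').update('my-stable-seed').digest()
const keyPair = SecretStream.keyPair(seed)

const s = new SecretStream(true, rawStream, { keyPair })
```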
#### Events
#### **`s.on('connect', onConnectHandler)`**
Emitted when the handshake is fully done. It is safe to write to the stream immediately though, as data is buffered internally before the handshake has been completed.
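A minimal sketch of using the event:
```javascript
s.on('connect', () => {
  console.log('handshake complete')
})

// Safe to write immediately; data is buffered until the handshake completes
s.write(Buffer.from('hello'))
```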