diff --git a/.gitignore b/.gitignore
index 42a2401..11ad6e9 100644
--- a/.gitignore
+++ b/.gitignore
@@ -91,12 +91,6 @@ out
.nuxt
dist
-# Gatsby files
-.cache/
-# Comment in the public line in if your project uses Gatsby and not Next.js
-# https://nextjs.org/blog/next-9-1#public-directory-support
-# public
-
# vuepress build output
.vuepress/dist
diff --git a/building-blocks/autobase.md b/building-blocks/autobase.md
index 625f732..61a0392 100644
--- a/building-blocks/autobase.md
+++ b/building-blocks/autobase.md
@@ -7,7 +7,7 @@ Autobase is used to automatically rebase multiple causally-linked Hypercores int
> Although Autobase is still under development, it finds application in many active projects. Keet rooms, for example, are powered by Autobase! This is a testament to the potential of Autobase, and we are excited to see what else it can achieve.
-> [Github (Autobase)](https://github.com/holepunchto/autobase)
+> [GitHub (Autobase)](https://github.com/holepunchto/autobase)
- [Autobase](../building-blocks/autobase.md)
- [Create a new instance](autobase.md#installation)
@@ -121,7 +121,7 @@ Generate a causal clock linking the latest entries of each input.
`latest` will update the input Hypercores (`input.update()`) prior to returning the clock.
-You generally will not need to use this, and can instead just use [`append`](autobase.md#await-baseappendvalue-clock-input) with the default clock:
+This is rarely needed; prefer [`append`](autobase.md#await-baseappendvalue-clock-input) with the default clock:
```javascript
await base.append('hello world')
@@ -191,7 +191,7 @@ Similar to `Hypercore.createReadStream()`, this stream starts at the beginning o
Generate a Readable stream of input blocks, from earliest to latest.
-Unlike `createCausalStream`, the ordering of `createReadStream` is not deterministic. The read stream only gives you the guarantee that every node it yields will **not** be causally-dependent on any node yielded later.
+Unlike `createCausalStream`, the ordering of `createReadStream` is not deterministic. The read stream only gives the guarantee that every node it yields will **not** be causally-dependent on any node yielded later.
Read streams have a public property `checkpoint`, which can be used to create new read streams that resume from the checkpoint's position:
@@ -204,8 +204,8 @@ const stream2 = base.createReadStream({ checkpoint: stream1.checkpoint }) // Res
`createReadStream` can be passed two custom async hooks:
* `onresolve`: Called when an unsatisfied node (a node that links to an unknown input) is encountered. Can be used to add inputs to the Autobase dynamically.
- * Returning `true` indicates that you added new inputs to the Autobase, and so the read stream should begin processing those inputs.
- * Returning `false` indicates that you did not resolve the missing links, and so the node should be yielded immediately as is.
+ * Returning `true` indicates that new inputs were added to the Autobase, and so the read stream should begin processing those inputs.
+ * Returning `false` indicates that the missing links were not resolved, and so the node should be yielded immediately as is.
* `onwait`: Called after each node is yielded. Can be used to add inputs to the Autobase dynamically.
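
These hooks can be wired up like this (a sketch; `lookupInput` is a hypothetical helper for locating a missing input core):

```javascript
const stream = base.createReadStream({
  async onresolve (node) {
    // try to locate the input core this node links to (hypothetical helper)
    const input = await lookupInput(node)
    if (!input) return false // links stay unresolved, node is yielded as is
    await base.addInput(input)
    return true // new input added, the stream processes it
  },
  async onwait (node) {
    // called after each node is yielded; inputs can be added here too
  }
})
```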
`options` include:
@@ -226,9 +226,9 @@ Autobase is designed for computing and sharing linearized views over many input
These views, instances of the `LinearizedView` class, in many ways look and feel like normal Hypercores. They support `get`, `update`, and `length` operations.
-By default, a view is a persisted version of an Autobase's causal stream, saved into a Hypercore. But a lot more can be done with them: by passing a function into `linearize`'s `apply` option, you can define your own indexing strategies.
+By default, a view is a persisted version of an Autobase's causal stream, saved into a Hypercore. But a lot more can be done with them: by passing a function into `linearize`'s `apply` option, we can define our own indexing strategies.
-Linearized views are incredibly powerful as they can be persisted to a Hypercore using the new `truncate` API added in Hypercore 10. This means that peers querying a multiwriter data structure don't need to read in all changes and apply them themself. Instead, they can start from an existing view that's shared by another peer. If that view is missing indexing any data from inputs, Autobase will create a 'view over the remote view', applying only the changes necessary to bring the remote view up-to-date. The best thing is that this all happens automatically for you!
+Linearized views are incredibly powerful as they can be persisted to a Hypercore using the new `truncate` API added in Hypercore 10. This means that peers querying a multiwriter data structure don't need to read in all changes and apply them themselves. Instead, they can start from an existing view that's shared by another peer. If that view has not yet indexed some data from the inputs, Autobase will create a 'view over the remote view', applying only the changes necessary to bring the remote view up-to-date. Best of all, this happens automatically.
#### Customizing Views with `apply`
@@ -256,11 +256,11 @@ More sophisticated indexing might require multiple appends per input node, or re
#### **`base.start({ apply, unwrap } = {})`**
-Creates a new linearized view, and sets it on `base.view`. The view mirrors the Hypercore API wherever possible, meaning it can be used where ever you would normally use a Hypercore.
+Creates a new linearized view, and sets it on `base.view`. The view mirrors the Hypercore API wherever possible, meaning it can be used as a drop-in replacement for a Hypercore instance.
-You can either call `base.start` manually when you want to start using `base.view`, or pass either `apply` or `autostart` options to the Autobase constructor. If these constructor options are present, Autobase will start immediately.
+Either call `base.start` manually when ready to start using `base.view`, or pass either `apply` or `autostart` options to the Autobase constructor. If these constructor options are present, Autobase will start immediately.
-If you choose to call `base.start` manually, it must only be called once.
+If calling `base.start` manually, call it only once.
`options` include:
diff --git a/building-blocks/hyperbee.md b/building-blocks/hyperbee.md
index 165b9e8..7d5ce3a 100644
--- a/building-blocks/hyperbee.md
+++ b/building-blocks/hyperbee.md
@@ -2,12 +2,12 @@
**stable**
-Hyperbee is an append only B-tree based on [hypercore.md](hypercore.md "mention"). It provides a key/value-store API, with methods for inserting and getting key-value pairs, atomic batch insertions, and creating sorted iterators. It uses a single Hypercore for storage, using a technique called embedded indexing. It provides features like cache warmup extension, efficient diffing, version control, sorted iteration, and sparse downloading.
+Hyperbee is an append-only B-tree based on [hypercore.md](hypercore.md). It provides a key/value-store API, with methods for inserting and getting key-value pairs, atomic batch insertions, and creating sorted iterators. It uses a single Hypercore for storage, using a technique called embedded indexing. It provides features like cache warmup extension, efficient diffing, version control, sorted iteration, and sparse downloading.
> As with Hypercore, a Hyperbee can only have a **single writer on a single machine**; the creator of the Hyperbee is the only one who can modify it, as they're the only one with the private key. That said, the writer can replicate to **many readers**, in a manner similar to BitTorrent.
-> [Github (Hyperbee)](https://github.com/holepunchto/hyperbee)
+> [GitHub (Hyperbee)](https://github.com/holepunchto/hyperbee)
* [Hyperbee](../building-blocks/hyperbee.md):
* [Create a new instance](hyperbee.md#installation):
@@ -58,7 +58,7 @@ npm install hyperbee
#### **`const db = new Hyperbee(core, [options])`**
-Make a new Hyperbee instance. `core` should be a [hypercore.md](hypercore.md "mention").
+Make a new Hyperbee instance. `core` should be a [hypercore.md](hypercore.md).
`options` include:
@@ -93,7 +93,7 @@ Buffer containing the public key identifying this bee.
Buffer containing a key derived from `db.key`.
-> This discovery key does not allow you to verify the data, it's only to announce or look for peers that are sharing the same bee, without leaking the bee key.
+> This discovery key does not allow verifying the data; it is only used to announce or look for peers that are sharing the same bee, without leaking the bee key.
#### **`db.writable`**
@@ -110,7 +110,7 @@ Boolean indicating if we can read from this bee. After closing the bee this will
Waits until the internal state is loaded.
-Use it once before reading synchronous properties like `db.version`, unless you called any of the other APIs.
+Use it once before reading synchronous properties like `db.version`, unless any of the other APIs have been called first.
#### **`await db.close()`**
@@ -120,7 +120,7 @@ Fully close this bee, including its core.
Inserts a new key. Value can be optional.
-> If inserting a series of data atomically or want more performance then check the `db.batch` API.
+> To insert a series of data atomically, or for higher performance, check the `db.batch` API.
**`options`** includes:
@@ -151,7 +151,7 @@ await db.put('number', '456', { cas })
console.log(await db.get('number')) // => { seq: 2, key: 'number', value: '456' }
function cas (prev, next) {
- // You can use same-data or same-object lib, depending on the value complexity
+  // A deep-equality lib like same-data or same-object can be used, depending on the value complexity
return prev.value !== next.value
}
```
@@ -264,7 +264,7 @@ A batch is atomic: it is either processed fully or not at all.
A Hyperbee has a single write lock. A batch acquires this write lock with its first modifying operation (**`put`**, **`del`**), and releases it when it flushes. We can also explicitly acquire the lock with **`await batch.lock()`**. If using the batch only for read operations, the write lock is never acquired. Once the write lock is acquired, the batch must flush before any other writes to the Hyperbee can be processed.
-A batch's state snaps at creation time, so write operations applied outside of the batch are not taken into account when reading. Write operations within the batch do get taken into account, as is to be expected — if you first run **`await batch.put('myKey', 'newValue')`** and later run **`await batch.get('myKey')`**, you will observe **`'newValue'`**.
+A batch's state snaps at creation time, so write operations applied outside of the batch are not taken into account when reading. Write operations within the batch do get taken into account, as is to be expected: if we first run **`await batch.put('myKey', 'newValue')`** and later run **`await batch.get('myKey')`**, then **`'newValue'`** will be observed.
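
A minimal sketch of these batch semantics (assuming `db` is an open Hyperbee):

```javascript
const batch = db.batch()
await batch.put('myKey', 'newValue') // first write acquires the write lock

const node = await batch.get('myKey')
console.log(node.value) // 'newValue': writes inside the batch are visible to it

await batch.flush() // commits atomically and releases the lock
```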
@@ -272,7 +272,7 @@ A batch's state snaps at creation time, so write operations applied outside of t
Make a read stream. Sort order is based on the binary value of the keys. All entries in the stream are similar to the ones returned from **`db.get`**.
-`range` should specify the range you want to read and looks like this:
+`range` should specify the range to read and looks like this:
```javascript
{
@@ -288,7 +288,7 @@ Make a read stream. Sort order is based on the binary value of the keys. All ent
| Property | Description | Type | Default |
| ------------- | ---------------------------------- | ------- | ------- |
| **`reverse`** | determines the order of the keys   | Boolean | `false` |
-| **`limit`**   | maximum number of entries you want | Integer | `-1`    |
+| **`limit`**   | maximum number of entries to read  | Integer | `-1`    |
#### **`const { seq, key, value } = await db.peek([range], [options])`**
@@ -308,7 +308,7 @@ Create a stream of all entries ever inserted or deleted from the `db`. Each entr
| **`gte`** | start with this seq (inclusive) | Integer | `null` |
| **`lt`** | stop before this index | Integer | `null` |
| **`lte`** | stop after this index | Integer | `null` |
-| **`limit`** | maximum number of entries you want | Integer | `-1` |
+| **`limit`**   | maximum number of entries to read  | Integer | `-1`    |
> If any of the `gte`, `gt`, `lte`, `lt` arguments are `< 0`, they are implicitly added to the version before starting, so `{ gte: -1 }` makes a stream starting at the last index.
@@ -340,7 +340,7 @@ Returns a watcher which listens to changes on the given key.
`entryWatcher.node` contains the current entry in the same format as the result of `bee.get(key)`, and will be updated as it changes.
-> By default, the node will have the bee's key encoding and value encoding, but you can overwrite it by setting the `keyEncoding` and `valueEncoding` options.
+> By default, the node will have the bee's key encoding and value encoding, but it can be overwritten by setting the `keyEncoding` and `valueEncoding` options.
>
>Listen to `entryWatcher.on('update')` to be notified when the value of node has changed.
@@ -374,11 +374,10 @@ Waits until the watcher is loaded and detects changes.
`await watcher.destroy()`
-Stops the watcher. You could also stop it by using `break` inside the loop.
+Stops the watcher. Using `break` inside the `for await` loop will also destroy the watcher.
+> Do not attempt to manually close the snapshots. Since they're used internally, let them be auto-closed.
-> Do not attempt to close the snapshots yourself. Since they're used internally, let them be auto-closed.
->
> Watchers are not supported on subs and checkouts. Instead, use the `range` option to limit the scope.
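
Both ways of stopping a watcher, sketched (assuming `db` is an open Hyperbee):

```javascript
const watcher = db.watch({ gte: 'a', lt: 'b' }) // range-scoped, as subs/checkouts are unsupported
for await (const [current, previous] of watcher) {
  console.log('changed between versions', previous.version, 'and', current.version)
  break // exiting the loop destroys the watcher
}
// alternatively, without breaking out of a loop:
await watcher.destroy()
```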
diff --git a/building-blocks/hypercore.md b/building-blocks/hypercore.md
index e793ebc..5a889fa 100644
--- a/building-blocks/hypercore.md
+++ b/building-blocks/hypercore.md
@@ -4,7 +4,7 @@
Hypercore is a secure, distributed append-only log built for sharing large datasets and streams of real-time data. It comes with a secure transport protocol, making it easy to build fast and scalable peer-to-peer applications.
-> [Github (Hypercore)](https://github.com/holepunchto/hypercore)
+> [GitHub (Hypercore)](https://github.com/holepunchto/hypercore)
* [Hypercore](../building-blocks/hypercore.md)
* [Creating a new instance](hypercore.md#installation)
@@ -69,27 +69,27 @@ A Hypercore can only be modified by its creator; internally it signs updates wit
Creates a new Hypercore instance.
-`storage` should be set to a directory where you want to store the data and core metadata.
+`storage` should be set to a directory where the data and core metadata will be stored.
```javascript
const core = new Hypercore('./directory') // store data in ./directory
```
-> Alternatively, the user can pass a function instead that is called with every filename Hypercore needs to function and return your own [abstract-random-access](https://github.com/random-access-storage/abstract-random-access) instance that is used to store the data.
+> Alternatively, a function can be passed instead; it is called with every filename Hypercore needs and should return an [abstract-random-access](https://github.com/random-access-storage/abstract-random-access) instance used to store the data.
```javascript
const RAM = require('random-access-memory')
const core = new Hypercore((filename) => {
// Filename will be one of: data, bitfield, tree, signatures, key, secret_key
- // The data file will contain all your data concatenated.
+ // The data file will contain all the data concatenated.
// Store all files in ram by returning a random-access-memory instance
return new RAM()
})
```
-By default Hypercore uses [random-access-file](https://github.com/random-access-storage/random-access-file). This is also useful if users want to store specific files in other directories.
+By default Hypercore uses [random-access-file](https://github.com/random-access-storage/random-access-file). This is also useful for storing specific files in other directories.
Hypercore will produce the following files:
@@ -102,7 +102,7 @@ Hypercore will produce the following files:
> `tree`, `data`, and `bitfield` are normally very sparse files.
-`key` can be set to a Hypercore public key. If you do not set this the public key will be loaded from storage. If no key exists a new key pair will be generated.
+`key` can be set to a Hypercore public key. If not set, the public key will be loaded from storage. If no key exists, a new key pair will be generated.
`options` include:
@@ -160,7 +160,7 @@ An object containing buffers of the core's public and secret key
#### **`core.discoveryKey`**
-Buffer containing a key derived from the core's public key. In contrast to `core.key,` this key does not allow you to verify the data. It can be used to announce or look for peers that are sharing the same core, without leaking the core key.
+Buffer containing a key derived from the core's public key. In contrast to `core.key`, this key cannot be used to verify the data. It can be used to announce or look for peers that are sharing the same core, without leaking the core key.
> The above properties are populated after [`ready`](hypercore.md#await-core.ready) has been emitted. Will be `null` before the event.
@@ -338,11 +338,11 @@ Clears stored blocks between `start` and `end`, reclaiming storage when possible
| Property | Description | Type | Default |
| ----------------- | --------------------------------------------------------------------- | ------- | ------- |
-| **`diff`** | Returned `cleared` bytes object is null unless you enable this | Boolean | `false` |
+| **`diff`** | Returned `cleared` bytes object is null unless enabled | Boolean | `false` |
```javascript
-await core.clear(4) // clear block 4 from your local cache
-await core.clear(0, 10) // clear block 0-10 from your local cache
+await core.clear(4) // clear block 4 from local cache
+await core.clear(0, 10) // clear block 0-10 from local cache
```
The core will also 'gossip' with peers it is connected to that it no longer has these blocks.
@@ -355,7 +355,7 @@ Per default, this will update the fork ID of the core to `+ 1`, but we can set t
#### `await core.purge()`
-Purge the Hypercore from your storage, completely removing all data.
+Purge the Hypercore from storage, completely removing all data.
#### **`const hash = await core.treeHash([length])`**
@@ -365,7 +365,7 @@ Get the Merkle Tree hash of the core at a given length, defaulting to the curren
Download a range of data.
-You can await until the range has been fully downloaded by doing:
+We can await until the range has been fully downloaded by doing:
```javascript
await range.done()
@@ -408,7 +408,7 @@ Creates a new Hypercore instance that shares the same underlying core. Options a
`options` are the same as in the constructor.
-> You must close any session you make.
+> Be sure to close any sessions made.
#### **`const info = await core.info([options])`**
@@ -450,22 +450,22 @@ Waits for the core to open.
After this has been called, `core.length` and other properties have been set.
-> ℹ️ In general, you do not need to wait for `ready` unless you're checking a synchronous property (like `key` or `discoverykey`), as all async methods on the public API, will await this internally.
+> ℹ️ In general, waiting for `ready` is unnecessary unless checking a synchronous property (like `key` or `discoveryKey`) before any other async API method has been called, as all async methods on the public API await `ready` internally.
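
For example (a sketch):

```javascript
const core = new Hypercore('./directory')
await core.ready() // only needed before reading synchronous properties
console.log(core.key, core.discoveryKey) // populated after ready

await core.append('hello') // async methods like this await ready internally anyway
```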
#### **`const stream = core.replicate(isInitiator|stream, options)`**
Creates a replication stream. We should pipe this to another Hypercore instance.
-The `isInitiator` argument is a boolean indicating whether you are the initiator of the connection (ie the client) or if you are the passive part (i.e., the server).
+The `isInitiator` argument is a boolean indicating whether a peer is the initiator of the connection (i.e., the client) or the passive peer waiting for connections (i.e., the server).
-> If a P2P swarm like Hyperswarm is being used, you can know this by checking if the swarm connection is a client socket or a server socket. In Hyperswarm, a user can check that using the [client property on the peer details object](https://github.com/hyperswarm/hyperswarm#swarmonconnection-socket-details--).
+> If a P2P swarm like Hyperswarm is being used, whether a peer is an initiator can be determined by checking if the swarm connection is a client socket or a server socket. In Hyperswarm, a user can check that using the [client property on the peer details object](https://github.com/hyperswarm/hyperswarm#swarmonconnection-socket-details--).
To multiplex the replication over an existing Hypercore replication stream, another stream instance can be passed instead of the `isInitiator` Boolean.
-To replicate a Hypercore using [hyperswarm.md](hyperswarm.md "mention"):
+To replicate a Hypercore using [hyperswarm.md](hyperswarm.md):
```javascript
// assuming swarm is a Hyperswarm instance and core is a Hypercore
@@ -474,10 +474,10 @@ swarm.on('connection', conn => {
})
```
-> If you want to replicate many Hypercores over a single Hyperswarm connection, you probably want to be using [corestore.md](../helpers/corestore.md "mention").
+> To replicate many Hypercores over a single Hyperswarm connection, see [corestore.md](../helpers/corestore.md).
-If not using [hyperswarm.md](hyperswarm.md "mention") or [corestore.md](../helpers/corestore.md "mention"), specify the `isInitiator` field, which will create a fresh protocol stream that can be piped over any transport you'd like:
+If not using [hyperswarm.md](hyperswarm.md) or [corestore.md](../helpers/corestore.md), specify the `isInitiator` field, which will create a fresh protocol stream that can be piped over any transport:
```javascript
// assuming we have two cores, localCore + remoteCore, sharing the same key
@@ -492,7 +492,7 @@ const socket = net.connect(...)
socket.pipe(localCore.replicate(true)).pipe(socket)
```
-> In almost all cases, the use of both Hyperswarm and Corestore Replication is advised and will meet all your needs.
+> In almost all cases, the use of both Hyperswarm and Corestore Replication is advised and will meet all needs.
#### **`const done = core.findingPeers()`**
@@ -543,7 +543,7 @@ await session1.close() // will close the Hypercore
#### **`core.snapshot([options])`**
-Returns a snapshot of the core at that particular time. This is useful if you want to ensure that multiple `get` operations are acting on a consistent view of the Hypercore (i.e., if the core forks in between two reads, the second should throw an error).
+Returns a snapshot of the core at that particular time. This is useful for ensuring that multiple `get` operations are acting on a consistent view of the Hypercore (i.e. if the core forks in between two reads, the second should throw an error).
If [`core.update()`](hypercore.md#const-updated--await-coreupdateoptions) is explicitly called on the snapshot instance, it will no longer be locked to the previous data. Rather, it will get updated with the current state of the Hypercore instance.
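
A sketch of using a snapshot for consistent reads (assuming `core` is an open Hypercore with data):

```javascript
const snap = core.snapshot()
try {
  const a = await snap.get(0)
  const b = await snap.get(1) // both reads see the same view of the core
} finally {
  await snap.close() // snapshots are sessions, so close them when done
}
```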
diff --git a/building-blocks/hyperdht.md b/building-blocks/hyperdht.md
index 4569714..ce57a18 100644
--- a/building-blocks/hyperdht.md
+++ b/building-blocks/hyperdht.md
@@ -4,9 +4,9 @@
The DHT powering Hyperswarm and built on top of [dht-rpc](https://github.com/mafintosh/dht-rpc). The HyperDHT uses a series of holepunching techniques to ensure connectivity works on most networks and is mainly used to facilitate finding and connecting to peers using end-to-end encrypted Noise streams.
-In the HyperDHT, peers are identified by a public key, not by an IP address. If you know someone's public key, you can connect to them regardless of where they're located, even if they move between different networks.
+In the HyperDHT, peers are identified by a public key, not by an IP address. A peer can be connected to via its public key regardless of where it is located, even if it moves between different networks.
-> [Github (Hyperdht)](https://github.com/holepunchto/hyperdht)
+> [GitHub (Hyperdht)](https://github.com/holepunchto/hyperdht)
* [HyperDHT](../building-blocks/hyperdht.md)
* [Create a new instance](hyperdht.md#installation)
@@ -68,7 +68,7 @@ Create a new DHT node.
See [dht-rpc](https://github.com/mafintosh/dht-rpc) for more options as HyperDHT inherits from that.
-> ℹ️ The default bootstrap servers are publicly served on behalf of the commons. To run a fully isolated DHT, start one or more DHT nodes with an empty bootstrap array (`new DHT({bootstrap:[]})`) and then use the addresses of those nodes as the `bootstrap` option in all other DHT nodes. You'll need at least one persistent node for the network to be completely operational.
+> ℹ️ The default bootstrap servers are publicly served on behalf of the commons. To run a fully isolated DHT, start one or more DHT nodes with an empty bootstrap array (`new DHT({bootstrap:[]})`) and then use the addresses of those nodes as the `bootstrap` option in all other DHT nodes. At least one persistent node is needed for the network to be completely operational.
#### Methods
@@ -82,13 +82,13 @@ Any options passed are forwarded to dht-rpc.
#### `node = DHT.bootstrapper(port, host, [options])`
-To run your own Hyperswarm network use this method to easily create a bootstrap node.
+Use this method to easily create a bootstrap node in order to run a separate Hyperswarm network.
#### **`await node.destroy([options])`**
Fully destroy this DHT node.
-> This will also unannounce any running servers. If you want to force close the node without waiting for the servers to unannounce pass `{ force: true }`.
+> This will also unannounce any running servers. To force close the node without waiting for the servers to unannounce pass `{ force: true }`.
### Creating P2P Servers
@@ -101,8 +101,8 @@ Creates a new server for accepting incoming encrypted P2P connections.
```javascript
{
firewall (remotePublicKey, remoteHandshakePayload) {
- // validate if you want a connection from remotePublicKey
- // if you do return false, else return true
+      // validate whether a connection from remotePublicKey should be accepted
+      // return false to accept the connection, true to reject it
// remoteHandshakePayload contains their ip and some more info
return true
}
@@ -148,7 +148,7 @@ Emitted when a new encrypted connection has passed the firewall check.
`socket` is a [NoiseSecretStream](https://github.com/holepunchto/hyperswarm-secret-stream) instance.
-To check about user you are connected to using `socket.remotePublicKey` and `socket.handshakeHash` contains a unique hash representing this crypto session (same on both sides).
+The connected peer is identified by `socket.remotePublicKey`, and `socket.handshakeHash` contains a unique hash representing this crypto session (same on both sides).
#### **`server.on('listening')`**
@@ -212,7 +212,7 @@ The returned stream looks like this
{
// Who sent the response?
from: { id, host, port },
- // What address they responded to (i.e., your address)
+ // What address they responded to
to: { host, port },
// List of peers announcing under this topic
peers: [ { publicKey, nodes: [{ host, port }, ...] } ]
@@ -227,13 +227,13 @@ Any passed options are forwarded to dht-rpc.
#### **`const stream = node.announce(topic, keyPair, [relayAddresses], [options])`**
-Announces that users are listening on a key pair to the DHT under a specific topic. An announce does a parallel lookup so the stream returned looks like the lookup stream.
+Announces that a peer is listening on a key pair to the DHT under a specific topic. An announce does a parallel lookup, so the returned stream looks like the lookup stream.
-Any passed options are forwarded to dht-rpc.
+Any passed options are forwarded to `dht-rpc`.
-> When announcing you'll send a signed proof to peers that you own the key pair and wish to announce under the specific topic. Optionally you can provide up to 3 nodes, indicating which DHT nodes can relay messages to you - this speeds up connects later on for other users.
+> When announcing, a signed proof is sent to peers showing that the announcing peer owns the key pair and wishes to announce under the specific topic. Optionally, up to 3 nodes can be provided, indicating which DHT nodes can relay messages to the peer; this speeds up connections later on for other users.
>
-> Creating a server using `dht.createServer` automatically announces itself periodically on the key pair it is listening on. When announcing the server under a specific topic, you can access the nodes it is close to using `server.nodes`.
+> Creating a server using `dht.createServer` automatically announces itself periodically on the key pair it is listening on. When announcing the server under a specific topic, access the nodes it is close to using `server.nodes`.
#### **`await node.unannounce(topic, keyPair, [options])`**
diff --git a/building-blocks/hyperdrive.md b/building-blocks/hyperdrive.md
index 58707db..7fce75a 100644
--- a/building-blocks/hyperdrive.md
+++ b/building-blocks/hyperdrive.md
@@ -4,7 +4,7 @@
Hyperdrive is a secure, real-time distributed file system designed for easy P2P file sharing. We use it extensively inside Holepunch; apps like Keet are distributed to users as Hyperdrives, as is the Holepunch platform itself.
-> [Github (Hyperdrive)](https://github.com/holepunchto/hyperdrive)
+> [GitHub (Hyperdrive)](https://github.com/holepunchto/hyperdrive)
* [Hyperdrive](../building-blocks/hyperdrive.md)
* [Create a new instance](hyperdrive.md#installation)
@@ -65,9 +65,9 @@ npm install hyperdrive
#### **`const drive = new Hyperdrive(store, [key])`**
-Creates a new Hyperdrive instance. `store` must be an instance of [corestore.md](../helpers/corestore.md "mention").
+Creates a new Hyperdrive instance. `store` must be an instance of [corestore.md](../helpers/corestore.md).
-By default, it uses the core at `{ name: 'db' }` from `store`, unless you set the public `key`.
+By default, it uses the core at `{ name: 'db' }` from `store`, unless the public `key` is set.
#### Properties
@@ -121,7 +121,9 @@ Boolean indicating if the drive handles or not metadata. Always `true`.
Waits until the internal state is loaded.
-Use it once before reading synchronous properties like `drive.discoveryKey`, unless you called any of the other APIs.
+Use it once before reading synchronous properties like `drive.discoveryKey`. Any of the other API methods will wait for readiness internally, so this is only needed to look up synchronous properties before any other API call.
#### **`await drive.close()`**
@@ -201,7 +203,7 @@ Deletes the blob from storage to free up space, but the file structure reference
| Property | Description | Type | Default |
| ----------------- | --------------------------------------------------------------------- | ------- | ------- |
-| **`diff`** | Returned `cleared` bytes object is null unless you enable this | Boolean | `false` |
+| **`diff`** | Returned `cleared` bytes object is null unless enabled | Boolean | `false` |
#### `const cleared = await drive.clearAll([options])`
@@ -211,11 +213,11 @@ Deletes all the blobs from storage to free up space, similar to how `drive.clear
| Property | Description | Type | Default |
| ----------------- | --------------------------------------------------------------------- | ------- | ------- |
-| **`diff`** | Returned `cleared` bytes object is null unless you enable this | Boolean | `false` |
+| **`diff`** | Returned `cleared` bytes object is null unless enabled | Boolean | `false` |
#### `await drive.purge()`
-Purges both cores (db and blobs) from your storage, completely removing all the drive's data.
+Purges both cores (db and blobs) from storage, completely removing all the drive's data.
#### **`await drive.symlink(path, linkname)`**
@@ -253,7 +255,7 @@ Returns a read stream of entries in the drive.
#### **`const mirror = drive.mirror(out, [options])`**
-Mirrors this drive into another. Returns a [mirrordrive.md](../helpers/mirrordrive.md "mention") instance constructed with `options`.
+Mirrors this drive into another. Returns a [mirrordrive.md](../helpers/mirrordrive.md) instance constructed with `options`.
Call `await mirror.done()` to wait for the mirroring to finish.
@@ -283,7 +285,7 @@ Waits until the watcher is loaded and detecting changes.
`await watcher.destroy()`
-Stops the watcher. You could also stop it by using `break` in the loop.
+Stops the watcher. It can also be stopped by using `break` in the `for await` loop.
#### **`const rs = drive.createReadStream(path, [options])`**
diff --git a/building-blocks/hyperswarm.md b/building-blocks/hyperswarm.md
index 8c25bcb..45b9089 100644
--- a/building-blocks/hyperswarm.md
+++ b/building-blocks/hyperswarm.md
@@ -6,7 +6,7 @@ Hyperswarm helps to find and connect to peers announcing a common 'topic' that c
Hyperswarm offers a simple interface to abstract away the complexities of underlying modules such as [HyperDHT](hyperdht.md) and [SecretStream](../helpers/secretstream.md). These modules can also be used independently for specialized tasks.
-> [Github (Hyperswarm)](https://github.com/hyperswarm/hyperswarm)
+> [GitHub (Hyperswarm)](https://github.com/hyperswarm/hyperswarm)
* [Hyperswarm](../building-blocks/hyperswarm.md)
* [Create a new instance](hyperswarm.md#installation)
@@ -84,7 +84,7 @@ See the [`PeerInfo`](hyperswarm.md#peerinfo) API for more details.
#### **`swarm.dht`**
-A [`HyperDHT`](./hyperdht.md) instance. Useful if you want lower-level control over Hyperswarm's networking.
+A [`HyperDHT`](./hyperdht.md) instance. Useful for lower-level control over Hyperswarm's networking.
#### Methods
@@ -100,7 +100,7 @@ Start discovering and connecting to peers sharing a common topic. As new peers a
| Property | Description | Type | Default |
| :----------: | -------------------------------------------------------------------------- | ------- | ------- |
-| **`server`** | Accept server connections for this topic by announcing yourself to the DHT | Boolean | `true` |
+| **`server`** | Accept server connections for this topic by self-announcing to the DHT | Boolean | `true` |
| **`client`** | Actively search for and connect to discovered servers | Boolean | `true` |
> Calling `swarm.join()` makes this core directly discoverable. To ensure that this core remains discoverable, Hyperswarm handles the periodic refresh of the join. For maximum efficiency, fewer joins should be called; if sharing a single Hypercore that links to other Hypercores, only join a `topic` for the first one.
@@ -123,7 +123,7 @@ Emitted when internal values are changed, useful for user interfaces.
### **Clients and Servers**
-In Hyperswarm, there are two ways for peers to join the swarm: client mode and server mode. If you've previously used Hyperswarm v2, these were called 'lookup' and 'announce', but we now think 'client' and 'server' are more descriptive.
+In Hyperswarm, there are two ways for peers to join the swarm: client mode and server mode. Previously in Hyperswarm v2, these were called 'lookup' and 'announce', but we now think 'client' and 'server' are more descriptive.
When user joins a topic as a server, the swarm will start accepting incoming connections from clients (peers that have joined the same topic in client mode). Server mode will announce this user keypair to the DHT so that other peers can discover the user server. When server connections are emitted, they are not associated with a specific topic -- the server only knows it received an incoming connection.
@@ -203,7 +203,7 @@ Ban or unban the peer. Banning will prevent any future reconnection attempts, bu
### Peer Discovery
-`swarm.join` returns a `PeerDiscovery` instance which allows you to both control discovery behavior, and respond to lifecycle changes during discovery.
+`swarm.join` returns a `PeerDiscovery` instance which allows both controlling discovery behavior and responding to lifecycle changes during discovery.
#### Methods
diff --git a/guide/connecting-two-peers.md b/guide/connecting-two-peers.md
index 9451279..489e3d0 100644
--- a/guide/connecting-two-peers.md
+++ b/guide/connecting-two-peers.md
@@ -31,7 +31,7 @@ import b4a from 'b4a'
const dht = new DHT()
-// This keypair is your peer identifier in the DHT
+// This keypair is the peer identifier in the DHT
const keyPair = DHT.keyPair()
const server = dht.createServer(conn => {
diff --git a/helpers/compact-encoding.md b/helpers/compact-encoding.md
index 5dbcbf9..708bad8 100644
--- a/helpers/compact-encoding.md
+++ b/helpers/compact-encoding.md
@@ -2,7 +2,7 @@
A series of binary encoders/decoders for building small and fast parsers and serializers.
-> [Github (Compact-Encoding)](https://github.com/compact-encoding/compact-encoding)
+> [GitHub (Compact-Encoding)](https://github.com/compact-encoding/compact-encoding)
* [Compact-Encoding](compact-encoding.md#installation)
* Methods
@@ -43,7 +43,7 @@ const state = cenc.state()
#### **`enc.preencode(state, val)`**
-Performs a fast preencode dry-run that only sets `state.end`. Use this to figure out how big of a buffer you need.
+Performs a fast preencode dry-run that only sets `state.end`. Use this to figure out how big the buffer needs to be.
```javascript
const cenc = require('compact-encoding')
@@ -84,7 +84,7 @@ cenc.string.decode(state) // 'hi'
### Helpers
-To encode to a buffer or decode from one, use the `encode` and `decode` helpers to reduce your boilerplate.
+To encode to a buffer or decode from one, use the `encode` and `decode` helpers to reduce boilerplate.
```javascript
const buf = cenc.encode(cenc.bool, true)
diff --git a/helpers/corestore.md b/helpers/corestore.md
index 32dc9a2..800b3df 100644
--- a/helpers/corestore.md
+++ b/helpers/corestore.md
@@ -2,9 +2,9 @@
**stable**
-Corestore is a Hypercore factory that makes it easier to manage large collections of named Hypercores. It is designed to efficiently store and replicate multiple sets of interlinked [hypercore.md](../building-blocks/hypercore.md "mention")(s), such as those used by [hyperdrive.md](../building-blocks/hyperdrive.md "mention"), removing the responsibility of managing custom storage/replication code from these higher-level modules.
+Corestore is a Hypercore factory that makes it easier to manage large collections of named Hypercores. It is designed to efficiently store and replicate multiple sets of interlinked [hypercore.md](../building-blocks/hypercore.md)(s), such as those used by [hyperdrive.md](../building-blocks/hyperdrive.md), removing the responsibility of managing custom storage/replication code from these higher-level modules.
-> [Github (Corestore)](https://github.com/holepunchto/corestore)
+> [GitHub (Corestore)](https://github.com/holepunchto/corestore)
* [Corestore](corestore.md#installation)
* [Create a new instance](corestore.md#const-store--new-corestorestorage-options)
@@ -68,7 +68,7 @@ const core4 = store.get({ key: otherKey })
const core5 = store.get(otherKey)
```
-> The names you provide are only relevant **locally**, in that they are used to deterministically generate key pairs. Whenever you load a core by name, that core will be writable. Names are not shared with remote peers.
+> The names provided are only relevant **locally**, in that they are used to deterministically generate key pairs. Whenever a core is loaded by name, that core will be writable. Names are not shared with remote peers.
#### **`const stream = store.replicate(options|stream)`**
@@ -80,7 +80,7 @@ Corestore replicates in an 'all-to-all' fashion, meaning that when replication b
If the remote side dynamically adds a new Hypercore to the replication stream (by opening that core with a `get` on their Corestore, for example), Corestore will load and replicate that core if possible.
-Using [hyperswarm.md](../building-blocks/hyperswarm.md "mention") one can replicate Corestores as follows:
+Using [hyperswarm.md](../building-blocks/hyperswarm.md) one can replicate Corestores as follows:
```javascript
const swarm = new Hyperswarm()
@@ -112,7 +112,7 @@ const core1 = ns1.get({ name: 'main' }) // These will load different Hypercores
const core2 = ns2.get({ name: 'main' })
```
-Namespacing is particularly useful if your application needs to create many different data structures, such as [hyperdrive.md](../building-blocks/hyperdrive.md "mention")s, that all share a common storage location:
+Namespacing is particularly useful if an application needs to create many different data structures, such as [hyperdrive.md](../building-blocks/hyperdrive.md)s, that all share a common storage location:
```javascript
const store = new Corestore('./my-storage-dir')
@@ -125,7 +125,7 @@ const drive2 = new Hyperdrive(store.namespace('drive-b'))
#### `const session = store.session([options])`
-Creates a new Corestore that shares resources with the original, like cache, cores, replication streams, and storage, while optionally resetting the namespace, overriding `primaryKey`. Useful when an application wants to accept an optional Corestore, but needs to maintain a predictable key derivation.
+Creates a new Corestore that shares resources with the original, like cache, cores, replication streams, and storage, while optionally resetting the namespace, overriding `primaryKey`. Useful when an application accepts an optional Corestore but needs to maintain a predictable key derivation.
`options` are the same as the constructor options:
diff --git a/helpers/localdrive.md b/helpers/localdrive.md
index f6e462d..eedb1cf 100644
--- a/helpers/localdrive.md
+++ b/helpers/localdrive.md
@@ -1,8 +1,8 @@
# Localdrive
-A file system API that is similar to [hyperdrive.md](../building-blocks/hyperdrive.md "mention"). This tool comes in handy when mirroring files from user filesystem to a drive, and vice-versa.
+A file system API that is similar to [hyperdrive.md](../building-blocks/hyperdrive.md). This tool comes in handy when mirroring files from the user's filesystem to a drive, and vice versa.
-> [Github (Localdrive)](https://github.com/holepunchto/localdrive)
+> [GitHub (Localdrive)](https://github.com/holepunchto/localdrive)
* [Installation](localdrive.md#installation)
* [Usage](localdrive.md#usage)
@@ -80,7 +80,7 @@ String with the resolved (absolute) drive path.
Boolean that indicates if the drive handles or not metadata. Default `false`.
-If you pass `options.metadata` hooks then `supportsMetadata` becomes true.
+If `options.metadata` hooks are passed then `supportsMetadata` becomes `true`.
**`await drive.put(key, buffer, [options])`**
@@ -146,7 +146,7 @@ Returns a stream of all subpaths of entries in drive stored at paths prefixed by
**`const mirror = drive.mirror(out, [options])`**
-Mirrors this drive into another. Returns a [mirrordrive.md](../helpers/mirrordrive.md "mention") instance constructed with `options`.
+Mirrors this drive into another. Returns a [mirrordrive.md](../helpers/mirrordrive.md) instance constructed with `options`.
Call [`await mirror.done()`](../helpers/mirrordrive.md#await-mirrordone) to wait for the mirroring to finish.
diff --git a/helpers/mirrordrive.md b/helpers/mirrordrive.md
index d300d07..6f25dea 100644
--- a/helpers/mirrordrive.md
+++ b/helpers/mirrordrive.md
@@ -1,8 +1,8 @@
# MirrorDrive
-Mirrors a [hyperdrive.md](../building-blocks/hyperdrive.md "mention") or a [localdrive.md](../helpers/localdrive.md "mention") into another one.
+Mirrors a [hyperdrive.md](../building-blocks/hyperdrive.md) or a [localdrive.md](../helpers/localdrive.md) into another one.
-> [Github (Mirrordrive)](https://github.com/holepunchto/mirror-drive)
+> [GitHub (Mirrordrive)](https://github.com/holepunchto/mirror-drive)
* [Installation](./mirrordrive.md#installation)
* [Basic usage](mirrordrive.md#basic-usage)
diff --git a/helpers/protomux.md b/helpers/protomux.md
index fe8d941..db2977e 100644
--- a/helpers/protomux.md
+++ b/helpers/protomux.md
@@ -2,7 +2,7 @@
Multiplex multiple message-oriented protocols over a stream
->[Github (Protomux)](https://github.com/mafintosh/protomux)
+>[GitHub (Protomux)](https://github.com/mafintosh/protomux)
* [Installation](protomux.md#installation)
* [Basic usage](protomux.md#basic-usage)
@@ -76,7 +76,7 @@ Makes a new instance. `stream` should be a framed stream, preserving the message
```javascript
{
- // Called when the muxer wants to allocate a message that is written, defaults to Buffer.allocUnsafe.
+ // Called when the muxer needs to allocate a message that is written, defaults to Buffer.allocUnsafe.
alloc (size) {}
}
```
@@ -99,7 +99,7 @@ Adds a new protocol channel.
id: buffer,
// Optional encoding for a handshake
handshake: encoding,
- // Optional array of message types you want to send/receive.
+ // Optional array of message types to send/receive.
messages: [],
// Called when the remote side adds this protocol.
// Errors here are caught and forwarded to stream.destroy
@@ -180,4 +180,4 @@ Same as `channel.uncork` but on the muxer instance.
#### **`for (const channel of muxer) { ... }`**
-The muxer instance is iterable, so you can iterate over all the channels.
+The muxer instance is iterable, allowing iteration over all of its channels.
diff --git a/helpers/secretstream.md b/helpers/secretstream.md
index e078729..4273b71 100644
--- a/helpers/secretstream.md
+++ b/helpers/secretstream.md
@@ -4,7 +4,7 @@ SecretStream is used to securely create connections between two peers in Hypersw
The SecretStream instance is a Duplex stream that supports usability as a normal stream for standard read/write operations. Furthermore, its payloads are encrypted with libsodium's SecretStream for secure transmission.
->[Github (SecretStream)](https://github.com/holepunchto/hyperswarm-secret-stream)
+>[GitHub (SecretStream)](https://github.com/holepunchto/hyperswarm-secret-stream)
* [SecretStream](secretstream.md#installation)
* [Create a new instance](secretstream.md#const-s--new-secretstreamisinitiator-rawstream-options)
@@ -35,7 +35,7 @@ npm install @hyperswarm/secret-stream
Makes a new stream.
-`isInitiator` is a boolean indicating whether you are the client or the server.
+`isInitiator` is a boolean indicating whether this side of the stream is the client or the server.
`rawStream` can be set to an underlying transport stream to run the noise stream over.
@@ -43,12 +43,12 @@ Makes a new stream.
| Property | Description | Type |
| :-------------------: | -------------------------------------------------------------------------- | ----------------------------------------------------- |
-| **`pattern`** | Accept server connections for this topic by announcing yourself to the DHT | String |
+| **`pattern`** | Accept server connections for this topic by announcing it to the DHT | String |
| **`remotePublicKey`** | PublicKey of the other party | String |
| **`keyPair`** | Combination of PublicKey and SecretKey | { publicKey, secretKey } |
| **`handshake`** | To use a handshake performed elsewhere, pass it here | { tx, rx, handshakeHash, publicKey, remotePublicKey } |
-The SecretStream returned is a Duplex stream that you use as a normal stream, to write/read data from, except its payloads are encrypted using the libsodium secretstream.
+The SecretStream returned is a Duplex stream to write data to and read data from; it behaves like a normal stream, except its payloads are encrypted using the libsodium secretstream.
> By default, the above process uses ed25519 for the handshakes.
diff --git a/howto/connect-two-peers-by-key-with-hyperdht.md b/howto/connect-two-peers-by-key-with-hyperdht.md
index 67a336b..8ed00c0 100644
--- a/howto/connect-two-peers-by-key-with-hyperdht.md
+++ b/howto/connect-two-peers-by-key-with-hyperdht.md
@@ -31,7 +31,7 @@ import b4a from 'b4a'
const dht = new DHT()
-// This keypair is your peer identifier in the DHT
+// This keypair is the peer identifier in the DHT
const keyPair = DHT.keyPair()
const server = dht.createServer(conn => {
diff --git a/readme.md b/readme.md
index e168b8d..29119d0 100644
--- a/readme.md
+++ b/readme.md
@@ -4,9 +4,7 @@
Pear by Holepunch is a combined Peer-to-Peer (P2P) Runtime, Development & Deployment tool.
-Pear makes it possible to build, share and extend P2P applications using common Web and Mobile technology.
-
-Herein is everything needed to create unstoppable, zero-infrastructure P2P applications for Desktop, Terminal & Mobile (soon).
+Build, share & extend unstoppable, zero-infrastructure P2P applications for Desktop, Terminal & Mobile.
Welcome to the Internet of Peers
@@ -97,7 +95,7 @@ The `hyperdht` module is the Distributed Hash Table (DHT) powering Hyperswarm. T
Notable features include:
-* lower-level module provides direct access to the DHT for connecting peers using keypairs
+* lower-level module provides direct access to the DHT for connecting peers using key pairs
## Helpers
diff --git a/tools/drives.md b/tools/drives.md
index c96b5e7..dd7a755 100644
--- a/tools/drives.md
+++ b/tools/drives.md
@@ -2,7 +2,7 @@
CLI to download, seed, and mirror a Hyperdrive or Localdrive.
->[Github (drives)](https://github.com/holepunchto/drives)
+>[GitHub (drives)](https://github.com/holepunchto/drives)
* [Installation](drives.md#installation)
* [Basic usage](drives.md#basic-usage)
diff --git a/tools/hyperbeam.md b/tools/hyperbeam.md
index a59d9ab..cf0984c 100644
--- a/tools/hyperbeam.md
+++ b/tools/hyperbeam.md
@@ -1,6 +1,6 @@
# Hyperbeam
-An end-to-end encrypted pipeline for the Internet, utilizing the [hyperswarm.md](../building-blocks/hyperswarm.md "mention") and Noise Protocol for secure communications.
+An end-to-end encrypted pipeline for the Internet, utilizing the [hyperswarm.md](../building-blocks/hyperswarm.md) and Noise Protocol for secure communications.
> [GitHub (Hyperbeam)](https://github.com/mafintosh/hyperbeam)
diff --git a/tools/hypertele.md b/tools/hypertele.md
index 53193b5..12bd269 100644
--- a/tools/hypertele.md
+++ b/tools/hypertele.md
@@ -123,4 +123,4 @@ Hypertele also provides support for the hyper-cmd system!
Learn more about identity management and host resolution using hyper-cmd:
-> [Github (Hyper-cmd-docs)](https://github.com/prdn/hyper-cmd-docs)
+> [GitHub (Hyper-cmd-docs)](https://github.com/prdn/hyper-cmd-docs)