mirror of
https://github.com/aljazceru/pear-docs.git
synced 2025-12-17 14:34:19 +01:00
releasing a pear, readme iterate, howtos
@@ -1,4 +1,4 @@
### Hyperswarm: Connecting to Many Peers by Topic
### How to connect to many peers by topic with Hyperswarm

Get set up by creating a project folder and installing dependencies:

@@ -1,5 +1,5 @@
### Hyperswarm's DHT: Connecting Two Peers by Key
### How to connect two peers by key with HyperDHT

Get set up by creating a project folder and installing dependencies:

@@ -1,5 +1,5 @@
### Corestore: Working with Many Hypercores
### How to work with many Hypercores using Corestore

Get set up by creating a project folder and installing dependencies:

@@ -1,5 +1,6 @@
### Hyperbee: Sharing Append-Only Databases
### How to share Append-Only Databases with Hyperbee

Get set up by creating a project folder and installing dependencies:

```bash
@@ -85,7 +86,7 @@ async function loadDictionary() {
```

`bee-reader.js` creates a Corestore instance and replicates it using the Hyperswarm instance to the same topic as the above file. On every word entered in the command line, it will download the respective data to the local Hyperbee instance.
`bee-reader.js` creates a Corestore instance and replicates it using the Hyperswarm instance to the same topic as `writer.js`. On every word entered in the command line, it will download the respective data to the local Hyperbee instance.

Try looking at the disk space the `reader-storage` directory is using after each query. Notice that it's significantly smaller than `writer-storage`! This is because Hyperbee only downloads the Hypercore blocks it needs to satisfy each query, a feature we call **sparse downloading**.

@@ -1,4 +1,4 @@
### Hypercore: The Basics
### How to replicate and persist with Hypercore

Get set up by creating a project folder and installing dependencies:

@@ -1,6 +1,4 @@
### Hyperdrive: A Full P2P Filesystem
### How to create a full P2P filesystem with Hyperdrive

Get set up by creating a project folder and installing dependencies:

@@ -1,21 +1,95 @@
# Releasing a Pear Application

As covered in [Sharing a Pear App](./sharing-a-pear-app.md), Pear uses release channels in a similar way that git uses branches. Once the app has been tested and is ready for release, releasing is simple.
Pear Applications are stored in an append-only log ([hypercore](../building-blocks/hypercore.md)) and the log has a length.

Pear versions take the form `<fork>.<length>.<key>`. The version length of a Pear application is the length of its append-only log.

* prerelease strategy
* dump strategy

When an application has not been marked with a release, `pear run <key>` opens the application at its latest version length. This is excellent in development, both locally and for other peers to preview prereleased work.

## Previewing prerelease

However, once a release has been marked, `pear run <key>` will only open the latest marked release.

## Marking a Release
## Step 1: Staging Production

Assume that the app was staged into `example`, then releasing it is simply:
The `pear stage` command derives an application key from the application name as defined in the project `package.json` file and the specified `channel` name. The `pear stage dev` convention for development can be complemented with a `production` channel name for production. Running the following command in a Pear project folder will output an application key.

```
pear release example
```
```sh
pear stage production
```

This moves the example channel to the released version. The seeders who are already seeding that channel will still be seeding.
Using separate channels for development and production means there's an application key for trusted peers and an application key for public peers. The development key can remain unreleased so that `pear run <key>` loads the latest staged changes by default, while releases can be marked on the production key so that `pear run <key>` loads the latest stable release by default for production.
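The two-channel convention can be sketched as a short session (any keys printed are placeholders for this illustration):

```sh
pear stage dev         # prints the internal (development) application key
pear stage production  # prints the public (production) application key
```

Each command stages the current project folder into its channel and prints the key derived from the application name and that channel name.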

## Dump to Stage Production Key Deployment Strategy
## Step 2: Marking a Release

Changes to an application can only propagate to peers if the application is being seeded:

```
pear seed production
```

To view the help for the `pear release` command run `pear release help`.

To indicate that the latest staged changes on the production channel are at a release point, run:

```
pear release production
```

## Step 3: Running staged from a released app

After marking a release, make a trivial change to the project (e.g. add a console.log somewhere), check it works with the `pear dev` command and then stage it with `pear stage production`.

Opening the application with `pear run <key>` will **not** result in the log being output, because `pear run` will load the latest marked release, which was created before the added log was staged.

The latest staged changes on a released application can be previewed using the `--checkout` flag:

```
pear run <key> --checkout=staged
```

The value of the `--checkout` flag may be `staged`, `released` (default) or a number representing the specific version length to checkout.
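For example, each accepted `--checkout` value can be passed as follows (the key and the version length `1337` are placeholders):

```sh
pear run <key> --checkout=released   # latest marked release (the default)
pear run <key> --checkout=staged     # latest staged changes
pear run <key> --checkout=1337       # a specific version length
```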

## Discussion

### The dump-stage-release strategy

A development application key can be shared among trusted peers, at which point it could be referred to as an internal application key (internal to the group of peers who have the key).

While using different channel names by convention makes good sense, using `pear stage dev` and `pear stage production` on the same machine can have practical limitations.

A dump-stage-release strategy can be employed to further separate the concerns between development and production, and to enable different machines to own internal vs production keys.

The machine that will hold the production key can run:

```
pear dump <internal-key> <path-to-app-production-dir>
```

This will synchronize the application files to disk. It's a reverse stage.

Once complete, the project can be staged from the production machine with:

```
pear stage production
```

Then released with:

```
pear release production
```

A `pear seed production` process would also need to be running for other peers to access the application.

The same three commands in order, `pear dump`, `pear stage` and `pear release`, can be used to carve a release from an internal key to a production key across different machines at any time.
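The whole dump-stage-release flow on the production machine can be summarized as the following sketch (the path and key are placeholders, and it assumes `pear stage` and `pear release` are run from the dumped project folder):

```sh
pear dump <internal-key> <path-to-app-production-dir>
cd <path-to-app-production-dir>
pear stage production
pear release production
pear seed production
```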

### Distribution Packages

Asset building of distribution packages (.dmg, .msi, .AppImage) is not currently a Pear feature, but it is planned for the future.

## Next

* [Starting a Pear Desktop Project](./starting-a-pear-desktop-project.md)
* [Making a Pear Desktop Application](./making-a-pear-desktop-app.md)
* [Starting a Pear Terminal Project](./starting-a-pear-terminal-project.md)
* [Making a Pear Terminal Application](./making-a-pear-terminal-app.md)
* [Sharing a Pear Application](./sharing-a-pear-app.md)

@@ -14,7 +14,7 @@ If starting from [Making a Pear Desktop Application](./making-a-pear-desktop-app

## Step 1. Stage the app

To view the help for the `pear stage` command we can run `pear stage help`.
To view the help for the `pear stage` command run `pear stage help`.

The command signature for `pear stage` is `pear stage <channel|key> [dir]`.

@@ -38,7 +38,7 @@ If the application is a desktop application there will also be a warmup step whe

## Step 2. Run the app on the same machine

To view the help for the `pear run` command we can run `pear run help`.
To view the help for the `pear run` command run `pear run help`.

The command signature for `pear run` is `pear run <key>`.

@@ -54,7 +54,7 @@ Where `pear dev` opens an application from the filesystem, `pear run` opens the

The application can be shared with other peers by announcing the application to the DHT and then supplying the application key to other peers.

To view the help for the `pear seed` command we can run `pear seed help`.
To view the help for the `pear seed` command run `pear seed help`.

The command signature for `pear seed` is `pear seed <channel|key> [dir]`.

57
howto/connect-to-many-peers-by-topic-with-hyperswarm.md
Normal file
@@ -0,0 +1,57 @@

### How to connect to many peers by topic with Hyperswarm

Get set up by creating a project folder and installing dependencies:

```bash
mkdir connect-many-peers
cd connect-many-peers
pear init -y -t terminal
npm install hyperswarm b4a graceful-goodbye hypercore-crypto
```

In the previous example, two peers connected directly using the first peer's public key. Hyperswarm helps to discover peers swarming a common topic, and to connect to as many of them as possible. This will become clearer in the Hypercore example, but it's the best way to distribute peer-to-peer data structures.

The [Hyperswarm](../building-blocks/hyperswarm.md) module provides a higher-level interface over the underlying DHT, abstracting away the mechanics of establishing and maintaining connections. Instead of dialing peers directly, 'join' topics, and the swarm discovers peers automatically. It also handles reconnections in the event of failures.

In the previous example, we needed to explicitly indicate which peer was the server and which was the client. By using Hyperswarm, we create two peers, have them join a common topic, and let the swarm deal with connections.

This example consists of a single file, `peer.js`. In one terminal, run `node peer.js`; it will display the topic. Copy/paste that topic into N additional terminals with `node peer.js <topic>`. Each peer will log information about the other connected peers.

Start typing into any terminal, and it will be broadcast to all connected peers.

```javascript
// peer.js
import Hyperswarm from 'hyperswarm'
import goodbye from 'graceful-goodbye'
import crypto from 'hypercore-crypto'
import b4a from 'b4a'

const swarm = new Hyperswarm()
goodbye(() => swarm.destroy())

// Keep track of all connections and console.log incoming data
const conns = []
swarm.on('connection', conn => {
  const name = b4a.toString(conn.remotePublicKey, 'hex')
  console.log('* got a connection from:', name, '*')
  conns.push(conn)
  conn.once('close', () => conns.splice(conns.indexOf(conn), 1))
  conn.on('data', data => console.log(`${name}: ${data}`))
})

// Broadcast stdin to all connections
process.stdin.on('data', d => {
  for (const conn of conns) {
    conn.write(d)
  }
})

// Join a common topic: reuse the one passed as an argument, or generate a new one
const topic = process.argv[2] ? b4a.from(process.argv[2], 'hex') : crypto.randomBytes(32)
const discovery = swarm.join(topic, { client: true, server: true })

// The flushed promise will resolve when the topic has been fully announced to the DHT
discovery.flushed().then(() => {
  console.log('joined topic:', b4a.toString(topic, 'hex'))
})
```
69
howto/connect-two-peers-by-key-with-hyperdht.md
Normal file
@@ -0,0 +1,69 @@

### How to connect two peers by key with HyperDHT

Get set up by creating a project folder and installing dependencies:

```bash
mkdir connect-two-peers
cd connect-two-peers
pear init -y -t terminal
npm install hyperdht b4a graceful-goodbye
```

[Hyperswarm](../building-blocks/hyperswarm.md) helps to find and connect to peers who are announcing a common 'topic'. The swarm topic can be anything. The HyperDHT uses a series of holepunching techniques to establish direct connections between peers, even if they're located on home networks with tricky NATs.

In the HyperDHT, peers are identified by a public key, not by an IP address. With the public key, users can connect to each other irrespective of their location, even if they move between different networks.

> Hyperswarm's holepunching will fail if both the client peer and the server peer are on randomizing [NATs](https://en.wikipedia.org/wiki/Network_address_translation), in which case the connection must be relayed through a third peer. Hyperswarm does not do any relaying by default.

> For example, Keet implements its own relaying system wherein other call participants can serve as relays: the more participants in the call, the stronger overall connectivity becomes.

Use the HyperDHT to create a basic CLI chat app where a client peer connects to a server peer by public key. This example consists of two files: `client.js` and `server.js`.

`server.js` will create a key pair and then start a server listening on the generated key pair. The public key is logged to the console. Copy it for instantiating the client.

```javascript
// server.js
import DHT from 'hyperdht'
import goodbye from 'graceful-goodbye'
import b4a from 'b4a'

const dht = new DHT()

// This keypair is your peer identifier in the DHT
const keyPair = DHT.keyPair()

const server = dht.createServer(conn => {
  console.log('got connection!')
  process.stdin.pipe(conn).pipe(process.stdout)
})

server.listen(keyPair).then(() => {
  console.log('listening on:', b4a.toString(keyPair.publicKey, 'hex'))
})

// Unannounce the public key before exiting the process
// (This is not a requirement, but it helps avoid DHT pollution)
goodbye(() => server.close())
```

`client.js` will spin up a client; the public key copied earlier must be supplied as a command line argument to connect to the server. The client process will log `got connection!` to the console when it connects to the server.

Once it's connected, try typing in both terminals!

```javascript
// client.js
import DHT from 'hyperdht'
import b4a from 'b4a'

console.log('Connecting to:', process.argv[2])
const publicKey = b4a.from(process.argv[2], 'hex')

const dht = new DHT()
const conn = dht.connect(publicKey)
conn.once('open', () => console.log('got connection!'))

process.stdin.pipe(conn).pipe(process.stdout)
```

186
howto/create-a-full-peer-to-peer-filesystem-with-hyperdrive.md
Normal file
@@ -0,0 +1,186 @@

### How to create a full peer-to-peer filesystem with Hyperdrive

Get set up by creating a project folder and installing dependencies:

```bash
mkdir p2p-filesystem
cd p2p-filesystem
pear init -y -t terminal
npm install hyperswarm hyperdrive localdrive corestore debounceify b4a graceful-goodbye
```

[Hyperdrive](../building-blocks/hyperdrive.md) is a secure, real-time distributed file system designed for easy peer-to-peer file sharing. In the same way that a Hyperbee is just a wrapper around a Hypercore, a Hyperdrive is a wrapper around two Hypercores: one is a Hyperbee index for storing file metadata, and the other is used to store file contents.

Now mirror a local directory into a Hyperdrive, replicate it with a reader peer, who then mirrors it into their own local copy. When the writer modifies its drive, by adding, removing, or changing files, the reader's local copy will be updated to reflect that. To do this, use two additional tools: [MirrorDrive](../helpers/mirrordrive.md) and [Localdrive](../helpers/localdrive.md), which handle all interactions between Hyperdrives and the local filesystem.

This example consists of three files: `writer.js`, `drive-reader.js` and `bee-reader.js`.

`writer.js` creates a local drive instance for a local directory and then mirrors the local drive into the Hyperdrive instance. The store used to create the Hyperdrive instance is replicated using Hyperswarm to make the Hyperdrive's data accessible to other peers. Copy the drive key logged to the command line for the `drive-reader.js` execution.

```javascript
// writer.js
import Hyperswarm from 'hyperswarm'
import Hyperdrive from 'hyperdrive'
import Localdrive from 'localdrive'
import Corestore from 'corestore'
import goodbye from 'graceful-goodbye'
import debounce from 'debounceify'
import b4a from 'b4a'

// create a Corestore instance
const store = new Corestore('./writer-storage')
const swarm = new Hyperswarm()
goodbye(() => swarm.destroy())

// replicate the corestore instance on connection with other peers
swarm.on('connection', conn => store.replicate(conn))

// A local drive provides a Hyperdrive interface to a local directory
const local = new Localdrive('./writer-dir')

// A Hyperdrive takes a Corestore because it needs to create many cores
// One for a file metadata Hyperbee, and one for a content Hypercore
const drive = new Hyperdrive(store)

// wait till the properties of the Hyperdrive instance are initialized
await drive.ready()

// Import changes from the local drive into the Hyperdrive
const mirror = debounce(mirrorDrive)

const discovery = swarm.join(drive.discoveryKey)
await discovery.flushed()

console.log('drive key:', b4a.toString(drive.key, 'hex'))

// start the mirroring process (i.e. copying content from writer-dir into the drive)
// whenever a line is entered (Enter, i.e. '\n') on the command line
process.stdin.setEncoding('utf-8')
process.stdin.on('data', (d) => {
  if (!d.match('\n')) return
  mirror()
})

// this function copies the contents of the writer-dir directory into the drive
async function mirrorDrive () {
  console.log('started mirroring changes from \'./writer-dir\' into the drive...')
  const mirror = local.mirror(drive)
  await mirror.done()
  console.log('finished mirroring:', mirror.count)
}
```

`drive-reader.js` creates a local drive instance for a local directory and then mirrors the contents of the local Hyperdrive instance into the local drive instance (which will write the contents to the local directory).

Try running `node drive-reader.js <key-from-above>`, then add/remove/modify files inside `writer-dir`, then press `Enter` in the writer's terminal (to import the local changes into the writer's drive). Observe that all new changes mirror into `reader-dir`.

```javascript
// drive-reader.js
import Hyperswarm from 'hyperswarm'
import Hyperdrive from 'hyperdrive'
import Localdrive from 'localdrive'
import Corestore from 'corestore'
import goodbye from 'graceful-goodbye'
import debounce from 'debounceify'
import b4a from 'b4a'

// create a Corestore instance
const store = new Corestore('./reader-storage')

const swarm = new Hyperswarm()
goodbye(() => swarm.destroy())

// replicate the store on connection with other peers
swarm.on('connection', conn => store.replicate(conn))

// create a local copy of the remote drive
const local = new Localdrive('./reader-dir')

// create a Hyperdrive using the public key passed as a command-line argument
const drive = new Hyperdrive(store, b4a.from(process.argv[2], 'hex'))

// wait till all the properties of the drive are initialized
await drive.ready()

const mirror = debounce(mirrorDrive)

// call the mirror function whenever content gets appended
// to the Hypercore instance of the Hyperdrive
drive.core.on('append', mirror)

const foundPeers = store.findingPeers()

// join a topic
swarm.join(drive.discoveryKey, { client: true, server: false })
swarm.flush().then(() => foundPeers())

// start the mirroring process (i.e. copying the contents from the remote drive to the local dir)
mirror()

async function mirrorDrive () {
  console.log('started mirroring remote drive into \'./reader-dir\'...')
  const mirror = drive.mirror(local)
  await mirror.done()
  console.log('finished mirroring:', mirror.count)
}
```

Just as a Hyperbee is **just** a Hypercore, a Hyperdrive is **just** a Hyperbee (which is **just** a Hypercore). Now inspect the Hyperdrive as though it were a Hyperbee, and log out some file metadata.

`bee-reader.js` creates a Hyperbee instance using the Hypercore instance created with the copied public key. Every time the Hyperbee is updated (an `append` event is emitted on the underlying Hypercore), all file metadata nodes will be logged out.

Try adding or removing a few files from the writer's data directory, then pressing `Enter` in the writer's terminal to mirror the changes.

```javascript
// bee-reader.js
import Hyperswarm from 'hyperswarm'
import Corestore from 'corestore'
import Hyperbee from 'hyperbee'
import goodbye from 'graceful-goodbye'
import debounce from 'debounceify'
import b4a from 'b4a'

// create a Corestore instance
const store = new Corestore('./reader-storage')

const swarm = new Hyperswarm()
goodbye(() => swarm.destroy())

// replicate the corestore instance on connection with other peers
swarm.on('connection', conn => store.replicate(conn))

// create/get the hypercore instance using the public key supplied as command-line arg
const core = store.get({ key: b4a.from(process.argv[2], 'hex') })

// create a hyperbee instance using the hypercore instance
const bee = new Hyperbee(core, {
  keyEncoding: 'utf-8',
  valueEncoding: 'json'
})

// wait till the properties of the hypercore instance are initialized
await core.ready()

const foundPeers = store.findingPeers()
swarm.join(core.discoveryKey)
swarm.flush().then(() => foundPeers())

// execute the listBee function whenever data is appended to the underlying hypercore
core.on('append', listBee)

listBee()

// listBee will list the key-value pairs present in the hyperbee instance
async function listBee () {
  console.log('\n***************')
  console.log('hyperbee contents are now:')
  for await (const node of bee.createReadStream()) {
    console.log(' ', node.key, '->', node.value)
  }
}
```

86
howto/replicate-and-persist-with-hypercore.md
Normal file
@@ -0,0 +1,86 @@

### How to replicate and persist with Hypercore

Get set up by creating a project folder and installing dependencies:

```bash
mkdir hypercore-basics
cd hypercore-basics
pear init -y -t terminal
npm install hyperswarm hypercore b4a graceful-goodbye
```

In the Hyperswarm examples, peers can only exchange chat messages while both are online at the same time and directly connected, and those messages are not persistent (they will be lost if the recipient is offline). Hypercore fixes both of these problems.

[Hypercore](../building-blocks/hypercore.md) is a secure, distributed append-only log. It is built for sharing enormous datasets and streams of real-time data. It has a secure transport protocol, making it easy to build fast and scalable peer-to-peer applications.

Now extend the ephemeral chat example above, using Hypercore to add many significant new features:

1. **Persistence:** The owner of the Hypercore can add messages at any time, and they'll be persisted to disk. Whenever they come online, readers can replicate these messages over Hyperswarm.
2. **Many readers:** New messages added to the Hypercore will be broadcast to interested readers. The owner gives each reader a reading capability (`core.key`) and a corresponding discovery key (`core.discoveryKey`). The former is used to authorize the reader, ensuring that they have permission to read messages, and the latter is used to discover the owner (and other readers) on the swarm.

The following example consists of two files: `reader.js` and `writer.js`. When these two files are executed (run using node), two peers are created and connected. A Hypercore is used to store the data entered into the command line.

The `writer.js` code stores the data entered into the command line in the Hypercore instance. The Hypercore instance is replicated with other peers using Hyperswarm.

```javascript
// writer.js
import Hyperswarm from 'hyperswarm'
import Hypercore from 'hypercore'
import goodbye from 'graceful-goodbye'
import b4a from 'b4a'

const swarm = new Hyperswarm()
goodbye(() => swarm.destroy())

const core = new Hypercore('./writer-storage')

// core.key and core.discoveryKey will only be set after core.ready resolves
await core.ready()
console.log('hypercore key:', b4a.toString(core.key, 'hex'))

// Append all stdin data as separate blocks to the core
process.stdin.on('data', data => core.append(data))

// core.discoveryKey is *not* a read capability for the core
// It's only used to discover other peers who *might* have the core
swarm.join(core.discoveryKey)
swarm.on('connection', conn => core.replicate(conn))
```

`reader.js` uses Hyperswarm to connect to the previously initiated peer and synchronize the local Hypercore instance with the writer's Hypercore instance.

```javascript
// reader.js
import Hyperswarm from 'hyperswarm'
import Hypercore from 'hypercore'
import goodbye from 'graceful-goodbye'

const swarm = new Hyperswarm()
goodbye(() => swarm.destroy())

const core = new Hypercore('./reader-storage', process.argv[2])
await core.ready()

const foundPeers = core.findingPeers()
swarm.join(core.discoveryKey)
swarm.on('connection', conn => core.replicate(conn))

// swarm.flush() will wait until *all* discoverable peers have been connected to
// It might take a while, so don't await it
// Instead, use core.findingPeers() to mark when the discovery process is completed
swarm.flush().then(() => foundPeers())

// This won't resolve until either
// a) the first peer is found
// or b) no peers could be found
await core.update()

let position = core.length
console.log(`Skipping ${core.length} earlier blocks...`)
for await (const block of core.createReadStream({ start: core.length, live: true })) {
  console.log(`Block ${position++}: ${block}`)
}
```

184
howto/share-append-only-databases-with-hyperbee.md
Normal file
@@ -0,0 +1,184 @@

### How to share Append-Only Databases with Hyperbee

Get set up by creating a project folder and installing dependencies:

```bash
mkdir share-append-only-db
cd share-append-only-db
pear init -y -t terminal
npm install hyperswarm corestore hyperbee graceful-goodbye b4a
```

[Hyperbee](../building-blocks/hyperbee.md) is an append-only B-tree based on Hypercore. It provides a key/value-store API with methods to insert and get key/value pairs, perform atomic batch insertions, and create sorted iterators.

The example consists of three files: `writer.js`, `bee-reader.js` and `core-reader.js`.

`writer.js` stores 100k entries from a given dictionary file into a Hyperbee instance. The Corestore instance used to create the Hyperbee instance is replicated using Hyperswarm. This enables other peers to replicate their Corestore instance and download the dictionary data into their local Hyperbee instances.

> Download the `dict.json.gz` compressed file from the [GitHub repository](https://github.com/holepunchto/examples/blob/main/quick-start/hyperbee/dict.json.gz) to the folder where `writer.js` is present. The compressed file contains 100k dictionary words.
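If downloading the file isn't convenient, a compatible file can also be generated locally. The sketch below (the filename `make-dict.js` and the sample entries are made up for illustration) writes a gzipped JSON array of `{ key, value }` objects, which is the shape that `loadDictionary()` in `writer.js` expects:

```javascript
// make-dict.js - build a small dict.json.gz compatible with writer.js
import fs from 'fs'
import zlib from 'zlib'

// a couple of sample entries standing in for the 100k-word dictionary
const entries = [
  { key: 'aardvark', value: 'a burrowing, nocturnal mammal' },
  { key: 'abacus', value: 'a counting frame with sliding beads' }
]

// serialize to JSON, gzip it, and write it next to writer.js
const compressed = zlib.gzipSync(Buffer.from(JSON.stringify(entries)))
fs.writeFileSync('./dict.json.gz', compressed)
console.log(`wrote ${entries.length} entries to dict.json.gz`)
```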
|
||||
|
||||
```javascript
|
||||
//writer.js
|
||||
import fs from 'fs'
|
||||
import zlib from 'zlib'
|
||||
|
||||
import Hyperswarm from 'hyperswarm'
|
||||
import Corestore from 'corestore'
|
||||
import Hyperbee from 'hyperbee'
|
||||
import goodbye from 'graceful-goodbye'
|
||||
import b4a from 'b4a'
|
||||
|
||||
// create a corestore instance with the given location
|
||||
const store = new Corestore('./writer-storage')
|
||||
|
||||
const swarm = new Hyperswarm()
|
||||
goodbye(() => swarm.destroy())
|
||||
|
||||
// replication of corestore instance
|
||||
swarm.on('connection', conn => store.replicate(conn))
|
||||
|
||||
// creation of Hypercore instance (if not already created)
|
||||
const core = store.get({ name: 'my-bee-core' })
|
||||
|
||||
// creation of Hyperbee instance using the core instance
|
||||
const bee = new Hyperbee(core, {
|
||||
keyEncoding: 'utf-8',
|
||||
valueEncoding: 'utf-8'
|
||||
})
|
||||
|
||||
// wait till all the properties of the hypercore are initialized
|
||||
await core.ready()
|
||||
|
||||
// join a topic
|
||||
const discovery = swarm.join(core.discoveryKey)
|
||||
|
||||
// Only display the key once the Hyperbee has been announced to the DHT
|
||||
discovery.flushed().then(() => {
|
||||
console.log('bee key:', b4a.toString(core.key, 'hex'))
|
||||
})
|
||||
|
||||
// Only import the dictionary the first time this script is executed
|
||||
// The first block will always be the Hyperbee header block
|
||||
if (core.length <= 1) {
|
||||
console.log('importing dictionary...')
|
||||
const dict = await loadDictionary()
|
||||
const batch = bee.batch()
|
||||
for (const { key, value } of dict) {
|
||||
await batch.put(key, value)
|
||||
}
|
||||
await batch.flush()
|
||||
} else {
|
||||
// Otherwise just seed the previously-imported dictionary
|
||||
console.log('seeding dictionary...')
|
||||
}
|
||||
|
||||
async function loadDictionary() {
|
||||
const compressed = await fs.promises.readFile('./dict.json.gz')
|
||||
return new Promise((resolve, reject) => {
|
||||
// unzip the compressed file and return the content
|
||||
zlib.unzip(compressed, (err, dict) => {
|
||||
if (err) return reject(err)
|
||||
return resolve(JSON.parse(b4a.toString(dict)))
|
||||
})
|
||||
})
|
||||
}
|
||||
```
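If downloading `dict.json.gz` isn't convenient, a compatible file can be generated locally. `loadDictionary()` only expects a gzipped JSON array of `{ key, value }` objects, so a short, hypothetical `make-dict.js` is enough (the two stand-in entries here replace the real 100k-word list):

```javascript
// make-dict.js - build a gzipped JSON dictionary that loadDictionary() can read
import fs from 'fs'
import zlib from 'zlib'

// stand-in entries; the real file contains ~100k of these
const dict = [
  { key: 'aardvark', value: 'a nocturnal burrowing mammal' },
  { key: 'zygote', value: 'a fertilized egg cell' }
]

// gzip the JSON so zlib.unzip() on the writer side can decompress it
fs.writeFileSync('./dict.json.gz', zlib.gzipSync(JSON.stringify(dict)))
```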

`bee-reader.js` creates a Corestore instance and uses Hyperswarm to replicate it over the same topic as `writer.js`. For every word entered on the command line, it downloads the corresponding data into the local Hyperbee instance.

Try looking at the disk space the `reader-storage` directory uses after each query. Notice that it's significantly smaller than `writer-storage`! This is because Hyperbee only downloads the Hypercore blocks it needs to satisfy each query, a feature we call **sparse downloading**.

```javascript
// bee-reader.js
import Hyperswarm from 'hyperswarm'
import Corestore from 'corestore'
import Hyperbee from 'hyperbee'
import goodbye from 'graceful-goodbye'
import b4a from 'b4a'

// create a Corestore instance
const store = new Corestore('./reader-storage')

const swarm = new Hyperswarm()
goodbye(() => swarm.destroy())

// replicate the Corestore instance on every connection with other peers
swarm.on('connection', conn => store.replicate(conn))

// create or get the Hypercore using the public key supplied as a command-line argument
const core = store.get({ key: b4a.from(process.argv[2], 'hex') })

// create a Hyperbee instance using the Hypercore instance
const bee = new Hyperbee(core, {
  keyEncoding: 'utf-8',
  valueEncoding: 'utf-8'
})

// wait until the properties of the Hypercore are initialized
await core.ready()

// log the public key of the Hypercore instance
console.log('core key here is:', b4a.toString(core.key, 'hex'))

// attempt to connect to peers
swarm.join(core.discoveryKey)

// Do a single Hyperbee.get for every line of stdin data
// Each `get` will only download the blocks necessary to satisfy the query
process.stdin.setEncoding('utf-8')
process.stdin.on('data', data => {
  const word = data.trim()
  if (!word.length) return
  bee.get(word).then(node => {
    if (!node || !node.value) console.log(`No dictionary entry for ${word}`)
    else console.log(`${word} -> ${node.value}`)
  }, err => console.error(err))
})
```
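The effect of sparse downloading can be estimated with a little arithmetic: a point lookup in a B-tree touches roughly one block per tree level, so the number of blocks fetched grows with the logarithm of the entry count, not with the count itself. (Hyperbee's actual fan-out is an internal detail; the numbers below are only illustrative.)

```javascript
// Rough estimate of the blocks touched by a single point lookup
// in a B-tree with n entries and the given fan-out (branching factor).
function blocksPerLookup (n, fanout) {
  // one block per level, root to leaf
  return Math.ceil(Math.log(n) / Math.log(fanout))
}

// for 100k entries, a single lookup touches only a handful of blocks
console.log(blocksPerLookup(100_000, 16)) // 5
```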

Importantly, a Hyperbee is **just** a Hypercore, where the tree nodes are stored as Hypercore blocks. Now examine the Hyperbee as if it were just a Hypercore and log out a few blocks.

`core-reader.js` will continually download and log the last block of the Hypercore containing the Hyperbee data. Note that these blocks are encoded using Hyperbee's Node encoding, which can easily be imported and used.

```javascript
// core-reader.js
import Hyperswarm from 'hyperswarm'
import Corestore from 'corestore'
import goodbye from 'graceful-goodbye'
import b4a from 'b4a'

import { Node } from 'hyperbee/lib/messages.js'

// create a Corestore instance
const store = new Corestore('./reader-storage')

const swarm = new Hyperswarm()
goodbye(() => swarm.destroy())

// replicate the Corestore instance on every connection with other peers
swarm.on('connection', conn => store.replicate(conn))

// create or get the Hypercore using the public key supplied as a command-line argument
const core = store.get({ key: b4a.from(process.argv[2], 'hex') })
// wait until the properties of the Hypercore instance are initialized
await core.ready()

const foundPeers = store.findingPeers()
// join a topic
swarm.join(core.discoveryKey)
swarm.flush().then(() => foundPeers())

// update the metadata of the Hypercore instance
await core.update()

const seq = core.length - 1
const lastBlock = await core.get(seq)

// print the last (latest) block of the Hypercore, raw and decoded
console.log(`Raw Block ${seq}:`, lastBlock)
console.log(`Decoded Block ${seq}:`, Node.decode(lastBlock))
```
123
howto/work-with-many-hypercores-using-corestore.md
Normal file
@@ -0,0 +1,123 @@
### How to work with many Hypercores using Corestore

Get set up by creating a project folder and installing dependencies:

```bash
mkdir many-cores
cd many-cores
pear init -y -t terminal
npm install corestore hyperswarm b4a graceful-goodbye
```

An append-only log is powerful on its own, but it's most useful as a building block for constructing larger data structures, such as databases or filesystems. Building these data structures often requires many cores, each with different responsibilities. For example, Hyperdrive uses one core to store file metadata and another to store file contents.

[Corestore](../helpers/corestore.md) is a Hypercore factory that makes it easier to manage large collections of named Hypercores. The example below demonstrates a commonly used pattern: co-replicating many cores using Corestore, where several 'internal cores' are linked to from a primary core. Only the primary core is announced on the swarm; the keys for the others are recorded inside that core.

This example consists of two files: `writer.js` and `reader.js`. In the previous example only a single Hypercore instance was replicated; in this example a single Corestore instance is replicated, which internally manages the replication of a collection of Hypercores.

The file `writer.js` uses a Corestore instance to create three Hypercores, which are then replicated with other peers using Hyperswarm. The keys for the second and third cores are stored in the first core (the first core 'bootstraps' the system). Messages entered on the command line are written into the second or third core, depending on the length of the message. To execute `reader.js`, copy the main core key logged to the command line.

```javascript
// writer.js
import Corestore from 'corestore'
import Hyperswarm from 'hyperswarm'
import goodbye from 'graceful-goodbye'
import b4a from 'b4a'

const store = new Corestore('./writer-storage')
const swarm = new Hyperswarm()
goodbye(() => swarm.destroy())

// A name is purely local and maps to a key pair. It's not visible to readers.
// Since a name always corresponds to a key pair, these cores are all writable
const core1 = store.get({ name: 'core-1', valueEncoding: 'json' })
const core2 = store.get({ name: 'core-2' })
const core3 = store.get({ name: 'core-3' })
await Promise.all([core1.ready(), core2.ready(), core3.ready()])

console.log('main core key:', b4a.toString(core1.key, 'hex'))

// Here we'll only join the swarm with core1's discovery key
// core2 and core3 don't need to be announced, they'll be replicated alongside core1
swarm.join(core1.discoveryKey)

// Corestore replication internally replicates every loaded core
// Corestore *does not* exchange keys (read capabilities) during replication.
swarm.on('connection', conn => store.replicate(conn))

// Since Corestore does not exchange keys, they need to be exchanged elsewhere.
// Here, the other keys are recorded in the first block of core1.
if (core1.length === 0) {
  await core1.append({
    otherKeys: [core2, core3].map(core => b4a.toString(core.key, 'hex'))
  })
}

// Record all short messages in core2, and all long ones in core3
process.stdin.on('data', data => {
  if (data.length < 5) {
    console.log('appending short data to core2')
    core2.append(data)
  } else {
    console.log('appending long data to core3')
    core3.append(data)
  }
})
```
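The bootstrapping idea (announce only the primary core, record the other keys in its first block) is independent of any networking. A minimal in-memory sketch, where the `Log` class is a hypothetical stand-in for a Hypercore rather than a real API:

```javascript
// In-memory stand-in for an append-only core: just an array of blocks.
class Log {
  constructor (key) {
    this.key = key
    this.blocks = []
  }
  get length () { return this.blocks.length }
  append (block) { this.blocks.push(block) }
  get (seq) { return this.blocks[seq] }
}

// writer side: record the secondary keys in block 0 of the main log
const main = new Log('main-key')
const cores = [new Log('short-key'), new Log('long-key')]
if (main.length === 0) {
  main.append({ otherKeys: cores.map(c => c.key) })
}

// reader side: only main.key was shared out of band;
// the remaining keys are discovered by reading block 0
const { otherKeys } = main.get(0)
console.log(otherKeys) // the two secondary keys
```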

`reader.js` connects to the writer peer with Hyperswarm and replicates its local Corestore instance to receive the data. The key copied from the writer must be supplied as an argument when executing the file; it is used to create a core with the same public key as the writer's (and therefore the same discovery key for both peers).

```javascript
// reader.js
import Corestore from 'corestore'
import Hyperswarm from 'hyperswarm'
import goodbye from 'graceful-goodbye'
import b4a from 'b4a'

// the main core key is passed as a command-line argument
const key = b4a.from(process.argv[2], 'hex')

// create a Corestore instance
const store = new Corestore('./reader-storage')

const swarm = new Hyperswarm()
goodbye(() => swarm.destroy())

// replicate the Corestore instance on every connection
swarm.on('connection', conn => store.replicate(conn))

// create or get the Hypercore instance using the key that was passed in
const core = store.get({ key, valueEncoding: 'json' })
// wait until all the properties of the Hypercore instance are initialized
await core.ready()

const foundPeers = store.findingPeers()
// join a topic
swarm.join(core.discoveryKey)
swarm.flush().then(() => foundPeers())

// update the metadata of the Hypercore instance
await core.update()

if (core.length === 0) {
  console.log('Could not connect to the writer peer')
  process.exit(1)
}

// get the other cores using the keys stored in the first block of the main core
const { otherKeys } = await core.get(0)
for (const key of otherKeys) {
  const core = store.get({ key: b4a.from(key, 'hex') })
  // on every append to the Hypercore,
  // download the latest block and log it to the console
  core.on('append', () => {
    const seq = core.length - 1
    core.get(seq).then(block => {
      console.log(`Block ${seq} in Core ${key}: ${block}`)
    })
  })
}
```
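One failure mode worth guarding against: a mistyped key on the command line. Hypercore public keys are 32 bytes (64 hex characters), so a small, hypothetical `parseKey` helper can fail fast with a clear message before `store.get` is ever called:

```javascript
// Validate a hex-encoded Hypercore key before handing it to store.get
function parseKey (hex) {
  if (typeof hex !== 'string' || !/^[0-9a-fA-F]{64}$/.test(hex)) {
    throw new Error('expected a 64-character hex string (a 32-byte key)')
  }
  return Buffer.from(hex, 'hex')
}

console.log(parseKey('ab'.repeat(32)).length) // 32
```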
127
readme.md
@@ -30,27 +30,85 @@ Welcome to the Internet of Peers

* [Sharing a Pear Application](./guide/sharing-a-pear-app.md)
* [Marking a Release](./guide/releasing-a-pear-app.md)

### How-tos

* [How to connect two peers by key with HyperDHT](./howto/connect-two-peers-by-key-with-hyperdht.md)
* [How to connect to many peers by topic with Hyperswarm](./howto/connect-to-many-peers-by-topic-with-hyperswarm.md)
* [How to replicate and persist with Hypercore](./howto/replicate-and-persist-with-hypercore.md)
* [How to work with many Hypercores using Corestore](./howto/work-with-many-hypercores-using-corestore.md)
* [How to share append-only databases with Hyperbee](./howto/share-append-only-databases-with-hyperbee.md)
* [How to create a full peer-to-peer filesystem with Hyperdrive](./howto/create-a-full-peer-to-peer-filesystem-with-hyperdrive.md)

## Building blocks

The following structural components form the backbone of the Pear Ecosystem.

| Module | Stability |
| ------------------------------------------------| :----------------------------------------------------------: |
| [`hypercore`](./building-blocks/hypercore.md) | <mark style="background-color:green;">**stable**</mark> |
| [`hyperbee`](./building-blocks/hyperbee.md) | <mark style="background-color:green;">**stable**</mark> |
| [`hyperdrive`](./building-blocks/hyperdrive.md) | <mark style="background-color:green;">**stable**</mark> |
| [`autobase`](./building-blocks/autobase.md) | <mark style="background-color:blue;">**experimental**</mark> |
| [`hyperswarm`](./building-blocks/hyperswarm.md) | <mark style="background-color:green;">**stable**</mark> |
| [`hyperdht`](./building-blocks/hyperdht.md) | <mark style="background-color:green;">**stable**</mark> |

1. [`hypercore`](./building-blocks/hypercore.md): A distributed, secure append-only log for creating fast and scalable applications without a backend, as it is entirely peer-to-peer.
2. [`hyperbee`](./building-blocks/hyperbee.md): An append-only B-tree running on a Hypercore that provides a key-value store API, with methods for inserting and getting key/value pairs, atomic batch insertions, and creation of sorted iterators.
3. [`hyperdrive`](./building-blocks/hyperdrive.md): A secure, real-time distributed file system that simplifies P2P file sharing and provides an efficient way to store and access data across multiple connected devices in a decentralized manner.
4. [`autobase`](./building-blocks/autobase.md): An experimental module used to automatically rebase multiple causally-linked Hypercores into a single, linearized Hypercore for multi-user collaboration.
5. [`hyperdht`](./building-blocks/hyperdht.md): A DHT powering Hyperswarm. Through this DHT, each server is bound to a unique key pair, with the client connecting to the server using the server's public key.
6. [`hyperswarm`](./building-blocks/hyperswarm.md): A high-level API for finding and connecting to peers who are interested in a "topic."

### Hypercore

The [`hypercore`](./building-blocks/hypercore.md) module is a distributed, secure append-only log for creating fast and scalable applications without a backend, as it is entirely peer-to-peer.

Notable features include:

* Improved fork detection in the replication protocol, for greater resilience.
* Optional on-disk encryption for blocks (in addition to the existing transport encryption).
* A write-ahead log in the storage layer to ensure that power loss or unexpected shutdown cannot lead to data corruption.
* The [`session`](./building-blocks/hypercore.md#core.session-options) and [`snapshot`](./building-blocks/hypercore.md#core.snapshot-options) methods for providing multiple views over the same underlying Hypercore, which simplifies resource management.
* A [`truncate`](./building-blocks/hypercore.md#await-core.truncate-newlength-forkid) method for intentionally creating a new fork, starting at a given length. We use this method extensively in [`autobase`](./building-blocks/autobase.md).

### Hyperswarm

The [`hyperswarm`](./building-blocks/hyperswarm.md) module is a high-level API for finding and connecting to peers who are interested in a "topic."

Notable features include:

* An improved UDP holepunching algorithm that uses arbitrary DHT nodes (optionally selected by the connecting peers) to proxy necessary metadata while being maximally privacy-preserving.
* A custom-built transport protocol, [UDX](https://github.com/hyperswarm/libudx), that takes advantage of the holepunching algorithm to avoid unnecessary overhead (it doesn't include handshaking since holepunching takes care of that, for example). It's blazing fast.
* A simplified DHT API that closely resembles Node.js's `net` module, but uses public keys instead of IP addresses.

### Hyperdrive

The [`hyperdrive`](./building-blocks/hyperdrive.md) module is a secure, real-time distributed file system that simplifies P2P file sharing and provides an efficient way to store and access data across multiple connected devices in a decentralized manner.

* Uses Hyperbee internally for storing file metadata.
* Major API simplification. Instead of mirroring POSIX APIs, the new API better captures the core requirements of P2P file transfer.
* Auxiliary tools, [`localdrive`](./helpers/localdrive.md) and [`mirrordrive`](./helpers/mirrordrive.md), that streamline import/export flows and make it easy to mirror drives to and from the local filesystem.

### Autobase (experimental)

The [`autobase`](./building-blocks/autobase.md) experimental module provides a "virtual Hypercore" layer over many Hypercores owned by many different peers.

Notable features include:

* Automatic rebasing of multiple causally-linked Hypercores into a single, linearized Hypercore for multi-user collaboration.
* Low-friction integration into higher-level modules like Hyperbee and Hyperdrive: Autobase's output shares the familiar Hypercore API, so peer-to-peer multi-user collaboration is achievable with little additional implementation effort.

> Autobase is still experimental and is likely to change significantly in the near future.

### Hyperdht

The `hyperdht` module is the Distributed Hash Table (DHT) powering Hyperswarm. Through this DHT, each server is bound to a unique key pair, with the client connecting to the server using the server's public key.

Notable features include:

* A lower-level module that provides direct access to the DHT for connecting peers using key pairs.

## Helpers

Helper modules can be used together with the building blocks to create cutting-edge P2P tools and applications.

1. [`corestore`](./helpers/corestore.md): A Hypercore factory designed to facilitate the management of sizable named Hypercore collections.
2. [`localdrive`](./helpers/localdrive.md): A file system interoperable with Hyperdrive.
3. [`mirrordrive`](./helpers/mirrordrive.md): Mirror a [`hyperdrive`](./building-blocks/hyperdrive.md) or a [`localdrive`](./helpers/localdrive.md) into another one.
4. [`secretstream`](./helpers/secretstream.md): SecretStream is used to securely create connections between two peers in Hyperswarm.
5. [`compact-encoding`](./helpers/compact-encoding.md): A series of binary encoding schemes for building fast and small parsers and serializers. We use this in Keet to store chat messages and in Hypercore's replication protocol.
6. [`protomux`](./helpers/protomux.md): Multiplex multiple message-oriented protocols over a stream.

## Tools

@@ -65,37 +123,9 @@ The following tools are extensively employed in the day-to-day development
| <mark>**[Drives](./tools/drives)**</mark> | CLI to download, seed, and mirror a [hyperdrive](./building-blocks/hyperdrive) or a [localdrive](./helpers/localdrive). |

### Hypercore

* The [`session`](./building-blocks/hypercore.md#core.session-options) and [`snapshot`](./building-blocks/hypercore.md#core.snapshot-options) methods for providing multiple views over the same underlying Hypercore, which simplifies resource management.
* A [`truncate`](./building-blocks/hypercore.md#await-core.truncate-newlength-forkid) method for intentionally creating a new fork, starting at a given length. We use this method extensively in [`autobase`](./building-blocks/autobase.md), as described below.
* Improved fork detection in the replication protocol, to improve resilience.
* Optional on-disk encryption for blocks (in addition to the existing transport encryption).
* The storage layer now uses a write-ahead log to ensure that power loss or unexpected shutdown cannot lead to data corruption.

### Hyperswarm

* An improved UDP holepunching algorithm that uses arbitrary DHT nodes (optionally selected by the connecting peers) to proxy necessary metadata while being maximally privacy-preserving.
* A custom-built transport protocol, [UDX](https://github.com/hyperswarm/libudx), that takes advantage of the holepunching algorithm to avoid unnecessary overhead (it doesn't include handshaking since holepunching takes care of that, for example). It's blazing fast.
* A simplified DHT API that closely resembles Node.js's `net` module, but uses public keys instead of IP addresses.

### Hyperdrive

* Uses Hyperbee internally for storing file metadata.
* Major API simplification. Instead of mirroring POSIX APIs, the new API better captures the core requirements of P2P file transfer.
* Auxiliary tools, [`localdrive`](./helpers/localdrive.md) and [`mirrordrive`](./helpers/mirrordrive.md), that streamline import/export flows and make it easy to mirror drives to and from the local filesystem. We use these every day when deploying Keet.

### Autobase (experimental)

Hypercores are single-writer data structures, but collaboration is crucial. [`autobase`](./building-blocks/autobase.md "mention") is an experimental module that turns many Hypercores, owned by different people, into a single 'virtual' Hypercore. In Keet, every member of a room has their own input Hypercore where they write chat messages, and Autobase merges these into the linear view members see on the screen.

As Autobase's output shares the familiar Hypercore API, it can be plugged into higher-level modules like Hyperbee and Hyperdrive, enabling multi-user collaboration with little additional effort.

> Autobase is still experimental and is likely to change significantly in the near future.

## Stability indexing

Throughout the documentation, indications of a module's stability are provided. Some modules are well-established and used widely, making them highly unlikely to ever change. Other modules may be new, experimental, or known to have risks associated with their use.
Throughout the documentation, indications of stability are provided. Some modules are well-established and used widely, making them highly unlikely to ever change. Other modules may be new, experimental, or known to have risks associated with their use.

The following stability indices have been used:

@@ -106,16 +136,3 @@ The following stability indices have been used:
| <mark style="background-color:yellow;">**deprecated**</mark> | Being removed or replaced in the future. |
| <mark style="background-color:red;">**unstable**</mark> | May change or be removed without warning. |

#### Stability overview

| Module | Stability |
| -------------------------------------------------------- | :----------------------------------------------------------: |
| [`hypercore`](./building-blocks/hypercore.md) | <mark style="background-color:green;">**stable**</mark> |
| [`hyperbee`](./building-blocks/hyperbee.md) | <mark style="background-color:green;">**stable**</mark> |
| [`hyperdrive`](./building-blocks/hyperdrive.md) | <mark style="background-color:green;">**stable**</mark> |
| [`autobase`](./building-blocks/autobase.md) | <mark style="background-color:blue;">**experimental**</mark> |
| [`hyperswarm`](./building-blocks/hyperswarm.md) | <mark style="background-color:green;">**stable**</mark> |
| [`hyperdht`](./building-blocks/hyperdht.md) | <mark style="background-color:green;">**stable**</mark> |

> Any part of a module (method, event, or property) that is not documented as part of that module's public API is subject to change at any time.

@@ -62,21 +62,25 @@ Specify a remote key to reseed.

--verbose|-v Additional output
```

## pear launch <key|link> -- [...args]
## pear run <key> -- [...args]

Launch an application by key or link.
Run an application from a key.

A Pear link takes the form: `pear://<key>/<data>`.
The key argument may also be a Pear Link containing the key.

A Pear Link takes the form: `pear://<key>/<data>`.

The `<data>` portion of the link is available as `pear.config.linkData`.

Arguments supplied after a double-dash (`--`) are passed as `pear.config.args`.

```
--dev                        Launch the app in dev mode
--store|-s=path              Set the Application Storage path
--tmp-store|-t               Automatic new tmp folder as store path
--checkout=n|release|staged  Launch a version
--dev                        Run the app in dev mode
--store|-s=path              Set the Application Storage path
--tmp-store|-t               Automatic new tmp folder as store path
--checkout=n                 Run a checkout, n is version length
--checkout=release           Run checkout from marked released length
--checkout=staged            Run checkout from latest version length
```
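The link anatomy described above (`pear://<key>/<data>`) can be pulled apart with a few lines of JavaScript. This is an illustrative sketch, not Pear's own parser, so edge cases may differ:

```javascript
// Split a pear link into its key and optional data portion.
function parsePearLink (link) {
  const match = /^pear:\/\/([^/]+)(?:\/(.*))?$/.exec(link)
  if (!match) throw new Error('not a pear link')
  return { key: match[1], data: match[2] ?? null }
}

const { key, data } = parsePearLink('pear://a1b2c3/some/link-data')
console.log(key)  // a1b2c3
console.log(data) // some/link-data
```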

## pear release <channel|key> [dir]
@@ -128,6 +132,10 @@ Connect to a Read-Eval-Print-Loop session with sidecar.

A key is printed out; use it with the repl-swarm module to connect.

## pear use <key>

Switch to a different platform release-line.

## pear versions

Output version information.