diff --git a/building-blocks/autobase.md b/building-blocks/autobase.md
new file mode 100644
index 0000000..625f732
--- /dev/null
+++ b/building-blocks/autobase.md
@@ -0,0 +1,302 @@
+# Autobase
+
+**experimental**
+
+Autobase is used to automatically rebase multiple causally-linked Hypercores into a single, linearized Hypercore. The output of an Autobase is 'just a Hypercore', which means it can be used to transform higher-level data structures (like Hyperbee) into multiwriter data structures with minimal additional work.
+
+> Although Autobase is still under development, it is already used in many active projects: Keet rooms, for example, are powered by Autobase.
+
+
+> [Github (Autobase)](https://github.com/holepunchto/autobase)
+
+- [Autobase](../building-blocks/autobase.md)
+ - [Create a new instance](autobase.md#installation)
+ - Basic:
+ - Properties:
+ - [base.inputs](autobase.md#baseinputs)
+ - [base.outputs](autobase.md#baseoutputs)
+ - [base.localInput](autobase.md#baselocalinput)
+ - [base.localOutput](autobase.md#baselocaloutput)
+ - Methods:
+ - [base.clock()](autobase.md#const-clock--baseclock)
+ - [base.isAutobase(core)](autobase.md#await-autobaseisautobasecore)
+ - [base.append(value, [clock], [input])](autobase.md#await-baseappendvalue-clock-input)
+ - [base.latest([input1, input2, ...])](autobase.md#const-clock--await-baselatestinput1-input2)
+ - [base.addInput(input)](autobase.md#await-baseaddinputinput)
+ - [base.removeInput(input)](autobase.md#await-baseremoveinputinput)
+ - [base.addOutput(output)](autobase.md#await-baseaddoutputoutput)
+ - [base.removeOutput(output)](autobase.md#await-baseremoveoutputoutput)
+ - Streams:
+ - Methods:
+ - [base.createCausalStream()](autobase.md#const-stream--basecreatecausalstream)
+ - [base.createReadStream([options])](autobase.md#const-stream--basecreatereadstreamoptions)
+ - Linearized Views:
+ - Properties:
+ - [view.status](autobase.md#viewstatus)
+ - [view.length](autobase.md#viewlength)
+ - Methods:
+ - [base.start({ apply, unwrap } = {})](autobase.md#basestart-apply-unwrap)
+ - [view.update()](autobase.md#await-viewupdate)
+ - [view.get(idx, [options])](autobase.md#const-entry--await-viewgetidx-options)
+ - [view.append([blocks])](autobase.md#await-viewappendblocks)
+
+
+### Installation
+
+Install with [npm](https://www.npmjs.com/):
+
+```bash
+npm install autobase
+```
+
+> Autobase is constructed from a known set of trusted input Hypercores. Authorizing these inputs is outside of the scope of Autobase -- this module is unopinionated about trust and assumes it comes from another channel.
+
+
+### API
+
+#### **`const base = new Autobase([options])`**
+
+Creates a new Autobase from a set of input/output Hypercores.
+
+The following table describes the properties of the optional `options` object.
+
+| Property | Description | Type | Default |
+| :---------------: | -------------------------------------------------------------------------- | --------- | ------- |
+| **`inputs`** | The list of Hypercores for Autobase to linearize | Array | `[]` |
+| **`outputs`** | An optional list of output Hypercores containing linearized views | Array | `[]` |
+| **`localInput`** | The Hypercore that will be written to in base.append operations | Hypercore | `null` |
+| **`localOutput`** | A writable Hypercore that linearized views will be persisted into | Hypercore | `null` |
+| **`autostart`** | Creates a linearized view (base.view) immediately | Boolean | `false` |
+| **`apply`** | Creates a linearized view (base.view) immediately using this apply function | Function | `null` |
+| **`unwrap`** | base.view.get calls will return node values only instead of full nodes | Boolean | `false` |
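+
+As a minimal sketch, here is how an Autobase might be constructed from two in-memory input cores (the cores and their names are hypothetical, not part of the API above):
+
+```javascript
+const Hypercore = require('hypercore')
+const Autobase = require('autobase')
+const RAM = require('random-access-memory')
+
+// Two hypothetical input cores; coreA is also our local, writable input
+const coreA = new Hypercore(RAM)
+const coreB = new Hypercore(RAM)
+
+const base = new Autobase({
+  inputs: [coreA, coreB],
+  localInput: coreA,
+  autostart: true // creates base.view immediately
+})
+
+await base.append('hello from A') // appended to coreA with the latest clock
+```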
+
+#### Properties
+
+#### **`base.inputs`**
+
+The list of input Hypercores.
+
+#### **`base.outputs`**
+
+The list of output Hypercores containing persisted linearized views.
+
+#### **`base.localInput`**
+
+If non-null, this Hypercore will be appended to in [base.append](autobase.md#await-baseappendvalue-clock-input) operations.
+
+#### **`base.localOutput`**
+
+If non-null, `base.view` will be persisted into this Hypercore.
+
+#### **`base.started`**
+
+A Boolean indicating if `base.view` has been created.
+
+See the [linearized views section](autobase.md#linearized-views) for details about the `apply` option.
+
+> ℹ️ Prior to calling `base.start()`, `base.view` will be `null`.
+
+
+#### Methods
+
+#### **`const clock = base.clock()`**
+
+Returns a Map containing the latest lengths for all Autobase inputs.
+
+The Map has the form: `(hex-encoded-key) -> (Hypercore length)`
+
+#### **`await Autobase.isAutobase(core)`**
+
+Returns `true` if `core` is an Autobase input or an output.
+
+#### **`await base.append(value, [clock], [input])`**
+
+Appends a new value to the Autobase.
+
+* `clock`: The causal clock. Defaults to `base.latest`.
+
+#### **`const clock = await base.latest([input1, input2, ...])`**
+
+Generate a causal clock linking the latest entries of each input.
+
+`latest` will update the input Hypercores (`input.update()`) prior to returning the clock.
+
+You generally will not need to use this, and can instead just use [`append`](autobase.md#await-baseappendvalue-clock-input) with the default clock:
+
+```javascript
+await base.append('hello world')
+```
+
+#### **`await base.addInput(input)`**
+
+Adds a new input Hypercore.
+
+* `input` must either be a fresh Hypercore, or a Hypercore that has previously been used as an Autobase input.
+
+#### **`await base.removeInput(input)`**
+
+Removes an input Hypercore.
+
+* `input` must be a Hypercore that is currently an input.
+
+{% hint style="info" %}
+Removing an input, and then subsequently linearizing the Autobase into an existing output, could result in a large truncation operation on that output -- this is effectively 'purging' that input entirely.
+
+Future releases will see the addition of 'soft removal', which will freeze an input at a specific length and not process blocks past that length, while still preserving that input's history in linearized views. For most applications, soft removal matches the intuition behind 'removing a user'.
+{% endhint %}
+
+#### **`await base.addOutput(output)`**
+
+Adds a new output Hypercore.
+
+* `output` must be either a fresh Hypercore or a Hypercore that was previously used as an Autobase output.
+
+If `base.outputs` is not empty, Autobase will do 'remote linearizing': `base.view.update()` will treat these outputs as the 'trunk', minimizing the amount of local re-processing needed during updates.
+
+#### **`await base.removeOutput(output)`**
+
+Removes an output Hypercore.
+
+* `output` must be a Hypercore, or a Hypercore key, that is currently an output (i.e., in `base.outputs`).
+
+### Streams
+
+In order to generate shareable linearized views, Autobase must first be able to generate a deterministic, causal ordering over all the operations in its input Hypercores.
+
+Every input node contains embedded causal information (a vector clock) linking it to previous nodes. By default, when a node is appended without additional options (i.e., `base.append('hello')`), Autobase will embed a clock containing the latest known lengths of all other inputs.
+
+Using the vector clocks in the input nodes, Autobase can generate two types of streams:
+
+#### Causal Streams
+
+Causal streams start at the heads (the last blocks) of all inputs, walk backward, and yield nodes with a deterministic ordering (based on both the clock and the input key) such that anybody who regenerates this stream will observe the same ordering, given the same inputs.
+
+They deliberately fail in the presence of unavailable nodes: the deterministic ordering ensures that any indexer will process input nodes in the same order.
+
+The simplest kind of linearized view (`const view = base.linearize()`) is just a Hypercore containing the results of a causal stream in reversed order (block N in the index will not be causally dependent on block N+1).
+
+#### **`const stream = base.createCausalStream()`**
+
+Generate a Readable stream of input blocks with deterministic, causal ordering.
+
+Any two users who create an Autobase with the same set of inputs, and the same lengths (i.e., both users have the same initial states), will produce identical causal streams.
+
+If an input node is causally-dependent on another node that is not available, the causal stream will not proceed past that node, as this would produce inconsistent output.
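+
+As a sketch, the causal stream can be consumed like any Readable stream, for example as an async iterator (assuming `base` already has a few appended input nodes):
+
+```javascript
+// Walk all input nodes in deterministic causal order
+for await (const node of base.createCausalStream()) {
+  console.log(node.value.toString())
+}
+```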
+
+#### Read Streams
+
+Similar to `Hypercore.createReadStream()`, this stream starts at the beginning of each input, and does not guarantee the same deterministic ordering as the causal stream. Unlike causal streams, which are used mainly for indexing, read streams can be used to observe updates; since they move forward in time, they can be live.
+
+#### **`const stream = base.createReadStream([options])`**
+
+Generate a Readable stream of input blocks, from earliest to latest.
+
+Unlike `createCausalStream`, the ordering of `createReadStream` is not deterministic. The read stream only gives you the guarantee that every node it yields will **not** be causally-dependent on any node yielded later.
+
+Read streams have a public property `checkpoint`, which can be used to create new read streams that resume from the checkpoint's position:
+
+```javascript
+const stream1 = base.createReadStream()
+// Do something with stream1 here
+const stream2 = base.createReadStream({ checkpoint: stream1.checkpoint }) // Resume from stream1.checkpoint
+```
+
+`createReadStream` can be passed two custom async hooks:
+
+* `onresolve`: Called when an unsatisfied node (a node that links to an unknown input) is encountered. Can be used to add inputs to the Autobase dynamically.
+ * Returning `true` indicates that you added new inputs to the Autobase, and so the read stream should begin processing those inputs.
+ * Returning `false` indicates that you did not resolve the missing links, and so the node should be yielded immediately as is.
+* `onwait`: Called after each node is yielded. Can be used to add inputs to the Autobase dynamically.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ---------------- | --------------------------------------------------------------------------- | ---------- | ------------------------------- |
+| **`live`** | Enables live mode (the stream will continuously yield new nodes) | Boolean | `false` |
+| **`tail`** | When in live mode, starts at the latest clock instead of the earliest | Boolean | `false` |
+| **`map`** | A sync map function | Function | `(node) => node` |
+| **`checkpoint`** | Resumes from where a previous read stream left off (`readStream.checkpoint`) | Readstream | `null` |
+| **`wait`** | If false, the read stream will only yield previously-downloaded blocks | Boolean | `true` |
+| **`onresolve`** | A resolve hook (described above) | Function | `async (node) => true \| false` |
+| **`onwait`** | A wait hook (described above) | Function | `async (node) => undefined` |
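+
+As a sketch, here is a live read stream that uses the `map` option from the table above (the topology is hypothetical):
+
+```javascript
+// Yield nodes as they arrive, mapping each node to its value
+const liveStream = base.createReadStream({
+  live: true,
+  map: (node) => node.value.toString()
+})
+
+for await (const value of liveStream) {
+  console.log('new value:', value) // never returns in live mode
+}
+```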
+
+### Linearized Views
+
+Autobase is designed for computing and sharing linearized views over many input Hypercores. A linearized view is a 'merged' view over the inputs, giving a way of interacting with the N input Hypercores as though they were a single, combined Hypercore.
+
+These views, instances of the `LinearizedView` class, in many ways look and feel like normal Hypercores. They support `get`, `update`, and `length` operations.
+
+By default, a view is a persisted version of an Autobase's causal stream, saved into a Hypercore. But a lot more can be done with them: by passing a function into `linearize`'s `apply` option, you can define your own indexing strategies.
+
+Linearized views are incredibly powerful because they can be persisted to a Hypercore using the new `truncate` API added in Hypercore 10. This means that peers querying a multiwriter data structure don't need to read in all changes and apply them themselves. Instead, they can start from an existing view that's shared by another peer. If that view is missing any data from the inputs, Autobase will create a 'view over the remote view', applying only the changes necessary to bring the remote view up-to-date. Best of all, this happens automatically.
+
+#### Customizing Views with `apply`
+
+The default linearized view is just a persisted causal stream -- input nodes are recorded into an output Hypercore in causal order, with no further modifications. This minimally-processed view is useful on its own for applications that don't follow an event-sourcing pattern (e.g., chat), but most use cases involve processing operations in the inputs into indexed representations.
+
+To support indexing, `base.start` can be provided with an `apply` function that's passed batches of input nodes during rebasing, and can choose what to store in the output. Inside `apply`, the view can be directly mutated through the `view.append` method, and these mutations will be batched when the call exits.
+
+The simplest `apply` function is just a mapper, a function that modifies each input node and saves it into the view in a one-to-one fashion. Here's an example that uppercases String inputs, and saves the resulting view into an `output` Hypercore:
+
+```javascript
+base.start({
+ async apply (batch) {
+ batch = batch.map(({ value }) => Buffer.from(value.toString('utf-8').toUpperCase(), 'utf-8'))
+    await base.view.append(batch)
+ }
+})
+// After base.start, the linearized view is available as a property on the Autobase
+await base.view.update()
+console.log(base.view.length)
+```
+
+More sophisticated indexing might require multiple appends per input node, or reading from the view during `apply` -- both are perfectly valid. The [multiwriter Hyperbee example](https://github.com/holepunchto/autobase/blob/master/examples/autobee-simple.js) shows how this `apply` pattern can be used to build Hypercore-based indexing data structures.
+
+#### View Creation
+
+#### **`base.start({ apply, unwrap } = {})`**
+
+Creates a new linearized view, and sets it on `base.view`. The view mirrors the Hypercore API wherever possible, meaning it can be used wherever you would normally use a Hypercore.
+
+You can either call `base.start` manually when you want to start using `base.view`, or pass either `apply` or `autostart` options to the Autobase constructor. If these constructor options are present, Autobase will start immediately.
+
+If you choose to call `base.start` manually, it must only be called once.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ------------ | ------------------------------------------------------ | -------- | --------------- |
+| **`unwrap`** | If true, `view.get` calls return only `node.value` instead of the full node | Boolean  | `false`         |
+| **`apply`** | The apply function described above | Function | `(batch) => {}` |
+
+#### **`view.status`**
+
+The status of the last linearize operation.
+
+Returns an object of the form `{ added: N, removed: M }` where:
+
+* `added` indicates how many nodes were appended to the output during the linearization
+* `removed` indicates how many nodes were truncated from the output during the linearization
+
+#### **`view.length`**
+
+The length of the view. Similar to `hypercore.length`.
+
+#### **`await view.update()`**
+
+Ensures the view is up-to-date.
+
+#### **`const entry = await view.get(idx, [options])`**
+
+Gets an entry from the view. If `unwrap` is set to `true`, it returns `entry.value`; otherwise, it returns an entry similar to this:
+
+```javascript
+{
+ clock, // the causal clock this entry was created at
+ value // the value that is stored here
+}
+```
+
+#### **`await view.append([blocks])`**
+
+Appends new blocks to the view. This operation can only be performed inside the `apply` function.
diff --git a/building-blocks/hyperbee.md b/building-blocks/hyperbee.md
new file mode 100644
index 0000000..165b9e8
--- /dev/null
+++ b/building-blocks/hyperbee.md
@@ -0,0 +1,432 @@
+# Hyperbee
+
+**stable**
+
+Hyperbee is an append-only B-tree based on [hypercore.md](hypercore.md "mention"). It provides a key/value-store API, with methods for inserting and getting key-value pairs, atomic batch insertions, and creating sorted iterators. It uses a single Hypercore for storage, using a technique called embedded indexing. It provides features like a cache warm-up extension, efficient diffing, version control, sorted iteration, and sparse downloading.
+
+> As with Hypercore, a Hyperbee can only have a **single writer on a single machine**; the creator of the Hyperbee is the only person who can modify it, as they're the only one with the private key. That said, the writer can replicate to **many readers**, in a manner similar to BitTorrent.
+
+
+> [Github (Hyperbee)](https://github.com/holepunchto/hyperbee)
+
+* [Hyperbee](../building-blocks/hyperbee.md):
+ * [Create a new instance](hyperbee.md#installation):
+ * Basic:
+ * Properties:
+ * [db.core](./hyperbee.md#dbcore)
+ * [db.version](./hyperbee.md#dbversion)
+ * [db.id](./hyperbee.md#dbid)
+ * [db.key](./hyperbee.md#dbkey)
+ * [db.discoveryKey](./hyperbee.md#dbdiscoverykey)
+ * [db.writable](./hyperbee.md#dbwritable)
+ * [db.readable](./hyperbee.md#dbreadable)
+ * Methods:
+ * [db.ready()](hyperbee.md#await-dbready)
+ * [db.close()](hyperbee.md#await-dbclose)
+ * [db.put(key, \[value\], \[options\])](hyperbee.md#await-dbputkey-value-options)
+ * [db.get(key, \[options\])](hyperbee.md#const--seq-key-value---await-dbgetkey-options)
+ * [db.del(key, \[options\])](hyperbee.md#await-dbdelkey-options)
+ * [db.getBySeq(seq, \[options\])](hyperbee.md#const--key-value---await-dbgetbyseqseq-options)
+ * [db.replicate(isInitiatorOrStream)](hyperbee.md#const-stream--dbreplicateisinitiatororstream)
+ * [db.batch()](hyperbee.md#const-batch--dbbatch)
+ * [batch.put(key, \[value\], \[options\])](hyperbee.md#await-batchputkey-value-options)
+ * [batch.get(key, \[options\])](hyperbee.md#const--seq-key-value---await-batchgetkey-options)
+ * [batch.del(key, \[options\])](hyperbee.md#await-batchdelkey-options)
+ * [batch.flush()](hyperbee.md#await-batchflush)
+ * [batch.close()](hyperbee.md#await-batchclose)
+ * [db.createReadStream(\[range\], \[options\])](hyperbee.md#const-stream--dbcreatereadstreamrange-options)
+ * [db.peek(\[range\], \[options\])](hyperbee.md#const--seq-key-value---await-dbpeekrange-options)
+ * [db.createHistoryStream(\[options\])](hyperbee.md#const-stream--dbcreatehistorystreamoptions)
+ * [db.createDiffStream(otherVersion, \[options\])](hyperbee.md#const-stream--dbcreatediffstreamotherversion-options)
+ * [db.getAndWatch(key, \[options\])](hyperbee.md#const-entrywatcher--await-dbgetandwatchkey-options)
+ * [db.watch(\[range\])](hyperbee.md#const-watcher--dbwatchrange)
+ * [db.checkout(version)](hyperbee.md#const-snapshot--dbcheckoutversion)
+ * [db.snapshot()](hyperbee.md#const-snapshot--dbsnapshot)
+ * [db.sub('sub-prefix', \[options\])](hyperbee.md#const-sub--dbsubsub-prefix-optionss)
+ * [db.getHeader(\[options\])](hyperbee.md#const-header--await-dbgetheaderoptions)
+ * [Hyperbee.isHyperbee(core, \[options\])](hyperbee.md#const-ishyperbee--await-hyperbeeishyperbeecore-options)
+
+### Installation
+
+Install with [npm](https://www.npmjs.com/):
+
+```bash
+npm install hyperbee
+```
+
+### API
+
+#### **`const db = new Hyperbee(core, [options])`**
+
+Make a new Hyperbee instance. `core` should be a [hypercore.md](hypercore.md "mention").
+
+`options` include:
+
+| Property | Description | Type | Default |
+| :-----------------: | --------------------------------------------------------------------------- | ------ | ---------- |
+| **`valueEncoding`** | Encoding type for the values. Takes values of 'json', 'utf-8', or 'binary'. | String | `'binary'` |
+| **`keyEncoding`** | Encoding type for the keys. Takes values of 'ascii', 'utf-8', or 'binary'. | String | `'binary'` |
+
+
+> Currently read/diff streams sort based on the _encoded_ value of the keys.
+
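+A minimal usage sketch (assuming `core` is a freshly created, writable Hypercore):
+
+```javascript
+const Hyperbee = require('hyperbee')
+
+const db = new Hyperbee(core, {
+  keyEncoding: 'utf-8', // keys are utf-8 strings
+  valueEncoding: 'utf-8' // values are utf-8 strings
+})
+
+await db.put('key1', 'value1')
+console.log(await db.get('key1')) // => { seq: 1, key: 'key1', value: 'value1' }
+await db.del('key1')
+```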
+
+#### Properties
+
+#### **`db.core`**
+
+The underlying [Hypercore](hypercore.md) backing this bee.
+
+#### **`db.version`**
+
+A number that indicates how many modifications were made; it is useful as a version identifier.
+
+#### **`db.id`**
+
+String containing the ID (z-base-32 of the public key) identifying this bee.
+
+#### **`db.key`**
+
+Buffer containing the public key identifying this bee.
+
+#### **`db.discoveryKey`**
+
+Buffer containing a key derived from `db.key`.
+
+> This discovery key does not allow you to verify the data; it's only used to announce or look for peers that are sharing the same bee, without leaking the bee key.
+
+
+#### **`db.writable`**
+
+Boolean indicating whether we can put or delete data in this bee.
+
+#### **`db.readable`**
+
+Boolean indicating if we can read from this bee. After closing the bee this will be `false`.
+
+#### **Methods**
+
+#### **`await db.ready()`**
+
+Waits until the internal state is loaded.
+
+Call it once before reading synchronous properties like `db.version`, unless you have already called one of the other async APIs.
+
+#### **`await db.close()`**
+
+Fully close this bee, including its core.
+
+#### **`await db.put(key, [value], [options])`**
+
+Inserts a new key. The value is optional.
+
+> If you are inserting a series of data atomically, or want better performance, check the `db.batch` API.
+
+
+**`options`** includes:
+
+```javascript
+{
+ cas (prev, next) { return true }
+}
+```
+
+**Compare And Swap (cas)**
+
+The `cas` option is a comparator function that controls whether the `put` succeeds.
+
+If it returns `true`, the value is inserted; otherwise, it isn't.
+
+It receives two arguments: `prev` is the current node entry, and `next` is the potential new node.
+
+```javascript
+await db.put('number', '123', { cas })
+console.log(await db.get('number')) // => { seq: 1, key: 'number', value: '123' }
+
+await db.put('number', '123', { cas })
+console.log(await db.get('number')) // => { seq: 1, key: 'number', value: '123' }
+// Without cas this would have been { seq: 2, ... }, and the next { seq: 3 }
+
+await db.put('number', '456', { cas })
+console.log(await db.get('number')) // => { seq: 2, key: 'number', value: '456' }
+
+function cas (prev, next) {
+ // You can use same-data or same-object lib, depending on the value complexity
+ return prev.value !== next.value
+}
+```
+
+#### **`const { seq, key, value } = await db.get(key, [options])`**
+
+Gets a key's value. Returns `null` if the key doesn't exist.
+
+`seq` is the Hypercore index at which this key was inserted.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ------------------- | --------------------------------------------------------------------------- | ------- | -------- |
+| **`wait`** | Wait for the meta-data of hypercore to be updated | Boolean | `true` |
+| **`update`** | Determine if the core has to be updated before any operation | Boolean | `true` |
+| **`keyEncoding`**   | Encoding type for the keys. Takes values of 'ascii', 'utf-8', or 'binary'.   | String  | `binary` |
+| **`valueEncoding`** | Encoding type for the values. Takes values of 'json', 'utf-8', or 'binary'.  | String  | `binary` |
+
+
+> `db.get(key, [options])` uses the state at the time of initiating the read, so the write operations that complete after `get` is initiated and before it is resolved are ignored.
+
+
+#### **`await db.del(key, [options])`**
+
+Delete a key.
+
+`options` include:
+
+```javascript
+{
+ cas (prev) { return true }
+}
+```
+
+**Compare And Swap (cas)**
+
+The `cas` option is a comparator function that controls whether the `del` succeeds.
+
+If it returns `true`, the key is deleted; otherwise, it isn't.
+
+It receives one argument: `prev`, the current node entry.
+
+```javascript
+// This won't get deleted
+await db.del('number', { cas })
+console.log(await db.get('number')) // => { seq: 1, key: 'number', value: 'value' }
+
+// Change the value so the next time we try to delete it then "cas" will return true
+await db.put('number', 'can-be-deleted')
+
+await db.del('number', { cas })
+console.log(await db.get('number')) // => null
+
+function cas (prev) {
+ return prev.value === 'can-be-deleted'
+}
+```
+
+#### **`const { key, value } = await db.getBySeq(seq, [options])`**
+
+Gets the key and value from a block number.
+
+`seq` is the Hypercore index. Returns `null` if the block doesn't exist.
+
+#### **`const stream = db.replicate(isInitiatorOrStream)`**
+
+See more about how replicate works at [core.replicate](hypercore.md#const-stream-core.replicate-isinitiatororreplicationstream).
+
+#### **`const batch = db.batch()`**
+
+Makes a new atomic batch that is either fully processed or not processed at all.
+
+
+> If there are several inserts and deletions, a batch can be much faster.
+
+
+#### **`await batch.put(key, [value], [options])`**
+
+Inserts a key into a batch.
+
+`options` are the same as **`db.put`** method.
+
+#### **`const { seq, key, value } = await batch.get(key, [options])`**
+
+Gets a key and value from the batch.
+
+`options` are the same as **`db.get`** method.
+
+#### **`await batch.del(key, [options])`**
+
+Deletes a key from the batch.
+
+`options` are the same as **`db.del`** method.
+
+#### **`await batch.flush()`**
+
+Commits the batch to the database, and releases any locks it has acquired.
+
+#### **`await batch.close()`**
+
+Destroys a batch, and releases any locks it has acquired on the `db`.
+
+Call this to abort a batch without flushing it.
+
+
+
+**Learn more about `db.batch()`**
+
+A batch is atomic: it is either processed fully or not at all.
+
+A Hyperbee has a single write lock. A batch acquires this write lock with its first modifying operation (**`put`**, **`del`**), and releases it when it flushes. The lock can also be acquired explicitly with **`await batch.lock()`**. If a batch is used only for read operations, the write lock is never acquired. Once the write lock is acquired, the batch must flush before any other writes to the Hyperbee can be processed.
+
+A batch's state is snapshotted at creation time, so write operations applied outside of the batch are not taken into account when reading. Write operations within the batch do get taken into account, as is to be expected: if you first run **`await batch.put('myKey', 'newValue')`** and later run **`await batch.get('myKey')`**, you will observe **`'newValue'`**.
+
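+A short sketch of an atomic batch (the keys are hypothetical):
+
+```javascript
+const batch = db.batch()
+
+// These writes are buffered and committed atomically on flush
+await batch.put('a', '1')
+await batch.put('b', '2')
+await batch.del('stale-key')
+
+// Reads inside the batch observe its own pending writes
+console.log(await batch.get('a')) // => { seq: ..., key: 'a', value: '1' }
+
+await batch.flush() // commit everything and release the write lock
+```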
+
+
+#### **`const stream = db.createReadStream([range], [options])`**
+
+Make a read stream. Sort order is based on the binary value of the keys. All entries in the stream are similar to the ones returned from **`db.get`**.
+
+`range` should specify the range you want to read and looks like this:
+
+```javascript
+{
+ gt: 'only return keys > than this',
+ gte: 'only return keys >= than this',
+ lt: 'only return keys < than this',
+ lte: 'only return keys <= than this'
+}
+```
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ------------- | ---------------------------------- | ------- | ------- |
+| **`reverse`** | if true, entries are returned in reverse order | Boolean | `false` |
+| **`limit`**   | maximum number of entries to return            | Integer | `-1`    |
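+
+A sketch iterating a bounded range (assuming utf-8 key encoding):
+
+```javascript
+// Yield every entry with a key >= 'a' and < 'b'
+for await (const { key, value } of db.createReadStream({ gte: 'a', lt: 'b' })) {
+  console.log(key, '->', value)
+}
+```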
+
+#### **`const { seq, key, value } = await db.peek([range], [options])`**
+
+Similar to creating a read stream and returning its first value, but a bit faster.
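+
+For example, to fetch a single boundary entry (a sketch, assuming `peek` accepts the same `reverse` option as `createReadStream`):
+
+```javascript
+// Get the entry with the greatest key that is <= 'b'
+const last = await db.peek({ lte: 'b' }, { reverse: true })
+```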
+
+#### **`const stream = db.createHistoryStream([options])`**
+
+Create a stream of all entries ever inserted or deleted from the `db`. Each entry has an additional `type` property indicating if it was a `put` or `del` operation.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ------------- | ------------------------------------------------------------------------ | ------- | ------- |
+| **`live`**    | if true, the stream will wait for new data and never end | Boolean | `false` |
+| **`reverse`** | if true, entries are received in reverse order           | Boolean | `false` |
+| **`gt`**      | start after this seq                                     | Integer | `null`  |
+| **`gte`**     | start at this seq (inclusive)                            | Integer | `null`  |
+| **`lt`**      | stop before this seq                                     | Integer | `null`  |
+| **`lte`**     | stop at this seq (inclusive)                             | Integer | `null`  |
+| **`limit`**   | maximum number of entries to return                      | Integer | `-1`    |
+
+
+> If any of the `gt`, `gte`, `lt`, `lte` arguments are `< 0`, they are interpreted relative to the current version, so `{ gte: -1 }` creates a stream starting at the last index.
+
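+A sketch replaying the most recent operations (using the negative-index behavior described above):
+
+```javascript
+// Stream the last 10 operations on the bee
+for await (const entry of db.createHistoryStream({ gte: -10 })) {
+  console.log(entry.type, entry.key) // entry.type is 'put' or 'del'
+}
+```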
+
+#### **`const stream = db.createDiffStream(otherVersion, [options])`**
+
+Creates a stream of shallow changes between two versions of the `db`.
+
+`options` are the same as `db.createReadStream`, except for `reverse`, which is not supported.
+
+Each entry is sorted by key and looks like this:
+
+```javascript
+{
+  left: ..., // the entry in the db
+  right: ... // the entry in the other version
+}
+```
+
+> If an entry exists in `db` but not in the other version, then `left` is set and `right` will be null, and vice versa.
+>
+> If the entries are causally equal (i.e., they have an identical seq), they are not returned; only the differences are.
+
+
+#### `const entryWatcher = await db.getAndWatch(key, [options])`
+
+Returns a watcher which listens to changes on the given key.
+
+`entryWatcher.node` contains the current entry in the same format as the result of `bee.get(key)`, and will be updated as it changes.
+
+> By default, the node will have the bee's key encoding and value encoding, but you can overwrite it by setting the `keyEncoding` and `valueEncoding` options.
+>
+>Listen to `entryWatcher.on('update')` to be notified when the value of the node has changed.
+
+
+Call `await watcher.close()` to stop the watcher.
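+
+A usage sketch (the key is hypothetical):
+
+```javascript
+const watcher = await db.getAndWatch('profile')
+
+console.log('current value:', watcher.node?.value)
+
+watcher.on('update', () => {
+  console.log('new value:', watcher.node?.value)
+})
+
+// later, when done listening
+await watcher.close()
+```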
+
+#### **`const watcher = db.watch([range])`**
+
+Listens to changes within the given (optional) `range`.
+
+`range` options are the same as `createReadStream`, except for `reverse`, which is not supported.
+
+> By default, the yielded snapshots will have the bee's key encoding and value encoding, but can be overwritten by setting the `keyEncoding` and `valueEncoding` options.
+
+Usage example:
+
+```javascript
+for await (const [current, previous] of watcher) {
+ console.log(current.version)
+ console.log(previous.version)
+}
+```
+
+The iterator yields a new value after each change; `current` and `previous` are snapshots that are auto-closed before the next value is yielded.
+
+Methods:
+
+`await watcher.ready()`
+
+Waits until the watcher is loaded and detecting changes.
+
+`await watcher.destroy()`
+
+Stops the watcher. You could also stop it by using `break` inside the loop.
+
+
+> Do not attempt to close the snapshots yourself. Since they're used internally, let them be auto-closed.
+>
+> Watchers are not supported on subs and checkouts. Instead, use the `range` option to limit the scope.
+
+
+#### **`const snapshot = db.checkout(version)`**
+
+Get a read-only snapshot of a previous version.
+
+#### **`const snapshot = db.snapshot()`**
+
+Shorthand for getting a checkout for the current version.
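+
+A sketch combining both (assuming snapshots are closed when no longer needed):
+
+```javascript
+const old = db.snapshot() // equivalent to db.checkout(db.version)
+await db.put('k', 'v2') // later writes don't affect the snapshot
+
+console.log((await old.get('k'))?.value) // the value as of the snapshot
+await old.close()
+```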
+
+#### **`const sub = db.sub('sub-prefix', options = {})`**
+
+Create a sub-database where a given value will prefix all entries.
+
+This makes it easy to create namespaces within a single Hyperbee.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| :-----------------: | --------------------------------------------------------------------------- | ------ | ------------------------- |
+| **`sep`**           | A namespace separator                                                        | Buffer | `Buffer.alloc(1)`        |
+| **`valueEncoding`** | Encoding type for the values. Takes values of 'json', 'utf-8', or 'binary'.  | String | defaults to the parent's |
+| **`keyEncoding`**   | Encoding type for the keys. Takes values of 'ascii', 'utf-8', or 'binary'.   | String | defaults to the parent's |
+
+For example:
+
+```javascript
+const root = new Hyperbee(core)
+const sub = root.sub('a')
+
+// In root, this will have the key ('a' + separator + 'b')
+await sub.put('b', 'hello')
+
+// Returns => { key: 'b', value: 'hello' }
+await sub.get('b')
+```
+
+#### **`const header = await db.getHeader([options])`**
+
+Returns the header contained in the first block. Throws an error if undecodable.
+
+`options` are the same as the `core.get` method.
+
+#### **`const isHyperbee = await Hyperbee.isHyperbee(core, [options])`**
+
+Returns `true` if the core contains a Hyperbee, `false` otherwise.
+
+This requests the first block on the core, so it can throw depending on the options.
+
+`options` are the same as the `core.get` method.
diff --git a/building-blocks/hypercore.md b/building-blocks/hypercore.md
new file mode 100644
index 0000000..e793ebc
--- /dev/null
+++ b/building-blocks/hypercore.md
@@ -0,0 +1,578 @@
+# Hypercore
+
+**stable**
+
+Hypercore is a secure, distributed append-only log built for sharing large datasets and streams of real-time data. It comes with a secure transport protocol, making it easy to build fast and scalable peer-to-peer applications.
+
+> [Github (Hypercore)](https://github.com/holepunchto/hypercore)
+
+* [Hypercore](../building-blocks/hypercore.md)
+ * [Creating a new instance](hypercore.md#installation)
+ * Basic:
+ * Properties:
+ * [core.writable](hypercore.md#corewritable)
+ * [core.readable](hypercore.md#corereadable)
+ * [core.id](hypercore.md#coreid)
+ * [core.key](hypercore.md#corekey)
+ * [core.keyPair](hypercore.md#corekeypair)
+ * [core.discoveryKey](hypercore.md#corediscoverykey)
+ * [core.encryptionKey](hypercore.md#coreencryptionkey)
+ * [core.length](hypercore.md#corelength)
+ * [core.contiguousLength](hypercore.md#corecontiguouslength)
+ * [core.fork](hypercore.md#corefork)
+ * [core.padding](hypercore.md#corepadding)
+ * Methods:
+ * [core.append(block)](hypercore.md#const--length-bytelength---await-coreappendblock)
+ * [core.get(index, \[options\])](hypercore.md#const-block--await-coregetindex-options)
+ * [core.has(start, \[end\])](hypercore.md#const-has--await-corehasstart-end)
+ * [core.update()](hypercore.md#const-updated--await-coreupdateoptions)
+ * [core.seek(byteOffset)](hypercore.md#const-index-relativeoffset--await-coreseekbyteoffset-options)
+ * [core.createReadStream(\[options\])](hypercore.md#const-stream--corecreatereadstreamoptions)
+ * [core.createByteStream(\[options\])](hypercore.md#const-stream--corecreatereadstreamoptions)
+ * [core.clear(start, \[end\], \[options\])](hypercore.md#const-cleared--await-coreclearstart-end-options)
+ * [core.truncate(newLength, \[forkId\])](hypercore.md#await-coretruncatenewlength-forkid)
+ * [core.purge()](hypercore.md#await-corepurge)
+ * [core.treeHash(\[length\])](hypercore.md#const-hash--await-coretreehashlength)
+ * [core.download(\[range\])](hypercore.md#const-range--coredownloadrange)
+ * [core.session(\[options\])](hypercore.md#const-session--await-coresessionoptions)
+ * [core.info(\[options\])](hypercore.md#const-info--await-coreinfooptions)
+ * [core.close()](hypercore.md#await-coreclose)
+ * [core.ready()](hypercore.md#await-coreready)
+ * [core.replicate(isInitiatorOrReplicationStream, \[options\])](hypercore.md#const-stream--corereplicateisinitiatorstream-options)
+ * [core.findingPeers()](hypercore.md#const-done--corefindingpeers)
+ * [core.session(\[options\])](hypercore.md#coresessionoptions)
+ * [core.snapshot(\[options\])](hypercore.md#coresnapshotoptions)
+ * Events:
+ * [append](hypercore.md#coreonappend)
+ * [truncate](hypercore.md#coreontruncate-ancestors-forkid)
+ * [ready](hypercore.md#coreonready)
+ * [close](hypercore.md#coreonclose)
+ * [peer-add](hypercore.md#coreonpeer-add)
+ * [peer-remove](hypercore.md#coreonpeer-remove)
+
+### Installation
+
+Install with [npm](https://www.npmjs.com/):
+
+```bash
+npm install hypercore
+```
+
+A Hypercore can only be modified by its creator; internally it signs updates with a private key that's meant to live on a single machine, and should never be shared. However, the writer can replicate to many readers, in a manner similar to BitTorrent.
+
+> Unlike BitTorrent, a Hypercore can be modified after its initial creation, and peers can receive live update notifications whenever the writer adds new blocks.
+
+
+### API
+
+#### **`const core = new Hypercore(storage, [key], [options])`**
+
+Creates a new Hypercore instance.
+
+`storage` should be set to a directory where you want to store the data and core metadata.
+
+```javascript
+const core = new Hypercore('./directory') // store data in ./directory
+```
+
+> Alternatively, you can pass a function that is called with every filename Hypercore needs; it should return your own [abstract-random-access](https://github.com/random-access-storage/abstract-random-access) instance, which is used to store the data.
+
+
+```javascript
+const RAM = require('random-access-memory')
+const core = new Hypercore((filename) => {
+ // Filename will be one of: data, bitfield, tree, signatures, key, secret_key
+ // The data file will contain all your data concatenated.
+
+ // Store all files in ram by returning a random-access-memory instance
+ return new RAM()
+})
+```
+
+By default Hypercore uses [random-access-file](https://github.com/random-access-storage/random-access-file). Passing a function is also useful if you want to store specific files in other directories.
+
+Hypercore will produce the following files:
+
+* `oplog` - The internal truncating journal/oplog that tracks mutations, the public key, and other metadata.
+* `tree` - The Merkle Tree file.
+* `bitfield` - The bitfield marking which data blocks this core has.
+* `data` - The raw data of each block.
+
+
+> `tree`, `data`, and `bitfield` are normally very sparse files.
+
+
+`key` can be set to a Hypercore public key. If you do not set this, the public key will be loaded from storage. If no key exists, a new key pair will be generated.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| :-------------------: | --------------------------------------------------------------------------------------- | -------- | ------------------ |
+| **`createIfMissing`** | create a new Hypercore key pair if none was present in the storage | Boolean | `true` |
+| **`overwrite`** | overwrite any old Hypercore that might already exist | Boolean | `false` |
+| **`sparse`** | enable sparse mode, counting unavailable blocks towards core.length and core.byteLength | Boolean | `true` |
+| **`valueEncoding`** | one of 'json', 'utf-8', or 'binary' | String | `'binary'` |
+| **`encodeBatch`** | optionally apply an encoding to complete batches | Function | `batch => { ... }` |
+| **`keyPair`** | optionally pass the public key and secret key as a key pair | Object | `null` |
+| **`encryptionKey`**   | optionally pass an encryption key to enable block encryption                             | Buffer   | `null`             |
+| **`onwait`** | hook that is called if gets are waiting for download | Function | `() => {}` |
+| **`timeout`**         | constructor timeout                                                                       | Integer  | `0`                |
+| **`writable`** | disable appends and truncates | Boolean | `true` |
+
+We can also set valueEncoding to any [abstract-encoding](https://github.com/mafintosh/abstract-encoding) or [compact-encoding](https://github.com/compact-encoding) instance.
+
+valueEncodings will be applied to individual blocks, even if we append batches. To control encoding at the batch level, the `encodeBatch` option can be used, which is a function that takes a batch and returns a binary-encoded batch. If a custom valueEncoding is provided, it will not be applied prior to `encodeBatch`.
+
+> **Do not** attempt to create multiple Hypercores with the same private key (i.e., on two different devices).
+>
+>Doing so will **most definitely** cause a Hypercore conflict. A conflict implies that the core was implicitly forked. In such a scenario, replicating peers will 'gossip' that the core should be deemed dead and unrecoverable.
+
+
+#### Properties
+
+#### **`core.readable`**
+
+Can we read from this core? After [closing](hypercore.md#await-coreclose) the core this will be `false`.
+
+#### **`core.id`**
+
+A string containing the ID (z-base-32 of the public key) that identifies this core.
+
+#### **`core.key`**
+
+Buffer containing the public key identifying this core.
+
+
+
+**Learn more about Hypercore Keys**
+
+All Hypercores are identified by two properties: a **public key** and a **discovery key**, the latter of which is derived from the public key. Importantly, the public key gives peers read capability: if we have the key, we can exchange blocks with other peers.
+
+The process of block replication requires the peers to prove to each other that they know the public key. This is important because the public key is necessary for peers to be able to validate the blocks. Hence, only the peers who know the public key can perform the block replication.
+
+Since the public key is also a read capability, it can't be used to discover other readers (by advertising it on a DHT, for example) as that would lead to capability leaks. The discovery key, being derived from the public key but lacking read capability, can be shared openly for peer discovery.
+
+
+
+#### **`core.keyPair`**
+
+An object containing buffers of the core's public and secret key.
+
+#### **`core.discoveryKey`**
+
+Buffer containing a key derived from the core's public key. In contrast to `core.key`, this key does not allow you to verify the data. It can be used to announce or look for peers that are sharing the same core, without leaking the core key.
+
+> The above properties are populated after [`ready`](hypercore.md#await-core.ready) has been emitted. Will be `null` before the event.
+
+
+#### **`core.encryptionKey`**
+
+Buffer containing the optional block encryption key of this core. Will be `null` unless block encryption is enabled.
+
+#### **`core.writable`**
+
+Can we append to this core?
+
+> Populated after [`ready`](hypercore.md#await-core.ready) has been emitted. Will be `false` before the event.
+
+
+#### **`core.length`**
+
+The number of blocks of data available on this core. If `sparse: false`, this will equal `core.contiguousLength`.
+
+#### **`core.contiguousLength`**
+
+The number of blocks contiguously available starting from the first block of this core.
+
+#### **`core.fork`**
+
+The current fork ID of this core.
+
+> The above properties are populated after [`ready`](hypercore.md#await-core.ready) has been emitted. Will be `0` before the event.
+
+
+#### **`core.padding`**
+
+The amount of padding applied to each block of this core. Will be `0` unless block encryption is enabled.
+
+#### Methods
+
+#### **`const { length, byteLength } = await core.append(block)`**
+
+Append a block of data (or an array of blocks) to the core. Returns the new length and byte length of the core.
+
+> This operation is 'atomic'. This means that the block is appended altogether or not at all (in case of I/O failure).
+
+
+```javascript
+// simple call append with a new block of data
+await core.append(Buffer.from('I am a block of data'))
+
+// pass an array to append multiple blocks as a batch
+await core.append([Buffer.from('batch block 1'), Buffer.from('batch block 2')])
+```
+
+#### **`const block = await core.get(index, [options])`**
+
+Get a block of data. If the data is not available locally this method will prioritize and wait for the data to be downloaded.
+
+```javascript
+// get block #42
+const block = await core.get(42)
+
+// get block #43, but only wait 5s
+const blockIfFast = await core.get(43, { timeout: 5000 })
+
+// get block #44, but only if we have it locally
+const blockLocal = await core.get(44, { wait: false })
+```
+
+`options` include:
+
+| Property | Description | Type | Default |
+| :-----------------: | ------------------------------------------------------ | ------- | -------------------- |
+| **`wait`** | Wait for the block to be downloaded | Boolean | `true` |
+| **`onwait`**        | Hook that is called if the get is waiting for download  | Function | `() => {}`           |
+| **`timeout`**       | Wait at max some milliseconds (0 means no timeout)      | Integer  | `0`                  |
+| **`valueEncoding`** | One of 'json', 'utf-8', or 'binary' | String | core's valueEncoding |
+| **`decrypt`** | Automatically decrypts the block if encrypted | Boolean | `true` |
+
+#### **`const has = await core.has(start, [end])`**
+
+Check if the core has all blocks between `start` and `end`.
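+
+For instance (a sketch; that `end` is exclusive, mirroring the `clear` and `download` ranges, is an assumption):
+
+```javascript
+const hasAll = await core.has(0, 10) // do we have blocks 0..9 locally?
+const hasOne = await core.has(4) // do we have block 4?
+```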
+
+#### **`const updated = await core.update([options])`**
+
+Wait for the core to try and find a signed update to its length. Does not download any data from peers except for proof of the new core length.
+
+```javascript
+const updated = await core.update()
+console.log('core was updated?', updated, 'length is', core.length)
+```
+
+`options` include:
+
+| Property | Description | Type | Default |
+| :--------: | ------------------------------------------------- | ------- | ------- |
+| **`wait`** | Wait for the meta-data of hypercore to be updated | Boolean | `replicator.findingPeers > 0` |
+
+#### **`const [index, relativeOffset] = await core.seek(byteOffset, [options])`**
+
+Seek a byte offset.
+
+Returns `[index, relativeOffset]`, where `index` is the data block the byteOffset is contained in and `relativeOffset` is the relative byte offset in the data block.
+
+```javascript
+await core.append([Buffer.from('abc'), Buffer.from('d'), Buffer.from('efg')])
+
+const first = await core.seek(1) // returns [0, 1]
+const second = await core.seek(3) // returns [1, 0]
+const third = await core.seek(5) // returns [2, 1]
+```
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ------------- | ------------------------------ | ------- | ----------------------------- |
+| **`wait`** | wait for data to be downloaded | Boolean | `true` |
+| **`timeout`** | wait for given milliseconds | Integer | `core.timeout` |
+
+#### **`const stream = core.createReadStream([options])`**
+
+Make a read stream to read a range of data out at once.
+
+```javascript
+// read the full core
+const fullStream = core.createReadStream()
+
+// read from block 10-15
+const partialStream = core.createReadStream({ start: 10, end: 15 })
+
+// pipe the stream somewhere using the .pipe method
+// or consume it as an async iterator
+
+for await (const data of fullStream) {
+ console.log('data:', data)
+}
+```
+
+`options` include:
+
+| Property | Description | Type | Default |
+| -------------- | -------------------------------------------------------------- | ------- | ------------- |
+| **`start`** | Starting offset to read a range of data | Integer | `0` |
+| **`end`** | Ending offset to read a range of data | Integer | `core.length` |
+| **`live`** | Allow realtime data replication | Boolean | `false` |
+| **`snapshot`** | Auto set end to core.length on open or update it on every read | Boolean | `true` |
+
+#### `const bs = core.createByteStream([options])`
+
+Make a byte stream to read a range of bytes.
+
+``` js
+// Read the full core
+const fullStream = core.createByteStream()
+// Read from byte 3, and from there read 50 bytes
+const partialStream = core.createByteStream({ byteOffset: 3, byteLength: 50 })
+// Consume it as an async iterator
+for await (const data of fullStream) {
+ console.log('data:', data)
+}
+// Or pipe it somewhere like any stream:
+partialStream.pipe(process.stdout)
+```
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ------------- | ------------------------------ | ------- | ----------------------------- |
+| **`byteOffset`** | Starting offset to read a range of bytes | Integer | `0` |
+| **`byteLength`** | Number of bytes that will be read | Integer | `core.byteLength - options.byteOffset` |
+| **`prefetch`** | Controls the number of blocks to preload | Integer | `32` |
+
+#### **`const cleared = await core.clear(start, [end], [options])`**
+
+Clears stored blocks between `start` and `end`, reclaiming storage when possible.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ----------------- | --------------------------------------------------------------------- | ------- | ------- |
+| **`diff`** | Returned `cleared` bytes object is null unless you enable this | Boolean | `false` |
+
+```javascript
+await core.clear(4) // clear block 4 from your local cache
+await core.clear(0, 10) // clear block 0-10 from your local cache
+```
+
+The core will also 'gossip' to peers it is connected to that it no longer has these blocks.
+
+#### **`await core.truncate(newLength, [forkId])`**
+
+Truncates the core to a smaller length.
+
+By default, this will increase the fork ID of the core by 1, but we can set a preferred fork ID with the `forkId` option. Note that the fork ID should be incremented in a monotone manner.
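+
+For example (fork behavior as described above):
+
+```javascript
+await core.append([Buffer.from('a'), Buffer.from('b'), Buffer.from('c')])
+await core.truncate(2) // core.length is now 2, core.fork has increased by 1
+```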
+
+#### `await core.purge()`
+
+Purge the Hypercore from your storage, completely removing all data.
+
+#### **`const hash = await core.treeHash([length])`**
+
+Get the Merkle Tree hash of the core at a given length, defaulting to the current length of the core.
+
+#### **`const range = core.download([range])`**
+
+Download a range of data.
+
+You can await until the range has been fully downloaded by doing:
+
+```javascript
+await range.done()
+```
+
+A range can have the following properties:
+
+```javascript
+{
+ start: startIndex,
+ end: nonInclusiveEndIndex,
+ blocks: [index1, index2, ...],
+ linear: false // download range linearly and not randomly
+}
+```
+
+To download the full core continuously (often referred to as non-sparse mode):
+
+```javascript
+// Note that this will never be considered downloaded as the range
+// will keep waiting for new blocks to be appended.
+core.download({ start: 0, end: -1 })
+```
+
+To download a discrete range of blocks, pass a list of indices:
+
+```javascript
+core.download({ blocks: [4, 9, 7] })
+```
+
+To cancel downloading a range, simply destroy the range instance:
+
+```javascript
+// will stop downloading now
+range.destroy()
+```
+
+#### **`const session = await core.session([options])`**
+
+Creates a new Hypercore instance that shares the same underlying core. Options are inherited from the parent instance, unless they are re-set.
+
+`options` are the same as in the constructor.
+
+> You must close any session you make.
+
+#### **`const info = await core.info([options])`**
+
+Get information about this core, such as its total size in bytes.
+
+The object will look like this:
+
+```javascript
+Info {
+ key: Buffer(...),
+ discoveryKey: Buffer(...),
+ length: 18,
+ contiguousLength: 16,
+ byteLength: 742,
+ fork: 0,
+ padding: 8,
+ storage: {
+ oplog: 8192,
+ tree: 4096,
+ blocks: 4096,
+ bitfield: 4096
+ }
+}
+```
+
+`options` include:
+
+| Property | Description | Type | Default |
+| --------- | ------------------------------ | ------- | ------- |
+| `storage` | get storage estimates in bytes | Boolean | `false` |
+
+#### **`await core.close()`**
+
+Close this core and release any underlying resources.
+
+#### **`await core.ready()`**
+
+Waits for the core to open.
+
+After this has been called `core.length` and other properties have been set.
+
+> ℹ️ In general, you do not need to wait for `ready` unless you're checking a synchronous property (like `key` or `discoveryKey`), as all async methods on the public API will await this internally.
+
+
+#### **`const stream = core.replicate(isInitiator|stream, options)`**
+
+Creates a replication stream. We should pipe this to another Hypercore instance.
+
+The `isInitiator` argument is a boolean indicating whether you are the initiator of the connection (i.e., the client) or the passive peer (i.e., the server).
+
+> If a P2P swarm like Hyperswarm is being used, you can know this by checking if the swarm connection is a client socket or a server socket. In Hyperswarm, a user can check that using the [client property on the peer details object](https://github.com/hyperswarm/hyperswarm#swarmonconnection-socket-details--).
+
+
+
+To multiplex the replication over an existing Hypercore replication stream, another stream instance can be passed instead of the `isInitiator` Boolean.
+
+To replicate a Hypercore using [hyperswarm.md](hyperswarm.md "mention"):
+
+```javascript
+// assuming swarm is a Hyperswarm instance and core is a Hypercore
+swarm.on('connection', conn => {
+ core.replicate(conn)
+})
+```
+
+> If you want to replicate many Hypercores over a single Hyperswarm connection, you probably want to be using [corestore.md](../helpers/corestore.md "mention").
+
+
+If not using [hyperswarm.md](hyperswarm.md "mention") or [corestore.md](../helpers/corestore.md "mention"), specify the `isInitiator` field, which will create a fresh protocol stream that can be piped over any transport you'd like:
+
+```javascript
+// assuming we have two cores, localCore + remoteCore, sharing the same key
+// on a server
+const net = require('net')
+const server = net.createServer(function (socket) {
+ socket.pipe(remoteCore.replicate(false)).pipe(socket)
+})
+
+// on a client
+const socket = net.connect(...)
+socket.pipe(localCore.replicate(true)).pipe(socket)
+```
+
+> In almost all cases, using both Hyperswarm and Corestore replication is advised and will meet all your needs.
+
+#### **`const done = core.findingPeers()`**
+
+Creates a hook that tells Hypercore that you are finding peers for this core in the background. Call `done` when your current discovery iteration is done. If using Hyperswarm, call this after a `swarm.flush()` finishes.
+
+This allows `core.update` to wait for either the `findingPeers` hook to finish or one peer to appear before deciding whether it should wait for a Merkle tree update before returning.
+
+In order to prevent `get` and `update` from resolving until Hyperswarm (or any other external peer discovery process) has finished, use the following pattern:
+
+```javascript
+// assuming swarm is a Hyperswarm and core is a Hypercore
+const done = core.findingPeers()
+swarm.join(core.discoveryKey)
+
+// swarm.flush() can be a very expensive operation, so don't await it
+// this just marks the 'worst case', i.e., when no additional peers will be found
+swarm.flush().then(() => done())
+
+// if this block is not available locally, the `get` will wait until
+// *either* a peer connects *or* the swarm flush finishes
+await core.get(0)
+```
+
+#### **`core.session([options])`**
+
+Returns a new session for the Hypercore.
+
+Used for resource management of Hypercores via reference counting. Sessions are individual openings of a Hypercore instance; consequently, changes made through one session will be reflected across all sessions of the Hypercore.
+
+> The returned value of `core.session()` can be used as a Hypercore instance i.e., everything provided by the Hypercore API can be used with it.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| :----------: | ------------------------------------------------------------------------------------------- | ------- | ----------- |
+| **`wait`** | Wait for the block to be downloaded | Boolean | `true` |
+| **`onwait`** | Hook that is called if the get is waiting for download                                       | Function | `() => {}`  |
+| **`sparse`** | Enables sparse mode, counting unavailable blocks towards `core.length` and `core.byteLength` | Boolean | `true` |
+| **`class`** | class name | Class | `Hypercore` |
+
+```javascript
+const core = new Hypercore(RAM) // assuming RAM = require('random-access-memory')
+const session1 = core.session()
+
+await core.close() // will not close the underlying Hypercore
+await session1.close() // will close the Hypercore
+```
+
+#### **`core.snapshot([options])`**
+
+Returns a snapshot of the core at that particular time. This is useful if you want to ensure that multiple `get` operations are acting on a consistent view of the Hypercore (i.e., if the core forks in between two reads, the second should throw an error).
+
+If [`core.update()`](hypercore.md#const-updated--await-coreupdateoptions) is explicitly called on the snapshot instance, it will no longer be locked to the previous data. Rather, it will get updated with the current state of the Hypercore instance.
+
+`options` are the same as the options to [`core.session()`](hypercore.md#coresessionoptions).
+
+> The fixed-in-time Hypercore clone created via snapshotting does not receive updates from the main Hypercore, unlike the Hypercore instance returned by `core.session()`.
+
+#### Events
+
+#### **`core.on('append')`**
+
+Emitted when the core has been appended to (i.e., has a new length/byte length), either locally or remotely.
+
+#### **`core.on('truncate', ancestors, forkId)`**
+
+Emitted when the core has been truncated, either locally or remotely.
+
+#### **`core.on('ready')`**
+
+Emitted after the core has initially opened all its internal state.
+
+#### **`core.on('close')`**
+
+Emitted when the core has been fully closed.
+
+#### **`core.on('peer-add')`**
+
+Emitted when a new connection has been established with a peer.
+
+#### **`core.on('peer-remove')`**
+
+Emitted when a peer's connection has been closed.
diff --git a/building-blocks/hyperdht.md b/building-blocks/hyperdht.md
new file mode 100644
index 0000000..4569714
--- /dev/null
+++ b/building-blocks/hyperdht.md
@@ -0,0 +1,277 @@
+# HyperDHT
+
+**stable**
+
+The DHT powering Hyperswarm and built on top of [dht-rpc](https://github.com/mafintosh/dht-rpc). The HyperDHT uses a series of holepunching techniques to ensure connectivity works on most networks and is mainly used to facilitate finding and connecting to peers using end-to-end encrypted Noise streams.
+
+In the HyperDHT, peers are identified by a public key, not by an IP address. If you know someone's public key, you can connect to them regardless of where they're located, even if they move between different networks.
+
+> [Github (Hyperdht)](https://github.com/holepunchto/hyperdht)
+
+* [HyperDHT](../building-blocks/hyperdht.md)
+ * [Create a new instance](hyperdht.md#installation)
+ * Basic:
+ * Methods:
+ * [DHT.keyPair(\[seed\])](hyperdht.md#keypair--dhtkeypairseed)
+ * [DHT.bootstrapper(port, host, \[options\])](hyperdht.md#node--dhtbootstrapperport-host-options)
+ * [node.destroy(\[options\])](hyperdht.md#await-nodedestroyoptions)
+ * [Creating P2P servers:](hyperdht.md#creating-p2p-servers)
+ * [node.createServer(\[options\], \[onconnection\])](hyperdht.md#const-server--nodecreateserveroptions-onconnection)
+ * Methods:
+ * [server.listen(keyPair)](hyperdht.md#await-serverlistenkeypair)
+ * [server.refresh()](hyperdht.md#serverrefresh)
+ * [server.address()](hyperdht.md#serveraddress)
+ * [server.close()](hyperdht.md#await-serverclose)
+ * Events:
+ * [connection](hyperdht.md#serveronconnection-socket)
+ * [listening](hyperdht.md#serveronlistening)
+ * [close](hyperdht.md#serveronclose)
+ * [Connecting to P2P servers](hyperdht.md#connecting-to-p2p-servers):
+ * [node.connect(remotePublicKey, \[options\])](hyperdht.md#const-socket--nodeconnectremotepublickey-options)
+ * Properties:
+ * [socket.remotePublicKey](hyperdht.md#socketremotepublickey)
+ * [socket.publicKey](hyperdht.md#socketpublickey)
+ * Events:
+ * [open](hyperdht.md#socketonopen)
+ * [Additional Peer Discovery](hyperdht.md#additional-peer-discovery):
+ * Methods:
+ * [node.lookup(topic, \[options\])](hyperdht.md#const-stream--nodelookuptopic-options)
+ * [node.announce(topic, keyPair, \[relayAddresses\], \[options\])](hyperdht.md#const-stream--nodeannouncetopic-keypair-relayaddresses-options)
+ * [node.unannounce(topic, keyPair, \[options\])](hyperdht.md#await-nodeunannouncetopic-keypair-options)
+ * [Mutable/immutable records:](hyperdht.md#mutableimmutable-records)
+ * Methods:
+ * [node.immutablePut(value, \[options\])](hyperdht.md#const--hash-closestnodes---await-nodeimmutableputvalue-options)
+ * [node.immutableGet(hash, \[options\])](hyperdht.md#const--value-from---await-nodeimmutablegethash-options)
+ * [node.mutablePut(keyPair, value, \[options\])](hyperdht.md#const--publickey-closestnodes-seq-signature---await-nodemutableputkeypair-value-options)
+ * [node.mutableGet(publicKey, \[options\])](hyperdht.md#const--value-from-seq-signature---await-nodemutablegetpublickey-options)
+
+### Installation
+
+Install with [npm](https://www.npmjs.com/):
+
+```bash
+npm install hyperdht
+```
+
+### API
+
+#### **`const node = new DHT([options])`**
+
+Create a new DHT node.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| --------------- | ------------------------------------------------------------------------------------------------ | ------ | -------------------------------------------------------------------------------------- |
+| **`bootstrap`** | Overrides the default bootstrap servers; an array of addresses of any known DHT nodes | Array | `['node1.hyperdht.org:49737', 'node2.hyperdht.org:49737', 'node3.hyperdht.org:49737']` |
+| **`keyPair`** | Optional key pair (`{ publicKey, secretKey }`) to use for `server.listen` and `connect` | Object | `null` |
+
+See [dht-rpc](https://github.com/mafintosh/dht-rpc) for more options, as HyperDHT inherits from it.
+
+> ℹ️ The default bootstrap servers are publicly served on behalf of the commons. To run a fully isolated DHT, start one or more DHT nodes with an empty bootstrap array (`new DHT({bootstrap:[]})`) and then use the addresses of those nodes as the `bootstrap` option in all other DHT nodes. You'll need at least one persistent node for the network to be completely operational.
+
+#### Methods
+
+#### **`keyPair = DHT.keyPair([seed])`**
+
+Generates the required key pair for DHT operations.
+
+Returns an object `{ publicKey, secretKey }`, where `publicKey` is a public key buffer and `secretKey` is a private key buffer. If a `seed` is supplied, the key pair is derived deterministically from it.
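+
+For illustration:
+
+```javascript
+const DHT = require('hyperdht')
+
+const keyPair = DHT.keyPair()
+console.log(keyPair.publicKey.toString('hex'))
+```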
+
+#### `node = DHT.bootstrapper(port, host, [options])`
+
+Use this method to easily create a bootstrap node when running your own Hyperswarm network.
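+
+A minimal sketch of a fully isolated network, combining this with the `bootstrap` option described above (the port and host here are arbitrary assumptions):
+
+```javascript
+const DHT = require('hyperdht')
+
+// a persistent node the rest of the network bootstraps against
+const bootstrap = DHT.bootstrapper(49737, '127.0.0.1')
+await bootstrap.ready()
+
+// every other node in the isolated network points at it
+const node = new DHT({ bootstrap: ['127.0.0.1:49737'] })
+```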
+
+#### **`await node.destroy([options])`**
+
+Fully destroy this DHT node.
+
+> This will also unannounce any running servers. If you want to force close the node without waiting for the servers to unannounce pass `{ force: true }`.
+
+### Creating P2P Servers
+
+#### **`const server = node.createServer([options], [onconnection])`**
+
+Creates a new server for accepting incoming encrypted P2P connections.
+
+`options` include:
+
+```javascript
+{
+ firewall (remotePublicKey, remoteHandshakePayload) {
+    // decide whether you want a connection from remotePublicKey;
+    // return false to accept the connection, true to deny it
+    // remoteHandshakePayload contains their ip and some more info
+    return false
+ }
+}
+```
+
+> Servers can be run on normal home computers, as the DHT will UDP holepunch connections for you.
+
+#### Methods
+
+#### **`await server.listen(keyPair)`**
+
+Makes the server listen on a keyPair. To connect to this server use `keyPair.publicKey` as the connect address.
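+
+Putting `createServer` and `listen` together, a minimal echo server might look like this:
+
+```javascript
+const DHT = require('hyperdht')
+
+const node = new DHT()
+const keyPair = DHT.keyPair()
+
+const server = node.createServer((socket) => {
+  socket.pipe(socket) // echo every received byte back to the peer
+})
+
+await server.listen(keyPair)
+// share keyPair.publicKey out of band so clients can connect to it
+```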
+
+#### **`server.refresh()`**
+
+Refreshes the server, causing it to reannounce its address. This is automatically called on network changes.
+
+#### **`server.address()`**
+
+Returns an object containing the address of the server:
+
+```javascript
+{
+ host, // external IP of the server,
+ port, // external port of the server if predictable,
+ publicKey // public key of the server
+}
+```
+
+The same information, minus the public key, can also be retrieved with `node.remoteAddress()`.
+
+#### **`await server.close()`**
+
+Stops listening.
+
+#### Events
+
+#### **`server.on('connection', socket)`**
+
+Emitted when a new encrypted connection has passed the firewall check.
+
+`socket` is a [NoiseSecretStream](https://github.com/holepunchto/hyperswarm-secret-stream) instance.
+
+Check which peer you are connected to using `socket.remotePublicKey`. `socket.handshakeHash` contains a unique hash representing this crypto session (the same on both sides).
+
+#### **`server.on('listening')`**
+
+Emitted when the server is fully listening on a keyPair.
+
+#### **`server.on('close')`**
+
+Emitted when the server is fully closed.
+
+### Connecting to P2P Servers
+
+#### **`const socket = node.connect(remotePublicKey, [options])`**
+
+Connect to a remote server. Similar to `createServer` this performs UDP hole punching for P2P connectivity.
+
+```javascript
+const node = new DHT()
+
+const remotePublicKey = Buffer.from('public key of remote peer', 'hex')
+const encryptedSocket = node.connect(remotePublicKey)
+```
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ------------- | -------------------------------------------------------- | ------ | --------------------- |
+| **`nodes`**   | optional array of close DHT nodes to speed up connecting | Array  | `[]`                  |
+| **`keyPair`** | optional key pair to use for the connection               | Object | `node.defaultKeyPair` |
+
+#### Properties
+
+#### **`socket.remotePublicKey`**
+
+The public key of the remote peer.
+
+#### **`socket.publicKey`**
+
+The public key of the connection.
+
+#### Events
+
+#### **`socket.on('open')`**
+
+Emitted when the encrypted connection has been fully established with the server.
+
+```javascript
+encryptedSocket.on('open', function () {
+ console.log('Connected to server')
+})
+```
+
+### Additional Peer Discovery
+
+#### Methods
+
+#### **`const stream = node.lookup(topic, [options])`**
+
+Look for peers in the DHT on the given topic. The topic should be a 32-byte buffer (normally a hash of something).
+
+The returned stream emits data that looks like this:
+
+```javascript
+{
+ // Who sent the response?
+ from: { id, host, port },
+ // What address they responded to (i.e., your address)
+ to: { host, port },
+ // List of peers announcing under this topic
+ peers: [ { publicKey, nodes: [{ host, port }, ...] } ]
+}
+```
+
+To connect to the discovered peers, call `node.connect` afterwards with their public keys.
+
+Any passed options are forwarded to dht-rpc.
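+
+For illustration, the stream can be consumed with `for await` (assuming `topic` is a 32-byte buffer):
+
+```javascript
+for await (const data of node.lookup(topic)) {
+  for (const peer of data.peers) {
+    console.log('announcing peer:', peer.publicKey.toString('hex'))
+  }
+}
+```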
+
+#### **`const stream = node.announce(topic, keyPair, [relayAddresses], [options])`**
+
+Announces to the DHT that you are listening on a given key pair under a specific topic. An announce does a parallel lookup, so the returned stream looks like the lookup stream.
+
+Any passed options are forwarded to dht-rpc.
+
+> When announcing you'll send a signed proof to peers that you own the key pair and wish to announce under the specific topic. Optionally you can provide up to 3 nodes, indicating which DHT nodes can relay messages to you - this speeds up connects later on for other users.
+>
+> Creating a server using `dht.createServer` automatically announces itself periodically on the key pair it is listening on. When announcing the server under a specific topic, you can access the nodes it is close to using `server.nodes`.
+
+#### **`await node.unannounce(topic, keyPair, [options])`**
+
+Unannounces a key pair.
+
+Any passed options are forwarded to dht-rpc.
+
+### Mutable/Immutable Records
+
+#### Methods
+
+#### **`const { hash, closestNodes } = await node.immutablePut(value, [options])`**
+
+Stores an immutable value in the DHT. When successful, the hash of the value is returned.
+
+Any passed options are forwarded to dht-rpc.
+
+#### **`const { value, from } = await node.immutableGet(hash, [options])`**
+
+Fetch an immutable value from the DHT. When successful, it returns the value corresponding to the hash.
+
+Any passed options are forwarded to dht-rpc.
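+
+A minimal sketch of the immutable round trip:
+
+```javascript
+const { hash } = await node.immutablePut(Buffer.from('hello world'))
+
+const { value } = await node.immutableGet(hash)
+console.log(value.toString()) // => 'hello world'
+```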
+
+#### **`const { publicKey, closestNodes, seq, signature } = await node.mutablePut(keyPair, value, [options])`**
+
+Stores a mutable value in the DHT.
+
+Any passed options are forwarded to dht-rpc.
+
+#### **`const { value, from, seq, signature } = await node.mutableGet(publicKey, [options])`**
+
+Fetches a mutable value from the DHT.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ------------- | -------------------------------------------------------- | ------ | --------------------- |
+| **`seq`**    | Only return values with a `seq` greater than or equal to the supplied `seq` option | Integer | `0` |
+| **`latest`** | Whether the query should try to find the highest `seq` before returning, rather than the first verified value larger than `options.seq` | Boolean | `false` |
+
+Any passed options are forwarded to dht-rpc.
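+
+A minimal sketch of the mutable round trip:
+
+```javascript
+const keyPair = DHT.keyPair()
+
+await node.mutablePut(keyPair, Buffer.from('mutable hello'))
+
+const { value } = await node.mutableGet(keyPair.publicKey)
+console.log(value.toString()) // => 'mutable hello'
+```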
diff --git a/building-blocks/hyperdrive.md b/building-blocks/hyperdrive.md
new file mode 100644
index 0000000..58707db
--- /dev/null
+++ b/building-blocks/hyperdrive.md
@@ -0,0 +1,398 @@
+# Hyperdrive
+
+**stable**
+
+Hyperdrive is a secure, real-time distributed file system designed for easy P2P file sharing. We use it extensively inside Holepunch; apps like Keet are distributed to users as Hyperdrives, as is the Holepunch platform itself.
+
+> [Github (Hyperdrive)](https://github.com/holepunchto/hyperdrive)
+
+* [Hyperdrive](../building-blocks/hyperdrive.md)
+ * [Create a new instance](hyperdrive.md#installation)
+ * Basic:
+ * Properties:
+ * [drive.corestore](hyperdrive.md#drivecorestore)
+ * [drive.db](hyperdrive.md#drivedb)
+ * [drive.core](hyperdrive.md#drivecore)
+ * [drive.id](hyperdrive.md#driveid)
+ * [drive.key](hyperdrive.md#drivekey)
+ * [drive.writable](hyperdrive.md#drivewritable)
+ * [drive.readable](hyperdrive.md#drivereadable)
+ * [drive.discoveryKey](hyperdrive.md#drivediscoverykey)
+ * [drive.contentKey](hyperdrive.md#drivecontentkey)
+ * [drive.version](hyperdrive.md#driveversion)
+ * [drive.supportsMetadata](hyperdrive.md#drivesupportsmetadata)
+ * Methods:
+ * [drive.ready()](hyperdrive.md#await-driveready)
+ * [drive.close()](hyperdrive.md#await-driveclose)
+ * [drive.put(path, buffer, \[options\])](hyperdrive.md#await-driveputpath-buffer-options)
+ * [drive.get(path, \[options\])](hyperdrive.md#const-buffer--await-drivegetpath-options)
+ * [drive.entry(path, \[options\])](hyperdrive.md#const-entry--await-driveentrypath-options)
+ * [drive.exists(path)](hyperdrive.md#const-exists--await-driveexistspath)
+ * [drive.del(path)](hyperdrive.md#await-drivedelpath)
+ * [drive.compare(entryA, entryB)](hyperdrive.md#const-comparison--drivecompareentrya-entryb)
+ * [drive.clear(path, \[options\])](hyperdrive.md#const-cleared--await-driveclearpath-options)
+ * [drive.clearAll(\[options\])](hyperdrive.md#const-cleared--await-driveclearalloptions)
+ * [drive.purge()](hyperdrive.md#await-drivepurge)
+ * [drive.symlink(path, linkname)](hyperdrive.md#await-drivesymlinkpath-linkname)
+ * [drive.batch()](hyperdrive.md#const-batch--drivebatch)
+ * [batch.flush()](hyperdrive.md#await-batchflush)
+ * [drive.list(folder, \[options\])](hyperdrive.md#const-stream--drivelistfolder-options)
+ * [drive.readdir(folder)](hyperdrive.md#const-stream--drivereaddirfolder)
+ * [drive.entries(\[range\], \[options\])](hyperdrive.md#const-stream--await-driveentriesrange-options)
+ * [drive.mirror(out, \[options\])](hyperdrive.md#const-mirror--drivemirrorout-options)
+ * [drive.watch(\[folder\])](hyperdrive.md#const-watcher--drivewatchfolder)
+ * [drive.createReadStream(path, \[options\])](hyperdrive.md#const-rs--drivecreatereadstreampath-options)
+ * [drive.createWriteStream(path, \[options\])](hyperdrive.md#const-ws--drivecreatewritestreampath-options)
+ * [drive.download(folder, \[options\])](hyperdrive.md#await-drivedownloadfolder-options)
+ * [drive.checkout(version)](hyperdrive.md#const-snapshot--drivecheckoutversion)
+ * [drive.diff(version, folder, \[options\])](hyperdrive.md#await-drivedownloaddiffversion-folder-options)
+ * [drive.downloadDiff(version, folder, \[options\])](hyperdrive.md#await-drivedownloaddiffversion-folder-options)
+ * [drive.downloadRange(dbRanges, blobRanges)](hyperdrive.md#await-drivedownloadrangedbranges-blobranges)
+ * [drive.findingPeers()](hyperdrive.md#const-done--drivefindingpeers)
+ * [drive.replicate(isInitiatorOrStream)](hyperdrive.md#const-stream--drivereplicateisinitiatororstream)
+ * [drive.update(\[options\])](hyperdrive.md#const-updated--await-driveupdateoptions)
+ * [drive.getBlobs()](hyperdrive.md#const-blobs--await-drivegetblobs)
+
+### Installation
+
+Install with [npm](https://www.npmjs.com/):
+
+```bash
+npm install hyperdrive
+```
+
+### API
+
+#### **`const drive = new Hyperdrive(store, [key])`**
+
+Creates a new Hyperdrive instance. `store` must be an instance of [Corestore](../helpers/corestore.md "mention").
+
+By default, it uses the core at `{ name: 'db' }` from `store`, unless you provide the public `key` of an existing drive.
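+
+A minimal sketch of creating a drive backed by on-disk storage (the storage path is an arbitrary assumption):
+
+```javascript
+const Hyperdrive = require('hyperdrive')
+const Corestore = require('corestore')
+
+const store = new Corestore('./storage')
+const drive = new Hyperdrive(store)
+
+await drive.ready()
+console.log('drive key:', drive.key.toString('hex'))
+```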
+
+#### Properties
+
+#### **`drive.corestore`**
+
+The Corestore instance used as storage.
+
+#### **`drive.db`**
+
+The underlying Hyperbee backing the drive file structure.
+
+#### **`drive.core`**
+
+The Hypercore used for `drive.db`.
+
+#### **`drive.id`**
+
+String containing the id (z-base-32 of the public key) identifying this drive.
+
+#### **`drive.key`**
+
+The public key of the Hypercore backing the drive.
+
+#### **`drive.writable`**
+
+Boolean indicating if we can write or delete data in this drive.
+
+#### **`drive.readable`**
+
+Boolean indicating if we can read from this drive. After closing the drive this will be `false`.
+
+#### **`drive.discoveryKey`**
+
+The hash of the public key of the Hypercore backing the drive. It can be used as a `topic` to seed the drive using Hyperswarm.
+
+#### **`drive.contentKey`**
+
+The public key of the [Hyperblobs](https://github.com/holepunchto/hyperblobs) instance holding blobs associated with entries in the drive.
+
+#### **`drive.version`**
+
+A number that indicates how many modifications were made; useful as a version identifier.
+
+#### **`drive.supportsMetadata`**
+
+Boolean indicating whether the drive handles metadata. Always `true`.
+
+#### Methods
+
+#### **`await drive.ready()`**
+
+Waits until the internal state is loaded.
+
+Call it once before reading synchronous properties like `drive.discoveryKey`, unless you have already awaited any of the other APIs.
+
+#### **`await drive.close()`**
+
+Fully close this drive, including its underlying Hypercore backed data structures.
+
+#### **`await drive.put(path, buffer, [options])`**
+
+Creates a file at `path` in the drive. `options` are the same as in `createWriteStream`.
+
+#### **`const buffer = await drive.get(path, [options])`**
+
+Returns the blob at `path` in the drive. If no blob exists, returns `null`.
+
+It also returns `null` for symbolic links.
+
+`options` include:
+
+```js
+{
+ follow: false, // Follow symlinks, 16 max or throws an error
+ wait: true, // Wait for block to be downloaded
+ timeout: 0 // Wait at max some milliseconds (0 means no timeout)
+}
+```
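+
+For illustration, a `put` followed by a `get`:
+
+```javascript
+await drive.put('/blob.txt', Buffer.from('example'))
+
+const buffer = await drive.get('/blob.txt')
+console.log(buffer.toString()) // => 'example'
+```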
+
+#### **`const entry = await drive.entry(path, [options])`**
+
+Returns the entry at `path` in the drive. It looks like this:
+
+```javascript
+{
+ seq: Number,
+ key: String,
+ value: {
+ executable: Boolean, // Whether the blob at path is an executable
+    linkname: null, // If the entry is a symlink, the path of the entry it links to; otherwise null
+ blob: { // Hyperblobs id that can be used to fetch the blob associated with this entry
+ blockOffset: Number,
+ blockLength: Number,
+ byteOffset: Number,
+ byteLength: Number
+ },
+ metadata: null
+ }
+}
+```
+
+`options` include:
+
+```js
+{
+ follow: false, // Follow symlinks, 16 max or throws an error
+ wait: true, // Wait for block to be downloaded
+ timeout: 0 // Wait at max some milliseconds (0 means no timeout)
+}
+```
+
+#### `const exists = await drive.exists(path)`
+
+Returns `true` if the entry at `path` exists, otherwise `false`.
+
+#### **`await drive.del(path)`**
+
+Deletes the file at `path` from the drive.
+
+> ℹ️ The underlying blob is not deleted, only the reference in the file structure.
+
+#### **`const comparison = drive.compare(entryA, entryB)`**
+
+Returns `0` if entries are the same, `1` if `entryA` is older, and `-1` if `entryB` is older.
+
+#### **`const cleared = await drive.clear(path, [options])`**
+
+Deletes the blob from storage to free up space, but the file structure reference is kept.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ----------------- | --------------------------------------------------------------------- | ------- | ------- |
+| **`diff`** | Returned `cleared` bytes object is null unless you enable this | Boolean | `false` |
+
+#### `const cleared = await drive.clearAll([options])`
+
+Deletes all the blobs from storage to free up space, similar to how `drive.clear()` works.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ----------------- | --------------------------------------------------------------------- | ------- | ------- |
+| **`diff`** | Returned `cleared` bytes object is null unless you enable this | Boolean | `false` |
+
+#### `await drive.purge()`
+
+Purges both cores (db and blobs) from your storage, completely removing all the drive's data.
+
+#### **`await drive.symlink(path, linkname)`**
+
+Creates an entry in drive at `path` that points to the entry at `linkname`.
+
+If a blob entry currently exists at `path`, it will be overwritten: `drive.get(path)` will return `null`, while `drive.entry(path)` will return the entry with symlink information.
+
+#### **`const batch = drive.batch()`**
+
+Useful for atomically mutating the drive; it has the same interface as Hyperdrive.
+
+#### **`await batch.flush()`**
+
+Commit a batch of mutations to the underlying drive.
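+
+A minimal sketch of an atomic two-file write:
+
+```javascript
+const batch = drive.batch()
+
+await batch.put('/a.txt', Buffer.from('a'))
+await batch.put('/b.txt', Buffer.from('b'))
+
+await batch.flush() // both entries become visible together
+```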
+
+#### **`const stream = drive.list(folder, [options])`**
+
+Returns a stream of all entries in the drive at paths prefixed with `folder`.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| --------------- | --------------------------------------------- | ------- | ------- |
+| **`recursive`** | whether to descend into all subfolders or not | Boolean | `true` |
+
+#### **`const stream = drive.readdir(folder)`**
+
+Returns a stream of all subpaths of entries in the drive stored at paths prefixed by `folder`.
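+
+For illustration, printing full entries versus bare subpaths:
+
+```javascript
+for await (const entry of drive.list('/', { recursive: true })) {
+  console.log(entry.key) // the full path of each entry
+}
+
+for await (const name of drive.readdir('/')) {
+  console.log(name) // just the subpath, e.g. 'blob.txt'
+}
+```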
+
+#### **`const stream = await drive.entries([range], [options])`**
+
+Returns a read stream of entries in the drive.
+
+`options` are the same as `Hyperbee().createReadStream([range], [options])`.
+
+#### **`const mirror = drive.mirror(out, [options])`**
+
+Mirrors this drive into another. Returns a [MirrorDrive](../helpers/mirrordrive.md "mention") instance constructed with `options`.
+
+Call `await mirror.done()` to wait for the mirroring to finish.
+
+#### **`const watcher = drive.watch([folder])`**
+
+Returns an async iterator that listens on `folder` (by default `/`) and yields changes.
+
+Usage example:
+
+```javascript
+for await (const [current, previous] of watcher) {
+ console.log(current.version)
+ console.log(previous.version)
+}
+```
+
+> `current` and `previous` are snapshots that are auto-closed before the next value is yielded.
+>
+> Do not close these snapshots manually; they are used internally and will be auto-closed.
+
+
+Methods:
+
+`await watcher.ready()`
+
+Waits until the watcher is loaded and detecting changes.
+
+`await watcher.destroy()`
+
+Stops the watcher. You could also stop it by using `break` in the loop.
+
+#### **`const rs = drive.createReadStream(path, [options])`**
+
+Returns a stream to read out the blob stored in the drive at `path`.
+
+`options` include:
+
+```javascript
+{
+ start: Number, // `start` and `end` are inclusive
+ end: Number,
+ length: Number, // `length` overrides `end`, they're not meant to be used together
+ wait: true, // Wait for blocks to be downloaded
+ timeout: 0 // Wait at max some milliseconds (0 means no timeout)
+}
+```
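+
+For illustration, streaming a stored blob to stdout:
+
+```javascript
+const rs = drive.createReadStream('/blob.txt')
+rs.pipe(process.stdout)
+```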
+
+#### **`const ws = drive.createWriteStream(path, [options])`**
+
+Stream a blob into the drive at `path`.
+
+`options` include:
+
+| Property | Description | Type | Default |
+| ---------------- | ---------------------------------------------------- | ------- | ------- |
+| **`executable`** | whether the blob is executable or not | Boolean | `true` |
+| **`metadata`** | Extended file information i.e., arbitrary JSON value | Object | `null` |
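+
+For illustration:
+
+```javascript
+const ws = drive.createWriteStream('/blob.txt')
+
+ws.write('Hello, ')
+ws.write('world!')
+ws.end(() => console.log('blob written'))
+```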
+
+#### **`await drive.download(folder, [options])`**
+
+Downloads the blobs corresponding to all entries in the drive at paths prefixed with `folder`.
+
+`options` are the same as those for `drive.list(folder, [options])`.
+
+#### **`const snapshot = drive.checkout(version)`**
+
+Gets a read-only snapshot of a previous version.
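+
+For example, reading a file as it existed one version earlier (assuming the drive has a previous version):
+
+```javascript
+const snapshot = drive.checkout(drive.version - 1)
+
+const older = await snapshot.get('/blob.txt')
+// the snapshot is read-only and fixed at that version
+
+await snapshot.close()
+```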
+
+#### **`const stream = drive.diff(version, folder, [options])`**
+
+Creates a stream of shallow changes to `folder` between `version` and `drive.version`.
+
+Each entry is sorted by key and looks like this:
+
+```javascript
+{
+ left: Object, // Entry in folder at drive.version for some path
+ right: Object // Entry in folder at drive.checkout(version) for some path
+}
+```
+
+> ℹ️ If an entry exists in `drive.version` of the `folder` but not in `version`, then `left` is set and `right` will be `null`, and vice versa.
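+
+For illustration, classifying shallow changes (per the note above, `left` is the newer side and `right` the older):
+
+```javascript
+for await (const { left, right } of drive.diff(version, '/')) {
+  if (left && !right) console.log('added:', left.key)
+  else if (!left && right) console.log('removed:', right.key)
+  else console.log('changed:', left.key)
+}
+```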
+
+#### **`await drive.downloadDiff(version, folder, [options])`**
+
+Downloads all the blobs in `folder` corresponding to entries in `drive.checkout(version)` that are not in `drive.version`.
+
+In other words, downloads all the blobs added to `folder` up to `version` of the drive.
+
+#### **`await drive.downloadRange(dbRanges, blobRanges)`**
+
+Downloads the entries and blobs stored in the [ranges](https://github.com/holepunchto/hypercore#const-range--coredownloadrange) `dbRanges` and `blobRanges`.
+
+#### **`const done = drive.findingPeers()`**
+
+Indicates to Hyperdrive that peers are being found in the background; requests will be held until this is done.
+
+Call `done()` when the current discovery iteration is done, i.e., after `swarm.flush()` finishes.
+
+#### **`const stream = drive.replicate(isInitiatorOrStream)`**
+
+Usage example:
+
+```javascript
+const swarm = new Hyperswarm()
+const done = drive.findingPeers()
+swarm.on('connection', (socket) => drive.replicate(socket))
+swarm.join(drive.discoveryKey)
+swarm.flush().then(done, done)
+```
+
+Learn more about how replicate works at [corestore.replicate](https://github.com/holepunchto/corestore#const-stream--storereplicateoptsorstream).
+
+#### **`const updated = await drive.update([options])`**
+
+Waits for initial proof of the new drive version until all `findingPeers` are done.
+
+`options` include:
+
+```javascript
+{
+ wait: false
+}
+```
+
+Use `drive.findingPeers()` or `{ wait: true }` to make `await drive.update()` actually wait for updates.
+
+#### **`const blobs = await drive.getBlobs()`**
+
+Returns the [Hyperblobs](https://github.com/holepunchto/hyperblobs) instance storing the blobs indexed by drive entries.
+
+```javascript
+await drive.put('/file.txt', Buffer.from('hi'))
+
+const buffer1 = await drive.get('/file.txt')
+
+const blobs = await drive.getBlobs()
+const entry = await drive.entry('/file.txt')
+const buffer2 = await blobs.get(entry.value.blob)
+
+// => buffer1 and buffer2 are equal
+```
diff --git a/building-blocks/hyperswarm.md b/building-blocks/hyperswarm.md
new file mode 100644
index 0000000..8c25bcb
--- /dev/null
+++ b/building-blocks/hyperswarm.md
@@ -0,0 +1,224 @@
+# Hyperswarm
+
+**stable**
+
+Hyperswarm helps find and connect to peers announcing a common 'topic', which can be anything. Use Hyperswarm to discover and connect to peers with a shared interest over a distributed network. For example, we often use Hypercore's discovery key as the swarm topic for discovering peers to replicate with.
+
+Hyperswarm offers a simple interface to abstract away the complexities of underlying modules such as [HyperDHT](hyperdht.md) and [SecretStream](../helpers/secretstream.md). These modules can also be used independently for specialized tasks.
+
+> [Github (Hyperswarm)](https://github.com/hyperswarm/hyperswarm)
+
+* [Hyperswarm](../building-blocks/hyperswarm.md)
+ * [Create a new instance](hyperswarm.md#installation)
+ * Basic:
+ * Properties:
+ * [swarm.connecting](hyperswarm.md#swarmconnecting)
+ * [swarm.connections](hyperswarm.md#swarmconnections)
+ * [swarm.peers](hyperswarm.md#swarmpeers)
+ * [swarm.dht](hyperswarm.md#swarmdht)
+ * Methods:
+ * [swarm.join(topic, [options])](hyperswarm.md#const-discovery--swarmjointopic-options)
+ * Events:
+ * [connection](hyperswarm.md#swarmonconnection-socket-peerinfo)
+ * [update](hyperswarm.md#swarmonupdate)
+ * [Clients and Servers:](hyperswarm.md#clients-and-servers)
+ * Methods:
+ * [swarm.leave(topic)](hyperswarm.md#await-swarmleavetopic)
+ * [swarm.joinPeer(noisePublicKey)](hyperswarm.md#swarmjoinpeernoisepublickey)
+ * [swarm.leavePeer(noisePublicKey)](hyperswarm.md#swarmleavepeernoisepublickey)
+ * [swarm.status(topic)](hyperswarm.md#const-discovery--swarmstatustopic)
+ * [swarm.listen()](hyperswarm.md#await-swarmlisten)
+ * [swarm.flush()](hyperswarm.md#await-swarmflush)
+ * [Peer info:](hyperswarm.md#peerinfo)
+ * Properties:
+ * [peerInfo.publicKey](hyperswarm.md#peerinfopublickey)
+ * [peerInfo.topics](hyperswarm.md#peerinfotopics)
+ * [peerInfo.prioritized](hyperswarm.md#peerinfoprioritized)
+ * Methods:
+ * [peerInfo.ban(banStatus = false)](hyperswarm.md#peerinfobanbanstatus--false)
+ * [Peer Discovery:](hyperswarm.md#peer-discovery)
+ * Methods:
+ * [discovery.flushed()](hyperswarm.md#await-discoveryflushed)
+ * [discovery.refresh({ client, server })](hyperswarm.md#await-discoveryrefresh-client-server)
+ * [discovery.destroy()](hyperswarm.md#await-discoverydestroy)
+
+### Installation
+
+Install with [npm](https://www.npmjs.com/):
+
+```bash
+npm install hyperswarm
+```
+
+### API
+
+#### **`const swarm = new Hyperswarm([options])`**
+
+Constructs a new Hyperswarm instance.
+
+The following table describes the properties of the optional `options` object.
+
+| Property | Description |
+| :------------: | ----------------------------------------------------------------------------------------------------------------------------------------------- |
+| **`keyPair`**  | A Noise key pair that will be used to listen/connect on the DHT. Defaults to a new key pair.                                                      |
+| **`seed`** | A unique, 32-byte, random seed that can be used to deterministically generate the key pair. |
+| **`maxPeers`** | The maximum number of peer connections allowed. |
+| **`firewall`** | A sync function of the form `remotePublicKey => (true\|false)`. If true, the connection will be rejected. Defaults to allowing all connections. |
+| **`dht`** | A DHT instance. Defaults to a new instance. |
+
+#### **Properties:**
+
+#### **`swarm.connecting`**
+
+The number of connections currently being established.
+
+#### **`swarm.connections`**
+
+A set of all active client/server connections.
+
+#### **`swarm.peers`**
+
+A Map containing all connected peers, of the form: `(Noise public key hex string) -> PeerInfo object`
+
+See the [`PeerInfo`](hyperswarm.md#peerinfo) API for more details.
+
+#### **`swarm.dht`**
+
+A [`HyperDHT`](./hyperdht.md) instance. Useful if you want lower-level control over Hyperswarm's networking.
+
+#### Methods
+
+#### **`const discovery = swarm.join(topic, [options])`**
+
+Returns a [`PeerDiscovery`](hyperswarm.md#peer-discovery) object.
+
+Start discovering and connecting to peers sharing a common topic. As new peers are connected, they will be emitted from the swarm as [`connection`](hyperswarm.md#swarmonconnection-socket-peerinfo) events.
+
+`topic` must be a 32-byte Buffer containing a publicly shareable id, typically a Hypercore `discoveryKey` (note that `join` leaks the `topic` to DHT nodes).
+
+`options` can include:
+
+| Property | Description | Type | Default |
+| :----------: | -------------------------------------------------------------------------- | ------- | ------- |
+| **`server`** | Accept server connections for this topic by announcing yourself to the DHT | Boolean | `true` |
+| **`client`** | Actively search for and connect to discovered servers | Boolean | `true` |
+
+> Calling `swarm.join()` makes the topic directly discoverable, and Hyperswarm periodically refreshes the join to keep it discoverable. For maximum efficiency, call join sparingly; when sharing a single Hypercore that links to other Hypercores, only join a `topic` for the first one.
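+
+A minimal sketch of joining a topic to replicate a Hypercore (assuming `core` is a ready Hypercore):
+
+```javascript
+const Hyperswarm = require('hyperswarm')
+
+const swarm = new Hyperswarm()
+swarm.on('connection', (socket) => core.replicate(socket))
+
+const discovery = swarm.join(core.discoveryKey)
+await discovery.flushed() // fully announced to the DHT (server mode)
+```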
+
+#### Events
+
+#### **`swarm.on('connection', (socket, peerInfo) => {})`**
+
+Emitted whenever the swarm connects to a new peer.
+
+`socket` is an end-to-end (Noise) encrypted Duplex stream.
+
+`peerInfo` is a [`PeerInfo`](hyperswarm.md#peerinfo) instance.
+
+#### `swarm.on('update', () => {})`
+
+Emitted when internal values are changed, useful for user interfaces.
+
+> For instance, the 'update' event is emitted when `swarm.connecting` or `swarm.connections` changes.
+
+### **Clients and Servers**
+
+In Hyperswarm, there are two ways for peers to join the swarm: client mode and server mode. If you've previously used Hyperswarm v2, these were called 'lookup' and 'announce', but we now think 'client' and 'server' are more descriptive.
+
+When a peer joins a topic as a server, the swarm will start accepting incoming connections from clients (peers that have joined the same topic in client mode). Server mode announces the peer's key pair to the DHT so that other peers can discover the server. When server connections are emitted, they are not associated with a specific topic -- the server only knows it received an incoming connection.
+
+When a peer joins a topic as a client, the swarm will query the DHT to discover available servers and eagerly connect to them. As with server mode, these connections are emitted as `connection` events, but in client mode they are associated with the topic (`peerInfo.topics` will be set in the `connection` event), as sketched below.
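+
+A minimal sketch of the two modes, assuming `topic` is a 32-byte Buffer known to both peers:
+
+```javascript
+// server-mode peer: announce the topic and accept connections
+const server = new Hyperswarm()
+server.on('connection', (socket) => socket.write('hello client'))
+await server.join(topic, { server: true, client: false }).flushed()
+
+// client-mode peer: discover announcing servers and connect
+const client = new Hyperswarm()
+client.on('connection', (socket, peerInfo) => {
+  console.log('topics for this connection:', peerInfo.topics)
+  socket.on('data', (data) => console.log(data.toString()))
+})
+client.join(topic, { server: false, client: true })
+```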
+
+#### Methods
+
+#### **`await swarm.leave(topic)`**
+
+Stop discovering peers for the given topic.
+
+`topic` must be a 32-byte Buffer.
+
+> If a topic was previously joined in server mode, `leave` will stop announcing the topic on the DHT.
+>
+> If a topic was previously joined in client mode, `leave` will stop searching for servers announcing the topic.
+
+`leave` will **not** close any existing connections.
+
+#### **`swarm.joinPeer(noisePublicKey)`**
+
+Establish a direct connection to a known peer.
+
+`noisePublicKey` must be a 32-byte Buffer.
+
+As with the standard `join` method, `joinPeer` will ensure that peer connections are reestablished in the event of failures.
+
+#### **`swarm.leavePeer(noisePublicKey)`**
+
+Stops attempting direct connections to a known peer.
+
+`noisePublicKey` must be a 32-byte Buffer.
+
+> If a direct connection is already established, that connection will **not** be destroyed by `leavePeer`.
+
+#### **`const discovery = swarm.status(topic)`**
+
+Gets the `PeerDiscovery` object associated with the topic, if it exists.
+
+#### **`await swarm.listen()`**
+
+Explicitly starts listening for incoming connections. This will be called internally after the first `join`, so it rarely needs to be called manually.
+
+#### **`await swarm.flush()`**
+
+Waits for any pending DHT announcements, and for the swarm to connect to any pending peers (peers that have been discovered, but are still in the queue awaiting processing).
+
+Once a `flush()` has completed, the swarm will have connected to every peer it can discover from the current set of topics it's managing.
+
+> `flush()` is not topic-specific, so it will wait for every pending DHT operation and connection to be processed -- it's quite heavyweight, so it could take a while. In most cases, it's not necessary, as connections are emitted by `swarm.on('connection')` immediately after they're opened.
+
+### PeerInfo
+
+`swarm.on('connection', ...)` emits a `PeerInfo` instance whenever a new connection is established.
+
+There is a one-to-one relationship between connections and `PeerInfo` objects -- if a single peer announces multiple topics, those topics will be multiplexed over a single connection.
+
+#### **Properties:**
+
+#### **`peerInfo.publicKey`**
+
+The peer's Noise public key.
+
+#### **`peerInfo.topics`**
+
+An Array of topics that this Peer is associated with -- `topics` will only be updated when the Peer is in client mode.
+
+#### **`peerInfo.prioritized`**
+
+If true, the swarm will rapidly attempt to reconnect to this peer.
+
+#### **Methods:**
+
+#### **`peerInfo.ban(banStatus = false)`**
+
+Ban or unban the peer. Banning will prevent any future reconnection attempts, but it will **not** close any existing connections.
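+
+For illustration (`shouldBan` is a hypothetical application-level check):
+
+```javascript
+swarm.on('connection', (socket, peerInfo) => {
+  if (shouldBan(peerInfo.publicKey)) { // hypothetical predicate
+    peerInfo.ban(true)
+    socket.destroy()
+  }
+})
+```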
+
+### Peer Discovery
+
+`swarm.join` returns a `PeerDiscovery` instance which allows you to both control discovery behavior, and respond to lifecycle changes during discovery.
+
+#### Methods
+
+#### **`await discovery.flushed()`**
+
+Waits until the topic has been fully announced to the DHT. This method is only relevant in server mode. When `flushed()` has completed, the server will be available to the network.
+
+#### **`await discovery.refresh({ client, server })`**
+
+Updates the `PeerDiscovery` configuration, optionally toggling client and server modes. This will also trigger an immediate re-announce of the topic when the `PeerDiscovery` is in server mode.
+
+#### **`await discovery.destroy()`**
+
+Stops discovering peers for the given topic.
+
+> If a topic was previously joined in server mode, `destroy` will stop announcing the topic on the DHT.
+>
+> If a topic was previously joined in client mode, `destroy` will stop searching for servers announcing the topic.
\ No newline at end of file