The LimboResult type is only used for returning LimboResult::Busy, and we already
have LimboError::Busy, so it only adds confusion.
Moreover, the current busy handler was not handling LimboError::Busy,
because it's returned as an error, not as Ok. So this may fix the
"busy handler not working" issue in the perf thrpt benchmark.
This PR extends the existing encryption support to include the database
header page (page 1).
Reviewed-by: Avinash Sajjanshetty (@avinassh)
Closes #3040
This adds basic support for window functions. For now:
* Only existing aggregate functions can be used as window functions.
* Specialized window-specific functions (`rank`, `row_number`, etc.) are
not yet supported.
* Only the default frame definition is implemented:
`RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW EXCLUDE NO OTHERS`.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes #3079
We have not implemented min/max before because they require the raw
elements to be kept. It is easy to see why in the following example:
```
current_min = 3;
insert(2) => current_min = 2 // can be done without state
delete(2) => needs to look at the state to determine new min!
```
The aggregator state was a very simple key-value structure. To
accommodate min/max, we will turn it into a more complex table, where
we can encode a richer structure.
The key insight is that we can use a primary key composed of:
```
1) storage_id
2) zset_id
3) element
```
The storage_id and zset_id are our previous key, except they are now
exploded to support a larger range of storage_id. With more bits
available in the storage_id, we can encode information about which
column we are storing: for aggregations over multiple columns, we need
to keep a separate list of values for each column's min/max.
The element is just the values of the columns.
Because this is a primary key, the data will be sorted in the btree. We
can then just do a prefix search on the first two components of the key
and easily find the min/max when needed.
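An illustrative sketch of the prefix-search idea (using an in-memory BTreeMap as a stand-in for the btree; the real key encoding and types differ): the current min is simply the first entry under the (storage_id, zset_id) prefix.
```
use std::collections::BTreeMap;

// Stand-in types; the real storage_id encodes extra bits (e.g. which
// column is being aggregated), and the element is the serialized column values.
type StorageId = u64;
type ZsetId = u64;
type Element = Vec<i64>;

// Key = (storage_id, zset_id, element); the value is the weight.
// Keys are kept sorted, so all elements of one (storage_id, zset_id)
// prefix are contiguous.
type AggIndex = BTreeMap<(StorageId, ZsetId, Element), i64>;

// MIN is the first element under the prefix; MAX would take the last one.
fn current_min(index: &AggIndex, storage_id: StorageId, zset_id: ZsetId) -> Option<&Element> {
    index
        .range((storage_id, zset_id, Element::new())..) // seek to the start of the prefix
        .take_while(|((s, z, _), _)| *s == storage_id && *z == zset_id)
        .map(|((_, _, element), _)| element)
        .next()
}
```
Because the element is part of the key, a delete only removes that one entry, and the next prefix scan naturally yields the new min.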
This new format is also adequate for joins. Joins will just have a new
storage_id which encodes two "columns" (left side, right side).
Closes #3143
Fixes panics with `must have a read transaction to start a write
transaction` - previously we were simply ignoring these Busy errors and
assuming we had a read tx when we actually didn't.
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes #3148
We start a pager read transaction at the beginning of the MV transaction, because
any reads we do from the database file and WAL must uphold snapshot isolation.
However, we must end and immediately restart the read transaction before committing.
This is because other transactions may have committed writes to the DB file or WAL,
and our pager must read in those changes when applying our writes; otherwise we would overwrite
the changes from the previously committed transactions.
Note that this would be incredibly unsafe in the regular transaction model, but in MVCC we trust
the MV-store to uphold the guarantee that no write-write conflicts happened.
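A rough sketch of that commit path (the trait and method names below are placeholders, not the actual pager or MV-store API):
```
// Placeholder types and names for illustration only.
type Error = String;

trait Pager {
    fn end_read_tx(&mut self);
    fn begin_read_tx(&mut self) -> Result<(), Error>;
    fn begin_write_tx(&mut self) -> Result<(), Error>;
    fn commit_write_tx(&mut self) -> Result<(), Error>;
}

trait MvTransaction {
    fn apply_writes(&mut self, pager: &mut dyn Pager) -> Result<(), Error>;
}

fn commit_mv_transaction(pager: &mut dyn Pager, tx: &mut dyn MvTransaction) -> Result<(), Error> {
    // The read tx opened at BEGIN pinned our snapshot for all reads.
    // Drop it and reopen right before writing so the pager sees frames
    // committed by other transactions since the snapshot; otherwise our
    // writes would be layered on a stale view and clobber theirs.
    pager.end_read_tx();
    pager.begin_read_tx()?;

    // Safe only because the MV-store has already ruled out write-write
    // conflicts with those newer commits.
    pager.begin_write_tx()?;
    tx.apply_writes(pager)?;
    pager.commit_write_tx()
}
```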
We must iterate the row versions in reverse order because the
versions are ordered from oldest to newest, and we must commit
the newest version applied by the active transaction.
In insert_version_raw(), we correctly iterate the versions backwards
because we want to find the newest version that is still older than
the one we are inserting.
However, the order of `.enumerate()` and `.rev()` was wrong, so the
insertion position was calculated based on the position in the
_reversed_ iterator, not the original iterator.
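A standalone illustration of the pitfall (not the actual insert_version_raw code): whether `.enumerate()` runs before or after `.rev()` decides whether the yielded indices refer to the original order or to the reversed one.
```
fn main() {
    let versions = ["v1", "v2", "v3"]; // oldest .. newest

    // enumerate() before rev(): indices still refer to the ORIGINAL order,
    // which is what an insertion position must be based on.
    let correct: Vec<(usize, &&str)> = versions.iter().enumerate().rev().collect();
    assert_eq!(correct, vec![(2, &"v3"), (1, &"v2"), (0, &"v1")]);

    // rev() before enumerate(): indices are positions in the REVERSED
    // iterator, so using them as an insertion position picks the wrong slot.
    let wrong: Vec<(usize, &&str)> = versions.iter().rev().enumerate().collect();
    assert_eq!(wrong, vec![(0, &"v3"), (1, &"v2"), (2, &"v1")]);
}
```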
Blacksmith runners have a lot of variance in performance, making it hard
for Nyrkiö to do its job. Discussed on [Discord](https://discord.com/channels/1258658826257961020/1402269486752469085)
Reviewed-by: Henrik Ingo <henrik@nyrk.io>
Closes #2448