This is only used for returning LimboResult::Busy, and we already
have LimboError::Busy, so it only adds confusion.
Moreover, the current busy handler was not handling LimboError::Busy,
because it is returned as an Err, not as Ok. So this may fix the
"busy handler not working" issue in the perf throughput benchmark.
This PR extends the existing encryption support to include the database
header page (page 1).
Reviewed-by: Avinash Sajjanshetty (@avinassh)
Closes #3040
This adds basic support for window functions. For now:
* Only existing aggregate functions can be used as window functions.
* Specialized window-specific functions (`rank`, `row_number`, etc.) are
not yet supported.
* Only the default frame definition is implemented:
`RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW EXCLUDE NO OTHERS`.
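A rough model of that default frame in plain Rust (illustrative only, not the engine code): with an ORDER BY, each row's frame spans from the start of the partition through the current row and its peers, i.e. rows with an equal ORDER BY key.
```
// Models SUM(v) OVER (ORDER BY k) with the default frame.
// `rows` is one partition, already sorted by its ORDER BY key `k`.
fn running_sums(rows: &[(i64, i64)]) -> Vec<i64> {
    rows.iter()
        .map(|&(key, _)| {
            rows.iter()
                // RANGE frames include peers: every row whose ORDER BY
                // key is <= the current key belongs to the frame.
                .take_while(|&&(k, _)| k <= key)
                .map(|&(_, v)| v)
                .sum()
        })
        .collect()
}
```
For example, `running_sums(&[(1, 10), (1, 20), (2, 5)])` yields `[30, 30, 35]`: the two rows with key 1 are peers, so both see the full 30.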
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes #3079
We have not implemented them before because they require the raw
elements to be kept. It is easy to see why in the following example:
```
current_min = 3;
insert(2) => current_min = 2 // can be done without state
delete(2) => needs to look at the state to determine new min!
```
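The same point as runnable Rust, modeling the kept raw elements with an ordered set (illustrative only):
```
use std::collections::BTreeSet;

fn main() {
    let mut elements = BTreeSet::from([3]);
    elements.insert(2);
    assert_eq!(elements.first(), Some(&2)); // insert: no history needed
    elements.remove(&2);
    // delete: only answerable because the raw elements were kept
    assert_eq!(elements.first(), Some(&3));
}
```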
The aggregator state was a very simple key-value structure. To
accommodate min/max, we will turn it into a more elaborate table in
which we can encode richer structures.
The key insight is that we can use a primary key composed of:
```
1) storage_id
2) zset_id,
3) element
```
The storage_id and zset_id are our previous key, except they are now
exploded to support a larger range of storage_id. With more bits
available in the storage_id, we can encode which column we are
storing: for aggregations over multiple columns, we need to keep a
separate list of values per column for min/max!
The element is just the values of the columns.
Because this is a primary key, the data will be sorted in the btree. We
can then just do a prefix search in the first two components of the key
and easily find the min/max when needed.
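A minimal model of that lookup, using a `BTreeMap` with tuple keys in place of the real btree (all values are illustrative):
```
use std::collections::BTreeMap;

fn main() {
    // (storage_id, zset_id, element) -> weight; tuple keys sort
    // lexicographically, just like the composite primary key.
    let mut state: BTreeMap<(u64, u64, i64), i64> = BTreeMap::new();
    state.insert((1, 7, 3), 1);
    state.insert((1, 7, 2), 1);
    state.insert((1, 8, 9), 1); // different zset_id, outside the prefix

    // Prefix search on (storage_id = 1, zset_id = 7): the first entry
    // in the range is the min, the last is the max.
    let mut prefix = state.range((1, 7, i64::MIN)..=(1, 7, i64::MAX));
    assert_eq!(prefix.next(), Some((&(1, 7, 2), &1))); // min
    assert_eq!(prefix.next_back(), Some((&(1, 7, 3), &1))); // max
}
```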
This new format is also adequate for joins. Joins will just have a new
storage_id which encodes two "columns" (left side, right side).
Closes #3143
We start a pager read transaction at the beginning of the MV transaction, because
any reads we do from the database file and WAL must uphold snapshot isolation.
However, we must end and immediately restart the read transaction before committing.
This is because other transactions may have committed writes to the DB file or WAL,
and our pager must read in those changes when applying our writes; otherwise we would overwrite
the changes from previously committed transactions.
Note that this would be incredibly unsafe in the regular transaction model, but in MVCC we trust
the MV-store to uphold the guarantee that no write-write conflicts happened.
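In sketch form (stub types standing in for the real pager; this is not the actual turso API):
```
struct Pager;
impl Pager {
    fn end_read_tx(&mut self) {}
    fn begin_read_tx(&mut self) -> Result<(), ()> { Ok(()) }
    fn apply_writes(&mut self) -> Result<(), ()> { Ok(()) }
}

fn commit_mv_tx(pager: &mut Pager) -> Result<(), ()> {
    // The read snapshot taken at BEGIN may be stale by now: other
    // transactions may have committed to the DB file or WAL since.
    pager.end_read_tx();
    // Restart the read tx so those pages become visible and our own
    // writes do not overwrite them.
    pager.begin_read_tx()?;
    // Only safe because the MV-store has already guaranteed that no
    // write-write conflicts occurred for this transaction.
    pager.apply_writes()
}
```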
We must iterate the row versions in reverse order because the
versions are in order of oldest to newest, and we must commit
the newest version applied by the active transaction.
In insert_version_raw(), we correctly iterate the versions backwards
because we want to find the newest version that is still older than
the one we are inserting.
However, the order of `.enumerate()` and `.rev()` was wrong, so the
insertion position was calculated based on the position in the
_reversed_ iterator, not the original iterator.
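The shape of the bug, with an illustrative `RowVersion` (not the actual type):
```
struct RowVersion { begin_ts: u64 }

// `versions` is ordered oldest -> newest; find the index of the newest
// version still older than `new_ts`, scanning newest-first.
fn insertion_pos(versions: &[RowVersion], new_ts: u64) -> Option<usize> {
    // Buggy: `.rev().enumerate()` -- `i` counts positions in the
    // reversed iterator, not indices into `versions`.
    // Fixed: `.enumerate().rev()` attaches the original index first,
    // then reverses, so `i` still indexes `versions`.
    versions
        .iter()
        .enumerate()
        .rev()
        .find(|(_, v)| v.begin_ts < new_ts)
        .map(|(i, _)| i)
}
```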
This removes 4 crates from the `cargo build` graph and tries to ensure
that, in the future, we avoid pulling in the same crates at different
versions.
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes #3141
This also changes the schema of the main table. I have come to see the
current key-value schema as inadequate for non-aggregate operators.
Calculating Min/Max, for example, doesn't fit in this schema because
we have to be able to track existing values and index them.
Another alternative is to keep one table per operator type, but this
quickly leads to an explosion of tables.
The current logic can lead to a situation where:
- we call read_page(trunk_page_id)
- we assign trunk_page in the FreePageState state machine
- the page read fails and the cache marks the page as !locked && !loaded
- the next call to Pager::free_page() asserts that the page is loaded and panics
Based on #3126
Closes #3029
Closes #3030
Closes #3065
Closes #3083
Closes #3084
Closes #3085
Simple reason why mvcc update didn't work: it didn't try to update.
Closes #3127
This PR fixes incorrect path registration for sync in the browser,
adds tests, and also exposes the revision string in the `stats()`
method of the synced database.
Closes #3124