In #2521, I messed up and introduced an improper calculation of the current
checkpoint's max safe frame (mostly due to incorrect comments that I had
left on the method).
The confusion partially stems from our lack of `Busy` handling at the
moment, but essentially: when determining the max safe frame for all
readers in passive mode, we cannot simply `break` out of the loop when
we find a reader with a lower read mark than ours, because _another_
reader might have an even _lower_ read mark, and we would wrongly proceed
with the first mark < `shared_max`.
For non-passive modes, we still attempt to backfill up to that same lower
frame; we just return `Busy` at the end, after backfilling what we can
(we just don't reset the log for restart/truncate).
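A minimal, runnable sketch of the core of the fix (`read_marks` and `shared_max` are stand-in names; real read marks have more structure than plain integers):

```rust
/// Max safe frame = the minimum read mark across *all* readers, capped at
/// `shared_max`. An early `break` on the first mark below `shared_max`
/// would miss a later reader holding an even lower mark.
fn max_safe_frame(read_marks: &[u64], shared_max: u64) -> u64 {
    read_marks.iter().copied().fold(shared_max, u64::min)
}

fn main() {
    // The reader at 90 is below shared_max = 95, but the reader at 40 is
    // lower still; stopping at the first low mark would wrongly return 90.
    assert_eq!(max_safe_frame(&[90, 40, 100], 95), 40);
}
```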
Most of the changes in this PR are just renames of the `CheckpointResult`
fields, because the names were confusing.
Closes #2560
We have to update the Transaction State before checking the Schema
Cookie so that we can correctly roll back the transaction later on.
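A toy sketch of the ordering in question (every name here is a stand-in for illustration, not the actual API):

```rust
enum TxState { None, Write }

fn begin_write(state: &mut TxState, cookie_ok: bool) -> Result<(), &'static str> {
    *state = TxState::Write; // 1. update the transaction state first
    if !cookie_ok {
        // 2. the cookie check failed; because the state was already
        //    updated, the rollback path sees an active transaction
        //    and can undo it correctly.
        *state = TxState::None; // rollback
        return Err("schema changed");
    }
    Ok(())
}
```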
Closes #2535
Closes #2549
Mainly, the performance impact here comes from removing some unnecessary
checks and inlining `read_integer_fast()` directly into `op_column()`,
but I also added some fiddly nano-optimizations for fun.
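For context, here is a runnable sketch of what such an integer fast path over the SQLite record format looks like (serial types 1..=6 are big-endian signed integers of 1, 2, 3, 4, 6, and 8 bytes); the actual `read_integer_fast()` in the codebase may differ in details:

```rust
/// Decode a big-endian signed integer from a record payload, given its
/// SQLite serial type. Sketch only.
#[inline(always)]
fn read_integer_fast(buf: &[u8], serial_type: u8) -> i64 {
    match serial_type {
        1 => buf[0] as i8 as i64,
        2 => i16::from_be_bytes([buf[0], buf[1]]) as i64,
        3 => {
            let v = (i32::from(buf[0]) << 16) | (i32::from(buf[1]) << 8) | i32::from(buf[2]);
            // Sign-extend the 24-bit value through a shift round-trip.
            ((v << 8) >> 8) as i64
        }
        4 => i32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]) as i64,
        5 => {
            // 48-bit: place the 6 bytes in the low end of an i64, then
            // sign-extend through a 16-bit shift round-trip.
            let mut b = [0u8; 8];
            b[2..].copy_from_slice(&buf[..6]);
            (i64::from_be_bytes(b) << 16) >> 16
        }
        6 => i64::from_be_bytes(buf[..8].try_into().unwrap()),
        _ => unreachable!("not an integer serial type"),
    }
}
```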
On main, we are roughly 3.4x slower than SQLite on `SELECT * FROM users
LIMIT 100`, and here we are roughly 3.2x slower, which ain't much, but
it's honest work.
A more impactful optimization, but a much more annoying refactor, would
be #2304.
Closes #2516
- When the rowid is changed in an UPDATE, it is handled as a combination of DELETE + INSERT,
so we don't need to delete the old values in that case
- We should only update the views after the operation on the btree is done
- A proper state machine is needed to handle IO yielding points (a sketch follows this list)
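A minimal sketch of what such a state machine could look like (the enum and names are mine for illustration, not the PR's actual types):

```rust
/// Hypothetical states for a rowid-changing UPDATE: each IO yield returns
/// to the caller, which later re-enters `step` at the same state.
enum UpdateRowidState {
    DeleteOld,   // DELETE half: remove the row under the old rowid
    InsertNew,   // INSERT half: write the row under the new rowid
    UpdateViews, // views are updated only after the btree work is done
    Done,
}

enum StepResult {
    Io, // pending IO; call `step` again once it completes
    Done,
}

// `io_pending` stands in for the result of the actual btree call.
fn step(state: &mut UpdateRowidState, io_pending: bool) -> StepResult {
    loop {
        match state {
            UpdateRowidState::DeleteOld => {
                if io_pending {
                    // Yield without advancing: we resume right here.
                    return StepResult::Io;
                }
                *state = UpdateRowidState::InsertNew;
            }
            UpdateRowidState::InsertNew => *state = UpdateRowidState::UpdateViews,
            UpdateRowidState::UpdateViews => *state = UpdateRowidState::Done,
            UpdateRowidState::Done => return StepResult::Done,
        }
    }
}
```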
Convert the Sorter code to use state machines and stop ignoring
completions. This also simplifies some logic that seemed redundant to me.
I was also getting IO errors because we were opening one file per chunk,
so I fixed this by using only one file per sorter and addressing each
chunk by its offset within that file.
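The resulting layout, roughly (field names here are hypothetical):

```rust
use std::fs::File;

/// One sorted run, addressed by its byte range inside the shared file.
struct SortChunk {
    offset: u64, // where this chunk starts in the sorter's single file
    len: u64,    // chunk size in bytes
}

/// One file per sorter instead of one file per chunk, avoiding the IO
/// errors that came from opening too many files.
struct Sorter {
    file: File,             // the single backing file
    chunks: Vec<SortChunk>, // spilled runs, each a slice of `file`
    next_offset: u64,       // append position for the next spilled chunk
}
```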
Builds on top of #2520
Closes #2473
Currently, the simulator complains of the following error:
```
Error: failed with error: 'attempt to multiply with overflow'
```
However, we don't enable views in the simulator, so -- despite being an
issue -- we should never see this. Let's fix `op_delete()` some more so
that it doesn't even call `rowid()` unless view processing is enabled.
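Something along these lines (a sketch with stand-in names; `rowid()` below models the call through which the overflow was reachable):

```rust
fn rowid() -> Result<i64, String> {
    Ok(42) // stand-in for the real cursor.rowid()
}

fn op_delete_sketch(views_enabled: bool) -> Result<(), String> {
    // Only materialize the rowid when view maintenance will consume it;
    // with views disabled (the simulator), rowid() is never called.
    let _rowid = if views_enabled { Some(rowid()?) } else { None };
    // ... perform the actual delete ...
    Ok(())
}
```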
Implement very basic views using DBSP
This is just the bare minimum that I needed to convince myself that this
approach will work. The only views that we support are slices of the
main table: no aggregations, no joins, no projections.
* drop view is implemented.
* view population is implemented.
* deletes, inserts and updates are implemented.
Much like indexes before, a flag must be passed to enable views.
Closes #2530
When building views (soon), it will be important to know which table
is being deleted, and getting that from the cursor id is very cumbersome.
What we are doing here is symmetrical to `op_insert`; SQLite also
passes table information in one of the operands (p4).
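Conceptually, the Delete instruction now carries the table information the way Insert already does (the variant shapes below are illustrative, not the real `Insn` definitions):

```rust
enum Insn {
    Insert {
        cursor_id: usize,
        table_name: String, // p4: table info, already present for inserts
    },
    Delete {
        cursor_id: usize,
        table_name: String, // p4: now symmetric, so views know the table
    },
}
```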
The built-in `unreachable!` macro, believe it or not, is just an alias
for `panic!` and does not actually provide the compiler with a hint that
the path is unreachable.
This provides a wrapper around the actual
`std::hint::unreachable_unchecked()`, to be used only in the very hot
path of `execute`, where the variant cannot possibly be incorrect.
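A sketch of such a wrapper (the function name is illustrative): release builds get the real optimizer hint, debug builds keep the checked panic, and taking this path in a release build is undefined behavior.

```rust
/// SAFETY: callers must guarantee this path can never be reached.
#[inline(always)]
pub unsafe fn unreachable_hinted() -> ! {
    if cfg!(debug_assertions) {
        // Debug builds keep the checked panic for easier diagnosis.
        unreachable!("entered code marked as unreachable");
    } else {
        // Release builds: an actual hint to the optimizer, unlike
        // `unreachable!`, which just expands to `panic!`.
        std::hint::unreachable_unchecked()
    }
}
```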
Closes #2459
SQLite generates those in aggregations like min/max when collation
information is present either in the table definition or in the column
expression. We currently generate the wrong result here, and properly
generating the bytecode instruction fixes it.
Closes #2440
## Fix 1
Do not start a read transaction when a SELECT is not going to access the
database, which means we can avoid checking whether the schema has
changed.
## Fix 2
Add a field `accesses_db` to `Program` and `Statement` so we can avoid
even checking for `SchemaUpdated` errors when it's not possible to get
one.
## Fix 3
Avoid doing any work in `commit_txn` when not in a transaction. This
optimization is only enabled when `mv_store.is_none()`, because MVCC has
its own logic and this doesn't work with MVCC enabled, and honestly I'm
too tired to find out why. Left an inline comment about it, though.
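The shape of the Fix 2 check, roughly (`accesses_db` is the new field from this PR; the surrounding types and the fast-path function are illustrative):

```rust
struct Program {
    accesses_db: bool,
    // ...
}

fn run(program: &Program) {
    if !program.accesses_db {
        // A statement like `SELECT 1` can never observe a schema change,
        // so skip the SchemaUpdated check (and the read txn) entirely.
        return run_without_schema_check(program);
    }
    // ... normal path: check the schema cookie, handle SchemaUpdated ...
}

fn run_without_schema_check(_program: &Program) {
    // illustrative fast path
}
```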
```
Execute `SELECT 1`/limbo_execute_select_1
time: [21.440 ns 21.513 ns 21.586 ns]
change: [-60.766% -60.616% -60.453%] (p = 0.00 < 0.05)
Performance has improved.
```
The effect is even more dramatic in CI, where the latency is down over 80%.
Closes #2441
Closes #1967
To support this, I had to change how we do `epilogue`, similarly to how
SQLite does it. SQLite first declares a `beginWriteOperation` when some
statement is going to necessitate a write transaction. And since we now
need to pass the current schema cookie to `epilogue`, it was easier to
call `epilogue` in only one location (like we do with `prologue`) and
just have each statement declare its intentions separately. This allows
us to not have to pass the Schema around just to do the epilogue. I
believe this is something that @jussisaurio would be interested in.
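In sketch form (all names besides `epilogue`/`prologue` are illustrative; `begin_write_operation` mirrors SQLite's `beginWriteOperation`):

```rust
struct ProgramBuilder {
    needs_write_txn: bool,
}

impl ProgramBuilder {
    /// Each writing statement declares its intent during translation...
    fn begin_write_operation(&mut self) {
        self.needs_write_txn = true;
    }

    /// ...so the epilogue can be emitted in one central place, with the
    /// current schema cookie, instead of threading the Schema everywhere.
    fn epilogue(&mut self, schema_cookie: u32) {
        let _ = (self.needs_write_txn, schema_cookie);
        // emit the transaction-closing opcodes here
    }
}
```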
~Also had to disable the MVCC test, as it was extremely buggy for me.~
Just disabled statement repreparation for MVCC.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #2214