Based on #3126.
Closes #3029, closes #3030, closes #3065, closes #3083, closes #3084, closes #3085.
The simple reason why MVCC update didn't work: it didn't try to update.
Closes #3127
This PR fixes incorrect path registration for sync in the browser, adds tests, and also exposes the revision string in the `stats()` method of the synced database.
Closes #3124
I searched with DeepWiki for how SQLite implements its busy handler. It uses a callback system with exponential backoff, storing the callback in both the pager and the database. I confess I found this slightly confusing, so I just implemented a simple exponential backoff directly in the `Statement` struct. I imagine SQLite does this in a more convoluted manner because it does not have a concept of yielding as we do.
https://deepwiki.com/search/where-is-the-code-for-the-busy_4a5ed006-4eed-479f-80c3-dd038832831b
I also fixed the Rust bindings so that they yield when we return `StepResult::IO`, instead of just blocking the async function. To achieve this I implemented the `Stream` trait for the `Rows` struct, which unfortunately came with a slight change to the function signature: `rows.next()` becomes `rows.try_next()`.
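For illustration, consuming rows through the new `Stream`-based API looks roughly like this (a minimal sketch; the `query` signature, parameter type, and error type are assumptions about the bindings, not verbatim API):

```rust
use futures_util::TryStreamExt; // `try_next` is available because `Rows` implements `Stream`

// Hypothetical helper; `turso::Connection` and `query` stand in for the real bindings.
async fn count_rows(conn: &turso::Connection) -> anyhow::Result<u64> {
    let mut rows = conn.query("SELECT id, name FROM users", ()).await?;
    let mut count = 0;
    // Each `try_next().await` yields back to the executor whenever the core
    // returns `StepResult::IO`, instead of busy-looping inside the async fn.
    while let Some(_row) = rows.try_next().await? {
        count += 1;
    }
    Ok(count)
}
```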
EDIT:
~The test `test_multiple_connections_fuzz` times out because the busy handler now "slows" things down (this test generates a lot of busy transactions), so the test takes a lot longer to run. Not sure if it is acceptable for us to reduce the number of operations so the test is shorter.~
EDIT:
Adjusted the API to be more in line with
https://www.sqlite.org/c3ref/busy_timeout.html.
It sets the maximum total accumulated timeout. If the duration is `None` or zero, we unset the busy handler for this `Connection`.
This API differs slightly from SQLite's: instead of sleeping for the linear amount of time specified by the user, we sleep in doubling phases until the total amount of time requested is reached. This means we first sleep for 1 ms; then, if we still return Busy, we sleep for 2 ms, and so on, up to a maximum of 100 ms per phase, until we reach the total timeout (see the sketch after the example below).
Example:
1. Set the duration to 5 ms
2. Step through query -> returns Busy -> sleep/yield for 1 ms
3. Step through query -> returns Busy -> sleep/yield for 2 ms
4. Step through query -> returns Busy -> sleep/yield for 2 ms (totaling
5 ms of sleep)
5. Step through query -> returns Busy -> return Busy to user
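A minimal sketch of that phase logic (names are illustrative, not the actual `Statement` internals):

```rust
use std::time::Duration;

const MAX_PHASE: Duration = Duration::from_millis(100);

// How long to sleep/yield for this Busy result, or None once the accumulated
// budget is exhausted (at which point Busy is surfaced to the user).
fn next_sleep(slept_so_far: Duration, phase: Duration, total: Duration) -> Option<Duration> {
    if slept_so_far >= total {
        return None;
    }
    // Cap a single phase at 100 ms, and clamp the final phase to the remaining
    // budget (step 4 above sleeps 2 ms instead of 4 ms to respect the 5 ms total).
    Some(phase.min(MAX_PHASE).min(total - slept_so_far))
}
// The caller doubles `phase` after every Busy step: 1 ms, 2 ms, 4 ms, ...
```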
This slight API change demonstrated better throughput in the `perf/throughput/turso` benchmark:
```sh
cargo run -p write-throughput --release -- -t 2
Running write throughput benchmark with 2 threads, 100 batch size, 10 iterations, mode: Legacy
Database created at: write_throughput_test.db
Thread 1: 1000 inserts in 0.04s (23438.42 inserts/sec)
Thread 0: 1000 inserts in 0.08s (12385.64 inserts/sec)
=== BENCHMARK RESULTS ===
Total inserts: 2000
Total time: 0.08s
Overall throughput: 24762.60 inserts/sec
Threads: 2
Batch size: 100
Iterations per thread: 10
Database file exists: true
Database file size: 4096 bytes
```
Depends on #3102.
Closes #3067, closes #3074.
1. The commit state machine was assuming that `begin_write_tx()` cannot fail, but it can fail if there is another tx that is not using BEGIN CONCURRENT.
2. If a brand new non-CONCURRENT transaction attempts to start an exclusive transaction but fails with Busy, we must end the pager read tx it just started, because otherwise the next time it attempts to do something it will panic with "cannot start a new read tx without ending an existing one" (see the sketch below).
On the main branch, MVCC allows concurrent inserts from multiple txns even without BEGIN CONCURRENT, and then always hangs whenever one of the txns tries to commit. This commit fixes that issue.
This patch adds checksums to Turso DB. You can find the design in the [RFC](https://github.com/tursodatabase/turso/issues/2178).
1. We use the reserved bytes (8 bytes) to store the checksums. On every IO read, we verify that the checksum matches.
2. We use twox hash for checksums (see the sketch after this list).
3. Checksums work only on 4K pages for now. It's a small change to enable them for all other sizes; I will send another PR.
4. Right now, it's not possible to switch to a different algorithm or to turn checksums off altogether. That will be added in future PRs.
5. Checksums can be enabled only for new DBs. For existing DBs, they will be disabled.
6. To add checksums to existing DBs, we need VACUUM, since it would require a rewrite of the whole DB.
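A minimal sketch of the scheme (the seed, byte order, and exact placement within the reserved region are assumptions for illustration, not the patch's actual layout):

```rust
use std::hash::Hasher;
use twox_hash::XxHash64; // the "twox hash" mentioned above

const PAGE_SIZE: usize = 4096;
const CHECKSUM_LEN: usize = 8; // the reserved bytes at the end of each page

fn checksum(payload: &[u8]) -> u64 {
    let mut hasher = XxHash64::with_seed(0); // seed choice is an assumption
    hasher.write(payload);
    hasher.finish()
}

// On write: stamp the checksum of the page body into the reserved trailing bytes.
fn write_checksum(page: &mut [u8; PAGE_SIZE]) {
    let sum = checksum(&page[..PAGE_SIZE - CHECKSUM_LEN]);
    page[PAGE_SIZE - CHECKSUM_LEN..].copy_from_slice(&sum.to_be_bytes());
}

// On read: recompute and compare with what is stored.
fn verify_checksum(page: &[u8; PAGE_SIZE]) -> bool {
    let stored = u64::from_be_bytes(page[PAGE_SIZE - CHECKSUM_LEN..].try_into().unwrap());
    stored == checksum(&page[..PAGE_SIZE - CHECKSUM_LEN])
}
```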
Closes #2840
- The code now prevents dropping or indexing `sqlite_sequence`.
- Makes sure that AUTOINCREMENT only works on a single `INTEGER PRIMARY KEY`.
- Handles `i64::MAX` gracefully by returning `SQLITE_FULL`.
- AUTOINCREMENT now works in both column and table constraints (example below).
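Both placements are now accepted; a hypothetical usage sketch via the Rust bindings (the `execute` call is an assumption, the SQL is what matters):

```rust
// AUTOINCREMENT as a column constraint:
conn.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY AUTOINCREMENT)", ()).await?;
// ...and inside a table-level PRIMARY KEY constraint:
conn.execute("CREATE TABLE t2 (id INTEGER, PRIMARY KEY (id AUTOINCREMENT))", ()).await?;
```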
Currently, this is effectively a no-op because, at the optimization stage, window function expressions are in the form `win_func(subquery_column1, subquery_column2, ...)`.
Nevertheless, expressions are rewritten to maintain consistency with
aggregates, which also hold cloned expressions from sources like result
columns. This ensures future changes in the optimizer won’t break window
function handling.
Adds initial support for window functions. For now, only existing aggregate functions can be used as window functions; no specialized window-specific functions are supported yet.
Currently, only the default frame definition is implemented:
`RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW EXCLUDE NO OTHERS`.
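For concreteness, the kind of query this enables, sketched via the Rust bindings (hypothetical `query` call; the SQL is the point: an existing aggregate used as a window function with the implicit default frame):

```rust
let mut rows = conn
    .query(
        "SELECT dept, salary,
                SUM(salary) OVER (PARTITION BY dept ORDER BY salary) AS running_total
         FROM employees",
        (),
    )
    .await?;
```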
Two main reasons for this change:
* Improve readability by moving the logic for this special case closer
to the code that relies on it.
* Decouple `AggFunc` from the `Aggregate` struct. In the future, window function processing will use `AggFunc` directly, without necessarily depending on `Aggregate`.