This adds a new fuzz test case to verify that any query returns the same
results with and without a rowid alias.
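For context, a column declared `INTEGER PRIMARY KEY` is an alias for
the rowid, so any query should behave identically whether or not the
alias is declared. A minimal sketch of the property, shown with the
`rusqlite` crate for brevity (the real fuzzer drives Turso itself and
generates the queries):
```
use rusqlite::Connection;

fn fetch(conn: &Connection, sql: &str) -> rusqlite::Result<Vec<(i64, String)>> {
    let mut stmt = conn.prepare(sql)?;
    let rows = stmt.query_map([], |r| Ok((r.get(0)?, r.get(1)?)))?;
    rows.collect()
}

fn main() -> rusqlite::Result<()> {
    // The same table, with and without a rowid alias on `id`.
    let with_alias = Connection::open_in_memory()?;
    with_alias.execute_batch("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT);")?;
    let without_alias = Connection::open_in_memory()?;
    without_alias.execute_batch("CREATE TABLE t (id INTEGER, v TEXT);")?;
    for conn in [&with_alias, &without_alias] {
        conn.execute_batch("INSERT INTO t VALUES (1, 'a'), (2, 'b');")?;
    }
    // Any generated query should return identical results on both schemas.
    let sql = "SELECT id, v FROM t WHERE id > 1 ORDER BY id";
    assert_eq!(fetch(&with_alias, sql)?, fetch(&without_alias, sql)?);
    Ok(())
}
```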
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #2952
Rolling back a transaction on error should result in
`connection.auto_commit` being set back to true.
Added a regression test for this in which a UNIQUE constraint violation
rolls back the transaction and a subsequent COMMIT fails.
Currently, our default conflict resolution strategy is ROLLBACK, which
ends the transaction. In SQLite, the default is ABORT, which rolls back
the current statement but allows the transaction to continue.
We should migrate to ABORT as the default once we support
subtransactions.
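A sketch of the scenario, shown with rusqlite and SQLite's
`OR ROLLBACK` conflict clause to mimic Turso's current default (the
actual regression test runs against a Turso connection):
```
use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
    let conn = Connection::open_in_memory()?;
    conn.execute_batch("CREATE TABLE t (id INTEGER UNIQUE);")?;
    conn.execute_batch("BEGIN; INSERT INTO t VALUES (1);")?;
    assert!(!conn.is_autocommit());
    // The UNIQUE violation rolls the whole transaction back.
    assert!(conn.execute("INSERT OR ROLLBACK INTO t VALUES (1)", []).is_err());
    // The rollback must restore auto-commit mode...
    assert!(conn.is_autocommit());
    // ...so COMMIT now has no open transaction and fails.
    assert!(conn.execute_batch("COMMIT").is_err());
    Ok(())
}
```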
Closes #3746
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes #3747
Adds `ALTER TABLE` to the simulator. Currently, no properties
specifically exercise `ALTER TABLE`; the query is only generated in
`Property::Query` or in extension queries.
Conditions to generate `ALTER TABLE` (see the sketch after the lists
below):
- In differential testing, do not generate `ALTER COLUMN`, as SQLite
does not support it.
- If there is only one column, or all columns are present in indexes,
do not generate `DROP COLUMN`, as it would be an error in the database.
- If there are no tables, do not generate `ALTER TABLE` at all.
Some fixes:
- Handle NULL generation in `GTValue` and `LTValue`, as we now have to
handle NULLs because `ADD COLUMN` adds columns filled with NULL.
- Correctly compare NULLs in `binary_compare`.
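A rough sketch of the generation guards and the NULL-aware comparison
described above (all names hypothetical):
```
use std::cmp::Ordering;

// Which ALTER TABLE variants are safe to generate, per the conditions above.
fn alter_table_ops(
    table_count: usize,
    column_count: usize,
    all_columns_indexed: bool,
    differential: bool,
) -> Vec<&'static str> {
    let mut ops = Vec::new();
    if table_count == 0 {
        return ops; // no tables: nothing to alter
    }
    ops.push("RENAME");
    ops.push("ADD COLUMN");
    if !differential {
        ops.push("ALTER COLUMN"); // skipped in differential mode: SQLite lacks it
    }
    if column_count > 1 && !all_columns_indexed {
        ops.push("DROP COLUMN"); // otherwise the statement would be an error
    }
    ops
}

// NULL-aware ordering for binary_compare-style checks: NULL sorts before
// any non-NULL value, and two NULLs compare equal.
fn null_aware_cmp(a: Option<i64>, b: Option<i64>) -> Ordering {
    match (a, b) {
        (None, None) => Ordering::Equal,
        (None, Some(_)) => Ordering::Less,
        (Some(_), None) => Ordering::Greater,
        (Some(x), Some(y)) => x.cmp(&y),
    }
}

fn main() {
    assert_eq!(alter_table_ops(0, 0, false, false), Vec::<&str>::new());
    assert_eq!(alter_table_ops(1, 1, false, true), vec!["RENAME", "ADD COLUMN"]);
    assert_eq!(null_aware_cmp(None, Some(1)), Ordering::Less);
}
```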
Closes #3650
1. Move manual CLI validation into Clap for safer argument handling.
2. Remove the deprecated `with_ascii` flag from `PrettyFields` in
logger initialization.
3. Remove the `log` and `env_logger` dependencies from the simulator in
favor of `tracing`.
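To illustrate the first point, a hedged sketch of parse-time validation
with Clap's derive API (the flag names are hypothetical, not the
simulator's actual CLI):
```
use clap::{Parser, ValueEnum};

#[derive(Clone, Debug, ValueEnum)]
enum Mode {
    Serial,
    Concurrent,
}

#[derive(Parser, Debug)]
struct Args {
    /// Rejecting 0 now happens at parse time instead of in a manual check.
    #[arg(long, default_value_t = 1, value_parser = clap::value_parser!(u32).range(1..))]
    threads: u32,

    /// Invalid values fail with a message listing the valid alternatives.
    #[arg(long, value_enum, default_value = "serial")]
    mode: Mode,
}

fn main() {
    let args = Args::parse();
    println!("{args:?}");
}
```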
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #3533
This PR makes the sync client completely autonomous, as it can now
defer the initial sync.
This opens up the possibility of asynchronously creating the database
in Turso Cloud while giving the user the ability to interact with the
local database straight away.
Closes #3531
The page cache implementation uses a pre-allocated vector (`entries`)
with fixed capacity, along with a custom hash map and freelist. This
design requires expensive upfront allocation when creating a new
connection, which severely impacted performance in workloads that open
many short-lived connections (e.g., our concurrent write benchmarks that
create a new connection per transaction).
Therefore, replace the pre-allocated vector with an intrusive doubly-
linked list. This eliminates the page cache initialization overhead
from connection establishment and also reduces memory usage to only the
entries actually in use. Furthermore, the approach allows us to grow
the page cache with much less overhead.
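A heavily simplified sketch of the idea (not the actual implementation;
lookup, unlinking, eviction, and `Drop` are elided): the list links
live inside each entry, so creating a cache allocates nothing up front
and the cache grows one entry at a time:
```
use std::ptr;

// Each entry embeds its own list links (intrusive): the entry is the node.
struct Entry {
    page_no: u64,
    prev: *mut Entry,
    next: *mut Entry,
}

struct PageCache {
    head: *mut Entry, // most recently used
    tail: *mut Entry, // least recently used, evicted first
    len: usize,
}

impl PageCache {
    // O(1) and allocation-free, unlike a pre-allocated entry vector.
    fn new() -> Self {
        PageCache { head: ptr::null_mut(), tail: ptr::null_mut(), len: 0 }
    }

    // Grows one entry at a time; no fixed capacity to pay for up front.
    fn push_front(&mut self, page_no: u64) {
        let e = Box::into_raw(Box::new(Entry {
            page_no,
            prev: ptr::null_mut(),
            next: self.head,
        }));
        unsafe {
            match self.head.as_mut() {
                Some(old_head) => old_head.prev = e,
                None => self.tail = e, // first entry is also the tail
            }
        }
        self.head = e;
        self.len += 1;
    }
}

fn main() {
    let mut cache = PageCache::new(); // no up-front allocation
    cache.push_front(1);
    cache.push_front(2);
    assert_eq!(cache.len, 2);
    unsafe {
        assert_eq!((*cache.head).page_no, 2);
        assert_eq!((*(*cache.head).next).page_no, 1);
        assert_eq!((*cache.tail).page_no, 1);
        assert!((*cache.head).prev.is_null());
    }
}
```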
The patch improves the concurrent write throughput benchmark by roughly
4x in the single-threaded case.
Before:
```
$ write-throughput --threads 1 --batch-size 100 -i 1000 --mode concurrent
Running write throughput benchmark with 1 threads, 100 batch size, 1000 iterations, mode: Concurrent
Database created at: write_throughput_test.db
Thread 0: 100000 inserts in 3.82s (26173.63 inserts/sec)
```
After:
```
$ write-throughput --threads 1 --batch-size 100 -i 1000 --mode concurrent
Running write throughput benchmark with 1 threads, 100 batch size, 1000 iterations, mode: Concurrent
Database created at: write_throughput_test.db
Thread 0: 100000 inserts in 0.90s (110848.46 inserts/sec)
```
Closes #3456
Depends on #3272.
First big step towards: #1851
- Add an ignore-error flag to `Interaction` to ignore parse errors when
needed while still properly reporting other errors from intermediate
queries.
- Adjust shrinking to accommodate transaction statements from different
connections and properly remove extension queries from some properties.
- MVCC: generate `BEGIN CONCURRENT` and `COMMIT` statements that are
interleaved to test snapshot isolation between connection transactions
(see the sketch after this list).
- MVCC: if the next interactions are going to contain a DDL statement,
first commit all transactions and execute the DDL statements serially.
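A minimal sketch of the kind of schedule the generator aims to produce
(table and values hypothetical); under snapshot isolation, connection 2
must not observe connection 1's insert even after it commits, because
connection 2's snapshot was taken earlier:
```
fn main() {
    // (connection, statement) pairs; the generator interleaves the streams.
    let schedule = [
        (1, "BEGIN CONCURRENT"),
        (2, "BEGIN CONCURRENT"),
        (1, "INSERT INTO t VALUES (42)"),
        (2, "SELECT count(*) FROM t"), // sees 0: conn 1 has not committed
        (1, "COMMIT"),
        (2, "SELECT count(*) FROM t"), // still 0: snapshot predates the commit
        (2, "COMMIT"),
    ];
    for (conn, sql) in schedule {
        println!("conn {conn}: {sql}");
    }
}
```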
Closes #3278
We used i64 before because that is the size of an integer in SQLite.
However, I believe that for large enough databases the chance of a
collision is just too high. The effect of a collision is the database
silently returning incorrect data in the materialized view.
So, now that everything else is working, we should move to i128.
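For a rough sense of the risk, assuming the i64 values behave like
uniformly random 64-bit keys, the birthday bound puts the probability
of at least one collision among n b-bit keys at about
1 - exp(-n^2 / 2^(b+1)):
```
// Birthday-bound estimate of hash-collision probability.
fn collision_probability(n: f64, bits: i32) -> f64 {
    let x = (n * n) / 2f64.powi(bits + 1);
    -(-x).exp_m1() // 1 - exp(-x), accurate even for tiny x
}

fn main() {
    let n = 2f64.powi(32); // ~4.3 billion values
    println!("64-bit:  {:.2}", collision_probability(n, 64)); // ~0.39
    println!("128-bit: {:.2e}", collision_probability(n, 128)); // ~2.7e-20
}
```
At around four billion values, a 64-bit collision is already close to a
coin flip, while a 128-bit collision is negligible.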
In the hope of doing a good job of teaching people what Turso can do, I
am adding built-in manual pages. When the CLI starts, it picks a
feature at random and tells the user that the feature exists:
```
Turso v0.2.0-pre.8
Enter ".help" for usage hints.
Did you know that Turso supports Change Data Capture? Type .manual cdc to learn more.
This software is ALPHA, only use for development, testing, and experimentation.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database
```
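A hypothetical sketch of the tip picker (the topic list is made up
apart from CDC, which appears above; seeding from the clock just avoids
pulling in a rand dependency):
```
use std::time::{SystemTime, UNIX_EPOCH};

// (.manual topic, human-readable feature name)
const TOPICS: &[(&str, &str)] = &[
    ("cdc", "Change Data Capture"),
    ("views", "materialized views"),
    ("manual", "built-in manual pages"),
];

fn main() {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .subsec_nanos() as usize;
    let (topic, name) = TOPICS[nanos % TOPICS.len()];
    println!("Did you know that Turso supports {name}? Type .manual {topic} to learn more.");
}
```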
There is a lot we can do to make this feature world class:
- We can automatically compile examples at build time, like rustdoc
does, to make sure the examples used in the manuals always work.
- We can implement scrolling and navigation.
- We can document a lot more features.
But for now, this is a start!
- Fixed some incorrect code when running interactions in differential
testing: instead of replacing the state that was used for running the
interaction, I naively just incremented the interaction pointer.
- Adjusted the comparison to check returned values without considering
the order of the returned rows (see the sketch after this list).
- Added a differential testing run to CI.
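A minimal sketch of the order-insensitive comparison: treat each result
set as a multiset of rows by sorting both sides before comparing
(assuming rows are already rendered to comparable values):
```
fn rows_equal_unordered(mut a: Vec<Vec<String>>, mut b: Vec<Vec<String>>) -> bool {
    a.sort();
    b.sort();
    a == b
}

fn main() {
    let ours = vec![vec!["2".to_string()], vec!["1".to_string()]];
    let sqlite = vec![vec!["1".to_string()], vec!["2".to_string()]];
    assert!(rows_equal_unordered(ours, sqlite)); // same rows, different order
}
```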
Closes #3235
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #3255
- Made the code around snapshot isolation more ergonomic, with each
connection having its own transaction state. Also, when shadowing, we
pass a `ShadowTablesMut` object that dynamically uses either the
committed tables or the connection tables, depending on the transaction
state (see the sketch after this list).
- Added a begin-concurrent transaction before every property when MVCC
is enabled (this is just so we can have some MVCC code tested by the
simulator under `BEGIN CONCURRENT`; I have not yet implemented the
logic for concurrent transactions in the simulator).
- Made some small enhancements to shrinking.
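A hypothetical sketch of the `ShadowTablesMut` idea, with the types
heavily simplified: a mutable view that resolves to the committed
tables or to the connection's private tables depending on the
transaction state:
```
use std::collections::HashMap;

type Tables = HashMap<String, Vec<Vec<String>>>; // table name -> rows

enum TxState {
    None,   // auto-commit: writes go to the committed tables
    Active, // inside BEGIN CONCURRENT: writes go to the connection's tables
}

struct ShadowTablesMut<'a> {
    committed: &'a mut Tables,
    connection: &'a mut Tables,
    state: &'a TxState,
}

impl<'a> ShadowTablesMut<'a> {
    // Shadowing code calls this and never needs to know which set of
    // tables is currently in effect.
    fn tables_mut(&mut self) -> &mut Tables {
        match self.state {
            TxState::None => self.committed,
            TxState::Active => self.connection,
        }
    }
}

fn main() {
    let mut committed = Tables::new();
    let mut connection = Tables::new();

    let state = TxState::None;
    ShadowTablesMut { committed: &mut committed, connection: &mut connection, state: &state }
        .tables_mut()
        .insert("t".into(), vec![]);
    assert!(committed.contains_key("t"));

    let state = TxState::Active;
    ShadowTablesMut { committed: &mut committed, connection: &mut connection, state: &state }
        .tables_mut()
        .insert("u".into(), vec![]);
    assert!(connection.contains_key("u") && !committed.contains_key("u"));
}
```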
TODOs:
- Implement proper logic for concurrent transactions without
write-write conflicts. This means that when generating the plans, we
need to make sure we do not generate rows that will conflict with rows
in other transactions. This is slightly more powerful than what we do
in the fuzzer, where we just treat `WriteWriteConflict` as an
acceptable error. By baking this no-conflict approach into the
simulator, we can continuously test both what does not trigger a
`WriteWriteConflict` and snapshot isolation.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #3226