This PR implements basic support for partial sync. For now the scope
is limited to `:memory:` IO only; support for file-based IO will come
later.
The main addition is `PartialDatabaseStorage`, which makes requests to
the remote server for missing local pages on demand.
The other main change is that the tursodatabase JS bindings now accept
an optional "external" IO event loop which, in the sync case, drives the
`ProtocolIo` internal work associated with remote page fetching tasks.
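In rough terms, a page read first consults local storage and falls back to fetching from the remote server. A minimal synchronous sketch of that shape (names are illustrative, not the actual `PartialDatabaseStorage` API, which drives this work asynchronously through the `ProtocolIo` event loop):

```rust
use std::collections::HashMap;

/// Illustrative stand-in for the remote protocol client.
trait RemotePages {
    fn fetch_page(&self, page_no: u32) -> Vec<u8>;
}

/// Sketch of on-demand page resolution: serve locally present pages,
/// otherwise pull the page from the remote server and cache it.
struct PartialStorageSketch<R: RemotePages> {
    local: HashMap<u32, Vec<u8>>,
    remote: R,
}

impl<R: RemotePages> PartialStorageSketch<R> {
    fn read_page(&mut self, page_no: u32) -> &[u8] {
        let remote = &self.remote;
        self.local
            .entry(page_no)
            .or_insert_with(|| remote.fetch_page(page_no))
    }
}
```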
Closes#3931
# Fix: Clean up DBSP state table when dropping materialized views
## Problem
When dropping a materialized view, the internal DBSP state table (e.g.,
`__turso_internal_dbsp_state_v1_view_name`) and its automatic primary
key index were not being properly cleaned up. This caused two issues:
1. **Persistent schema entries**: The DBSP table and index entries
remained in `sqlite_schema` after dropping the view
2. **In-memory schema inconsistency**: The DBSP table remained in the
in-memory schema's `tables` HashMap, causing "table already exists"
errors when trying to recreate a materialized view with the same name
## Root Cause
The issue had two parts:
1. **Missing sqlite_schema cleanup**: The `translate_drop_view` function
deleted the view entry from `sqlite_schema` but didn't delete the
associated DBSP state table and index entries
2. **Missing in-memory schema cleanup**: The `remove_view` function
removed the materialized view from the in-memory schema but didn't
remove the DBSP state table and its indexes
## Solution
### Changes in `core/translate/view.rs`
- Added a second-pass loop in `translate_drop_view` that scans
`sqlite_schema` and deletes the DBSP table and index entries (see the
sketch after this list)
- The loop checks for entries matching the DBSP table name pattern
(`__turso_internal_dbsp_state_v{version}_{view_name}`) and the automatic
index name pattern
(`sqlite_autoindex___turso_internal_dbsp_state_v{version}_{view_name}_1`)
- Registers for comparison values are allocated outside the loop for
efficiency
- Column registers are reused across loop iterations
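For illustration, the matching logic of that second pass looks roughly like the following, expressed as plain Rust over a list of schema entry names rather than the actual VDBE codegen that `translate_drop_view` emits:

```rust
/// Illustrative only: the real implementation emits VDBE instructions
/// in `translate_drop_view`; this models the same matching logic directly.
const DBSP_VERSION: u32 = 1;

fn dbsp_table_name(view_name: &str) -> String {
    format!("__turso_internal_dbsp_state_v{DBSP_VERSION}_{view_name}")
}

fn dbsp_autoindex_name(view_name: &str) -> String {
    format!("sqlite_autoindex_{}_1", dbsp_table_name(view_name))
}

/// Second pass over `sqlite_schema`: drop the DBSP state table entry
/// and its automatic primary-key index entry for the dropped view.
fn drop_dbsp_entries(schema_rows: &mut Vec<String>, view_name: &str) {
    let table = dbsp_table_name(view_name);
    let index = dbsp_autoindex_name(view_name);
    schema_rows.retain(|name| name != &table && name != &index);
}
```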
### Changes in `core/schema.rs`
- Updated `remove_view` to also remove the DBSP state table and its
indexes from the in-memory schema's `tables` HashMap and `indexes`
collection (sketched below)
- This keeps the persistent schema (`sqlite_schema`) and the in-memory
schema consistent
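A minimal sketch of the extended `remove_view`, with simplified stand-ins for the schema collections (the real `Schema` type holds richer values than shown here):

```rust
use std::collections::HashMap;

/// Simplified stand-ins for the in-memory schema collections.
struct SchemaSketch {
    tables: HashMap<String, ()>,           // table name -> definition
    indexes: HashMap<String, Vec<String>>, // table name -> its index names
}

impl SchemaSketch {
    /// Sketch of the extended `remove_view`: besides the view itself,
    /// also drop the DBSP state table and its indexes so the in-memory
    /// schema stays consistent with `sqlite_schema`.
    fn remove_view(&mut self, view_name: &str) {
        self.tables.remove(view_name);
        let dbsp = format!("__turso_internal_dbsp_state_v1_{view_name}");
        self.tables.remove(&dbsp);
        self.indexes.remove(&dbsp);
    }
}
```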
### Tests Added
Added two new test cases in `testing/materialized_views.test`:
1. **`matview-drop-cleans-up-dbsp-table`**: Explicitly verifies that
after dropping a materialized view:
- The view entry is removed from `sqlite_schema`
- The DBSP state table entry is removed from `sqlite_schema`
- The DBSP state index entry is removed from `sqlite_schema`
2. **`matview-recreate-after-drop`**: Verifies that a materialized view
can be successfully recreated after being dropped, which implicitly
tests that all underlying resources (including DBSP tables) are properly
cleaned up
## Testing
- All existing materialized view tests pass
- New tests specifically verify the cleanup behavior
- Manual testing confirms that materialized views can be dropped and
recreated without errors
## Related
This fix ensures that materialized views can be safely dropped and
recreated, resolving issues where the DBSP state table would persist and
cause conflicts.
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes#3928
## What
Rowsets are used in SQLite for two purposes:
1. for membership tests on a set of `i64`s,
2. for in-order iteration of a set of `i64`s,

both in cases where we can just use rowids (which are `i64`) instead of
building an entire ephemeral btree from a table's contents.
For example, in cases where a `DELETE FROM tbl WHERE ...` is performed
on a table that has any `BEFORE DELETE` triggers, SQLite collects the
table's rowids into a RowSet before actually performing the deletion.
This is similar to how an UPDATE that modifies rowids (or the index used
to iterate the UPDATE loop) will first collect the rows into an
ephemeral index, and same with `INSERT INTO ... SELECT`.
## Details
RowSet uses a "batch" concept: insertions within a given batch must be
guaranteed by the caller to contain no duplicates, and are pushed onto a
vector in O(1). When a new batch is started, the previous batch is
folded into a `BTreeSet` so that membership tests can be performed in
O(log n). As far as I can tell, the "in-order iteration" use case
doesn't use this batch logic at all.
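A minimal sketch of the batching idea (names and structure are illustrative, not the actual RowSet implementation):

```rust
use std::collections::BTreeSet;

/// Sketch of the batching idea: the current batch is an append-only
/// Vec, and membership is tested against the BTreeSet of all
/// previously folded batches.
struct RowSetSketch {
    folded: BTreeSet<i64>, // earlier batches, folded for O(log n) lookup
    batch: Vec<i64>,       // current batch; caller guarantees no dupes
}

impl RowSetSketch {
    /// O(1): the caller guarantees the current batch has no duplicates.
    fn insert(&mut self, rowid: i64) {
        self.batch.push(rowid);
    }

    /// Starting a new batch folds the current one into the BTreeSet.
    fn start_new_batch(&mut self) {
        self.folded.extend(self.batch.drain(..));
    }

    /// Membership test against everything folded so far.
    fn contains(&self, rowid: i64) -> bool {
        self.folded.contains(&rowid)
    }
}
```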
## AI disclosure
This entire PR description was written by me - no AIs were harmed in the
production of it. However, the code itself was mostly vibecoded using
two agents in Cursor:
- Composer 1: given the SQLite opcode documentation and rowset.c source
code, and asked to implement the VDBE instructions and the RowSet
module.
- GPT-5: given the same SQLite docs and source code, and asked to review
Composer 1's work and write feedback into a separate markdown file.
This loop was run for roughly 4-5 iterations, feeding GPT-5's feedback
back to Composer 1 each time, until GPT-5 had nothing left to comment
on.
After this, I instructed Composer 1 to improve the documentation to be
less stupid.
After that, I made a manual editing pass over the runtime code to e.g.
change boolean flags to a `RowSetMode` enum to make it clearer that the
rowset has two distinct, mutually exclusive purposes (membership tests
and in-order iteration), plus cleaned up some other dumb shit and added
comments.
I am still not sure if this saved time or not.
Closes#3938
When building text values, we could not pass ownership of newly created
strings, which meant we were often double-cloning strings: one clone to
transform, and one to build the `Value`.
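The gist, as a hypothetical before/after (`Value::from_text` and `Value::build_text` are illustrative names, not the actual turso API):

```rust
/// Illustrative value type; the real one lives in turso's core.
enum Value {
    Text(String),
}

impl Value {
    /// Takes ownership of an already-built String: no extra clone.
    fn from_text(s: String) -> Self {
        Value::Text(s)
    }

    /// Borrowing constructor: forces a clone even when the caller
    /// just built the String and no longer needs it.
    fn build_text(s: &str) -> Self {
        Value::Text(s.to_owned())
    }
}

fn upper_value(input: &str) -> Value {
    let transformed = input.to_uppercase(); // first allocation (transform)
    // Before: Value::build_text(&transformed) cloned a second time.
    // After: ownership is passed, so the transform's String is reused.
    Value::from_text(transformed)
}
```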
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#3932
Right now tursodb treats parameters/variables as non-constant. But
actually they are constant, in the sense that parameters/variables have
a fixed value during query execution which never changes.
This PR makes tursodb treat parameters as constant and evaluate
expressions involving them only once.
One real-world scenario where this can be helpful is vector search
query:
```sql
SELECT id, vector_distance_jaccard(embedding, vector32_sparse(?)) as distance
FROM vectors
ORDER BY distance ASC
LIMIT ?
```
Without constant optimization, the `vector32_sparse` function is
executed for every row, which is very inefficient; the query can be 100x
slower because of it. There is no need to evaluate this function per
row, since the text representation only has to be transformed to binary
once.
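To illustrate the effect, here is the same query shape in plain Rust, with stub stand-ins for the SQL functions; the point is only that the parameter-dependent `vector32_sparse` call is hoisted out of the per-row loop:

```rust
/// Illustrative stand-ins for the SQL functions in the example query.
fn vector32_sparse(text: &str) -> Vec<f32> {
    text.split(',').filter_map(|s| s.trim().parse().ok()).collect()
}

fn vector_distance_jaccard(_a: &[f32], _b: &[f32]) -> f64 {
    0.0 // placeholder; the real distance computation is irrelevant here
}

struct Row {
    id: i64,
    embedding: Vec<f32>,
}

/// With the optimization: the parameter-dependent expression is
/// computed once before the scan, not once per row.
fn scan(rows: &[Row], param: &str) -> Vec<(i64, f64)> {
    let query_vec = vector32_sparse(param); // evaluated exactly once
    rows.iter()
        .map(|r| (r.id, vector_distance_jaccard(&r.embedding, &query_vec)))
        .collect()
}
```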
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes#3936
Depends on #3923.
To get semantics similar to how `op_compare` works, we need to apply an
affinity to the values referenced in the `SeekKey` that is used for
seeking. This means keeping some affinity metadata for the `WhereTerm`s
in the optimization phase; then, before seeking, we emit an affinity
conversion. I had to dig deep into the SQLite code to understand this
better.
Unfortunately, we cannot have just one compare function to rule them all
here, as we have specialized/optimized compare code to handle records
that have not yet been deserialized.
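A simplified sketch of the idea (types and names here are illustrative, not the actual turso API): each seek-key value gets the affinity recorded on its `WhereTerm` applied before the key is built, mirroring what comparison does:

```rust
enum Val {
    Integer(i64),
    Real(f64),
    Text(String),
}

enum Affinity {
    Numeric,
    Text,
}

fn apply_affinity(v: Val, aff: &Affinity) -> Val {
    match (aff, v) {
        // NUMERIC affinity: a text value that looks like a number is
        // converted so the seek compares numerically, like op_compare.
        (Affinity::Numeric, Val::Text(s)) => {
            if let Ok(i) = s.parse::<i64>() {
                Val::Integer(i)
            } else if let Ok(f) = s.parse::<f64>() {
                Val::Real(f)
            } else {
                Val::Text(s)
            }
        }
        (_, v) => v,
    }
}

/// Before seeking, convert every key value using the affinity metadata
/// kept on the corresponding WhereTerm during optimization.
fn build_seek_key(values: Vec<Val>, affinities: &[Affinity]) -> Vec<Val> {
    values
        .into_iter()
        .zip(affinities)
        .map(|(v, aff)| apply_affinity(v, aff))
        .collect()
}
```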
Closes#3707
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#3925
It seems that the build on macOS ARM is failing with `aegis` v0.9.0.
So, here I update `aegis`.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#3561