Rolling back a transaction on Error should result in
`connection.auto_commit` being set back to true.
Added a regression test for this, in which a UNIQUE constraint violation
rolls back the transaction and a subsequent COMMIT fails.
Currently, our default conflict resolution strategy is ROLLBACK, which
ends the transaction. In SQLite, the default is ABORT, which rolls back
the current statement but allows the transaction to continue.
We should migrate to default ABORT once we support subtransactions.
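To make the expected behavior concrete, here is a toy Rust model of the invariant (not turso's actual API, just the state machine the regression test exercises):
```rust
// Toy connection: rolling back must flip the connection back to
// autocommit, so a later COMMIT has no open transaction to commit.
struct Conn {
    auto_commit: bool,
}

impl Conn {
    fn begin(&mut self) {
        self.auto_commit = false;
    }
    fn rollback(&mut self) {
        self.auto_commit = true; // the fix: restore autocommit on rollback
    }
    fn commit(&mut self) -> Result<(), &'static str> {
        if self.auto_commit {
            return Err("cannot commit - no transaction is active");
        }
        self.auto_commit = true;
        Ok(())
    }
}

fn main() {
    let mut conn = Conn { auto_commit: true };
    conn.begin();
    conn.rollback(); // e.g. after a UNIQUE constraint violation
    assert!(conn.commit().is_err()); // COMMIT fails: transaction already ended
}
```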
Closes#3746
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes#3747
This patch pushes unsafe Send and Sync impls down to individual components
instead of doing it at the Database level. This makes it easier for us to
incrementally fix thread-safety while preventing developers from adding more
thread-unsafe code.
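A minimal sketch of the shape of the change, with hypothetical component names standing in for the real ones:
```rust
use std::cell::RefCell;

// Hypothetical stand-ins for the real components inside Database.
struct Pager {
    dirty_pages: RefCell<Vec<u64>>,
}
struct Wal {
    frames: RefCell<Vec<u64>>,
}

// Each component carries its own auditable safety claim instead of hiding
// behind one blanket `unsafe impl Send/Sync for Database`.
unsafe impl Send for Pager {} // SAFETY (assumed): access is externally synchronized
unsafe impl Sync for Pager {}
unsafe impl Send for Wal {} // SAFETY (assumed): ditto
unsafe impl Sync for Wal {}

struct Database {
    pager: Pager,
    wal: Wal,
}

fn assert_send_sync<T: Send + Sync>() {}

fn main() {
    // Database is Send + Sync because its fields are; a new non-thread-safe
    // component now fails this assertion instead of sneaking in under a
    // Database-wide unsafe impl.
    assert_send_sync::<Database>();
}
```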
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#3745
`INSERT OR IGNORE INTO t VALUES (...)` can trivially be rewritten to
`INSERT INTO t VALUES (...) ON CONFLICT DO NOTHING`.
This PR does that rewriting, and also finishes a large refactor of INSERT
translation in general. I just need a break from the rest of this feature,
tbh; it was getting under my skin and I have been in `translate` land for
too long.
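For illustration, the rewrite viewed as an AST normalization; the enum shapes here are made up, not the real parser types:
```rust
// Made-up AST fragment: normalize OR IGNORE into the equivalent
// ON CONFLICT DO NOTHING form so downstream translation sees one case.
#[derive(Debug, PartialEq)]
enum ConflictClause {
    OrIgnore,  // INSERT OR IGNORE ...
    DoNothing, // INSERT ... ON CONFLICT DO NOTHING
    None,
}

fn normalize(clause: ConflictClause) -> ConflictClause {
    match clause {
        ConflictClause::OrIgnore => ConflictClause::DoNothing,
        other => other,
    }
}

fn main() {
    assert_eq!(
        normalize(ConflictClause::OrIgnore),
        ConflictClause::DoNothing
    );
}
```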
Closes#3742
This PR introduces a `Context` object, stored in the `Completion`, that
currently only holds a `Waker`. In the future, I want to add some sort of
abort signal so that we can abort tasks that share the same Context. To
pass the Waker, I introduced a `step_with_waker` function in `Statement`
that delegates to an internal `_step` function; `_step` is the previous
`step`, just with an added `Option<&Waker>` argument.
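Roughly this shape, with placeholder internals (the real `StepResult` and stepping loop are elided):
```rust
use std::task::Waker;

enum StepResult {
    Row,
    Done,
    Io,
}

struct Statement;

impl Statement {
    // Existing synchronous entry point keeps its signature...
    pub fn step(&mut self) -> StepResult {
        self._step(None)
    }

    // ...while async callers hand in a Waker, to be stored in the
    // Completion's Context so pending I/O can wake the task.
    pub fn step_with_waker(&mut self, waker: &Waker) -> StepResult {
        self._step(Some(waker))
    }

    // The previous `step` body, now taking an optional Waker.
    fn _step(&mut self, _waker: Option<&Waker>) -> StepResult {
        StepResult::Done // placeholder for the real stepping logic
    }
}

fn main() {
    let mut stmt = Statement;
    let _ = stmt.step(); // synchronous callers are unchanged
}
```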
I was going to try to make the BusyHandler truly async as well, but I
decided not to do that here, because it would be slightly complicated to
achieve.
Closes#3535
The previous implementation of CompletionGroup would call the group's
callback function directly when the last completion finished:
```rust
if prev == 1 {
    let group_result = group.result.get().and_then(|e| *e);
    (group.complete)(group_result.map_or(Ok(0), Err));
}
```
This broke nested completion groups because parent groups track their
children via the Completion::callback() method. By calling the function
pointer directly, we bypassed the completion chain and parent groups
never received notification that their child had completed.
The fix stores a reference to the group's own Completion object in
self_completion during build(). When the last child finishes, we call
group_completion.callback() instead of invoking the function directly.
This properly propagates through the completion hierarchy, ensuring
parent groups decrement their outstanding count and eventually complete.
This matches the behavior of individual completions and maintains the
invariant that all completions notify their parents through the unified
callback() mechanism.
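A toy model of that invariant (greatly simplified relative to the real types), showing why routing through `callback()` matters for nesting:
```rust
use std::cell::Cell;
use std::rc::Rc;

// Simplified completion: a count of outstanding children plus an
// optional parent that tracks this completion as one of its children.
struct Completion {
    outstanding: Cell<usize>,
    parent: Option<Rc<Completion>>,
}

impl Completion {
    fn callback(&self) {
        let prev = self.outstanding.get();
        self.outstanding.set(prev - 1);
        if prev == 1 {
            // Last child finished: propagate through the unified
            // mechanism rather than a bare function pointer, so the
            // parent (and its parents) also see the completion.
            if let Some(parent) = &self.parent {
                parent.callback();
            }
        }
    }
}

fn main() {
    let parent = Rc::new(Completion { outstanding: Cell::new(1), parent: None });
    let group = Rc::new(Completion {
        outstanding: Cell::new(2),
        parent: Some(parent.clone()),
    });
    group.callback(); // first child finishes
    group.callback(); // last child finishes -> group completes -> parent notified
    assert_eq!(parent.outstanding.get(), 0);
}
```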
The previous implementation of CompletionGroup::add() would filter out
successfully-finished completions:
```rust
if !completion.finished() || completion.failed() {
    self.completions.push(completion.clone());
}
```
This caused a problem when combined with drain() in the calling code.
Completions that were already finished would be removed from the source
vector by drain() but not added to the group, effectively losing track
of them.
This breaks the invariant that all completions passed to a group must
be tracked, regardless of their state. The build() method already
handles finished completions correctly by not including them in the
outstanding count.
The fix is to always add all completions and let build() handle their
state appropriately, matching the behavior of the old io_yield_many!()
macro.
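A minimal sketch of the corrected split of responsibilities, with the types reduced to just what the fix touches:
```rust
// add() tracks every completion; only build() decides what counts as
// outstanding.
#[derive(Clone)]
struct Completion {
    finished: bool,
}

struct CompletionGroup {
    completions: Vec<Completion>,
}

impl CompletionGroup {
    fn add(&mut self, completion: &Completion) {
        // No state filtering anymore: already-finished completions are
        // still tracked, so a drain() at the call site cannot lose them.
        self.completions.push(completion.clone());
    }

    fn outstanding(&self) -> usize {
        // build() already skips finished completions when counting.
        self.completions.iter().filter(|c| !c.finished).count()
    }
}

fn main() {
    let mut group = CompletionGroup { completions: Vec::new() };
    group.add(&Completion { finished: true });  // tracked, not outstanding
    group.add(&Completion { finished: false }); // tracked and outstanding
    assert_eq!(group.completions.len(), 2);
    assert_eq!(group.outstanding(), 1);
}
```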
Closes#3687
Previously, the `try_fold_expr_to_i64` function cast `NULL` to `0` when
evaluating expressions in `LIMIT` or `OFFSET` clauses. I removed this
function, since evaluating the expression directly and relying on the
`MustBeInt` operation for casting seems to handle everything.
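As a sketch of the behavior being relied on (the names mimic the opcode; this is not the actual implementation): a `MustBeInt`-style coercion either converts losslessly to an integer or fails, so a NULL limit is rejected rather than silently becoming 0:
```rust
// Sketch of MustBeInt-style coercion: lossless conversion or error.
#[derive(Debug)]
enum Value {
    Null,
    Integer(i64),
    Float(f64),
    Text(String),
}

fn must_be_int(v: &Value) -> Result<i64, &'static str> {
    match v {
        Value::Integer(i) => Ok(*i),
        Value::Float(f) if f.fract() == 0.0 => Ok(*f as i64),
        Value::Text(s) => s.trim().parse().map_err(|_| "datatype mismatch"),
        _ => Err("datatype mismatch"),
    }
}

fn main() {
    assert_eq!(must_be_int(&Value::Integer(10)), Ok(10));
    assert!(must_be_int(&Value::Null).is_err()); // previously folded to 0
}
```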
Closes#3695
We merged two concurrent fixes to `nchange` handling last night and
AFAICT the fix in #3692 was incorrect because it doesn't count UPDATEs
in cases where the original row was DELETEd as part of the UPDATE
statement.
The correct fix was in 87434b8.
EDIT: okay, the fix in #3692 is not strictly _incorrect_, I guess; I just
think it's more intuitive to increment the change counter for an UPDATE in
the `insert` opcode, because that's what performs the actual update.
Closes#3735
Closes#2600
## Problem
Every btree has a key it is sorted by - this is the integer `rowid` for
tables and an arbitrary-sized, potentially multi-column key for indexes.
Executing an UPDATE in a loop is not safe if the update modifies any
part of the key of the btree that is used for iterating the rows in said
loop. For example:
- Using the table itself to iterate rows is not safe if the UPDATE
modifies the rowid (or rowid alias) of a row, because it changes the
iteration order itself and may cause rows to be skipped:
```sql
CREATE TABLE t(x INTEGER PRIMARY KEY, y);
INSERT <something>
UPDATE t SET y = RANDOM() where x > 100; -- safe to iterate 't', 'y' is not being modified
UPDATE t SET x = RANDOM() where x > 100; -- not safe to iterate 't', 'x' is being modified
```
- Using an index to iterate rows is not safe if the UPDATE modifies any
of the columns in the index key:
```sql
CREATE TABLE t(x, y, z);
CREATE INDEX txy ON t (x,y);
INSERT <something>
UPDATE t SET z = RANDOM() where x = 100 and y > 0; -- safe to iterate txy, neither x nor y is being modified
UPDATE t SET x = RANDOM() where x = 100 and y > 0; -- not safe to iterate txy, 'x' is being modified
UPDATE t SET y = RANDOM() where x = 100 and y > 0; -- not safe to iterate txy, 'y' is being modified
```
## Current solution in tursodb
Our current `main` code recognizes this issue and adopts this pseudocode
algorithm from SQLite:
- open a table or index for reading the rows of the source table,
- for each row that matches the condition in the UPDATE statement, write
the row into a temporary table
- then use that temporary table for iteration in the UPDATE loop.
This guarantees that the iteration order will not be affected by the
UPDATEs because the ephemeral table is not under modification.
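The same idea as a toy two-pass model in Rust (not the planner code; a `BTreeMap` stands in for the btree):
```rust
use std::collections::BTreeMap;

// Pass 1 snapshots the matching keys into an "ephemeral table";
// pass 2 applies the updates while iterating the snapshot, so the
// mutation can no longer disturb the iteration order.
fn update_where<P, F>(table: &mut BTreeMap<i64, String>, pred: P, mut apply: F)
where
    P: Fn(i64) -> bool,
    F: FnMut(i64, &mut BTreeMap<i64, String>),
{
    let scratch: Vec<i64> = table.keys().copied().filter(|&k| pred(k)).collect();
    for rowid in scratch {
        apply(rowid, table);
    }
}

fn main() {
    let mut t: BTreeMap<i64, String> = (1..=10_000).map(|i| (i, format!("row{i}"))).collect();
    // UPDATE t SET x = x + 100000 WHERE x > 50 AND x < 60;
    update_where(&mut t, |x| x > 50 && x < 60, |x, t| {
        if let Some(v) = t.remove(&x) {
            t.insert(x + 100_000, v); // safely modifies the iteration key
        }
    });
    assert_eq!(t.len(), 10_000); // no rows skipped or visited twice
}
```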
## Problem with current solution
Our `main` code special-cases the ephemeral table solution to rowids /
rowid aliases only. Using indexes for UPDATE iteration was disabled in
an earlier PR (#2599) due to the safety issue mentioned above, which
means that many UPDATE statements become full table scans:
```sql
turso> create table t(x PRIMARY KEY);
turso> insert into t select value from generate_series(1,10000);
turso> explain update t set x = x + 100000 where x > 50 and x < 60;
addr opcode p1 p2 p3 p4 p5 comment
---- ----------------- ---- ---- ---- ------------- -- -------
0 Init 0 28 0 0 Start at 28
1 OpenWrite 0 2 0 0 root=2; iDb=0
2 OpenWrite 1 3 0 0 root=3; iDb=0
-- scan entire 't' despite very narrow update range!
3 Rewind 0 27 0 0 Rewind table t
...
```
## Solution
We move the ephemeral table logic to _after_ the optimizer has selected
the best access path for the table, and then, if the UPDATE modifies the
key of the chosen access path (table or index; whichever was selected by
the optimizer), we change the plan to include the ephemeral table
prepopulation. Hence, the same query from above becomes:
```sql
turso> explain update t set x = x + 100000 where x > 50 and x < 60;
addr opcode p1 p2 p3 p4 p5 comment
---- ----------------- ---- ---- ---- ------------- -- -------
0 Init 0 35 0 0 Start at 35
1 OpenEphemeral 0 1 0 0 cursor=0 is_table=true
2 OpenRead 1 3 0 0 index=sqlite_autoindex_t_1, root=3, iDb=0
3 Integer 50 2 0 0 r[2]=50
-- index seek on PRIMARY KEY index
4 SeekGT 1 10 2 0 key=[2..2]
5 Integer 60 2 0 0 r[2]=60
6 IdxGE 1 10 2 0 key=[2..2]
7 IdxRowId 1 1 0 0 r[1]=cursor 1 for index sqlite_autoindex_t_1.rowid
8 Insert 0 3 1 ephemeral_scratch 2 intkey=r[1] data=r[3]
9 Next 1 6 0 0
10 OpenWrite 2 2 0 0 root=2; iDb=0
11 OpenWrite 3 3 0 0 root=3; iDb=0
-- only scan rows that were inserted to ephemeral index
12 Rewind 0 34 0 0 Rewind table ephemeral_scratch
13 RowId 0 5 0 0 r[5]=ephemeral_scratch.rowid
```
Note that an ephemeral table does not have to be used if the key of the
chosen index is not affected:
```sql
turso> create table t(x PRIMARY KEY, data);
turso> explain update t set data = 'some_data' where x > 50 and x < 60;
addr opcode p1 p2 p3 p4 p5 comment
---- ----------------- ---- ---- ---- ------------- -- -------
0 Init 0 15 0 0 Start at 15
1 OpenWrite 0 2 0 0 root=2; iDb=0
2 OpenWrite 1 3 0 0 root=3; iDb=0
3 Integer 50 1 0 0 r[1]=50
-- direct index seek
4 SeekGT 1 14 1 0 key=[1..1]
```
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes#3728
This PR contains NO semantic changes at all; it simply refactors existing
INSERT code to be easier to reason about.
Very sorry, I know I've been working on `INSERT OR IGNORE|REPLACE|etc..`
for days now, but the insert translation was literally unbearable. I got it
working, but I could barely wrap my head around that whole
`translate_insert` function, so I spent a bunch of time refactoring the
whole INSERT handling out into different "Plans"... which turned into a
whole different clusterf***... So I just went back and made the existing
insert emission more modular and created some context that makes it easier
to reason about.
This should be quick to merge.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#3731