`commit_txn` in MVCC was hacking its way through I/O until now. After
adding this and the test for concurrent writers, we now see `busy` errors
returned as expected, because there is no `commit` queueing yet; that
will come in the next PR I open.
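To make the expected behavior concrete, here is a minimal model of a fail-fast commit path without queueing. This is not the real MVCC code; the names and the atomic flag are illustrative only:
```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Illustrative: stands in for whatever serializes committers internally.
static COMMIT_IN_PROGRESS: AtomicBool = AtomicBool::new(false);

#[derive(Debug, PartialEq)]
enum TxError {
    Busy,
}

fn commit_txn() -> Result<(), TxError> {
    // Try to take the commit path; with no queueing, a concurrent
    // committer fails fast with `Busy` instead of waiting in line.
    if COMMIT_IN_PROGRESS
        .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
        .is_err()
    {
        return Err(TxError::Busy);
    }
    // ... the actual log write + fsync would happen here ...
    COMMIT_IN_PROGRESS.store(false, Ordering::Release);
    Ok(())
}
```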
Closes#2895
@penberg this PR tries to clean up `turso_parser`'s `fmt` code.
- `get_table_name` and `get_column_name` should return `None` when the
table/column does not exist (a usage sketch follows the snippet below).
```rust
/// Context to be used in `ToSqlString`
pub trait ToSqlContext {
    /// Given an id, get the table name.
    /// Returns `None` if the table does not exist.
    ///
    /// Currently does not consider aliases.
    fn get_table_name(&self, _id: TableInternalId) -> Option<&str> {
        None
    }

    /// Given a table id and a column index, get the column name.
    /// The outer `Option` indicates whether the column exists;
    /// the inner `Option` indicates whether the column has a name.
    fn get_column_name(&self, _table_id: TableInternalId, _col_idx: usize) -> Option<Option<&str>> {
        None
    }

    /// Helper that falls back to positional names (`tN`/`cN`) when the
    /// table or column name is missing.
    fn get_table_and_column_names(
        &self,
        table_id: TableInternalId,
        col_idx: usize,
    ) -> (String, String) {
        let table_name = self
            .get_table_name(table_id)
            .map(|s| s.to_owned())
            .unwrap_or_else(|| format!("t{}", table_id.0));
        let column_name = self
            .get_column_name(table_id, col_idx)
            .map(|opt| {
                opt.map(|s| s.to_owned())
                    .unwrap_or_else(|| format!("c{col_idx}"))
            })
            .unwrap_or_else(|| format!("c{col_idx}"));
        (table_name, column_name)
    }
}
```
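As a quick illustration of the fallback behavior, here is a usage sketch; the `TableInternalId` stub and `EmptyContext` are hypothetical, for the example only:
```rust
/// Stub id type for the sketch; the real `TableInternalId` lives in the crate.
#[derive(Clone, Copy)]
pub struct TableInternalId(pub usize);

struct EmptyContext;

// With no overrides, both lookups return `None`,
// so the helper falls back to positional names.
impl ToSqlContext for EmptyContext {}

fn demo() {
    let ctx = EmptyContext;
    let (table, column) = ctx.get_table_and_column_names(TableInternalId(3), 0);
    assert_eq!((table.as_str(), column.as_str()), ("t3", "c0"));
}
```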
- remove `FmtTokenStream` because it is the same as `WriteTokenStream`
- remove redundant functions and simplify `ToTokens`
```rust
/// Generate token(s) from an AST node.
/// Requires `Display` as a supertrait so implementors don't forget it.
pub trait ToTokens: Display {
    /// Send token(s) to the specified stream with context
    fn to_tokens<S: TokenStream + ?Sized, C: ToSqlContext>(
        &self,
        s: &mut S,
        context: &C,
    ) -> Result<(), S::Error>;

    /// Returns a displayer that renders `self` with the given context.
    fn displayer<'a, 'b, C: ToSqlContext>(&'b self, ctx: &'a C) -> SqlDisplayer<'a, 'b, C, Self>
    where
        Self: Sized,
    {
        SqlDisplayer::new(ctx, self)
    }
}
```
Closes#2748
- check freelist trunk and leaf pages
- use a shared hash map to check for duplicate page references (see the sketch below)
- properly check overflow pages
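For context, a minimal sketch of the freelist walk with duplicate detection; the `read_page` callback and the ownership map are illustrative stand-ins, not the actual pager API:
```rust
use std::collections::HashMap;

/// Walk the freelist, recording every page it references and flagging
/// pages that were already claimed by another structure.
fn check_freelist(
    first_trunk: u32,
    read_page: &dyn Fn(u32) -> Vec<u8>,
    seen: &mut HashMap<u32, &'static str>, // page number -> first owner
) -> Result<(), String> {
    let mut trunk = first_trunk;
    while trunk != 0 {
        if let Some(owner) = seen.insert(trunk, "freelist trunk") {
            return Err(format!("page {trunk} already referenced by {owner}"));
        }
        let buf = read_page(trunk);
        // SQLite trunk page layout: bytes 0..4 = next trunk page number,
        // bytes 4..8 = number of leaf pointers, then 4-byte big-endian
        // leaf page numbers.
        let next = u32::from_be_bytes(buf[0..4].try_into().unwrap());
        let n_leaves = u32::from_be_bytes(buf[4..8].try_into().unwrap()) as usize;
        for i in 0..n_leaves {
            let off = 8 + i * 4;
            let leaf = u32::from_be_bytes(buf[off..off + 4].try_into().unwrap());
            if let Some(owner) = seen.insert(leaf, "freelist leaf") {
                return Err(format!("page {leaf} already referenced by {owner}"));
            }
        }
        trunk = next;
    }
    Ok(())
}
```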
Reviewed-by: Pere Diaz Bou <pere-altea@homail.com>
Closes#2816
We were storing `txid` in `ProgramState`, which made it impossible to
track interactive transactions. It was extracted to `Connection`
instead.
Moreover, transaction state for MVCC is now reset on commit.
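Roughly, the shape of the change (field names are illustrative, not the exact ones in the codebase):
```rust
/// Per-connection state: lives as long as the connection, so it can
/// track a transaction that spans multiple statements.
struct Connection {
    mv_tx_id: Option<u64>, // moved here from ProgramState
}

/// Per-statement state: recreated for every statement, which is exactly
/// why a txid stored here could never outlive a single statement.
struct ProgramState {
    // txid removed
}
```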
Closes#2689
This not only changes `schema_did_change` on `commit_txn` for MVCC, but
also extends the extraction of connection transaction state, previously
done only for non-MVCC transactions, to MVCC as well.
We have `halt` and `op_halt`, doing essentially the same thing.
This PR unifies them. There is a minor difference between them in
the way `halt()` handles auto-commit. My current understanding of the code
is that what we have in `halt` *is a bug*, which is already one bad
consequence of the duplication.
Closes#2631
## Problem
There are several problems with our current statically allocated
`BufferPool`.
1. You cannot open two databases in the same process with different page
sizes, because the `BufferPool`'s `Arena`s will be locked forever into
the page size of the first database. This is the case regardless of
whether the two `Database`s are open at the same time, or if the first
is closed before the second is opened.
2. It is impossible to even write Rust tests for different page sizes
because of this, assuming the test uses a single process.
## Solution
Make `Database` own `BufferPool` instead of it being statically
allocated, so this problem goes away.
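Schematically, the before and after; all names here are illustrative:
```rust
use std::sync::Arc;

// Before (illustrative): a single process-wide pool, permanently sized
// by whichever database was opened first.
//
//   static BUFFER_POOL: Lazy<BufferPool> = Lazy::new(BufferPool::default);

struct BufferPool {
    page_size: usize,
}

/// After: each `Database` owns its pool, so two databases with different
/// page sizes can coexist in one process (or be opened one after another).
struct Database {
    pool: Arc<BufferPool>,
}

impl Database {
    fn open(page_size: usize) -> Self {
        Database {
            pool: Arc::new(BufferPool { page_size }),
        }
    }
}
```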
Note that I didn't touch the still statically-allocated
`TEMP_BUFFER_CACHE`, because it should continue to work regardless of
this change. It should only be a problem if the user has two or more
databases with different page sizes open simultaneously, because
`TEMP_BUFFER_CACHE` will only support one pool of a given page size at a
time, so the rest of the allocations will go through the global
allocator instead.
## Notes
I extracted this change out from #2569, because I didn't want it to be
smuggled in without being reviewed as an individual piece.
Reviewed-by: Avinash Sajjanshetty (@avinassh)
Closes#2596
Implements the `unlikely(X)` function. Removes the runtime implementations
of `likely()`, `unlikely()` and `likelihood()`, replacing them with panics
if they ever reach the VDBE.
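For intuition: these are planner hints with no runtime effect. SQLite defines `likely(X)`, `unlikely(X)` and `likelihood(X, P)` to return `X`, so they should be compiled down to their first argument; reaching the VDBE means codegen went wrong. A sketch of the elision step over a toy AST (not the real one):
```rust
enum Expr {
    FunctionCall { name: String, args: Vec<Expr> },
    Literal(i64),
}

/// Replace likely(X), unlikely(X) and likelihood(X, P) with X itself
/// during translation; anything still present at runtime is a bug.
fn strip_likelihood_hints(expr: Expr) -> Expr {
    match expr {
        Expr::FunctionCall { name, mut args }
            if matches!(name.as_str(), "likely" | "unlikely" | "likelihood")
                && !args.is_empty() =>
        {
            strip_likelihood_hints(args.remove(0))
        }
        // Recurse into ordinary calls so nested hints are elided too.
        Expr::FunctionCall { name, args } => Expr::FunctionCall {
            name,
            args: args.into_iter().map(strip_likelihood_hints).collect(),
        },
        other => other,
    }
}
```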
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2559
Now that we have actually implemented the statement parsing around views,
implementing normal SQLite views is relatively trivial, as they are just
an alias for a query.
We'll implement them now to get them out of the way, and then I'll go
back to DBSP.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2591
The `op_insert` function was incorrectly trying to capture an "old record"
for fresh INSERT operations when a table had dependent materialized
views. This caused a "Cannot delete: no current row" error because the
cursor wasn't positioned on any row for new inserts.
The issue was introduced in commit f38333b3, which refactored the state
machine for incremental view handling but didn't properly distinguish
between:
- Fresh INSERT operations (no old record exists)
- UPDATE operations without rowid change (old record should be captured)
- UPDATE operations with rowid change (already handled by DELETE)
This fix checks if `cursor.rowid()` returns a value before attempting to
capture the old record. If no row exists (fresh INSERT), we correctly
set `old_record` to `None` instead of erroring out.
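The shape of the fix, sketched with a stub cursor trait (the real B-tree cursor API differs):
```rust
/// Stub standing in for the real B-tree cursor (illustrative only).
trait Cursor {
    /// Rowid of the row the cursor is on, or `None` if it is not on a row.
    fn rowid(&self) -> Option<i64>;
    fn current_record(&self) -> Vec<u8>;
}

/// Capture the pre-image only when the cursor is actually positioned on
/// a row (UPDATE without rowid change). For a fresh INSERT there is no
/// current row, so we return `None` instead of hitting
/// "Cannot delete: no current row".
fn capture_old_record<C: Cursor>(cursor: &C) -> Option<Vec<u8>> {
    cursor.rowid().map(|_| cursor.current_record())
}
```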
I am also including tests to make sure this doesn't break. The reason I
didn't include tests earlier is that I didn't know it was possible to
run the tests under a flag. Here, I am just adding the flag to the
execution script.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes#2579
A lot of the structures we have, like the ones under `Schema`, are
specific to materialized views. In preparation for adding normal views,
rename them so things are less confusing.