Resolves #2378.
```
`ALTER TABLE _ RENAME TO _`/limbo_rename_table/
time: [15.645 ms 15.741 ms 15.850 ms]
Found 12 outliers among 100 measurements (12.00%)
8 (8.00%) high mild
4 (4.00%) high severe
`ALTER TABLE _ RENAME TO _`/sqlite_rename_table/
time: [34.728 ms 35.260 ms 35.955 ms]
Found 15 outliers among 100 measurements (15.00%)
8 (8.00%) high mild
7 (7.00%) high severe
```
<img width="1000" height="199" alt="image" src="https://github.com/user-attachments/assets/ad943355-b57d-43d9-8a84-850461b8af41" />
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #2399
Closes #2431
Discovered while fuzzing #2086
## What
We update `schema_version` whenever the schema changes
## Problem
Probably unintentionally, we were calling `SetCookie` in a loop for each
row in the target table, instead of only once at the end. This means two
things:
- For large `n`, this emits a lot of unnecessary instructions
- For `n == 0`, `SetCookie` doesn't get called at all -> the schema won't
be marked as having been updated -> connections can operate on a stale schema
## Fix
Lift `SetCookie` out of the loop
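The shape of the fix can be sketched like this (hypothetical helper and instruction names; the real code emits VDBE-style bytecode, not strings):

```rust
// Minimal sketch with made-up names: build the instruction list for a
// schema-changing statement. The bug was emitting SetCookie inside the
// per-row loop; the fix hoists it out so it runs exactly once, even
// when the table has zero rows.
fn emit_program(row_count: usize) -> Vec<&'static str> {
    let mut ops = Vec::new();
    for _ in 0..row_count {
        ops.push("ProcessRow");
        // Bug was here: SetCookie used to be emitted per row.
    }
    // Hoisted out of the loop: bump schema_version exactly once.
    ops.push("SetCookie");
    ops
}
```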
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes#2432
The WAL API shouldn't be exposed by default: it is a relatively
dangerous API that we use internally, and ordinary users shouldn't
need it.
Reviewed-by: Pekka Enberg <penberg@iki.fi>
Closes#2424
## Background
When we get a new rowid using `op_new_rowid()`, we move to the end of
the btree to look at what the maximum rowid currently is, and then
increment it by one.
This requires a btree seek.
## Problem
If we were already on the rightmost page, this is a lot of unnecessary
work, including potentially a few page reads from disk (although, to be
fair, the ancestor pages are very likely to be in the cache at this point).
## Fix
Cache the rightmost page id whenever we enter it in
`move_to_rightmost()`, and invalidate it whenever we do a balancing
operation.
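A rough sketch of the caching scheme (hypothetical field and method names; the real cursor logic is considerably more involved):

```rust
// Hypothetical, simplified cursor: remember which page is rightmost so a
// subsequent append can skip the full root-to-leaf seek. Any balancing
// operation (split/merge) may change which page is rightmost, so it
// invalidates the cache.
struct Cursor {
    rightmost_page: Option<u32>,
    seeks: u32, // counts full descents, just for illustration
}

impl Cursor {
    fn new() -> Self {
        Cursor { rightmost_page: None, seeks: 0 }
    }

    fn move_to_rightmost(&mut self) -> u32 {
        if let Some(page) = self.rightmost_page {
            return page; // fast path: no btree descent needed
        }
        self.seeks += 1;
        let page = 42; // stand-in for the real root-to-leaf descent
        self.rightmost_page = Some(page);
        page
    }

    fn balance(&mut self) {
        // the rightmost leaf may have changed after a split/merge
        self.rightmost_page = None;
    }
}
```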
## Local benchmark results
```
Insert rows in batches/limbo_insert_1_rows
time: [23.333 µs 27.718 µs 35.801 µs]
change: [-7.7924% +0.8805% +12.841%] (p = 0.91 > 0.05)
No change in performance detected.
Insert rows in batches/limbo_insert_10_rows
time: [38.204 µs 38.381 µs 38.568 µs]
change: [-8.7188% -7.4786% -6.1955%] (p = 0.00 < 0.05)
Performance has improved.
Insert rows in batches/limbo_insert_100_rows
time: [158.39 µs 165.06 µs 178.37 µs]
change: [-21.000% -18.789% -15.666%] (p = 0.00 < 0.05)
Performance has improved.
```
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes #2409
This should be safe to do because:
1. the page cache is private per connection
2. since this connection wrote the flushed pages/frames, they are up to
date from its perspective
3. multiple concurrent statements inside one connection are not
snapshot-transactional even in SQLite
Reviewed-by: Pekka Enberg <penberg@iki.fi>
Closes #2407
https://github.com/tursodatabase/turso/pull/1256 switched cargo-dist to
Astral's fork, but the official repository recently got a new
maintainer and is actively maintained again.
Their latest release, [v0.29.0](https://github.com/axodotdev/cargo-dist/releases/tag/v0.29.0),
now includes the features originally added in Astral's fork, so it's
probably a good time to switch back to the official cargo-dist. That
said, since there are no significant changes from Astral's version,
it's also fine to keep the current one.
Closes #2398
While working on #2151 I saw myself forced to do things like:
```rust
assert_eq!(
    6,
    *result
        .next()
        .await?
        .unwrap()
        .get_value(0)?
        .as_integer()
        .unwrap()
);
```
Just to get a simple value from a row. Now, with this PR, users can
simply write:
```rust
assert_eq!(6, result.get::<i32>(0)?);
```
(Thanks libsql devs, this is so much better!)
Closes #2377
This will save some work when yielding to IO. Previously, on every
invocation, if the record was a packed record, we parsed it and iterated
through the values to check for nulls. Now, the pre-seeking work is done
only once.
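The idea can be illustrated with a small memoization sketch (hypothetical types and names; the real code caches the result of parsing the packed record):

```rust
// Hypothetical sketch: compute the null-check once and reuse the answer
// when the operation is re-entered after yielding to IO.
struct SeekState {
    has_null: Option<bool>, // None until computed for the first time
}

impl SeekState {
    fn has_null(&mut self, values: &[Option<i64>]) -> bool {
        // First call scans the values; later calls return the cached result.
        *self
            .has_null
            .get_or_insert_with(|| values.iter().any(|v| v.is_none()))
    }
}
```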
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes #2394
Closes #2077
This PR fixes an integer overflow bug that causes results from Turso to
differ from SQLite.
- Commit 1: Fixes incorrect logic that failed to detect integer overflow
when parsing long numeric strings.
- Commit 2: Handles the case where a parsed numeric string is stored as a
float and classified as `PureInteger`, but lies outside the integer range.
Previously, `parsed_value.as_integer()` would return None, causing Turso
to fall back to text comparison against numeric values. This caused
**another** erroneous result, as shown below.
`$> SELECT (-104614899632619 || 45597) > CAST(0 AS NUMERIC); -- tursodb = 1 (wrong), sqlite = 0`
Now, if Turso fails to convert a very long numeric string to an
integer, it tries to convert it to a float. This is in line with
SQLite's `static void applyNumericAffinity` function.
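A hedged sketch of the fallback (made-up names, not the actual Turso code): try `i64` first, and on overflow fall back to `f64`, mirroring `applyNumericAffinity`:

```rust
#[derive(Debug, PartialEq)]
enum Numeric {
    Int(i64),
    Real(f64),
}

// Sketch: a numeric string too long for i64 still becomes a number
// (a float) instead of silently falling back to text comparison.
fn to_numeric(s: &str) -> Option<Numeric> {
    if let Ok(i) = s.parse::<i64>() {
        return Some(Numeric::Int(i));
    }
    // i64 overflow (or a decimal form): try float before giving up
    s.parse::<f64>().ok().map(Numeric::Real)
}
```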
**Before**
<img width="623" height="238" alt="Screenshot 2025-08-01 at 12 11 49 PM" src="https://github.com/user-attachments/assets/796d6ff6-768b-40ef-ac83-e0c55fff6bd9" />
**After**
`SELECT (104614899632619 || 45597) > CAST(0 AS NUMERIC); -- tursodb = 1, sqlite = 1`
`SELECT (-104614899632619 || 45597) > CAST(0 AS NUMERIC); -- tursodb = 0, sqlite = 0`
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes #2397