Introduces a completion group abstraction that allows grouping multiple
I/O completions together for coordinated tracking and error handling.
This enables:
- Tracking completion status of multiple I/O operations as a group
- Detecting when all operations in a group have finished
- Aborting all operations in a group atomically
- Retrieving errors from any completion in the group
The implementation uses intrusive linked lists for efficient membership
tracking and atomic counters for outstanding operation counts. Each
completion can be linked to a group using the new .link() method.
This lays the groundwork for batch I/O operations and coordinated
transaction handling in the storage layer.
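A minimal sketch of the idea, with hypothetical names (the actual implementation tracks membership with intrusive linked lists rather than this simplified counter-only view):
```rust
use std::io;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

struct CompletionGroup {
    // Completions that have been linked but have not yet finished.
    outstanding: AtomicUsize,
    // First error observed among the group's completions, if any.
    first_error: Mutex<Option<io::Error>>,
}

impl CompletionGroup {
    // Called when a completion is linked to the group.
    fn link(&self) {
        self.outstanding.fetch_add(1, Ordering::AcqRel);
    }

    // Called when a linked completion finishes; returns true once the
    // whole group is done.
    fn complete(&self, result: io::Result<()>) -> bool {
        if let Err(e) = result {
            self.first_error.lock().unwrap().get_or_insert(e);
        }
        self.outstanding.fetch_sub(1, Ordering::AcqRel) == 1
    }
}
```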
- Prevents dropping or indexing `sqlite_sequence`
- Ensures AUTOINCREMENT only works on a single `INTEGER PRIMARY KEY` column (sketched below)
- Handles `i64::MAX` gracefully by returning `SQLITE_FULL`
- Makes AUTOINCREMENT work in both column and table constraints
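A rough sketch of the second rule, with a hypothetical helper (the error message mirrors SQLite's):
```rust
// AUTOINCREMENT is accepted only on a column declared exactly as
// INTEGER PRIMARY KEY, whether the PRIMARY KEY appears as a column
// constraint or as a single-column table constraint.
fn validate_autoincrement(
    type_name: Option<&str>,
    is_single_column_pk: bool,
) -> Result<(), String> {
    let is_integer = type_name.map_or(false, |t| t.eq_ignore_ascii_case("INTEGER"));
    if is_integer && is_single_column_pk {
        Ok(())
    } else {
        Err("AUTOINCREMENT is only allowed on an INTEGER PRIMARY KEY".to_string())
    }
}
```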
fmt
This PR integrates virtual tables into the query optimizer. It is a
follow-up to https://github.com/tursodatabase/turso/pull/1727.
The most immediate improvement is better support for inner joins
involving table-valued functions (TVFs), particularly when the TVF
arguments are column references.
### Example
The following two queries are semantically equivalent, but require
different join orders to be valid:
```sql
-- TVF depends on `t.id`, so `t` must be evaluated in outer loop
SELECT t.id, series.value
FROM target t, generate_series(t.id, 3) series;
-- Equivalent query, but with reversed table order in the FROM clause
SELECT t.id, series.value
FROM generate_series(t.id, 3) series, target t;
```
Without optimizer integration, the second query would fail because the
planner would attempt to evaluate `generate_series` before `t`. With
this change, the optimizer detects column dependencies and produces the
correct join order in both cases.
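A rough sketch of the dependency check involved, using hypothetical names and table indices in place of the planner's real representation:
```rust
// A join order is valid only when every table's argument dependencies
// are satisfied by tables already placed in an outer loop.
fn join_order_is_valid(order: &[usize], deps: &[Vec<usize>]) -> bool {
    let mut placed = vec![false; deps.len()];
    for &table in order {
        // e.g. generate_series(t.id, 3) depends on `t`, so `t` must
        // already be in an outer loop.
        if deps[table].iter().any(|&dep| !placed[dep]) {
            return false;
        }
        placed[table] = true;
    }
    true
}
```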
### TODO
Support for outer joins with TVFs is still missing and will be addressed
in a follow-up PR.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #2439
Replace panics with proper errors when a valid plan does not exist.
Currently, this never happens because a naive plan is always available.
However, once virtual tables are integrated into the planner, it can
occur: for example, when table-valued function arguments are column
references and the function cannot be placed anywhere in the join order
where its arguments can be evaluated first.
Although this change is effectively a no-op for now, it is extracted
into a separate commit to avoid polluting the one that introduces
virtual table integration with the planner.
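A minimal sketch of the shape of this change, with illustrative type and variant names rather than the actual planner API:
```rust
#[derive(Debug)]
enum LimboError {
    // Illustrative variant name.
    PlanningError(String),
}

struct Plan {
    cost: u64,
}

// Return a proper error instead of panicking when no candidate plan
// survives the dependency checks.
fn best_plan(candidates: Vec<Plan>) -> Result<Plan, LimboError> {
    candidates.into_iter().min_by_key(|p| p.cost).ok_or_else(|| {
        LimboError::PlanningError("no valid join order exists".into())
    })
}
```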
One more step toward simplifying the API for users. This lives in core
rather than in the Rust bindings because, in core, it can benefit a
broader set of users and developers.
Handle the case where the maximum rowid `i64::MAX` is already taken: we
attempt to generate random rowids smaller than `i64::MAX` up to 100
times and return a `DatabaseFull` error on failure, as sketched below.
- Introduced `DatabaseFull` error variant
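A sketch of that retry loop, assuming a `rowid_taken` probe that stands in for the real b-tree lookup:
```rust
use rand::Rng;

#[derive(Debug)]
enum LimboError {
    DatabaseFull,
}

// Once the maximum rowid is taken, probe random candidates below
// i64::MAX, giving up with DatabaseFull after 100 attempts.
fn pick_random_rowid(rowid_taken: impl Fn(i64) -> bool) -> Result<i64, LimboError> {
    let mut rng = rand::thread_rng();
    for _ in 0..100 {
        let candidate = rng.gen_range(1..i64::MAX);
        if !rowid_taken(candidate) {
            return Ok(candidate);
        }
    }
    Err(LimboError::DatabaseFull)
}
```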
Fixes: https://github.com/tursodatabase/turso/issues/1977
insert() now fails if the key already exists (there should never be two
entries for one key), panics if the existing entry points to a different
page, and also fails if it cannot make room for the page.
Replaced the limited pop_if_not_dirty() function with make_room_for(),
which evicts as many pages as needed to provide the requested spare
capacity, or fails if it cannot evict enough entries. It should come in
handy later for resize() and the Pager.
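A simplified sketch of those semantics, with an illustrative signature rather than the actual page cache API:
```rust
struct Page {
    dirty: bool,
}

#[derive(Debug)]
enum CacheError {
    // Not enough clean pages could be evicted.
    Full,
}

// Evict clean pages until `spare` free slots exist, or fail if only
// dirty pages remain.
fn make_room_for(entries: &mut Vec<Page>, capacity: usize, spare: usize) -> Result<(), CacheError> {
    while capacity.saturating_sub(entries.len()) < spare {
        match entries.iter().position(|p| !p.dirty) {
            Some(i) => {
                entries.remove(i);
            }
            None => return Err(CacheError::Full),
        }
    }
    Ok(())
}
```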
For make_room_for() I also tried an all-or-nothing approach, so that if
a query requests more room than can possibly be made, it doesn't evict
a bunch of pages from the cache that might still be useful. But
implementing this got very complicated, since it needs to hold
exclusive PageRefs, and collecting them caused segfaults. Might be
worth trying again in the future, but beware the rabbit hole.
Updated the page cache test logic for the new insert rules.
Updated Pager.allocate_page() to handle the failure path, though it
needs further work; it is included to show the new cache insert
handling. Many call sites still need updating.
Left comments, for now, on callers of the pager and page cache whose
error handling still needs to be updated.
This makes it work like SQLite, where only one schema writer is permitted and readers return an error during statement preparation if the schema is changing.
I noticed that the parse errors were a bit hard to read: only the nearest token and the line/column offsets were printed.
I made a first attempt at improving the errors using [miette](https://github.com/zkat/miette).
- Added a derive for `miette::Diagnostic` to both the parser's error type and LimboError (sketched below).
- Added miette dependency to both sqlite3_parser and core. The `fancy` feature is only enabled for CLI.
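Roughly how such a derive looks with miette (field names are illustrative, not the actual parser error):
```rust
use miette::{Diagnostic, SourceSpan};
use thiserror::Error;

#[derive(Debug, Error, Diagnostic)]
#[error("near \"{token}\": syntax error")]
struct SyntaxError {
    token: String,
    // The SQL text the span points into.
    #[source_code]
    src: String,
    // The offending token's location, rendered as the marker in the
    // "After" output below.
    #[label("syntax error")]
    span: SourceSpan,
}
```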
Some future improvements that could be made:
- Add spans to AST nodes so that errors can better point to the correct token. See upstream issue: https://github.com/gwenn/lemon-rs/issues/33
- Construct more errors with offset information. I noticed that most parser errors are constructed with `None` as the offset.
Comparison:
Before:
```
❯ cargo run --package limbo --bin limbo database.db --output-mode pretty
...
limbo> selet * from a;
[2025-01-05T11:22:55Z ERROR sqlite3Parser] near "Token([115, 101, 108, 101, 116])": syntax error
Parse error: near "selet": syntax error at (1, 6)
```
After:
```
❯ cargo run --package limbo --bin limbo database.db --output-mode pretty
...
limbo> selet * from a;
[2025-01-05T12:25:52Z ERROR sqlite3Parser] near "Token([115, 101, 108, 101, 116])": syntax error
× near "selet": syntax error at (1, 6)
╭────
1 │ selet * from a
· ▲
· ╰── syntax error
╰────
```