This PR addresses https://github.com/tursodatabase/turso/issues/1828 in
a phased manner.
Making database header access async in one PR will be complicated. This
PR adds an async API to `header_accessor.rs` and ports some of
`pager.rs` over to use this API.
This will allow gradual porting over of all call sites. Once all call
sites are ported over, one mechanical rename will fix everything in the
repo so we don't have any `<header_name>_async` functions.
Also, porting header accessors over from sync to async would be a good
way for first-time contributors to get introduced to the Limbo codebase.
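A rough sketch of the shape this phased approach could take (every name
here is a hypothetical stand-in, not the actual `header_accessor.rs`
signatures): the sync accessor stays until all call sites are migrated,
and the `_async` variant can report pending I/O instead of blocking.
```rust
struct DatabaseHeader {
    page_size: u32,
}

enum IoResult<T> {
    Done(T), // header page already in memory, value available immediately
    Io,      // I/O submitted; caller must drive the event loop and retry
}

struct Pager {
    cached_header: Option<DatabaseHeader>,
}

impl Pager {
    // Existing sync accessor, kept until every call site is ported.
    fn get_page_size(&self) -> u32 {
        self.cached_header.as_ref().map(|h| h.page_size).unwrap_or(4096)
    }

    // New accessor added by the phased port: same logic, but it can report
    // pending I/O instead of blocking. Once all callers use it, a mechanical
    // rename drops the `_async` suffix.
    fn get_page_size_async(&self) -> IoResult<u32> {
        match &self.cached_header {
            Some(h) => IoResult::Done(h.page_size),
            None => IoResult::Io,
        }
    }
}
```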
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#1966
This PR provides Euclidean distance support for Limbo's vector search.
At the same time, some type abstractions are introduced, such as
`DistanceCalculator`. This is because I hope to unify the current
vector module in the future to make it more structured, clearer, and
more extensible.
While implementing Euclidean distance for Limbo, I discovered that many
checks could be done using the type system or in advance, rather than
waiting until the distance is calculated. By building these checks into
the type system or doing them ahead of time, we could explore more
efficient computations, such as automatic vectorization or SIMD
acceleration, which is future work.
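As a rough illustration of that direction (hypothetical signatures, not
the exact trait added in this PR): a `DistanceCalculator` for Euclidean
distance where dimensionality is validated ahead of time, so the hot
loop stays branch-free and easy to auto-vectorize.
```rust
trait DistanceCalculator {
    // Both slices are assumed to have the same, already-validated
    // dimensionality, so the hot loop does no per-element checking.
    fn distance(a: &[f64], b: &[f64]) -> f64;
}

struct Euclidean;

impl DistanceCalculator for Euclidean {
    fn distance(a: &[f64], b: &[f64]) -> f64 {
        a.iter()
            .zip(b.iter())
            .map(|(x, y)| (x - y) * (x - y))
            .sum::<f64>()
            .sqrt()
    }
}
```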
Reviewed-by: Nikita Sivukhin (@sivukhin)
Closes#1986
SQLite creates the table if it does not exist, but we just silently
ignore the data. Let's return an error if the table does not exist until
we fix this.
Refs #2079
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2080
The current table B-Tree seek code relies on the invariant that if a key
`K` is present in an interior page then it must also be present in a leaf
page. This is generally not true if data was ever deleted from the table,
because the leaf row whose key was used as a divider in the interior pages
can be deleted. Also, the SQLite spec says nothing about such an invariant,
so the `turso-db` implementation of the B-Tree should not rely on it.
This PR introduces three options for the B-Tree `seek` result: `Found`,
`NotFound`, and `TryAdvance`, which is generated when the leaf page has no
match for `seek_op` but the DB doesn't know whether a neighbouring page can
have matching data.
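A minimal sketch of the three-way result (variant names follow this PR;
the caller-side handling is illustrative, not the actual cursor code):
```rust
enum SeekResult {
    // The leaf cell matching the seek op is under the cursor.
    Found,
    // No match, and no neighbouring page can contain one.
    NotFound,
    // This leaf has no match for `seek_op`, but a neighbouring leaf might:
    // the caller must advance the cursor in the seek direction and re-check.
    TryAdvance,
}

// Caller side, roughly: only `TryAdvance` needs extra work.
fn resolve(result: SeekResult, advance_and_recheck: impl FnOnce() -> bool) -> bool {
    match result {
        SeekResult::Found => true,
        SeekResult::NotFound => false,
        SeekResult::TryAdvance => advance_and_recheck(),
    }
}
```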
There is an alternative approach where we could move the cursor to the
neighbouring page inside `seek` itself, but I was afraid to introduce such
a change because the analogous `seek` function from SQLite works exactly
like the current version of the code, and I think some query planner
internals (for insertion) may rely on the fact that repositioning will
leave the cursor at the position of insertion:
> ** If an exact match is not found, then the cursor is always
** left pointing at a leaf page which would hold the entry if it
** were present. The cursor might point to an entry that comes
** before or after the key.
Also, this PR introduces new B-tree fuzz tests which generate a table
B-tree from scratch and execute operations over it. This helps reach some
non-trivial states and also generates huge DBs faster (that's how this bug
was discovered).
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Reviewed-by: Pere Diaz Bou <pere-altea@hotmail.com>
Closes#2065
Let's assert **for now** that we do not read/write fewer bytes than
expected. This should be fixed to retrigger reads/writes if we couldn't
read/write enough, but for now let's assert.
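A minimal sketch of the temporary invariant (names hypothetical): until
short reads/writes are retried, treat them as a bug rather than silently
continuing with a partially filled buffer.
```rust
fn check_completed_io(expected: usize, completed: usize) {
    assert_eq!(
        completed, expected,
        "short I/O: got {completed} bytes, expected {expected}"
    );
}
```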
Closes#2078
- Apart from the regular Found/NotFound states, the seek result has a
TryAdvance value which tells the caller to advance the cursor in the
necessary direction, because the leaf page which would hold the entry if
it were present actually has no matching entry (but a neighbouring page
can have a match)
- `OP_NewRowId` now generates a new rowid semi-randomly when the largest
rowid in the table is `i64::MAX` (see the sketch after the example below).
- Introduced a new `LimboError` variant `DatabaseFull` to signify that the
database might be full (SQLite behaves this way, returning
`SQLITE_FULL`).
Now:
```SQL
turso> CREATE TABLE q(x INTEGER PRIMARY KEY, y);
turso> INSERT INTO q VALUES (9223372036854775807, 1);
turso> INSERT INTO q(y) VALUES (2);
turso> INSERT INTO q(y) VALUES (3);
turso> SELECT * FROM q;
┌─────────────────────┬───┐
│ x                   │ y │
├─────────────────────┼───┤
│ 1841427626667347484 │ 2 │
├─────────────────────┼───┤
│ 4000338366725695791 │ 3 │
├─────────────────────┼───┤
│ 9223372036854775807 │ 1 │
└─────────────────────┴───┘
```
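A rough sketch of the retry-based generation described above (constants
and names are illustrative, not the actual `OP_NewRowId` code):
```rust
use std::collections::BTreeSet;

const MAX_ATTEMPTS: usize = 100;

#[derive(Debug)]
enum LimboError {
    DatabaseFull,
}

fn new_rowid(existing: &BTreeSet<i64>, mut rand: impl FnMut() -> i64) -> Result<i64, LimboError> {
    // Fast path: the usual "largest rowid + 1" strategy still works.
    if existing.last().copied() != Some(i64::MAX) {
        return Ok(existing.last().copied().unwrap_or(0) + 1);
    }
    // The table already contains i64::MAX: probe random candidates below it.
    for _ in 0..MAX_ATTEMPTS {
        let candidate = rand().rem_euclid(i64::MAX - 1) + 1; // 1..=i64::MAX - 1
        if !existing.contains(&candidate) {
            return Ok(candidate);
        }
    }
    // Every attempt collided; report the database as (possibly) full.
    Err(LimboError::DatabaseFull)
}
```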
Fixes: https://github.com/tursodatabase/turso/issues/1977
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#1985
Simple PR to address a minor issue where `INTEGER PRIMARY KEY NOT NULL`
(`NOT NULL` is obviously redundant here) prevents the user from inserting
anything into the table, as the rowid-alias column is always set to null
by `turso-db`.
Closes#2063
This PR adds a few functions to `turso-db` in order to simplify
exploration of the CDC table. Later we will also add an API to work with
changes from code, but SQL support is also useful.
So, this PR adds 2 functions:
1. `table_columns_json_array('<table-name>')` - returns the list of current
table column **names** as a single string in JSON array format
2. `bin_record_json_object('<columns-array>', x'<bin-record>')` -
converts a record in the SQLite format to a JSON object with keys from
`columns-array`
So, these functions can be used together to extract changes in a human-
readable format:
```sql
turso> PRAGMA unstable_capture_data_changes_conn('full');
turso> CREATE TABLE t(a INTEGER PRIMARY KEY, b);
turso> INSERT INTO t VALUES (1, 2), (3, 4);
turso> UPDATE t SET b = 20 WHERE a = 1;
turso> UPDATE t SET a = 30, b = 40 WHERE a = 3;
turso> DELETE FROM t WHERE a = 1;
turso> SELECT
bin_record_json_object(table_columns_json_array('t'), before) before,
bin_record_json_object(table_columns_json_array('t'), after) after
FROM turso_cdc;
┌─────────────────┬────────────────┐
│ before          │ after          │
├─────────────────┼────────────────┤
│                 │ {"a":1,"b":2}  │
├─────────────────┼────────────────┤
│                 │ {"a":3,"b":4}  │
├─────────────────┼────────────────┤
│ {"a":1,"b":2}   │ {"a":1,"b":20} │
├─────────────────┼────────────────┤
│ {"a":3,"b":4}   │                │
├─────────────────┼────────────────┤
│ {"a":30,"b":40} │                │
├─────────────────┼────────────────┤
│ {"a":1,"b":20}  │                │
└─────────────────┴────────────────┘
```
Initially, I thought about implementing a single function like
`bin_record_json_object('<table-name>', x'<bin-record>')`, but this design
has certain flaws:
1. In case of schema changes this function can return an incorrect result
(imagine that you dropped a column and now the JSON from CDC mentions some
random subset of columns). Even though this feature is unstable, `turso-db`
should avoid silently incorrect behavior at all costs
2. The single-function design provides no way to deal with schema changes
3. The API is unsound, and the user may think that under the hood `turso-db`
will select the proper schema for the record (but this is actually
impossible with the current CDC implementation)
So, I decided to settle on the two-function design, which covers the
drawbacks mentioned above to some extent:
1. The first concern still remains valid
2. The two-function design provides a way to deal with schema changes. For
example, a user can maintain a simple `cdc_schema_changes` table and log
the result of `table_columns_json_array` before applying breaking schema
changes.
* Obviously, this is not ideal UX - but it suits my needs: I don't
want to design schema-change capturing, but I also don't want to block
users, so this provides a workaround for scenarios which are not
natively supported by CDC
3. Subjectively, I think the API became a bit clearer about the
machinery of these two functions, as the user sees that it extracts the
column list of the table (without any context) and then feeds it to the
`bin_record_json_object` function.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2057
- `OP_NewRowId` generates a new rowid semi-randomly when the largest rowid
in the table is `i64::MAX`. We do this by attempting to generate random
values smaller than `i64::MAX` up to 100 times and returning a
`DatabaseFull` error on failure
- Introduced the `DatabaseFull` error variant
Fixes: https://github.com/tursodatabase/turso/issues/1977
Currently we deserialize the entire record to compare records or to get a
particular column. This PR introduces efficient record operations, such
as incremental column deserialization and efficient record comparison.
### Incremental Column Deserialization
- Introduced `RecordCursor` to keep track of how much of the header and
the record we have already parsed. Each `BTreeCursor` has its own
`RecordCursor`, similar to an `ImmutableRecord`.
- The `RecordCursor` gets the number of columns from the schema when the
`BTreeCursor` is initialized in the VDBE. This helps cut down heap
allocs by reserving the correct amount of space for the underlying `Vec`s.
- `ImmutableRecord` only carries the serialized `payload` now.
- We parse the header up until we reach the required serial type (denoted
by the column index) and then calculate the offsets and deserialize only
that particular slice of the payload (see the sketch after this list).
- Manually inlined most of the deserialization code into `fn op_column`
because the compiler refuses to inline it even with an
`#[inline(always)]` hint. This is probably due to the complicated control
flow.
- Tried to follow SQLite semantics, where we return `Null` when the
requested column falls outside the number of columns available in the
record, when the payload is empty, etc.
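A condensed sketch of the incremental header parsing described above
(simplified, illustrative types and names, not the actual implementation;
construction of the cursor, i.e. reading the header-size varint and
reserving capacity from the schema column count, is omitted):
```rust
struct RecordCursor {
    serial_types: Vec<u64>, // serial types parsed so far, in column order
    offsets: Vec<usize>,    // end offset of each column's data in the payload
    header_parsed: usize,   // how far into the header we have read
    data_start: usize,      // where column data begins (end of the header)
}

impl RecordCursor {
    // Lazily parse header entries until the serial type for `col` is known,
    // then return the byte range of that column's data within `payload`.
    // Returns None when the column is missing, which the caller maps to
    // Null to match SQLite semantics.
    fn column_range(&mut self, payload: &[u8], col: usize) -> Option<(usize, usize)> {
        while self.serial_types.len() <= col && self.header_parsed < self.data_start {
            let (serial_type, n) = read_varint(&payload[self.header_parsed..]);
            self.header_parsed += n;
            let prev_end = *self.offsets.last().unwrap_or(&self.data_start);
            self.serial_types.push(serial_type);
            self.offsets.push(prev_end + serial_type_len(serial_type));
        }
        if col >= self.offsets.len() {
            return None;
        }
        let start = if col == 0 { self.data_start } else { self.offsets[col - 1] };
        Some((start, self.offsets[col]))
    }
}

// Payload size of a column for a given SQLite serial type.
fn serial_type_len(serial_type: u64) -> usize {
    match serial_type {
        0 | 8 | 9 => 0,                // NULL, integer 0, integer 1
        1..=4 => serial_type as usize, // 1-, 2-, 3-, 4-byte integers
        5 => 6,                        // 6-byte integer
        6 | 7 => 8,                    // 8-byte integer, float
        n if n >= 12 && n % 2 == 0 => ((n - 12) / 2) as usize, // BLOB
        n if n >= 13 => ((n - 13) / 2) as usize,               // TEXT
        _ => 0,                        // reserved serial types 10, 11
    }
}

// Minimal SQLite-style varint reader: returns (value, bytes consumed).
fn read_varint(buf: &[u8]) -> (u64, usize) {
    let mut value: u64 = 0;
    for (i, &byte) in buf.iter().take(8).enumerate() {
        value = (value << 7) | u64::from(byte & 0x7f);
        if byte & 0x80 == 0 {
            return (value, i + 1);
        }
    }
    // The ninth byte contributes all eight of its bits.
    ((value << 8) | u64::from(buf[8]), 9)
}
```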
### Efficient Record Comparison ops
- Three record comparison functions are introduced, for Integer, String,
and the general case, which replace `compare_immutable`. These functions
compare a serialized record with a deserialized one.
- `compare_records_int`: used when the first field is an integer, the
header is ≤63 bytes and there are ≤13 total fields. No varint parsing,
direct integer extraction.
- `compare_records_string`: used when the first field is text with binary
collation and the header is ≤63 bytes.
- `compare_records_generic`: used in complex cases, custom collations,
large headers. Here we parse the record incrementally field by field and
compare each field with the one from the deserialized record. We exit
early on the first mismatch, saving on the deserialization cost.
- `find_compare`: selects the optimal comparison strategy for a given
case and dispatches the required function (see the sketch after this
list).
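A small sketch of that dispatch (thresholds follow the description above;
the type and field names are simplified stand-ins, not the actual code):
```rust
enum Comparator {
    Int,     // first field is an integer, header <= 63 bytes, <= 13 fields
    String,  // first field is text with binary collation, header <= 63 bytes
    Generic, // everything else: custom collations, large headers, ...
}

struct KeyInfo {
    first_field_is_int: bool,
    first_field_is_binary_text: bool,
    header_len: usize,
    field_count: usize,
}

fn find_compare(key: &KeyInfo) -> Comparator {
    if key.header_len <= 63 {
        if key.first_field_is_int && key.field_count <= 13 {
            return Comparator::Int;
        }
        if key.first_field_is_binary_text {
            return Comparator::String;
        }
    }
    Comparator::Generic
}
```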
### Benchmarks `main` vs `incremental_column`
I've used the `testing/testing.db` for this benchmark.
| Query | Main | Incremental | % Change (Faster is +ve) |
|-------|------|-------------|--------------------------|
| SELECT first_name FROM users | 1.3579ms | 1.1452ms | 15.66 |
| SELECT age FROM users | 912.33µs | 897.97µs | 1.57 |
| SELECT email FROM users | 1.3632ms | 1.215ms | 10.87 |
| SELECT id FROM users | 1.4985ms | 1.1762ms | 21.50 |
| SELECT first_name, last_name FROM users | 1.5736ms | 1.4616ms | 7.11 |
| SELECT first_name, last_name, email FROM users | 1.7965ms | 1.754ms | 2.36 |
| SELECT id, first_name, last_name, email, age FROM users | 2.3545ms | 2.4059ms | -2.18 |
| SELECT * FROM users | 3.5731ms | 3.7587ms | -5.19 |
| SELECT * FROM users WHERE age = 30 | 87.947µs | 85.545µs | 2.73 |
| SELECT id, first_name FROM users WHERE first_name LIKE 'John%' | 1.8594ms | 1.6781ms | 9.75 |
| SELECT age FROM users LIMIT 1000 | 100.27µs | 95.418µs | 4.83 |
| SELECT first_name, age, email FROM users LIMIT 1000 | 176.04µs | 167.56µs | 4.81 |
Closes: https://github.com/tursodatabase/turso/issues/1703
Closes#1923