When compiling with features disabled, there are lots of clippy
warnings. This PR silences them.
For the utils file, I am using a bit of a hammer and just allowing unused
items for the whole file. Because this file is a grab-bag of utilities by
nature, some things will always be unused depending on the feature set.
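For illustration, the file-level escape hatch can look something like this (a sketch, not necessarily the exact lints the PR allows):
```rust
// Top of the utils file (sketch): allow unused items for the whole module,
// since which helpers are referenced depends on the enabled feature set.
#![allow(dead_code)]
#![allow(unused_imports)]
```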
There's no such thing as a read-only connection.
In a normal connection, you can have many attached databases. Some r/o,
some r/w.
To properly fix that, we also need to fix the OpenWrite opcode. Right
now we are passing a name, which is the name of the table. That
parameter is not used anywhere, and it is also not what the SQLite
opcode specifies. As with OpenRead, p3 should be the database index.
With that change, we can - for now - pass index 0, which is all we
support anyway, and then use that to test if we are r/o.
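A rough sketch of the idea, with purely illustrative names (`AttachedDatabase` and `is_read_only` are not the actual API):
```rust
// Sketch: read-only is a property of each attached database, not of the
// connection as a whole.
struct AttachedDatabase {
    name: String,
    read_only: bool,
}

struct Connection {
    // Index 0 is the main database; further entries come from ATTACH.
    databases: Vec<AttachedDatabase>,
}

impl Connection {
    /// OpenWrite receives a database index (for now always 0, the main
    /// database) and uses it to decide whether writes are allowed.
    fn is_read_only(&self, db_index: usize) -> bool {
        self.databases[db_index].read_only
    }
}
```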
Reviewed-by: Preston Thorpe (@PThorpe92)
Closes#2232
Currently, each record header is decoded at least twice: once to
determine the record size within the read buffer (in order to construct
the `ImmutableRecord` instance), and again later when decoding the
record for comparison. This redundant decoding can have a noticeable
negative impact on performance when records are wide (e.g. contain
multiple columns).
This update modifies the (de)serialization format for sorted chunk files
by prepending a record size varint to each record payload. As a result,
only a single varint needs to be decoded to determine the record size,
eliminating the need to decode the full record header during reads.
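Conceptually, the new chunk framing looks roughly like this (a sketch using a LEB128-style varint and illustrative helper names; the real code presumably reuses the existing varint helpers and record types):
```rust
// Each record in a sorted chunk is written as [size varint][record payload],
// so a reader only decodes the size varint to slice out the payload.
fn write_varint(buf: &mut Vec<u8>, mut value: u64) {
    loop {
        let byte = (value & 0x7f) as u8;
        value >>= 7;
        if value == 0 {
            buf.push(byte);
            break;
        }
        buf.push(byte | 0x80);
    }
}

fn read_varint(buf: &[u8]) -> (u64, usize) {
    let (mut value, mut shift, mut len) = (0u64, 0u32, 0usize);
    for &byte in buf {
        value |= u64::from(byte & 0x7f) << shift;
        shift += 7;
        len += 1;
        if byte & 0x80 == 0 {
            break;
        }
    }
    (value, len)
}

// Writing a record: prepend its size, then the payload as-is.
fn write_record(chunk: &mut Vec<u8>, payload: &[u8]) {
    write_varint(chunk, payload.len() as u64);
    chunk.extend_from_slice(payload);
}

// Reading a record: decode only the size varint, no record header parsing.
// Returns the payload slice and the total number of bytes consumed.
fn read_record(chunk: &[u8]) -> (&[u8], usize) {
    let (size, varint_len) = read_varint(chunk);
    let end = varint_len + size as usize;
    (&chunk[varint_len..end], end)
}
```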
Closes#2176
Two of the opcodes we implement (OpenRead and Transaction) should have
an operand specifying the database to use, but they don't.
Add it, and for now always use 0 (the main database).
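A minimal sketch of the shape of the change, with illustrative field names rather than the actual instruction definitions:
```rust
// Both instructions carry the index of the database they operate on
// (0 = the main database, which is all we pass for now).
enum Insn {
    OpenRead {
        cursor_id: usize,
        root_page: usize,
        db: usize, // newly added database index operand
    },
    Transaction {
        db: usize, // newly added database index operand
        write: bool,
    },
    // ... other opcodes elided
}
```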
The following sequence of events is possible:
- init_chunk_heap() called
- chunk A status is WriteComplete, so chunk.read() gets called on chunk A
- some other chunk B is in WaitingForWrite status after flush()
- init_chunk_heap() returns IOResult::IO
- init_chunk_heap() is called again
- we panic because chunk A is in WaitingForRead status
So - we just allow WaitingForRead status in init_chunk_heap() instead.
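In code terms, the relaxed check looks roughly like this (hypothetical types and names, not the real sorter internals):
```rust
// Sketch: init_chunk_heap() tolerates chunks that are already WaitingForRead,
// because an earlier invocation may have issued the read before returning
// IOResult::IO.
#[derive(Debug)]
enum ChunkStatus {
    WriteComplete,
    WaitingForWrite,
    WaitingForRead,
}

fn ensure_read_issued(status: &mut ChunkStatus) {
    match status {
        // First time we see this chunk: kick off the read.
        ChunkStatus::WriteComplete => *status = ChunkStatus::WaitingForRead,
        // A previous init_chunk_heap() call already issued the read.
        ChunkStatus::WaitingForRead => {}
        // Anything else (e.g. WaitingForWrite) is still a bug.
        other => panic!("unexpected chunk status: {other:?}"),
    }
}
```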
This panic was caught thanks to Pedro's IO latency enhancement to the sim!
Fixes#2153.
I'm not so sure that SQLite doesn't roll back in more cases than this; we
should definitely check that out.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2154
Span creation in debug mode is very slow and limits how fast we can run
the Simulator.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2146
This PR updates to Rust 1.88.0 ([Release
notes](https://releases.rs/docs/1.88.0/)) and fixes all the clippy
errors that come with the new Rust version.
This is possible in the latest Rust version:
```rust
if let Some(foo) = bar && foo.is_cool() {
    ...
}
```
There are three complications in the migration (so far):
- A BUNCH of Clippy warnings (mostly fixed in
https://github.com/tursodatabase/limbo/pull/1827)
- Windows cross compilation failed; linking `advapi32` on Windows fixes it
(see the `build.rs` sketch below)
  - Since Rust 1.87.0, advapi32 is not linked by default anymore
([Release notes](https://github.com/rust-lang/rust/blob/master/RELEASES.md#compatibility-notes-1),
[PR](https://github.com/rust-lang/rust/pull/138233))
- Rust is now stricter about FFI and pointer alignment. CI checks
failed with the error below
  - Fixed in https://github.com/tursodatabase/turso/pull/2064
```
thread 'main' panicked at core/ext/vtab_xconnect.rs:64:25:
misaligned pointer dereference: address must be a multiple of 0x8 but is 0x7ffd9d901554
```
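A minimal `build.rs` sketch of the advapi32 workaround (whether the actual fix lives in `build.rs` or elsewhere isn't shown here; this is just the idea):
```rust
// build.rs (sketch): explicitly link advapi32 on Windows targets, since
// Rust 1.87.0 no longer links it by default.
fn main() {
    if std::env::var("CARGO_CFG_TARGET_OS").as_deref() == Ok("windows") {
        println!("cargo:rustc-link-lib=advapi32");
    }
}
```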
Closes#1807
These are nearly always used together in some form, so it makes sense to
colocate them. It also makes many code paths simpler, as we don't
separately pass `collations` and `key_sort_order` around.
As a side effect of removing the bitfield-based `IndexKeySortOrder`, we
also remove the arbitrary 64-column restriction for indexes; see e.g.
this sim failure, which fails due to 64+ index columns (not sure why it
uses an index if they are disabled):
https://github.com/tursodatabase/turso/actions/runs/16339391964/job/46158045158
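Roughly, the shape of the change is to replace the bitfield plus a parallel collation list with one per-column list (illustrative types, not the actual definitions):
```rust
// Sketch: each index key column carries its own collation and sort order,
// instead of a separate `collations` vector plus a 64-bit sort-order bitfield.
// A Vec has no fixed upper bound, so the 64-column limit disappears.
#[derive(Clone, Copy)]
enum SortOrder {
    Asc,
    Desc,
}

#[derive(Clone, Copy)]
enum Collation {
    Binary,
    NoCase,
    Rtrim,
}

#[derive(Clone, Copy)]
struct KeyColumnInfo {
    collation: Collation,
    sort_order: SortOrder,
}

/// One entry per index key column.
type IndexKeyInfo = Vec<KeyColumnInfo>;
```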
Closes#2131
Small refactoring to reduce confusion (I was caught in this trap and set
`amount` to one in the CDC branch during development).
Also, this PR includes a small fix for broken `concat_ws` emit logic.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2100
## Background
PR #2065 fixed a bug with table btree seeks concerning boundaries of
leaf pages.
The issue was that if we were e.g. looking for the first key greater
than (GT) 100, we always assumed the key would either be found on the
left child page of a given divider (e.g. divider 102) or not at all,
which is incorrect. #2065 has more discussion and documentation about
this, so read that one for more context.
## This PR
We already had similar handling for index btrees as #2065 introduced for
table btrees, but it was baked into the `BTreeCursor` struct's seek
handling itself, whereas #2065 handled this on the VDBE side.
This PR unifies this handling for both table and index btrees by always
doing the additional cursor advancement in the VDBE.
Unfortunately, unlike table btrees, index btrees may also need to do an
additional advance when they are looking for an exact match. This
resulted in a bigger refactor than anticipated, since there are quite a
few VDBE instructions that may perform a seek, e.g.: `IdxInsert`,
`IdxDelete`, `Found`, `NotFound`, `NoConflict`. All of these can
potentially end up in a similar situation where the cursor needs one
more advance after the initial seek; until now, they called
`cursor.seek()` directly and expected the `BTreeCursor` to handle the
auto-advance fallback internally.
For this reason, I have 1. removed the "TryAdvance"-ish logic from the
index btree internals and 2. extracted a common VDBE helper `fn
seek_internal()` - heavily based on the existing `op_seek_internal()`,
but decoupled from instructions and the program counter - which all the
interested VDBE instructions will call to delegate their seek logic.
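In spirit, the shared helper does something like the following (a simplified sketch; the real types, signatures, and state-machine handling differ):
```rust
// Sketch of the shared seek-plus-advance logic that instructions like
// IdxInsert, IdxDelete, Found, NotFound and NoConflict delegate to.
enum SeekOutcome {
    Found,      // cursor landed on a matching entry
    NotFound,   // no candidate exists at all
    TryAdvance, // candidate may be the next entry: advance exactly once
}

trait SeekableCursor {
    fn seek(&mut self, key: &[u8]) -> SeekOutcome;
    fn next(&mut self) -> bool; // false when the btree is exhausted
    fn key_matches(&self, key: &[u8]) -> bool;
}

/// Perform the initial seek and, if needed, the single extra advance that
/// used to be hidden inside the index btree cursor internals.
fn seek_internal<C: SeekableCursor>(cursor: &mut C, key: &[u8]) -> bool {
    match cursor.seek(key) {
        SeekOutcome::Found => true,
        SeekOutcome::NotFound => false,
        SeekOutcome::TryAdvance => cursor.next() && cursor.key_matches(key),
    }
}
```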
Closes#2083
Reviewed-by: Nikita Sivukhin (@sivukhin)
Reviewed-by: Pere Diaz Bou <pere-altea@homail.com>
Closes#2084
This PR adds Euclidean distance support for Limbo's vector search.
At the same time, it introduces some type abstractions, such as
`DistanceCalculator`, because I hope to unify the current vector module
in the future to make it more structured, clearer, and more extensible.
While implementing Euclidean distance for Limbo, I discovered that many
checks could be done via the type system or up front, rather than
waiting until the distance is calculated. Building these checks into the
type system, or performing them ahead of time, would let us explore more
efficient computations such as automatic vectorization or SIMD
acceleration, which is future work.
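As a rough illustration of the abstraction (not the actual trait or function names in the PR):
```rust
// Sketch: a DistanceCalculator-style abstraction where validation happens
// up front, keeping the hot loop simple enough to vectorize later.
trait DistanceCalculator {
    fn distance(a: &[f64], b: &[f64]) -> Option<f64>;
}

struct Euclidean;

impl DistanceCalculator for Euclidean {
    fn distance(a: &[f64], b: &[f64]) -> Option<f64> {
        // Check dimensions once, before doing any arithmetic.
        if a.len() != b.len() {
            return None;
        }
        let sum_sq: f64 = a
            .iter()
            .zip(b)
            .map(|(x, y)| {
                let d = x - y;
                d * d
            })
            .sum();
        Some(sum_sq.sqrt())
    }
}
```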
Reviewed-by: Nikita Sivukhin (@sivukhin)
Closes#1986