Running e.g. `RUST_LOG=debug target/debug/tursodb foo.db 'SELECT * FROM bar' &> output.txt` didn't generate traces, because the tracer was initialized after `app.first_run()`.
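A minimal sketch of the fix's shape, assuming the CLI wires up tracing via `tracing_subscriber` with an `EnvFilter` (the actual setup in the repo may differ; `first_run` below is a stand-in for the CLI's `app.first_run()`):

```rust
use tracing_subscriber::EnvFilter;

// Stand-in for the CLI's `app.first_run()`: the point where the statement
// passed on the command line is executed and the first traces are emitted.
fn first_run() {
    tracing::debug!("executing first statement");
}

fn main() {
    // Install the tracer before `first_run()`, so that RUST_LOG=debug captures
    // traces from the very first statement instead of missing them.
    tracing_subscriber::fmt()
        .with_env_filter(EnvFilter::from_default_env())
        .init();

    first_run();
}
```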
Closes#2114
This unblocks proper testing in the simulator, where (especially with indexes enabled) by far the most common reason for sim failure is the cache being full.
Reviewed-by: Pekka Enberg <penberg@iki.fi>
Closes#2135
These are nearly always used together in some form, so it makes sense to colocate them. It also makes many code paths simpler, as we no longer pass `collations` and `key_sort_order` around separately.
As a side effect of removing the bitfield-based `IndexKeySortOrder`, we also remove the arbitrary 64-column restriction for indexes; see e.g. this sim failure which fails due to 64+ index columns (not sure why it uses an index if they are disabled):
https://github.com/tursodatabase/turso/actions/runs/16339391964/job/46158045158
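A hypothetical sketch of the shape of the change (the type and field names are illustrative, not the ones in the repo): one entry per index key column, instead of a 64-bit sort-order bitfield plus a separately passed collation list.

```rust
#[derive(Clone, Copy)]
enum SortOrder { Asc, Desc }

#[derive(Clone, Copy)]
enum CollationSeq { Binary, NoCase, RTrim }

// Before: a u64 bitfield (`IndexKeySortOrder`) capped indexes at 64 key columns,
// and collations traveled in a separate list passed alongside it.
// After (illustrative): colocated per-column metadata, so there is no inherent
// column-count limit and the two pieces of information cannot drift apart.
#[derive(Clone, Copy)]
struct IndexKeyColumn {
    sort_order: SortOrder,
    collation: CollationSeq,
}

struct IndexKeyInfo {
    columns: Vec<IndexKeyColumn>,
}
```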
Closes#2131
<img height="400" alt="image" src="https://github.com/user-
attachments/assets/bdd5c0a8-1bbb-4199-9026-57f0e5202d73" />
<img height="400" alt="image" src="https://github.com/user-
attachments/assets/7ea63e58-2ab7-4132-b29e-b20597c7093f" />
We were preemptively copying the schema on each `Database::connect`. Now a single `Arc` to the schema is shared between connections until a change needs to be made, at which point it is mutated via `Arc::make_mut`. This is faster and reduces memory usage.
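A minimal sketch of the copy-on-write pattern described above, with stand-in `Schema`/`Connection` types (the real types in the repo differ):

```rust
use std::sync::Arc;

#[derive(Clone, Default)]
struct Schema {
    tables: Vec<String>,
}

struct Connection {
    // Every connection holds the same Arc until it needs to change the schema,
    // so connecting no longer deep-copies the schema up front.
    schema: Arc<Schema>,
}

impl Connection {
    fn add_table(&mut self, name: &str) {
        // Arc::make_mut clones the schema only if another connection still
        // shares it (copy-on-write); otherwise it mutates in place.
        Arc::make_mut(&mut self.schema).tables.push(name.to_string());
    }
}

fn main() {
    let shared = Arc::new(Schema::default());
    let mut a = Connection { schema: shared.clone() };
    let b = Connection { schema: shared.clone() };

    a.add_table("t"); // `a` gets a private copy here; `b` still sees the old schema
    assert_eq!(a.schema.tables.len(), 1);
    assert_eq!(b.schema.tables.len(), 0);
}
```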
Closes#2022
### Async IO performance, part 0
A relatively small and focused PR that mainly does two things. A .md document of the proposed/planned improvements to the io_uring module, to fully revamp our async IO, will be added separately.
1. **Registration of file descriptors.**
At startup, we call `io_uring_register_files_sparse`, which allocates an array in shared kernel/user space with each slot initialized to `-1`. When we open a file, we call `io_uring_register_files_update`, providing an index into this array and the `fd`.
Then, for IO submission, we can reference the index into this array instead of the fd, saving the kernel the work of looking up the fd in the process file table, incrementing its reference count, doing the operation, and finally decrementing the refcount. Instead, the kernel can just index into the array and do the operation.
This is especially an improvement for cases like ours, where files stay open for long periods of time and the kernel performs many operations on them.
The eventual goal is to use Fixed read/write operations, where both the file descriptor and the underlying buffer are registered with the kernel. There is another branch continuing this work which introduces a buffer pool that memlocks one large 32MB mmap'd arena and tries to use it wherever possible.
These Fixed operations are essentially the "holy grail" of io_uring performance (for file operations).
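A standalone sketch of the registered-files flow using the `io-uring` crate (not the actual code in our io module); `types::Fixed(slot)` makes the SQE reference the registered slot rather than a raw fd:

```rust
use std::os::unix::io::AsRawFd;

use io_uring::{opcode, types, IoUring};

fn main() -> std::io::Result<()> {
    let mut ring = IoUring::new(8)?;

    // 1. Allocate a sparse fixed-file table in the kernel; every slot starts at -1.
    ring.submitter().register_files_sparse(4)?;

    // 2. When a file is opened, publish its fd into slot 0 of the table.
    let file = std::fs::File::open("/etc/hostname")?;
    ring.submitter().register_files_update(0, &[file.as_raw_fd()])?;

    // 3. Submissions reference the slot (types::Fixed) instead of the raw fd, so
    //    the kernel skips the per-operation fd lookup and refcount round trip.
    let mut buf = vec![0u8; 64];
    let read = opcode::Read::new(types::Fixed(0), buf.as_mut_ptr(), buf.len() as u32)
        .offset(0)
        .build()
        .user_data(42);

    unsafe { ring.submission().push(&read).expect("submission queue full") };
    ring.submit_and_wait(1)?;

    let cqe = ring.completion().next().expect("completion entry");
    println!("read {} bytes", cqe.result());
    Ok(())
}
```

The next step on top of this, per the buffer-pool branch mentioned above, would be `register_buffers` plus the `ReadFixed`/`WriteFixed` opcodes, so neither the fd nor the buffer has to be re-validated per operation.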
2. **!Vectored IO**
This is kind of backwards, because the goal is indeed to implement proper vectored IO and yet I'm removing some of the plumbing in this PR; the issue is that we have been using `Writev`/`Readv` while never submitting more than one iovec at a time.
Writes to the WAL, especially, would benefit immensely from vectored IO,
as it is append-only and therefore all writes are contiguous. Regular
checkpointing/cache flushing to disk can also be adapted to aggregate
these writes and submit many in a single system call/opcode.
Until this is implemented, the bookkeeping and iovecs are unnecessary noise/overhead, so let's temporarily remove them and revert to plain `read`/`write` until they are needed and the vectored path can be designed from scratch.
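For reference, a sketch of what an eventual vectored WAL append could look like with the `io-uring` crate (illustrative only; this PR removes the current single-iovec plumbing rather than adding this):

```rust
use std::os::unix::io::RawFd;

use io_uring::{opcode, squeue, types};

/// Build one Writev SQE covering several contiguous WAL frames, so a whole batch
/// of appended frames costs a single submission instead of one SQE per frame.
/// The iovec slice must stay alive and unmoved until the completion arrives.
fn wal_append_sqe(fd: RawFd, frames: &[libc::iovec], wal_offset: u64) -> squeue::Entry {
    opcode::Writev::new(types::Fd(fd), frames.as_ptr(), frames.len() as u32)
        .offset(wal_offset)
        .build()
        .user_data(wal_offset) // tag the CQE so the caller knows which batch completed
}
```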
3. **Flags**
`setup_single_issuer` hints to the kernel that `io_uring_enter` calls will all be made from a single thread, and `setup_coop_taskrun` removes some unnecessary kernel interrupts for delivering cqe's, which most single-threaded applications do not need. Both flags show a modest performance improvement.
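With the `io-uring` crate these flags are set on the ring builder; a minimal sketch (note that `IORING_SETUP_SINGLE_ISSUER` requires a recent kernel, roughly 6.0+):

```rust
use io_uring::IoUring;

fn main() -> std::io::Result<()> {
    // IORING_SETUP_SINGLE_ISSUER: all submissions will come from one thread/task.
    // IORING_SETUP_COOP_TASKRUN: don't interrupt the task just to signal new CQEs;
    // a single-threaded reactor polls the completion queue itself anyway.
    let _ring = IoUring::builder()
        .setup_single_issuer()
        .setup_coop_taskrun()
        .build(256)?;

    Ok(())
}
```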
Closes#2127
Enables formatting `Expr::Column` by adding the context to `ToTokens`
instead of creating a new unparsing implementation for each node.
`ToTokens` implemented for:
- [x] `UpdatePlan`
- [x] `Plan`
- [x] `JoinedTable`
- [x] `SelectPlan`
- [x] `DeletePlan`
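A hypothetical sketch of the idea (the trait and types here are illustrative stand-ins, not the repo's actual definitions): passing a context into `to_tokens` lets `Expr::Column` resolve its table/column names, so every plan node can reuse the same trait instead of getting its own unparser.

```rust
// Illustrative stand-ins for the planner types named in the checklist above.
struct TableReference {
    name: String,
    columns: Vec<String>,
}

struct PlanContext<'a> {
    tables: &'a [TableReference],
}

enum Expr {
    Column { table: usize, column: usize },
    Literal(i64),
}

trait ToTokens {
    // The context parameter is the point of the change: with it, Expr::Column can
    // be turned back into SQL by any node that can supply the table list.
    fn to_tokens(&self, ctx: &PlanContext, out: &mut String);
}

impl ToTokens for Expr {
    fn to_tokens(&self, ctx: &PlanContext, out: &mut String) {
        match self {
            Expr::Column { table, column } => {
                let t = &ctx.tables[*table];
                out.push_str(&t.name);
                out.push('.');
                out.push_str(&t.columns[*column]);
            }
            Expr::Literal(v) => out.push_str(&v.to_string()),
        }
    }
}
```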
Reviewed-by: Pedro Muniz (@pedrocarlo)
Closes#1949
This PR renames the CDC table columns to use "change"-centric terminology and to avoid `operation_xxx` column names.
Just a small refactoring to bring more consistency, as `turso-db` refers to the feature as capture data **changes**, and the word "operation" does not appear there.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2120
Aftermath of the seek-related refactor in #2065, which you can read for background. The change in this PR is documented pretty well inline: if we receive a `TryAdvance` seek result when seeking after balancing, we need to, well, try to advance.
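A hypothetical sketch of the handling's shape (the enum and method names are illustrative, not the actual btree cursor API):

```rust
// Illustrative only: a stand-in for the cursor's seek result after balancing.
enum SeekResult {
    Found,
    NotFound,
    // The seek stopped one step short of the target; the caller must advance.
    TryAdvance,
}

struct Cursor {
    position: usize,
}

impl Cursor {
    fn seek(&mut self, _key: i64) -> SeekResult {
        // Stub: pretend the post-balancing seek asked us to advance.
        SeekResult::TryAdvance
    }

    fn next(&mut self) {
        self.position += 1;
    }
}

fn reposition_after_balancing(cursor: &mut Cursor, key: i64) {
    // The fix in words: when the post-balancing seek reports TryAdvance,
    // actually advance instead of leaving the cursor where the seek stopped.
    if let SeekResult::TryAdvance = cursor.seek(key) {
        cursor.next();
    }
}
```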
Closes#2116
Closes#2115
## What does this fix
This PR fixes an issue with BTree upwards traversal logic where we would
try to go up to a parent node in `next()` even though we are at the very
end of the btree. This behavior can leave the cursor incorrectly
positioned at an interior node when it should be at the right edge of
the rightmost leaf.
## Why doesn't it cause problems on main
This bug is masked on `main` by every table `insert()` (wastefully)
calling `find_cell()`:
- `op_new_rowid` called, let's say the current max rowid is `666`.
Cursor is left pointing at `666`.
- `insert()` is called with rowid `667`, cursor is currently pointing at
`666`, which is incorrect.
- `find_cell()` does a binary search every time, and hence somewhat
accidentally positions the cursor correctly _after_ `666` so that the
insert goes to the correct place
## Why was this issue found
In #1988, I am removing `find_cell()` entirely in favor of always performing a seek to the correct location (and skipping `seek` when it is not required), saving us from wasting a binary search on every insert. This change means we need to call `next()` after `op_new_rowid` to position the cursor correctly at the new insertion slot, and doing so surfaces this upwards traversal bug in that PR branch.
## Details of solution
- Store `cell_count` together with `cell_idx` in the pagestack, so that children can know whether their parents have reached their end without doing IO
- To make this foolproof, pin pages on `PageStack` so the page cache
cannot evict them during tree traversal
- `cell_indices` renamed to `node_states` since it now carries more
information (cell index AND count, instead of just index)
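A hypothetical sketch of the solution's shape (the names approximate the description above; the real `PageStack` differs): each stack entry records both the cell index and the cell count, so `next()` can tell whether an ancestor is exhausted without re-reading its page.

```rust
// Illustrative shape of the page-stack bookkeeping described above.
#[derive(Clone, Copy)]
struct NodeState {
    cell_idx: usize,
    cell_count: usize,
}

impl NodeState {
    // True when this node has no further cells to hand out.
    fn exhausted(&self) -> bool {
        self.cell_idx >= self.cell_count
    }
}

struct PageStack {
    // One entry per level of the current root-to-leaf path. The corresponding
    // pages are pinned so the page cache cannot evict them mid-traversal.
    node_states: Vec<NodeState>,
}

impl PageStack {
    // If every node on the path is exhausted, the cursor is at the very end of
    // the btree and next() should stay on the rightmost leaf instead of
    // ascending into an interior node.
    fn at_btree_end(&self) -> bool {
        self.node_states.iter().all(NodeState::exhausted)
    }
}
```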
Reviewed-by: Pere Diaz Bou <pere-altea@homail.com>
Closes#2005