Commit Graph

695 Commits

Author SHA1 Message Date
pedrocarlo
ccc22863c6 remove return_if_locked and return_if_locked_maybe_load 2025-08-13 10:24:55 +03:00
pedrocarlo
f95625a06c bubble completions in btree 2025-08-13 10:24:55 +03:00
pedrocarlo
85e86d427b cleanups - use io.block in many functions and return_if_io 2025-08-13 08:32:38 +03:00
pedrocarlo
e94f1f9f14 refactor move_to functions to return IO on start 2025-08-12 12:28:35 -03:00
pedrocarlo
4010dc8f32 state machine for insert 2025-08-12 12:28:35 -03:00
pedrocarlo
fe0e4bcbb7 state machine for seek_end 2025-08-12 12:28:35 -03:00
pedrocarlo
fc05518192 refactor continue_payload_overflow_with_offset 2025-08-12 12:28:34 -03:00
pedrocarlo
81fbf8cb4b balance_non_root validation logic should be done in the next state 2025-08-12 12:28:34 -03:00
pedrocarlo
96a6bc5125 end_tx does not need schema_did_change variable 2025-08-11 18:59:11 -03:00
PThorpe92
d7e4ba21f8 Add explanation for using 3mb limit 2025-08-08 10:55:28 -04:00
PThorpe92
3cff47e490 Fix btree test to properly initialize pool 2025-08-08 10:55:27 -04:00
PThorpe92
9d1ca1c8ca Add ReadFixed/WriteFixed opcodes for buffers from registered arena 2025-08-08 10:55:27 -04:00
PThorpe92
4ffb273b53 Adjust IO to use new buffer pool and buffer API 2025-08-08 10:55:26 -04:00
Jussi Saurio
7fd63d8a5d btree: cache usable_space in the btreecursor constructor 2025-08-08 10:32:18 +03:00
Jussi Saurio
15c429b673 btree: remove completely unused ParseRecordState 2025-08-08 10:08:59 +03:00
Preston Thorpe
7a793b818d Merge 'perf: a few small insert optimizations' from Jussi Saurio
1. We spend a lot of time in `cell_get_raw_region` in the balancing
routine, especially calling `contents.page_type()` there repeatedly, so
extract a version that takes some precomputed arguments; successive
calls use the same values, so they no longer have to be recomputed on
every call (a hedged sketch of the pattern follows this commit entry).
2. Avoid calling `self.usable_space()` in a loop in
`insert_into_page()`.
3. Avoid accessing the `pages_in_frames` lock if we're not going to
modify it.
The main improvement is to the "insert 100 rows" bench, which ends up
doing a lot of balancing:
```
Insert rows in batches/limbo_insert_1_rows
                        time:   [22.856 µs 24.342 µs 27.496 µs]
                        change: [-3.3579% +15.495% +67.671%] (p = 0.62 > 0.05)
                        No change in performance detected.

Insert rows in batches/limbo_insert_10_rows
                        time:   [32.196 µs 32.604 µs 32.981 µs]
                        change: [+1.3253% +2.9177% +4.5863%] (p = 0.00 < 0.05)
                        Performance has regressed.

Insert rows in batches/limbo_insert_100_rows
                        time:   [89.425 µs 92.105 µs 96.304 µs]
                        change: [-18.317% -13.605% -9.1022%] (p = 0.00 < 0.05)
                        Performance has improved.
```

Reviewed-by: Preston Thorpe <preston@turso.tech>

Closes #2483
2025-08-07 21:33:30 -04:00
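A minimal sketch of the hoisting pattern behind items 1 and 2 above, using hypothetical types (`PageContents`, `cell_get_raw_region_fast`, `balance_scan`) that only approximate the real `BTreeCursor` code: loop-invariant values such as the page type are computed once per page and passed into the per-cell helper instead of being re-derived on every call.

```rust
// Hypothetical sketch of hoisting loop-invariant values; names do not
// mirror the real BTreeCursor API.
#[derive(Clone, Copy)]
enum PageType {
    Leaf,
    Interior,
}

struct PageContents {
    page_type: PageType,
    cell_count: usize,
}

impl PageContents {
    // Imagine this is comparatively expensive (it reads the page header).
    fn page_type(&self) -> PageType {
        self.page_type
    }
}

/// Fast-path helper: the caller supplies the precomputed page type
/// (usable_space could be passed the same way), so it is derived once
/// per page instead of once per cell.
fn cell_get_raw_region_fast(page_type: PageType, idx: usize) -> (usize, usize) {
    let header_size = match page_type {
        PageType::Leaf => 8,
        PageType::Interior => 12,
    };
    (header_size + idx * 2, 2) // placeholder (offset, len)
}

fn balance_scan(contents: &PageContents) -> usize {
    // Hoisted out of the loop: computed once for all successive calls.
    let page_type = contents.page_type();
    (0..contents.cell_count)
        .map(|idx| cell_get_raw_region_fast(page_type, idx).0)
        .sum()
}

fn main() {
    let page = PageContents { page_type: PageType::Leaf, cell_count: 4 };
    println!("sum of cell offsets: {}", balance_scan(&page));
}
```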
Jussi Saurio
1fe32dadf3 PageContent: make read_x/write_x methods private and add dedicated methods
Problem:

A very easy source of bugs is to mistakenly use e.g. PageContent::read_u16()
instead of PageContent::read_u16_no_offset(). The difference between the two
is that `read_u16()` adds 100 bytes to the requested byte offset if and only if
the page in question is page 1, which contains a 100-byte database header.

Case in point: see #2491.

Observation:

In all of the cases where we want to read from or write to a page "header-sensitively",
those reads/writes are to so-called "well-known offsets", e.g. specific bytes in a btree
page header.

In all other cases, the "no-offset" versions, i.e. the ones taking the absolute byte offset
as parameter, should be used.

Solution:

1. Make all the offset-sensitive versions (read_u16() and friends) private methods of
`PageContent`.
2. Expose dedicated methods for things like updating the rightmost pointer, updating the
fragmented bytes count and so on, and use them universally instead of the plain read/write
methods (a sketch of the resulting shape follows this commit entry).
2025-08-07 17:00:06 +03:00
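A hedged sketch of the resulting API shape, with illustrative field names and simplified header offsets (the real `PageContent` differs): the header-sensitive reader is private, and callers go through dedicated accessors for well-known btree page header fields or the explicit `_no_offset` variant.

```rust
// Illustrative sketch; offsets and names are simplified, not the real
// PageContent implementation.
struct PageContent {
    /// 100 for page 1 (which starts with the database header), else 0.
    offset: usize,
    buffer: Vec<u8>,
}

impl PageContent {
    /// Private: header-sensitive, shifts the position by 100 bytes on page 1.
    fn read_u16(&self, pos: usize) -> u16 {
        let pos = self.offset + pos;
        u16::from_be_bytes([self.buffer[pos], self.buffer[pos + 1]])
    }

    /// Public, dedicated accessor for a well-known header field: the offset
    /// of the first freeblock (bytes 1-2 of the btree page header).
    pub fn first_freeblock(&self) -> u16 {
        self.read_u16(1)
    }

    /// Public, dedicated mutator for another well-known header field: the
    /// fragmented free bytes count (byte 7 of the btree page header).
    pub fn write_fragmented_bytes_count(&mut self, count: u8) {
        self.buffer[self.offset + 7] = count;
    }

    /// Public: takes an absolute byte offset and never adjusts for page 1,
    /// for callers that already computed the exact position.
    pub fn read_u16_no_offset(&self, pos: usize) -> u16 {
        u16::from_be_bytes([self.buffer[pos], self.buffer[pos + 1]])
    }
}

fn main() {
    let page1 = PageContent { offset: 100, buffer: vec![0u8; 4096] };
    // Reads raw bytes 101-102: page 1 carries the 100-byte database header
    // before its btree page header, and first_freeblock() accounts for it.
    println!("first freeblock at {}", page1.first_freeblock());
    // A caller holding an absolute position uses the no-offset variant.
    println!("raw read: {}", page1.read_u16_no_offset(101));
}
```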
Jussi Saurio
6cd7334afc btree/fix: use correct byte offsets for page1 in defragmentation
`defragment_page_fast()` incorrectly did not use the version of the
read/write methods on `PageContent` that does NOT add the 100-byte
database header to the requested byte offset.

On page 1, this resulted in the 2nd/3rd freeblocks being read from the
wrong offset and cell offsets being written to the wrong location during
defragmentation.
2025-08-07 15:42:06 +03:00
Jussi Saurio
2ed41bbb35 btree/insert: avoid calling self.usable_space() in a loop 2025-08-07 10:09:35 +03:00
Jussi Saurio
4b27cc0d46 btree: add fast path version of cell_get_raw_region 2025-08-07 09:57:56 +03:00
Jussi Saurio
3db25cf84c perf/btree: add method for getting raw offset of cell payload start 2025-08-07 09:34:05 +03:00
Jussi Saurio
c8d2a1a480 btree: add a few more assertions about balance state 2025-08-06 13:39:20 +03:00
Jussi Saurio
a86a0e194d refactor/btree: cleanup write/delete/balancing states
Problem:

Currently `WriteState` "owns" the balancing state machine, even
though a separate `DeleteState` can also trigger balancing, which
results in awkward back-and-forth switching between `CursorState::Write`
and `CursorState::Delete` during balancing.

Fix:

1. Extract `balance_state` as a separate state machine, since its
state transitions are exactly the same regardless of whether an
insert or a delete triggered the balancing.
2. This allows us to remove the different 'Balance-xxx' variants from
`WriteState`, as well as `WriteInfo` and `DeleteInfo`, since those
states become simple enums. Each of them now has a state called
`Balancing` which just delegates work to the balancing state machine
(a rough sketch of this shape follows this commit entry).
3. This further allows us to remove the awkward switching between
`CursorState::Delete` and `CursorState::Write` during a balance that
happens as a result of a deletion.
2025-08-06 13:37:35 +03:00
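A rough sketch of the refactored shape described above, with illustrative variant names (`BalanceState`, `WriteState::Balancing`, `DeleteState::Balancing`) rather than the actual definitions: both write and delete keep only a thin `Balancing` variant and delegate to one shared balancing state machine.

```rust
// Illustrative shape only; the real WriteState/DeleteState and the
// balancing machine carry more variants and more data.
enum BalanceState {
    Start,
    NonRoot,
    Done,
}

enum WriteState {
    Start,
    Insert,
    /// Delegates to the shared balancing machine instead of owning
    /// Balance-xxx variants itself.
    Balancing,
    Finish,
}

enum DeleteState {
    Start,
    RemoveCell,
    /// Same delegation as WriteState::Balancing.
    Balancing,
    Finish,
}

struct Cursor {
    balance_state: BalanceState,
}

impl Cursor {
    /// One balancing routine, driven the same way whether an insert or a
    /// delete triggered it; no switching between CursorState::Write and
    /// CursorState::Delete mid-balance.
    fn balance_step(&mut self) -> bool {
        match self.balance_state {
            BalanceState::Start => {
                self.balance_state = BalanceState::NonRoot;
                false
            }
            BalanceState::NonRoot => {
                self.balance_state = BalanceState::Done;
                false
            }
            BalanceState::Done => true,
        }
    }
}

fn main() {
    let mut cursor = Cursor { balance_state: BalanceState::Start };
    // Either WriteState::Balancing or DeleteState::Balancing would drive
    // this same loop until the shared machine reports completion.
    while !cursor.balance_step() {}
    let _ = (WriteState::Balancing, DeleteState::Balancing);
    println!("balancing finished");
}
```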
Jussi Saurio
5f3cfaac60 refactor/btree: don't clone WriteState in balance_non_root() 2025-08-06 11:30:09 +03:00
Jussi Saurio
a15d7dd2e7 refactor/btree: don't clone WriteState in balance() 2025-08-06 11:30:09 +03:00
Jussi Saurio
1c1f55fdfb refactor/btree: remove cloning of WriteState in insert_into_page() 2025-08-06 08:50:56 +03:00
Jussi Saurio
c3a32b63bf refactor/btree: remove unnecessary ref of self in overwrite_content() 2025-08-06 08:45:34 +03:00
Jussi Saurio
6dd08c21e4 refactor/btree: remove unnecessary mut ref of self in rowid() 2025-08-06 08:44:52 +03:00
Jussi Saurio
839d428e36 core/btree: fix re-entrancy bug in insert_into_page()
We currently clone WriteState on every iteration of `insert_into_page()`,
presumably for Borrow Checker Reasons (tm).

There was a bug in `WriteState::Insert` handling where if `fill_cell_payload()`
returned IO, the `fill_cell_payload_state` was not updated in
`write_info.state`, leading to an infinite loop of allocating new pages.

This bug was surfaced by, but not caused by, #2400.
2025-08-06 08:01:49 +03:00
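A hedged sketch of the re-entrancy pattern this fix restores, using approximate names (`FillCellPayloadState`, `IOResult`) rather than the real code: the sub-operation's state is mutated in place on the cursor-owned `WriteState`, so a pending-IO return resumes where it left off instead of restarting and allocating fresh pages.

```rust
// Simplified sketch of the re-entrancy pattern; not the actual
// insert_into_page()/fill_cell_payload() code.
#[derive(Clone)]
enum FillCellPayloadState {
    Start,
    AllocatingOverflowPages { remaining: usize },
    Done,
}

#[derive(Clone)]
enum WriteState {
    Insert { fill_cell_payload_state: FillCellPayloadState },
    Finish,
}

enum IOResult {
    Done,
    IO,
}

fn fill_cell_payload(state: &mut FillCellPayloadState) -> IOResult {
    match state {
        FillCellPayloadState::Start => {
            *state = FillCellPayloadState::AllocatingOverflowPages { remaining: 2 };
            IOResult::IO
        }
        FillCellPayloadState::AllocatingOverflowPages { remaining } => {
            *remaining -= 1;
            if *remaining == 0 {
                *state = FillCellPayloadState::Done;
                IOResult::Done
            } else {
                IOResult::IO
            }
        }
        FillCellPayloadState::Done => IOResult::Done,
    }
}

fn insert_into_page(write_state: &mut WriteState) -> IOResult {
    loop {
        match write_state {
            WriteState::Insert { fill_cell_payload_state } => {
                // The buggy version cloned the state, mutated the clone, and
                // returned IO without writing it back -- so every re-entry
                // started from Start and allocated new overflow pages again.
                // The correct version mutates the cursor-owned state in place.
                match fill_cell_payload(fill_cell_payload_state) {
                    IOResult::IO => return IOResult::IO,
                    IOResult::Done => *write_state = WriteState::Finish,
                }
            }
            WriteState::Finish => return IOResult::Done,
        }
    }
}

fn main() {
    let mut state = WriteState::Insert {
        fill_cell_payload_state: FillCellPayloadState::Start,
    };
    let mut steps = 0;
    while let IOResult::IO = insert_into_page(&mut state) {
        steps += 1; // each pending IO would be completed by the event loop here
    }
    println!("completed after {steps} pending-IO returns");
}
```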
PThorpe92
f6a68cffc2 Remove RefCell from IO and Page apis 2025-08-05 16:24:49 -04:00
Jussi Saurio
cde8567b1d Merge 'More state machine + Return IO in places where completions are created' from Pedro Muniz
In preparation for tracking IO Completions, we need to start returning
IO in places where completions are created. Doing some more plumbing now
to avoid bigger PRs in the future.

Closes #2438
2025-08-05 15:47:51 +03:00
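A tiny sketch of the plumbing pattern, assuming a `CursorResult`-like enum and hypothetical `Pager`/`btree_destroy` signatures (not the crate's actual API): functions that create completions return an explicit in-progress marker that callers propagate instead of blocking.

```rust
// Hedged sketch of the plumbing; the real types and signatures differ.
enum CursorResult<T> {
    Ok(T),
    IO,
}

struct Pager {
    pending_completions: usize,
}

impl Pager {
    /// Submitting a read creates a completion; instead of blocking, the
    /// in-progress status is bubbled up to the caller.
    fn read_page(&mut self, _page_no: u32) -> CursorResult<Vec<u8>> {
        self.pending_completions += 1;
        CursorResult::IO
    }
}

fn btree_destroy(pager: &mut Pager) -> CursorResult<()> {
    // Propagate IO rather than spinning: the caller re-enters this
    // function once the completion has been delivered.
    match pager.read_page(1) {
        CursorResult::IO => CursorResult::IO,
        CursorResult::Ok(_page) => CursorResult::Ok(()),
    }
}

fn main() {
    let mut pager = Pager { pending_completions: 0 };
    if let CursorResult::IO = btree_destroy(&mut pager) {
        println!("{} completion(s) outstanding", pager.pending_completions);
    }
}
```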
Pekka Enberg
d2fea25fef Merge 'perf/btree: implement fast algorithm for defragment_page' from Jussi Saurio
Implement sqlite's fast path defragment algorithm. This path is taken
when:
1. There are 1-2 freeblocks
2. There are at most `max_frag_bytes` fragmented free bytes (-1..=4)
Instead of reconstructing the entire page, it merges the two freeblocks
and then moves the merged freeblock to the left, effectively turning it
into free space in the unallocated region, instead of a freeblock.
`max_frag_bytes` is particularly important when inserting a new cell,
because if the page contains (in total) roughly just enough space for
the new cell, there can be hardly any fragmented free space; otherwise,
merging the 1-2 freeblocks won't produce enough contiguous free space to
fit the cell (a simplified sketch of the merge step follows this commit
entry).
## Benchmark
```
Insert rows in batches/limbo_insert_1_rows
                        time:   [26.692 µs 27.153 µs 27.695 µs]
                        change: [-9.9033% -2.9097% +1.6336%] (p = 0.55 > 0.05)
                        No change in performance detected.
Insert rows in batches/limbo_insert_10_rows
                        time:   [38.618 µs 40.022 µs 42.201 µs]
                        change: [-8.9137% -6.6405% -4.2299%] (p = 0.00 < 0.05)
                        Performance has improved.
Insert rows in batches/limbo_insert_100_rows
                        time:   [168.94 µs 169.58 µs 170.31 µs]
                        change: [-22.520% -17.669% -12.790%] (p = 0.00 < 0.05)
                        Performance has improved.
```

Reviewed-by: Pere Diaz Bou <pere-altea@homail.com>

Closes #2411
2025-08-05 12:44:48 +03:00
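A simplified sketch of the merge step for a single freeblock, using illustrative types (`Page`, `absorb_freeblock`) rather than the real page layout; the actual fast path also merges two freeblocks first and updates the raw page header fields.

```rust
// Simplified illustration of the fast path for a single freeblock; the
// real routine handles the two-freeblock case and the on-page header.
struct Page {
    data: Vec<u8>,
    /// Offsets of cell starts within `data` (stand-in for the cell pointer array).
    cell_offsets: Vec<usize>,
    /// Start of the cell content area; everything before it (after the
    /// cell pointer array) is the unallocated region.
    content_start: usize,
}

/// Remove a freeblock at (`free_off`, `free_len`) by sliding the cell
/// content that sits below it to the right, so the reclaimed bytes join
/// the unallocated region instead of remaining a freeblock.
fn absorb_freeblock(page: &mut Page, free_off: usize, free_len: usize) {
    // Shift [content_start, free_off) right by free_len bytes.
    page.data
        .copy_within(page.content_start..free_off, page.content_start + free_len);
    // Cells that moved now start free_len bytes later.
    for off in page.cell_offsets.iter_mut() {
        if *off < free_off {
            *off += free_len;
        }
    }
    // The unallocated region grew by the size of the removed freeblock.
    page.content_start += free_len;
}

fn main() {
    let mut page = Page {
        data: (0u8..32).collect(),
        cell_offsets: vec![8, 20],
        content_start: 8,
    };
    // Pretend bytes 14..20 are a 6-byte freeblock between the two cells.
    absorb_freeblock(&mut page, 14, 6);
    assert_eq!(page.content_start, 14);
    assert_eq!(page.cell_offsets, vec![14, 20]);
}
```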
Pekka Enberg
e355fc4c65 Merge 'core/mvcc: implement seeking operations with rowid' from Pere Diaz Bou
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>

Closes #2429
2025-08-05 12:40:48 +03:00
Jussi Saurio
ad35cf07eb Add extra illustrative doodle for pere 2025-08-05 11:24:15 +03:00
Jussi Saurio
a5330aa6fb perf/btree: implement fast algorithm for defragment_page 2025-08-05 11:24:14 +03:00
Jussi Saurio
5b84ad6b0f Merge 'Update defragment page to defragment in-place' from João Severo
Change the original code from doing a full copy of the original buffer
to modifying the buffer in-place using a temporary vector of offsets.

Closes #2258
2025-08-05 11:22:22 +03:00
pedrocarlo
a4a2425ffd return IO in places where completions are created 2025-08-04 23:28:57 -03:00
pedrocarlo
f2d84a534c adjust clear_overflow_pages 2025-08-04 15:28:06 -03:00
pedrocarlo
718ad5e7fd btree_destroy return IO 2025-08-04 14:12:51 -03:00
pedrocarlo
e0978844e6 adjust integrity_check 2025-08-04 14:12:50 -03:00
pedrocarlo
aa05616845 fix tests 2025-08-04 13:08:30 -03:00
pedrocarlo
5f52d9b6b4 state machine for count 2025-08-04 13:00:43 -03:00
pedrocarlo
1585d5cbee state machine for 'next' and 'prev' 2025-08-04 13:00:43 -03:00
pedrocarlo
f1df9a909e state machine for 'rewind' 2025-08-04 12:59:52 -03:00
Pere Diaz Bou
662da34e7d core/mvcc: implement seeking operations with rowid 2025-08-04 13:52:54 +02:00
Pere Diaz Bou
f26e442597 core/mvcc: fix new rowid
The next rowid was being tracked globally for all tables and was reset
to 0 every time the database was opened.
2025-08-04 12:31:17 +02:00
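A hedged sketch of the per-table shape this fix implies, using a hypothetical `RowidAllocator` (the real MVCC store keys and persists this differently): one counter per table, seeded from the table's current maximum rowid rather than restarting at 0.

```rust
use std::collections::HashMap;

// Hypothetical per-table rowid allocation; not the actual MVCC code.
struct RowidAllocator {
    next_rowid: HashMap<u64, i64>, // table id -> last allocated rowid
}

impl RowidAllocator {
    fn new() -> Self {
        Self { next_rowid: HashMap::new() }
    }

    /// Called on first use after the database is opened, so the counter
    /// does not restart from 0.
    fn seed(&mut self, table_id: u64, current_max_rowid: i64) {
        self.next_rowid.entry(table_id).or_insert(current_max_rowid);
    }

    fn allocate(&mut self, table_id: u64) -> i64 {
        let last = self.next_rowid.entry(table_id).or_insert(0);
        *last += 1;
        *last
    }
}

fn main() {
    let mut alloc = RowidAllocator::new();
    alloc.seed(1, 41); // table 1 already holds rowids up to 41
    assert_eq!(alloc.allocate(1), 42);
    assert_eq!(alloc.allocate(2), 1); // a different table gets its own counter
}
```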
Jussi Saurio
63a5ef596b perf/btree: skip seek in move_to_rightmost() if we are already on rightmost page 2025-08-02 13:56:59 +03:00
Jussi Saurio
66f1ff9ad0 btree/defragment_page: fix corruption check assertion 2025-08-02 13:28:41 +03:00
Joao Severo
1f21d92f6d use turbo_assert! 2025-08-02 13:24:12 +03:00
Joao Severo
71b09727d9 add comment clarifying the cell ordering 2025-08-02 13:24:12 +03:00