Mirror of https://github.com/aljazceru/turso.git, synced 2025-12-29 05:54:21 +01:00
After significant digging into why writes in particular were so much slower on the io_uring back-end, it turned out that checkpointing was incredibly slow, for several reasons. One is that we essentially end up calling `submit_and_wait` for every page. This PR (which, of course, heavily conflicts with my other open PR) attempts to remedy this: it adds `pwritev` to the `File` trait for IO back-ends that want to support it, and aggregates contiguous writes into a series of `pwritev` calls instead of writing each page individually.

### Performance:

`make bench-vfs SQL="insert into products (name,price) values (randomblob(4096), randomblob(2048));" N=1000`

# Update:

**main**

<img width="505" height="194" alt="image" src="https://github.com/user-attachments/assets/8e4a27af-0bb6-4e01-8725-00bc9f8a82d6" />

**this branch**

<img width="555" height="197" alt="image" src="https://github.com/user-attachments/assets/fad1f685-3cb0-4e06-aa9d-f797a0db8c63" />

The same test (any test with writes) on this updated branch is now roughly as fast as the syscall IO back-end, and runs will often be faster.

The trace below illustrates a checkpoint. Every `count=N` entry with N > 1 represents N - 1 syscalls saved (roughly ~850 syscalls saved in total).

<img width="590" height="534" alt="image" src="https://github.com/user-attachments/assets/a6171ac9-1192-4d3e-a6bf-eeda3f43af07" />

(If you are wondering why it didn't merge 12000-399 and 12400-417, it's because the 512-page batch limit was hit. That limit guards against exceeding `IOV_MAX` in the rare case that it's lower than 1024 and the entire checkpoint is a single run.)

Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>

Closes #2278
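The coalescing described above (contiguous pages merged into runs, each run capped so a single `pwritev` never exceeds `IOV_MAX`) can be sketched roughly as below. This is a minimal illustration, not the actual turso implementation: the names `WriteRun`, `coalesce`, `IOV_BATCH_LIMIT`, and the fixed `PAGE_SIZE` are all assumptions made for the example.

```rust
// Hypothetical sketch of coalescing sorted page writes into pwritev batches.
// Assumed constants, not taken from the turso source:
const IOV_BATCH_LIMIT: usize = 512; // stay safely under IOV_MAX (often 1024)
const PAGE_SIZE: u64 = 4096;

#[derive(Debug, PartialEq)]
struct WriteRun {
    start_offset: u64, // file offset of the first page in the run
    count: usize,      // contiguous pages in the run; one pwritev call per run
}

/// Collapse sorted page offsets into contiguous runs, splitting any run that
/// reaches IOV_BATCH_LIMIT so each run fits in a single pwritev call.
fn coalesce(offsets: &[u64]) -> Vec<WriteRun> {
    let mut runs: Vec<WriteRun> = Vec::new();
    for &off in offsets {
        let extends_last = match runs.last() {
            Some(run) => {
                run.count < IOV_BATCH_LIMIT
                    && off == run.start_offset + run.count as u64 * PAGE_SIZE
            }
            None => false,
        };
        if extends_last {
            runs.last_mut().unwrap().count += 1;
        } else {
            runs.push(WriteRun { start_offset: off, count: 1 });
        }
    }
    runs
}

fn main() {
    // Three contiguous pages, then a gap: 2 runs, i.e. 2 syscalls instead of 4.
    let offsets = [0, 4096, 8192, 20480];
    let runs = coalesce(&offsets);
    assert_eq!(runs.len(), 2);
    assert_eq!(runs[0], WriteRun { start_offset: 0, count: 3 });
    assert_eq!(runs[1], WriteRun { start_offset: 20480, count: 1 });
    println!("{runs:?}");
}
```

Every run with `count = N > 1` replaces N individual write submissions with one vectored call, which is where the ~850 saved syscalls in the trace come from.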