This PR implements a complete JSONB parser and serializer as current PR
draft looks stale.
Sorry for huge PR.
I've choose a recursive parsing approach because:
1. It's simpler to understand and maintain
2. It follows SQLite's implementation pattern, ensuring compatibility
3. It naturally maps to JSON's hierarchical structure
The implementation includes comprehensive test coverage for standard
JSON features and JSON5 extensions. All test cases pass successfully,
handling edge cases like nested structures, escape sequences, and
various number formats.
While the code is ready for review, I believe it would benefit from fuzz
testing in the future to identify any edge cases not covered by the
current tests.
Ready for review, proposals and feedback.
Closes#1114
This PR implements a `VFS` module for our extension library, allowing
extensions to be written that can introduce different I/O back-ends.
EDIT: there is an included plain external/sync vfs example for testing,
as mentioned in #996 they can be combined after they are both merged and
we can keep 1 extension crate just for testing that features can be
added to, without making a new extension just to test stuff.
This PR also adds the `.vfslist` dot command, and replaces the `--io`
CLI argument with `--vfs` to match sqlite.
In order to support building vfs modules at compile time, and to then
support opening a brand new db file using a staticly built-in extension
module, a new method was created `open_with_vfs` that will load any vfs
modules before a `Database` is created, and uses that `IO` to create the
initial file, and returns it: `Result<(Arc<dyn IO>, Arc<Database>)>`. in
keeping with the API of core.
When #1039 is merged, the vfs module can be specified in a query
parameter.
Closes#960
Fuzz testing is great for finding bugs, but until we fix the bugs,
failing CI runs out of the blue for unrelated PRs is not very
productive. Hopefully we can enable this soon again, but until then,
let's not fail the test suite all the time randomly.
There are two bugs in #1085.
1. `find_free_cell` accesses non-existent free blocks and returns their
size to `allocate_space`. This is out of range access error. The fix is
to add a loop termination condition that stops it when we hit the end of
the blocks
2. This bug is caused by `find_free_cell` some how swallowing the blocks
with size `4 bytes`. So `compute_free_space` consistently undercounts by
`4 bytes`. I've refactored that part of the code to make sure 4 sized
block are not deleted.
I've also added a unit test which proves these fixes work and also added
a function called `debug_print_freelist` which prints all free blocks of
a page.
For now I've silenced the `overflow_page` tests.
Fixes#1085Closes#1111
When I added frame reading support I thought, okay, who cares about the page id of this page it we read it from a frame because we don't need it to compute the offset to read from the file in this case. Fuck me, because it was needed in case we read `page 1` from WAL because it has a differnt `offset`.
Previously any block that has a size 4 is deleted resulting in the issue of computed free space
is less than 4 bytes when compared to expected free space.
This PR adds support for `DROP TABLE` and addresses issue
https://github.com/tursodatabase/limbo/issues/894
It depends on https://github.com/tursodatabase/limbo/pull/785 being
merged in because it requires the implementation of `free_page`.
EDIT: The PR above has been merged.
It adds the following:
* an implementation for the `DropTable` AST instruction via a method
called `translate_drop_table`
* a couple of new instructions - `Destroy` and `DropTable`. The former
is to modify physical b-tree pages and the latter is to modify in-memory
structures like the schema hash table.
* `btree_destroy` on `BTreeCursor` to walk the tree of pages for this
table and place it in free list.
* state machine traversal for both `btree_destroy` and
`clear_overflow_pages` to ensure performant, correct code.
* unit & tcl tests
* modifies the `Null` instruction to follow SQLite semantics and accept
a second register. It will set all registers in this range to null. This
is required for `DROP TABLE`.
The screenshots below have a comparison of the bytecodes generated via
SQLite & Limbo.
Limbo has the same instruction set except for the subroutines which
involve opening an ephemeral table, copying over the triggers from the
`sqlite_schema` table and then re-inserting them back into the
`sqlite_schema` table.
This is because `OpenEphemeral` is still a WIP and is being tracked at
https://github.com/tursodatabase/limbo/pull/768


Reviewed-by: Pere Diaz Bou <pere-altea@homail.com>
Closes#897