This continues the work on `printf` support from the issue
https://github.com/tursodatabase/turso/issues/885.
This PR adds support for `%i` operands in the printf function. An
example of a command that works now and didn't before:
```sql
SELECT PRINTF("Decimals: %d %i", 200, 300);
```
This is just an initial, very simple contribution to get my feet wet and
learn the basics of the project and how contributions here work. I
intend to do something more meaningful in future contributions, probably
finishing support for the missing `printf` functionality.
Closes#2615
Closes#2612
Two commits:
## Simplify ORDER BY sorter column remapping
If an ORDER BY column exactly matches a result column in the SELECT,
the insertion of the result column into the ORDER BY sorter can be
skipped, because it's already necessarily inserted as a sorting column.
For this reason we have a mapping to know what index a given result
column has in the ORDER BY sorter.
This commit makes that mapping much simpler.
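For example, in a hypothetical query where the sort key is also a result column:
```sql
-- `name` is both a result column and the ORDER BY key, so it is
-- inserted into the sorter only once, as a sorting column; the mapping
-- records which sorter index that result column lives at.
SELECT name FROM users ORDER BY name;
```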
## Fix DISTINCT with ORDER BY
We had a bug where we checked for duplicates in the DISTINCT index
using both the result columns and any ORDER BY columns not present in
the DISTINCT clause.
This is wrong, so fix it by only using the result columns for the
dedupe check.
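A hypothetical query of the affected shape:
```sql
-- Before the fix, two rows sharing `a` but differing in `b` could both
-- pass the duplicate check, because `b` (an ORDER BY column absent from
-- the DISTINCT result columns) was included in the comparison.
SELECT DISTINCT a FROM t ORDER BY b;
```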
Closes#2614
Closes#2588
SQLite internally implements `.clone` by doing something like piping
`.dump` into a new connection to a database attached to the file you
want. This PR implements that by adding an `ApplyWriter` that implements
`std::fmt::Write`, and refactors our current `.dump` plumbing to work
with any `Write` interface, so we can `.dump` to stdout or `.dump` to a
new connection, therefore cloning the database.
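A minimal sketch of the idea, with a stand-in trait instead of the project's real connection type (the actual `ApplyWriter` internals and the connection API may differ):
```rust
use std::fmt::{self, Write};

// Stand-in for the real connection type; the project's execute()
// signature may differ.
trait Execute {
    fn execute(&self, sql: &str) -> Result<(), ()>;
}

// Buffers `.dump` output and applies each complete statement to a
// connection instead of printing it.
struct ApplyWriter<'a, C: Execute> {
    conn: &'a C,
    buf: String,
}

impl<C: Execute> Write for ApplyWriter<'_, C> {
    fn write_str(&mut self, s: &str) -> fmt::Result {
        self.buf.push_str(s);
        // Naive statement splitting for illustration: flush each
        // `;`-terminated line to the connection as it completes.
        while let Some(pos) = self.buf.find(";\n") {
            let stmt: String = self.buf.drain(..=pos + 1).collect();
            self.conn.execute(stmt.trim()).map_err(|_| fmt::Error)?;
        }
        Ok(())
    }
}
```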
Reviewed-by: Nikita Sivukhin (@sivukhin)
Closes#2590
Brings #1127 back to life, except better, because this doesn't add
Tokio as a dependency of the extension lib just for tests.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2418
As @avinassh pointed out, if we have an error in a `writev` call, it
could go unnoticed and the checkpoint could proceed as expected despite
not writing the expected total amount.
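A hedged sketch of the check (the function name and error handling are illustrative, and it uses std's vectored-write API rather than the project's I/O layer):
```rust
use std::io::{self, IoSlice, Write};

// Fail loudly on a short vectored write instead of letting the
// checkpoint proceed with fewer bytes on disk than intended.
fn write_all_vectored(out: &mut impl Write, bufs: &[IoSlice<'_>]) -> io::Result<()> {
    let expected: usize = bufs.iter().map(|b| b.len()).sum();
    let written = out.write_vectored(bufs)?; // may be a short write
    if written != expected {
        return Err(io::Error::new(
            io::ErrorKind::WriteZero,
            format!("short writev: wrote {written} of {expected} bytes"),
        ));
    }
    Ok(())
}
```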
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2601
## Problem
There are several problems with our current statically allocated
`BufferPool`.
1. You cannot open two databases in the same process with different page
sizes, because the `BufferPool`'s `Arena`s will be locked forever into
the page size of the first database. This is the case regardless of
whether the two `Database`s are open at the same time, or if the first
is closed before the second is opened.
2. It is impossible to even write Rust tests for different page sizes
because of this, assuming the test uses a single process.
## Solution
Make `Database` own `BufferPool` instead of it being statically
allocated, so this problem goes away.
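A rough sketch of the ownership change (field and constructor names are assumptions, not the real API):
```rust
// Each Database owns its own pool, so two databases in one process can
// use different page sizes without fighting over a global static.
struct BufferPool {
    page_size: usize,
    // ... arenas sized for `page_size`
}

struct Database {
    pool: BufferPool, // per-database, no longer a process-wide static
}

impl Database {
    fn open(page_size: usize) -> Self {
        Self {
            pool: BufferPool { page_size },
        }
    }
}
```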
Note that I didn't touch the still statically-allocated
`TEMP_BUFFER_CACHE`, because it should continue to work regardless of
this change. It should only be a problem if the user has two or more
databases with different page sizes open simultaneously, because
`TEMP_BUFFER_CACHE` will only support one pool of a given page size at a
time, so the rest of the allocations will go through the global
allocator instead.
## Notes
I extracted this change out from #2569, because I didn't want it to be
smuggled in without being reviewed as an individual piece.
Reviewed-by: Avinash Sajjanshetty (@avinassh)
Closes#2596
Implements the unlikely(X) function. Removes runtime implementations of
likely(), unlikely() and likelihood(), replacing them with panics if
they reach the VDBE.
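For example (table and column are illustrative), `unlikely(X)` simply returns `X` and acts as a planner hint:
```sql
-- The hint is resolved at compile time, so no opcode for unlikely()
-- should ever reach the VDBE at runtime.
SELECT * FROM t WHERE unlikely(t.flag = 1);
```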
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2559
Now that we actually implemented the statement parsing around views,
implementing normal SQLite views is relatively trivial, as they are just
an alias to a query.
We'll implement them now to get them out of the way, and then I'll go
back to DBSP.
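For example (names are illustrative):
```sql
-- A view is just a stored alias for its defining query.
CREATE VIEW active_users AS
SELECT id, name FROM users WHERE active = 1;

-- Querying the view expands to the underlying SELECT.
SELECT name FROM active_users;
```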
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2591
Sequential consistency (`SeqCst`) is very rarely actually needed; we
can safely use Acquire/Release for loads and stores, and some of these
atomics aren't guarding anything and can use Relaxed.
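A generic illustration of the pattern (not the project's actual atomics):
```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};

static READY: AtomicBool = AtomicBool::new(false);
static DATA: AtomicU64 = AtomicU64::new(0);
static COUNTER: AtomicU64 = AtomicU64::new(0);

fn publish(value: u64) {
    DATA.store(value, Ordering::Relaxed);
    // Release pairs with the Acquire load in consume() to publish DATA;
    // SeqCst would add a global ordering nothing here relies on.
    READY.store(true, Ordering::Release);
}

fn consume() -> Option<u64> {
    if READY.load(Ordering::Acquire) {
        Some(DATA.load(Ordering::Relaxed))
    } else {
        None
    }
}

fn bump_stat() {
    // A plain counter guards no other memory, so Relaxed suffices.
    COUNTER.fetch_add(1, Ordering::Relaxed);
}
```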
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#2548
The op_insert function was incorrectly trying to capture an "old record"
for fresh INSERT operations when a table had dependent materialized
views. This caused a "Cannot delete: no current row" error because the
cursor wasn't positioned on any row for new inserts.
The issue was introduced in commit f38333b3 which refactored the state
machine for incremental view handling but didn't properly distinguish
between:
- Fresh INSERT operations (no old record exists)
- UPDATE operations without rowid change (old record should be captured)
- UPDATE operations with rowid change (already handled by DELETE)
This fix checks if cursor.rowid() returns a value before attempting to
capture the old record. If no row exists (fresh INSERT), we correctly
set old_record to None instead of erroring out.
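A hedged sketch of the check (`Cursor` and `Record` are stand-ins for the real types):
```rust
struct Record;

struct Cursor {
    current_rowid: Option<i64>,
}

impl Cursor {
    fn rowid(&self) -> Option<i64> {
        self.current_rowid
    }
    fn record(&self) -> Record {
        Record
    }
}

// Only capture an old record when the cursor is positioned on a row
// (UPDATE without rowid change); a fresh INSERT has no current row,
// so old_record is None instead of an error.
fn capture_old_record(cursor: &Cursor) -> Option<Record> {
    cursor.rowid().map(|_| cursor.record())
}
```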
I am also including tests to make sure this doesn't break. The reason I
didn't include tests earlier is that I didn't know it was possible to
run the tests under a flag; here, I am just adding the flag to the
execution script.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes#2579