Previously we implemented update as a simple `Delete` + `Insert`
procedure, which seemed fine at the time but wasn't. `Delete` can
trigger a balance and a post-balance `seek`, which leaves the cursor
pointing at an invalid page that `Insert` then tries to insert into.
We solve this by removing `Delete` from the execution plan and relying
on `Insert` to overwrite the existing cell when the rowid matches the
one being inserted.
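As a toy illustration of the behavior `Insert` is relied on to provide (a hypothetical sketch using a `BTreeMap`, not Limbo's actual btree or cursor code):

```rust
use std::collections::BTreeMap;

// Toy model only: an update becomes a single Insert that overwrites the
// cell when the rowid already exists, instead of Delete + Insert, which
// could rebalance the tree and invalidate the cursor between the two steps.
struct Table {
    cells: BTreeMap<i64, Vec<u8>>, // rowid -> record
}

impl Table {
    fn insert(&mut self, rowid: i64, record: Vec<u8>) {
        // BTreeMap::insert replaces the existing value for the same key,
        // mirroring how Insert overwrites a cell with a matching rowid.
        self.cells.insert(rowid, record);
    }
}

fn main() {
    let mut t = Table { cells: BTreeMap::new() };
    t.insert(1, b"old".to_vec());
    t.insert(1, b"new".to_vec()); // the update path: same rowid, cell overwritten
    assert_eq!(t.cells[&1], b"new".to_vec());
}
```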
Previously, with the `index_experimental` feature enabled, the query in
the added test would enter an infinite loop. This happened because
`label_grouping_agg_step` pointed to a constant argument that was moved
to the end of the program. As a result, the aggregation loop would jump
to the constant, then return to the start of the main loop, rewind the
index, and re-enter the aggregation loop—causing it to repeat
indefinitely.
Fixes DELETE not emitting conditional jumps at all if the associated
WhereTerm is a constant, e.g.
```sql
limbo> create table t(x);
limbo> explain DELETE FROM t WHERE 5-5;
addr opcode            p1   p2   p3   p4            p5 comment
---- ----------------- ---- ---- ---- ------------- -- -------
0    Init              0    7    0                  0  Start at 7
1    OpenWrite         0    2    0                  0  root=2; t
2    Rewind            0    6    0                  0  Rewind table t
3    RowId             0    1    0                  0  r[1]=t.rowid
4    Delete            0    0    0                  0
5    Next              0    3    0                  0
6    Halt              0    0    0                  0
7    Transaction       0    1    0                  0  write=true
8    Goto              0    1    0                  0
```
I was adding more functionality to the simulator in a branch of mine and
caught this error with DELETE, so I'm upstreaming the fix here. As we do
with Update, I added the translation step for the `WhereTerm`s of the query.
Edit: Closes #1732. Closes #1733. Closes #1734. Closes #1735. Closes #1736.
Closes #1738. Closes #1739. Closes #1740.
Edit: Also pushes constant where-term translation to `init_loop` for
Update and Select.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #1746
The `RowData` opcode is required to implement #1575.
I haven't found an ideal way to test this PR independently, but I
verified its functionality while working on #1575 (to be committed soon),
and it works as expected.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #1756
This PR has two parts:
1. The first commit refactors how information about which registers
should be populated in the aggregation loop is calculated and
propagated. This simplification revealed a bug, which is addressed as
part of the same commit (see the included test).
2. The second commit fixes incorrect behavior for queries where complex
expressions include both aggregate and non-aggregate components. For
example, the following query previously produced incorrect results:
```sql
SELECT
  CASE WHEN c0 != 'x' THEN group_concat(c1, ',') ELSE 'x' END
FROM t0
GROUP BY c0;
```
In such cases, non-aggregate columns like `c0` were not available during
the result construction for each group, leading to incorrect evaluation.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #1780
It is easy to chalk this fuzzer issue up to erratic floating-point
behaviour, but that is not the case here.
Currently, `exec_math_log` calculates logarithms with arbitrary bases
using the change-of-base formula `log_a(b) ~= ln(b) / ln(a)`. This is an
approximation that loses floating-point precision when dividing the
results of the two natural logarithms.
By using the specialized versions of the log functions (`log2` &
`log10`), we can avoid this loss of precision.
SQLite also uses these specialized log functions when possible, so it
doesn't hurt to do the same thing when aiming for parity.
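A minimal sketch of the idea (the function below is illustrative, not Limbo's actual `exec_math_log` implementation):

```rust
// Illustrative only: prefer the specialized log functions when the base is
// 2 or 10, and fall back to the change-of-base formula for other bases.
fn log_with_base(base: f64, x: f64) -> f64 {
    if base == 2.0 {
        x.log2()
    } else if base == 10.0 {
        x.log10()
    } else {
        // log_a(b) = ln(b) / ln(a); dividing two natural logs loses precision.
        x.ln() / base.ln()
    }
}

fn main() {
    // On typical platforms the naive formula prints 2.9999999999999996,
    // while the specialized path returns exactly 3.
    println!("naive:       {}", 1000f64.ln() / 10f64.ln());
    println!("specialized: {}", log_with_base(10.0, 1000.0));
}
```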
Previously, queries like:
```sql
SELECT
  CASE WHEN c0 != 'x' THEN group_concat(c1, ',') ELSE 'x' END
FROM t0
GROUP BY c0;
```
would return incorrect results because c0 was not copied during the
aggregation loop into a register accessible to the logic processing the
grouped results (e.g., the CASE WHEN expression in this example).
The same issue applied to expressions in the HAVING and ORDER BY clauses.
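To illustrate the requirement (a toy model in Rust, not the actual VDBE register handling): the non-aggregate value for each group has to be carried alongside the aggregate accumulator so it is still available when the per-group result expression is evaluated.

```rust
use std::collections::BTreeMap;

fn main() {
    // Toy rows for a hypothetical t0(c0, c1).
    let rows = [("a", "1"), ("a", "2"), ("x", "3")];

    // Per group: keep a copy of the non-aggregate column c0 next to the
    // group_concat accumulator, so both are available later.
    let mut groups: BTreeMap<&str, (&str, String)> = BTreeMap::new();
    for (c0, c1) in rows {
        let (_, acc) = groups.entry(c0).or_insert((c0, String::new()));
        if !acc.is_empty() {
            acc.push(',');
        }
        acc.push_str(c1);
    }

    // CASE WHEN c0 != 'x' THEN group_concat(c1, ',') ELSE 'x' END
    for (c0, acc) in groups.values() {
        let result = if *c0 != "x" { acc.as_str() } else { "x" };
        println!("{result}"); // prints "1,2" then "x"
    }
}
```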
Previously, the logic for collecting non-aggregate columns was duplicated
across multiple locations and implemented inconsistently. This caused a
bug that was revealed by the refactoring in this commit (see the added
test).
Currently, indexes are the bulk of the problem with `UPDATE` and
`DELETE`. While we work on fixing those, it makes sense to disable
indexing, since it is not yet stable. We want to make everything else
stable before we continue with indexing.
When encoding a `Vec<u8>` (a vector of bytes), a lossy conversion from
`Vec<u8>` to `String` occurred. The lossy conversion caused an incorrect
hex value to be displayed.
Avoid the lossy conversion and let the `hex` crate do its thing.
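A small sketch of the difference (variable names are hypothetical; assumes the `hex` crate as a dependency):

```rust
fn main() {
    // A blob containing bytes that are not valid UTF-8.
    let blob: Vec<u8> = vec![0xff, 0x00, 0xab];

    // Lossy path: invalid bytes are replaced with U+FFFD before encoding,
    // so the displayed hex no longer matches the original blob.
    let lossy = hex::encode(String::from_utf8_lossy(&blob).as_bytes());

    // Encode the raw bytes directly instead.
    let correct = hex::encode(&blob);

    println!("lossy:   {lossy}");   // efbfbd00efbfbd
    println!("correct: {correct}"); // ff00ab
}
```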
- The `Update` query doesn't update `n_changes`. Let's make it work.
- Add `InsertFlags` to carry meta information related to insert operations.
- For update queries, add an `UPDATE` flag.
- Currently, an update query executes `Insn::Delete` and `Insn::Insert`
internally, which increases `n_change` by 2. So, for update queries,
let's skip increasing `n_change` for the `Insn::Insert` (see the sketch
below).
https://github.com/tursodatabase/limbo/issues/1681
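A rough sketch of the intent (the names below are illustrative, not necessarily the exact `InsertFlags` API):

```rust
// Illustrative only.
#[derive(Clone, Copy, Default)]
struct InsertFlags {
    /// Set when the insert is emitted as part of an UPDATE statement.
    update: bool,
}

fn apply_insert(flags: InsertFlags, n_change: &mut i64) {
    // ... perform the actual insert ...

    // The update already counts the change via its Insn::Delete, so skip
    // the increment for the internal Insn::Insert to avoid counting twice.
    if !flags.update {
        *n_change += 1;
    }
}

fn main() {
    let mut n_change = 0;
    apply_insert(InsertFlags::default(), &mut n_change);       // plain INSERT
    apply_insert(InsertFlags { update: true }, &mut n_change); // insert half of an update
    assert_eq!(n_change, 1);
}
```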
Reviewed-by: Pere Diaz Bou <pere-altea@homail.com>
Closes #1683
This PR adds support for the `IntegrityCk` instruction, which performs an
integrity check on the contents of a single table. In the next PR I will
try to implement the rest of the integrity check, where we verify that
indexes contain the correct amount of data, and some more.
Closes #1719
This is more in line with SQLite:
```sql
sqlite> insert into t values (randomblob(1024*1024 * 6));
Parse error: no such table: t
```
Reviewed-by: Preston Thorpe (@PThorpe92)
Closes #1744