```sql
-- This PR does effectively this transformation:
select
sum(l_extendedprice * (1 - l_discount)) as revenue
from
lineitem,
part
where
(
p_partkey = l_partkey
and p_brand = 'Brand#22'
and p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')
and l_quantity >= 8 and l_quantity <= 8 + 10
and p_size between 1 and 5
and l_shipmode in ('AIR', 'AIR REG')
and l_shipinstruct = 'DELIVER IN PERSON'
)
or
(
p_partkey = l_partkey
and p_brand = 'Brand#23'
and p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')
and l_quantity >= 10 and l_quantity <= 10 + 10
and p_size between 1 and 10
and l_shipmode in ('AIR', 'AIR REG')
and l_shipinstruct = 'DELIVER IN PERSON'
)
or
(
p_partkey = l_partkey
and p_brand = 'Brand#12'
and p_container in ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG')
and l_quantity >= 24 and l_quantity <= 24 + 10
and p_size between 1 and 15
and l_shipmode in ('AIR', 'AIR REG')
and l_shipinstruct = 'DELIVER IN PERSON'
);
-- Same query with common conjuncts (ANDs) extracted:
select
sum(l_extendedprice * (1 - l_discount)) as revenue
from
lineitem,
part
where
p_partkey = l_partkey
and l_shipmode in ('AIR', 'AIR REG')
and l_shipinstruct = 'DELIVER IN PERSON'
and (
(
p_brand = 'Brand#22'
and p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')
and l_quantity >= 8 and l_quantity <= 8 + 10
and p_size between 1 and 5
)
or
(
p_brand = 'Brand#23'
and p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')
and l_quantity >= 10 and l_quantity <= 10 + 10
and p_size between 1 and 10
)
or
(
p_brand = 'Brand#12'
and p_container in ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG')
and l_quantity >= 24 and l_quantity <= 24 + 10
and p_size between 1 and 15
)
);
```
This allows Limbo's optimizer to (1) recognize `p_partkey = l_partkey`
as an index constraint on `part`, and (2) filter out `lineitem` rows
before joining. With this optimization, Limbo completes TPC-H `19.sql`
nearly as fast as SQLite on my machine. Without it, Limbo takes forever.
This branch: `939ms`
Main: `uh, I started running it a few minutes ago and it hasn't
finished, and I don't feel like waiting I guess`
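For the curious, here is the gist of the rewrite as a minimal Rust
sketch, using hypothetical types rather than Limbo's actual optimizer
structures: treat an OR term as a list of AND'ed conjunct lists, and
hoist any conjunct that appears in every branch.
```rust
#[derive(Clone, PartialEq)]
struct Conjunct(String); // stand-in for a real predicate AST node

fn extract_common_conjuncts(or_branches: &mut Vec<Vec<Conjunct>>) -> Vec<Conjunct> {
    let Some(first) = or_branches.first().cloned() else {
        return Vec::new();
    };
    // A conjunct is common if every OR branch contains it.
    let common: Vec<Conjunct> = first
        .into_iter()
        .filter(|c| or_branches.iter().all(|b| b.contains(c)))
        .collect();
    // Hoist: remove the common conjuncts from each branch. (A real
    // implementation must treat a branch that becomes empty as TRUE,
    // which would make the whole OR redundant.)
    for branch in or_branches.iter_mut() {
        branch.retain(|c| !common.contains(c));
    }
    common
}
```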
Reviewed-by: Pere Diaz Bou <pere-altea@homail.com>
Closes #1520
Previously the Operation enum consisted of:
- Operation::Scan
- Operation::Search
- Operation::Subquery
This was always a dumb hack, because what we are really doing is an
Operation::Scan on a "virtual"/"pseudo" table (overloaded names...)
derived from a subquery appearing in the FROM clause.
Hence, refactor the relevant data structures so that the Table enum now
contains a new variant:
Table::FromClauseSubquery
and the Operation enum now consists only of Scan and Search.
```sql
SELECT * FROM (SELECT ...) sub;
-- the subquery here was previously interpreted as Operation::Subquery on a Table::Pseudo,
-- with a lot of special handling for Operation::Subquery in different code paths
-- now it's an Operation::Scan on a Table::FromClauseSubquery
```
No functional changes (intended, at least!)
Reviewed-by: Pere Diaz Bou <pere-altea@homail.com>
Closes #1529
Reviewable commit by commit. CI failures are not related.
Adds support for e.g. `select first_name, sum(distinct age),
count(distinct age), avg(distinct age) from users group by 1`
Implementation details:
- Creates an ephemeral index per distinct aggregate, and jumps over the
accumulation step if a duplicate is found (see the sketch below)
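A minimal sketch of that dedup step, with a HashSet standing in for the
ephemeral index (hypothetical names, not the actual implementation):
```rust
use std::collections::HashSet;

// One deduplicating accumulator per distinct aggregate; the real code
// uses an ephemeral index instead of a HashSet.
struct DistinctSum {
    seen: HashSet<i64>,
    sum: i64,
}

impl DistinctSum {
    fn accumulate(&mut self, value: i64) {
        // `insert` returns false for duplicates, so we jump over the
        // accumulation step exactly as described above.
        if self.seen.insert(value) {
            self.sum += value;
        }
    }
}
```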
Closes #1507
I was implementing `ALTER TABLE .. RENAME TO`, and I noticed that
`COLLATE` was necessary for it to work.
This is a relatively big PR: to properly implement `COLLATE`, I needed
to add a field to a couple of frequently emitted instructions, and a
change like that requires a lot of boilerplate.
My main reference was this page from SQLite:
https://sqlite.org/datatype3.html#collation. It gives a good description
of the precedence of collation in certain expressions.
I wrote a couple of tests that I thought caught the edge cases of
`COLLATE`, but honestly, I may have missed a few. I would appreciate
some help later writing more tests.
`Collate` basically just compares two `TEXT` values according to some
comparison function. If the values are not both `TEXT`, we fall back to
the normal comparison we are already doing. `Collate` comes up in four
main places:
- `Collate` Expression modifier
- `Binary` Expression
- `Column` Expression
- `Order By` and `Group By`
In `Binary`, `Order By`, and `Group By` expressions, the collation
sequence for the comparisons can be derived explicitly from the use of
the `COLLATE` keyword, or implicitly from a `COLLATE` definition in
`CREATE TABLE`. If neither is present, it defaults to the `Binary`
collation.
For the `Column` expression, it tries to use the collation from the
`CREATE TABLE` column definition. If not present, it defaults to the
`Binary` collation.
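For reference, a minimal sketch of the comparison itself, covering
SQLite's three built-in collating sequences (the helper name and shape
are illustrative, not the actual Limbo code):
```rust
use std::cmp::Ordering;

// SQLite's three built-in collating sequences.
enum Collation {
    Binary, // bytewise comparison (the default)
    NoCase, // ASCII case-insensitive
    RTrim,  // like Binary, but trailing spaces are ignored
}

// Collation only applies when both operands are TEXT; for anything else
// the caller falls back to the normal value comparison.
fn collate_cmp(coll: &Collation, a: &str, b: &str) -> Ordering {
    match coll {
        Collation::Binary => a.as_bytes().cmp(b.as_bytes()),
        Collation::NoCase => a
            .bytes()
            .map(|c| c.to_ascii_lowercase())
            .cmp(b.bytes().map(|c| c.to_ascii_lowercase())),
        Collation::RTrim => a
            .trim_end_matches(' ')
            .as_bytes()
            .cmp(b.trim_end_matches(' ').as_bytes()),
    }
}
```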
Lastly, there was some repetition in how the `Binary` expression was
being translated, so I removed that. As mentioned in `COMPAT.md`, I did
not implement custom collation sequences yet, as that would have
distracted from getting the core behavior right. I have some ideas for
extending the current implementation to support them via FFI, but I
think that is best served by a different PR.
Closes #1367
I still haven't found a good way to implement variadic functions; we
should have some sort of wrapper in the JS layer, but that hasn't worked
well for me so far. Once done, though, it will be easily transferable to
any function.
It should probably also be async, but as far as I can tell napi doesn't
have a straightforward way to implement async iterators.
Closes #1515
Various things to improve the execution time of long fuzz tests:
* Remove unnecessary debug_validate_cell calls
* Add SortedVec for keys in fuzz tests (sketched below)
* Validate the btree's depth every 1K inserts so the test isn't
overloaded with validations. We add a `VALIDATE_BTREE` env variable to
enable validation on every insert in case it is needed.
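A minimal sketch of what such a SortedVec could look like (illustrative,
not the exact type in the PR): keys stay ordered on insert, so
membership checks are binary searches instead of linear scans.
```rust
struct SortedVec<T: Ord>(Vec<T>);

impl<T: Ord> SortedVec<T> {
    fn insert(&mut self, value: T) {
        // binary_search returns the insertion point on a miss, so the
        // vector stays sorted.
        let idx = self.0.binary_search(&value).unwrap_or_else(|i| i);
        self.0.insert(idx, value);
    }

    fn contains(&self, value: &T) -> bool {
        self.0.binary_search(value).is_ok()
    }
}
```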
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #1521
1. `group_by_contains_all` was incorrect: it was supposed to check that
all ORDER BY columns are in GROUP BY, but it instead checked that all
GROUP BY columns are in ORDER BY, which is backwards for the intended
purpose (see the sketch below).
2. Remove the ORDER BY clause if the GROUP BY clause already sorts the
rows in the same way.
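A minimal sketch of the corrected direction of the check (hypothetical
signature, using column names as plain strings):
```rust
// ORDER BY can only be elided when every ORDER BY column is covered by
// GROUP BY; the old code accidentally tested the reverse inclusion.
fn group_by_contains_all_order_by_columns(
    order_by: &[String],
    group_by: &[String],
) -> bool {
    order_by.iter().all(|col| group_by.contains(col))
}
```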
Test failures are not related
Reviewed-by: Pere Diaz Bou <pere-altea@homail.com>
Closes #1511
We've run into trouble in multiple places because we delete terms from
the WHERE clause (e.g. when a constant condition is removed, or when a
term becomes part of an index seek key).
A simpler solution is to add a flag indicating that the term is consumed
(used), so that it is no longer translated in the main loop when WHERE
clause terms are evaluated (see the sketch below).
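Something like the following, with illustrative names rather than
Limbo's actual types:
```rust
struct Expr; // stand-in for a real expression node

struct WhereTerm {
    expr: Expr,
    // Set once the term has been used elsewhere, e.g. absorbed into an
    // index seek key, so the main loop must not evaluate it again.
    consumed: bool,
}

// Only un-consumed terms are translated into row filters.
fn translate_where_terms(terms: &[WhereTerm]) {
    for term in terms.iter().filter(|t| !t.consumed) {
        // ... emit filter bytecode for term.expr ...
        let _ = &term.expr;
    }
}
```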
note: CI failures are unrelated
Reviewed-by: Pere Diaz Bou <pere-altea@homail.com>
Closes #1477
This PR adds a new function `read_write_payload_with_offset` to support
reading and writing payload data at specific offsets, handling both
local content and overflow pages. This is a port of SQLite's
`accessPayload` function in `btree.c` and will be essential for
supporting incremental blob I/O in the coming PRs.
- Adds a state machine called `PayloadOverflowWithOffset` to make the
procedure reentrant (sketched below)
- Correctly processes both local payload data and payload stored in
overflow pages
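Very roughly, the state machine's shape could look like this (the fields
here are hypothetical, not the actual definition):
```rust
// Each call makes some progress and can be suspended while an overflow
// page is fetched, then re-entered where it left off.
enum PayloadOverflowWithOffset {
    // Copy to/from the portion of the payload stored locally in the cell.
    ProcessLocal { offset: usize, remaining: usize },
    // Follow the overflow-page chain for the rest of the payload.
    ProcessOverflow { next_page: u32, remaining: usize },
    Done,
}
```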
Testing:
- Reading and writing to a column with no overflow pages.
- Reading and writing at an offset with overflow pages (spanning 10
pages)
Reviewed-by: Pere Diaz Bou <pere-altea@homail.com>
Closes #1476
As there were many merge conflicts for the other PR, I rewrote the code
and condensed it here.
ORIGINAL PR TEXT: Provides the code to almost close
https://github.com/tursodatabase/limbo/issues/1251. The JsonSchema is
derived, but I am still not sure how to automate the distribution to
SchemaStore for autocomplete. I added some docs for anyone who wants to
see the config file description. I am still not sure how to automate
this documentation. Maybe some macro magic?
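Deriving the schema presumably looks something like this sketch,
assuming the schemars crate (the `Config` fields here are made up):
```rust
use schemars::{schema_for, JsonSchema};
use serde::Deserialize;

// Hypothetical config shape; the real struct lives in the repo.
#[derive(Deserialize, JsonSchema)]
struct Config {
    /// Doc comments become `description` fields in the generated schema.
    theme: Option<String>,
}

fn main() {
    // schema_for! produces a JSON Schema document that could be
    // published to SchemaStore for editor autocomplete.
    let schema = schema_for!(Config);
    println!("{}", serde_json::to_string_pretty(&schema).unwrap());
}
```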
Reviewed-by: Preston Thorpe (@PThorpe92)
Closes #1430
Closes #1482. I needed to change the `key_exists_in_index` function
because it zips the values from the records it is comparing; `zip` stops
at the shorter iterator, so if one of the records is empty or the
lengths differ, the `all` check could incorrectly return true.
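The pitfall in miniature (a generic Rust demonstration, not the actual
function):
```rust
fn main() {
    let a: Vec<i64> = vec![];
    let b = vec![1, 2, 3];

    // zip stops at the shorter iterator, so the comparison is vacuously
    // true for an empty record.
    let buggy = a.iter().zip(b.iter()).all(|(x, y)| x == y);

    // Checking lengths first avoids the false positive.
    let fixed = a.len() == b.len()
        && a.iter().zip(b.iter()).all(|(x, y)| x == y);

    assert!(buggy);
    assert!(!fixed);
}
```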
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #1514
Closes #1508. There were two small issues to fix:
1. We were not checking whether a unique column name declared in a
composite declaration actually exists in the IndexMap of columns. This
fixes statements like `create table t4(a, unique(b));`.
2. We forgot to add the column_name to the HashSet of columns.
```rust
Some(PrimaryKeyDefinitionType::Simple { column, .. }) => {
    let mut columns = HashSet::new();
    columns.insert(std::mem::take(column));
    // Have to also insert the current column_name we are iterating
    // over in primary_key_column_results.
    columns.insert(column_name.clone()); // <-- fix here
    primary_key_definition =
        Some(PrimaryKeyDefinitionType::Composite { columns });
}
```
The rest of the modifications are just some small simplifications for
readability and to avoid some clones.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #1512
This prevents the new WAL checkpoint tests in `sqlite3/tests/compat`
from creating a `test` table in `testing/testing.db`, which is queried
by later tests that then fail because of the extra table.
There is another issue with failing tests related to the new `count`
impl that I am in the process of fixing as well, but that will be a
separate PR.
Closes #1513