This PR adds support for partial indexes, i.e. `CREATE INDEX` with a
predicate:
```sql
CREATE UNIQUE INDEX idx_expensive ON products(sku) WHERE price > 100;
```
The PR does not yet implement support for using the partial indexes in
the optimizer.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes #3228
UNION queries, while useful on their own, are a cornerstone of recursive
CTEs.
This PR implements:
* the merge operator, required to merge both sides of a union query.
* the circuitry necessary to issue the Merge operator.
* extraction of tables mentioned in union and CTE expressions, so we can
correctly populate materialized views that contain them.
Closes #3234
The Merge operator is a stateless operator that merges two deltas.
There are two modes: Distinct, where we merge together values that
are the same, and All, where we preserve all values. We use the rowid of
the hashable row to guarantee this: in Distinct mode, the rowid is set
to 0 on both sides, so if the values are the same, they hash to the
same thing. In All mode, the rowids are different, so every row survives.
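A minimal sketch of the idea, with assumed shapes for `HashableRow` and
the deltas (the real types differ):
```rust
use std::collections::HashMap;

// Illustrative stand-ins, not the actual types.
#[derive(Clone, Hash, PartialEq, Eq)]
struct HashableRow {
    rowid: i64,
    values: Vec<String>,
}

enum MergeMode {
    Distinct, // rowids zeroed, equal values collapse
    All,      // rowids kept, every row survives
}

// Merge two deltas of (row, weight) pairs into one.
fn merge(
    mode: &MergeMode,
    left: Vec<(HashableRow, i64)>,
    right: Vec<(HashableRow, i64)>,
) -> Vec<(HashableRow, i64)> {
    let mut acc: HashMap<HashableRow, i64> = HashMap::new();
    for (mut row, weight) in left.into_iter().chain(right) {
        if matches!(mode, MergeMode::Distinct) {
            // With rowid set to 0 on both sides, identical values hash
            // to the same entry and are merged together.
            row.rowid = 0;
        }
        *acc.entry(row).or_insert(0) += weight;
    }
    acc.into_iter().collect()
}
```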
The merge operator is used for the UNION statement, which is a
cornerstone of Recursive CTEs.
The population code extracts table information from the select statement
so it can populate the materialized view. But the code, as written
today, is naive. It doesn't capture table information correctly if there
is more than one select statement (such as in the case of a union query).
We currently only support column / literal comparisons in the filter
operator. But with JOINs, comparisons are usually against two columns.
Do the work to support it.
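A hedged sketch of the extension (illustrative names, not the real
filter types):
```rust
// Illustrative only: a comparison whose operands may each be a column,
// addressed by index into the operator's column set, or a literal.
#[derive(Clone)]
enum Value {
    Integer(i64),
    Text(String),
}

enum Operand {
    Column(usize),  // index into the incoming column set
    Literal(Value),
}

enum CompareOp {
    Eq,
    Gt,
    Lt,
}

struct Comparison {
    left: Operand,
    op: CompareOp,
    right: Operand, // previously, this side could only be a literal
}
```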
Unlike the other operators, project works just fine with ambiguous
columns, because it works with compiled expressions. We don't need to
patch it, but let's make sure it keeps working by writing a test.
We already did the same for the AggregateOperator: for joins
you can have the same column name in many tables. And passing schema
information to the operator is a layering violation (the operator may be
operating on the result of a previous node, and at that point there is
no more "schema"). Therefore we pass indexes into the column set the
operator has.
The FilterOperator has a complication: we are using it to generate the
SQL for the populate statement, and that needs column names. However,
we should *not* be using the FilterOperator for that; it is a
relic from the time when we had operator information directly inside
the IncrementalView.
To enable moving the FilterOperator to index-based, we rework that code.
For joins, we'll need to populate many tables anyway, so we take the
time to do that work here.
This PR improves the DBSP circuit so that it handles the JOIN operator.
The JOIN operator exposes a weakness of our current model: we usually
pass a list of columns between operators, and find the right column by
name when needed.
But with JOINs, many tables can have the same columns. The operators
will then find the wrong column (same name, different table), and
produce incorrect results.
To fix this, we must do two things:
1) Change the Logical Plan. It needs to track table provenance.
2) Fix the aggregators: they need to operate on indexes, not names.
For the aggregators, note that table provenance is the wrong
abstraction. The aggregator is likely working with a logical table that
is the result of previous nodes in the circuit. So we just need to be
able to tell it which index in the column array it should use.
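As a hedged sketch (the types and fields here are assumptions, not the
real operator):
```rust
// Illustrative only: names are resolved to positions at plan time, so the
// operator addresses columns purely by index and ambiguity cannot bite.
#[derive(Clone)]
enum Value {
    Integer(i64),
    Text(String),
}

struct AggregateOperator {
    group_by: Vec<usize>, // indexes into the incoming column array
    input: usize,         // index of the column being aggregated
}

fn group_key(op: &AggregateOperator, row: &[Value]) -> Vec<Value> {
    // No name lookup at runtime.
    op.group_by.iter().map(|&i| row[i].clone()).collect()
}
```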
The operator.rs file was so huge that we didn't even notice there was a
test block in the middle of the file, testing things that had long since
moved to dbsp.rs (the HashableRow). Move the tests there now.
The join operator is also a stateful operator. It keeps the input deltas
stored in the state, for both the left and right branches of the join.
JOINs extract a join key: the values that were used in the join's
equality predicate. That key is now our zset_id, and it points to a
collection of rows.
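A minimal sketch of that state, with hypothetical names:
```rust
use std::collections::HashMap;

// Illustrative stand-ins for the real types.
type JoinKey = Vec<u8>;          // encoded values from the equality predicate
type Rows = Vec<(Vec<u8>, i64)>; // (encoded row, weight) pairs

// Each branch of the join accumulates its input delta, keyed by the join
// key, which serves as the zset_id.
struct JoinState {
    left: HashMap<JoinKey, Rows>,
    right: HashMap<JoinKey, Rows>,
}
```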
Our code for views needs to extract the list of columns used in the view.
We currently extract only from "the base table", but once we have joins,
we need a more complex structure that keeps the mapping of
(tables, columns).
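Something like the following shape, as an assumption rather than the
actual structure:
```rust
// Illustrative only: the view keeps a (table, column) pair per output
// column, instead of a flat list of column names.
struct ViewColumn {
    table: String,  // the table the column comes from
    column: String, // the column name within that table
}

type ViewColumns = Vec<ViewColumn>;
```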
This actually affects both views and materialized views: for views,
queries with joins work just fine, because views are just aliases for
a query. But the list of columns returned by pragma table_info on the
view is incorrect. We add a test to make sure it is fixed.
For materialized views, we add extensive tests to make sure that the
columns are extracted correctly.
Nothing fancy yet; assuming you merge this, I'll do this one next:
```
warning: function pointer comparisons do not produce meaningful results since their addresses are not guaranteed to be unique
--> core/types.rs:403:5
|
398 | #[derive(Debug, Clone, PartialEq)]
| --------- in this derive macro expansion
...
402 | pub step_fn: StepFunction,
| ^^^^^^^^^^^^^^^^^^^^^^^^^
403 | pub finalize_fn: FinalizeFunction,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
= note: the address of the same function can vary between different codegen units
= note: furthermore, different functions could have the same address after being merged together
= note: for more information visit <https://doc.rust-lang.org/nightly/core/ptr/fn.fn_addr_eq.html>
```
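The warning's note points at `std::ptr::fn_addr_eq`; a hedged sketch of
what that fix could look like (field names taken from the warning, the
struct name and signatures assumed):
```rust
// Illustrative only: compare the function pointers explicitly instead of
// deriving PartialEq over them, which is what triggers the warning.
type StepFunction = fn(i64) -> i64;     // signature assumed
type FinalizeFunction = fn(i64) -> i64; // signature assumed

#[derive(Debug, Clone)]
pub struct AggContext {
    pub step_fn: StepFunction,
    pub finalize_fn: FinalizeFunction,
}

impl PartialEq for AggContext {
    fn eq(&self, other: &Self) -> bool {
        std::ptr::fn_addr_eq(self.step_fn, other.step_fn)
            && std::ptr::fn_addr_eq(self.finalize_fn, other.finalize_fn)
    }
}
```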
And fix a test failure that I resolved in Python (specific to macOS
hosts). Basically this PR is me dipping my toe in the water to see how
open you are to contribs!
Closes #3211
This PR improves the `prepare` benchmark by 3-6% without slowing down
others. After this PR we don't have to store `InsnFunction` in
`Program` and `ProgramBuilder` anymore, because `to_function` will
return the result without matching.
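For reference, a hedged sketch of the shape of such a change (not the
PR's actual code): derive the handler from the instruction on demand,
e.g. via a static table, instead of storing it per instruction:
```rust
// Illustrative only: the handler is indexed by the instruction's
// discriminant, so `to_function` needs no `match` and nothing has to be
// stored alongside the instruction.
struct State;
type InsnFunction = fn(&mut State);

fn exec_add(_: &mut State) {}
fn exec_halt(_: &mut State) {}

#[derive(Clone, Copy)]
#[repr(u8)]
enum Insn {
    Add = 0,
    Halt = 1,
}

static HANDLERS: [InsnFunction; 2] = [exec_add, exec_halt];

impl Insn {
    fn to_function(self) -> InsnFunction {
        HANDLERS[self as u8 as usize]
    }
}
```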
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes #3098
We have not implemented them before because they require the raw
elements to be kept. It is easy to see why in the following example:
```
current_min = 3;
insert(2) => current_min = 2 // can be done without state
delete(2) => needs to look at the state to determine new min!
```
The aggregator state was a very simple key-value structure. To
accommodate min/max, we will make it into a more complex table, where
we can encode a more complex structure.
The key insight is that we can use a primary key composed of:
1) storage_id
2) zset_id,
3) element
The storage_id and zset_id are our previous key, except they are now
exploded to support a larger range of storage_id. With more bits
available in the storage_id, we can encode information about which
column we are storing. For aggregations over multiple columns, we will
need to keep a different list of values for min/max!
The element is just the values of the columns.
Because this is a primary key, the data will be sorted in the btree.
We can then just do a prefix search in the first two components of
the key and easily find the min/max when needed.
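A hedged sketch of the key shape (illustrative names and widths, not the
real schema):
```rust
// Illustrative only: because (storage_id, zset_id, element) is the primary
// key, the btree keeps entries sorted, and MIN/MAX reduces to a prefix scan
// over the first two components.
struct StateKey {
    storage_id: u64,   // operator id plus which column is being tracked
    zset_id: u64,      // the previous key, e.g. the group's identity
    element: Vec<u8>,  // the raw column values themselves
}

struct StateEntry {
    key: StateKey,
    weight: i64, // the usual ZSet weight
}
```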
This new format is also adequate for joins. Joins will just have
a new storage_id which encodes two "columns" (left side, right side).
And also change the schema of the main table. I have come to see the
current key-value schema as inadequate for non-aggregate operators.
Calculating Min/Max, for example, doesn't fit in this schema because
we have to be able to track existing values and index them.
Another alternative is to keep one table per operator type, but this
quickly leads to an explosion of tables.
This is a collection of fixes for materialized views ahead of adding
support for JOINs.
It is mostly issues with how we assume there is a single table with a
single delta, when we actually have to send more than one.
Those are things that are just objectively wrong, so I am sending them
separately to make the JOIN PR smaller.
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes #3009
A DeltaSet is a collection of Deltas, one per table.
We'll need that for joins. The populate step for now will still generate
a single set. That will be our next step to fix.
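A minimal sketch, with assumed shapes:
```rust
use std::collections::HashMap;

// Illustrative stand-ins for the real types.
type Delta = Vec<(Vec<u8>, i64)>; // (encoded row, weight) pairs

// One Delta per table mentioned in the view's statement.
struct DeltaSet {
    deltas: HashMap<String, Delta>, // keyed by table name
}
```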
Ahead of the implementation of JOINs, we need to evolve the
IncrementalView, which currently only accepts a single base table,
to keep a list of tables mentioned in the statement.
We are validating that the weights on the materialized view table are
-1, 0, and 1. This is only true for the aggregator operator. For DBSP
in general, any number will do.
Our algorithm, however, would already have deleted anything with weight
<= 0 from the BTree, so we don't expect those here.
We have code written for BTree (ZSet) persistence in both compiler.rs
and operator.rs, because there are minor differences between them. With
joins coming, it is time to unify this code.
Indexes with the naming scheme "sqlite_autoindex_<tblname>_<number>"
are automatically created when a table is created with UNIQUE or
PRIMARY KEY definitions.
These indexes must map to the table definition SQL in definition order,
i.e. sqlite_autoindex_foo_1 must be the first instance of UNIQUE or
PRIMARY KEY, and so on.
This commit fixes our autoindex creation / parsing so that this invariant
is upheld.