This is just the bare minimum I needed to convince myself that this
approach will work. The only views we support are slices of the
main table, e.g. `CREATE VIEW v AS SELECT * FROM t WHERE x > 0`: no
aggregations, no joins, no projections.
- `DROP VIEW` is implemented.
- View population is implemented.
- Deletes, inserts, and updates are implemented.

Much like indexes before, a flag must be passed to enable views. This
makes the feature easier to test:
```
$ cargo run -- --experimental-indexes
Limbo v0.0.22
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database
limbo> CREATE TABLE t(x);
limbo> CREATE INDEX t_idx ON t(x);
limbo> DROP INDEX t_idx;
```
OwnedValue has become a powerhouse of madness, mainly because of how I
set it up when I first introduced AggContext. I decided enough was
enough and introduced a `Register` struct that contains `OwnedValue`,
`Record`, and `Aggregation`; this way we don't use `OwnedValue` for
everything and make everyone's life harder.
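For illustration, a minimal sketch of what such a register can look
like. The types here are placeholder stand-ins and the variant names
are assumptions, not Limbo's actual definitions; it is sketched as an
enum since a register holds exactly one of the three at a time:

```
// Placeholder stand-ins for Limbo's real types, so the sketch compiles.
struct OwnedValue;  // a plain SQL value: Integer, Float, Text, Blob, Null, ...
struct Record;      // a serialized row image
struct AggContext;  // in-flight aggregate state (SUM, COUNT, ...)

// The point: records and aggregates stop masquerading as OwnedValue
// variants and get their own register kinds.
enum Register {
    Value(OwnedValue),
    Record(Record),
    Aggregate(AggContext),
}

impl Register {
    // Call sites that expect a plain value can now say so explicitly
    // instead of defensively matching every OwnedValue variant.
    fn as_value(&self) -> &OwnedValue {
        match self {
            Register::Value(v) => v,
            _ => panic!("register does not hold a plain value"),
        }
    }
}
```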
This is the next step towards making ImmutableRecords the default,
because I want to remove unnecessary allocations. Right now, when we
generate a record, we clone OwnedValues more than we need to.
We currently have two value types, `Value` and `OwnedValue`. The
original thinking was that `Value` is the external type and
`OwnedValue` is the internal type. However, this just results in
unnecessary transformations between the types as data crosses the Limbo
library boundary.
Let's just follow SQLite here and consolidate on a single value type
(in SQLite, `sqlite3_value` is just an alias for the internal `Mem`
type).
The way this will eventually work is that we can have a bunch of
pre-allocated `OwnedValue` objects in `ProgramState` and basically
return references to them all the way up to the application itself,
which extracts the actual value.
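A rough sketch of that eventual flow, with assumed names and shapes
(none of this is Limbo's current API):

```
// Stand-in for the single consolidated value type.
#[derive(Default, Clone)]
struct OwnedValue;

// ProgramState pre-allocates its registers once, and the API returns
// references into them instead of converting to a separate external
// `Value` type at the library boundary.
struct ProgramState {
    registers: Vec<OwnedValue>,
}

impl ProgramState {
    fn new(n_registers: usize) -> Self {
        Self { registers: vec![OwnedValue::default(); n_registers] }
    }

    // The application reads a value by reference; no clone, no
    // Value <-> OwnedValue transformation on the way out.
    fn register(&self, i: usize) -> &OwnedValue {
        &self.registers[i]
    }
}
```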
Move the result row into `ProgramState` to mimic what SQLite does,
where the `Vdbe` struct has a `pResultRow` member. This makes it easier
to deal with the result's lifetime, but more importantly, it eventually
lets us lazily parse values at the edges of the API.
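A minimal sketch of the shape this takes; the field and method names
here are assumptions for illustration:

```
struct OwnedValue; // stand-in for the value type

// Mirroring SQLite's Vdbe.pResultRow: the statement owns the current
// result row, so the row's lifetime is tied to the statement itself
// rather than to whatever a single step() call returned.
struct ProgramState {
    result_row: Option<Vec<OwnedValue>>,
}

impl ProgramState {
    // The ResultRow opcode stores the row here...
    fn set_result_row(&mut self, row: Vec<OwnedValue>) {
        self.result_row = Some(row);
    }

    // ...and the caller borrows it, much like sqlite3_column_*() reads
    // out of pResultRow. Lazy parsing can later happen behind this
    // borrow, at the edge of the API.
    fn result_row(&self) -> Option<&[OwnedValue]> {
        self.result_row.as_deref()
    }
}
```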
This PR brings the Go database/sql driver to its first working state
and adds a Go package to demonstrate it.
The example package demonstrates (in its most bare/naive form at this
point):
1. Open database (memory, in this case)
2. Create connection
3. Prepare statement (Create table)
4. `Exec`
5. Prepare statement (Insert, bind 3 arguments: int, string, blob)
6. `Exec`
7. Prepare statement (Select *)
8. `Columns` -> print columns
9. `Query` -> print rows
10. Close db connection

There's still tons of work to do, but I at least wanted to get it to a
state where it's not totally broken.
I'll add some actual tests tomorrow.
Closes #796
This simple patch makes sure we can operate on a reference to the
string instead of being forced to convert it into an owned `String`,
and makes sure the Arc doesn't have to be cloned (which can be
expensive on multi-core systems).
This doesn't really make a large difference in benchmarks, given how
expensive Parse::new() is.
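A minimal sketch of the pattern under assumed names (not Limbo's actual
signatures): take `&str` and borrow through the `Arc` rather than
accepting owned values.

```
use std::sync::Arc;

struct Schema {
    tables: Vec<String>,
}

// Hypothetical function for illustration: taking &str and &Schema means
// the caller neither allocates a fresh String nor clones the Arc (an
// atomic refcount bump that can be expensive on multi-core machines).
fn has_table(schema: &Schema, table_name: &str) -> bool {
    schema.tables.iter().any(|t| t.as_str() == table_name)
}

fn main() {
    let schema = Arc::new(Schema { tables: vec!["t".to_string()] });
    // Borrow through the Arc via deref coercion: no Arc::clone, and the
    // string literal is passed as-is with no to_string().
    assert!(has_table(&schema, "t"));
}
```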