We currently have two value types, `Value` and `OwnedValue`. The
original thinking was that `Value` is the external type and `OwnedValue` is
the internal type. However, this just results in unnecessary transformations
between the types as data crosses the Limbo library boundary.
Let's just follow SQLite here and consolidate on a single value type
(where `sqlite3_value` is just an alias for the internal `Mem` type).
The way this will eventually work is that we can have a bunch of
pre-allocated `OwnedValue` objects in `ProgramState` and basically
return a reference to them all the way to the application itself, which
extracts the actual value.
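For illustration only, a minimal sketch of that shape (the names and fields here are placeholders, not Limbo's actual API): a single value enum pre-allocated in the program state and handed back to the caller by reference.

```rust
// Sketch only: a single value type whose instances live in pre-allocated
// registers on the program state; callers borrow them instead of receiving
// freshly converted copies.
#[allow(dead_code)]
#[derive(Debug, Clone)]
enum Value {
    Null,
    Integer(i64),
    Float(f64),
    Text(String),
    Blob(Vec<u8>),
}

struct ProgramState {
    // Pre-allocated slots; opcodes write into them, the API borrows from them.
    registers: Vec<Value>,
}

impl ProgramState {
    fn new(num_registers: usize) -> Self {
        Self {
            registers: vec![Value::Null; num_registers],
        }
    }

    // The application receives a reference and extracts the actual value itself.
    fn register(&self, idx: usize) -> &Value {
        &self.registers[idx]
    }
}

fn main() {
    let mut state = ProgramState::new(4);
    state.registers[0] = Value::Integer(42);
    state.registers[1] = Value::Text("hello".to_string());
    println!("{:?} {:?}", state.register(0), state.register(1));
}
```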
Move the result row to `ProgramState` to mimic what SQLite does, where the
`Vdbe` struct has a `pResultRow` member. This makes it easier to deal with
result lifetimes and, more importantly, will eventually let us lazily parse
values at the edges of the API.
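A minimal sketch of what that could look like, assuming a `result_row` field on the program state (names are illustrative, not the actual Limbo definitions):

```rust
// Sketch only: the result row is owned by the program state (like
// Vdbe.pResultRow), so the caller borrows it instead of taking a copy.
#[derive(Debug, Clone)]
enum Value {
    Integer(i64),
    Text(String),
}

#[derive(Default)]
struct ProgramState {
    // Filled in when a ResultRow-style opcode produces a row.
    result_row: Option<Vec<Value>>,
}

impl ProgramState {
    fn set_result_row(&mut self, row: Vec<Value>) {
        self.result_row = Some(row);
    }

    // Borrow tied to the state's lifetime; parsing/conversion can happen
    // lazily at this edge instead of when the row is produced.
    fn result_row(&self) -> Option<&[Value]> {
        self.result_row.as_deref()
    }
}

fn main() {
    let mut state = ProgramState::default();
    state.set_result_row(vec![Value::Integer(1), Value::Text("limbo".into())]);
    println!("{:?}", state.result_row());
}
```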
This PR brings the Go database/sql driver to its first working state
and adds a Go package to demonstrate it.
The example package demonstrates (in its most bare/naive form at this
point; a rough sketch follows the list below):
1. Open database (memory, in this case)
2. Create connection
3. Prepare statement (Create table)
4. `Exec`
5. Prepare statement (Insert, bind 3 arguments: int, string, blob)
6. `Exec`
7. Prepare statement (Select *)
8. `Columns` -> print columns
9. `Query` -> print rows
10. Close db connection
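The sketch below walks through that flow with the standard `database/sql` API. The import path, the registered driver name (`"limbo"`), and the `":memory:"` DSN are assumptions and may not match the actual driver; also, `database/sql` only exposes column names on `*sql.Rows`, so `Columns` is read after running the query.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	// Hypothetical import path for the Limbo driver; the real module path
	// and registered driver name may differ.
	_ "github.com/tursodatabase/limbo"
)

func main() {
	// 1-2. Open an in-memory database; database/sql creates the actual
	// driver connection lazily on first use.
	db, err := sql.Open("limbo", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close() // 10. close the database on exit

	// 3-4. Prepare and exec a CREATE TABLE statement.
	create, err := db.Prepare("CREATE TABLE t (id INTEGER, name TEXT, data BLOB)")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := create.Exec(); err != nil {
		log.Fatal(err)
	}

	// 5-6. Prepare an INSERT and bind three arguments: int, string, blob.
	insert, err := db.Prepare("INSERT INTO t VALUES (?, ?, ?)")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := insert.Exec(1, "hello", []byte{0xde, 0xad, 0xbe, 0xef}); err != nil {
		log.Fatal(err)
	}

	// 7-9. Prepare a SELECT, run it, print the column names, then the rows.
	sel, err := db.Prepare("SELECT * FROM t")
	if err != nil {
		log.Fatal(err)
	}
	rows, err := sel.Query()
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	cols, err := rows.Columns()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cols)

	for rows.Next() {
		var (
			id   int
			name string
			data []byte
		)
		if err := rows.Scan(&id, &name, &data); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id, name, data)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```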

Still tons of work to do, but I at least wanted to get it to a state
where it's not totally broken.
I'll add some actual tests tomorrow.
Closes #796
This simple patch makes sure we can operate on a reference to the
string instead of being forced to convert it into an owned string, and makes
sure that the `Arc` doesn't have to be cloned (which can be expensive on
multi-core systems).
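For illustration, a hedged sketch of the pattern (not the actual Limbo signatures): accepting `&str` lets the caller keep its `Arc` untouched, whereas demanding an owned value forces either an `Arc` clone (an atomic refcount bump that can contend across cores) or a fresh `String` allocation.

```rust
use std::sync::Arc;

// Before: the callee demands ownership, so the caller has to clone the Arc.
fn takes_owned(sql: Arc<String>) -> usize {
    sql.len()
}

// After: the callee only needs to read, so a borrow is enough.
fn takes_ref(sql: &str) -> usize {
    sql.len()
}

fn main() {
    let sql: Arc<String> = Arc::new("SELECT 1".to_string());

    // Clones the Arc: the atomic refcount increment can contend across cores.
    let n1 = takes_owned(Arc::clone(&sql));

    // No clone, no refcount traffic: just a borrow of the same buffer.
    let n2 = takes_ref(sql.as_str());

    assert_eq!(n1, n2);
}
```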
This doesn't really make a large difference in benchmarks, given how
expensive `Parse::new()` is.