This change enables the Go adapter to embed platform-specific libraries
and extract them at runtime, eliminating the need for users to set
LD_LIBRARY_PATH or other environment variables.
- Add embedded.go with core library extraction functionality
- Update limbo_unix.go and limbo_windows.go to use embedded libraries
- Add build_lib.sh script to generate platform-specific libraries
- Update README.md with documentation for the new feature
- Add .gitignore to prevent committing binary files
- Add test coverage for Vector operations (vector(), vector_extract(),
  vector_distance_cos()) and SQLite core features
The implementation maintains backward compatibility with the traditional
library loading mechanism as a fallback. This approach is inspired by
projects like go-embed-python that use a similar technique for native
library distribution.
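For illustration, a minimal sketch of the embed-and-extract idea follows.
The directory layout, the Linux-style .so file name, and the
`extractLibrary` helper are assumptions made for this sketch, not
necessarily how embedded.go is actually written:

```go
package limbo

import (
	"embed"
	"fmt"
	"os"
	"path/filepath"
	"runtime"
)

// Platform-specific shared libraries produced by build_lib.sh are compiled
// into the Go binary from the libs/ directory.
//
//go:embed libs
var embeddedLibs embed.FS

// extractLibrary writes the library matching the current GOOS/GOARCH into a
// temporary directory and returns its path, so it can be loaded at runtime
// without LD_LIBRARY_PATH. On error the caller can fall back to the
// traditional library loading mechanism.
func extractLibrary() (string, error) {
	name := fmt.Sprintf("libs/%s_%s/lib_limbo.so", runtime.GOOS, runtime.GOARCH)
	data, err := embeddedLibs.ReadFile(name)
	if err != nil {
		return "", err
	}
	dir, err := os.MkdirTemp("", "limbo-lib-")
	if err != nil {
		return "", err
	}
	out := filepath.Join(dir, filepath.Base(name))
	if err := os.WriteFile(out, data, 0o700); err != nil {
		return "", err
	}
	return out, nil
}
```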
https://github.com/tursodatabase/limbo/issues/506
Reviewed-by: Preston Thorpe (@PThorpe92)
Closes #1434
Go identifies the 64-bit x86 architecture as amd64, while most systems report x86_64. This change maps the architecture names to ensure the built libraries are placed in the correct directories for Go's embedded loading system (e.g., libs/linux_amd64).
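As a hypothetical illustration of the mismatch (this Go map is only a
rendering of the renaming the build script performs, not part of the
script itself):

```go
// uname(1)-style machine names on the left, Go's GOARCH names on the right;
// the GOARCH name is what the libs/<GOOS>_<GOARCH> directories use.
var unameToGoArch = map[string]string{
	"x86_64":  "amd64",
	"aarch64": "arm64",
	"arm64":   "arm64", // macOS already reports arm64
}
```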
OwnedValue has become a powerhouse of madness, mainly because I decided
to do it like that when I first introduced AggContext. I decided enough
was enough and introduced a `Register` struct that contains `OwnedValue`,
`Record` and `Aggregation`; this way we don't use `OwnedValue` for
everything and make everyone's life harder.
This is the next step towards making ImmutableRecords the default
because I want to remove unnecessary allocations. Right now we clone
OwnedValues more than needed when we generate a record.
We currently have two value types, `Value` and `OwnedValue`. The
original thinking was that `Value` is the external type and `OwnedValue`
is the internal type. However, this just results in unnecessary
transformations between the types as data crosses the Limbo library boundary.
Let's just follow SQLite here and consolidate on a single value type
(where `sqlite3_value` is just an alias for the internal `Mem` type).
The way this will eventually work is that we can have a bunch of
pre-allocated `OwnedValue` objects in `ProgramState` and basically
return a reference to them all the way to the application itself, which
extracts the actual value.
Move the result row to `ProgramState` to mimic what SQLite does, where the
`Vdbe` struct has a `pResultRow` member. This makes it easier to deal with
result lifetime and, more importantly, eventually lets us lazily parse values
at the edges of the API.
This PR brings the Go database/sql driver to its first working state
and adds a Go package to demonstrate it.
The example pkg demonstrates (in its most bare/naive form at this
point; a rough sketch follows the list):
1. Open database (memory, in this case)
2. Create connection
3. Prepare statement (Create table)
4. `Exec`
5. Prepare statement (Insert), bind 3 arguments (int, string, blob)
6. `Exec`
7. Prepare statement (Select *)
8. `Columns` -> print columns
9. `Query` -> print rows
10. Close db connection
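Roughly, that flow looks like the sketch below; the driver name
("limbo") and the import path are assumptions for this sketch, not
necessarily what the package actually registers:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/tursodatabase/limbo" // hypothetical driver import path
)

func main() {
	// 1-2: open an in-memory database; database/sql creates connections lazily
	db, err := sql.Open("limbo", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close() // 10: close the db connection

	// 3-4: prepare a CREATE TABLE statement and Exec it
	create, err := db.Prepare("CREATE TABLE t (id INTEGER, name TEXT, data BLOB)")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := create.Exec(); err != nil {
		log.Fatal(err)
	}

	// 5-6: prepare an INSERT, bind three arguments (int, string, blob), Exec it
	insert, err := db.Prepare("INSERT INTO t VALUES (?, ?, ?)")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := insert.Exec(1, "hello", []byte{0xde, 0xad}); err != nil {
		log.Fatal(err)
	}

	// 7-9: prepare a SELECT *, print the columns, then query and print the rows
	rows, err := db.Query("SELECT * FROM t")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	cols, err := rows.Columns()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cols)

	for rows.Next() {
		var (
			id   int
			name string
			data []byte
		)
		if err := rows.Scan(&id, &name, &data); err != nil {
			log.Fatal(err)
		}
		fmt.Println(id, name, data)
	}
}
```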

There's still tons of work to do, but I at least wanted to get it to a
state where it's not totally broken.
I'll add some actual tests tomorrow.
Closes #796
This simple patch makes sure we can operate on a reference to the
string instead of being forced to convert it into an owned string, and
makes sure that the Arc doesn't have to be cloned (which can be expensive
on multi-core systems).
This doesn't really make a large difference in benchmarks, given how
expensive Parse::new() is.