- On each interaction we assume the value is NULL, so we need to set it
for every interaction in the list; we therefore force this NULL not to
be emitted as a constant.
- Forces a copy so that IN expressions work inside an aggregation
expression. Not ideal, but it works; we should definitely invest more
in the query planner.
I found an application in the wild that expects sqlite_version() to
return a specific string (higher than 3.8...).
We had tons of those issues at Scylla, and the lesson was that you
tell your kids not to lie, but when life hits, well... you lie.
We'll add a new function, turso_version, that tells the truth.
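Usage would look something like this; a minimal sketch in Java over
JDBC, where the connection URL and driver setup are assumptions, not
part of this change:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VersionCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL; adjust for your driver setup.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:sample.db");
             Statement stmt = conn.createStatement()) {
            // sqlite_version() keeps lying for compatibility with apps
            // that parse the version string.
            try (ResultSet rs = stmt.executeQuery("SELECT sqlite_version()")) {
                rs.next();
                System.out.println("compat version: " + rs.getString(1));
            }
            // turso_version() tells the truth about the engine.
            try (ResultSet rs = stmt.executeQuery("SELECT turso_version()")) {
                rs.next();
                System.out.println("real version:   " + rs.getString(1));
            }
        }
    }
}
```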
- errors are hard to handle for some scan operations (if something goes
wrong in the middle, the whole query aborts)
- it is more flexible to return NaN and let the user handle the
situation (sketched below)
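To make the trade-off concrete: a mid-scan error can only abort the
whole statement, while NaN flows through as an ordinary value the
caller can inspect. A minimal caller-side sketch in Java over JDBC (the
query and column are hypothetical):

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class NanScan {
    // A minimal sketch: with NaN instead of a hard error, one bad value
    // no longer aborts the whole scan; the caller handles it locally.
    static double sum(Statement stmt) throws SQLException {
        double total = 0.0;
        try (ResultSet rs = stmt.executeQuery("SELECT some_math_expr FROM t")) {
            while (rs.next()) {
                double v = rs.getDouble(1);
                if (Double.isNaN(v)) {
                    continue; // skip the bad row instead of losing everything
                }
                total += v;
            }
        }
        return total;
    }
}
```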
Made a new PR based on @sivukhin's PR #2869, which had a lot of
conflicts. You can check out the PR description from there.
## The main idea is:
Before, if we had an index on `x` and had a query like `WHERE x > 100
and x < 200`, the plan would be something like:
```
- Seek to first row where x > 100
- Then, for every row, discard the row if x >= 200
```
This is highly wasteful in cases where there are a lot of rows where `x
>= 200`. Since our index is sorted on `x`, we know that once we hit the
_first_ row where `x >= 200`, we can stop iterating entirely.
So, the new plan is:
```
- Seek to first row where x > 100
- Then, iterate rows until x >= 200, and then stop
```
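To make the difference concrete, here is a minimal sketch of both plans
over a sorted array standing in for the index (Java; an illustration,
not the actual cursor code):

```java
import java.util.Arrays;

class RangeScan {
    // Sorted values standing in for an index on x.
    static final int[] INDEX = {50, 101, 150, 199, 200, 500, 1000};

    // Old plan: seek to the first x > 100, then inspect every remaining
    // row and merely discard those with x >= 200.
    static int scanOld() {
        int visited = 0;
        for (int i = seekGreaterThan(100); i < INDEX.length; i++) {
            visited++;
            if (INDEX[i] >= 200) continue; // discard, but keep iterating
            // ... emit row ...
        }
        return visited; // visits every row after the seek point
    }

    // New plan: same seek, but stop at the first x >= 200 - because the
    // index is sorted, nothing after it can match.
    static int scanNew() {
        int visited = 0;
        for (int i = seekGreaterThan(100); i < INDEX.length; i++) {
            visited++;
            if (INDEX[i] >= 200) break; // terminate the scan entirely
            // ... emit row ...
        }
        return visited;
    }

    // First index position with a value > key (for integer keys).
    static int seekGreaterThan(int key) {
        int pos = Arrays.binarySearch(INDEX, key + 1);
        return pos >= 0 ? pos : -pos - 1;
    }

    public static void main(String[] args) {
        System.out.println("old plan visited " + scanOld() + " rows"); // 6
        System.out.println("new plan visited " + scanNew() + " rows"); // 4
    }
}
```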
This also improves the situation for multi-column indexes. Imagine index
on `(x,y)` and a condition like `WHERE x = 100 and y > 100 and y < 200`.
Before, the plan was:
```
- Seek to first row where x=100 and y > 100
- Then, iterate rows while x = 100 and discard the row if y >= 200
- Stop when x > 100
```
This also suffers from a problem where if there are a lot of rows where
`x=100` and `y >= 200`, we go through those rows unnecessarily. The new
plan is:
```
- Seek to first row where x=100 and y > 100
- Then, iterate rows while x = 100 and y < 200
- Stop when either x > 100 or y >= 200
```
This prevents us from iterating rows like `x=100, y=666` unnecessarily:
because the index is sorted on `(x,y)`, once we hit any row where
`x > 100` or `x = 100, y >= 200`, we can stop.
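The multi-column termination condition is just a lexicographic
comparison on the sorted `(x,y)` key. A minimal sketch, again over
plain arrays rather than the real B-tree cursor:

```java
class CompositeRangeScan {
    // Sorted (x, y) pairs standing in for an index on (x, y).
    static final int[][] INDEX = {
        {100, 50}, {100, 101}, {100, 199}, {100, 200}, {100, 666},
        {101, 1}, {200, 5},
    };

    // WHERE x = 100 AND y > 100 AND y < 200.
    // Seek lands on the first row where x = 100 and y > 100; because the
    // index is sorted lexicographically, the scan can stop at the first
    // row where x > 100 OR (x = 100 AND y >= 200).
    static void scan() {
        for (int i = seek(); i < INDEX.length; i++) {
            int x = INDEX[i][0], y = INDEX[i][1];
            if (x > 100 || y >= 200) break; // termination condition
            System.out.println("match: x=" + x + " y=" + y);
        }
    }

    // First row where (x, y) > (100, 100) lexicographically (linear scan
    // for brevity; the real engine does a B-tree seek).
    static int seek() {
        for (int i = 0; i < INDEX.length; i++) {
            if (INDEX[i][0] > 100 || (INDEX[i][0] == 100 && INDEX[i][1] > 100)) {
                return i;
            }
        }
        return INDEX.length;
    }

    public static void main(String[] args) {
        scan(); // prints only x=100 y=101 and x=100 y=199
    }
}
```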
Closes#3644
## Summary
Implemented Calendar-based Date/Time/Timestamp getter methods in
JDBC4ResultSet to support timezone conversions.
## Changes
- Implemented `getDate(int, Calendar)` and `getDate(String, Calendar)`
- Implemented `getTime(int, Calendar)` and `getTime(String, Calendar)`
- Implemented `getTimestamp(int, Calendar)` and `getTimestamp(String,
Calendar)` (usage sketched below)
- Fixed timezone conversion logic (changed from subtraction to addition)
- Added comprehensive test cases for all implemented methods
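A minimal usage sketch for the new getters (the table and column names
are hypothetical); the Calendar argument tells the driver which
timezone to interpret the stored value in:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;
import java.util.Calendar;
import java.util.TimeZone;

class CalendarGetters {
    // A minimal sketch: interpreting the same stored value in UTC vs.
    // Asia/Seoul (UTC+9) shifts the resulting instant by nine hours.
    static void readTimestamps(Connection conn) throws Exception {
        Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        Calendar seoul = Calendar.getInstance(TimeZone.getTimeZone("Asia/Seoul"));
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT created_at FROM events")) {
            while (rs.next()) {
                Timestamp asUtc = rs.getTimestamp("created_at", utc);
                Timestamp asSeoul = rs.getTimestamp("created_at", seoul);
                System.out.println("UTC:   " + asUtc);
                System.out.println("Seoul: " + asSeoul);
            }
        }
    }
}
```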
## Test Results
- All tests passed successfully
- New tests validate timezone conversion with UTC and Seoul (UTC+9)
Reviewed-by: Kim Seon Woo (@seonWKim)
Closes#3607
Before, we just skipped evaluating `Id`, `Qualified` and
`DoublyQualified` if `referenced_tables` was `None`, leading to bugs
like #3621. Let's eagerly return `"No such column"` parse errors in
these cases instead, and carve out exceptions for cases where that
doesn't cleanly work.
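From the user's side, the change simply means the error surfaces
eagerly; a minimal sketch over JDBC (table and column names
hypothetical):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

class NoSuchColumn {
    // A minimal sketch: a query referencing an unknown column now fails
    // eagerly with a parse error instead of slipping through unresolved.
    static void demo(Connection conn) {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeQuery("SELECT no_such_col FROM t");
        } catch (SQLException e) {
            System.out.println(e.getMessage()); // expect a "no such column" error
        }
    }
}
```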
Top tip: use the `Hide whitespace` toggle when inspecting the diff of
this PR
Closes#3621
Reviewed-by: Preston Thorpe <preston@turso.tech>
Closes#3626
`Property::AllTableHaveExpectedContent` adds a lot of simple SELECT
full-table scans to check the DB state. These statements were being
counted in `InteractionStats`, and the stats are used to calculate the
remaining queries we need to generate. By not counting these simple
checks, we let the simulator create more meaningful interactions.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#3643
If we don't clear the dirty pages, we will initiate a rollback. During
the rollback, we attempt to clear the whole page cache, which then
panics because there are still dirty pages from the failed writev.
Reviewed-by: Jussi Saurio <jussi.saurio@gmail.com>
Closes#3189
Nyrkiö is piloting a GitHub Runner service, and congratulations, you are
the pilot customer! To reduce the risk somewhat, we'll introduce this as
a parallel workflow, so the existing benchmarks will continue to run on
the regular runners and won't have any discontinuity in their results
because of this. Also, since these runners are a bit more expensive, we
can manage the cost by only running them periodically. Similarly, this
allows us to fine-tune the Nyrkiö instance size, for example to use an
EBS disk with the optimal size and IOPS configuration, so you don't pay
for empty space. I'll show up in Discord to discuss.
And congratulations on the beta release, by the way! I worked hard to
get this done for your beta period; let's hope it works now.
Closes#3619