# Date

| field       | max value | capacity (2^bits) | bits |
|-------------|-----------|-------------------|------|
| bitmask     | –         | –                 | 7    |
| year        | –         | 8192              | 13   |
| month       | 12        | 16                | 4    |
| day         | 31        | 32                | 5    |
| hour        | 24        | 32                | 5    |
| minute      | 60        | 64                | 6    |
| second      | 61        | 64                | 6    |
| millisecond | 1000      | 1024              | 10   |

Total: 56 bits = 7 bytes
# Duration

| field        | max value | capacity (2^bits) | bits |
|--------------|-----------|-------------------|------|
| bitmask      | –         | –                 | 7    |
| sign         | 1         | 2                 | 1    |
| years        | –         | 4096              | 12   |
| months       | 12        | 16                | 4    |
| days         | 31        | 32                | 5    |
| hours        | 24        | 32                | 5    |
| minutes      | 60        | 64                | 6    |
| seconds      | 61        | 64                | 6    |
| milliseconds | 1000      | 1024              | 10   |

Total: 56 bits = 7 bytes
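The layouts above can be packed with plain BigInt shifts. A minimal sketch for the Date layout follows; the names and the interpretation of the 7-bit bitmask as one presence flag per field are assumptions, not a finalized design:

```typescript
// Sketch: packing the Date layout above into 56 bits with BigInt shifts.
// Treating the 7-bit bitmask as one presence flag per field is an
// assumption; here all fields are assumed present (mask = all ones).

type DateParts = {
  year: number; month: number; day: number; hour: number;
  minute: number; second: number; millisecond: number;
};

// Field order and widths, matching the table (bitmask excluded).
const WIDTHS: [keyof DateParts, number][] = [
  ["year", 13], ["month", 4], ["day", 5], ["hour", 5],
  ["minute", 6], ["second", 6], ["millisecond", 10],
];

function packDate(parts: DateParts): bigint {
  let packed = 0b1111111n; // 7-bit bitmask, all fields present
  for (const [field, width] of WIDTHS) {
    packed = (packed << BigInt(width)) | BigInt(parts[field]);
  }
  return packed; // always < 2^56, ie. fits in 7 bytes
}

function unpackDate(packed: bigint): DateParts {
  const parts = {} as DateParts;
  // Walk the fields in reverse, peeling each one off the low bits.
  for (let i = WIDTHS.length - 1; i >= 0; i--) {
    const [field, width] = WIDTHS[i];
    parts[field] = Number(packed & ((1n << BigInt(width)) - 1n));
    packed >>= BigInt(width);
  }
  return parts;
}
```

The Duration layout packs the same way, with the sign bit sitting between the bitmask and the years field.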
- It must be possible to evaluate schema migrations statelessly; that is, with zero knowledge of what data *currently* exists in the database, a sequence of migrations up to any point must *always* result in a valid schema that:
- does not allow for data to exist in the database which violates the schema's constraints (eg. missing required fields)
- allows full rollback to any previous point in history, with data loss only being permitted in that process if it is fundamentally unavoidable due to the nature of the migration (eg. rolling back the addition of a new field)
- for *any* sequence of migrate-to and rollback operations within the same set of linear migrations, continues to uphold the above two properties
- Make sure that a column default can be specified separately for new vs. migrated rows - in some cases, the user may want to initialize existing rows with a value derived from that row (eg. to emulate application insertion logic) rather than with the usual column default.
	- If both a regular and a migration default are specified: use each for its relevant purpose
- If only a migration default is specified: use that for migration, and disallow NULL values in new records
- If only a regular default is specified: use that for both cases
- If neither is specified: this is an error in a changeFields, but allowed in an addFields, and just means NULL values in new records are disallowed
- For the regular default, default functions *do not* receive the previous value; if the user wants to use this, they should specify a migration default
- A migration default only applies for *that specific migration step*, not for any migrations after it, even if the same field is affected. This needs to be specifically ensured to avoid bugs.
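The default-resolution rules above can be sketched as a small decision function; all names here (`regularDefault`, `migrationDefault`, etc.) are hypothetical, not part of any finalized DSL:

```typescript
// Sketch of the default-resolution rules for new vs. migrated rows.
type Row = Record<string, unknown>;

type FieldSpec = {
  regularDefault?: () => unknown;           // default for newly inserted rows
  migrationDefault?: (row: Row) => unknown; // backfill for existing rows, this step only
};

type Resolved = {
  backfill?: (row: Row) => unknown; // how to fill existing rows, if possible
  newRowDefault?: () => unknown;    // default for rows inserted after migration
  requireValueOnInsert: boolean;    // NULL disallowed in new records
};

function resolveDefaults(spec: FieldSpec, op: "addFields" | "changeFields"): Resolved {
  const { regularDefault, migrationDefault } = spec;
  if (!regularDefault && !migrationDefault) {
    if (op === "changeFields") {
      // Neither default in a changeFields is an error.
      throw new Error("changeFields requires a regular or migration default");
    }
    // addFields with no default: allowed; new records must provide a value.
    return { requireValueOnInsert: true };
  }
  return {
    // The migration default wins for backfill; otherwise fall back to the regular one.
    backfill: migrationDefault ?? ((_row: Row) => regularDefault!()),
    newRowDefault: regularDefault,
    // Only a migration default specified: disallow NULL in new records.
    requireValueOnInsert: regularDefault === undefined,
  };
}
```

Note that only the migration default receives the row, matching the rule that regular default functions *do not* see the previous value.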
- When applying arithmetic directly to integer-encoded decimal numbers, magnitude scaling may be needed; for example, with one decimal digit of precision (10x scaling):
	1.1 * 1.2 = 1.32 (the true result)
	11 * 12 = 132 (WRONG if read back directly: each operand was scaled by 10x, so the product is scaled by 100x, and 132 decodes as 13.2)
	(11 * 12) / 10 = 13.2, rounded to 13 (CORRECT: 13 decodes as 1.3, even though some precision is lost to conform to the storage precision)
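The rescaling step above is a one-liner; a minimal sketch, assuming a fixed scale factor of 10 (one decimal digit of storage precision):

```typescript
// Multiplying two decimals stored as scaled integers (scale = 10).
const SCALE = 10;

function mulScaled(a: number, b: number): number {
  // a and b are each scaled by SCALE, so the raw product is scaled by
  // SCALE^2; divide by SCALE once (with rounding) to return to storage scale.
  return Math.round((a * b) / SCALE);
}

// 1.1 * 1.2, stored as 11 and 12:
mulScaled(11, 12); // = 13, ie. 1.3 (true result 1.32; last digit lost)
```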
- For user-specified reversal operations in migrations, automatically do a test with some random values to detect errors?
- Make sure to version the DSL import, so that old migrations can continue using older versions of the DSL, at least until there is some kind of codemod mechanism for this.
- Should be some way to 'inherit' an instance from the base database connection, allowing for configuring things like type adapters - this would let the user choose whether to eg. define custom type adapters globally or only for a specific table or such. Need to figure out how this fits into the DSL design where queries are stateless by default. Maybe a custom filter hook that lets the user semi-declaratively specify what queries to apply custom adapters to, or so?
- unsafeForbidRollback must make rollbacks impossible even in hot reload mode; although in *some* cases there might be a default value that could be reset to, it is possible for fields to exist that absolutely require an application-provided value. Therefore, it is not consistently possible to rollback even in a controllably-unsafe manner, when no rollback operation is specified.
Query planning:
- Make a list of all 'queried fields', ie. fields which are used to filter or order
- If the first sorting criterion is also a filtering field *and* there is an index for that field, it should be selected as the first index to select from, because then we can implicitly use the order from the index
- Otherwise: apply filters, and if the remaining result set is more than __% of the full collection, and the sorting criterion has an index, reorder the result set according to that index; if not, do a regular sort on the retrieved-and-decoded record data instead
- Any descending sorts should come *before* any record-fetching filters/criteria, so that it doesn't have to reverse a full result set in memory
- Sorting criteria should be internally rearranged as-needed, to prefer sorting by indexed fields with high cardinality (ie. many different values) first and low cardinality last
- Possible optimization: if the filtered subset appears to comprise most of the table, do a sequential filtering scan of the table instead of retrieving each matched item individually? This might be more efficient for some backends. Maybe backends should be able to configure whether this is the case for them?
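The first two sort-planning rules above can be sketched as a selection function; the threshold constant stands in for the unspecified __% cutoff, and all names are hypothetical:

```typescript
// Rough sketch of the sort-plan selection rules; not a real planner.
type Plan =
  | { kind: "index-ordered-scan"; field: string } // order comes free from the index
  | { kind: "index-reorder"; field: string }      // reorder filtered set via the index
  | { kind: "in-memory-sort"; field: string };    // sort decoded records directly

const REORDER_THRESHOLD = 0.5; // placeholder for the unspecified __% cutoff

function planSort(
  sortField: string,          // first sorting criterion
  filterFields: Set<string>,  // fields used to filter
  indexedFields: Set<string>, // fields with an index
  filteredFraction: number,   // estimated |filtered| / |collection|
): Plan {
  // Rule 1: sort field is also filtered and indexed: scan that index first.
  if (filterFields.has(sortField) && indexedFields.has(sortField)) {
    return { kind: "index-ordered-scan", field: sortField };
  }
  // Rule 2: large remaining set and an index on the sort field: reorder via index.
  if (filteredFraction > REORDER_THRESHOLD && indexedFields.has(sortField)) {
    return { kind: "index-reorder", field: sortField };
  }
  // Fallback: regular sort on the retrieved-and-decoded records.
  return { kind: "in-memory-sort", field: sortField };
}
```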