If a forward/backward offset is too big, we clamp to the edges of
the file rather than failing outright. We still return the number of
bytes actually moved so callers can validate, but the stream API can
now handle out-of-range seeks more gracefully.
Ordinary test functions, except they don't call TEST_INIT or
TEST_PASSED; they're placed at the start and at the end of the test
array.
Those macros only do printing anyway, so they're not necessary here.
We might need to set up a prelude here that initialises a file in the
filesystem for testing - not only does stream_test_file need it,
but I can see later tests requiring an equivalence check between file
and string variants of a stream.
Because of the not_inlined trick, a zero-initialised SBO vector is
completely valid to use if required. Future /vec_ensure/'s will handle
it appropriately, so there's no need to initialise the vector ahead of
time like this.
While the previous method of inlining a stack-allocated array of
tests into the suite struct declaration was nice, we had to update
the size manually.
This macro lets us simply append new tests to the suite without
having to care about that. It generates a uniquely named variable for
the test array, then uses that array in the suite declaration.
Nice and easy.
lisp_free will do a shallow clean of any object, freeing its
associated memory. It won't recurse through any containers, nor will
it freak out if you give it something that is constant (symbols,
small integers, NIL, etc).
MODE=full will initialise a debug build with all logs, including test
logs. MODE=debug sets up a standard debug build with main logs but no
test logs. MODE=release optimises and strips all logs.
TEST_VERBOSE is a preprocessor definition which TEST depends on. By
default it is 0, in which case TEST simply fails if the condition is
not true. Otherwise, a full log (as done previously) is made.
By default we should test all our code for regressions. I'm always
running the examples anyway to test new features, so we should have
that recipe run afterwards.
If a test fails, that's first priority to fix.
If an example fails, time to continue working.