I've decided to split the project into 2 repositories: the assembler
and the runtime. The runtime will contain both the executable and
lib/ while the assembler will have the runtime as a git submodule and
use it to build. I think this is a clean solution, a lot cleaner than
having both in one project, where the Makefile would have to expand
massively.
Instead of %const(<name>) ... %end, it will now be %const <name>
... %end, i.e. the first symbol after %const will be considered the
name of the constant, similarly to %use.
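So a definition that used to be written as

    %const(ANSWER) ... %end

is now written as

    %const ANSWER ... %end

where ANSWER is just an illustrative name.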
The preprocess_* functions are now privately contained within the
implementation file, as helpers for the outer preprocessor function.
Furthermore, I've simplified the API of the preprocess_* functions by
making them only return pp_err_t and store their results in a vector
parameter taken by reference.
We leave the parameter tokens alone, treating it as constant, while
the parameter vec_out is used to hold the new stream of tokens. This
gives the caller a before-and-after view of the token stream and
reduces the worry of double frees.
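A rough sketch of the resulting shape (the helper name and the type
members here are assumptions; the pattern is the point):

    #include <string>
    #include <vector>

    // Toy stand-ins for the real types.
    struct token_t { std::string literal; };
    struct pp_err_t { /* error details; default state means "no error" */ };

    // Read-only input stream in, rewritten stream appended to vec_out,
    // problems reported only through the return value.
    pp_err_t preprocess_use_blocks(const std::vector<token_t> &tokens,
                                   std::vector<token_t> &vec_out)
    {
      for (const auto &tok : tokens)
        vec_out.push_back(tok); // the real helper would expand %use here
      return pp_err_t{};
    }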
Another great thing about C++: the ability to tell it how to print
structures the way I want. In C it's either:
1) Write a function to print the structure out (preferably to a file
pointer)
2) Write a function to return a string (allocated on the heap) which
represents it
Neither is fun to write, whereas in C++ it's much easier to write
something like the following.
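A minimal illustration (the token type here is a toy stand-in, not the
real one):

    #include <iostream>
    #include <string>

    struct token_t
    {
      std::string literal;
      int line, column;
    };

    // Teach ostreams how to print a token; works for std::cout,
    // std::cerr, file streams, string streams, etc.
    std::ostream &operator<<(std::ostream &os, const token_t &t)
    {
      return os << t.literal << " (" << t.line << ":" << t.column << ")";
    }

    int main()
    {
      token_t t{"%const", 1, 1};
      std::cout << t << "\n";
      return 0;
    }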
This removes the problem of possibly expensive copies occurring due to
working with tokens produced from the lexer (copies that C++ just...
does): now we hold pointers, so a copy is just a pointer copy and
therefore cheap.
I want expensive stuff to be done by me and for a reason: I want to
be holding the shotgun.
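Roughly the difference (again with a toy token type, and a hypothetical
lexed vector standing in for the lexer's output):

    #include <string>
    #include <vector>

    struct token_t { std::string literal; };

    // The lexer's tokens stay owned in one place; later passes only
    // shuffle pointer-sized handles around instead of copying tokens.
    std::vector<const token_t *> make_views(const std::vector<token_t> &lexed)
    {
      std::vector<const token_t *> views;
      views.reserve(lexed.size());
      for (const auto &tok : lexed)
        views.push_back(&tok);
      return views;
    }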
This C++ rewrite allows me to rewrite the actual API of the system.
In particular, I'm no longer restricting myself to just using enums and
then figuring out a way to get proper error logging later down the
line (through tracking tokens in the buffer internally, for example).
Instead I can now design error structures which hold a reference to the
token they occurred on, as well as a possible lexical error (for the
FILE_LEXICAL_ERROR case, which occurs due to the ~%USE~ macro). This
means it's a lot easier to write error logging at the top level.
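Something along these lines (member names and the lexer error type are
assumptions; only pp_err_t and FILE_LEXICAL_ERROR come from the notes
above):

    #include <optional>
    #include <string>

    // Toy stand-ins.
    struct token_t { std::string literal; int line, column; };
    struct lerr_t  { std::string message; };  // hypothetical lexer error type

    enum class pp_err_type_t
    {
      OK,
      FILE_LEXICAL_ERROR,  // lexing a file pulled in via %use failed
      // ...other cases...
    };

    struct pp_err_t
    {
      pp_err_type_t type{pp_err_type_t::OK};
      const token_t *reference{nullptr};   // token the error occurred on
      std::optional<lerr_t> lexical;       // only set for FILE_LEXICAL_ERROR
    };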
I've decided to split the module parsing into two modules: one for the
preprocessing stage, which only deals with tokens, and one for the
parsing stage, which generates bytecode.
One thing I've realised is that even methods such as this require
error tracking. I won't implement it in the tokenise method, as it's
not related to consuming the string per se; instead it will live in the
main method.