A token_stream being constructed on the spot has different
used/available semantics from a fully constructed one: a fully
constructed token stream uses `available` to hold the total number of
tokens and `used` as an internal iterator, while one that is still
being constructed uses the semantics of a standard darr.
Furthermore, some loops didn't divide by `sizeof(token_t)`, which led
to out-of-bounds iteration errors.
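A minimal sketch of the two modes, assuming a darr-style struct;
`token_t` is the project's token type, and the field names here are
assumptions based on the description above:

    typedef struct
    {
        token_t *data;
        size_t used;      /* constructing: bytes used; constructed:
                             iterator over tokens */
        size_t available; /* constructing: bytes allocated; constructed:
                             total number of tokens */
    } token_stream_t;

    /* Flipping from "under construction" to "fully constructed":
       available becomes a token count (note the division by
       sizeof(token_t)), and used becomes the cursor. */
    void token_stream_finish(token_stream_t *ts)
    {
        ts->available = ts->used / sizeof(token_t);
        ts->used = 0;
    }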
|
|
|
|
Also, the error now points to the correct place in the file.
|
|
|
|
We have distinct functions for the use blocks and the macro blocks,
each of which generates a wholesale new token stream via `token_copy`,
so we don't run into ownership errors around each token's internal
string.
Furthermore, `process_presults` now uses the stream index in each
presult to report errors when processing fails.
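The copying is roughly of this shape, assuming a darr-style append
(the helper name here is a stand-in for whatever the darr API actually
provides):

    /* Build the new stream out of deep copies so it owns every
       token's internal string outright. */
    for (size_t i = 0; i < n_block_tokens; ++i)
    {
        token_t copy = token_copy(block_tokens[i]);
        darr_append_bytes(&new_stream, &copy, sizeof(copy));
    }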
|
|
So when a `presult_t` is constructed, it holds the index of the point
in the token stream where it was constructed. This will be useful when
implementing an error checker in the preprocessing or result parsing
stages.
|
|
So `%USE <STRING>` is the expected call pattern, and there's an error
if there isn't a string after `%USE`.
The other two errors are file I/O errors (i.e. nonexistent files) and
errors in parsing the other file. We don't report specifics about the
other file; that's up to the user to check themselves.
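So the three cases are roughly as follows (the error and token names
here are hypothetical, not the project's actual identifiers):

    /* 1) %USE must be followed by a string token. */
    if (next.type != TOKEN_LITERAL_STRING)
        return PERR_EXPECTED_STRING;

    /* 2) The named file must be openable. */
    FILE *fp = fopen(next.str, "rb");
    if (!fp)
        return PERR_FILE_NOT_FOUND;

    /* 3) The other file must itself parse; no further detail about
       it is reported. */
    if (preprocess_file(fp, &sub_stream) != PERR_OK)
        return PERR_FILE_PARSE_ERROR;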
|
|
Forgot to increment `buffer->used`, and the `memcpy` call was just
incorrect.
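For reference, the corrected append pattern is something like this
(darr field names assumed):

    /* Append n bytes at the end of the buffer, then bump used;
       both halves are needed. */
    memcpy(buffer->data + buffer->used, src, n);
    buffer->used += n;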
|
|
This essentially just copies the internal string of the token into a
new buffer.
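Presumably something along these lines; the member names (`str`,
`str_size`) are assumptions:

    #include <stdlib.h>
    #include <string.h>

    token_t token_copy(token_t t)
    {
        /* Duplicate the internal string so the copy owns its own
           buffer, independent of the original token's lifetime. */
        char *buf = calloc(t.str_size + 1, 1);
        memcpy(buf, t.str, t.str_size);
        t.str = buf;
        return t;
    }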
|
|
|
|
Doesn't do much; invalid for most operations.
|
|
The preprocessor handles macros and macro blocks by working at the
token level, without doing any high-level parsing or instruction
construction. Essentially, every macro is recorded in a registry that
stores its name and the tokens assigned to it. Then, for every call
site, it just inserts those tokens inline, creating a new stream and
freeing the old one. Actual high-level parsing is left to `parse_next`
and `process_presults`.
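The registry might look something like this (names are assumptions,
not the actual code):

    #include <string.h>

    /* A macro: its name and the tokens assigned to it. */
    typedef struct
    {
        char *name;
        token_t *tokens;
        size_t n_tokens;
    } macro_t;

    /* Linear lookup by name; the found macro's tokens are then
       spliced inline at each call site. */
    macro_t *registry_find(macro_t *reg, size_t n, const char *name)
    {
        for (size_t i = 0; i < n; ++i)
            if (strcmp(reg[i].name, name) == 0)
                return reg + i;
        return NULL;
    }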
|
|
|
|
|
|
|
|
Lots to refactor and test
|
|
This is mostly so labels can contain digits. This won't affect number
tokens, as number lexing happens before symbol lexing.
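That is, the symbol predicate now accepts digits, roughly like so
(the exact character set is an assumption):

    #include <ctype.h>

    /* Number lexing runs first, so a token starting with a digit
       never reaches the symbol path; digits mid-symbol are fine. */
    static int is_symbol_char(char c)
    {
        return isalpha((unsigned char)c) ||
               isdigit((unsigned char)c) || c == '_';
    }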
|
|
|
|
Was used in a previous fix but not necessary anymore
|
|
Set the program structure correctly, with a header built from the
parsed global instruction.
|
|
Creates a jump address to the label designated by "global" so the
program starts at that point.
|
|
|
|
|
|
The lexer will now attempt to eat up any type or later portions of an
opcode straight away, rather than leaving everything but the root.
This means checking for a type in the parser is a direct check against
the name rather than against a dot-prefixed suffix.
The checks are also a bit stricter, causing more tokens to go straight
to symbol rather than getting checked a routine later on the parser
side.
|
|
Makes more sense; we don't need to fiddle around with strings as much
in the parser due to this!
|
|
Not necessary when you can just push the relevant word onto the stack
and then do OP_JUMP_STACK.
|
|
Essentially a `presult_t` contains one of these:
1) A label construction, which stores the label symbol into `label`
(PRES_LABEL)
2) An instruction that calls upon a label, storing the instruction in
`instruction` and the label name in `label` (PRES_LABEL_ADDRESS)
3) An instruction that uses a relative address offset, storing the
instruction in `instruction` and the wanted offset in
`relative_address` (PRES_RELATIVE_ADDRESS)
4) An instruction that requires no further processing, storing the
instruction in `instruction` (PRES_COMPLETE_INSTRUCTION)
In the processing stage, we resolve all calls by iterating over the
presults one by one while maintaining an absolute instruction address.
Pretty nice; there's a lot more machinery involved in parsing now.
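In C terms, the variants above look roughly like this; the enum and
field names come from the list, while the exact layout is an
assumption (`inst_t` standing in for the project's instruction type):

    typedef enum
    {
        PRES_LABEL,                /* 1) label construction */
        PRES_LABEL_ADDRESS,        /* 2) instruction calling a label */
        PRES_RELATIVE_ADDRESS,     /* 3) relative address offset */
        PRES_COMPLETE_INSTRUCTION, /* 4) no further processing */
    } presult_type_t;

    typedef struct
    {
        presult_type_t type;
        inst_t instruction;     /* variants 2-4 */
        char *label;            /* variants 1-2 */
        long relative_address;  /* variant 3 */
        size_t stream_index;    /* where in the token stream it was
                                   built, for error reporting */
    } presult_t;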
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This is because comparators may apply to signed types, so I need to
use the right parsing function.
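For example, with the standard strtoll/strtoull (a sketch, not the
project's actual call sites):

    #include <stdlib.h>

    /* Signed comparator operands need the signed parser... */
    long long sval = strtoll(text, NULL, 0);
    /* ...while unsigned operands keep using the unsigned one. */
    unsigned long long uval = strtoull(text, NULL, 0);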
|
|
|
|
Much simpler: uses a switch statement, which is a much faster way of
doing the parsing. Though roughly equivalent in terms of LOC, I feel
that this is more extensible.
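That is, dispatch of roughly this shape (token and function names are
illustrative):

    switch (token.type)
    {
    case TOKEN_PUSH:
        return parse_push(stream, &result);
    case TOKEN_JUMP:
        return parse_jump(stream, &result);
    /* ... one case per opcode token ... */
    default:
        return PERR_UNEXPECTED_TOKEN;
    }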
|
|
More useful tokens, in particular one for each possible opcode. This
makes parsing a simpler task to reason about, as now we're just
checking against an enum rather than doing a linear-time string check.
It makes more sense to do this at the tokeniser, as the local data
from the buffer will most likely be in the cache, the buffer being
contiguously allocated. While linear-time checks on strings will
always be slow, doing them at the parser means checking strings that
may be allocated in a variety of different places. That makes caching
a harder task; with this approach we're less likely to have cache
misses as long as the buffer stays in cache.
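So the string comparison happens once, in the tokeniser, while the
source buffer is hot; a sketch with assumed names and spellings:

    /* One string check per token, against the contiguous source
       buffer; the parser only ever sees the resulting enum. */
    if (strncmp(cursor, "push", 4) == 0)
        tok.type = TOKEN_PUSH;
    else if (strncmp(cursor, "jump", 4) == 0)
        tok.type = TOKEN_JUMP;
    else
        tok.type = TOKEN_SYMBOL;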
|
|
As `strto(ul|ll)` allow parsing hex literals of the form `0x...`, we
allow lexing of hex literals which start with `x`.
They're lexed into C-style hex literals, which work with the `strtol`
family.
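Concretely, something like this (a sketch; names and buffer size are
illustrative):

    #include <stdlib.h>
    #include <string.h>

    /* "xFF" in the source is rebuilt as the C literal "0xFF", which
       the strtol family accepts with base 0 (or base 16). */
    char lit[32] = "0";
    strncat(lit, sym, sizeof(lit) - 2); /* sym = "xFF" -> "0xFF" */
    unsigned long long n = strtoull(lit, NULL, 0);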
|
|
|
|
As it has no dependencies on `vm` specifically, and it's needed by any
vendors who wish to target the virtual machine, it makes more sense
for `inst` to be a `lib` module rather than a `vm` module.
|
|
|
|
Currently only for invalid character literals, but still a possible
problem.
|
|
Just takes the character literally as a number.
|
|
|
|
Prints useful and pretty messages when verbose is at least 1.
|
|
Pretty simple implementation. I've stopped printing the tokens because
I think the lexer is done.
|
|
Introduced some functions to parse differing types of opcodes. They
use the same a.b.c... style for namespacing or type specification on
certain opcodes. A bit hacky and not tested, but it does work.
Parse errors can be reported with an exact location using the token's
line and column.
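The a.b.c handling amounts to splitting the opcode at the first dot,
roughly (illustrative, not the actual code):

    #include <string.h>

    /* Root is everything before the first dot; the remainder is the
       namespace/type portion, e.g. "push.byte" -> "push" + "byte". */
    const char *dot = strchr(opcode, '.');
    size_t root_len = dot ? (size_t)(dot - opcode) : strlen(opcode);
    const char *suffix = dot ? dot + 1 : NULL;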
|
|
Easier to do it here than at the parser.
|
|
Accurate error reporting can be introduced using this.
|
|
Just prints instructions so far.
|