Memory operations used operands to encode positional/size arguments,
with stack variants for cases where the user wants to compute those
arguments programmatically. In most large-scale cases I don't see the
non-stack variants being used; many cases require using the stack for
additions and subtractions to compute values such as indexes or
sizes. Therefore, it's better to be stack-first.
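As a rough illustration of the two encodings (the opcode and struct
names here are my own, not necessarily the AVM's):

    #include <stdint.h>

    /* Hypothetical sketch: the same memory operation in its operand
       form and its stack form. */
    typedef enum
    {
      OP_MALLOC,       /* size is encoded as an operand in the instruction */
      OP_MALLOC_STACK, /* size is popped off the data stack at runtime */
    } opcode_t;

    typedef struct
    {
      opcode_t opcode;
      uint64_t operand; /* ignored by the stack variant */
    } inst_t;

    /* Stack-first means a computed size like `index * elem_size` needs
       no special encoding: push both, multiply, then OP_MALLOC_STACK. */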
One counterpoint is inline optimisation of code at runtime: if a
compile-time-known object is pushed then immediately used in an
operation, we can instead encode the value directly into an
operand-based instruction, which speeds up execution because it's
slower to pop the value off the stack than to have it available as
part of the instruction.
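A minimal sketch of that folding pass, reusing the hypothetical names
from above (the real optimiser, if ever written, may look nothing
like this):

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative opcodes and instruction layout, as before. */
    typedef enum { OP_NOOP, OP_PUSH, OP_MALLOC, OP_MALLOC_STACK } opcode_t;
    typedef struct { opcode_t opcode; uint64_t operand; } inst_t;

    /* Fold "push a known constant, then use it via the stack" into a
       single operand-based instruction, saving a pop at runtime. */
    void fold_push_then_use(inst_t *program, size_t len)
    {
      for (size_t i = 0; i + 1 < len; ++i)
        if (program[i].opcode == OP_PUSH &&
            program[i + 1].opcode == OP_MALLOC_STACK)
        {
          program[i + 1].opcode  = OP_MALLOC;
          program[i + 1].operand = program[i].operand;
          program[i].opcode      = OP_NOOP; /* or compact the program */
        }
    }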
OP_HALT = 1 now. This commit also adjusts the error checking in
inst_read_bytecode.
The main reasoning behind this is what happens when other platforms
or applications target the AVM: whenever a new opcode is added, the
actual binary value of OP_HALT changes (as a result of how C enums
work).
Say your application targets commit alpha of AVM. OP_HALT is, say,
98. In commit beta, AVM is updated with a new opcode so OP_HALT is
changed to 99 (due to the new opcode being placed before OP_HALT). If
your application builds a binary for AVM version alpha and AVM version
beta is used instead, OP_HALT will be interpreted as another
instruction, which can lead to undefined behaviour.
This can be hard to debug, so here I've made the decision to try not
to place new opcodes in between old ones; new ones will always be
placed *before* NUMBER_OF_OPCODES.
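In enum form the convention looks roughly like this (OP_NOOP at 0 is
my assumption; only the shape matters):

    typedef enum
    {
      OP_NOOP = 0,
      OP_HALT = 1, /* pinned early; its binary encoding no longer shifts */

      /* ...all other existing opcodes, in their established order... */

      /* New opcodes are appended here, just before the sentinel, so
         bytecode built against an older AVM keeps its meaning. */
      NUMBER_OF_OPCODES,
    } opcode_t;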
Instead of using a linked list, which is incredibly fragmented, a
vector keeps all the page pointers together, so in theory we should
have fewer cache misses when deleting pages.
It does introduce the issue of fragmentation: if we allocate and then
delete many times, a lot of the heap vector will be empty, so
traversal will pass over a ton of useless slots.
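A sketch of the layout and the traversal cost it trades for (field
names are assumptions, not the actual heap implementation):

    #include <stddef.h>

    typedef struct
    {
      void **pages;    /* page pointers, contiguous in memory */
      size_t count;    /* slots in use; freed slots may be NULL */
      size_t capacity; /* allocated slots */
    } heap_t;

    /* After many allocate/delete cycles the vector accumulates NULL
       holes, so a traversal wades through useless slots. */
    size_t heap_live_pages(const heap_t *heap)
    {
      size_t live = 0;
      for (size_t i = 0; i < heap->count; ++i)
        if (heap->pages[i])
          ++live;
      return live;
    }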
Copied the code from Stack Overflow without thinking about it. The
first byte in little-endian order should always be the LSB, so I
construct a more contrived example (0xFFFF0000) which makes it easier
to detect which byte the machine considers first. If that byte is 0
then the LSB is the first byte, hence little endian; otherwise it's
big endian.
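The fixed check, roughly (a sketch of the approach described, not
necessarily the exact code):

    #include <stdint.h>

    /* Inspect the first byte of 0xFFFF0000: a little-endian machine
       stores the LSB first (0x00), a big-endian machine the MSB (0xFF). */
    int is_little_endian(void)
    {
      uint32_t probe = 0xFFFF0000;
      return *(const unsigned char *)&probe == 0;
    }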
On a greater note: Don't never copy no code from stack overflow, bro.
I went up there at 11 o'clock last night trynna get me some code.
Bro, I copied that shit, woke up, my motherfucking LITTLE_ENDIAN
detection don't work. Explain, bro.
While using unions as a safe way to access the underlying bits helped
with understanding, this shift-based mechanism actually makes more
sense at a glance, particularly by utilising WORD_NTH_BYTE.
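My guess at the shape of that macro (the real WORD_NTH_BYTE may
differ in name or width):

    /* Extract the Nth byte of a word by shifting and masking: no
       union or pointer punning needed, and the intent is visible at
       a glance. */
    #define WORD_NTH_BYTE(WORD, N) (((WORD) >> ((N) * 8)) & 0xFF)

    /* e.g. WORD_NTH_BYTE((uint32_t)0xFFFF0000, 0) == 0x00
            WORD_NTH_BYTE((uint32_t)0xFFFF0000, 3) == 0xFF */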
In particular, __LITTLE_ENDIAN__ was not a functioning macro.
Instead, I implemented a version by hand (copied from IBM) that
actually figures out if the machine is little endian or not.
Thank you unit testing!
No longer relying on darr_t or anything other than the C runtime and
aliases. This means it should be *even easier* to target this via FFI
from other languages without having to initialise my custom-made
structures! Furthermore, I've removed any form of allocation in the
library so FFI callers don't need to manage memory in any way.
Instead we rely on the caller allocating the correct amount of memory
for the functions to work, with basic error handling if that doesn't
happen.
In the case of inst_read_bytecode, error reporting occurs through the
function's integer return value. If the integer is positive, it is
the number of bytes read from the buffer. If negative, it flags a
possible error, which is a member of read_err_t.
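In signature form, roughly like the following (the parameter list and
the read_err_t members shown are placeholders of mine):

    #include <stddef.h>

    typedef struct { int opcode; unsigned long operand; } inst_t; /* stand-in */

    typedef enum
    {
      READ_ERR_INVALID_OPCODE   = -1, /* placeholder members */
      READ_ERR_NOT_ENOUGH_SPACE = -2,
    } read_err_t;

    /* Positive return: bytes consumed.  Negative return: a read_err_t. */
    int inst_read_bytecode(const unsigned char *buffer, size_t size,
                           inst_t *ret);

    /* Caller-side usage: walk a buffer one instruction at a time. */
    long read_all(const unsigned char *buf, size_t size, inst_t *out,
                  size_t max_insts)
    {
      size_t offset = 0, count = 0;
      while (offset < size && count < max_insts)
      {
        int n = inst_read_bytecode(buf + offset, size - offset, out + count);
        if (n < 0)
          return n; /* propagate the error member */
        offset += (size_t)n;
        ++count;
      }
      return (long)count;
    }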
prog_read_bytecode has been split into two functions: prog_read_header
and prog_read_instructions. prog_read_instructions works under the
assumption that the program's header has been filled, e.g. via
prog_read_header. prog_read_header returns 0 if there's not enough
space in the buffer or if the start_address is greater than the count.
prog_read_instructions returns a custom structure which contains a
byte position as well as an error enum, allowing for finer error
reporting.
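Put together, the split looks something like this (struct and member
names beyond those mentioned above are guesses):

    #include <stddef.h>

    typedef struct prog_t prog_t; /* details elided */

    typedef enum
    {
      PREAD_OK = 0,               /* placeholder members */
      PREAD_ERR_NOT_ENOUGH_SPACE,
    } pread_err_t;

    typedef struct
    {
      size_t      bytes_read; /* byte position reached in the buffer */
      pread_err_t error;      /* finer-grained error reporting */
    } pread_res_t;

    /* Returns 0 if the buffer is too small or start_address > count. */
    int prog_read_header(prog_t *program, const unsigned char *buf,
                         size_t size);

    /* Assumes the header has been filled, e.g. by prog_read_header. */
    pread_res_t prog_read_instructions(prog_t *program,
                                       const unsigned char *buf, size_t size);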
In the case of inst_write_bytecode, under the assumption that the
caller allocated the correct amount of memory, there is no need for
error reporting.
Due to reordering I need two macros for checking whether an opcode is
of a given type. If the type is signed then the upper bound must be
OP_<type>_LONG, whereas if it is unsigned then the upper bound must
be OP_<type>_WORD.
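Something along these lines (the macro names and the BYTE lower bound
are mine; the upper bounds follow the rule above):

    /* Variants are laid out contiguously per type, so a range check
       suffices.  Unsigned variants run BYTE..WORD, signed ones
       BYTE..LONG. */
    #define OPCODE_IS_TYPE_UNSIGNED(OPCODE, TYPE) \
      ((OPCODE) >= OP_##TYPE##_BYTE && (OPCODE) <= OP_##TYPE##_WORD)

    #define OPCODE_IS_TYPE_SIGNED(OPCODE, TYPE) \
      ((OPCODE) >= OP_##TYPE##_BYTE && (OPCODE) <= OP_##TYPE##_LONG)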
Moved all opcodes that use unsigned types before the signed types AND
ordered signed types into BYTE, CHAR, HWORD, INT, WORD, LONG. This is
not only logically consistent but also looks prettier.
This simple fix means the routine for OP_POP no longer requires an
additional dispatch step on top of the conditional, thanks to
OPCODE_DATA_TYPE, which uses data_type_t as a map from an opcode's
base type to a specific type.
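Roughly the idea (the macro body and enum members are my guess at how
the contiguous layout gets exploited):

    /* With each opcode's typed variants laid out in the same order as
       data_type_t, the data type falls out of simple arithmetic
       instead of a second switch. */
    typedef enum { DATA_BYTE, DATA_HWORD, DATA_WORD } data_type_t; /* assumed */

    #define OPCODE_DATA_TYPE(OPCODE, OP_BASE) \
      ((data_type_t)((OPCODE) - (OP_BASE##_BYTE)))

    /* In the OP_POP routine, e.g.:
         data_type_t type = OPCODE_DATA_TYPE(op, OP_POP);
       -- one conditional, no further dispatch. */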
This "header" is now embedded directly into the struct. The semantic
of a header never really matters in the actual runtime anyway, it's
only for bytecode (de)serialising.
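For illustration, the shape of the change (field names are
assumptions based on the reader described earlier):

    #include <stddef.h>

    typedef struct inst_t inst_t; /* elided */

    typedef struct
    {
      size_t start_address; /* fields used only when (de)serialising */
      size_t count;
    } prog_header_t;

    typedef struct
    {
      prog_header_t header; /* embedded directly, no separate object */
      inst_t       *instructions;
    } prog_t;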