|
Now I just have to figure out how that's going to map to assembly. My initial idea is to embed the charset bitmap directly into the generated assembly as data bytes. I'm not sure about the consequences and performance implications of doing it that way, but it seems like a natural approach, since it mimics what the vm currently does.
The buffer is not a pointer, it is the buffer itself. As you explained, the LPeg compiler allocates the correct space in the opcode array. If buff were declared with its full size, every instruction would need those additional 32 bytes...
That part I understood, though when I first looked at the definition I was briefly confused by the `buff[1]` declaration. Since this is a union, `sizeof(Instruction)` is going to be *at least* `sizeof(int)`, plus any alignment padding the compiler adds; on every platform that matters today, that means at least 4 bytes. It made me wonder for a moment why buff isn't declared as something more like `byte buff[sizeof(int)]`. So even though buff is declared with a single element, in practice there is always room behind it for at least 3 more bytes, because the union's size is driven by its larger members.
On an unrelated side note, I noticed that LPeg defines `BITSPERCHAR` as 8 in one of its headers instead of using the standard `CHAR_BIT` from `<limits.h>`, which I found a bit unusual.