Progress

Those of you who have followed #moarvm or github closely may already know, but this week I've finally checked in code that calculates 2 + 2 = 4 and returns that value to its caller. To be very specific, I can make a frame that runs the following operations:


const_i64_16 r0, 2
const_i64_16 r1, 2
add_i r2, r1, r0
return_i r2
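
For readers wondering what it means to JIT-compile such a frame: the core trick is to write machine code into a block of executable memory and then call it like any other function. The following self-contained C sketch illustrates that trick for this exact computation on POSIX x64 systems; it is not MoarVM's code (MoarVM uses DynASM to emit its machine code), just a toy demonstration of the idea:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x64 machine code equivalent to the frame above:
           mov eax, 2; add eax, 2; ret */
        unsigned char code[] = {
            0xb8, 0x02, 0x00, 0x00, 0x00,   /* mov eax, 2 */
            0x83, 0xc0, 0x02,               /* add eax, 2 */
            0xc3                            /* ret */
        };
        /* Allocate memory we are allowed to execute, and copy the code in. */
        void *mem = mmap(NULL, sizeof(code), PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) return 1;
        memcpy(mem, code, sizeof(code));
        /* Jump into the generated code by calling it as a function. */
        int (*fn)(void) = (int (*)(void))mem;
        printf("2 + 2 = %d\n", fn());       /* prints: 2 + 2 = 4 */
        return 0;
    }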

As a proof of concept, this frame is a breakthrough, and it shows that the strategy we've chosen can pay off. I didn't quite manage it without the help of FROGGS, jnthn, nwc10, timotimo and others, but we're finally there. I hope. (I'll still have to see about Windows x64 support.) The next thing to do is cleanup and extension. Some objectives for the following week are:
  • Cleanup. The JIT compiler still dumps stuff to stderr for my debugging purposes, but we shouldn't really have that. I've tried moving all output to the spesh log, but I can hardly find the data in there, so I think I'll make a separate JIT log file instead. Similarly, it should be possible to specify the file for the JIT compiler's machine code dump - if any. And I should add padding to the dump, so that more than one block can be dumped.
  • Adding operations to compile. MoarVM supports no fewer than 638 opcodes, and I support only 4 of them so far. That is about 0.63% of all opcodes :-). Obviously, in those terms I have a long way to go. jnthn suggested that the specialized sp_getarg opcodes are a good way to progress, and I agree - they'll allow us to pass actual arguments to a compiled routine.
  • Translate the spesh graph out of SSA form into the linear form that we use for the JIT 'graph' (which is really a labeled linked list so far - see the sketch after this list).
  • Compile more basic blocks and add support for branching. This is probably the trickiest thing of the bunch.
  • Get myself a proper Windows x64 virtual machine, and do the Windows testing myself.
  • Bring the moar-jit branch up to date with MoarVM master, so that testers don't have such a hard time.
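
As mentioned, the JIT 'graph' is really a labeled linked list so far. To make that concrete, here is a hypothetical C sketch - all names invented, not MoarVM's actual structures - of such a list holding the 2 + 2 frame from above; labels and branches are left out, and a trivial evaluator stands in for the code generator that would walk the list:

    #include <stdio.h>

    typedef enum { OP_CONST, OP_ADD, OP_RETURN } JitOp;

    /* One node per primitive, in frame order. */
    typedef struct JitNode {
        JitOp op;
        int dst, a, b;           /* register operands */
        long value;              /* immediate value for OP_CONST */
        struct JitNode *next;    /* the list structure */
    } JitNode;

    int main(void) {
        /* const_i64_16 r0, 2; const_i64_16 r1, 2; add_i r2, r1, r0; return_i r2 */
        JitNode n3 = { OP_RETURN, 0, 2, 0, 0, NULL };
        JitNode n2 = { OP_ADD,    2, 1, 0, 0, &n3  };
        JitNode n1 = { OP_CONST,  1, 0, 0, 2, &n2  };
        JitNode n0 = { OP_CONST,  0, 0, 0, 2, &n1  };

        /* Walk the list as a code generator would; here we just evaluate. */
        long regs[8] = { 0 };
        for (JitNode *n = &n0; n != NULL; n = n->next) {
            switch (n->op) {
            case OP_CONST:  regs[n->dst] = n->value;                 break;
            case OP_ADD:    regs[n->dst] = regs[n->a] + regs[n->b]; break;
            case OP_RETURN: printf("return %ld\n", regs[n->a]);     return 0;
            }
        }
        return 0;
    }
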
As for longer-term goals, we've had some constructive contact with Mike Pall (of LuaJIT / DynASM fame), and he suggested ways to extend DynASM to support dynamic registers. As I tried to explain last week, this is important for 'good' instruction selection. On further reflection, it will probably do just fine to introduce expression trees - and the specialized compiler backend for them, which would need register selection - gradually, i.e. per supported instruction rather than all at once.
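
To illustrate - purely as a hypothetical sketch, not a committed design - such an expression tree could be as simple as a node type that a tree-walking backend assigns a register to:

    #include <stdio.h>

    /* Invented names; a real backend would walk this tree to select
       instructions and pick a register for each node. */
    typedef struct Expr {
        enum { E_CONST, E_ADD } op;
        long value;                  /* for E_CONST */
        struct Expr *left, *right;   /* for E_ADD */
        int reg;                     /* register chosen by the backend */
    } Expr;

    static long eval(Expr *e) {
        /* Stand-in for instruction selection: just compute the value. */
        return e->op == E_CONST ? e->value : eval(e->left) + eval(e->right);
    }

    int main(void) {
        Expr two_a = { E_CONST, 2, NULL, NULL, -1 };
        Expr two_b = { E_CONST, 2, NULL, NULL, -1 };
        Expr sum   = { E_ADD, 0, &two_a, &two_b, -1 };
        printf("%ld\n", eval(&sum));   /* prints 4 */
        return 0;
    }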

However, the following features are more important still:
  • Support for deoptimisations. Up until now (and for the foreseeable future) we keep the memory layout exactly the same as the interpreter uses, which should make falling back from JIT-compiled code to the interpreter comparatively straightforward.
  • JIT-to-interpreter calls. This is a bit tricky, because MoarVM doesn't support nesting interpreters. What we'll have to do instead is return to the interpreter with a label that stores our continuation, and continue at that continuation when we return (see the sketch after this list).
  • At some point, JIT-to-JIT calls. Much the same problems apply - in theory, this doesn't have to differ from JIT-to-interpreter calls, although obviously we'd rather optimise the interpreter out of this loop.
  • Support for exceptions, obviously, which - I hope - won't be as tricky as it seems, as it ultimately comes down to jumping into the bytecode at the right place.
  • Support for simple optimisations, such as merging various MoarVM opcodes into a single opcode if that is more suitable.
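
As a rough illustration of the JIT-to-interpreter scheme sketched above - again a toy model with invented names, not MoarVM code - the compiled frame returns a status and stores a resume label instead of nesting an interpreter, and the driver loop re-enters it at that label once the interpreter has done its work:

    #include <stdio.h>

    enum { RUN_DONE, RUN_INTERP };   /* status codes returned to the driver */

    static int interp_call(void) {
        printf("  (interpreter handles the nested call)\n");
        return 42;
    }

    /* The 'JITted' frame: *label stores our continuation. */
    static int frame(int *label, int *result) {
        switch (*label) {
        case 0:
            printf("JIT code before the call\n");
            *label = 1;         /* store the continuation... */
            return RUN_INTERP;  /* ...and fall back to the interpreter */
        case 1:
            printf("JIT code after the call\n");
            *result += 1;
            return RUN_DONE;
        }
        return RUN_DONE;
    }

    int main(void) {
        int label = 0, result = 0;
        while (frame(&label, &result) == RUN_INTERP)
            result = interp_call();
        printf("result: %d\n", result);   /* prints: result: 43 */
        return 0;
    }
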
So that is it for now. See you next week!
