lua-l archive


I would like to announce the availability of llvm-lua.

llvm-lua converts Lua bytecode to LLVM IR and supports both JIT and static 
compiling.  Using LLVM gives Lua JIT support on CPU architectures other than 
x86.

I converted the Lua bytecode dispatch loop into a set of C functions, one 
for each opcode.  Each opcode function takes two parameters: one is the current 
Lua function's state, and the other is the 32-bit opcode.  The function state 
holds all the local variables that were outside the dispatch loop 
(lua_State, stack base, constants, closure).  That set of opcode functions is 
compiled into an LLVM bitcode file (called lua_vm_ops.bc) that is used by 
llvm-lua at runtime.  llvm-lua loads that bitcode file on start-up (right now 
it only looks in the working directory for 'lua_vm_ops.bc'; I need to find a 
better way to load it).

Each Lua function is converted into one LLVM function, with each Lua opcode in 
that function being converted into a call to one of the opcode functions 
in lua_vm_ops.bc, passing the opcode's 32-bit value as a constant for the 
second parameter.  Most of those function calls are inlined before running a 
few LLVM optimization passes over the whole function.

The performance is better than the normal Lua VM on most scripts that I have 
tried.  For one script I had to disable JIT for any Lua function that had 
more than 800 opcodes, since the LLVM codegen phase was taking at least a 
second to turn the LLVM IR into machine code.  The first time I ran the 
meteor.lua script it spent more than 30 seconds compiling all the Lua code, 
whereas the normal Lua VM took less than 4 seconds to run the whole script.  
See the attached benchmark.log comparing the normal Lua VM with llvm-lua, 
native, and luajit.  The "native" column shows the scripts after they have 
been compiled to machine code and made into standalone executables; the time 
for that column doesn't include the compile time.  All the code used in this 
benchmark to compare the VMs can be downloaded from the project's website.

The code can be downloaded from the project's website:
http://code.google.com/p/llvm-lua/

-- 
Robert G. Jakabosky
script        lua      llvm-lua  native   luajit   (times in seconds)
ackermann     2.03     2.19      1.89     0.50     
ary           1.02     0.53      0.54     0.80     
binarytrees   1.63     1.41      1.36     1.03     
chameneos     1.12     0.66      0.63     0.52     
except        1.19     0.91      0.88     0.58     
fannkuch      1.45     0.74      0.69     0.84     
fibo          0.83     0.48      0.49     0.16     
harmonic      1.23     0.31      0.28     0.24     
hash          0.88     0.86      0.91     0.81     
hash2         1.10     0.94      0.86     0.95     
heapsort      1.13     0.52      0.50     0.58     
hello         0.00     0.00      0.00     0.00     
knucleotide   0.60     0.58      0.57     0.47     
lists         1.08     0.76      0.70     0.65     
matrix        1.06     0.58      0.57     0.85     
meteor        3.93     5.97      4.47     1.17     
methcall      1.21     0.76      0.71     0.61     
moments       1.08     0.80      0.73     0.91     
nbody         1.09     0.65      0.54     0.54     
nestedloop    1.06     0.31      0.28     0.17     
nsieve        1.15     0.84      0.79     0.92     
nsievebits    1.50     0.64      0.66     0.60     
objinst       1.13     0.91      0.87     0.83     
partialsums   1.03     0.65      0.63     0.56     
pidigits      1.27     1.66      1.29     0.88     
process       0.00     0.65      0.89     0.54     
prodcons      0.97     0.62      0.59     0.43     
random        1.07     0.47      0.46     0.23     
recursive     1.23     0.70      0.65     0.20     
regexdna      1.03     0.99      1.22     0.93     
regexmatch    0.26     0.28      0.27     0.25     
revcomp       0.35     0.41      0.37     0.29     
reversefile   0.52     0.43      0.42     0.42     
sieve         1.07     0.57      0.54     0.64     
spectralnorm  1.20     0.60      0.55     0.41     
spellcheck    0.77     0.71      0.69     0.66     
strcat        0.93     0.77      0.71     0.71     
sumcol        0.88     0.75      0.74     0.82     
takfp         0.41     0.25      0.26     0.10     
wc            1.04     0.93      1.08     1.43     
wordfreq      0.70     0.62      0.65     0.72     
Total         43.23    34.41     31.93    24.95