lua-users home
lua-l archive



my noob input :o)

A long time ago I used Smalltalk for many projects. Smalltalk is, of course, a VM-based language with incrementally compiled bytecode; IBM pushed it a bit farther by incorporating a JIT-compiled code cache.

One cool feature was that Smalltalk saved everything inside its "image" at exit and reloaded everything when you restarted it... IBM did that for the code cache too, so you got the full speed-up optimizations right away. What was kept in the cache was based on the hit frequency of each piece of code...
 Of course, Smalltalk images were big, very big :o)

So I was just asking myself: what about a small language and a JIT? Is there a speed gain when you save the compiled bytecode and JIT-compiled code into a permanent file and reload that file the next time the script runs?
How big would/could that cache have to be?
Is loading a few KB or MB faster than recompiling?
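For the bytecode half of the question, standard Lua already supports this via `string.dump` and `load` (or the `luac` tool). A minimal sketch, assuming Lua 5.2+ or LuaJIT; note this caches bytecode only, since neither PUC Lua nor LuaJIT offers a supported way to persist JIT-compiled machine code:

```lua
-- Sketch of the bytecode-caching idea using only the standard Lua API.
local source = "local x = 0 for i = 1, 100 do x = x + i end return x"

-- Compile once and dump the bytecode, as if writing it to a cache file.
local chunk = assert(load(source, "=demo"))
local bytecode = string.dump(chunk)

-- Later (or in a new process): reload the dumped bytecode instead of
-- recompiling the source. The "b" mode restricts load() to binary chunks.
local cached = assert(load(bytecode, "=demo-cached", "b"))
print(cached())  --> 5050
```

In practice you would write `bytecode` to a file and `load` it back on the next run; that skips parsing, which is the part a cache like this can actually save.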


Surely there are already some studies/benchmarks/metrics on that...
Can someone give a few pointers?  :o)

:o)

On 2015-07-15 9:52 PM, Coda Highland wrote:
On Wed, Jul 15, 2015 at 6:28 PM, Paige DePol <lual@serfnet.org> wrote:
I do not know much about LuaJIT; is it a totally standalone compiler
for Lua code, not based on any other compiler system?
Yes, it's completely hand-constructed. Additionally, it's explicitly a
JIT ("just-in-time") compiler, with no AOT ("ahead-of-time")
functionality.

Hmm, I guess I am left wondering what magic LuaJIT is doing to achieve
such dramatic compilation speeds compared to LLVM or GCC?
Lua is a VERY small language. Specializing for the specific set of
functionality that Lua needs allows the compiler to be made more
compact and efficient than a more general-purpose compiler
architecture.

Additionally, LuaJIT is ONLY a JIT compiler. Compilation speed is
therefore VERY important, and as such tradeoffs are made -- LuaJIT
only applies optimization transformations when they can be done
quickly, even if the resulting algorithm could theoretically be made
to run faster by a more intensive optimization process. LLVM and GCC
are both intended for AOT use with JIT being something of an
afterthought, so they focus on producing the fastest output they can
even if it takes longer to generate it.

It also aborts compilation if it expects it would take too long, or if
it hits code that it knows it can't optimize or trace, falling back to
its (also hand-constructed and fine-tuned) interpreter. If this causes
a performance hit visible to the developer, it's the developer's job
to make the changes to allow LuaJIT to compile the code successfully.
(It provides tools to see what's going on so you can do so.) So part
of being a LuaJIT user is helping the compiler do its best.
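As a small sketch of the diagnostics mentioned above: LuaJIT ships a `jit` module for querying and controlling the compiler, and the `luajit` binary accepts `-jv` and `-jdump` flags for per-trace detail. The snippet is guarded with `pcall` so it also runs under plain PUC Lua, where no such module exists:

```lua
-- Query LuaJIT's compiler state via its "jit" module (LuaJIT only).
local ok, jit = pcall(require, "jit")
if ok and type(jit) == "table" and jit.status then
  local enabled = jit.status()  -- first return: true if the JIT is active
  print(jit.version, enabled and "JIT on" or "JIT off")
  -- For per-trace detail, run scripts as:
  --   luajit -jv script.lua      (verbose trace summary)
  --   luajit -jdump script.lua   (full trace/IR/machine-code dump)
else
  print("not running under LuaJIT; no trace diagnostics available")
end
```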

Another benefit: LuaJIT is a trace compiler, so it only compiles code
that's actually used -- and it does so at a finer granularity than
just whole functions at once, and it can make decisions based on the
values of variables. This is a big deal. It also won't waste time on
code that's only ever executed once.
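A hypothetical illustration of that granularity (the function name is mine, not anything from LuaJIT):

```lua
-- A tracing JIT watching this loop sees that `total` and `i` are always
-- numbers, so it can compile a specialized numeric trace for the loop
-- body alone; the surrounding run-once code is left to the interpreter.
local function sum_squares(n)
  local total = 0
  for i = 1, n do          -- hot loop: becomes a compiled trace in LuaJIT
    total = total + i * i  -- type-stable: always number arithmetic
  end
  return total             -- run-once epilogue: interpreting it is fine
end

print(sum_squares(10))  --> 385
```

This is also why type-stable loop bodies matter to LuaJIT users: if the types seen inside the loop keep changing, the trace's assumptions keep failing.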

The above points together require an additional point: LuaJIT does
LOTS of tiny compilations. If a code path gets too long, it breaks it
up. If a code path is hit with parameters that invalidate earlier
optimization assumptions, it recompiles it with the assumptions
broadened.
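The guard-and-fallback mechanism behind that recompilation can be mimicked in plain Lua (names and structure are mine, purely illustrative, not LuaJIT internals):

```lua
-- A trace records the types it saw at compile time; a guard re-checks
-- them on entry, and a mismatch exits back to a generic path -- in real
-- LuaJIT, a side exit that may grow a new, broadened trace.
local function traced_double(v, generic)
  if type(v) == "number" then
    return v * 2        -- "compiled" fast path, specialized on numbers
  end
  return generic(v)     -- guard failed: fall back to the generic path
end

local function generic_double(v) return v .. v end  -- string fallback

print(traced_double(21, generic_double))    --> 42
print(traced_double("ab", generic_double))  --> abab
```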

There's been a lot of talk (not just in the Lua/LuaJIT communities,
but in CS in general -- Java and C# both use JIT compilers in their
VMs, for example) about the benefits of tracing JIT compilation; in
theory, a tracing JIT compiler could potentially produce faster code
than AOT compilation in certain circumstances, because it can reason
about the code with real-world data instead of abstract static
analysis.

/s/ Adam