- Subject: Re: Another attempt to create JIT compiler for Python has failed
- From: William Ahern <william@...>
- Date: Tue, 21 Feb 2017 18:54:22 -0800
On Tue, Feb 21, 2017 at 12:47:43AM +0000, Dibyendu Majumdar wrote:
> On 21 February 2017 at 00:43, Hisham <firstname.lastname@example.org> wrote:
> > On 20 February 2017 at 21:28, Dibyendu Majumdar <email@example.com> wrote:
> >> successful JIT implementations.
> > V8 was the relevant player there when talking stand-alone JS
> > implementation, so this got me curious. Which are the other ones?
> Mozilla SpiderMonkey
> Microsoft ChakraCore
> And interestingly, none are tracing JITs as far as I know.
ChakraCore seems to implement both trace-based and method-based JIT
compilation:
    When ChakraCore notices that a function or loop-body is being invoked
    multiple times in the interpreter, it queues up the function in
    ChakraCore's background JIT compiler pipeline to generate optimized
    JIT'ed code for the function. Once the JIT'ed code is ready, ChakraCore
    replaces the function or loop entry points such that subsequent calls
    to the function or the loop start executing the faster JIT'ed code
    instead of continuing to execute the bytecode via the interpreter.
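The mechanism described there — count entries in the interpreter, compile in
the background, then swap the entry point — can be illustrated with a toy
Python sketch. None of this is ChakraCore code; the class, the threshold, and
the fake "compile" step are all invented for illustration:

```python
HOT_THRESHOLD = 3  # real engines use much larger, tuned thresholds

class TieredFunction:
    """Start every function in the 'interpreter'; once it has been
    entered HOT_THRESHOLD times, swap the entry point so that later
    calls run the 'JIT'ed' version instead."""

    def __init__(self, fn):
        self.fn = fn
        self.calls = 0
        self.entry = self._interpret  # mutable entry point

    def _interpret(self, *args):
        self.calls += 1
        if self.calls >= HOT_THRESHOLD:
            # Replace the entry point; subsequent calls skip this path.
            self.entry = self._compile()
        return ("interp", self.fn(*args))

    def _compile(self):
        # Stand-in for background JIT compilation: same semantics,
        # tagged so we can observe which tier actually ran.
        fn = self.fn
        def jitted(*args):
            return ("jit", fn(*args))
        return jitted

    def __call__(self, *args):
        return self.entry(*args)

square = TieredFunction(lambda x: x * x)
tiers = [square(i)[0] for i in range(5)]
print(tiers)  # → ['interp', 'interp', 'interp', 'jit', 'jit']
```

The important property is that the dispatch site never checks "am I hot yet?"
after tier-up: replacing the entry point itself is what makes the fast path
free on subsequent calls.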
SpiderMonkey also seems to have had a tracing JIT compiler at one point. The
current design looks method-based, but the documentation suggests some
loop-based profiling as well.
JavaScriptCore's FTL JIT kicks in for functions that are invoked thousands of
times, or loops that run tens of thousands of times. Perhaps it compiles loops
to tiny inline functions? They have a very complex multi-stage compilation
process that first compiles many methods to simple, unoptimized machine code
and then selects a subset of those for aggressive compilation, so
superficially that seems like a possibility.
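That selection step — cheap baseline code for everything, aggressive
optimization only for the hottest subset — amounts to a threshold check over
per-function profiling counters. A hypothetical Python sketch; the threshold
values and the profile shape are invented, not JavaScriptCore's real numbers:

```python
CALL_THRESHOLD = 1_000    # "invoked thousands of times"
LOOP_THRESHOLD = 10_000   # "loops tens of thousands of times"

def select_for_optimization(profile):
    """profile maps function name -> (call_count, loop_iterations).
    Everything already has baseline code; return the subset promoted
    to the optimizing tier."""
    hot = set()
    for name, (calls, loop_iters) in profile.items():
        if calls >= CALL_THRESHOLD or loop_iters >= LOOP_THRESHOLD:
            hot.add(name)
    return hot

# Invented example profile: a loop-heavy function qualifies on
# iteration count even though it is called only a few times.
profile = {
    "parse_header": (12, 0),
    "inner_loop":   (3, 250_000),
    "main":         (1, 40),
    "hash_key":     (5_000, 120),
}
print(sorted(select_for_optimization(profile)))
# → ['hash_key', 'inner_loop']
```

Counting loop iterations separately from calls is what lets a function that is
entered once but loops forever still reach the optimizing tier (real engines
handle that case with on-stack replacement rather than waiting for the next
call).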