- Subject: Re: Optional static types for Lua - experimental derivative Ravi
- From: Richard Hundt <richardhundt@...>
- Date: Mon, 19 Jan 2015 01:08:13 +0100
>
> How is adding type checking, in any form, optimizing Lua? Isn't it one of
> Lua's great benefits that it is actually not statically typed?
The way I like to think about it is that static-dynamic and
strong-weak are two orthogonal axes of a language's characteristics.
Static-dynamic to me means whether type checks happen at compile-time
or run-time. Strong-weak indicates how strict the language is about
its types and how much coercion is going on behind the scenes.
Being dynamically typed doesn't mean no type checks. Lua is actually
pretty strongly typed (the checks are just made at runtime). Contrast
with C which is statically typed, but has a fairly weak type system
because you can really cast anything to anything, or Perl which is
dynamic and weakly typed.
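To illustrate the "strong but dynamic" point (a minimal sketch; any stock Lua interpreter behaves this way):

```lua
-- Lua checks types at runtime and raises errors rather than
-- silently coercing, unlike Perl or C.
local ok, err = pcall(function()
  return {} + 1   -- arithmetic on a table: a runtime type error
end)
print(ok)        -- false: the operation was rejected at runtime

-- The one notable coercion: strings that look like numbers
-- are converted in arithmetic contexts.
print("10" + 1)  -- 11
```

The table case errors out at runtime instead of producing garbage, which is what I mean by the checks being strict even though they're deferred.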
Actually Perl is a good case in point. In Perl, empty arrays and
hashes, the empty string, zero, and `undef` all evaluate to false in a
boolean context. Lua is far stricter than this. Only `false` and `nil`
are logically false. There is some coercion in Lua and it's been the
subject of debate, but generally Lua is on the stricter side.
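A quick sketch of the truthiness rules (again, plain Lua, no extensions assumed): values Perl treats as false are all true in Lua.

```lua
-- In Lua, only false and nil are logically false.
-- Zero, the empty string, and empty tables are all truthy.
local function truthy(v)
  if v then return true else return false end
end

print(truthy(0))      -- true  (false in Perl)
print(truthy(""))     -- true  (false in Perl)
print(truthy({}))     -- true  (an empty hash is false in Perl)
print(truthy(nil))    -- false
print(truthy(false))  -- false
```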
I think the performance gains in the OP relate more to using static
type annotations to generate specialized instructions which operate
more efficiently on floating point numbers and integers, thereby
reducing runtime coercion (at the C level, converting doubles to ints
and back has a cost). That's as far as I can tell from a cursory
glance at the code. I might have missed something.
> In my experience, with a reasonably large code base, Lua not being
> statically typed is much more of a benefit than a defect, so I wonder why
> this idea of making Lua statically typed pops up from time to time.
>
> Michael Schröder starts his text with claiming that "Like other dynamically
> typed languages, Lua spends a significant amount of execution time on type
> checks.". Is that really the case? Are type checks really using a lot of
> CPU cycles? I have my doubts, but I have not done the research.
I think "significant" here doesn't mean that the time is dominated by
checks. Just that they're not negligible.
The paper indicates a 20% performance gain in some cases (probably
safer to say 10% in general, as their results vary), so yes: although
underwhelming, they've shown these optimizations to have a measurable
impact. What struck me as particularly interesting is that, as they
mentioned in the paper, typically it's the dispatch loop that gets
optimized to avoid branch mis-predictions (via direct or context
threading), but they argue that the cost is dominated by what's
happening in the instruction. This is relevant for register-based VMs
because they have a denser encoding (no PUSH/POP instructions as on
stack-based machines) and more load pressure per instruction, so it
seems like a sensible approach.
Also, this kind of optimization might become more interesting for
Lua's internals in the future as the types get richer (thinking of the
changes in 5.3 in particular).
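A sketch of what I mean, assuming a Lua 5.3 interpreter: the new integer/float split is observable via `math.type`, and keeping values in one numeric domain (avoiding the double<->int round trips mentioned above) is exactly where specialized instructions could pay off.

```lua
-- Lua 5.3 distinguishes integer and float representations internally.
print(math.type(1))      -- "integer"
print(math.type(1.0))    -- "float"
print(math.type(1 + 2))  -- "integer": stays in the integer domain
print(math.type(1 / 2))  -- "float": / always produces a float
print(math.type(7 // 2)) -- "integer": floor division of two integers
```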