On 12/04/2011 01:34 AM, C Bielert wrote:
I've been experimenting with exactly this sort of thing here:
https://github.com/richardhundt/lupa/tree/method-accessors
It's essentially a transpiler which compiles an OO, curly-brackety
language to Lua source code. It supports guard expressions
which are evaluated at run-time. At some point I extended this
to include cdata types.
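To give a rough idea of what a run-time guard amounts to once
compiled down to plain Lua, a minimal hand-written sketch might
look like this (illustrative only, not the code Lupa actually emits):

-- Minimal hand-written sketch of a guarded function in plain Lua.
-- Illustrative only; this is not what Lupa actually generates.
local function check_number(value, name)
   if type(value) ~= "number" then
      error(string.format("bad argument %q: Number expected, got %s",
                          name, type(value)), 3)
   end
end

local function A(n, i)
   check_number(n, "n")
   check_number(i, "i")
   return n + i
end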
The tests show examples of the language:
https://github.com/richardhundt/lupa/blob/method-accessors/test/class.ku
run with:
./bin/kula <file>
I decided to do a complete rewrite though in this branch:
https://github.com/richardhundt/lupa/tree/setfenv-for-namespaces
and I haven't got guards in yet since I've decided to build a
bytecode compiler for LuaJIT2 first:
https://github.com/richardhundt/luajit2-codegen
It's all highly experimental and a bit messy, and the branches
diverge hugely, but right now I prefer to keep it that way
because I haven't settled on the syntax and semantics yet.
Hopefully, though, you can get an idea of what can be done and
how it performs. You need a fresh version of LPeg to make it
work.
Cheers,
Richard
This looks extremely interesting. The syntax seems pretty neat
compared to Lua, and in fact there seem to be ideas there that
haven't been explored in many languages yet. I might go as far
as using this language, if the inheritance mechanisms are
powerful enough.
Also, as suggested, I have performed a benchmark on the
typecheck, and the results are... surprising.
function A(n : Number, i : Number) {
   return n + i
}

n = 0
for i = 1, 5_000_000_000 {
   n = A(n, 5)
}
print(n)
If I remove the guard from A, the total time required to execute
the script doesn't seem to change at all. Now, this may be because
the example is stupidly simple, but it's still very interesting.
Does LuaJIT optimize away the typecheck? If so, I wonder whether it
can do so in more complex situations as well.
By the way, with regular LuaJIT I get around the same timing.
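A direct plain-Lua translation of the above, for the LuaJIT
comparison, would be something like the following (my assumption
of what that comparison looks like, not the exact script):

-- Plain-Lua version of the benchmark above, without guards.
local function A(n, i)
   return n + i
end

local n = 0
for i = 1, 5e9 do   -- 5_000_000_000, as in the guarded version
   n = A(n, 5)
end
print(n)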
Try 5e10 iterations or more. If you've got a newish Intel or AMD
CPU, you'll probably be seeing times of 20-50 ms for that loop with
LJ2. Most of that time will be spent parsing, compiling and garbage
collecting the AST. It's way more expensive than it should be, which
is why I started the rewrite: the eval() implementation was just
painfully expensive. The cost was mainly in the parser, which is
pretty horrid because it does far too much local backtracking;
the newer code proves that it can be made really fast.
Also the primitive type checking operation is based on
getmetatable(), which is compiled by LJ2. And LJ2 is just blazingly
fast anyway. In general I found that the runtime typechecks had
negligible impact on performance when running under LJ2 with JIT
turned on. Turning the JIT off makes it noticeable, but even with
LJ2 in interpreter mode, performance is still very competitive
with Python or Ruby.
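To make the getmetatable() point concrete, a check of roughly that
shape in plain Lua would look something like this sketch (the
details are assumptions about the general idea, not the actual
implementation):

-- Rough sketch of a metatable-based instance check in plain Lua.
-- getmetatable() is compiled by LJ2, so in a hot trace a check
-- like this is very cheap.
local Point = {}
Point.__index = Point

local function is_instance(obj, class)
   return getmetatable(obj) == class
end

local p = setmetatable({ x = 1, y = 2 }, Point)
assert(is_instance(p, Point))          -- passes
assert(not is_instance({}, Point))     -- plain table, no matching metatable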
If you're interested in contributing ideas or code, let me know off
list. I think that if you were going to go with the language, we
could collaborate on making it do what you need in terms of OO
goodies and such. Thus far I've been working in isolation and it'd
be useful to have someone with a real use case for trying it out ;)
Cheers,
Richard