lua-users home
lua-l archive



I sent this to the list but I didn't see it appear so I'm resending it... I
apologize if anyone gets two copies of it.

On Tue, Feb 17, 1998 at 12:35:33PM -0800, Steve Dekorte wrote:
> > the thing about compiling to machine code is that the semantics of Lua are
> > a bit demanding, in that almost everything may trigger a tag method.
> 
> That's a good point. Hmmm. What if tag methods had to be statically defined..

I don't think that would be a good idea. It shouldn't be all that difficult to
just recompile dependent code if adding a tag method made a significant change.
Adding tag methods isn't something that happens once every several thousand
instructions; on average it happens far less often than that.

In fact, most of the time you set up tag methods before you execute any
significant amount of code, so as long as you took a lazy approach to
recompilation, it wouldn't hurt much.

However, I would think that much of the generated code would just make function
calls for things like "calling a Lua function" or "setting a table value"
anyhow, so you wouldn't have to recompile that code at all, as long as you put
the tag evaluation inside the function that handles the condition.
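A minimal sketch of that idea, in Python purely for illustration (all of the
names here are made up, not anything from the Lua sources): generated code
always goes through one helper, and the tag-method check lives inside the
helper, so installing a tag method later changes behavior immediately without
recompiling any caller.

```python
# Illustrative only: the "settable" operation goes through one helper,
# and the tag-method check happens inside it.
settable_tagmethods = {}   # tag -> function(table, key, value)

def rt_settable(tag, table, key, value):
    tm = settable_tagmethods.get(tag)
    if tm is not None:
        return tm(table, key, value)   # tag method intercepts the write
    table[key] = value                 # default: raw store
    return value

t = {}
rt_settable("plain", t, "x", 1)        # default path: stores normally

# Install a tag method afterwards; it takes effect on the very next
# call, with no recompilation of the code that calls rt_settable.
settable_tagmethods["plain"] = lambda tbl, k, v: "refused"
result = rt_settable("plain", t, "y", 2)
```
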

Incidentally, I think Lua would see a much greater speed increase from some kind
of table-lookup cache than it would from native code generation. Namely, for
method calls, or deeply nested method calls, dereference the target "slot" (i.e.
NOT the actual method, but the slot in the table which holds it) once; then, if
the slot changes, or if any of the source variables used to reach it change,
make sure it gets recomputed somehow: either through a computed tag, references
back to the source cache, or whatever.

I think one difficulty in this is that multi-level table lookups are spread over
several instructions, so the cache would have to know how to bypass several
instructions with a "cache op". The interpreter could look for "static"
sequences of table lookups and insert a cache operation which would check that
the "input table" to the static stream (or the global variable first
dereferenced) was the same as "last time", and if so bypass the whole stream of
lookups. For "dynamic" sequences, a static cache op could be put around the
whole sequence, and a tag could be put on each "dynamic" piece so that it could
invalidate the static cache op if its variable changed.

For example, static case:

local boo = <get table from somewhere>

boo.a:c(a,v,c);

Normal sequence does:

- given boo on the stack
lookup a in boo
lookup c in a


Cached sequence does:

if (boo == previous boo)
     grab cached slot 
  else 
     do normal sequence and re-cache
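
The cached sequence above, sketched in Python just to make the bookkeeping
concrete (the identity check stands in for comparing table references in the
interpreter; none of these names come from Lua itself):

```python
# Illustrative sketch of the static cache op: remember which table we
# saw "last time" and the slot the lookup chain produced; if the same
# table comes back, bypass the whole stream of lookups.
lookup_count = 0

def chain_lookup(boo):
    """The normal sequence: lookup a in boo, then c in the result."""
    global lookup_count
    lookup_count += 2          # two table lookups
    return boo["a"]["c"]

cache = {"table": None, "slot": None}

def cached_lookup(boo):
    if cache["table"] is boo:          # boo == previous boo?
        return cache["slot"]           # grab cached slot
    slot = chain_lookup(boo)           # miss: do normal sequence
    cache["table"] = boo               # ...and re-cache
    cache["slot"] = slot
    return slot

boo = {"a": {"c": "method-c"}}
first = cached_lookup(boo)    # miss: does both lookups
second = cached_lookup(boo)   # hit: no lookups at all
```

Note that this only checks the identity of boo; as described above, a mutation
of one of the intermediate tables would have to invalidate the cache through a
tag of some kind.
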


For dynamic case:

local boo = <get table from somewhere>
local y   = <get number from somewhere>

boo.a[y]:c(a,v,c);

Normal sequence does:

- given boo on the stack
lookup a in boo
push y
lookup [y] in a
lookup c in result

Cached sequence does:

if (boo == previous boo && y == previous y)
   grab cached slot
  else
   do normal sequence and re-cache

OR we could do:

if (boo == previous boo) 
   grab cached slot where y == (current y)
  else
   do normal sequence and store result for current y

This second option would allow the cache to store multiple cached values, keyed
on the variables used. So the first iteration of a loop would "create" all the
cached values, and the second iteration would just use them (given that the
source table and everything else stayed the same).
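
A sketch of that second option, again in Python only for illustration: one
cache per static chain, holding a slot per value of the dynamic variable y.
The first pass over the y values fills the cache; later passes hit it.

```python
# Illustrative sketch of the keyed dynamic cache: slots are stored per
# value of y, and the whole set is thrown away if the source table
# changes identity.
lookup_count = 0

def chain_lookup(boo, y):
    """Normal sequence: lookup a in boo, [y] in a, c in the result."""
    global lookup_count
    lookup_count += 3          # three table lookups
    return boo["a"][y]["c"]

cache = {"table": None, "slots": {}}   # slots: y -> cached slot

def cached_lookup(boo, y):
    if cache["table"] is boo and y in cache["slots"]:
        return cache["slots"][y]       # grab cached slot for current y
    if cache["table"] is not boo:      # different source table:
        cache["table"] = boo           # discard all keyed slots
        cache["slots"] = {}
    slot = chain_lookup(boo, y)        # normal sequence...
    cache["slots"][y] = slot           # ...store result for current y
    return slot

boo = {"a": {1: {"c": "m1"}, 2: {"c": "m2"}}}
for y in (1, 2):                       # first pass: all misses
    cached_lookup(boo, y)
hits = [cached_lookup(boo, y) for y in (1, 2)]  # second pass: all hits
```
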


> >..right now, I'd need to be convinced that you do need that extra speed in Lua
> >code...
> 
> I'm not convinced it's needed for most situations either. But it sure would
> be nice to have a high-level language that could be used for most any application.
> Self's compiler and optimizations got Self code running at C++ speeds.
> I'd like to see Lua used as more than just an extension language.

Self was running at around half C++ speed (faster for some things, slower for
others), while needing at least 64 megs of RAM to get anywhere near that speedup
for a program of any size. I don't think shooting for that kind of RAM usage is
necessarily a good idea.

-- 
David Jeske (N9LCA) + http://www.chat.net/~jeske/ + jeske@chat.net