On Thu, Nov 18, 2010 at 2:09 PM, Mike Pall <mikelu-1011@mike.de> wrote:
> Petri Häkkinen wrote:
>> For example,
>>
>> normalize(mul(transformMat, vec1 + vec2))
>>
>> is very convenient syntax, and something I've been using with C++ and
>> shaders.
>>
>> What do you think, is the LuaJIT compiler smart enough to eliminate these
>> allocs?

> Not yet. But adding escape analysis plus generalized store sinking
> and allocation sinking has been on my TODO list for a long time.

>> But as a commercial game developer I would be very concerned about the
>> alloc/gc overhead.

> That's speculation and/or premature optimization. Of course such
> overhead is easily demonstrable in isolated benchmarks. But I'd
> need to see hard numbers from full-game profiling before I'd
> consider this to be a serious issue.

There are two problems with this. First, suppose I trust your opinion on this (not saying I don't) and invest a serious amount of time building a big-ass game project with LuaJIT without worrying about the alloc/gc overhead, only to discover at the end that all those vector operations, scattered across thousands of places in the source code, show up in profiling. At that point it's almost impossible to go back and change them everywhere. Some system-wide optimizations are better evaluated sooner rather than later...
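To be concrete about what I mean, here's a minimal sketch of the kind of vector library in question (vec3, mul and normalize are stand-in names of my own, not from any real library):

-- Minimal sketch of a table-based vector library.
local vec3_mt = {}

-- Every constructor call allocates a fresh table.
local function vec3(x, y, z)
  return setmetatable({ x = x, y = y, z = z }, vec3_mt)
end

-- '+' allocates a temporary for its result.
vec3_mt.__add = function(a, b)
  return vec3(a.x + b.x, a.y + b.y, a.z + b.z)
end

-- Matrix (three row vectors) times vector; allocates again.
local function mul(m, v)
  return vec3(m[1].x * v.x + m[1].y * v.y + m[1].z * v.z,
              m[2].x * v.x + m[2].y * v.y + m[2].z * v.z,
              m[3].x * v.x + m[3].y * v.y + m[3].z * v.z)
end

-- And normalize allocates the final result.
local function normalize(v)
  local len = math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z)
  return vec3(v.x / len, v.y / len, v.z / len)
end

local vec1, vec2 = vec3(1, 0, 0), vec3(0, 1, 0)
local transformMat = { vec3(1, 0, 0), vec3(0, 1, 0), vec3(0, 0, 1) }

-- One innocent-looking line, three short-lived tables for the GC:
local n = normalize(mul(transformMat, vec1 + vec2))
print(n.x, n.y, n.z)

Multiply that one line by thousands of call sites executed every frame and you can see why I'm asking.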

The second problem is easier to solve: I simply don't know enough about LuaJIT's internals at the moment to judge how severe the overhead could be.

Would it help if I counted how many temp vectors a real commercial game written in C++ creates each frame? Are there any other statistics that would help in estimating the impact?
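On the Lua side, at least, counting would be easy, e.g. with a wrapper along these lines (a hypothetical sketch of mine; it assumes the vector constructor is reachable as a global vec3 and that the engine calls report_frame_allocs once per frame):

-- Count per-frame temporaries by wrapping the vector constructor.
local alloc_count = 0
local raw_vec3 = vec3

function vec3(x, y, z)
  alloc_count = alloc_count + 1
  return raw_vec3(x, y, z)
end

-- Call once at the end of every frame.
function report_frame_allocs()
  print(("temp vectors this frame: %d"):format(alloc_count))
  alloc_count = 0
end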

>> I'm talking about pushing
>> LuaJIT to extremes here, so that only a minimal core would need to be
>> implemented in C++.

> Actually we're in violent agreement here. You just revealed my
> plan for world domination. Dang!

Good to know that you think this is a realistic goal!

>> Yes, I agree. Bloating the size of all values is not a good solution. I was
>> wondering whether it would be somehow magically possible to make only the
>> vector type bigger without adding the overhead to all other types?

> Sorry, I'm not a magician. I only play one on this list.

Ok, too bad. But that can't mean it's impossible to have values with different memory footprints in a dynamic language, can it?

Cheers,

Petri