lua-l archive


On Thu, Nov 18, 2010 at 12:40 PM, Mike Pall <> wrote:
Petri Häkkinen wrote:
> Just to be sure that we're on the same page; I think vectors should be like
> other atomic data types, e.g. like numbers, rather than being objects. This
> would eliminate the allocation cost altogether.

Well, I disagree. There are other ways to eliminate allocations.
Proper API design (pass a reference to a destination vector)
and/or adding escape analysis to the compiler will solve this in
most cases.
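
The destination-vector idea can be sketched like this (hypothetical `vec3` helpers, not an existing API): the caller supplies the result vector, so a hot loop performs no per-iteration allocation.

```lua
-- Minimal sketch, assuming a hypothetical table-based vec3.
local function vec3(x, y, z) return { x = x, y = y, z = z } end

-- Allocating style: every call creates a fresh table.
local function add_new(a, b)
  return vec3(a.x + b.x, a.y + b.y, a.z + b.z)
end

-- Destination-passing style: the caller provides `out`.
local function add_into(out, a, b)
  out.x, out.y, out.z = a.x + b.x, a.y + b.y, a.z + b.z
  return out
end

local a, b = vec3(1, 2, 3), vec3(4, 5, 6)
local tmp = vec3(0, 0, 0)     -- allocated once, outside the hot loop
for i = 1, 1000 do
  add_into(tmp, a, b)         -- no per-iteration allocation
end
print(tmp.x, tmp.y, tmp.z)    --> 5  7  9
```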

Well, there's proper API design and then there's proper API design... An API designed purely for efficiency may not be convenient to use, and when the code base gets big and hairy, that makes the code harder to read and maintain. In my experience a functional approach to vectors (e.g. HLSL or Cg style) works well, and it can be implemented efficiently in C++. Unfortunately this approach produces a lot of temporary vectors, and thus temporary allocations if those vectors are allocated dynamically.

For example,

normalize(mul(transformMat, vec1 + vec2))

is very convenient syntax, and something I've been using with C++ and shaders.
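
In Lua that functional style is typically spelled with metamethods. A sketch (hypothetical vector library, all names mine): each operator returns a *new* vector, so the one-liner below creates two intermediate temporaries before `normalize` allocates the final result, which is exactly the temporary-allocation cost under discussion.

```lua
-- Sketch of the functional style via metamethods (hypothetical library).
local Vec = {}
Vec.__index = Vec

local function vec3(x, y, z)
  return setmetatable({ x = x, y = y, z = z }, Vec)
end

Vec.__add = function(a, b)           -- temp #1: the sum
  return vec3(a.x + b.x, a.y + b.y, a.z + b.z)
end

local function mul(m, v)             -- temp #2: 3x3 row-major matrix * vector
  return vec3(
    m[1]*v.x + m[2]*v.y + m[3]*v.z,
    m[4]*v.x + m[5]*v.y + m[6]*v.z,
    m[7]*v.x + m[8]*v.y + m[9]*v.z)
end

local function normalize(v)          -- allocates the final result
  local len = math.sqrt(v.x*v.x + v.y*v.y + v.z*v.z)
  return vec3(v.x/len, v.y/len, v.z/len)
end

local identity = {1,0,0, 0,1,0, 0,0,1}
local n = normalize(mul(identity, vec3(3,0,0) + vec3(1,0,0)))
print(n.x, n.y, n.z)                 --> 1  0  0
```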

What do you think, is the LuaJIT compiler smart enough to eliminate these allocs?

> Modern games are full of vector manipulation, so this is basically the only
> thing stopping us from creating big commercial quality games with LuaJIT.

Please don't spread FUD. This hasn't stopped anybody else from
creating games with LuaJIT. Most of that vector stuff is running
on the GPU or in a physics engine nowadays. Whatever is left, is
either not time-critical and/or the allocations can be eliminated.

You probably know the extent of LuaJIT usage in commercial games better than I do. But as a commercial game developer I would be very concerned about the alloc/GC overhead. My impression is that Lua is used for scripting in games, but not as the primary programming language. I'm talking about pushing LuaJIT to the extreme here, so that only a minimal core would need to be implemented in C++. Why? Because it makes sense from a production point of view: writing code in a higher-level language is just a lot faster, and live code updates are the real killer feature.

There are still a lot of uses for vectors in games outside GPU rendering and physics. Game objects store various transforms as state that needs to be manipulated; object properties (material properties, colors, positions, etc.) are stored in vectors; AI needs to compute direction vectors and distances; and so on. All this adds up to a lot of operations per frame.
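
A typical piece of that per-frame CPU-side work might look like this (sketch; the player/enemy tables are hypothetical): a handful of subtractions and dot products per object, repeated across many objects every frame.

```lua
-- Sketch: nearest-enemy query by squared distance (hypothetical data).
-- Scalar-unrolled here to avoid temporaries; with a vector type each
-- iteration would naturally want a subtraction and a dot product.
local function dist_sq(ax, ay, az, bx, by, bz)
  local dx, dy, dz = ax - bx, ay - by, az - bz
  return dx*dx + dy*dy + dz*dz
end

local player = { x = 0, y = 0, z = 0 }
local enemies = {
  { x = 10, y = 0, z = 0 },
  { x = 3,  y = 4, z = 0 },   -- squared distance 25, the nearest
  { x = 0,  y = 0, z = 9 },
}

local nearest, best = nil, math.huge
for _, e in ipairs(enemies) do
  local d = dist_sq(player.x, player.y, player.z, e.x, e.y, e.z)
  if d < best then nearest, best = e, d end
end
print(best)  --> 25
```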

> Basically what we did with lua-vec was to double the size of Value struct so
> that it could hold a 4D float vector. Doubling the size of every value is
> quite nasty yes, but perhaps this could be implemented better in LuaJIT...
> What do you think?

No way. Memory bandwidth is one of the top performance bottlenecks
on modern CPUs. You don't want a narrow use case to destroy the
performance of the common case.

Yes, I agree. Bloating the size of all values is not a good solution. I was wondering whether it would somehow be magically possible to make only the vector type bigger, without adding that overhead to all the other types?