- Subject: Re: matrix operations and temporary objects?
- From: Leo Razoumov <slonik.az@...>
- Date: Tue, 4 Aug 2009 08:29:35 -0400
On 8/4/09, Luiz Henrique de Figueiredo <lhf@tecgraf.puc-rio.br> wrote:
> > Is it possible to do something in Lua and somehow control use of
> > temporary objects when dealing with typical +-*/ math operations in Lua??
>
>
> It is not possible to know directly when Lua is creating a temporary object,
> though you could do something along the lines of the "Using fallbacks" section
> in the SPE paper: http://www.lua.org/spe.html (I have the equivalent code for
> Lua 5.1 somewhere).
>
> Another alternative is to provide "begin_computation" and "end_computation"
> functions that will store all objects created between these calls in an
> internal C stack that can be freed or recycled in one go.
>
>
> for n=1,1000 do
>   begin_computation()
>   m = m*m1 + m2
>   end_computation() -- must not free m, i.e., the last object created
> end
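The quoted suggestion could look roughly like the following. This is a minimal C sketch under assumed names (`Matrix`, `mat_new`, `MAX_TEMPS` are all hypothetical, not from any real binding): every object allocated after `begin_computation()` is pushed on an internal stack, and `end_computation()` frees everything except the last object created, which is the result of the expression.

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_TEMPS 64

typedef struct { double *data; int rows, cols; } Matrix;

/* Internal stack tracking objects allocated during a computation. */
static Matrix *temp_stack[MAX_TEMPS];
static int temp_top = 0;

static Matrix *mat_new(int rows, int cols) {
    Matrix *m = malloc(sizeof *m);
    m->rows = rows; m->cols = cols;
    m->data = calloc((size_t)rows * cols, sizeof *m->data);
    temp_stack[temp_top++] = m;   /* record as a temporary */
    return m;
}

static void begin_computation(void) { temp_top = 0; }

/* Free all temporaries except the last object created, which is the
   final result of the expression and must survive. */
static void end_computation(void) {
    for (int i = 0; i < temp_top - 1; i++) {
        free(temp_stack[i]->data);
        free(temp_stack[i]);
    }
    if (temp_top > 0) temp_stack[0] = temp_stack[temp_top - 1];
    temp_top = 0;   /* the survivor now belongs to the caller */
}
```

In a real binding `mat_new` would be called from the `__add`/`__mul` metamethods, so the tracking is invisible to the Lua code.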
This is what I have been doing recently. I have even been experimenting with matrix "factory" allocators that reuse temporary matrices of compatible sizes to avoid malloc/free overhead: when a matrix is deallocated, it returns to the factory for reuse.
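Such a factory can be sketched as a free list keyed by matrix size (a hypothetical illustration, not the code I actually use): `factory_release` returns a matrix to the pool instead of freeing it, and `factory_acquire` hands back a pooled matrix of matching dimensions when one is available.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct PoolMatrix {
    double *data;
    int rows, cols;
    struct PoolMatrix *next;   /* link in the factory's free list */
} PoolMatrix;

static PoolMatrix *free_list = NULL;

static PoolMatrix *factory_acquire(int rows, int cols) {
    /* Reuse a pooled matrix of identical size if one exists. */
    for (PoolMatrix **p = &free_list; *p; p = &(*p)->next) {
        if ((*p)->rows == rows && (*p)->cols == cols) {
            PoolMatrix *m = *p;
            *p = m->next;      /* unlink from the pool */
            return m;
        }
    }
    /* Nothing compatible pooled: fall back to malloc. */
    PoolMatrix *m = malloc(sizeof *m);
    m->rows = rows; m->cols = cols;
    m->data = malloc((size_t)rows * cols * sizeof *m->data);
    return m;
}

static void factory_release(PoolMatrix *m) {
    m->next = free_list;       /* back to the factory for reuse */
    free_list = m;
}
```

"Compatible" is taken here to mean identical dimensions; a smarter factory might match on total element count instead.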
But this approach has a serious problem, illustrated below:
for n=1,1000 do
  begin_computation()
  m  = m*m1 + m2
  m1 = m*m2 + m1
  end_computation() -- must not free the new m and m1
end
How could I tell end_computation() that the objects referenced by m and m1 have to be preserved? It is difficult and error-prone to predict the location of the object "m" on the C stack that is tracking new object allocations.
Even though it looks ugly, I would rather do this:
for n=1,1000 do
  begin_computation()        -- start tracking newly allocated objects
  m  = compute(m*m1 + m2)    -- compute and clean up
  m1 = compute(m*m2 + m1)    -- compute and clean up
  end_computation()
end
function compute(x)
  end_computation()   -- clean up all but the last allocated object
  begin_computation() -- re-arm
  return x
end
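On the C side, compute() could be a single checkpoint that flushes every tracked temporary except its argument and then re-arms the tracker. A self-contained sketch under the same assumed names as before (`Matrix`, `track_new`, `MAX_TEMPS` are hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_TEMPS 64

typedef struct { double *data; int rows, cols; } Matrix;

static Matrix *temps[MAX_TEMPS];
static int ntemps = 0;

static Matrix *track_new(int rows, int cols) {
    Matrix *m = malloc(sizeof *m);
    m->rows = rows; m->cols = cols;
    m->data = calloc((size_t)rows * cols, sizeof *m->data);
    temps[ntemps++] = m;
    return m;
}

/* compute(x): free every tracked temporary except x, then restart
   tracking. Returning x lets it wrap an expression in Lua:
       m = compute(m*m1 + m2)   */
static Matrix *compute(Matrix *x) {
    for (int i = 0; i < ntemps; i++) {
        if (temps[i] != x) {
            free(temps[i]->data);
            free(temps[i]);
        }
    }
    ntemps = 0;   /* re-arm; x now belongs to the caller */
    return x;
}
```

Comparing against the argument pointer rather than assuming "last allocated" also stays correct if an operator happens to allocate a scratch object after the result.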
I hope that, combined with factory allocators that reuse objects, the overhead can be kept acceptable.
--Leo--