- Subject: Re: optimisation question
- From: Max Ischenko <max@...>
- Date: Fri, 4 Jul 2003 15:12:33 +0300
Philippe Lhoste wrote:
> > > > Both run at the same speed, and profiling tells that most of the time
> > > > is spent in a [for] loop.
> >
> > As it turns out, my profiling was a bit wrong.
> > But still, the results are rather close.
> Out of curiosity, how do you do your profiling? I use the classical
> time spent between the start and the end of a code fragment, but
> running on Win98, results are not consistent from one call to the
> next, sometimes even inverting results. I choose the most frequent
> result... Alas, on Windows, we don't have access to the CPU time of a
> process.
function timeit(title, n, f)
    collectgarbage()
    collectgarbage(1000000) -- set a BIG threshold to avoid gc inside the loop
    local started = os.clock()
    for i = 1, n do f(i) end
    local finished = os.clock()
    local total = finished - started
    printf(title, n, total) -- printf is a small string.format + print helper
end
I run it with n = 10000.
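For illustration, here is roughly what a complete run might look like. The
printf helper and the two candidate functions below are placeholders for
the actual code under test in this thread, not taken from it:

-- assumed helper: format and print (the post above relies on an existing
-- printf; this two-argument version is only for illustration)
function printf(fmt, a, b)
    io.write(string.format(fmt, a, b))
end

-- placeholder candidates standing in for the real code being measured
local function with_concat(i)
    return "item " .. i .. " of " .. (i + 5)        -- string concatenation (..)
end

local function with_format(i)
    return string.format("item %d of %d", i, i + 5) -- string.format
end

timeit("concat version: %d calls took %.3f s\n", 10000, with_concat)
timeit("format version: %d calls took %.3f s\n", 10000, with_format)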
> >
> > On my data (the most typical range is about 6 elements) it is a little
> > SLOWER than the string.format or concat version: 0.18 vs 0.16 for 1000
> > calls on a range with 6 elements, and 0.72 vs 0.62 on a range with 100
> > elements.
> Slower than the concat version? It *is* the concat version, I just
> avoid the call to table.insert, and unnecessary computations. I am
> puzzled.
Uh, sorry. By "concat" version I mean string concatenation (with ..) not
table.concat. ;) Sorry for confusion.
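To make the distinction concrete (an illustrative snippet only, not the
code being measured):

local parts = { "a", "b", "c" }

-- "concat" in the sense above: build the string with the .. operator
local by_operator = parts[1] .. ", " .. parts[2] .. ", " .. parts[3]

-- table.concat: collect the pieces in a table and join them at the end
local by_table = table.concat(parts, ", ")

assert(by_operator == by_table)  -- both yield "a, b, c"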
--
Regards, max.