- Subject: Re: Repeated processing of large datasets
- From: Francisco Olarte <folarte@...>
- Date: Tue, 28 Mar 2017 13:53:18 +0200
On Tue, Mar 28, 2017 at 1:19 PM, John Logsdon
<j.logsdon@quantex-research.com> wrote:
( Swapped order )
> My question is whether this is an efficient way to process the data or
> would it be better to use a database such as SQLITE3?
It depends on your concrete data and processing, but I'd like to point out a couple of things:
> Then in the main program I read each line at a time:
>
> local Linez=readValues(tickStream)
> while Linez ~= nil and #AllLinez < maxLines do
>     table.insert(AllLinez,Linez)
>     Linez=readValues(tickStream)
> end
I'm not sure how efficient this is in LuaJIT, but IIRC in Lua # is not
constant time on tables, and table.insert uses # to find the default
insert position, so if I were worried about speed I would normally do:
local nlines = 0
while nlines < maxLines do
    local Linez = readValues(tickStream)
    if Linez == nil then break end
    nlines = nlines + 1
    AllLinez[nlines] = Linez
end
-- Probably stash nlines in AllLinez.n for easier passing around...
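To put numbers on that, a quick self-contained sketch like the
following should show the difference ( the dummy line() generator is
just a stand-in for readValues(tickStream), since I don't have your
data ):

local N = 1000000
local function line(i) return { i, 2 * i } end -- dummy row

local t0 = os.clock()
local a = {}
for i = 1, N do table.insert(a, line(i)) end
print(("table.insert : %.3fs"):format(os.clock() - t0))

t0 = os.clock()
local b, n = {}, 0
for i = 1, N do
    n = n + 1
    b[n] = line(i)
end
b.n = n -- stash the count for easier passing around
print(("counter index: %.3fs"):format(os.clock() - t0))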
> The processing is then a matter of looping over AllLinez:
> for thisLine = 1,#AllLinez do
And here I would use nlines instead of #AllLinez, although I think an
ipairs loop may be faster ( just time it ).
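For that timing, something like this would do ( again with a dummy
AllLinez, and a no-op process() standing in for your real per-line
work ):

local AllLinez, n = {}, 0
for i = 1, 1000000 do
    n = n + 1
    AllLinez[n] = { i }
end
AllLinez.n = n

local function process(l) end -- stand-in for the real per-line work

local t0 = os.clock()
for i = 1, AllLinez.n do process(AllLinez[i]) end
print(("numeric for: %.3fs"):format(os.clock() - t0))

t0 = os.clock()
for _, l in ipairs(AllLinez) do process(l) end
print(("ipairs     : %.3fs"):format(os.clock() - t0))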
Francisco Olarte.