- Subject: Re: Repeated processing of large datasets
- From: Geoff Leyland <geoff_leyland@...>
- Date: Wed, 29 Mar 2017 12:04:03 +1300
> On 29/03/2017, at 12:19 AM, John Logsdon <j.logsdon@quantex-research.com> wrote:
>
> for thisLine = 1,#AllLinez do
> V1,V2,V3 = unpack(AllLinez[thisLine])
> -- ... and then the data are processed
> --
> end
I'm sure you know that V1, V2 and V3 should be local variables.
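To be explicit, something along these lines, assuming the rows stay as three-element tables exactly as in your quoted loop:

    for thisLine = 1, #AllLinez do
      -- local keeps V1, V2, V3 out of the global environment,
      -- which is both cleaner and faster under LuaJIT
      local V1, V2, V3 = unpack(AllLinez[thisLine])
      -- ... and then the data are processed
    end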
> Processing involves a very large number of repeated optimisation steps so
> it is important that the data are handled as efficiently as possible. I
> am using luajit of course.
>
> My question is whether this is an efficient way to process the data or
> would it be better to use a database such as SQLITE3?
If Lua can hold all your data in memory, and you're just iterating through the rows in your processing rather than performing SQL-like queries to find subsets of rows, then I don't see why LuaJIT shouldn't be pretty quick.
If your data is all numeric, then you might do well out of moving it to cdata rather than a Lua table?
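A rough sketch of what that could look like with the LuaJIT FFI, assuming each row is three doubles (the struct and field names are just made up for illustration, and I'm copying out of your existing AllLinez table once up front):

    local ffi = require("ffi")

    ffi.cdef[[
      typedef struct { double v1, v2, v3; } row_t;
    ]]

    -- copy the Lua table of tables into one flat C array of structs
    local n = #AllLinez
    local rows = ffi.new("row_t[?]", n)
    for i = 1, n do
      local r = AllLinez[i]
      rows[i-1].v1, rows[i-1].v2, rows[i-1].v3 = r[1], r[2], r[3]
    end

    -- each optimisation pass then iterates over the cdata array (0-based)
    for i = 0, n - 1 do
      local v1, v2, v3 = rows[i].v1, rows[i].v2, rows[i].v3
      -- ... processing step ...
    end

The cdata array is one contiguous allocation rather than #AllLinez separate tables, so it's easier on the GC and the JIT compiles the field accesses to plain loads.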