lua-users home
lua-l archive



Maybe you can cache the nested proxy tables as well; just use a single weak table to store all your proxy tables.
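A minimal sketch of that idea, assuming the goal is read-only access (the names are illustrative, not an existing API):

```lua
-- One weak table caches every proxy, so each real table maps to a single
-- proxy object and entries can be collected when the real table goes away.
local cache = setmetatable({}, { __mode = "k" })

local function proxy(t)
  if type(t) ~= "table" then return t end        -- non-tables pass through
  local p = cache[t]
  if not p then
    p = setmetatable({}, {
      __index = function(_, k) return proxy(t[k]) end,  -- wrap nested tables on access
      __newindex = function() error("attempt to modify a read-only table", 2) end,
    })
    cache[t] = p
  end
  return p
end

-- Nested access returns the same cached proxy each time:
local data = proxy{ a = { b = 42 } }
print(data.a.b)          --> 42
print(data.a == data.a)  --> true (both lookups hit the cache)
```

Because the cache is shared, repeated lookups of the same nested table don't allocate a new proxy every time, which was the main cost of the plain recursive-proxy approach.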

On Fri, Apr 12, 2013 at 6:20 PM, "Choonster TheMage" <choonster.2010@gmail.com> wrote:
On Fri, Apr 12, 2013 at 7:58 PM, Laurent Faillie <l_faillie@yahoo.com> wrote:
> As my messages are only of modest size, I convert JSON to LOM, so
> everything is stored in memory. And I don't care about data protection,
> as it's not a library but a final application.
>> 1) Use recursive proxies instead of shallow ones, i.e. access to a
>> nested table returns a proxy instead of the real table. This results
>> in a small overhead for each table lookup.
> Would be my preferred solution.
>> 2) Deep copy the cached table and return the copy. This is simple, but
>> it may take a long time to copy the table version of link 3. This
>> results in a high initial overhead with raw table lookups for all
>> nested tables instead of the overhead of __index lookups.
> Are you copying the inner tables as well? With 600k messages, the memory
> footprint will be quite high, especially if the end application is not
> well designed and makes lots of object copies or passes those copies as
> function arguments. Definitely not my choice :)
>
>> 3) Store the JSON string and decode it each time. This also has a high
>> initial overhead with raw table lookups (I'm not sure how much
>> overhead compared to the deep copy).
> So you would redo the same job again and again and again. Not
> resource-efficient.

Thanks for the response Laurent. I'll probably go with recursive
proxies and see how that works out.
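For comparison, the deep-copy approach (option 2) that was ruled out could be sketched as below; every nested table gets duplicated, which is where the memory cost Laurent mentions comes from (`deepcopy` is an illustrative name, not an existing API):

```lua
-- Naive deep copy: duplicates every nested table once; the `seen` map
-- preserves shared subtables and guards against cycles.
local function deepcopy(t, seen)
  if type(t) ~= "table" then return t end
  seen = seen or {}
  if seen[t] then return seen[t] end
  local copy = {}
  seen[t] = copy
  for k, v in pairs(t) do
    copy[k] = deepcopy(v, seen)
  end
  return copy
end

local original = { a = { b = 42 } }
local c = deepcopy(original)
c.a.b = 1                 -- mutating the copy...
print(original.a.b)       --> 42 (the cached original is untouched)
```

The copy is fully independent of the original, so lookups afterwards are raw table accesses with no `__index` overhead, at the price of roughly doubling the memory for each message handed out.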