- Subject: Re: Web API Library Cache Design Question
- From: Laurent Faillie <l_faillie@...>
- Date: Fri, 12 Apr 2013 11:58:55 +0200
On 12/04/2013 07:57, Choonster TheMage wrote:
> I'm working on a library that provides an easy interface to a REST API
> serving JSON. The smallest results are about 1,000 characters
> long and the largest are about 684,000 characters long.
Hum, funny, I'm also working on a REST application based on Lua:
it's a small GUI for a reporting engine, and the connection is done
through web services. This application is in beta but fully working,
at least for my needs; I still have to write the documentation,
including a comprehensive description on SF :)
Anyway, my REST code is here :
As my messages are only of modest size, I convert the JSON to LOM, so
everything is stored in memory. And I don't care about data protection,
as it's not a library but a final application.
> 1) Use recursive proxies instead of shallow ones, i.e. access to a
> nested table returns a proxy instead of the real table. This results
> in a small overhead for each table lookup.
Would be my preferred solution.
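A minimal sketch of such a recursive read-only proxy (the names here are my own, not from Choonster's library): nested tables are wrapped lazily on access via `__index`, and writes are rejected via `__newindex`, so the caller pays a small per-lookup cost but can never corrupt the cache.

```lua
-- Recursive read-only proxy: nested tables are wrapped on demand.
local function proxy(data)
  return setmetatable({}, {
    __index = function(_, key)
      local v = data[key]
      if type(v) == "table" then
        return proxy(v)  -- access to a nested table returns another proxy
      end
      return v
    end,
    __newindex = function()
      error("attempt to modify a cached result", 2)
    end,
  })
end

-- Usage: the caller reads through the proxy, never the cached table.
local cached = { id = 42, tags = { "lua", "json" } }
local view = proxy(cached)
print(view.tags[1])  --> lua
```

Note that each access to a nested table creates a fresh proxy table, which is the per-lookup overhead the original post mentions; the cached data itself is never copied.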
> 2) Deep copy the cached table and return the copy. This is simple, but
> it may take a long time to copy the table version of link 3. This
> results in a high initial overhead with raw table lookups for all
> nested tables instead of the overhead of __index lookups.
Are you copying inner tables as well? With messages of ~600k
characters, the memory footprint will be quite high, especially if the
end application is not well designed and makes lots of object copies or
passes those copies as function arguments. Definitely not my choice :)
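For reference, a naive deep copy looks like the sketch below (my own illustration, not code from the thread); it ignores cycles and metatables, which a real implementation would have to consider. Every nested table is duplicated up front, which is the high initial overhead the original post describes.

```lua
-- Naive deep copy: duplicates every nested table (and table keys).
-- Does not handle cycles or metatables -- a deliberate simplification.
local function deepcopy(t)
  if type(t) ~= "table" then return t end
  local copy = {}
  for k, v in pairs(t) do
    copy[deepcopy(k)] = deepcopy(v)
  end
  return copy
end

-- Usage: mutating the copy leaves the cached original untouched.
local cached = { id = 42, tags = { "lua", "json" } }
local result = deepcopy(cached)
result.tags[1] = "changed"
print(cached.tags[1])  --> lua
```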
> 3) Store the JSON string and decode it each time. This also has a high
> initial overhead with raw table lookups (I'm not sure how much
> overhead compared to the deep copy).
So you will redo the same job again and again and again. Not resources