On Saturday, May 16, 2015, KHMan wrote:
On 5/16/2015 3:41 PM, zejian ju wrote:
2015-05-16 11:50 GMT+08:00 KHMan:
He wishes the app were running at a sustained 4
instructions/clock/core... :-P

Or, he hopes someone else would do the hard work of making a
cache-aware GC...

[intended as good-natured ribbing; let's keep things slightly
loose... :-p]
Well... I take back my words about the "not-so-great Lua GC".
Lua's GC is great. I have read all the GC code of Lua 5.1/5.2/5.3.
Its authors put great effort into improving it, and I like the
elegant implementation. What I mean is that it could be made
faster for my server application.
I'm not totally sold on the idea that an awesome GC is the
best solution. The node.js ecosystem works because of V8's (is
it still V8?) performance.

Here, interpreted Lua is still interpreted, and if your dataset
is big (for example, the backend of an online multiuser app),
you will still touch plenty of data and churn the CPU cache.
Now, if you write a test script whose workload is not too
different from your server load, then time GC-running and
no-GC-running versions, we'd have an idea of what kind of
difference to expect. I love data (the best thing to back up
any claims), and some timing data would be lovely here. But
remember, you can't get anywhere close to JIT performance with
an interpreter...
[snip]
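
On the timing-data point: a rough harness like the sketch below
could be a starting place. (workload() here is just a stand-in;
substitute something shaped like the real server load.)

  -- Time the same workload with the collector running vs. stopped.
  -- workload() is a placeholder for something resembling real load.
  local function workload()
    local t = {}
    for i = 1, 1000000 do
      t[i % 1000 + 1] = { i, tostring(i) }  -- churn tables and strings
    end
    return t
  end

  local function timed(label, stop_gc)
    collectgarbage("collect")               -- start from a clean heap
    if stop_gc then collectgarbage("stop") end
    local t0 = os.clock()
    workload()
    local dt = os.clock() - t0
    if stop_gc then collectgarbage("restart") end
    print(string.format("%-8s %.3f s", label, dt))
  end

  timed("GC on", false)
  timed("GC off", true)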
Is it possible that the issues people experience with the GC
might be more accurately framed as integration issues, rather
than more generically? Integration might mean collecting
userdata that points to large blocks, dealing with
reference-counted objects and errors, etc. I'm giving
sub-optimal examples, but it does seem like GC issues are either
nebulous, not really there, or are there but out of Lua's hands.
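
As a concrete illustration of the userdata case (a sketch; the
file name is made up): the collector only sees a tiny Lua object,
not whatever big block or scarce handle it owns, so GC pressure
stays low while the real cost piles up. In Lua 5.2+ __gc works on
tables too:

  -- A small wrapper owning an "expensive" external resource (a file
  -- handle stands in for a large C-side block). The GC sees only the
  -- few bytes of the wrapper, so it feels no urgency to collect it.
  local function open_resource(path)
    local res = { handle = assert(io.open(path, "rb")) }
    return setmetatable(res, {
      __gc = function(self)                 -- last-resort finalizer
        if self.handle then self.handle:close() end
      end,
    })
  end

  -- The usual workaround: release eagerly, don't wait for the GC.
  local r = open_resource("some_big_file.bin")
  -- ... use r.handle ...
  r.handle:close()
  r.handle = nil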
Maybe the GC is fine as is, but could be improved with hinting
or some other mechanism?
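
Lua does already expose a few such hints through collectgarbage().
A sketch of using them in a server-ish loop (poll_events() is a
made-up stub for real I/O polling):

  -- Tune the incremental collector (both defaults are 200 in
  -- Lua 5.1-5.3), then drive small GC steps during idle slack.
  collectgarbage("setpause", 150)     -- begin new cycles sooner
  collectgarbage("setstepmul", 300)   -- do more work per step

  local function poll_events() return nil end  -- stub for real I/O

  for _ = 1, 1000 do          -- bounded here; a server would loop forever
    local ev = poll_events()
    if ev then
      -- handle the event ...
    else
      collectgarbage("step", 5)       -- a small, bounded slice of GC work
    end
  end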