- Subject: Re: Squeezing more coroutines ?
- From: mpb <mpb.spam@...>
- Date: Fri, 1 Jun 2007 23:52:57 -0700
On 5/30/07, Bogdan Harjoc <email@example.com> wrote:
> While writing a performance testing tool, I chose Lua to describe a set
> of tests that run in parallel.
>
> So far things are looking great, as far as both available CPU and memory
> are concerned: after increasing LUAI_MAXCSTACK, I've been testing with
> concurrent and mostly idle threads created with lua_newthread(). Memory
> usage for these is about 350 MB, i.e. a little over 1 KB per created thread.
>
> After a glance at the lua_State definition, it looks like there are a
> couple of fields that could be left out, but nothing that would
> noticeably reduce memory usage.
>
> Has anyone managed to create something on the order of 1,000,000
> coroutines? If so, any tips on what to leave out of lua_State (or other
> places)?
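The per-coroutine overhead described above can be estimated from plain Lua, without going through the C API. This is only a rough sketch, not the original tool: the count N, the yield-only body, and the byte arithmetic are illustrative assumptions, and the measured figure varies with Lua version and platform.

```lua
-- Rough sketch: estimate per-coroutine memory overhead.
-- N is an arbitrary illustrative count, not the poster's setup.
local N = 100000

collectgarbage("collect")
local before = collectgarbage("count")  -- heap in use, in KB

-- Keep every coroutine referenced so the GC cannot reclaim it.
local threads = {}
for i = 1, N do
  threads[i] = coroutine.create(function() coroutine.yield() end)
end

collectgarbage("collect")
local after = collectgarbage("count")

print(string.format("approx %.0f bytes per coroutine",
                    (after - before) * 1024 / N))
```

Each coroutine.create allocates a fresh lua_State plus its initial stack, so the printed figure should land in the same ballpark as the "a little over 1 KB per created thread" number quoted above.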
I'm curious - have you considered using Stackless Python?
I've never used Stackless, but I've heard that Stackless threads
(which seem to be called Tasklets) are "just a few bytes each".