Monday, April 3, 2006, 11:09:37 PM, Diego Nehab wrote:

>>> As for mixing with HelperThreads, it is completely orthogonal to what I
>>> am suggesting. Although LuaSocket is not completely thread-safe, it can
>>> be made so with just a little work.
>>
>> I see that HelperThreads can, e.g., take a Lua file handle (FILE *)
>> and use it in a non-blocking way using threads. This is something a
>> Windows IOCP library couldn't do.

> Right, but if a thread blocks on a call that chooses never to return,
> the thread is dead, right?

I have been assuming that for AIO there are no threads, only
coroutines. A major benefit of AIO is that, if the library is designed
properly, nothing waits, and no threads are needed. A coroutine
scheduler at the Lua level can handle everything.
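
To make that concrete, here is a minimal sketch of what I have in
mind. The 'aio' binding and its start_read function are invented for
illustration; the point is only that the blocking call is replaced by
a coroutine.yield():

local waiting = {}   -- pending-operation key -> suspended coroutine

local function aio_read(fd)
  local key = aio.start_read(fd)   -- hypothetical: begin an overlapped read
  waiting[key] = coroutine.running()
  return coroutine.yield()         -- resumed later with the data
end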

>> I note that the libevent API is based on (fd and timeval) callbacks,
>> whereas IOCP is based on polling (GetQueuedCompletionStatus). I.e.,
>> the dispatcher is built outside library in my IOCP approach, whereas
>> it is inside the library in libevent. Which approach do you prefer?

> Can't you use WaitForMultipleObjects with an associated condition or
> something like that instead of polling?

Not with IOCP. Nothing waits in an IOCP world except the single poll
call (it takes an optional timeout). Completions for all pending
operations are reported to the same queue. (It is possible to have one
queue to wait on, or several, but the most sensible design, IMHO, is a
single queue.) The scheduler polls the queue and resumes coroutines to
continue their work.
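
Continuing the sketch above, the scheduler loop could look something
like this. iocp.poll() is an invented stand-in for a binding to
GetQueuedCompletionStatus; assume it returns a key identifying the
completed operation plus its result, or nil on timeout:

local function scheduler()
  while true do
    local key, data = iocp.poll(100)   -- the only call that waits (100 ms timeout)
    if key then
      local co = waiting[key]
      waiting[key] = nil
      coroutine.resume(co, data)       -- hand the result to the waiting coroutine
    end
  end
end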

Typically, one instantiates a bunch of coroutines. Each coroutine uses
AIO calls to perform I/O. The coroutine is suspended while the I/O is
pending. The scheduler polls, and when it receives an I/O completion
it resumes the waiting coroutine. This way, each coroutine can be
written as a simple serial process. An accept() coroutine may spawn
other helper/client coroutines that operate similarly.
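
For example, an echo server in this style might look like the sketch
below, with aio_accept and aio_write as hypothetical wrappers built the
same way as aio_read above. Each handler reads and writes as if it were
blocking, but only ever yields:

local function client(conn)
  while true do
    local data = aio_read(conn)       -- suspends until the read completes
    if not data then break end
    aio_write(conn, data)             -- echo it back
  end
end

local function acceptor(server)
  while true do
    local conn = aio_accept(server)   -- suspends until a connection arrives
    coroutine.resume(coroutine.create(client), conn)
  end
end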

Unfortunately, this model loses (in a performance sense) when some
coroutine calls a blocking I/O function: the blocking call stalls the
single OS thread, and with it every other coroutine in the scheduler.

This model wins when all the coroutines are cooperative. No OS threads
are needed; memory consumption and context-switching time are much lower.

Regards,

e

-- 
Doug Currie
Londonderry, NH