There has been a fair bit of discussion lately about async socket I/O and coroutines. In an attempt to clarify the issues (and my thoughts), here is a rough summary, with comments.

Note: for the purpose of this discussion, real-time (<n ms) response is not required. This avoids issues such as interrupting the GC, etc.

1. Non-blocking versions of blocking API/OS calls.
- Difficult, OS-specific, and requires background thread(s), with all the design issues involved (pooling, thread-per-request, etc.). Ignored for now.

2. Non-blocking file I/O.
- Not too bad if supported by the OS, but still very OS-specific.

3. Event-driven callbacks.
See Mike Pall's event API documentation <http://lua-users.org/files/wiki_insecure/users/MikePall/event.html>
for a more detailed discussion.

  There are two fundamental design choices here:
 - do the callbacks share a common Lua universe?
 - is the main processing loop in Lua or C?

The simplest (and fastest?) model here would be to have the main loop in C(++), using OS threading and events to run each Lua callback in its own Lua state (VM). We found this model perfect for a specific large commercial task. NB: the ability to clone a Lua state would be lovely in this case, as we could load all libraries and perform initialisation of one "master" state, which would then be cloned for each child thread.

If we want the main loop to be in Lua, we need an efficient way for Lua to sleep between events. We can use select() for socket I/O, but other events, signals, etc. become OS-specific. A single shared VM also increases granularity (response time): events can only be serviced between Lua instructions. This will suffice for most applications. If we want the event handlers to be interruptible, we need to check for events between Lua instructions and yield to the scheduler as required. This can be simulated using the debug line hook, but that imposes a fair amount of overhead; the event test should really be in the core Lua VM. Otherwise the callback handlers need to be written with multiplexing in mind, and yield regularly to the scheduler.
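
As an illustration, the Lua-side loop might be sketched roughly as below, assuming luasocket is available for select(); the names (waiting, wait_for_read, run) are mine and purely illustrative, not from any existing library:

-- scheduler sketch (illustrative only) -----
local socket = require("socket")

local waiting = {}   -- map: socket -> coroutine blocked waiting to read it

-- called by an io wrapper when a read would block
function wait_for_read(sock)
  waiting[sock] = coroutine.running()
  coroutine.yield()
end

-- main loop: sleep in select() until a socket is readable,
-- then resume the coroutine that was waiting on it
function run()
  while next(waiting) do
    local recvt = {}
    for sock in pairs(waiting) do table.insert(recvt, sock) end
    local readable = socket.select(recvt, nil)   -- blocks until an event
    for _, sock in ipairs(readable) do
      local co = waiting[sock]
      waiting[sock] = nil
      coroutine.resume(co)
    end
  end
end
-- end scheduler sketch ---------------------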


4. Multiplexed socket/file I/O.
- See #2 re file I/O.

A simple way to handle socket multiplexing is to have a main scheduler based around select(), and to override the read() and write() functions to yield back to the scheduler if they would block. This allows existing blocking socket code to be re-used unchanged.
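
For instance, a yielding line-read wrapper over a plain luasocket object might look roughly like this (wait_for_read is the illustrative scheduler hook from the sketch above; note that real luasocket calls the method receive(), not read()):

-- yielding read sketch (illustrative only) --
local function readline(sock)
  sock:settimeout(0)            -- make receive() return instead of blocking
  local buffer = ""
  while true do
    local line, err, partial = sock:receive("*l")
    if line then return buffer .. line end
    if err ~= "timeout" then return nil, err end
    buffer = buffer .. (partial or "")   -- keep whatever has arrived so far
    wait_for_read(sock)                  -- yield until select() reports data
  end
end
-- end yielding read sketch ------------------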

I have built a library which emulates/wraps the luasocket interface using this approach; I can thus re-use Diego's HTTP libraries (thanks Diego). The lib was first developed for an HTTP proxy prototype, and is fine for light-duty web serving / prototyping. I have tested it with 50 simultaneous HTTP requests on Windows without a problem; it only runs into winsock issues when I try too many requests.

I found it a very nice model for simple multiplexing:

-- sample code ----------------------------
-- a, b are connected sockets
-- this fn is called from a coroutine
function simplecopy(a, b)
  while true do
    local s, err = a:read("*l")         -- yields if insufficient data
    if err then return nil, err end
    if not s then return true end
    local ret, err = b:write(s .. "\n") -- yields until done
    if err then return nil, err end
  end
end -- sample ------------------------------

The main drawback with this approach is response time: the scheduler is only invoked when a coroutine explicitly yields, or calls one of the yielding I/O functions.
It is also difficult to fully exploit native async I/O (IOCP on Windows).

ps. A wonderful side-effect of using coroutines is the error behaviour - you effectively get a free "try", and errors can be propagated / handled in a very flexible manner.
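
For instance, an error raised inside a handler coroutine does not unwind the scheduler; coroutine.resume() simply returns it, and the caller can log it, retry, or re-raise as it sees fit:

-- error handling sketch (illustrative only) --
local co = coroutine.create(function()
  error("connection reset")       -- any error inside the handler...
end)
local ok, err = coroutine.resume(co)
print(ok, err)  -- false, plus the error message with its source position
-- end error handling sketch ------------------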

pps. For high-performance web serving, Apache + FastCGI Lua?

ppps. These comments represent my limited problem set; actual mileage may vary.

Adrian