Hi,

replying to myself ... sometimes taking a shower is a good idea. :-)

Well, who needs multiple event containers anyway? This is only useful
if you have multiple native threads. And native threads do not work too
well with a shared Lua universe (500 lock + 500 unlock calls just for the
startup code ...). There is a partial solution (explicitly unlocking
only around blocking I/O calls), but this won't work here, since we
are using non-blocking calls exclusively ...

Considering this, it doesn't make sense to use more than one event container
per Lua universe. So we might as well create it when the "event" namespace
is loaded and store it in the registry. Then, whenever a socket call
needs to access the event container, it consults the registry of the
current Lua universe. Voilà ... no more passing of event containers
to socket calls.
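
Roughly like this (a minimal sketch against the Lua C API; the container
type, its constructor and the key are placeholders I made up):

    static const char event_regkey = 'k';  /* address acts as unique key */

    int luaopen_event(lua_State *L) {
      event_container *ec = event_container_new();  /* hypothetical ctor */
      lua_pushlightuserdata(L, (void *)&event_regkey);
      lua_pushlightuserdata(L, ec);
      lua_settable(L, LUA_REGISTRYINDEX);
      /* ... register the event.* functions ... */
      return 1;
    }

    static event_container *get_event_container(lua_State *L) {
      lua_pushlightuserdata(L, (void *)&event_regkey);
      lua_gettable(L, LUA_REGISTRYINDEX);
      event_container *ec = (event_container *)lua_touserdata(L, -1);
      lua_pop(L, 1);
      return ec;
    }

(In practice the container would probably be a full userdata with a __gc
metamethod, so it gets cleaned up when the universe closes.)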

This has some added advantages:

- Multiple containers still work with the one-universe-per-native-thread
  approach.

- Drop event.new() and move all ev:xxx() methods to event.xxx(). Passing
  the event container with the ':'-syntax becomes unnecessary. The calls
  could use an upvalue instead of the registry, for added speed (see the
  first sketch after this list).

- Applications and schedulers do not have to pass around the event
  container object because it is implicit in all calls.

- The socket module no longer needs to depend on the event module, because
  the userdata passed in the registry may contain a pointer to the event
  container plus two function pointers for registering and triggering an
  event (second sketch below). The overlapped I/O stuff is still necessary
  in wsocket.c, but the required changes would be localized.

- New extension modules can just use the well-known registry key to
  get the event container. E.g. a module for native Windows file I/O
  (ReadFileEx, WriteFileEx) would have a clean API to the event subsystem.
  Ditto for native POSIX file I/O (read(), write()).

- A socket can be put into event mode with sock:seteventmode(true). That
  would fetch the event container from the registry and store it right
  after the WSAOVERLAPPED structure in the socket userdata (third sketch
  below). The stored pointer serves both as the mode flag and as the way
  to reach the container from the completion handler. The event id can
  be stored the same way, to avoid passing an int inside the hEvent member.

- The event id no longer needs to be unique across multiple containers.
  This solves some implementation issues, because the id is a bitfield
  internally.
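
First sketch: registering the event.* functions with the container as a
shared upvalue (the function and its behavior are made up, purely for
illustration):

    static int event_wait(lua_State *L) {
      event_container *ec =
        (event_container *)lua_touserdata(L, lua_upvalueindex(1));
      /* ... block on ec, push the triggered event ids ... */
      return 1;
    }

    /* at load time, with the "event" namespace table on top: */
    lua_pushstring(L, "wait");
    lua_pushlightuserdata(L, ec);
    lua_pushcclosure(L, event_wait, 1);  /* one upvalue: the container */
    lua_settable(L, -3);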
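
Second sketch: the registry value as a small struct that gives the socket
module everything it needs without including any event headers (the field
names and signatures are assumptions, not a settled interface):

    typedef struct event_iface {
      void *container;  /* opaque to the socket module */
      int  (*register_event)(void *container, int fd, int mask);
      void (*trigger_event)(void *container, int id);
    } event_iface;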
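
Third sketch: a possible layout for the socket userdata in event mode
(only WSAOVERLAPPED is real; the surrounding fields are invented):

    typedef struct t_socket {
      SOCKET fd;
      WSAOVERLAPPED ov;  /* must stay valid while overlapped I/O is pending */
      event_iface *ei;   /* NULL = classic mode, non-NULL = event mode */
      int event_id;      /* kept here instead of inside the hEvent member */
      /* ... */
    } t_socket;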

I think this looks much cleaner now ... I'll update the docs unless someone
sees a flaw in this simplification.

(The issue about passing the event id back from the socket call is
 still open, though.)

BTW: I retract my comment about IOCP and threads. After rereading the docs
     I think I have a better understanding now. Basically it's a kernel
     queue for completion notifications that can be read from multiple
     threads (if required) and waited for via a single handle. I hope
     I got that right now?
     I think this is not too useful here, since it amounts to copying the
     kernel queue into the internal event queue of the event container.
     Triggering a virtual event directly puts it on the internal queue in
     just a few CPU cycles, which is likely to be more efficient.

Bye,
     Mike