On 27-Jan-05, at 11:54 PM, skaller wrote:

> It makes select() utterly useless. You might as well just
> attempt to write to each socket in turn (a nonblocking write,
> I mean), perhaps with a delay each pass if all writes fail and
> you want to politely yield the CPU to the OS.

It's an indication of which sockets you don't need to poll.
It doesn't have to work 100% of the time to be useful. If it
worked 10% of the time, it probably would be "utterly useless",
but if, as I suspect is the case, it works 95% of the time
or more, it is "fairly useful".

Neither the world nor computer systems are that black and white,
imho.

After a little googling, I'm starting to wonder how atomic
a write() is supposed to be. It is at least a question which
appears to arouse strong emotions, which I generally find
not particularly useful.

(http://www.uwsg.iu.edu/hypermail/linux/kernel/0208.0/0251.html
was amusing, for example.)

Leaving Linus' chainsaw aside (I hope), consider the normal use
cases for select() and friends (kqueue() is my favorite but I'm
not going to get religious about it).

First of all, you have the case where there are a few sockets,
possibly only one, and you're basically using the select() as
a way of not hogging the CPU. If select() gives you the
occasional false positive, you have succeeded in the goal. If
it consistently gives you immediate false positives, then you
have a problem.
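
Something like the following, as an untested sketch (POSIX C,
with the socket already set O_NONBLOCK); a false positive just
costs one extra trip around the loop:

#include <sys/select.h>
#include <unistd.h>
#include <errno.h>

/* Wait until fd looks writable, then attempt a nonblocking write.
 * If select() gave a false positive, write() fails with
 * EAGAIN/EWOULDBLOCK and we simply go around again. */
ssize_t write_when_ready(int fd, const void *buf, size_t len)
{
    for (;;) {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0) {
            if (errno == EINTR)
                continue;           /* interrupted; just wait again */
            return -1;
        }
        ssize_t n = write(fd, buf, len);
        if (n >= 0)
            return n;               /* done (possibly a short write) */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;              /* a real error */
        /* EAGAIN after "writable": a false positive; loop */
    }
}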

Then you have the case where you have a zillion sockets and
you don't have the time to poll them one at a time, given that
each poll is a system call with an expensive context switch
and all that. Again, if select() gives you an occasional false
positive, you end up wasting the occasional context switch.
If it consistently gives you a false positive on certain
sockets, but only a handful of them, you're still OK. Even
if it gives you a lot of false positives, you're still
better off than polling every single socket.
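
As a sketch of that case (the socks[] array and the
try_nonblocking_write() helper are invented for illustration;
all descriptors are assumed to be below FD_SETSIZE):

#include <sys/select.h>

void try_nonblocking_write(int fd);   /* hypothetical; defined elsewhere */

/* One system call scans the whole set; speculative writes go only
 * to the descriptors select() marked. A false positive on one of
 * them costs a single write() that returns EAGAIN. */
void write_ready_sockets(const int *socks, int nsockets)
{
    fd_set wfds;
    int i, maxfd = -1;
    FD_ZERO(&wfds);
    for (i = 0; i < nsockets; i++) {
        FD_SET(socks[i], &wfds);
        if (socks[i] > maxfd)
            maxfd = socks[i];
    }
    if (select(maxfd + 1, NULL, &wfds, NULL, NULL) > 0)
        for (i = 0; i < nsockets; i++)
            if (FD_ISSET(socks[i], &wfds))
                try_nonblocking_write(socks[i]);
}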

If select() ever fails to report a socket to be ready when
it is in fact ready, you have a serious problem in either case.
You could actually lose communication in that case. So if there
is any doubt, it ought to be resolved in the direction of issuing
false positives.

Undoubtedly, OSs ought to be perfect. I look forward to
such an implementation, but not with my respiration
contingent on it. In the meanwhile, I'm prepared to
settle for the lesser of two evils.

Now, it is entirely possible that a connection has simply
vanished into the ether, possibly because your pet iguana
chewed through an ethernet cable. That might mean that
a socket remains unready for a long time, possibly forever.
In the worst case, where write() really is atomic and the
send buffer has just enough room for select() to conclude
that you could write something but you actually wanted to
write more than that (atomically), you have the problem
where select() consistently immediately returns a false
positive on a particular socket. (At this point, I really
do not know what, if any, guarantees different OSs provide
on the atomicity of write()s, but it seems like a possible
scenario.)

In the first use case, above, that causes excess CPU
consumption; effectively a busy wait. If that's an
issue for a particular application, I guess you have
to be defensive about it, and possibly reconsider your
pet maintenance policy.
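
Being defensive might look something like this (again just a
sketch; the threshold and the delay are made-up numbers):

#include <sys/select.h>
#include <unistd.h>
#include <errno.h>

/* Like the earlier loop, but if select() keeps saying "writable"
 * while write() keeps returning EAGAIN, sleep briefly instead of
 * spinning. 3 strikes and 10ms are arbitrary illustration values. */
ssize_t write_with_backoff(int fd, const void *buf, size_t len)
{
    int strikes = 0;
    for (;;) {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(fd, &wfds);
        if (select(fd + 1, NULL, &wfds, NULL, NULL) < 0) {
            if (errno == EINTR)
                continue;
            return -1;
        }
        ssize_t n = write(fd, buf, len);
        if (n >= 0)
            return n;
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;
        if (++strikes >= 3)
            usleep(10000);          /* ~10ms: yield the CPU politely */
    }
}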

Rici.

PS: I'm sorry -- I didn't really mean to start off a
huge debate. I am perfectly prepared to believe that
"x is flawed" for any value of 'x', and that "Windows
is more flawed than y" for a large number of values
of 'y'.

R.