- Subject: Re: [ANNOUNCE] luahttpd
- From: Javier Guerra <javier@...>
- Date: Wed, 12 Jan 2005 07:06:33 -0500
On Tuesday 11 January 2005 10:37 pm, Joseph Stewart wrote:
> here's a sketch of mine:
>
> master = socket.bind("*",PORT)
> read_list = {master}
> write_list = {}
> thread_list = {}
> while true do
> readable, writeable, err = socket.select(read_list, write_list, TIMEOUT)
> if err == "timeout" then
> -- do periodic tasks here such as checking to see
> -- if a coroutine needs to be resumed and
> -- discarding closed sockets
> end
> for _, which in ipairs(readable) do
> if which == master then
> client = master:accept()
> table.insert(read_list, client)
> thread_list[client] = create_thread(client)
> else
> coroutine.resume(thread_list[which])
> end
> end
> end
it's similar to what i tried before using threads, but i kept stumbling over
trying to read and write without blocking. in the end, i decided to use
threads.
quoting something i read somewhere:
a) threads are for people that can't program state machines
b) computers are state machines
that's the great thing about coroutines: they make the state implicit in the
context, instead of asking you to manage it (the coroutine-based iterator
still makes me drool...). unfortunately, managing two channels (read _and_
write) while trying to avoid blocking (even short blocks, with very small
timeouts) seems to be too much for my attention span.
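to illustrate that "implicit state" point, here's a minimal coroutine-based iterator (a generic sketch, not code from luahttpd): the loop counter lives on the coroutine's own stack, so there's no explicit state table to manage between calls.

```lua
-- a coroutine-based iterator: the position is kept implicitly in the
-- coroutine's stack, instead of in an explicit state/control variable
local function elements(t)
  return coroutine.wrap(function()
    for i = 1, #t do           -- the loop variable *is* the state
      coroutine.yield(i, t[i]) -- hand one pair back to the for loop
    end
  end)
end

for i, v in elements({"a", "b", "c"}) do
  print(i, v)                  -- prints 1 a, 2 b, 3 c
end
```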
> > also, even if i think most of the queries could be answered quickly, some
> > of them could try to use other blocking libraries (SQL?). it wouldn't be
> > nice to make all queries wait on one.
>
> threading/forking is probably the safer, accepted way to do things...
> i'm looking for a lighter-weight way of doing things, though...
me too, that's why i tried coroutines first. as i said before, it might be
interesting to use a coroutine-based scheduler with the option of creating
threads when needed.
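a coroutine-based scheduler like the one mentioned above can be very small. this is just a round-robin sketch (the names `spawn` and `run` are made up for illustration); the "create real threads when needed" part would hook in where a task is known to block.

```lua
-- a minimal round-robin coroutine scheduler (illustrative sketch)
local tasks = {}

local function spawn(fn)
  table.insert(tasks, coroutine.create(fn))
end

local function run()
  while #tasks > 0 do
    local next_round = {}
    for _, co in ipairs(tasks) do
      local ok, err = coroutine.resume(co)
      if not ok then
        print("task error: " .. tostring(err))
      elseif coroutine.status(co) ~= "dead" then
        table.insert(next_round, co)   -- still alive: keep for next pass
      end
    end
    tasks = next_round
  end
end

spawn(function() print("a1"); coroutine.yield(); print("a2") end)
spawn(function() print("b1") end)
run()   -- prints a1, b1, a2
```

tasks cooperate by calling coroutine.yield() whenever they would otherwise block, which is exactly where a thread-backed task could be substituted.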
another solution (commonly seen on cross-platform porting libraries) would be
to use two threads: one that manages the sockets, and can block for short
times, and another with the coroutine scheduler that can't (or shouldn't)
block.
still, with the big optimizations to threads in modern kernels, the
performance difference between coroutines and threads isn't so bad. also,
my server uses the 'keep-alive' option of http/1.1 to keep a thread with a
connection for as long as possible. the downside of that is that the
response has to report its data size in the header. that's why i give the
option of just setting a variable with the data instead of writing directly
to the socket.
in other words, a handler has three options to build its response:
a) set res.content to the output data, as a string
b) set res.content to the output data, as an array of strings
c) call http_send_res_data()
with the first two options, after the handler returns, a final routine will
calculate the data length and add it to the headers. that way, the socket
doesn't need to be closed to mark the end of the data.
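that finishing step could look roughly like this (a sketch only: the `res.content` field matches the description above, but `finish_response` and the `res.headers` table are assumed names, not luahttpd's actual API):

```lua
-- sketch: compute Content-Length after the handler returns, whether
-- res.content was set as one string or as an array of strings
local function finish_response(res)
  local body
  if type(res.content) == "table" then
    body = table.concat(res.content)  -- option b: array of strings
  else
    body = res.content or ""          -- option a: a single string
  end
  res.headers = res.headers or {}
  res.headers["Content-Length"] = tostring(string.len(body))
  return body
end
```

with the length known up front, the server can leave the keep-alive connection open instead of signalling end-of-data by closing the socket.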
> > i'm not fond of sandboxes... and it shows in my code, it's still too easy
> > to break the whole thing.
>
> i can understand how forking is safer, but are threads really any safer?
of course not, that's what i mean: my code is still fragile, because i don't
use any kind of sandbox and the error handling is still too sparse.
--
Javier