lua-l archive


On Mon, Sep 28, 2009 at 1:10 PM, Petite Abeille
<> wrote:
> Always been under the impression that HTTP is meant to be proxy'ed. That
> apparently some common web servers make it difficult is, well,
> unfortunate... for those web servers that is :)

HTTP wasn't meant to be proxied at all. Tim Berners-Lee created HTTP
to serve static documents and provide simple interactive services. And
the main problems with proxying are that (a) it takes more system
resources to run multiple servers (especially if one is written in an
interpreted language like Lua [although it isn't as bad as a Ruby
server in terms of resources]), and (b) it turns one HTTP request into
two, which is much more overhead than communicating over FastCGI or
running a script inside the server.

> Very similar to the basic concept of stdin/stdout in Unix. Not every tool
> needs to implement everything. Instead, one can pipeline processing from
> one to the other. The lingua franca being HTTP.

Though I admire the Unix philosophy, you are misapplying it here. In
Unix, the tools have highly non-overlapping *functionality* - 'grep'
filters lines that match a pattern, 'cut' selects different parts of a
line. "Do one thing and do it well" works here because the interface
between 'grep' and 'cut' is very simple - plain text only, in a
sequential, ordered stream - and their functionality differs enough
that splitting them into separate tools is practical.
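
That pipeline idea can be sketched from Lua (a sketch assuming a POSIX
shell with 'grep' and 'cut' available, via io.popen):

```lua
-- Each tool does one narrow job; the only interface between them is
-- an ordered stream of text lines.
local f = assert(io.popen(
  "printf 'a:1\\nb:2\\nab:3\\n' | grep a | cut -d: -f2"))
for line in f:lines() do
  print(line)  -- prints 1, then 3
end
f:close()
```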

But with HTTP, there are *many* overlapping things every server has to
do just to parse a request - handling encodings correctly, validating
headers, etc. In addition, the servers' functions are similar enough -
'respond to a user's request for a Web page' - that this kind of code
is not worth repeating across various different programs. Also,
programs like 'grep' and 'cut' spawn, do their job, and exit quickly,
in settings that are mostly not time-critical (though I still think
writing initscripts in sh is a mistake). But for a reasonably big Web
site, your arrangement would have several different server processes
all holding sockets, files, RAM, and other resources for long periods
of time, doing the same jobs multiple times per request.
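
To make the duplication concrete, here is a sketch (not any particular
server's code) of the sort of request parsing every HTTP server ends
up repeating - a request line, then headers with case-insensitive
names and optional whitespace:

```lua
-- Parse a bare-bones HTTP/1.x request into method, path, version,
-- and a table of lower-cased header names.
local function parse_request(text)
  local method, path, version =
    text:match("^(%u+) (%S+) HTTP/(%d%.%d)\r\n")
  if not method then return nil, "bad request line" end
  local headers = {}
  for name, value in text:gmatch("([%w%-]+):[ \t]*([^\r\n]*)\r\n") do
    headers[name:lower()] = value
  end
  return { method = method, path = path,
           version = version, headers = headers }
end

local req = parse_request(
  "GET /index.html HTTP/1.1\r\nHost: example.org\r\n" ..
  "Accept-Encoding: gzip\r\n\r\n")
print(req.method, req.path, req.headers["host"])
-- GET  /index.html  example.org
```

And this still ignores chunked bodies, line folding, multiple headers
with the same name, and all the other corner cases a real server must
handle.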

> I always wonder why people want to be "isolated" from HTTP: what's the
> benefit of ignoring the most fundamental protocol a web application is
> supposed to deal with?

Do you write your programs directly invoking POSIX system calls and
using primitive types? No, because while this may be the more "pure"
method, it is more complicated and puts more work on you, the
programmer. That's why Lua has the 'os' and 'io' tables instead of a
raw 'syscalls' table. The reason WSAPI, WSGI, [Fast]CGI, etc. all
exist is so that people don't have to get everything about HTTP right
themselves. They have a layer that does it for them - and does it well
- so they can focus on the *application* rather than the
infrastructure.
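
For instance, a minimal WSAPI-style application looks roughly like
this (a sketch: the gateway hands the app a CGI-like environment table
and expects back a status, a headers table, and an iterator producing
the body - the app never touches sockets or raw HTTP):

```lua
-- A WSAPI-style handler: all HTTP plumbing lives in the gateway.
local function hello(wsapi_env)
  local headers = { ["Content-Type"] = "text/plain" }
  local function body()
    coroutine.yield("Hello from " ..
                    (wsapi_env.SCRIPT_NAME or "somewhere"))
  end
  return 200, headers, coroutine.wrap(body)
end

-- Calling it with a fake environment, standing in for the server:
local status, hdrs, body = hello({ SCRIPT_NAME = "/app" })
print(status, hdrs["Content-Type"], body())
-- 200  text/plain  Hello from /app
```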

> A bit like wanting to access a relational database,
> but not wanting to bother with SQL.

As Pierre said, that is a faulty analogy. In it, HTTP plays the role
of the low-level transport, and SQL is a layer on top of that
transport - but SQL is not like WSAPI either (an API that "wraps" the
transport). It is more like XML-RPC: a common convention for making
requests over the transport.

-- Leaf
"There are 10 types of people in the world - those who understand
binary and those who don't."