lua-users home
lua-l archive


You may want to look at LuaSocket to implement your own wrapper for persistent TCP sessions, on top of which you would reimplement the HTTP protocol.
The bad news is that this is not easily integrable into basic Lua servers without the native C support library. The biggest difficulty is that, to fully implement persistent sessions and streaming, the simple consumer/receiver pattern Lua uses for I/O (which is synchronous and based on coroutines whose execution is controlled by blocking yield/resume calls) will not easily let you react to the asynchronous send/receive events that a streaming protocol usually requires (and you would also need true multithreading, not just cooperative threading).

Given these limits, the "http" package offers no real "resume" facility, and since the socket it creates is temporary and can be garbage collected as soon as it terminates a request (after which the socket sits in the TIME_WAIT state for a long time, forbidding reuse of its dynamically assigned local TCP port number), it is not easy to avoid its closure. So each query opens its own new separate socket, and you will "burn" a lot of outgoing local TCP ports if you intend to use it for streaming many small HTTP requests.
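To make the cost concrete, here is a minimal sketch of how the stock socket.http interface behaves (the URL is a placeholder): each call creates, connects, and closes its own socket, so a tight loop consumes one ephemeral local port per request.

```lua
local http = require("socket.http")

-- Each call to http.request opens and closes its own TCP connection;
-- after close, the port lingers in TIME_WAIT and cannot be reused
-- immediately, so this loop burns one local port per iteration.
for i = 1, 100 do
  local body, code = http.request("http://example.com/events/" .. i)
  -- body is the response body (or nil on error), code the status or message
end
```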

A solution/emulation is nevertheless possible (much like old versions of WinSock on the old cooperative-only, 16-bit versions of Windows, also based on yield/resume around a message loop, a scheme that can easily be adapted to the pure-Lua coroutines already used by Lua's basic I/O library), provided that your Lua application is cooperative and provides enough "yield" calls to serve both the "send" and "receive" events and to manage the two message queues on that socket: one queue for outgoing HTTP requests, the other for incoming responses. A smart implementation of HTTP would use not just a single pair of queues (one TCP session) but could create at least four pairs of queues per destination (i.e. host and port number, where a host is either a domain name or an IPv4 or IPv6 address). With that you would emulate what web browsers already do to load pages with multiple dependent scripts and images, without abusing the remote server's resources.
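The queue-pair idea above can be sketched with pure-Lua coroutines. Everything here (Session, pump, the echo "transport") is illustrative, not a real library API: a real scheduler would perform socket I/O where pump simply fabricates a response for each queued request.

```lua
-- One pair of queues per session: outgoing requests, incoming responses.
local Session = {}
Session.__index = Session

function Session.new()
  return setmetatable({ outgoing = {}, incoming = {} }, Session)
end

function Session:send(request)
  table.insert(self.outgoing, request)
end

function Session:receive()
  -- Cooperative blocking: yield until the scheduler queued a response.
  while #self.incoming == 0 do
    coroutine.yield("waiting")
  end
  return table.remove(self.incoming, 1)
end

-- Scheduler stub: in a real client this is where the socket I/O would
-- happen; here each queued request is echoed back as a fake response.
local function pump(session, co)
  while coroutine.status(co) ~= "dead" do
    local req = table.remove(session.outgoing, 1)
    if req then
      table.insert(session.incoming, "response to " .. req)
    end
    assert(coroutine.resume(co))
  end
end

local session = Session.new()
results = {}
local co = coroutine.create(function()
  session:send("GET /a")
  results[1] = session:receive()
  session:send("GET /b")
  results[2] = session:receive()
end)
pump(session, co)
-- Both requests were served over the same (emulated) session.
```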

Note that the lack of persistent TCP sessions does not affect only the local host (which allocates a local outgoing TCP port to each new socket created by the client) but also the remote server (which allocates a local incoming TCP port to each accepted incoming connection). Port-number exhaustion on the server may be even more critical: servers must also honor the TIME_WAIT delays if they want to secure their communications, avoid sending private data to other new incoming clients, and prevent data arriving too late from a previously connected client from polluting the incoming data of new connections!

All HTTP clients and servers today need to support "keepalive" as described in HTTP/1.1. The old behavior without it (in HTTP/1.0) is no longer acceptable and causes severe security problems (notably, it exposes servers to DoS attacks if they let the same remote client open an arbitrary number of new temporary incoming connections).
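For comparison, a hand-rolled keep-alive client over a single LuaSocket TCP connection might look like the sketch below. The host and paths are placeholders, and the header parsing is deliberately minimal: real code must also handle chunked transfer encoding, "Connection: close" responses, and errors.

```lua
local socket = require("socket")

-- One persistent connection: several requests share a single socket
-- (and a single local port) instead of paying connect/close per request.
local conn = assert(socket.tcp())
assert(conn:connect("example.com", 80))

local function request(path)
  assert(conn:send("GET " .. path .. " HTTP/1.1\r\n"
    .. "Host: example.com\r\n"
    .. "Connection: keep-alive\r\n\r\n"))
  -- Read the status line, then headers up to the blank line.
  local status = assert(conn:receive("*l"))
  local length = 0
  for line in function() return conn:receive("*l") end do
    if line == "" then break end
    local n = line:match("^[Cc]ontent%-[Ll]ength:%s*(%d+)")
    if n then length = tonumber(n) end
  end
  -- Reading exactly Content-Length bytes keeps the connection reusable.
  local body = length > 0 and assert(conn:receive(length)) or ""
  return status, body
end

local s1 = request("/first")
local s2 = request("/second")   -- reuses the same TCP session
conn:close()
```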

Le ven. 9 nov. 2018 à 19:02, Srinivas Murthy <> a écrit :
Appreciate all the discussion. I have experience with this before and yes, it's not simple to do properly.
For now though, I'm on a tight time frame and need a simple solution that works for the non-HTTPS case. The closest I see is the curl wrapper that was mentioned. Any other ideas?

On Thu, Nov 8, 2018 at 6:06 PM Daurnimator <> wrote:
On Fri, 9 Nov 2018 at 04:36, Srinivas Murthy
<> wrote:
> Is anyone aware of a Lua HTTP lib that supports keepalive? Using a "local HTTP proxy" will still be significant overhead if the client still has to set up a new connection for each request. These are streaming events and could be very frequent.

lua-http is gaining support for it soon
It's a much trickier problem than you may think at the surface!
Especially once SSL is involved (and in fact I have found bugs in
nginx's and curl's implementations while doing research for