lua-users home
lua-l archive



What would be needed is for the "http" package to include, in the created instance, an optional boolean "keepalive" property which (once set to true) would cause it to NOT close the session automatically once the result has arrived, but to keep the session open; and a "close()" method that your application can call when it no longer needs the session. That package should also include an "isalive()" method to check whether the session is still open (in the "idle" state, or in the "running" state waiting for replies).
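A minimal sketch of what such a session object could look like in Lua (the names `http.new`, `keepalive`, `close` and `isalive` follow the proposal above; the implementation is purely illustrative, with no real socket behind it):

```lua
-- Illustrative session object: "keepalive", "close()" and "isalive()"
-- as proposed above; no real socket is opened here.
local http = {}

function http.new(opts)
  local session = {
    keepalive = opts and opts.keepalive or false, -- keep the channel open after a reply
    state = "idle",                               -- "idle", "running" or "closed"
  }

  -- Called by the application when it no longer needs the session.
  function session:close()
    self.state = "closed"
    -- a real implementation would also close the underlying socket here
  end

  -- True while the session is still open: idle, or running (waiting for replies).
  function session:isalive()
    return self.state == "idle" or self.state == "running"
  end

  return session
end

local s = http.new{ keepalive = true }
print(s:isalive()) --> true
s:close()
print(s:isalive()) --> false
```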

That package could also handle a "request queue", to allow sending multiple queries in order. Each query could be associated with some user data, so that when you get the results you can identify which query they belong to. Ideally, the user data would be the "query" itself, which has its own state: possibly the resource name or query string, the verb like "GET"/"POST", and other user data your application needs, for example a persistent cookie jar. When you receive a reply (success or failure), you also get a reference to that user data and the state of the HTTP session.
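Such a queue could be sketched like this (the names `enqueue` and `dispatch`, and the simulated reply, are assumptions for illustration, not an existing API):

```lua
-- Request queue where each query carries its own user data, so replies
-- can be matched back to the query that caused them.
local queue = {}

-- Each query is itself the "user data": it carries the verb, the resource,
-- and any per-query state (for example a cookie jar).
local function enqueue(q, verb, resource, userdata)
  q[#q + 1] = { verb = verb, resource = resource, userdata = userdata }
end

-- Simulate dispatching queries in order; on each reply the callback
-- receives both the result and a reference back to the originating query.
local function dispatch(q, on_reply)
  for _, query in ipairs(q) do
    local reply = { status = 200, body = "ok" } -- stand-in for a real response
    on_reply(query, reply)
  end
end

enqueue(queue, "GET", "/index.html", { cookies = {} })
enqueue(queue, "POST", "/form", { cookies = {} })

dispatch(queue, function(query, reply)
  print(query.verb, query.resource, reply.status)
end)
```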

Note: an HTTP session is not necessarily bound to a TCP session (it could be any bidirectional I/O channel). This binding is only made when you connect it, by opening a socket to the target indicated by the host and port number, i.e. the first part of the URL: everything before the first "/", "?" or "#" of the full URL (and excluding everything after the first "#", which is an anchor). The HTTP protocol itself does not "understand" the domain name, host address, port number, or anchor; it only needs the query.

Note as well that an HTTP query is not limited to sending a single verb (like "GET", "POST", "PUT", "HEAD"...) and a resource name (usually starting with "/"): it also sends a set of MIME headers, and you can also "stream" an output attachment which may be arbitrarily long. If you want the query to be handled asynchronously, the async event handler must be able to check the status of the session: idle, sending headers, sending an attachment, waiting for a reply from the remote server, notified that a reply is starting to arrive (i.e. you've received at least an HTTP response status), and finally whether the response is complete (including the full content body) rather than just some of the MIME headers.
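The session lifecycle described above could be enumerated as follows (a sketch; the state names are assumptions, not an existing API):

```lua
-- The states an async event handler would need to distinguish, following
-- the lifecycle described above (names are illustrative):
local SESSION_STATES = {
  "idle",              -- no query in flight
  "sending_headers",   -- MIME headers being written
  "sending_body",      -- streaming an arbitrarily long attachment
  "waiting",           -- query fully sent, no response yet
  "receiving_status",  -- the HTTP status line has arrived
  "receiving_headers", -- some MIME headers received, response not complete
  "receiving_body",    -- content body still arriving
  "complete",          -- full response (status, headers and body) received
}

-- A session is busy whenever it is neither idle nor finished.
local function is_busy(state)
  return state ~= "idle" and state ~= "complete"
end

print(is_busy("waiting"))  --> true
print(is_busy("complete")) --> false
```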

Normally it's up to the "http" package to handle some responses itself, internally: for example, intermediate statuses while the server confirms it has received a query and has started running it but cannot yet give a definitive answer; or a redirect (like "moved to"), where it's up to the client to accept the redirect and, if accepted, re-execute the query against the new target.

The package should also internally support the "streamed" (chunked) format for partial responses and attachments, encoding/decoding it for you. It should also handle the negotiation of options like data compression and encryption/decryption, and should either manage the cookie jar itself or provide an API so that your client is notified when the server sends a cookie, which the client can then store or discard as it wants.

If the "http" package works only in synchronous mode, then all queries are blocking and you cannot handle a queue of requests (so you don't need any private data at all, since each query terminates before the next starts). But this is very limiting, because it does not allow sending large queries or receiving large responses (either in the MIME headers or in the content body). Running asynchronously allows much better management, but it does not necessarily imply multithreading: Lua coroutines (with yield and resume) can be used to handle the state of a session in a "semi-blocking" way, just like I/O on files.

Effectively, the HTTP protocol just uses the generic "producer/consumer" concept over a single bidirectional I/O channel. It is not up to HTTP itself to open that channel and negotiate its options (not even the HTTPS security layer), and it is agnostic about the transport protocol used, which may be TCP, a TCP-like VPN over UDP, or a serial link. HTTP will also not resolve DNS hostnames itself, which is needed before you can open an outgoing socket. Nor does the protocol require that you initiated the session yourself before sending a query: the bidirectional I/O channel may have been initiated by the remote agent.

HTTP queries are asymmetric, with a server and a client, but the asymmetry is not necessarily the same as that of the bidirectional I/O channel on which they are carried. Once the channel is established and both agents are waiting for a query to execute, either agent may start a query, in which case it acts as the HTTP client and the other agent acts as the HTTP server replying to it; the roles can be swapped later. It is up to each agent to decide when it wants to terminate/close the I/O channel itself, or to indicate to the other agent that it should terminate/close the session once it has processed the query or the response: that is the role of the "keepalive" option in the MIME headers. HTTP also allows either of the two agents to terminate the session: this shows up as an I/O error or close event detected by the agent that was waiting for the other party. HTTP does not itself provide a facility to close the I/O channel: the HTTP protocol is by default unlimited in time.
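The "semi-blocking" coroutine style mentioned above can be sketched with plain Lua coroutines: the query coroutine yields whenever it would block, and a scheduler resumes it when data is available. No sockets are used here; the `channel` table is a stand-in for the bidirectional I/O channel.

```lua
-- A query as a coroutine over a bidirectional channel: it "sends" the
-- query, then yields until a reply appears on the channel.
local function run_query(channel, query)
  return coroutine.create(function()
    channel.outgoing = query      -- produce: send the query on the channel
    while channel.incoming == nil do
      coroutine.yield("waiting")  -- would block: give control back
    end
    return channel.incoming       -- consume: the reply has arrived
  end)
end

local channel = {}
local co = run_query(channel, "GET /")

-- The scheduler resumes the coroutine; it yields while the reply is pending.
local ok, state = coroutine.resume(co)
print(state)                 --> waiting

channel.incoming = "200 OK"  -- the remote agent answers
local ok2, reply = coroutine.resume(co)
print(reply)                 --> 200 OK
```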


On Thu. 8 Nov. 2018 at 14:05, Dirk Laurie <dirk.laurie@gmail.com> wrote:
On Thu. 8 Nov. 2018 at 14:46, Philippe Verdy <verdy_p@wanadoo.fr> wrote:
>
> A cookie jar to store cookies is for something else: it does not create "keepalive" HTTP sessions, but allows restarting sessions that have already been closed, by providing the stored cookies again.
>
> No, there's NO need at all of ANY cookie jar storage in HTTP to use "keepalive" sessions.
...
> Using "curl" to create a cookie jar does not mean that you are using keepalive, but at least it allows all requests sent using the same cookie jar file to reopen new sessions with the same resident cookies that were exchanged in the previous requests (so this "simulates" sessions, but you'll still see many outgoing TCP sockets that are "half-terminated" in the FIN_WAIT state and that still block the unique port numbers that were assigned to those sockets).

Thanks a lot for this explanation.

-- Dirk