Hello everyone.
We have a simple scenario: a Lua webserver that receives a big blob (hundreds of MB) of binary data.
With the socket in blocking mode, reception of the data is very fast (tens of ms), which is expected when the data is sent from a client running on the same machine. Unfortunately, blocking mode is unacceptable for our deployment.
But using a runtime that by default puts sockets in non-blocking mode produces really poor results (on average 1.2 seconds on 64-bit Linux, and ~20 seconds on Mac OS X!!).
In our runtime, there is a main while-loop that checks the state of each socket (using socket.select): when there is data available, the coroutine associated with the socket is resumed and runs for a while, and this goes on until the data has been fully received.
In outline, the loop looks roughly like this (a simplified sketch, not our actual code; the names are illustrative):
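local socket = require("socket")

-- Sketch only: one coroutine per connection, resumed whenever
-- select() reports its socket as readable.
local recvt = {}      -- sockets to watch for readability
local handlers = {}   -- socket -> coroutine

local function add_connection(sock, fn)
  sock:settimeout(0)                      -- non-blocking mode
  recvt[#recvt + 1] = sock
  handlers[sock] = coroutine.create(fn)
end

local function run()
  while next(handlers) do
    -- blocks until at least one socket has data to read
    local readable = socket.select(recvt, nil)
    for _, sock in ipairs(readable) do
      local co = handlers[sock]
      local ok = coroutine.resume(co, sock)
      if not ok or coroutine.status(co) == "dead" then
        handlers[sock] = nil              -- finished (or errored)
        for i, s in ipairs(recvt) do
          if s == sock then table.remove(recvt, i); break end
        end
      end
    end
  end
end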
Due to the continuous 'timeout' errors returned by the receiving socket (which is set to non-blocking via socket:settimeout(0)), the coroutine continuously yields: in this case, is the reception of the data somehow put 'on hold'?
The receiving side does roughly the following (again a sketch; the chunk size and total-size bookkeeping are illustrative):
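-- Sketch only: with settimeout(0), receive() fails with "timeout"
-- on almost every call; the partial result (third return value)
-- must be accumulated, and we yield back to the select() loop.
local function receive_blob(sock, total_size)
  local chunks, got = {}, 0
  while got < total_size do
    local data, err, partial = sock:receive(65536)
    data = data or partial
    if data and #data > 0 then
      chunks[#chunks + 1] = data
      got = got + #data
    end
    if err == "timeout" then
      coroutine.yield()     -- wait for the next select() wakeup
    elseif err == "closed" then
      break
    end
  end
  return table.concat(chunks)
end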
The continuous yield/resume of the coroutine associated with the socket is apparently very costly.
Is yield/resume known to be a costly operation that should be used carefully?
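For what it's worth, the raw cost of yield/resume can be measured in isolation with a micro-benchmark like this (no I/O involved), to check whether the coroutine switch itself is the bottleneck:

-- Times raw resume/yield pairs with no socket work at all.
local N = 1000000
local co = coroutine.create(function()
  while true do coroutine.yield() end
end)
local t0 = os.clock()
for _ = 1, N do coroutine.resume(co) end
print(string.format("%d resume/yield pairs: %.3f s CPU", N, os.clock() - t0))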
Isn't there any clean way to have the coroutine associated with the socket keep running in the background without being continuously interrupted?
Thanks for reading this far :-)
Valerio