On 7/12/2011 9:10 AM, Matt Towers wrote:
> On Jul 11, 2011, at 12:49, Tim Mensch wrote:
>> As cool as Mongrel2/Tir are, I decided to go with the Nginx/ngx_lua
>> approach
> How have you found that stack to work for you? We looked very
> closely at that one as well, but ultimately went with Mongrel2/Tir
> largely so we could have dedicated processes for each service. The
> ZeroMQ layer also opens up a number of possibilities for a more
> distributed architecture.


For me, with my limited needs, yes, it's working so far. BUT I haven't gone live with it, so I can't actually tell you what the real-world performance will be.

Having dedicated processes for each service wasn't something I was trying to achieve. I'm FAR from being an expert in server design, so feel free to consider that decision made from ignorance rather than from some highly considered opinion.

But from what I do know, Nginx gets its speed precisely from not using a lot of processes. The ngx_lua plug-in is set up to be completely non-blocking within the worker process (the worker can keep handling other requests while a time-consuming operation is in flight), so requests made from Lua shouldn't slow down the server in general. From what little I've been able to learn reading ABOUT how these things work, that seems to be the same strategy that makes Nginx itself fast.
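
To make that concrete, the pattern looks roughly like this (location and file names are made up for the example); ngx.location.capture waits for the subrequest without tying up the worker:

    # nginx.conf (sketch)
    location /balance {
        content_by_lua_file lua/balance.lua;
    }
    location /backend {
        internal;
        proxy_pass http://127.0.0.1:8080;
    }

    -- lua/balance.lua
    -- Non-blocking subrequest: the worker keeps serving other clients
    -- while this request waits for the backend to respond.
    local res = ngx.location.capture("/backend",
                                     { args = { user = ngx.var.arg_user } })
    if res.status == ngx.HTTP_OK then
        ngx.say(res.body)
    else
        ngx.exit(res.status)
    end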

Where I KNOW I'm not getting the best speed is my connection to CouchDB. It's on the same host, and those requests should be non-blocking as described above, but RESTful queries are not going to be as fast as MongoDB's binary interface. I was in a hurry, though, and I couldn't find a ready-made MongoDB interface for Lua. The Nginx server package I'm using came with a built-in Redis connection, but I'm on an inexpensive VPS with limited RAM, and Redis looks like it prefers to use LOTS of RAM; I know it can spool to disk, but it looks like most of its speed comes from the "it's in RAM!" advantage. Really, though, CouchDB's bidirectional replication was the killer feature that made me settle on it. Keeping user data safe is important for my use case.
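
FWIW, the CouchDB round trip is just that same non-blocking capture aimed at an internal location that proxies to CouchDB on localhost. Something along these lines (database and field names invented for the example, and I'm assuming the cjson module that comes bundled with the package I'm using):

    # internal location proxying to CouchDB on the same host
    location /couch/ {
        internal;
        proxy_pass http://127.0.0.1:5984/;
    }

    -- look up a (hypothetical) user document and return one field
    local cjson = require "cjson"
    local user_id = ngx.var.arg_user or "test-user"  -- however you identify the caller
    local res = ngx.location.capture("/couch/mydb/" .. user_id)
    if res.status ~= ngx.HTTP_OK then
        ngx.exit(ngx.HTTP_NOT_FOUND)
    end
    local doc = cjson.decode(res.body)
    ngx.say(doc.balance or 0)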

On my current setup (a 512MB Linode instance) I get about 8000 queries per second for a simple request (one that just echoes the headers back to the user), or 1800 queries per second for a request that does a small amount of work in Lua, including a CouchDB query. (That's using ab against a mostly-empty database, so nothing sophisticated or even remotely real-world.) I'm defining that as "good enough" for my app, which currently has only about 20k users worldwide, no more than 10% of whom would need to be making queries at a time anyway. So unless CouchDB bogs down terribly once it has a few tens of thousands of entries (not likely, from what I've read), I should be fine until I have so many users that I can afford a few 4GB Linode instances to spread the load across.   :)
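
(The "simple request" above is nothing more than a handler along these lines, give or take:)

    -- trivial test handler: echo the request headers back to the client
    for name, value in pairs(ngx.req.get_headers()) do
        -- repeated headers come back as a table; flatten them for display
        if type(value) == "table" then value = table.concat(value, ", ") end
        ngx.say(name, ": ", value)
    end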

I know about ZeroMQ, and when I do an app that's more network-intensive I plan to read more about it, but my current knowledge of it doesn't extend past "good for game networking." ;) My current app really, really doesn't need anything complicated -- I'm just polling my server for the virtual currency users have earned, so each client polling (roughly) once or twice per minute, while it's on the screen that shows the balance, is as much as I need. Like I said, my needs are currently very, very modest. :)
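
Back-of-envelope, using the numbers above, the worst case is tiny compared to what the box measured:

    -- rough worst-case load estimate vs. measured capacity
    local users         = 20000  -- total user base
    local active_share  = 0.10   -- at most ~10% on the balance screen at once
    local polls_per_min = 2      -- each client polls once or twice a minute
    print(users * active_share * polls_per_min / 60)  -- ~67 req/s, vs. ~1800 qps measured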

Tim