I've been prototyping several different implementations of lisp-style linked lists. To get an idea of their performance characteristics, I wrote a series of braindead but probably indicative micro benchmarks.
Now, because I'm trying to get a feel for how the manipulations themselves perform in isolation, I am passing "stop" to collectgarbage() between tests, collecting my data, then using the "restart" command and running two full passes of the gc to clean up before the next test.
The data collection consists of using the "count" subcommand and os.clock to record memory usage and elapsed time before the test, running the micro benchmark, and then checking the difference. The memory usage numbers seem incredibly high: something like 40 MB gets allocated when I load up 2 KB (256 * sizeof(double)) of numbers and iterate over them.
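For reference, this is roughly the harness I'm describing (run_benchmark is a stand-in for one of the list micro benchmarks, and the placeholder workload inside it is just illustrative):

```lua
-- Stand-in for one of the actual list micro benchmarks.
local function run_benchmark()
  local t = {}
  for i = 1, 256 do t[i] = i * 0.5 end  -- placeholder workload
end

collectgarbage("stop")                     -- freeze the collector
local mem_before = collectgarbage("count") -- usage in kilobytes
local t0 = os.clock()
run_benchmark()
local elapsed = os.clock() - t0
local used_kb = collectgarbage("count") - mem_before

collectgarbage("restart")
collectgarbage("collect")  -- two full cycles so finalized and
collectgarbage("collect")  -- resurrected objects are fully reclaimed

print(("%.3f s, %.1f KB"):format(elapsed, used_kb))
```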
Turning the gc off at the start and leaving it off for the entire run doesn't seem to make any difference in timing or memory per test. What *does* make a huge difference is the order I run the tests in. If I use a pair list (where l[1] is the value and l[2] is the next pair node) after two other implementations, it is an order of magnitude slower and uses a lot more memory.
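To be concrete about the pair-list implementation (cons and the sum loop here are my own illustrative names, not necessarily what my real code uses):

```lua
-- A cons cell is a two-element table: [1] = value, [2] = next node.
local function cons(value, rest) return { value, rest } end

-- Build a list of 1..256, prepending so the list reads front-to-back.
local l = nil
for i = 256, 1, -1 do
  l = cons(i, l)
end

-- Iterate by chasing the [2] field.
local sum = 0
local node = l
while node do
  sum = sum + node[1]
  node = node[2]
end
print(sum)  -- 32896, i.e. 256 * 257 / 2
```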
Obviously os.clock has limitations, which I'm okay with since I don't need exact profiling data, but how accurate is collectgarbage("count")? The manual says it returns usage in "K bytes", so to get the answer in megabytes I divide by 1024; is this a reasonable adjustment for what it's actually returning? Does Lua actually respect the "stop" subcommand? Is this perhaps more an issue with the allocator than with the garbage collector? I could be leaking memory, but 40 MB seems excessive.
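For what it's worth, my reading of the manual is that "count" reports kilobytes (with a fractional part; Lua 5.2 and later also return the byte remainder modulo 1024 as a second result), so the conversion I'm doing is just:

```lua
-- collectgarbage("count") reports Lua's total memory use in kilobytes.
local kb = collectgarbage("count")
local mb = kb / 1024  -- the division by 1024 I describe above
print(("%.2f MB in use"):format(mb))
```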
Thoughts? Am I approaching my measurements completely wrong?