On Tue, Aug 23, 2016 at 11:29 AM, Sean Conner <sean@conman.org> wrote:
> It is indeed Adam's answer that provides, to me, some cases I
> did not consider at all.


>   Just because *you* know does not mean *everybody* knows.


I'm happy to have been of service, and this was the mindset I had
going into that message.

11k data files and 8k code files is a pretty hefty project, and it
does sound like you've independently reimplemented most of the useful
features that the mature test harnesses provide. I actually wasn't
aware of the sequence-randomization feature myself (learn something
new every day!), but it makes sense that it would provide a useful
test of isolation. (But it's a TEST OF isolation, not a MECHANISM FOR
isolation; what I was thinking of includes runners that wrap around
the tests and spin up an isolated sandbox for each suite or group of
related suites. I don't know whether this exists for Lua, but there's
a rough sketch of the idea below.)
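
For concreteness, here's a minimal, untested sketch of that kind of
mechanism in plain Lua. It assumes Lua 5.2+, where loadfile() takes
an environment table, and the file names and run_isolated are
invented for the example:

    -- Run each suite file against a fresh environment so that
    -- accidental globals can't leak from one suite into the next.
    -- Writes land in the sandbox table; reads fall through to _G.
    -- (Shared mutable state reachable through _G can still leak,
    -- so this is a sketch, not a real sandbox.)
    local function run_isolated(path)
      local env = setmetatable({}, { __index = _G })
      local chunk, err = loadfile(path, "t", env)
      if not chunk then return false, err end
      return pcall(chunk)
    end

    for _, suite in ipairs({ "spec/parser_spec.lua", "spec/io_spec.lua" }) do
      local ok, err = run_isolated(suite)
      print(suite, ok and "ok" or ("failed: " .. tostring(err)))
    end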

Other useful features include:

* Parallelization. When you start getting a lot of tests, being able
to parallelize them across cores can provide a big speedup. (There's
a rough sketch of one approach after this list.)

* Composability. The good runners have hooks for things that need to
run before and after a suite, and before and after each test in a
suite, which lets you set up an environment outside of the actual
function under test. (See the busted sketch after this list.)

* Automatic retries. I hinted at this, but when tests only fail SOME
of the time, automatic retries are useful for identifying whether a
failure is due to a straight-up bug or due to nondeterminism of some
sort. (Sketch below.)

* Build system / project management integration. Testing is best when
it's run automatically every time you want to check something in.
Features such as posting test results as comments on your GitHub pull
request go a long way toward making testing an integral part of the
workflow.

* Mocks. I mentioned this in passing, but more specifically, a good
test toolkit (technically this is independent of the test runner) will
provide tools for injecting fake implementations: a fake network
layer, so the logic around requests can be tested without depending
on an external service, or a fake clock, so that varying system load
won't change the timing characteristics of a test. (Fake-clock sketch
below.)

* Cross-platform testing. Using a well-defined testing structure
allows you to integrate with automated cross-platform testing services
that will run the tests on multiple servers simultaneously.
Technically this is independent of the runner, but the services
usually require you to use theirs.
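
Here's the parallelization sketch I promised. It's untested, and
"run_suites.lua" is a made-up worker script that runs each file named
on its command line; the point is just that io.popen starts all the
worker processes before any of them is read, so the shards actually
run concurrently:

    local suites = { "spec/a_spec.lua", "spec/b_spec.lua",
                     "spec/c_spec.lua", "spec/d_spec.lua" }
    local nshards = 2
    local handles = {}
    for i = 1, nshards do
      -- round-robin the suites into this shard
      local shard = {}
      for j = i, #suites, nshards do shard[#shard + 1] = suites[j] end
      handles[i] = io.popen("lua run_suites.lua " .. table.concat(shard, " "))
    end
    for i, h in ipairs(handles) do
      -- reading blocks per shard, but the processes were already
      -- running in parallel, so total time is roughly max(shard)
      io.write(("shard %d:\n%s"):format(i, h:read("*a")))
      h:close()
    end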
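
For the composability hooks, this is roughly what the structure looks
like in busted (one of the mature Lua frameworks): setup/teardown run
once per describe block, before_each/after_each wrap every test. The
"myapp.db" module is invented for the example:

    describe("database layer", function()
      local db

      setup(function()
        -- runs once, before any test in this block
        db = require("myapp.db")
      end)

      before_each(function()
        -- runs before every single test: start from a clean slate
        db.connect(":memory:")
      end)

      after_each(function()
        db.disconnect()
      end)

      teardown(function()
        db = nil
      end)

      it("stores and retrieves a row", function()
        db.put("k", "v")
        assert.are.equal("v", db.get("k"))
      end)
    end)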
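
And a toy version of automatic retries in plain Lua, where flaky()
stands in for a nondeterministic test body:

    -- Retry a test up to n times and report which attempt passed;
    -- passing only on a retry points at nondeterminism rather than
    -- a straight-up bug.
    local function with_retries(n, test_fn)
      local ok, err
      for attempt = 1, n do
        ok, err = pcall(test_fn)
        if ok then return true, attempt end
      end
      return false, err
    end

    math.randomseed(os.time())
    local function flaky()               -- simulated flaky test
      assert(math.random() > 0.3, "transient failure")
    end

    local ok, info = with_retries(3, flaky)
    print(ok and ("passed on attempt " .. info)
             or ("failed every attempt: " .. tostring(info)))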
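
Finally, the fake clock. You don't even need a library for the basic
version if the code under test takes its clock as a parameter instead
of calling os.time() directly; make_throttle here is invented for the
example:

    -- Rate limiter parameterized on a clock function, so the test
    -- can advance time deterministically instead of sleeping.
    local function make_throttle(clock, interval)
      local last = -math.huge
      return function()
        local now = clock()
        if now - last >= interval then
          last = now
          return true          -- action allowed
        end
        return false           -- throttled
      end
    end

    -- In production: make_throttle(os.time, 60)
    -- In the test, a controllable clock:
    local t = 0
    local allow = make_throttle(function() return t end, 60)

    assert(allow() == true)    -- first call goes through
    assert(allow() == false)   -- still inside the interval
    t = t + 61
    assert(allow() == true)    -- interval elapsed on the fake clock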

/s/ Adam