lua-l archive


The framework I'm developing needs to run untrusted code and keep it within certain usage constraints.  It's going to be used as a proof of work, so I need both upper and lower bounds on the cost of execution.  My plan is to instrument the Lua VM with oblivious hashing [1], which generates a hash of the execution trace to enforce that a specific calculation was actually performed.

However, the recent discussion on hash collisions has made me realize that for this to work I need to ensure a bound on the algorithmic complexity of the execution of each opcode.  My problem is actually worse than the hash collision problem people have been talking about.  Suppose, for instance, that Lua is changed to use the entire string in its hash function (necessary for this application).  Then someone could provide proof-of-work code that generates some really long strings which can be distinguished by the character at position 1000000, say.  For everyone else, a table lookup on those strings would cost O(length of string), but a specially modified Lua could check position 1000000 directly and do it in O(1).  That would compromise the use of that code as a proof of work.

That particular vulnerability I can avoid by only allowing short strings (say 16 bytes each), and the hash collision problem can be avoided by seeding the hash differently.  But I also need to figure out which other Lua VM operations could lead to an unbounded number of instructions in the runtime.  So far, I have SELF (method lookups could traverse an insanely deep class hierarchy), GETUPVAL/SETUPVAL (insane depth of closures), and the passing of arbitrary numbers of parameters and results between functions (RETURN, VARARG, SETLIST, etc.).  These are all problems I think I can address without degrading the usefulness of the Lua runtime too much.  Are there any other ways to engineer a large number of operations per Lua opcode?
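For concreteness, the trace-hashing idea can be sketched with a toy stack machine.  This is a minimal illustration in Python, not the real Lua VM: the opcode names, the SHA-256 choice, and the per-job seed are all assumptions made for the sketch, not details from the oblivious-hashing paper.

```python
# Sketch of oblivious hashing over an execution trace, using a toy
# stack machine (NOT the real Lua VM).  Every dispatched opcode and
# operand is folded into a running hash, so the final digest commits
# to the exact sequence of operations that was performed.

import hashlib

def run(program, trace_seed=b"per-job-seed"):
    """Execute a toy program; return (result, trace digest)."""
    h = hashlib.sha256(trace_seed)   # seed differs per job/verifier
    stack = []
    for op, arg in program:
        # Fold the opcode and operand into the trace hash on dispatch.
        h.update(op.encode())
        h.update(repr(arg).encode())
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop(), h.hexdigest()

prog = [("PUSH", 6), ("PUSH", 7), ("MUL", None)]
result, digest = run(prog)
# A verifier re-running the same program with the same seed must
# reproduce the digest, which is what makes the trace enforceable.
result2, digest2 = run(prog)
assert result == 42 and digest == digest2
```

The scheme only works as a proof of work if each opcode's real cost is bounded, which is exactly the concern raised above: the digest proves *which* opcodes ran, not *how much* work each one took.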
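The long-string shortcut described above can also be demonstrated directly.  The hash below is a hypothetical djb2-style loop, not Lua's actual string hash; it stands in for any full-string hash an honest VM would have to compute in O(length), while a modified VM that knows where the candidate strings differ can classify them by reading a single byte.

```python
# Contrast an honest O(n) full-string hash with the O(1) shortcut
# available to a modified VM.  full_hash is a djb2-style illustration,
# NOT Lua's real string hash.

def full_hash(s, seed=5381):
    """Hash every byte -- O(len(s)); the cost the proof of work assumes."""
    h = seed
    for b in s.encode():
        h = ((h * 33) ^ b) & 0xFFFFFFFF
    return h

def cheating_lookup(s):
    """A modified VM that knows two candidate strings differ only at
    position 1000000 can tell them apart in O(1), skipping the hash."""
    return s[1_000_000]

a = "x" * 2_000_000
b = "x" * 1_000_000 + "y" + "x" * 999_999

# The honest VM pays ~2 million hash steps per string; the cheat
# inspects one character.  (Each hash step is invertible mod 2**32,
# so strings of equal length differing in one byte never collide here.)
assert full_hash(a) != full_hash(b)
assert cheating_lookup(a) == "x" and cheating_lookup(b) == "y"
```

Capping strings at 16 bytes, as proposed above, removes this gap: the honest hash and the shortcut both become O(1).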