I'm growing more convinced that "wants precision in computation" is a property of code, not of dynamically typed values. I know I may be a victim of confirmation bias.
My own gut feeling is the same. But I'm unsure how one might implement such a code property in a high level language like Lua.
Perhaps we could follow the _ENV model? Have an upvalue _PRECISION that's implicitly set to "float64" any time a new chunk is loaded, meaning that inside that scope, numeric operations overflow as 64-bit floats.
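To make the analogy concrete, here's a sketch of what I'm picturing. Nothing below exists in any Lua version -- _PRECISION, its accepted string values, and the idea that the compiler would consult it are all part of the proposal, written in Lua syntax only for illustration:

```lua
-- Hypothetical sketch; _PRECISION is not a real Lua feature.
-- A freshly loaded chunk would behave as if it began with:
local _PRECISION = "float64"

do
  -- An inner scope could opt into a different overflow rule, the same
  -- way a local _ENV shadows the outer environment:
  local _PRECISION = "mixed64"
  -- arithmetic here would overflow as integers unless a float
  -- operand forced float overflow
end
```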
One could then allow various other _PRECISION definitions, such as the coercing rule Roberto's been suggesting (numbers overflow as floats if at least one of them is a float; otherwise they overflow as integers), or the sanity-checking rule you suggested (integers are coerced to floats, but any overflow throws an error). I might call Roberto's proposed _PRECISION standard "mixed64", and yours "sane64".
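For anyone who wants to play with the distinction: assuming an implementation with both integer and float subtypes (as in the proposal, and as Lua 5.3+ turned out), the rules would differ roughly like this. The sane64_add function is my own emulation of one case of the rule as I read it, not anything that exists:

```lua
-- Assumes Lua 5.3+. The "mixed64" coercing rule in action:
local wrapped = math.maxinteger + 1      -- all-integer op: wraps around
assert(wrapped == math.mininteger)

local coerced = math.maxinteger + 1.0    -- one float operand: float result
assert(math.type(coerced) == "float")

-- Under "float64", every numeric op would behave as if its operands
-- were floats to begin with, e.g. as if each carried a "+ 0.0".

-- A rough emulation of "sane64" for addition: coerce both operands to
-- float, but raise an error when the result overflows the float range.
-- (The name and the exact check are my reading of the suggestion.)
local function sane64_add(a, b)
  local r = (a + 0.0) + (b + 0.0)
  if r == math.huge or r == -math.huge then
    error("numeric overflow under sane64")
  end
  return r
end

assert(sane64_add(1, 2) == 3.0)
assert(not pcall(sane64_add, math.huge, math.huge))
```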
My hope would be that, for the case of "float64" and "mixed64", the overheads involved in converting any arguments to new internal types as needed would tend to be small.
I'd suggest making "float64" the default value because it's a simpler rule, and also one that Lua programmers are familiar with. But there's probably also a good argument for making "mixed64" the default. Either way -- in those rare cases where you know a piece of code should be using one kind of overflow behavior or another, I'd prefer if the language made specifying the correct behavior easy.
(Mathematica uses a mixed representation much like the one Roberto's been suggesting -- and while I can certainly live with it -- the need to wrap function arguments with tofloat conversions can lead to cluttered code. I'd prefer a solution that kept my Lua scripts cleaner.)
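(To show the kind of clutter I mean, in Lua terms -- there's no "tofloat" builtin, so I've spelled the coercion with the usual "+ 0.0" idiom. Both functions below are purely illustrative:

```lua
-- Defensive coercion on every argument, the style I'd like to avoid:
local function mean(a, b)
  return ((a + 0.0) + (b + 0.0)) / 2.0
end

-- The cleaner version a scoped "float64" rule would let me write,
-- since all arithmetic in the scope would already overflow as floats:
local function mean_clean(a, b)
  return (a + b) / 2
end

assert(mean(3, 4) == 3.5)
assert(mean_clean(3, 4) == 3.5)
```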
-Sven