But that was my whole point. An *explicit* array size, which means it’s set explicitly by the user. The default for a table is 0. If you set the size to (say) 5, then you have 5 elements (1..5). Doesn’t matter if some are nil, doesn’t matter if you have keys 6, 7 and 8 .. the size is still 5.

This is, as I said, similar to “.n” but it’s *first class* .. so #t returns 5, and obeys __len metamethods (so there is no ugly “.n”, just the built-in # operator and a yet-to-be-determined way to set the size). And ipairs() uses #t, so there is no surprise or ambiguity. And #t runs in O(1) time instead of O(log n) time. And table.pack() / table.unpack() are mirror pairs.

Modifying the size of an array is explicit; you have to set the new size. Of course, some library functions might be modified to do this (or have new versions) .. insert/delete are obvious candidates, as I previously noted. This is where you handle grow/shrink (insert grows, delete shrinks), whereas t[#t+1] doesn’t change the size.

I’m undecided whether a table initializer should set the size: should {1,2} have a size of 0 or 2? What about more complex initializers? And of course there are issues with non-integral keys etc.

One “surprise” that I would expect a newcomer to fall into is thinking that “shrinking” the array (that is, setting the size to a smaller value) will discard elements in the table, whereas (to my mind) it simply leaves them alone, but beyond the end of the array.

As I said, the scheme is far from perfect, but it’s more consistent than the current incoherent approach of .n, # and sequences imho.

—Tim
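
(For illustration only, a rough emulation of these semantics in current Lua 5.2+, using a plain table plus a __len metamethod. The Array.new / Array.setsize names and the __size field are invented for this sketch and are not part of the proposal; stock ipairs() stops at the first nil rather than honouring the explicit size, so a custom iterator is used instead.)

  -- Rough emulation of an explicit-size array, for illustration.
  -- Array.new, Array.setsize and the __size field are invented for this sketch.
  local Array = {}
  Array.__index = Array

  -- The # operator consults __len, so #t returns the explicit size in O(1).
  function Array.__len(a) return rawget(a, "__size") end

  function Array.new(size)
    return setmetatable({ __size = size or 0 }, Array)   -- default size is 0
  end

  function Array.setsize(a, n)
    rawset(a, "__size", n)   -- shrinking does NOT discard elements beyond the new end
  end

  -- Stock ipairs() stops at the first nil, so emulate an iterator driven by #a.
  function Array.ipairs(a)
    local i = 0
    return function()
      i = i + 1
      if i <= #a then return i, a[i] end
    end
  end

  -- Usage:
  local t = Array.new(5)
  t[1], t[2] = "a", "b"          -- elements 3..5 are nil, but the size is still 5
  t[7] = "x"                     -- key 7 exists, yet the size stays 5
  print(#t)                      --> 5
  Array.setsize(t, 2)            -- "shrink": t[7] (and t[3..5]) are left alone,
  print(#t, t[7])                --> 2   x      just beyond the end of the array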