The point is that you don't currently have a non-leaky implementation.
It still leaks. It just leaks slower than if you weren't using weak
keys. Meanwhile, it's not exactly "leaky" if you think of it as an
interning pool or a memoization cache, where it's EXPECTED that memory
usage grows monotonically as new unique values appear -- from that
perspective, allowing memory usage to decrease at all is an
optimization.
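To make that concrete, here is roughly the shape of the pool I mean,
as a minimal sketch (the "tuple" name and the string-key encoding are
my illustration, not anyone's actual code; it assumes scalar elements
with no embedded NUL bytes):

    -- Memoizing pool: one canonical table per distinct tuple of
    -- scalar elements (strings, numbers, booleans, nil).  Weak values
    -- let the GC reclaim tuples nothing else references -- that's the
    -- "optimization" mentioned above.
    local pool = setmetatable({}, {__mode = "v"})

    local function tuple(...)
      local n = select("#", ...)
      local parts = {}
      for i = 1, n do
        local v = select(i, ...)
        -- tag with the type so the number 1 and the string "1" differ
        parts[i] = type(v) .. ":" .. tostring(v)
      end
      local key = table.concat(parts, "\0")
      local t = pool[key]
      if t == nil then
        t = {n = n, ...}
        pool[key] = t
      end
      return t
    end

    assert(tuple(1, "a") == tuple(1, "a"))  -- one canonical object

Memory use grows with the number of distinct tuples seen; only the
weak mode ever lets it shrink, which is exactly the memoization
perspective above.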
The standard isn't "never make tradeoffs." The standard is "make sure
the tradeoffs are worth it" with a side of "avoid breaking/penalizing
existing code if you can". And one tradeoff that's ALMOST NEVER worth
it is making an extremely common operation more expensive. It's ALMOST
ALWAYS preferable to add a little bit more per-call cost to a
less-frequent scenario.
> Why would they get more expensive? It's the same hashmap algorithm except
> now the hash function is a sum of the hashes of the elements and I don't
> think anyone would object that that should be O(#t) as it is for strings.
> What am I missing?
It's not O(#t) for short strings (that is, the kind you're most likely
to use as table keys); it's O(1), thanks to string interning: the hash
is computed once, when the string is created, and cached with the
interned string, so every table lookup afterwards just reuses it.
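In sketch form, the per-lookup cost difference is something like this
(element_hash is a made-up stand-in for the VM's per-value hash;
nothing here is a real Lua API):

    -- Placeholder only, NOT a real hash function -- it just lets the
    -- sketch run.
    local function element_hash(v)
      return #tostring(v)
    end

    -- Hypothetical: the work a built-in tuple key would add to every
    -- table lookup, walking all of the tuple's elements each time.
    local function tuple_hash(t)
      local h = 0
      for i = 1, t.n do
        h = h + element_hash(t[i])  -- O(#t) work on EVERY lookup
      end
      return h
    end

That per-element walk happens on every single table access with a
tuple key; the interned string's cached hash makes the same step
constant-time.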
A built-in tuple would mean either more garbage (multiple deep copies
of equal tuples floating around) or more long-term memory usage (a
canonical copy of every distinct tuple interned in a pool).
All in all: To me it strongly feels like tuples would be better served
by library code (or maybe a power patch) than core code. They aren't
broadly used enough to warrant the corresponding increase in Lua's
footprint, and the people who would benefit from their existence can
use a library or patch.