• Subject: Re: table.maxn...
• From: Coda Highland <chighland@...>
• Date: Sat, 30 May 2015 22:12:47 -0700

On Sat, May 30, 2015 at 9:28 PM, Brigham Toskin <brighamtoskin@gmail.com> wrote:
> This is kind of an interesting point, actually. Does anyone know the
> rationale for not just automatically tracking the number of entries in a
> table? Similar data structures in many other languages do this. I know
> that's not in and of itself a reason to do anything, but it does seem useful.
> If you know you're dealing with a sequence, then #t effectively
> becomes constant instead of logarithmic. And if you don't know it's a
> sequence (or you know it's not) then you have a convenient counting method
> that doesn't involve an iterator.
>
> Is it just the (perceived or actual) runtime overhead?

Maintaining a count adds overhead to every write to the table, in
exchange for making length determination constant-time. But since you
write to tables much more frequently than you evaluate the length of a
table, it's not a particularly GOOD tradeoff. And table writes aren't
necessarily O(1) in and of themselves -- consider the following:

t[1] = 1
t[3] = 2
t[2] = 3

After the first step, obviously #t == 1. After the second step, #t
isn't well-defined: it could be 1 (because that's the length of the
initial sequence), but the table isn't a proper sequence anymore. After
the third step, #t needs to return 3. For this example, caching
#t == 3 after t[3] = 2 would keep it constant-time THIS time, but
consider something trickier:

t[1] = 1
t[1000] = 2
t[2] = 3
t[1000] = nil

#t isn't well-defined after the second and third steps, but after the
last step it is well-defined as 2 -- and how do you determine that in
O(1) time?
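The ambiguity comes from the definition Lua actually uses: any index n
with t[n] non-nil and t[n + 1] nil is a "border", and # may return any
border. A border can be found without any stored counter by doubling a
probe index and then binary-searching, which is where the O(log n) cost
comes from. A minimal sketch in Python, using a dict to stand in for a
Lua table (the function name is mine, not from the Lua source):

```python
def lua_length(t):
    """Return a border of t: an index n with t[n] present and t[n+1] absent.

    Mirrors the doubling-then-binary-search strategy behind Lua's '#'.
    For tables with holes, which border you get depends on the search
    path, just as in stock Lua.
    """
    if 1 not in t:
        return 0
    # Grow j exponentially until t[j] is absent; t[i] is always present.
    i, j = 1, 2
    while j in t:
        i, j = j, j * 2
    # Binary search the gap: invariant t[i] present, t[j] absent.
    while j - i > 1:
        m = (i + j) // 2
        if m in t:
            i = m
        else:
            j = m
    return i
```

On the trickier example above ({1: 1, 2: 3} once t[1000] has been
removed) this returns 2; on a table with a hole such as {1: 1, 3: 2}
it settles on the border 1, matching the "could be 1" case from the
first example.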

So the decision was made to make #t O(log n), because an O(1) length
would push the bookkeeping into the writes, making a series of n table
writes O(n log n) instead of the O(n) you would expect.
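To make that write overhead concrete, here is a hypothetical
counter-caching table (all names are mine; this is not how Lua is
implemented): len() becomes O(1), but any write that touches the cached
border has to repair it, which is O(n) in the worst case.

```python
class CountedTable:
    """Hypothetical Lua-like table that caches a border as its length.

    Demonstrates the tradeoff: reading the length is O(1), but writes
    that invalidate the cache must search for the new border.
    """

    def __init__(self):
        self._d = {}
        self._len = 0  # cached border

    def __setitem__(self, k, v):
        if v is None:  # Lua-style: assigning nil deletes the key
            self._d.pop(k, None)
            if k == self._len:
                # Cached border destroyed: walk back to the previous
                # present index -- O(n) in the worst case.
                n = k - 1
                while n > 0 and n not in self._d:
                    n -= 1
                self._len = n
        else:
            self._d[k] = v
            if k == self._len + 1:
                # Border extended: absorb any elements that were
                # already sitting past it -- also O(n) worst case.
                n = k
                while n + 1 in self._d:
                    n += 1
                self._len = n

    def __len__(self):
        return self._len
```

Running the trickier example through it (t[1] = 1; t[1000] = 2;
t[2] = 3; t[1000] = nil) leaves the cached length at 2, but only
because every write paid for the bookkeeping up front.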