- Subject: Re: Closure of lexical environment in Scheme closures
- From: "Dr. Rich Artym" <rartym@...>
- Date: Fri, 12 Nov 2004 13:19:48 +0000
On Fri, Nov 12, 2004 at 12:22:54AM -0800, Mark Hamburg wrote:
> To your quotes I will throw back Sussman's _Structure And Interpretation of
> Computer Programs_ which shows all sorts of things that you can build
> because closures capture variables rather than values. (It covers a lot of
> other things, but review Chapter 3, Modularity, Objects, and State.)
> The closure part has to do with the fact that unlike dynamic scoping,
> evaluating the function won't go looking for the variable again. The
> implementation determines which variable a token corresponds to at the time
> the function is constructed based on the lexical scope of the token. A
> function captures its environment at the time it is created, and that
> environment is based on the lexical arrangement of code rather than the
> dynamic arrangement. That environment provides a mapping from variable
> names to values.
Thanks Mark, that's another good quote making exactly the same point I
was making: "A function captures its environment at the time it is
created" ... and "That environment provides a mapping from variable
names to values", i.e. very specifically not a mapping from names to
addresses of variable cells.
What Sussman is saying so very explicitly is the key point: the (name, value)
pairs for each free variable in the open functional expression are "captured"
when the closure is made (it's not an arbitrary choice of words), because it is
precisely that capture that allows us to consider the resulting composite
functional object (function + state) as a closed function, one without any
to-be-evaluated dependencies apart from its lambda-bound function arguments.
In Scheme and related languages, the captured free-variable (FV) state in the
closure still looks like the original environment from which the closure was
made, but that's just an implementation detail.
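To make the capture concrete, here is a minimal sketch (in Python rather than Scheme, purely for illustration; the names make_counter and step are mine, not from the thread). The free variable count is captured from the lexical environment when the inner function is created, and travels with the closure afterwards:

```python
def make_counter(start):
    # 'count' is a free variable of 'step'; the closure captures it
    # from the lexical environment in force when 'step' is created.
    count = start

    def step():
        nonlocal count  # refer to the captured variable, not a global
        count += 1
        return count

    return step

c = make_counter(10)
d = make_counter(0)
# Each closure carries its own captured state: calling 'c' never
# touches the 'count' captured by 'd', and vice versa.
```

Each returned function is "closed" in exactly the sense of the post: apart from its own arguments (here, none), it has no dependencies left to be looked up at call time.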
In other implementations of the same concept of closures, the data may be
copied out of the lexical state directly into the closure space in a
copy-based interpreter, or the free variable references may be rewritten
with their defining expressions in a rewrite system, or new independent data
packets may be created for the FV references in the open expression in a
dataflow-based system. There are many other approaches too, all
yielding exactly the same thing: turning what used to be free variables
into a new state invariant logically hardwired into the closed object.
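The difference between sharing the original environment and copying values out into the closure is observable. A sketch of both styles (again in Python for illustration; the function names are mine), using a default argument as a stand-in for copy-based capture:

```python
def make_adders_shared():
    # Environment-sharing style: every closure refers to the same
    # binding of 'i', so after the loop they all see its final value.
    adders = []
    for i in range(3):
        adders.append(lambda x: x + i)
    return adders

def make_adders_copied():
    # Copy-based style: the current value of 'i' is copied into the
    # closure at creation time (here via a default argument).
    adders = []
    for i in range(3):
        adders.append(lambda x, i=i: x + i)
    return adders
```

In the shared version every adder adds 2 (the final value of i); in the copied version they add 0, 1, and 2 respectively. Both are legitimate closure implementations; they just draw the "capture" line at a different place.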
The benefits of that closure are enormous, and that's why closures were
invented, of course; otherwise we could have just continued using plain
old functions with free variables instead. Closures give us referential
transparency, which in turn gives us the ability to use the closed functions
in parallel computation without being sensitive to the order of evaluation;
they allow us to evaluate lazily, since closures are not dependent on the
time of evaluation relative to their environment; and in general they
allow us to reason mathematically with them as pure functions ---
and that's not just a benefit for theorists (I'm an engineer), because
it's just as good for writing game AI as for MIT theorem provers.
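The lazy-evaluation point can be sketched with a thunk: a zero-argument closure whose captured environment lets it be forced at any later time, independent of when or in what order the surrounding code runs (a minimal memoizing delay/force, with names of my choosing, not a standard API):

```python
def delay(f):
    # Wrap 'f' in a thunk. The closure captures 'forced' and 'value',
    # so forcing twice evaluates the body only once (memoization).
    forced = False
    value = None

    def force():
        nonlocal forced, value
        if not forced:
            value = f()
            forced = True
        return value

    return force

calls = []
t = delay(lambda: calls.append("run") or 42)
# Nothing has been evaluated yet; 'calls' is still empty here.
```

Because the thunk closed over everything it needs at creation time, it does not matter when it is eventually forced, which is exactly the independence from evaluation order described above.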
Think I'll stop there. :-)
Existing media are so disconnected from reality that our policy debates
spin around a fantasy world in which the future looks far too much like
the past. http://www.zyvex.com/nanotech/MITtecRvwSmlWrld/article.html