- Subject: Re: Incremental garbage collection coming to Lua soon?
- From: David Jones <drj@...>
- Date: Thu, 07 Aug 2003 09:42:05 +0000
In message <OF8E1610BB.DF71A418-ON05256D7A.006A0834@oxfam.org.uk>, RLake@oxfam.
> >> As deployed the collector is a mostly copying read barrier collector:
> >> the Dijkstra tricolor invariant is maintained by turning grey objects
> >> black whenever the mutator attempts to read a grey object. The
> >> Virtual Memory page protection system is used to implement the read
> >> barrier meaning that the mutator has no extra code to implement the
> >> read barrier.
> > How does this work? Is the VM managed in software, not hardware? Are you
> > suggesting an (optional?) VM type system for Lua so a copying collector
> > could be written?
> I don't think he is suggesting anything :) just describing some
> possibilities. Since there is no ANSI standard way of interfacing
> with VM systems (and indeed no requirement in standard C that
> there even be a VM system), it would not be a plausible strategy
> for Lua.
> However, fwiw, the "usual way" of doing this sort of thing is to
> mark pages as unreferenceable (or unreadable), and then do the work
> when a memory exception is raised by the VM. I'm just guessing, but
> since David says it is a mostly-moving GC, it probably colours all
> grey objects on the page black before returning to the mutator; if so,
> there is no need to field another interrupt on the same page (until
> another object on the page becomes grey).
Exactly. It's a moving collector so it uses a read barrier (technically
there are algorithms which don't, but they're more obscure). In
order to remove the barrier on the page one must turn all grey objects
on the page black; this enables the mutator to be restarted.
Normally scanning an entire page isn't so bad, but if an object on the
page contains weak references then scanning it can prematurely
"promote" those weak references to strong ones, causing lots of
objects to be preserved that would otherwise be reclaimed. In such a case
we emulate the instruction that caused the fault and keep the protection
on the page. This is of course even less portable than the page
protection interfaces and somewhat hairy. And, as it happens, a good
source of bugs in various host operating systems.
> I continue to believe that garbage collection is a Good Idea, that
> it generally improves performance (or at least does not degrade it)
> and that it certainly improves coding time; it would be even better
> if more widely adopted. For example, cooperating with garbage
> collectors could become a priority for C compiler writers, and
> hardware-assisted garbage collection could become more generally
> available; these things would probably happen if there were enough
> demand.
This is the wrong list to say this on, but to my mind the most useful thing
that could be done in current computing environments is to improve
the operating system support. Chiefly this would involve standard
interfaces to control caching and virtual memory, and _fast_ exception
handling.
When a garbage collector reclaims objects it is in a prime position to
tell the OS that those bytes are no longer needed: they can be removed from
the cache _without_ being written to main RAM, and removed from
physical RAM _without_ being written to swap. Probably most memory
managers, not just garbage collectors, could benefit from such an
interface.