- Subject: Re: Thinking about Lua memory management
- From: Nick Barnes <Nick.Barnes@...>
- Date: Mon, 24 Sep 2001 14:04:23 +0100
At 2001-09-21 06:37:51+0000, "John Belmonte" writes:
> Thatcher Ulrich wrote:
> > From a game programming perspective, an incremental collector
> > is more crucial than lower overall overhead using a generational
> > collector.
> I agree with this, but will try to explain it more clearly.
> First let's throw out the term "game programming" as too general. A
> graphical strategy game running on a PC is worlds apart from a first-person
> action game running on a console.
> The people most concerned about gc cpu performance are those writing
> programs with hard real-time limits. For example the main pass of the
> program must complete within a video frame time. Likely rationales for such
> a limit are to avoid jerky animation, to deal with hardware limitations
> (a full-resolution double buffer not being available in interlaced mode), or
> because it's easier to code for a constant frame rate (and we are always in a
> hurry).
> In such a program, with Lua as it is now, you may as well force garbage
> collection every frame (that is, call lua_setgcthreshold(L, 0)). The
> important quantity is the worst-case gc time over the life of the program
> (or critical loop). Forcing garbage collection will result in worst-case gc
> time less than or equal to letting it happen automatically.
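The forced-collection pattern Belmonte describes can also be sketched from the script side. This is only an illustrative sketch, not code from the post: the frame-loop helpers (`update_world`, `render_frame`) are hypothetical, and the call shown is the standard `collectgarbage` function (in Lua 4.0 a full collection is forced with `collectgarbage(0)`; in later versions, `collectgarbage("collect")` or a bare `collectgarbage()`):

```lua
-- Hypothetical per-frame loop that forces a full collection every
-- frame.  This trades higher total GC overhead for a predictable,
-- bounded worst-case pause, paid at a moment the program chooses.
while running do
  update_world(dt)   -- game logic for this frame (hypothetical helper)
  render_frame()     -- draw the frame (hypothetical helper)
  collectgarbage()   -- force a full GC now, while we control the timing
end
```

The point is scheduling, not speed: the same total GC work happens, but it is spread evenly across frames instead of arriving as an occasional large blip.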
As I understand it, some games are able to cope with occasional GC
blips, either by postponing some work (e.g. AI for NPCs, some game
logic, some physics) to later frames, or by simply reducing the work
done in the current frame (e.g. by simplifying the physics for this
single frame).
In such a game, a GC which takes 10 ms once every thousand frames is
very different from a GC which takes 10 ms every frame.