We spent a while testing the allocator, and at least for our usage the memory
is perfectly stable.
The 4K pages were an experiment, but we noticed that in the end all pages were
always full, and it cut the time spent freeing memory in half (which in turn
cut the Lua GC time in half).

200-300K of internal allocator structures per 100MB of allocated memory sounds
pretty good to me; I don't understand the smile :-)

> I assume each pool size would have a most-recently-used stack, only
> allocating a new pool once you've checked the existing pools for space?

I don't really understand the question :P
What do you mean by a most-recently-used stack?

ciao
Alberto

---------------------
Alberto Demichelis
alberto@crytek.de
Crytek Studios GmbH
www.crytek.com
---------------------

-----Original Message-----
From: Nick Trout [mailto:ntrout@rockstarvancouver.com]
Sent: Thursday, 17 October 2002 23:07
To: Multiple recipients of list
Subject: RE: Functions and memory usage



> In our game we managed to reduce fragmentation to 0%.

By your own admission, the fragmentation is not 0%: ''we "waste"
200-300K of memory for every 100MB allocated.'' :-)

> What we do is manage small allocations (512 bytes or smaller)
> and big allocations in different ways.
> For every allocation size between 1 and 512 bytes we allocate
> a 4K page that stores only objects of that one size;
> for example, one page holds only 4-byte chunks, another only
> 32-byte chunks. We round allocation sizes up to a multiple of 4.
> Chunks bigger than 512 bytes are allocated with a normal
> general-purpose allocator using a "best fit" algorithm.
> We "waste" 200-300K of memory for every 100MB allocated.
> With this solution we have an allocator that is almost CPU-free
> for small alloc/free and relatively fast for big alloc/free,
> because the chunks managed by "best fit" are not so many.
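
A minimal sketch of the size-class scheme described above, in C. All names
(Page, small_alloc, new_page) are made up for illustration, not Crytek's
code; malloc() stands in for the "best fit" large-block allocator, and
freeing is omitted (a real implementation would push the chunk back onto
its page's free list, typically aligning pages on 4K boundaries so the
owning page can be found by masking the pointer). One deviation: sizes are
rounded to a multiple of the pointer size rather than 4, so the intrusive
free-list link always fits in a chunk; on the 32-bit targets of 2002 the
two are identical.

#include <stdlib.h>

#define PAGE_SIZE   4096
#define SMALL_MAX   512
#define ALIGN       (sizeof(void *))          /* 4 on 32-bit targets */
#define NUM_CLASSES (SMALL_MAX / ALIGN)

typedef struct Page {
    struct Page *next;        /* next page of the same size class   */
    void        *free_list;   /* intrusive list of free chunks      */
} Page;

static Page *classes[NUM_CLASSES];  /* one page list per size class */

/* Carve a fresh 4K page into a free list of equally sized chunks.  */
static Page *new_page(size_t chunk_size)
{
    Page *p = malloc(PAGE_SIZE);
    if (!p) return NULL;
    p->free_list = NULL;
    char  *base = (char *)(p + 1);
    size_t room = PAGE_SIZE - sizeof(Page);
    for (size_t off = 0; off + chunk_size <= room; off += chunk_size) {
        void *chunk = base + off;
        *(void **)chunk = p->free_list;   /* chain onto free list   */
        p->free_list = chunk;
    }
    return p;
}

void *small_alloc(size_t size)
{
    if (size == 0 || size > SMALL_MAX)
        return malloc(size);  /* big (or zero): general-purpose path */

    /* round up to the size class, as in the scheme above            */
    size_t rounded = (size + ALIGN - 1) & ~(ALIGN - 1);
    size_t idx     = rounded / ALIGN - 1;

    /* look for a page of this class that still has a free chunk     */
    for (Page *p = classes[idx]; p; p = p->next)
        if (p->free_list) {
            void *chunk  = p->free_list;
            p->free_list = *(void **)chunk;  /* pop the free list    */
            return chunk;
        }

    /* every page of this class is full: add a fresh one             */
    Page *p = new_page(rounded);
    if (!p) return NULL;
    p->next = classes[idx];
    classes[idx] = p;
    void *chunk  = p->free_list;
    p->free_list = *(void **)chunk;
    return chunk;
}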

The scheme you suggest would actually enforce some fragmentation, in the
form of unfilled 4K pages. But such a two-level allocator is a good way
of localising fragmentation while keeping the benefits of fast pool
allocation. I assume each pool size would have a most-recently-used
stack, only allocating a new pool once you've checked the existing pools
for space?

Regards,
Nick
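
As for the "most recently used stack" question above, a guess at what it
might mean, again only a hedged sketch in C (pool_alloc and the grow
callback are illustrative names, not from any real allocator): each size
class keeps its pages in MRU order, so the page that last had room is
checked first, and a new page is allocated only once every existing page
in the class has been found full.

#include <stddef.h>

typedef struct Page {
    struct Page *next;
    void        *free_list;   /* NULL when the page is full          */
} Page;

/* Allocate a chunk from the first page with space, moving that page
   to the front of its class list (MRU order) so the next allocation
   usually succeeds on the first page it checks. grow() is called to
   add a new page only when every existing page is full; it must
   return a page with a non-empty free list.                         */
void *pool_alloc(Page **head, Page *(*grow)(void))
{
    Page *prev = NULL;
    for (Page *p = *head; p; prev = p, p = p->next)
        if (p->free_list) {
            if (prev) {                      /* move-to-front        */
                prev->next = p->next;
                p->next = *head;
                *head = p;
            }
            void *chunk  = p->free_list;
            p->free_list = *(void **)chunk;  /* pop the free list    */
            return chunk;
        }

    Page *p = grow();                        /* all pages were full  */
    if (!p) return NULL;
    p->next = *head;
    *head = p;
    void *chunk  = p->free_list;
    p->free_list = *(void **)chunk;
    return chunk;
}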