- Subject: Re: typedef void * (*lua_Alloc) caching
- From: C++ RTMP Server <crtmpserver@...>
- Date: Mon, 4 Apr 2011 11:03:16 +0300
On Apr 4, 2011, at 10:37 AM, Massimo Sala wrote:
> I agree with Luiz.
>
> This test case with Android on a phone seems to be good.
>
> Other test cases are different, and the latest implementations of
> malloc / realloc are good (see nedmalloc / ptmalloc3 / hoard).
Perfectly valid. However, they are not available everywhere, forcing you to roll your own.
>
> Moreover
> - for many applications running, taking memory and not releasing it is
> a nightmare. One application runs better, while the other apps suffer
> from the shortage.
>
> - I fear Lua becoming like Python, a "pac-man" for your memory.
> And releasing memory long after its "mallocs" is not so easy.
> Complex garbage collectors still have many troubles regarding
> refcounting, memory leaks, and so on.
I don't agree on these last two points. We can have nice memory limits (no more than 4 MB of memory, or even less): if the cache reaches a certain limit, just try to reclaim unused memory from the cache, or fall back to standard malloc/free. As for the memory leaks, this is not true at all: when a mempool is designed, memory leaks are simply not allowed. One has to design it with care in mind; memory leaks are not accepted anywhere anyway...
I have done extensive tests with memory allocations, and things look pretty ugly when allocating/deallocating memory that is not PAGE_SIZE aligned. When one requests 564 bytes, for example, the allocator allocates in PAGE_SIZE increments anyway (usually 4096 bytes). So it is a total waste of resources NOT to reuse that buffer in the future: just round the request up to PAGE_SIZE and re-use the buffer when the time comes. Even with this little optimization, things look far brighter.
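To make the idea concrete, here is a minimal sketch (not my actual allocator): a lua_Alloc that rounds every request up to a page and keeps a small free-list of released pages for re-use. PAGE_SZ, MAX_CACHED, cache_t and cached_alloc are made-up names, and 4096 is just the usual page size.

/* Sketch only: a page-rounding, caching lua_Alloc.
 * PAGE_SZ, MAX_CACHED, cache_t, cached_alloc are illustrative names. */
#include <stdlib.h>

#define PAGE_SZ    4096   /* assumed page size */
#define MAX_CACHED 64     /* memory limit: never hold more than 64 pages */

typedef struct {
    void *pages[MAX_CACHED];   /* released single-page blocks kept for re-use */
    int   count;
} cache_t;

static size_t round_up(size_t n) {
    return (n + PAGE_SZ - 1) / PAGE_SZ * PAGE_SZ;
}

/* lua_Alloc-compatible: free when nsize == 0, (re)allocate otherwise. */
static void *cached_alloc(void *ud, void *ptr, size_t osize, size_t nsize) {
    cache_t *c = (cache_t *)ud;

    if (nsize == 0) {                            /* this is a free */
        if (ptr) {
            if (round_up(osize) == PAGE_SZ && c->count < MAX_CACHED)
                c->pages[c->count++] = ptr;      /* keep the page for later */
            else
                free(ptr);                       /* over the limit: give it back */
        }
        return NULL;
    }

    if (ptr && round_up(nsize) <= round_up(osize))
        return ptr;                              /* still fits in the rounded block */

    if (ptr == NULL && round_up(nsize) == PAGE_SZ && c->count > 0)
        return c->pages[--c->count];             /* serve a page from the cache */

    return realloc(ptr, round_up(nsize));        /* fall back to the system allocator */
}

It can be plugged in with lua_newstate(cached_alloc, &cache); a real version would also free whatever is left in the cache when the state is closed, so nothing leaks.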
I totally agree: sometimes the compiler can do wonders and the app runs much smoother without a mempool. But that can be experimented with, and the mempool can be (de)activated at compile time depending on the target platform; see the sketch below.
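Something along these lines is what I have in mind for the compile-time switch (USE_MEMPOOL is a hypothetical build flag, and cached_alloc / cache_t come from the sketch above):

#include <lua.h>
#include <lauxlib.h>

/* USE_MEMPOOL is a hypothetical flag, e.g. -DUSE_MEMPOOL on the
 * platforms where the pooled allocator actually helps. */
static lua_State *open_state(cache_t *cache) {
#ifdef USE_MEMPOOL
    return lua_newstate(cached_alloc, cache);  /* pooled allocator */
#else
    (void)cache;                               /* unused in this configuration */
    return luaL_newstate();                    /* plain system allocator */
#endif
}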
What I have learned so far is that solely relying on the memory allocator, and abusing it in the belief that it will cope with every possible situation, is a very bad practice. Memory allocators are not gods :)
Cheers,
Andrei
>
>
> ciao, Massimo
>
>
>
> On 3 April 2011 20:37, C++ RTMP Server <crtmpserver@gmail.com> wrote:
>> Table 7: Real Scenario Test
>>
>>                       new/delete   static_mem_pool   static_mem_pool
>>                                    (default)         (thread-specific)
>> Linux GCC 3.2.2 [1]      5.87          0.84              0.83
>> Linux GCC 3.4.2 [1]      5.91          2.61              0.77
>> Linux GCC 3.2.2 [2]     12.82          8.77              0.84
>> Linux GCC 3.4.2 [2]     12.73          8.45              0.76
>>
>>
>> http://wyw.dcweb.cn/static_mem_pool.htm
>>
>> The improvements are quite significant, especially in single-threaded environments.
>>
>> On Apr 3, 2011, at 9:31 PM, Luiz Henrique de Figueiredo wrote:
>>
>>>> Is it worth implementing a nice caching mechanism for doing allocations inside the lua_Alloc function?
>>>
>>> I think the accepted wisdom is to avoid trying to outsmart malloc.
>>>
>>>> I've done some synthetic tests, and it looks like there is a lot of improvement, especially when working with Android on a phone.
>>>
>>> Perhaps use a better malloc from the start then?
>>>
>>
>> ------
>> Eugen-Andrei Gavriloaie
>> Web: http://www.rtmpd.com
>>
>>
>>
>
------
Eugen-Andrei Gavriloaie
Web: http://www.rtmpd.com