lua-users home
lua-l archive



>>1) In lmem.h the macros
>>#define luaM_newvector(n,t)      ((t *)luaM_malloc((n)*sizeof(t)))
>>#define luaM_reallocvector(v,n,t)     ((v)=(t *)luaM_realloc(v,(n)*sizeof(t)))
>>can pass a wrong size value to luaM_realloc() if int is 16 bits wide
>>and "n" is big enough.
>luaM_malloc is a macro that calls luaM_realloc, whose declaration is
> void *luaM_realloc (void *block, unsigned long size);
>So, there should be no problems, unless sizeof returns int.
There could be problems if "n" and sizeof() are int or unsigned int,
because their product can exceed the maximum unsigned int value
(which is 65535 with my compiler), even though luaM_realloc() casts the
product to unsigned long.
The cast to unsigned long happens too late.

>>IMO the macros should contains a cast to unsigned long:
>This makes sense, but malloc is defined as
> void *malloc(size_t size);
>and so if sizeof returns int then casting to unsigned long wouldn't help.
>Perhaps a cast to size_t works.
>Could you try this on your example?
The cast to unsigned long resolves the problem because luaM_realloc()
contains a test to check whether "size" is too big:
size_t s = (size_t)size;
if (s != size)
   lua_error("memory allocation error: block too big");
This correctly gives the error message instead of hanging the program.
Let me explain in more detail: in my example, Lua hangs when it grows the
table from 3203 to 6421 elements. In ltable.c, hashnodecreate() contains
the line
Node *v = luaM_newvector(nhash, Node);
which calls luaM_newvector() with nhash = 6421 (int) and sizeof(Node) = 20
(unsigned int), so luaM_malloc((n)*sizeof(t)) is called with 6421*20U ->
62884U instead of 128420UL.
I haven't tried the cast to size_t because I think it has the same
problem as int (size_t is unsigned int on my compiler).

I still think that the macros luaM_newvector() and luaM_reallocvector()
must compute the product of "n" and sizeof() as unsigned long instead
of int or size_t.
Lua 4.0 seems to have the same problem.

I know that I can't use strings or grow a table beyond 64K, and I don't
expect Lua to allocate a big table in multiple chunks.
I think that Lua should give an error message when luaM_realloc() can't
allocate the memory, instead of allocating a block of the wrong size and
then hanging.

Can you (lhf or ri) tell me whether the final version of Lua 4.0 will fix
these checks, or whether I should do it in my own copy of Lua?

>>  - lbuiltin.c->luaB_predefine() with the macro DEBUG defined:
>>luaB_opentests() gives "unresolved external" (there is also the
>>function's prototype at line 44).
>luaB_opentests is only used here, for testing.
>But we are considering distributing it too.
I mean that if I compile with DEBUG defined to enable the internal
debugging, the linker can't build the executable because luaB_opentests()
doesn't exist.

>>IIRC, this warning was already posted but related to luac and without
>I don't recall this.
Emails from Ashley Fryer at Sat Apr 29, 2000 7:29am and 7:36am about the
missing test.c, opcode.c, opt.c (I suppose that test.c contains
luaB_opentests()).

>>1) Sometimes I need to return the elements of a table as single
>>variables. For examples:
>>function read(...)
>>        dosomething()
>>        local retlist = call(%read, arg, "p")
>>        dosomethingelse()
>>        return tunpack(retlist)  [see below for tunpack()]
>>end
>>a, b = read("*n", "*n")
>"read" already does that. I guess your point is the dosomething() and
>dosomethingelse(). I hope that dosomethingelse() uses retlist :-)
No, dosomethingelse() doesn't use retlist. In my real new read(),
dosomething() slows down the CPU clock speed and dosomethingelse() speeds
it back up, without using retlist. I redefined Lua's read() with my new
read(), instead of using another function name, to be sure that all calls
to read() execute my new read().

> Is there a better way to do this ? (I would like to avoid the
> recursion's overhead and to destroy the table)
IMO Roberto I.'s solution is the best (and the quickest).
Luiz C.S.'s solution works fine, but it is slow, probably due to the
string concatenation and the compilation overhead.

Many thanks for your help.