Hi Roberto
I have tested the two hashint macros. When inserting 10000000 keys, the new macro costs about 2.7s, while the old macro costs about 4s, so the new one is indeed faster.
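For reference, here is a sketch of the two definitions I am comparing (assuming the "old" macro is the hashpow2-based hashint and the "new" one the hashmod-based variant, with hashpow2, hashmod and l_castS2U as defined in ltable.c / llimits.h):

/* sketch for comparison, not the exact ltable.c text */
#define hashint_old(t,i)   hashpow2(t, i)            /* index masked to a power-of-2 size */
#define hashint_new(t,i)   hashmod(t, l_castS2U(i))  /* modulo an odd size; a 64-bit
                                                        division when lua_Integer is 64-bit */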
I also tested three hashint macros using auto-incrementing integer keys. This is the test Lua code:
function alloc_id(index)
    -- keep only the low 32 bits of the counter
    return index & 0xffffffff
end

local src = {}
local index = os.time()
for i = 1, 10000000 do
    local id = alloc_id(index)
    index = index + 1
    table.insert(src, id)
end

local dst = {}
local begin = os.clock()
for _, v in ipairs(src) do
    dst[v] = 1
end
print("cost " .. os.clock() - begin)
And the results are:
1. hashpow2(t, i) ----> 2.5s
2. hashmod(t, l_castS2U(i)) ----> 3.4s
3. hashmod(t, cast_uint(l_castS2U(i) & (~0u)) ^ cast_uint(l_castS2U(i) >> (sizeof(int) * 8 - 8) >> 8)) ----> 2.9s
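To see what the third variant computes, here is a small standalone demo (hypothetical code, not taken from Lua itself; it assumes a 64-bit lua_Integer and a 32-bit int). It folds the key's high bits into its low 32 bits with xor before the 32-bit modulo, and it splits the right shift into two steps (24 then 8 when int is 32-bit) so that no single shift reaches the full width of the operand, which would be undefined behaviour if lua_Integer were only as wide as int:

#include <stdio.h>
#include <stdint.h>

typedef int64_t  lua_Integer;    /* assumption: 64-bit integers, as in default Lua 5.4 */
typedef uint64_t lua_Unsigned;

#define l_castS2U(i)  ((lua_Unsigned)(i))
#define cast_uint(i)  ((unsigned int)(i))

/* fold a 64-bit key into an unsigned int: low 32 bits xor high 32 bits */
static unsigned int fold_key(lua_Integer i) {
  return cast_uint(l_castS2U(i) & (~0u)) ^
         cast_uint(l_castS2U(i) >> (sizeof(int) * 8 - 8) >> 8);
}

int main(void) {
  /* the third key differs from the first only above bit 31,
     so a plain truncation to 32 bits would make them collide */
  lua_Integer keys[] = {1615471534, 1615471535,
                        ((lua_Integer)1 << 32) + 1615471534};
  unsigned int tsize = 1u << 20;   /* pretend size of the hash part */
  for (int k = 0; k < 3; k++) {
    unsigned int h = fold_key(keys[k]);
    /* same bucket computation as hashmod: modulo an odd number */
    printf("key %lld -> folded %u -> bucket %u\n",
           (long long)keys[k], h, h % ((tsize - 1) | 1));
  }
  return 0;
}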
So maybe the third method deals better with the different scenarios.
Thank you.
At 2021-03-11 22:05:34, "Roberto Ierusalimschy" <roberto@inf.puc-rio.br> wrote:
>> I tried it, and the running time of the test code dropped to about zero seconds.
>> I think it would be more adaptable to various situations if the Lua code were changed like that.
>
>Maybe a slightly better option would be this one:
>
>#define hashint(t,i) \
> hashmod(t, cast_uint(l_castS2U(i) & (~0u)) ^ \
> cast_uint(l_castS2U(i) >> (sizeof(int) * 8 - 8) >> 8));
>
>It uses an unsigned 32-bit division, which is cheaper than an unsigned
>64-bit division. (If lua_Integer == int, the compiler should be able
>to optimize this out.)
>
>-- Roberto