On Jan 2, 2012, at 7:56 AM, Matthew Wild wrote:

> On 2 January 2012 14:54, David Kolf <kolf@gmx.de> wrote:
>> Leo Razoumov wrote:
>>> 
>>> On Sun, Jan 1, 2012 at 12:48, Vladimir Protasov<eoranged@ya.ru>  wrote:
>>>> 
>>>> To avoid the problem we should just generate a random salt at Lua startup,
>>>> then use it during hash generation. This would prevent an attacker from
>>>> guessing which values will be placed in the same bucket.
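
A minimal sketch of what such a salted hash could look like, modelled in pure
Lua 5.2 with the standard bit32 library (the real hash lives in C, in
lstring.c; the salt source and the names here are only illustrative):

    -- Salt chosen once at startup; the intent is that an attacker can no
    -- longer predict which bucket a given string will land in.
    local salt = math.random(0, 2^31 - 1)

    -- Pure-Lua model of Lua's string hash loop with the salt mixed into
    -- the initial value.  It keeps the skipping rule of lstring.c:
    -- step = floor(len / 32) + 1, so only about 32 bytes are ever read.
    local function salted_hash(s)
      local h = bit32.bxor(#s, salt)
      local step = math.floor(#s / 32) + 1
      local i = #s
      while i >= step do
        h = bit32.bxor(h, bit32.lshift(h, 5) + bit32.rshift(h, 2) + s:byte(i))
        i = i - step
      end
      return h
    end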
>>> 
>>> 
>>> A salt would not help as long as the hash keeps ignoring characters the
>>> way Lua's does. Two strings that differ only in characters ignored by
>>> the hash function still hash to the same value.
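
To see that point concretely, here is a small self-contained model of the
unsalted hash (again pure Lua 5.2 with bit32; only the skipping rule matters):

    -- Model of Lua's stock string hash: for a string of length len, only
    -- every step-th byte counted from the end is read,
    -- with step = floor(len / 32) + 1.
    local function model_hash(s)
      local h = #s
      local step = math.floor(#s / 32) + 1
      local i = #s
      while i >= step do
        h = bit32.bxor(h, bit32.lshift(h, 5) + bit32.rshift(h, 2) + s:byte(i))
        i = i - step
      end
      return h
    end

    local a = string.rep("x", 100)         -- length 100 => step 4
    local b = "y" .. a:sub(2)              -- differs only at byte 1, which is skipped
    print(model_hash(a) == model_hash(b))  -- true: different strings, same bucket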
>> 
>> 
>> I considered picking the 16 to 32 characters that aren't ignored at random,
>> too. But then I noticed that an attacker only needs to pick 2 or 3
>> characters that aren't used for the hash in order to generate enough
>> collisions to cause trouble -- the probability of hitting ignored chars at
>> random is way too high.
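
A rough back-of-the-envelope check of that claim (the concrete numbers are
illustrative assumptions): even if the 32 hashed positions were chosen at
random, a few attacker-chosen positions are very likely to all fall on
ignored bytes, and each free byte multiplies the number of colliding strings
by the size of the usable alphabet.

    -- Probability that k attacker-chosen positions in a string of length
    -- len all miss the `hashed` randomly chosen positions, and how many
    -- colliding variants those k free bytes would allow.
    local function miss_probability(len, hashed, k)
      local p = 1
      for i = 0, k - 1 do
        p = p * (len - hashed - i) / (len - i)
      end
      return p
    end

    local len, hashed, alphabet = 1024, 32, 94   -- 94 printable ASCII bytes
    for k = 1, 3 do
      print(k, miss_probability(len, hashed, k), alphabet ^ k)
    end
    -- For k = 3 the probability is still above 0.9, and 94^3 is roughly
    -- 830,000 strings that would all land in one bucket.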
>> 
>> And Lua has to skip chars or else the operation file:read("*a") would be a
>> problem for big files.
>> 
>> So I am afraid that Lua still needs to raise an error when there is an
>> extreme number of collisions (as Mark suggested).
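
A toy illustration of that idea in plain Lua (this is not Lua's actual table
code, just the shape of the check): a chained hash table that raises an error
once any single chain grows past a fixed limit.

    local MAX_CHAIN = 100   -- arbitrary threshold for "extreme"

    local function new_table(nbuckets, hashfn)
      return { n = nbuckets, hash = hashfn, buckets = {} }
    end

    local function insert(t, key, value)
      local b = t.hash(key) % t.n
      local chain = t.buckets[b]
      if not chain then
        chain = {}
        t.buckets[b] = chain
      end
      chain[#chain + 1] = { key = key, value = value }
      if #chain > MAX_CHAIN then
        error("too many hash collisions (possible attack)", 2)
      end
    end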
> 
> Or handle collisions more efficiently.
> 
> Consider that throwing an error still leaves Lua network services open
> to an effective denial-of-service attack. DoS is about one or more users
> impairing quality of service for other users of the service. The original
> post achieves this by hogging CPU time. The proposed solution would
> essentially allow an attacker to prevent certain sets of strings from
> being sent by other users of the service, probably leaving the server
> with little choice but to close their connections.
> 
> For example, a theoretical attack against a Lua HTTP server would be
> to make requests with lots of headers that collide with standard HTTP
> header names/values. I could make requests to the server containing
> user-agent strings that match Chrome's user-agent string *except* for
> the skipped characters. When a legitimate Chrome user then hits the
> server, an error would be thrown because the server could no longer
> store the user-agent string of a perfectly valid request.
> 
> I'm not saying that throwing isn't a satisfactory solution in some or
> even many cases, but in certain applications it can be almost as bad
> as the original problem.

But you can get around that by triggering a GC after a failure, possibly retrying the parse after the GC, and only giving up if it fails again. Presumably the malicious data isn't going to survive a GC.
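
Something along these lines, where parse_request is a hypothetical stand-in
for whatever the server does with the incoming data:

    -- Try the parse; on a collision error, run a full GC cycle to flush
    -- the (presumably unreachable) malicious strings, then retry once
    -- before giving up on the request for good.
    local function parse_with_retry(parse_request, data)
      local ok, result = pcall(parse_request, data)
      if ok then return result end

      collectgarbage("collect")        -- drop strings only the attacker created
      ok, result = pcall(parse_request, data)
      if ok then return result end

      error(result, 0)                 -- still colliding: reject the request
    end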

Mark