That's surprising: with such a fast hash it is far too easy to create collisions in very common cases, like indexing keys that share long common prefixes and suffixes.
This then suggests that applications need to create their own objects that compute and store their own hashes, in addition to the string value of the object.
That's why I suggested that tables could have their own meta-function (defined in their metatable) for their accessors, which would compute the hash they need. String objects do contain a storage field for their computed hash, but once a string has been hashed by one function it can no longer be interned together with strings hashed by a different function.
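Until something like that exists, the workaround has to live entirely in the application. Here is a minimal sketch of what it could look like; the wrapper layout and `hashfn` are illustrative assumptions, not an existing API:

```lua
-- Application-side interning with the application's own hash function,
-- since a VM-interned string cannot carry a second, app-chosen hash.
local function make_interner(hashfn)
  local pool = {}                       -- app hash -> list of wrappers
  return function(s)
    local h = hashfn(s)
    local list = pool[h]
    if not list then list = {}; pool[h] = list end
    for _, w in ipairs(list) do
      if w.str == s then return w end   -- already interned
    end
    local w = { str = s, hash = h }     -- object storing its own hash
    list[#list + 1] = w
    return w
  end
end
```

Equal strings then map to the same wrapper object, so wrappers can be compared and used as table keys by identity, which is exactly what interning buys.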
The choice of the "middle" 4 characters for strings longer than 12 seems quite arbitrary (I suppose that all strings of at most 12 chars are simply hashed in full, one byte after another).
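To make the collision risk concrete, here is a simplified sketch of a hash in that spirit; it is not LuaJIT's actual code, and the mixing formula and exact sampled offsets are my own assumptions:

```lua
local bit = require("bit")  -- LuaJIT's bit operations library

-- Illustrative sampling hash: strings of at most 12 chars are hashed
-- in full, longer strings only from the first, middle and last 4 bytes.
local function sample_hash(s)
  local len = #s
  local idx
  if len <= 12 then
    idx = {}
    for i = 1, len do idx[i] = i end
  else
    local mid = math.floor(len / 2) - 1
    idx = { 1, 2, 3, 4,
            mid, mid + 1, mid + 2, mid + 3,
            len - 3, len - 2, len - 1, len }
  end
  local h = len
  for _, i in ipairs(idx) do
    -- Lua-style mixing step; the exact formula does not matter here
    h = bit.bxor(h, bit.lshift(h, 5) + bit.rshift(h, 2) + s:byte(i))
  end
  return h
end

-- Equal length, identical first/middle/last 4 bytes, different middles:
local a = "ABCD" .. ("a"):rep(10) .. "MMMM" .. ("b"):rep(10) .. "WXYZ"
local b = "ABCD" .. ("x"):rep(10) .. "MMMM" .. ("y"):rep(10) .. "WXYZ"
print(a ~= b, sample_hash(a) == sample_hash(b))  --> true  true
```

Any two equal-length keys that agree on the twelve sampled bytes collide, no matter what the remaining bytes contain.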
Common cases where this won't work are various database objects, e.g. indexed timestamps if they are not compacted into a binary number format.
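For instance, reusing `sample_hash` from the sketch above, two textual timestamps of the same length can easily disagree only in bytes the sampler never reads:

```lua
-- These differ in the date, hour, seconds and fractional digits, but
-- all of those fall outside the sampled first/middle/last 4 bytes.
local t1 = "2024-01-15T10:23:45.123456+00:00"
local t2 = "2024-03-09T22:23:41.987654+00:00"
print(sample_hash(t1) == sample_hash(t2))  --> true
```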
The only solution I see would be for applications to compute a hash of the string themselves and prepend it to the string (but then they have to be aware of the placement of the significant bytes). And the 12-byte limit means that such hashes cannot distinguish more than 96 bits, while most secure hashes are longer (at least 128 bits). If the application computes a secure hash, it has to compact it to 96 bits, at least by XOR-folding for MD5/SHA-1 (for SHA-2 or SHA-3 this is probably not needed and the digest can simply be truncated, but then there is little point in using SHA-2/SHA-3 in the first place).
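A sketch of that folding; the `md5` module (with `md5.sum` returning the 16 raw digest bytes) is an assumed third-party library, not part of Lua itself:

```lua
local bit = require("bit")
local md5 = require("md5")  -- assumed third-party module

-- Fold a 16-byte (128-bit) digest to 12 bytes (96 bits): XOR the last
-- four bytes onto the first four so every digest bit still matters.
local function fold96(digest)
  local head = {}
  for i = 1, 4 do
    head[i] = string.char(bit.bxor(digest:byte(i), digest:byte(i + 12)))
  end
  return table.concat(head) .. digest:sub(5, 12)
end

-- Prepend the folded digest to the key, per the suggestion above.
local function hashed_key(s)
  return fold96(md5.sum(s)) .. s
end
```

Note that prepending only guarantees that the first four sampled bytes are digest material; the middle and last samples still come from the key itself, which is the "placement of significant bytes" caveat above.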