|
On 08.07.2013 15:15, Roberto Ierusalimschy wrote:
>> That may not do what is expected, because it involves implementation-defined behavior:
>>
>>    3.2.1.2 Signed and unsigned integers
>>    [...] When an integer is demoted to a signed integer with smaller
>>    size, or an unsigned integer is converted to its corresponding
>>    signed integer, if the value cannot be represented the result is
>>    implementation-defined.
>>
>> If I remember right, you can work around this with a memcpy() from the unsigned variable to the signed variable.
I can't say for C90, but in C99 and up this won't work: on one's complement or sign-magnitude architectures, or if the signed type has more padding bits than the corresponding unsigned type, the result would simply be wrong. In the latter case you could even produce trap representations by accident. (The gory details are in C99 "6.2.6.2 Representations of types -> Integer types".)
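To make the failure mode concrete, here is a minimal sketch of the memcpy() workaround in question (the function name is mine):

```c
#include <limits.h>
#include <string.h>

/* Sketch of the memcpy() workaround discussed above: copy the bits of
 * an unsigned int into an int unchanged. This merely reinterprets the
 * representation, so on one's complement or sign-magnitude machines,
 * or in the presence of padding bits, the result is wrong (or even a
 * trap representation), exactly as described above. */
static int bitcopy_to_int(unsigned u) {
    int i;
    /* sizeof(int) == sizeof(unsigned) is guaranteed for corresponding
     * signed/unsigned types, so the copy itself is safe. */
    memcpy(&i, &u, sizeof i);
    return i;
}
```

On the two's-complement machines everybody actually uses, `bitcopy_to_int(UINT_MAX)` yields -1; the point above is that the standard does not promise this.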
> Before trying to solve a problem, let us be sure there is a problem. Has anyone ever used a C implementation where the conversion from unsigned int to int did not have the "expected" behavior (keep the bits unchanged)?
Both gcc[1] and msvc[2] seem to have the required behavior. According to Linus Torvalds[3], this is the only sane choice for two's complement machines, and Wikipedia says that by now virtually everyone has adopted two's complement. So we can probably risk waiting for the first bug report, but IMHO this would be a good use case for an `assert` somewhere.
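One way such an assert could look (this exact check is my suggestion, not something already in the code): verify once that the implementation-defined conversion really does keep the bits unchanged:

```c
#include <assert.h>
#include <limits.h>

/* Sanity check (my sketch): assert that converting unsigned to int
 * keeps the bit pattern, i.e. behaves like two's-complement
 * wrap-around. Both conversions below are implementation-defined,
 * which is precisely why they are worth asserting once. */
static void assert_sane_unsigned_to_int(void) {
    assert((int)UINT_MAX == -1);
    assert((int)(unsigned)INT_MIN == INT_MIN);
}
```

Calling this once at startup would turn a silent portability problem on an exotic machine into the requested bug report.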
There is also this stackoverflow solution[4], but AFAICT it is only guaranteed to work for C99 and up ...
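For reference, the C99-safe conversion from that stackoverflow question looks roughly like this (my transcription); every operation is well-defined because the unsigned arithmetic wraps and the final cast only ever sees representable values:

```c
#include <limits.h>
#include <stdlib.h>

/* C99-portable unsigned -> int conversion (after the stackoverflow
 * link above): avoid the implementation-defined direct conversion by
 * mapping out-of-range values through well-defined unsigned math. */
static int unsigned_to_int(unsigned u) {
    if (u <= (unsigned)INT_MAX)
        return (int)u;              /* value fits, conversion is exact */
    if (u >= (unsigned)INT_MIN)    /* u corresponds to [INT_MIN, -1]  */
        return (int)(u - (unsigned)INT_MIN) + INT_MIN;
    /* No int has this representation (possible with padding bits or a
     * narrower INT_MIN on non-two's-complement machines). */
    abort();
}
```

On a two's-complement machine the compiler can collapse all of this to a plain bit copy, so the portability costs nothing in practice.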
[1]: http://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html#Integers-implementation
[2]: http://msdn.microsoft.com/en-us/library/0eex498h.aspx
[3]: http://yarchive.net/comp/linux/signed_unsigned_casts.html
[4]: http://stackoverflow.com/questions/13150449/efficient-unsigned-to-signed-cast-avoiding-implementation-defined-behavior
> -- Roberto
Philipp