Archive for the ‘C’ Category

Integer conversions in C

In C on August 8, 2011 by Matt Giuca

In light of a recent bug in crypt_blowfish highlighted by Steve Gibson on the Security Now podcast (episode #311), I read up on signed/unsigned conversions in C. Wow, is it complex. Here are some things you might not know about integer conversions in C (relating to converting between different bit widths and signedness). (Note: I, like the C specification, use the word “integer” to refer to any type that represents whole numbers, and the word “int” to refer to the C type of the same name.)

  • The type “int” is equivalent to “signed int”. However, it is implementation-defined whether plain “char” behaves like “signed char” or “unsigned char” (formally it is a distinct type from both). If you care, you have to be specific; this is probably what caused the crypt_blowfish bug, because a variable was declared as “char” when it really needed to be unsigned char.
  • Converting between unsigned integers of different sizes is always well-defined (for example, converting from an unsigned int to unsigned char takes the value % 256, assuming an 8-bit char type).
  • However, a narrowing conversion to a signed integer type is implementation-defined if the value cannot be represented in the new type (for example, converting the value 256 to a signed char, assuming an 8-bit char type): the standard says the result is implementation-defined or an implementation-defined signal is raised. In almost all compilers you can assume two’s complement format, which means the behaviour is nearly uniform, but the language standard itself does not mandate two’s complement representation for signed integers, and therefore cannot pin down the result of narrowing a signed value. Arithmetic that overflows a signed type is worse still: that is undefined behaviour.
  • Similarly, converting from signed to unsigned is always well-defined (the value is taken modulo 2^N for an N-bit unsigned type), but converting from unsigned to signed is implementation-defined if the value is too large to fit. (Converting from unsigned to a signed type with more bits is always well-defined, because every value fits.)
  • Converting a signed value to an unsigned value with more bits (a widening conversion) is equivalent to first making the signed value wider (preserving the value) and then converting it to unsigned (taking the modulo). For example, converting a signed char -38 to an unsigned short results in 65536 - 38 = 65498, assuming a 16-bit short type. If you think about it, this isn’t obvious, because it could have been the other way around (first convert to unsigned, then widen, so in the above example you would get 256 - 38 = 218), but I think the way they chose makes more sense. In two’s complement notation, this is a sign-extension (and the alternative would be zero-extension).
  • For an operation A * B (“*” represents any binary operator, not necessarily multiplication), if A is signed and B is unsigned (and the unsigned type has at least the rank of the signed type), A gets converted to an unsigned number. This is quite unintuitive to me. I would have thought that any operation involving a signed number would be performed as a signed operation, but C seems to favour unsigned numbers (I presume because the result of that conversion is well-defined). This means that if A is a signed char and B is an unsigned int, A is first converted to an unsigned int, which in two’s complement terms is a sign-extension. (The exception is when the signed type can represent every value of the unsigned type; then the unsigned operand is converted to the signed type instead.) The sketch after this list exercises a few of these rules.
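
To make the rules concrete, here is a small sketch (assuming the usual 8-bit char, 16-bit short and 32-bit int; the variable names are mine, not taken from any real code) that exercises a few of the conversions described above:

    #include <stdio.h>

    int main(void)
    {
        /* Widening signed -> unsigned: -38 is reduced modulo 2^16,
           i.e. sign-extended, giving 65536 - 38 = 65498. */
        signed char sc = -38;
        unsigned short us = sc;
        printf("%u\n", (unsigned)us);            /* 65498 */

        /* Narrowing unsigned -> unsigned: always well-defined,
           value % 256 for an 8-bit unsigned char. */
        unsigned int big = 300;
        unsigned char small = big;
        printf("%u\n", (unsigned)small);         /* 44 */

        /* Mixed signed/unsigned operator: the signed operand is
           converted to unsigned, so -1 becomes a huge value and the
           comparison is false. */
        int a = -1;
        unsigned int b = 1;
        printf("%s\n", a < b ? "less" : "not less");   /* not less */

        return 0;
    }

The last case is the classic trap from the final bullet: the -1 is converted to unsigned int before the comparison, so it compares as a very large positive number.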

Source: ISO/IEC 9899:TC3 (the C99 draft standard from 2007), §6.3.1

You can see the bug here (in the function BF_set_key). There are two variables involved: unsigned int tmp and char* ptr. Because the code doesn’t specify whether ptr points to an unsigned or signed char, the implementation is allowed to choose (and on most mainstream platforms, such as x86, plain char is signed). The killer line is “tmp |= *ptr;” which, according to the above rules and assuming a signed char, converts *ptr to an unsigned int by sign-extension. This is very bad: whenever the byte has its high bit set (i.e. a non-ASCII password character), the bitwise OR sets all of the bits of tmp above the low 8 to 1, when the programmer expected them to be left alone, causing a massive weakening of the hash. The bug fix explicitly casts *ptr to an unsigned char first, and funnily enough includes a switch “sign_extension_bug” to re-enable the old, buggy behaviour in case you want your old hashes to match!
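
Here is a minimal sketch of the same pattern (not the actual crypt_blowfish code; the byte 0xA3 is just an arbitrary example with the high bit set), assuming a platform where plain char is signed:

    #include <stdio.h>

    int main(void)
    {
        char c = '\xA3';     /* a byte with the high bit set; negative
                                where plain char is signed */
        unsigned int tmp = 0;

        /* The buggy pattern: c is converted to unsigned int by
           sign-extension, so the OR also sets the top 24 bits of tmp. */
        tmp |= c;
        printf("buggy: %08x\n", tmp);    /* ffffffa3 on such a platform */

        /* The fix: go through unsigned char first, so only the low
           8 bits are ORed in. */
        tmp = 0;
        tmp |= (unsigned char)c;
        printf("fixed: %08x\n", tmp);    /* 000000a3 */

        return 0;
    }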

Fun with C operators

In C on December 13, 2010 by Matt Giuca

Quick quiz: What does this C code print?

    int a, b;
    a = 1;
    b = 2;
    printf("%d\n", a+++ + +++b);
    printf("%d\n", a);
    printf("%d\n", b);

Answer:

   4
   2
   3

What the heck is a “+++” operator? It turns out the C tokeniser is quite liberal with its whitespace (intentionally), and it always grabs the longest token it can (“maximal munch”), which can create some strange “ambiguities”. There are actually four operators in play here: x++ (post-increment), ++x (pre-increment), +x (unary plus) and x + y (addition). If you haven’t seen it, unary plus is analogous to -x (unary minus), only completely useless: unary minus inverts the sign of a number, while unary plus keeps it the same. So the expression “a+++ + +++b” is tokenised as “a ++ + + ++ + b”, which can be formatted more readably as:

    (a++) + (+(++(+b)))

(Note: the lone “+” in the middle, the one that looks like the addition in the original, is actually a unary plus; the real addition is the “+” left over at the end of “a+++”.) So, this code reads: “apply unary plus to b, pre-increment the result, apply unary plus again, then add that to the (old) value of a, and post-increment a.” Hence its value is 1 + (2 + 1) = 4, and the values of both a and b are incremented. (Pedantic aside: the operand of ++ is supposed to be a modifiable lvalue, and +b isn’t one, so a strict compiler may well reject this expression; the tokenisation is the fun part.)
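
For a smaller sketch of the same maximal-munch tokenisation (and one a compiler will happily accept), a+++b is lexed as a ++ + b, i.e. (a++) + b rather than a + (++b):

    #include <stdio.h>

    int main(void)
    {
        int a = 1, b = 2;

        /* Maximal munch: "a+++b" is lexed as "a ++ + b",
           i.e. (a++) + b, not a + (++b). */
        int c = a+++b;

        printf("%d %d %d\n", c, a, b);   /* prints "3 2 2" */
        return 0;
    }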