I am doing some work in embedded C with an accelerometer that returns data as a 14-bit two's complement number. I am storing this result directly in a uint16_t. Later in my code I am trying to convert this "raw" form of the data into a signed integer to work with in the rest of my code.
I am having trouble getting the compiler to understand what I am trying to do. In the following code I check whether the sign bit (bit 13, i.e. the 14th bit) is set, meaning the number is negative, and then I want to invert the bits and add 1 to get the magnitude of the number.
int16_t fxls8471qr1_convert_raw_accel_to_mag(uint16_t raw, enum fxls8471qr1_fs_range range) {
  int16_t raw_signed;
  if(raw & _14BIT_SIGN_MASK) {
    // Convert 14 bit 2's complement to 16 bit 2's complement
    raw |= (1 << 15) | (1 << 14); // 2's complement extension
    raw_signed = -(~raw + 1);
  }
  else {
    raw_signed = raw;
  }
  uint16_t divisor;
  if(range == FXLS8471QR1_FS_RANGE_2G) {
    divisor = FS_DIV_2G;
  }
  else if(range == FXLS8471QR1_FS_RANGE_4G) {
    divisor = FS_DIV_4G;
  }
  else {
    divisor = FS_DIV_8G;
  }
  return ((int32_t)raw_signed * RAW_SCALE_FACTOR) / divisor;
}
This code unfortunately doesn't work. The disassembly shows me that for some reason the compiler is optimizing out my statement raw_signed = -(~raw + 1);. How do I achieve the result I desire?
The math works out on paper, but for some reason it feels like the compiler is fighting me :(.