What I'm after is something I can feed a number into and it will return the highest-order bit. I'm sure there's a simple way. Below is an example output (left is the input):
1 -> 1
2 -> 2
3 -> 2
4 -> 4
5 -> 4
6 -> 4
7 -> 4
8 -> 8
9 -> 8
...
63 -> 32
 
    
    From Hacker's Delight:
int hibit(unsigned int n) {
    n |= (n >>  1);
    n |= (n >>  2);
    n |= (n >>  4);
    n |= (n >>  8);
    n |= (n >> 16);
    return n - (n >> 1);
}
This version is for 32-bit ints, but the logic can be extended to 64 bits or more.
 
    
fls() compiles down to a hardware instruction on many architectures, so this is probably the simplest, fastest way of doing it.
1<<(fls(input)-1)
 
    
    This should do the trick.
int hob (int num)
{
    if (!num)
        return 0;
    int ret = 1;
    while (num >>= 1)
        ret <<= 1;
    return ret;
}
hob(1234) returns 1024
hob(1024) returns 1024
hob(1023) returns 512
 
    
Like obfuscated code? Try this:
1 << (int)log2(x)
 
    
A little late to this party, but given a modern GCC as the compiler, the simplest solution I found is simply:
static inline int get_msb32 (register unsigned int val)
{
  return 32 - __builtin_clz(val);
}
static inline int get_msb64 (register unsigned long long val)
{
  return 64 - __builtin_clzll(val);
}
It's even relatively portable (at the very least it will work on any GCC platform).
 
    
Continually removing the low-order bit comes to mind...
int highest_order_bit( int x )
{
    int y = x;
    do { 
        x = y;
        y = x & (x-1); //remove low order bit
    }
    while( y != 0 );
    return x;
}
 
    
     
    
    The linux kernel has a number of handy bitops like this, coded in the most efficient way for a number of architectures. You can find generic versions in include/asm-generic/bitops/fls.h (and friends), but see also include/asm-x86/bitops.h for a definition using inline assembly if speed is of the essence, and portability is not.
 
    
     
    
    This can easily be solved with existing library calls.
int highestBit(int v){
  return v ? 1 << (fls(v) - 1) : 0;  // fls() is 1-indexed; a zero input has no set bit
}
The Linux man page gives more details on this function and its counterparts for other input types.
A fast way to do this is via a look-up table. For a 32-bit input and an 8-bit look-up table, it requires only 4 iterations:
int highest_order_bit(int x)
{
    static const int msb_lut[256] =
        {
            0, 0, 1, 1, 2, 2, 2, 2, // 0000_0000 - 0000_0111
            3, 3, 3, 3, 3, 3, 3, 3, // 0000_1000 - 0000_1111
            4, 4, 4, 4, 4, 4, 4, 4, // 0001_0000 - 0001_0111
            4, 4, 4, 4, 4, 4, 4, 4, // 0001_1000 - 0001_1111
            5, 5, 5, 5, 5, 5, 5, 5, // 0010_0000 - 0010_0111
            5, 5, 5, 5, 5, 5, 5, 5, // 0010_1000 - 0010_1111
            5, 5, 5, 5, 5, 5, 5, 5, // 0011_0000 - 0011_0111
            5, 5, 5, 5, 5, 5, 5, 5, // 0011_1000 - 0011_1111
            6, 6, 6, 6, 6, 6, 6, 6, // 0100_0000 - 0100_0111
            6, 6, 6, 6, 6, 6, 6, 6, // 0100_1000 - 0100_1111
            6, 6, 6, 6, 6, 6, 6, 6, // 0101_0000 - 0101_0111
            6, 6, 6, 6, 6, 6, 6, 6, // 0101_1000 - 0101_1111
            6, 6, 6, 6, 6, 6, 6, 6, // 0110_0000 - 0110_0111
            6, 6, 6, 6, 6, 6, 6, 6, // 0110_1000 - 0110_1111
            6, 6, 6, 6, 6, 6, 6, 6, // 0111_0000 - 0111_0111
            6, 6, 6, 6, 6, 6, 6, 6, // 0111_1000 - 0111_1111
            7, 7, 7, 7, 7, 7, 7, 7, // 1000_0000 - 1000_0111
            7, 7, 7, 7, 7, 7, 7, 7, // 1000_1000 - 1000_1111
            7, 7, 7, 7, 7, 7, 7, 7, // 1001_0000 - 1001_0111
            7, 7, 7, 7, 7, 7, 7, 7, // 1001_1000 - 1001_1111
            7, 7, 7, 7, 7, 7, 7, 7, // 1010_0000 - 1010_0111
            7, 7, 7, 7, 7, 7, 7, 7, // 1010_1000 - 1010_1111
            7, 7, 7, 7, 7, 7, 7, 7, // 1011_0000 - 1011_0111
            7, 7, 7, 7, 7, 7, 7, 7, // 1011_1000 - 1011_1111
            7, 7, 7, 7, 7, 7, 7, 7, // 1100_0000 - 1100_0111
            7, 7, 7, 7, 7, 7, 7, 7, // 1100_1000 - 1100_1111
            7, 7, 7, 7, 7, 7, 7, 7, // 1101_0000 - 1101_0111
            7, 7, 7, 7, 7, 7, 7, 7, // 1101_1000 - 1101_1111
            7, 7, 7, 7, 7, 7, 7, 7, // 1110_0000 - 1110_0111
            7, 7, 7, 7, 7, 7, 7, 7, // 1110_1000 - 1110_1111
            7, 7, 7, 7, 7, 7, 7, 7, // 1111_0000 - 1111_0111
            7, 7, 7, 7, 7, 7, 7, 7, // 1111_1000 - 1111_1111
        };
    int byte;
    int byte_cnt;
    for (byte_cnt = 3; byte_cnt >= 0; byte_cnt--)
    {
        byte = (x >> (byte_cnt * 8)) & 0xff;
        if (byte != 0)
        {
            return msb_lut[byte] + (byte_cnt * 8);
        }
    }
    return -1;
}
 
    
If you do not need a portable solution and your code is executing on an x86-compatible CPU, you can use the _BitScanReverse() intrinsic function provided by the Microsoft Visual C/C++ compiler. It maps to the BSR CPU instruction, which returns the index of the highest set bit.
 
    
The algorithm I like best is:
unsigned hibit(unsigned n) {
    n |= (n >>  1u);
    n |= (n >>  2u);
    n |= (n >>  4u);
    n |= (n >>  8u);
    n |= (n >> 16u);
    return n - (n >> 1);
}
And it's easily extended to uint64_t like this:
uint64_t hibit(uint64_t n) {
    n |= (n >>  1u);
    n |= (n >>  2u);
    n |= (n >>  4u);
    n |= (n >>  8u);
    n |= (n >> 16u);
    n |= (n >> 32u);
    return n - (n >> 1);
}
or even to unsigned __int128 (the unsigned type keeps the right shifts logical, consistent with the smaller variants):
unsigned __int128 hibit(unsigned __int128 n) {
    n |= (n >>  1u);
    n |= (n >>  2u);
    n |= (n >>  4u);
    n |= (n >>  8u);
    n |= (n >> 16u);
    n |= (n >> 32u);
    n |= (n >> 64u);
    return n - (n >> 1);
}
In addition, it's a cross-platform solution, independent of the compiler used.
 
    
    // Note doesn't cover the case of 0 (0 returns 1)
inline unsigned int hibit( unsigned int x )
{
  unsigned int log2Val = 0 ;
  while( x>>=1 ) log2Val++;  // eg x=63 (111111), log2Val=5
  return 1 << log2Val ; // finds 2^5=32
}
 
    
    A nifty solution I came up with is to binary search the bits.
uint64_t highestBit(uint64_t a, uint64_t bit_min, uint64_t bit_max, uint16_t bit_shift){
    if(a == 0) return 0;
    if(bit_min >= bit_max){
        if((a & bit_min) != 0)
            return bit_min;
        return 0;
    }
    uint64_t bit_mid = bit_max >> bit_shift;
    bit_shift >>= 1;
    if((a >= bit_mid) && (a < (bit_mid << 1)))
        return bit_mid;
    else if(a > bit_mid)
        return highestBit(a, bit_mid, bit_max, bit_shift);
    else
        return highestBit(a, bit_min, bit_mid, bit_shift);
}
bit_max is the highest power of 2, so for a 64-bit number it would be 2^63. bit_shift should be initialized to half the number of bits, so for 64 bits it would be 32.
