I have two 64-bit integers and I would like to concatenate them into a single 128-bit integer.
    uint64_t len_A;
    uint64_t len_C;
    len_AC= (len_A << 64) | len_C;
GCC doesn't support uint128_t.
Is there any other way to do it?
First of all, you should decide how you are going to store that 128-bit integer. There is no built-in integer type of that size.
You can store the integer, for example, as a struct consisting of two 64-bit integers:
typedef struct { uint64_t high; uint64_t low; } int128;
Then the answer will be quite simple.
The question is what are you going to do with this integer next.
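For instance, filling in that struct could look roughly like the sketch below. pack128 is just an illustrative name, not anything standard; len_A and len_C are the variables from the question.

    #include <stdint.h>

    typedef struct { uint64_t high; uint64_t low; } int128;

    /* illustrative helper: put one 64-bit value in each half */
    int128 pack128(uint64_t high, uint64_t low)
    {
        int128 result;
        result.high = high;   /* upper 64 bits */
        result.low  = low;    /* lower 64 bits */
        return result;
    }

    /* usage: int128 len_AC = pack128(len_A, len_C); */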
 
    
As Inspired said:
The question is what are you going to do with this integer next.
You probably want to use an arbitrary-precision library that handles this for you in a portable and reliable way. Why? Because you may run into endianness issues, e.g. when deciding which half of the integer is the high or low end on given hardware.
Even if you know for sure where your code will run, you will still need to develop an entire set of functions for your 128-bit integer, because not all compilers support a 128-bit type (it seems GCC does support such integers); for instance, you will need a set of functions for the basic mathematical operations.
It's probably better to use the GMP library; see http://gmplib.org/ for more.
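As a rough sketch of that approach, the two 64-bit values could be combined with GMP along these lines. The values of len_A and len_C are arbitrary examples, and this assumes unsigned long is 64 bits wide on the target (as on typical 64-bit Linux); otherwise mpz_import would be the safer way to feed in the halves. Link with -lgmp.

    #include <stdint.h>
    #include <gmp.h>

    int main(void)
    {
        uint64_t len_A = 0x0123456789ABCDEFULL;
        uint64_t len_C = 0xFEDCBA9876543210ULL;

        mpz_t len_AC;
        mpz_init_set_ui(len_AC, len_A);    /* start with the high half      */
        mpz_mul_2exp(len_AC, len_AC, 64);  /* shift it left by 64 bits      */
        mpz_add_ui(len_AC, len_AC, len_C); /* add (i.e. OR in) the low half */

        gmp_printf("len_AC = %#Zx\n", len_AC);
        mpz_clear(len_AC);
        return 0;
    }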
 
    
If your GCC does not provide uint128_t, it most likely has no built-in 128-bit integer type either.
So you need to represent such values yourself, e.g. with a structure like
    #include <stdint.h>

    struct my128int_st {
        uint64_t hi, lo;   /* high and low 64-bit halves */
    } ac;

    ac.hi = a;   /* upper 64 bits */
    ac.lo = c;   /* lower 64 bits */
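From there, any arithmetic has to be written by hand, as the other answer notes. A minimal sketch of 128-bit addition with carry propagation might look like this; add128 is a made-up name, just to illustrate the kind of helper function you would need.

    #include <stdint.h>

    struct my128int_st { uint64_t hi, lo; };

    /* illustrative helper: 128-bit addition built from two 64-bit halves */
    struct my128int_st add128(struct my128int_st x, struct my128int_st y)
    {
        struct my128int_st sum;
        sum.lo = x.lo + y.lo;                    /* may wrap around (mod 2^64)          */
        sum.hi = x.hi + y.hi + (sum.lo < x.lo);  /* add 1 if the low half overflowed    */
        return sum;
    }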
