I have designed a Packet struct for a custom networking protocol as follows:
#include <stdint.h>

typedef struct {
    uint8_t  src;
    uint8_t  dest;
    uint8_t  len;
    uint8_t  flag;
    // bit-fields allocated right-to-left? gcc also seems to be merging the fields?
    uint8_t  type : 4;
    uint16_t seq  : 12;  // 12 bits used for this field. [F3|02] == 0x02F3 on little endian?
    uint8_t  checksum;
} Packet;
From my understanding of bit-fields and struct padding, each field should occupy contiguous bytes, with seq starting at a new byte boundary. There should be no trailing padding either, since my machine is x86-64.
Graphically: |src|dest|len|flag|type| seq |checksum|, a 64-bit header with no struct padding.
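As a sanity check on that picture, a probe along these lines (a minimal sketch; note that offsetof, from <stddef.h>, cannot be applied to the bit-field members type and seq) would confirm the total size and the offsets of the plain members:

#include <stddef.h>
#include <stdio.h>

int main(void) {
    /* Total size of the header and offsets of the non-bitfield members. */
    printf("sizeof(Packet)     = %zu\n", sizeof(Packet));
    printf("offsetof(src)      = %zu\n", offsetof(Packet, src));
    printf("offsetof(flag)     = %zu\n", offsetof(Packet, flag));
    printf("offsetof(checksum) = %zu\n", offsetof(Packet, checksum));
    return 0;
}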
To test the packet before serialising it, I initialised one like this:

Packet thispack;
thispack.src      = 0x07;
thispack.dest     = 0x34;
thispack.len      = 0x5F;
thispack.flag     = 0xA2;
thispack.type     = 0x05;
thispack.seq      = 0xAED;
thispack.checksum = 0x23;
I am running on a little-endian system, so I expect the memory layout of the struct to be:
0x07 0x34 0x5F 0xA2 0x05 0xED 0x0A 0x23
with the two bytes of seq reversed by little-endian rules.
However, running x/8bx &thispack in gdb returns the following:
0x07 0x34 0x5f 0xa2 0xd5 0xae 0x23 0x00
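For what it's worth, the same bytes can be reproduced without gdb; a minimal sketch (size_t and printf come from <stdio.h>):

/* Print the raw bytes of the struct in ascending address order,
   i.e. the same view that x/8bx gives in gdb. */
static void dump_bytes(const void *p, size_t n) {
    const unsigned char *b = (const unsigned char *)p;
    for (size_t i = 0; i < n; i++)
        printf("0x%02x ", b[i]);
    putchar('\n');
}

/* called as: dump_bytes(&thispack, sizeof thispack); */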
I cannot understand how 0xD5 and 0xAE are obtained, and that makes me unsure about what accessing the struct fields via . notation actually returns.
Here's what I feel is correct:

uint8_t mytype = thispack.type; should store 0x05 in the byte referred to by mytype.
uint16_t myseq = thispack.seq; should store 0xED 0x0A in the two bytes allocated to myseq.
I cannot see how it could be anything else.
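Concretely, the read-back check I have in mind is something like this (a sketch, run in the same context as the initialisation above; myseq prints as a 16-bit value, so bytes 0xED 0x0A in memory would display as 0x0AED):

uint8_t  mytype = thispack.type;
uint16_t myseq  = thispack.seq;
printf("type = 0x%02X\n", mytype);          /* expecting 0x05   */
printf("seq  = 0x%04X\n", (unsigned)myseq); /* expecting 0x0AED */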