To Obtain EPOCH Time Value from a Packed BIT Structure in C

I need to find the Epoch time in a binary data file which has the following data structure (it is a 12-byte structure):
Field-1 : Byte 1, Byte 2, + 6 Bits from Byte 3
Time-1 : 2 Bits from Byte 3 + Byte 4
Time-2 : Byte 5, Byte 6, Byte 7, Byte 8
Field-2 : Byte 9, Byte 10, Byte 11, Byte 12
For Field-1 and Field-2 I do not have an issue, as they can be extracted easily.
I need the time value as an Epoch time (long); it has been packed into bytes 5-8 and bytes 3-4 as follows:
Bytes 5 to 8 (a 32-bit word) pack time value bits 0 through 31 (byte 5 has bits 0 to 7,
byte 6 has bits 8 to 15, byte 7 has bits 16 to 23, byte 8 has bits 24 to 31).
The remaining 10 bits of the time value are packed into bytes 3 and 4 as follows:
byte 3 has 2 bits, 32 and 33, and byte 4 has the remaining bits, 34 to 41.
So the time value is 42 bits in total, packed as above.
I need to compute the epoch value from these 42 bits. How do I do it?
I have done something like this, but I am not sure it gives me the correct value:
typedef struct P_HEADER {
    unsigned int tmuNumber : 22;  // 22 bits: Bytes 1,2 + 6 bits from Byte-3
    unsigned int time1     : 10;  // Bits 6,7 from Byte-3 + 8 bits from Byte-4
    unsigned int time2     : 32;  // 32 bits: Bytes 5,6,7,8
    unsigned int traceKey  : 32;
} __attribute__((__packed__)) P_HEADER;
Then in the code:
P_HEADER *header1;
// get the input string in hex, etc.
// parse the input with the header as:
header1 = (P_HEADER *)inputBuf;
// then print header1->time1, header1->time2, ...
long ttime = header1->time1 | header1->time2;
Is this the way to get values out?

This will give you the value as you describe it:
typedef struct P_HEADER {
    unsigned int tmuNumber : 22;
    unsigned int time1     : 10;  // Bits 6,7 from Byte-3 + 8 bits from Byte-4
    unsigned int time2     : 32;  // 32 bits: Bytes 5,6,7,8
    unsigned int traceKey  : 32;
} __attribute__((__packed__)) P_HEADER;
long ttime = ((uint64_t)header1->time1) << 32 | header1->time2;
Works only like that on little-endian machines though.
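If portability matters (or if the compiler's bit-field layout is in doubt), it can be safer to assemble the 42-bit value byte by byte instead of overlaying a bit-field struct. A minimal sketch, assuming buf[0] is Byte 1 of the record as described in the question, and assuming the two time bits are bits 6 and 7 of Byte 3 (as in the struct comment); adjust the mask and shifts if the actual layout differs:

#include <stdint.h>

/* Assemble the 42-bit time value from the raw 12-byte record.
   buf[0] is Byte 1 of the record, so buf[2] is Byte 3, buf[3] is Byte 4,
   and buf[4]..buf[7] are Bytes 5..8. */
static uint64_t extract_time(const unsigned char *buf)
{
    uint64_t low32 = (uint64_t)buf[4]             /* bits 0..7   */
                   | ((uint64_t)buf[5] << 8)      /* bits 8..15  */
                   | ((uint64_t)buf[6] << 16)     /* bits 16..23 */
                   | ((uint64_t)buf[7] << 24);    /* bits 24..31 */
    uint64_t bits_32_33 = (uint64_t)((buf[2] >> 6) & 0x3); /* 2 bits from Byte 3 (assumed to be bits 6,7) */
    uint64_t bits_34_41 = (uint64_t)buf[3];                 /* 8 bits from Byte 4 */

    return low32 | (bits_32_33 << 32) | (bits_34_41 << 34);
}

Note that storing the result in a plain long risks truncation on platforms where long is 32 bits; uint64_t (or unsigned long long) avoids that.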

Related

How to calculate size of structure with bit field?

#include <stdio.h>

struct test {
    unsigned int x;
    long int y : 33;
    unsigned int z;
};

int main()
{
    struct test t;
    printf("%zu", sizeof(t));
    return 0;
}
I am getting the output as 24. How does it come to that?
As your implementation accepts long int y : 33;, a long int has more than 32 bits on your system, so I shall assume 64.
If plain int is also 64 bits, the result of 24 is normal.
If it is only 32 bits, you have encountered padding and alignment. For performance reasons, 64-bit types on 64-bit systems are aligned on a 64-bit boundary. So you have:
4 bytes for the first int
4 padding bytes to reach an 8-byte boundary
8 bytes for the container of the bit field
4 bytes for the second int
4 padding bytes to allow proper alignment in arrays
Total: 24 bytes
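One way to see where the padding ends up is to print the offsets of the non-bit-field members. The layout is implementation-defined; the values in the comment below are what a typical LP64 GCC/Clang build reports:

#include <stdio.h>
#include <stddef.h>

struct test {
    unsigned int x;
    long int y : 33;   /* 64-bit container, aligned to 8 */
    unsigned int z;
};

int main(void)
{
    /* Typical LP64 result: x at offset 0, z at offset 16, total size 24. */
    printf("sizeof = %zu, offsetof(x) = %zu, offsetof(z) = %zu\n",
           sizeof(struct test), offsetof(struct test, x), offsetof(struct test, z));
    return 0;
}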

Bit field in C; bytes and bits [duplicate]

This question already has answers here:
Why isn't sizeof for a struct equal to the sum of sizeof of each member?
I have found the following example:
#include <stdio.h>

// Space optimized representation of the date
struct date {
    // d has value between 1 and 31, so 5 bits are sufficient
    unsigned int d : 5;
    // m has value between 1 and 12, so 4 bits are sufficient
    unsigned int m : 4;
    unsigned int y;
};

int main()
{
    printf("Size of date is %lu bytes\n", sizeof(struct date));
    struct date dt = { 31, 12, 2014 };
    printf("Date is %d/%d/%d", dt.d, dt.m, dt.y);
    return 0;
}
The results are
Size of date is 8 bytes
Date is 31/12/2014
I can't understand the first result. Why is it 8 bytes?
My thoughts:
y is 4 bytes, d is 5 bits, and m is 4 bits. The total is 4 bytes and 9 bits; since 1 byte is 8 bits, that is 41 bits.
There is a very good explanation of this behaviour:
C automatically packs the above bit fields as compactly as possible, provided that the maximum length of the field is less than or equal to the integer word length of the computer. If this is not the case then some compilers may allow memory overlap for the fields whilst others would store the next field in the next word (see comments on bit field portability below).
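To make the packing concrete: d and m share one unsigned int storage unit (5 + 4 = 9 bits out of 32), and y gets its own unit, so the structure is two ints wide. A small check (the offsets are implementation-defined, but this is the typical result):

#include <stdio.h>
#include <stddef.h>

struct date {
    unsigned int d : 5;
    unsigned int m : 4;
    unsigned int y;
};

int main(void)
{
    /* d and m live in the first 4-byte unit; y typically starts at offset 4. */
    printf("sizeof = %zu, offsetof(y) = %zu\n",
           sizeof(struct date), offsetof(struct date, y));
    return 0;
}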

C Sizeof char[] in struct [closed]

I know what padding is and how alignment works. Given the struct below:
typedef struct {
    char word[10];
    short a;
    int b;
} Test;
I don't understand how C interprets and aligns the char array inside the struct. It should be 9 chars + a terminator, and since it is the longest member I expected the layout to be aligned to it, like this:
| - _ - _ - _ - _ - word - _ - _ - _ - _ - |
| - a - | - _ - b - _ - | padding the remaining 4 bytes
The "-" represents a byte and "_" separates the bytes. So we have the 10 bytes long word, the 2 bytes long a and the 4 bytes long b and padding of 4 bytes. But when I print sizeof(Test) it returns 16.
EDIT: I got it.
In a struct like
struct {
    char word[10];
    short a;
    int b;
}
you have the following requirements:
a needs an even offset. As the char array before it has an even length, there is no need for padding. So a sits at offset 10.
b needs an offset which is divisible by 4. 12 is divisible by 4, so 12 is a fine offset for b.
The whole struct needs a size which is divisible by 4, because every b in an array of this struct must meet the same requirement. But as we are already at size 16, we don't need any padding.
WWWWWWWWWWAABBBB
|-- 10 --| 2 4 = 16
Compare this with
struct {
char word[11];
short a;
int b;
}
Here, a would have offset 11. This is not allowed, thus padding is inserted. a is fine with an offset of 12.
b would then get an offset of 14, which isn't allowed either, so 2 bytes are added. b gets an offset of 16. The whole struct gets a size of 20, which is fine for all subsequent items in an array.
WWWWWWWWWWW.AA..BBBB
|-- 11 --|1 2 2 4 = 20
Third example:
struct {
    char word[11];
    int b;
    short a;
}
(note the changed order!)
b is happy with an offset of 12 (it gets 1 padding byte),
a is happy with an offset of 16. (no padding before it.)
After the struct, however, 2 bytes of padding are added so that the struct aligns with 4.
WWWWWWWWWWW.BBBBAA..
|-- 11 --|1 4 2 2 = 20
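A quick way to check these offsets on your own compiler is offsetof. The struct names below are just for illustration; the exact values are implementation-defined, but mainstream compilers produce the layouts described above:

#include <stdio.h>
#include <stddef.h>

struct v1 { char word[10]; short a; int b; };   /* expected: a at 10, b at 12, size 16 */
struct v2 { char word[11]; short a; int b; };   /* expected: a at 12, b at 16, size 20 */
struct v3 { char word[11]; int b; short a; };   /* expected: b at 12, a at 16, size 20 */

int main(void)
{
    printf("v1: a=%zu b=%zu size=%zu\n", offsetof(struct v1, a), offsetof(struct v1, b), sizeof(struct v1));
    printf("v2: a=%zu b=%zu size=%zu\n", offsetof(struct v2, a), offsetof(struct v2, b), sizeof(struct v2));
    printf("v3: b=%zu a=%zu size=%zu\n", offsetof(struct v3, b), offsetof(struct v3, a), sizeof(struct v3));
    return 0;
}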
In:
struct
{
    char word[10];
    short a;
    int b;
}
and given two-byte short and four-byte int, the structure is laid out in memory:
Offset Member
0 word[0]
1 word[1]
2 word[2]
3 word[3]
4 word[4]
5 word[5]
6 word[6]
7 word[7]
8 word[8]
9 word[9]
10 a
11 a
12 b
13 b
14 b
15 b
To get the layout described in the question, where a and b overlap word, you need to use a struct inside a union:
typedef union
{
    char word[10];
    struct { short a; int b; };
} Test;
Generally, each variable will be aligned on a boundary of its size (unless attributes such as packed are applied).
A complete discussion is on Wikipedia, which says in part:
A char (one byte) will be 1-byte aligned.
A short (two bytes) will be 2-byte aligned.
An int (four bytes) will be 4-byte aligned.
A long (four bytes) will be 4-byte aligned.
A float (four bytes) will be 4-byte aligned.
A double (eight bytes) will be 8-byte aligned on Windows and 4-byte aligned on
Linux (8-byte with -malign-double compile time option).
A long long (eight bytes) will be 4-byte aligned.
So your structure is laid out as:
typedef struct {
char word[10];
// Aligned with beginning of structure; takes bytes 0-9
short a;
// (assuming short is 2-bytes)
// Previous member ends on byte 9, this one starts on byte-10.
// Byte 10 is a multiple of 2, so no padding necessary
// Takes bytes 10 and 11
int b;
// Previous member ends on byte 11, next byte is 12, which is a multiple of 4.
// No padding necessary
// Takes bytes 12, 13, 14, 15.
} Test;
Total size: 16 bytes.
If you want to play with it, change your word-array to 9 or 11 bytes,
or reverse the order of your short and int, and you'll see the size of the structure change.
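For example, a small sketch of that experiment (the typedef names are made up for illustration; the exact sizes are implementation-defined, but the comments show what a typical compiler with 2-byte short and 4-byte int reports):

#include <stdio.h>

typedef struct { char word[9];  short a; int b; } Test9;       /* 1 pad byte before a: size 16 */
typedef struct { char word[10]; short a; int b; } Test10;      /* no padding needed: size 16 */
typedef struct { char word[11]; short a; int b; } Test11;      /* padding before a and before b: size 20 */
typedef struct { char word[10]; int b; short a; } TestSwapped; /* 2 pad bytes before b, 2 after a: size 20 */

int main(void)
{
    printf("%zu %zu %zu %zu\n",
           sizeof(Test9), sizeof(Test10), sizeof(Test11), sizeof(TestSwapped));
    return 0;
}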

Finding the correct size of a misaligned structure

typedef struct structA
{
    char C;
    double D;
    int I;
} structA_t;
Size of this structA_t structure:
sizeof(char) + 7 bytes padding + sizeof(double) + sizeof(int) = 1 + 7 + 8 + 4 = 20 bytes
But this is wrong; the correct size is 24. Why?
There is most likely 4 bytes of padding after the last int.
If sizeof(double) == 8, then most likely alignof(double) == 8 on your platform as well.
Consider this situation:
structA_t array[2];
If the size were only 20, then array[1].D would be misaligned (its address would be divisible by 4, but not by 8, which is the required alignment).
char = 1 byte
double = 8 bytes
int = 4 bytes
Aligning to the double gives:
char + padding => 1 + 7
double + padding => 8 + 0
int + padding => 4 + 4
=> 24 bytes
Or, simply put, the size is rounded up to a multiple of the largest member's alignment; here that happens to equal 3 (the number of fields) * 8 (the size of the largest) = 24, although that shortcut does not hold in general.
My guess would be that the size of int on your system is 4 bytes, so the int must also be padded with 4 bytes so that the structure size stays a multiple of the 8-byte alignment.
total_size = sizeof(char) + 7 bytes padding + sizeof(double) + sizeof(int) + 4 bytes padding = 24 bytes
Good article on padding/alignment:
http://www.drdobbs.com/cpp/padding-and-rearranging-structure-member/240007649
The double member forces the whole structure to be eight-byte aligned.
If you want a smaller structure, the following arrangement gives you only 16 bytes!
typedef struct structA
{
    int I;
    char C;
    double D;
} structA_t;
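A small check of both layouts (the typedef names are just for illustration; the values in the comments assume a typical platform with a 4-byte int and an 8-byte, 8-byte-aligned double):

#include <stdio.h>
#include <stddef.h>

typedef struct { char C; double D; int I; } A_orig;      /* expected: D at 8, I at 16, size 24 */
typedef struct { int I; char C; double D; } A_reordered; /* expected: C at 4, D at 8, size 16 */

int main(void)
{
    printf("original : D=%zu I=%zu size=%zu\n",
           offsetof(A_orig, D), offsetof(A_orig, I), sizeof(A_orig));
    printf("reordered: C=%zu D=%zu size=%zu\n",
           offsetof(A_reordered, C), offsetof(A_reordered, D), sizeof(A_reordered));
    return 0;
}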

How do I parse out n-bit elements from a byte addressable array

I have a data stream that is addressable only in 8-bit bytes; I want to parse it out into 6-bit elements and store those in an array. Are there any best known methods to do this?
11110000 10101010 11001100
into an array like
111100|001010|101011|001100
(can have zero padding, just needs to be addressable this way)
The data is a byte array whose length in bits is also a multiple of 6, so it is not an endless stream.
It depends on how many bits a byte has on your particular architecture. On a six-bit architecture it is quite simple :-)
Assuming an architecture with 8 bits per byte, you will have to do something along these lines:
int sixbits(unsigned char *datastream, unsigned int n) {
    int bitpos = n * 6;
    return ((datastream[bitpos / 8] >> (bitpos % 8))           // lower part of the six-bit group
          + (datastream[bitpos / 8 + 1] << (8 - bitpos % 8)))  // if bitpos%8 > 2, add the carry bits from the next byte
          & 0x3f;                                              // and finally mask the lowest 6 bits
}
Here n is the index of the 6-bit group. Any decent compiler will replace the divisions with shifts and the moduli with ANDs. Just use this function in a loop to fill up your destination array.
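A minimal sketch of that loop (the buffer names and sizes are made up for illustration; note that sixbits() as written always reads one byte past the group it decodes, so the source buffer should have at least one spare byte after the last group):

#include <stdio.h>

int sixbits(unsigned char *datastream, unsigned int n); /* the function defined above */

int main(void)
{
    /* 3 data bytes -> 4 six-bit groups; one spare byte so the last call
       does not read past the buffer. */
    unsigned char source[4] = { 0xF0, 0xAA, 0xCC, 0x00 };
    unsigned char groups[4];

    for (unsigned int n = 0; n < 4; ++n) {
        groups[n] = (unsigned char)sixbits(source, n);
        printf("%u ", (unsigned)groups[n]);  /* LSB-first grouping: should print 48 43 10 51 */
    }
    printf("\n");
    return 0;
}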
You count your 6-bit sequences, read each byte, shift the bits based on your counter and the expected word position (by xor-ing pieces from neighbouring bytes), and form new, correctly aligned words that you then process.
I hope you don't expect code ...
You can do it using bit fiddling:
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    unsigned char source[3] = { 15, 85, 51 };
    unsigned char destination[4];
    memset(destination, 0, 4);

    for (int i = 0; i < (8 * 3); ++i)
    {
        destination[i / 6] |= ((source[i / 8] >> (i % 8) & 1) << (i % 6));
    }

    for (int j = 0; j < 4; ++j)
        printf("%d ", destination[j]);
}
Output:
15 20 53 12
Note that this works starting from the least significant bits.
15 85 51
11110000 10101010 11001100
111100 001010 101011 001100
15 20 53 12
To get most significant first, do this instead:
destination[i / 6] |= ((source[i / 8] >> (7 - (i % 8))) & 1) << (5 - (i % 6));
This works as in your example, assuming you wrote the most significant bit first:
240 170 204
11110000 10101010 11001100
111100 001010 101011 001100
60 10 43 12
How about using a struct like this:
struct bit5
{
    unsigned int v1 : 5;
    unsigned int v2 : 5;
    unsigned int v3 : 5;
    unsigned int v4 : 5;
    unsigned int v5 : 5;
    unsigned int v6 : 5;
    unsigned int v7 : 5;
    unsigned int v8 : 5;
};
And then cast your array of bytes to struct bit5 every 8 bytes (40 bits = 8 groups of 5 bits, which fits in 8 bytes with unsigned int storage units) to get the 5-bit chunks. Say:
unsigned char *array; // a byte array that you want to convert
int i;
struct bit5 *pBit5;

for (i = 0; i < (int)(SIZE_OF_ARRAY / 8); i++) {
    pBit5 = (struct bit5 *)(array + i * 8);  // point at the next 8-byte chunk
    // use pBit5->v1 ... pBit5->v8 here
}
I would consider using a BitStream. It will allow you to read one bit at a time. You can shift that bit directly into place (using << n). It may not perform as well as reading an 8-bit byte at a time but it would certainly be cleaner looking code.
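C has no standard BitStream type, so here is a minimal sketch of that idea (the struct and function names are made up for illustration): a small cursor that returns one bit at a time, most significant bit first, which the caller shifts into place to build each 6-bit element, matching the question's example.

#include <stddef.h>

// Hypothetical bit-stream cursor over a byte buffer, most significant bit first.
struct bitstream {
    const unsigned char *data;
    size_t bitpos;
};

int next_bit(struct bitstream *bs)
{
    int bit = (bs->data[bs->bitpos / 8] >> (7 - bs->bitpos % 8)) & 1;
    bs->bitpos++;
    return bit;
}

// Read one 6-bit element by shifting six bits into place.
unsigned read6(struct bitstream *bs)
{
    unsigned value = 0;
    for (int i = 0; i < 6; i++)
        value = (value << 1) | (unsigned)next_bit(bs);
    return value;
}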
How about this Union?
union _EIGHT_TO_SIX_ {
    struct {
        unsigned char by6Bit0 : 6;
        unsigned char by6Bit1 : 6;
        unsigned char by6Bit2 : 6;
        unsigned char by6Bit3 : 6;
    } x6;
    struct {
        unsigned char by8Bit0;
        unsigned char by8Bit1;
        unsigned char by8Bit2;
    } x8;
};
Setting the by8Bitx members will then fill in the by6Bitx members. Note, however, that whether the 6-bit fields straddle byte boundaries is implementation-defined, so check your compiler's bit-field layout before relying on this.
