Discriminate bits after "bit stuffing"

I have written a piece of code that inserts a '0' after 5 consecutive '1's in a bit stream. But how do I decode it?
Here an example of one bits stream:
original = {01101111110111000101111110001100...etc...}
stuffed = {011011111O101110001011111O10001100...etc...}
(The 'O' stand for the stuffed '0'.)
As you can see, a '0' was added after each run of '11111', and to retrieve the original stream one has to remove it. Easy.
But... What if the original stream had the same form as the stuffed one? How do I know if I have to remove these bits?!

I think you are confused about the basics. Pretend you want a B added after every 2 consecutive As. This is not 'stuffed':
AAAAA
'Stuffing' it gives you:
AABAABA
The above is either 'stuffed' or 'not stuffed'. In other words you can stuff it again:
AABBAABBA
Or you could 'unstuff' it:
AAAAA
"What if the original stream had the same form as the stuffed one?"
So if a bitstream has 10 consecutive 1s in it, then it has clearly not been stuffed. But you can't say the same for a bitstream that merely could have been stuffed: whether a stream is stuffed is part of the protocol both ends agree on, not something you can read off the bits themselves.

My question was so dumb... But it was late!
Here is the piece of code I wrote. It takes two streams of bits. The length (in bytes) of the stream to be stuffed is in its first byte. It works well except that the new length after stuffing is not yet updated.
I used macros to make it more readable.
#include "bitstuff.h"
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#define sbi(byte, bit) (byte = byte | (1 << bit))
#define cbi(byte, bit) (byte = byte & ~ (1 << bit))
#define ibc(byte, bit) (~byte & (1 << bit))
#define ibs(byte, bit) (byte & (1 << bit))
#define clr(byte) (byte = 0)
void bitstuff(uint8_t* stream, uint8_t* stuff) {
    int8_t k = 7, b = 7;
    uint8_t row = 0;                /* consecutive 1s seen so far */
    uint16_t len = 8 * *stream++;   /* first byte holds the length in bytes; 8*255 would overflow a uint8_t */
    stuff++;                        /* skip the output length byte (still to be filled in) */
    while(len--) {
        if(ibs(*stream, k--)) {
            row++;
            sbi(*stuff, b--);       /* copy the 1 */
            if(b<0) {b=7; stuff++;}
            if(row==5) {            /* five consecutive 1s: insert the stuffed 0 after them */
                cbi(*stuff, b--);
                if(b<0) {b=7; stuff++;}
                clr(row);           /* the stuffed 0 breaks the run */
            }
        }
        else {
            clr(row);
            cbi(*stuff, b--);       /* copy the 0 */
            if(b<0) {b=7; stuff++;}
        }
        if(k<0) {k=7; stream++;}
    }
}
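For the decoding half of the question, here is a minimal sketch of the matching unstuffer. It assumes the sbi/cbi/ibs macros above, and that the caller tracks the stuffed length in bits (since the stuffed length byte is not updated yet):

void bitunstuff(const uint8_t* stuffed, uint16_t len, uint8_t* out) {
    int8_t k = 7, b = 7;
    uint8_t row = 0;                /* consecutive 1s copied so far */
    while(len--) {
        uint8_t bit = ibs(*stuffed, k--) ? 1 : 0;
        if(k<0) {k=7; stuffed++;}
        if(row==5) {                /* this bit is the stuffed 0: drop it */
            row = 0;                /* in a real protocol, a 1 here would signal a flag or an error */
            continue;
        }
        row = bit ? row + 1 : 0;
        if(bit) sbi(*out, b--);
        else    cbi(*out, b--);
        if(b<0) {b=7; out++;}
    }
}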

Related

left shift count >= width of type in C macro

I have written a C Macro to set/unset Bits in a uint32 variable. Here are the definitions of the macros:
extern uint32_t error_field, error_field2;
#define SET_ERROR_BIT(x) do{\
    if(x < 0 || x > 63){\
        break;\
    }\
    if(((uint32_t)x) < 32U){\
        (error_field |= ((uint32_t)1U << ((uint32_t)x)));\
        break;\
    } else if(((uint32_t)x) < 64U){\
        (error_field2 |= ((uint32_t)1U << (((uint32_t)x)-32U)));\
    }\
}while(0)

#define RESET_ERROR_BIT(x) do{\
    if(((uint32_t)x) < 32U){\
        (error_field &= ~((uint32_t)1U << ((uint32_t)x)));\
        break;\
    } else if(((uint32_t)x) < 64U){\
        (error_field2 &= ~((uint32_t)1U << (((uint32_t)x)-32U)));\
    }\
} while(0)
I am passing a field of an enumeration, that looks like this:
enum error_bits {
    error_chamber01_data = 0,
    error_port21_data,
    error_port22_data,
    error_port23_data,
    error_port24_data,
    /* this goes on until 47 */
};
This warning is produced:
left shift count >= width of type [-Wshift-count-overflow]
I am calling the Macros like this:
USART2->CR1 |= USART_CR1_RXNEIE;
SET_ERROR_BIT(error_usart2);
/*error_usart2 is 47 in the enum*/
return -1;
I get this warning with every macro, even with those where the left shift count is < 31.
If I use the definition of the macro directly, without the macro, it produces no warning. The behaviour is the same with a 64-bit variable. I am programming an STM32F7 with the AC6 STM32 MCU GCC compiler.
I can't figure out why this happens. Can anyone help me?
Probably a problem with the compiler not being able to diagnose correctly, as stated by M Oehm. A workaround could be to use the remainder operation instead of the subtraction:
#define _SET_BIT(x, bit) (x) |= 1U << ((bit) % 32U)
#define SET_BIT(x, bit) _SET_BIT(x, (uint32_t)(bit))

#define _SET_ERROR_BIT(x) do{\
    if((x) < 32U){\
        SET_BIT(error_field, x);\
    } else if((x) < 64U){\
        SET_BIT(error_field2, x);\
    }\
}while(0)
#define SET_ERROR_BIT(x) _SET_ERROR_BIT((uint32_t)(x))
This way the compiler is finally smart enough to see that the shift amount can never reach 32.
The "_" macro is there to force x to be converted to uint32_t no matter how the macro is called, avoiding the undefined behaviour of a call with a negative x.
Tested on Coliru.
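For illustration, a minimal usage sketch, assuming the macros above and the enum from the question (the two variables are defined here rather than extern, and the caller is hypothetical):

#include <stdint.h>

uint32_t error_field = 0, error_field2 = 0;

void on_usart2_error(void)
{
    SET_ERROR_BIT(error_port21_data); /* 1: sets bit 1 of error_field */
    SET_ERROR_BIT(error_usart2);      /* 47: sets bit 15 of error_field2, no warning */
}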
Problem:
In the macros, you distinguish two cases, which, on their own, are okay. The warning comes from the branch that isn't executed, where the shift is out of range: apparently these diagnostics are issued before the dead branch is eliminated, as M Oehm observed.
Solution
Ensure the shifts are in the range 0-31 on both paths, regardless of the value and type of x.
x & 31 is a stronger guarantee than x % 32 or x % 32u: % can produce a negative remainder when x < 0 and x has a wide enough signed type.
#define SET_ERROR_BIT(x) do{\
    if((x) < 0 || (x) > 63){\
        break;\
    }\
    if(((uint32_t)x) < 32U){\
        (error_field |= ((uint32_t)1U << ((x)&31)));\
        break;\
    } else if(((uint32_t)x) < 64U){\
        (error_field2 |= ((uint32_t)1U << ((x)&31)));\
    }\
}while(0)
As a general rule, it is good practice to put () around each use of x in a macro.
Having seen this thread, I want to show a nice (and perhaps cleaner) way to set, reset and toggle a bit across the two unsigned integers discussed here. This code may be slightly off-topic because it takes x as an unsigned int (or an int) rather than an enum value.
The code is at the end of this answer.
The program takes a number of parameter pairs as input. Each pair is a letter and a number. The letter may be:
S to set a bit
R to reset a bit
T to toggle a bit
The number is a bit position from 0 to 63. The macros discard any number greater than 63, leaving the variables unmodified. Negative values are not handled, because a bit position is assumed to be unsigned.
For example (if we name the program bitman):
Executing: bitman S 0 S 1 T 7 S 64 T 7 S 2 T 80 R 1 S 63 S 32 R 63 T 62
The output will be:
S 0 00000000-00000001
S 1 00000000-00000003
T 7 00000000-00000083
S 64 00000000-00000083
T 7 00000000-00000003
S 2 00000000-00000007
T 80 00000000-00000007
R 1 00000000-00000005
S 63 80000000-00000005
S 32 80000001-00000005
R 63 00000001-00000005
T 62 40000001-00000005
#include <unistd.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h> /* for strtoul */
#include <string.h>

static uint32_t err1 = 0;
static uint32_t err2 = 0;

#define SET_ERROR_BIT(x) (\
    ((unsigned)(x)>63) ? err1=err1 : ((x)<32) ?\
    (err1 |= (1U<<(x))) :\
    (err2 |= (1U<<((x)-32)))\
)
#define RESET_ERROR_BIT(x) (\
    ((unsigned)(x)>63) ? err1=err1 : ((x)<32) ?\
    (err1 &= ~(1U<<(x))) :\
    (err2 &= ~(1U<<((x)-32)))\
)
#define TOGGLE_ERROR_BIT(x) (\
    ((unsigned)(x)>63) ? err1=err1 : ((x)<32) ?\
    (err1 ^= (1U<<(x))) :\
    (err2 ^= (1U<<((x)-32)))\
)

int main(int argc, char *argv[])
{
    int i;
    unsigned int x;

    for(i=1; i<argc; i+=2) {
        x = strtoul(argv[i+1], NULL, 0);
        switch (argv[i][0]) {
        case 'S':
            SET_ERROR_BIT(x);
            break;
        case 'T':
            TOGGLE_ERROR_BIT(x);
            break;
        case 'R':
            RESET_ERROR_BIT(x);
            break;
        default:
            break;
        }
        printf("%c %2u %08X-%08X\n", argv[i][0], x, err2, err1);
    }
    return 0;
}
The macros are split across multiple lines, but each of them expands to a single statement.
main has no error checking, so if the parameters are not specified correctly, the program's behaviour is undefined.

How to fprintf an int to binary in C?

I'm trying to write the binary representation of 16-bit signed integers to a file. I searched a lot and of course I found many examples that convert integer variables to binary. But in my case these functions will not be efficient, because I need to convert 50e6 samples/s. Calling a function to convert each sample would need a lot of computing time.
So what I want to do is:
int array[] = {233, 431, 1024, ...};
for (i = 0; i < sizeof(array); i++){
    fprintf(outfile, "%any_binary_format \n", array[i]);
}
result in the file should be:
0000000011101001
0000000110101111
0000010000000000
fprintf is intended for formatted output - the formatting being "human readable" text, it is therefore not the appropriate function to use if you want binary output. For that you should use fwrite():
for (i = 0; i < sizeof(array) / sizeof(*array); i++ )
{
    fwrite( &array[i], sizeof(*array), 1, outfile ) ;
}
Note I have also fixed your loop termination to correctly iterate the number of elements in the array. But in fact the loop is unnecessary - the output is binary, the array is binary - you can just output the entire array thus:
fwrite( array, sizeof(array), 1, outfile ) ;
Your performance requirement of 50Msps will require sustained write performance of around 100MB/s (16-bit samples) - that is a lot to ask, and unlikely to be achieved by writing one sample at a time. You may be better off using a memory mapped file, but unless you are using a real-time OS, there are no guarantees that you will sustain that output rate indefinitely - it only takes some other process to access the drive, and it may introduce an unacceptable delay.
Also note that the file must have been opened for binary output - especially on Windows, to prevent translation of LF to CR+LF, which would be disastrous for your sample data.
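For instance (the file name here is just an example):

FILE* outfile = fopen( "samples.bin", "wb" ) ;  /* "b" selects binary mode - no newline translation */
if( outfile != NULL )
{
    fwrite( array, sizeof(array), 1, outfile ) ;
    fclose( outfile ) ;
}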
If you want to use printf you can use something like this:
#include <stdio.h>
#include <stdint.h>

#define BYTE_TO_BINARY_PATTERN "%c%c%c%c%c%c%c%c\n"
#define BYTE_TO_BINARY(byte) \
    (byte & 0x80 ? '1' : '0'), \
    (byte & 0x40 ? '1' : '0'), \
    (byte & 0x20 ? '1' : '0'), \
    (byte & 0x10 ? '1' : '0'), \
    (byte & 0x08 ? '1' : '0'), \
    (byte & 0x04 ? '1' : '0'), \
    (byte & 0x02 ? '1' : '0'), \
    (byte & 0x01 ? '1' : '0')

int main()
{
    uint8_t value = 5;
    printf(BYTE_TO_BINARY_PATTERN, BYTE_TO_BINARY(value));
    return 0;
}
Should print 00000101. I use this sometimes in embedded code when debugging to check register values. Just replace printf with fprintf if you want to write the ASCII binary strings to a file.
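Since the question is about 16-bit samples, the same idea extends to two bytes. A sketch (the SHORT_TO_BINARY names are mine, and outfile is assumed to be open as in the question):

#define SHORT_TO_BINARY_PATTERN "%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c%c\n"
#define SHORT_TO_BINARY(v) \
    BYTE_TO_BINARY((uint16_t)(v) >> 8),  /* high byte first */ \
    BYTE_TO_BINARY((uint16_t)(v) & 0xFF) /* then the low byte */

fprintf(outfile, SHORT_TO_BINARY_PATTERN, SHORT_TO_BINARY(sample));

For sample = 431 this prints 0000000110101111, matching the expected output in the question.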
If your compiler supports inline you don't need to worry about the overhead of a small function; take a look at this.
Anyway, you can simply implement the function as a macro.
If you want a faster approach you can use a larger buffer (the best size is machine-dependent), for example char str[1 << 16], writing the results into the buffer and flushing it to the output stream with fwrite/write.
Another approach is to map the file via mmap/msync.
Anyway, you don't need to look for a faster function, but rather a deeper knowledge of the system you're working on.
#define SHORT_WIDTH 16
#define TEST 1
#define PADDING 1 /* set to 0 if you don't need the leading 0s */

char *ShortToBin(unsigned short x, char *buffer) {
#if PADDING
    int i;
    for(i = 0; i < SHORT_WIDTH; ++i)
        buffer[SHORT_WIDTH - i - 1] = '0' + ((x >> i) & 1);
    return buffer;
#else
    char *ptr = buffer + SHORT_WIDTH;
    do {
        *(--ptr) = '0' + (x & 1);
        x >>= 1;
    } while(x);
    return ptr;
#endif
}

#if TEST
#include <stdio.h>
int main() {
    short n;
    char str[SHORT_WIDTH+1]; str[SHORT_WIDTH]='\0';
    while(scanf("%hd", &n) == 1)
        puts(ShortToBin(n, str));
    return 0;
}
#endif
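To illustrate the larger-buffer approach mentioned above, a minimal sketch (the function name and parameters are mine; it assumes ShortToBin built with PADDING set to 1):

#include <stdio.h>

void WriteBinSamples(FILE *outfile, const short *samples, size_t n) {
    static char buf[1 << 16];                  /* large output buffer */
    size_t used = 0, i;
    for(i = 0; i < n; ++i) {
        ShortToBin((unsigned short)samples[i], buf + used);
        used += SHORT_WIDTH;
        buf[used++] = '\n';
        if(used + SHORT_WIDTH + 1 > sizeof buf) { /* flush when nearly full */
            fwrite(buf, 1, used, outfile);
            used = 0;
        }
    }
    fwrite(buf, 1, used, outfile);             /* flush the remainder */
}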

fwrite() in c writes bytes in a different order

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *int_pointer = (int *) malloc(sizeof(int));

    // open output file
    FILE *outptr = fopen("test_output", "w");
    if (outptr == NULL)
    {
        fprintf(stderr, "Could not create %s.\n", "test_output");
        return 1;
    }

    *int_pointer = 0xabcdef;
    fwrite(int_pointer, sizeof(int), 1, outptr);

    // clean up
    fclose(outptr);
    free(int_pointer);
    return 0;
}
This is my code, and when I inspect the test_output file with xxd it gives the following output.
$ xxd -c 12 -g 3 test_output
0000000: efcdab 00 ....
I'm expecting it to print abcdef instead of efcdab.
Which book are you reading? There are a number of issues in this code, casting the return value of malloc, for example... Most importantly, consider the cons of using an integer type which might vary in size and representation from system to system.
An int is only guaranteed to be able to store values in the range -32767 to 32767. Your implementation might allow more values, but to be portable and friendly with people using ancient compilers such as Turbo C (there are a lot of them), you shouldn't use int to store values larger than 32767 (0x7fff), such as 0xabcdef. When such out-of-range conversions are performed, the result is implementation-defined; it could involve saturation, wrapping, trap representations or the raising of a signal corresponding to a computational error, the latter two of which could cause undefined behaviour later on.
You need to translate to an agreed-upon field format. When sending data over the wire, or writing data to a file to be transferred to other systems, it's important that the protocol for communication be agreed upon. This includes using the same size and representation for integer fields. Both output and input should go through a translation function (serialisation and deserialisation, respectively).
Your fields are binary, and so your file should be opened in binary mode. For example, use fopen(..., "wb") rather than "w"; otherwise '\n' characters might be translated to \r\n pairs, and Windows systems are notorious for this. Can you imagine what kind of havoc and confusion this could wreak? I can, because I've answered a question about this problem.
Perhaps uint32_t might be a better choice, but I'd choose unsigned long, as uint32_t isn't guaranteed to exist. On that note, for systems which don't have htonl (which returns uint32_t according to POSIX), that function could be implemented like so:
uint32_t htonl(uint32_t x) {
    return (x & 0x000000ff) << 24
         | (x & 0x0000ff00) << 8
         | (x & 0x00ff0000) >> 8
         | (x & 0xff000000) >> 24;
}
As an example inspired by the above htonl function, consider these macros:
typedef unsigned long ulong;
#define serialised_long(x) serialised_ulong((ulong) (x))
#define serialised_ulong(x) ((x) & 0xFF000000) / 0x1000000 \
                          , ((x) & 0xFF0000) / 0x10000 \
                          , ((x) & 0xFF00) / 0x100 \
                          , ((x) & 0xFF)

typedef unsigned char uchar;
/* x[0] is the big-endian MSB, so it carries the sign bit;
 * negative values are reconstructed from the byte-wise complement */
#define deserialised_long(x) ((x)[0] <= 0x7f \
                             ? (long) deserialised_ulong(x) \
                             : -(long) deserialised_ulong((uchar[]) { 0xFF - (x)[0] \
                                                                    , 0xFF - (x)[1] \
                                                                    , 0xFF - (x)[2] \
                                                                    , 0xFF - (x)[3] }) - 1)
#define deserialised_ulong(x) ( (x)[0] * 0x1000000UL \
                              + (x)[1] * 0x10000UL \
                              + (x)[2] * 0x100UL \
                              + (x)[3] )
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("test_output", "wb+");
    if (f == NULL)
    {
        fprintf(stderr, "Could not create %s.\n", "test_output");
        return 1;
    }

    ulong value = 0xABCDEF;
    unsigned char datagram[] = { serialised_ulong(value) };
    fwrite(datagram, sizeof datagram, 1, f);
    printf("%08lX serialised to %02X%02X%02X%02X\n", value, datagram[0], datagram[1], datagram[2], datagram[3]);

    rewind(f);
    fread(datagram, sizeof datagram, 1, f);
    value = deserialised_ulong(datagram);
    printf("%02X%02X%02X%02X deserialised to %08lX\n", datagram[0], datagram[1], datagram[2], datagram[3], value);

    fclose(f);
    return 0;
}
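If I've followed the macros correctly, this should print 00ABCDEF serialised to 00ABCDEF and then 00ABCDEF deserialised to 00ABCDEF, since the serialised form here is simply the value's big-endian byte sequence.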
Use htonl()
It converts from whatever the host byte order is (the endianness of your machine) to network byte order, so whatever machine you're running on you will get the same byte order. These calls are used so that, regardless of the host you're running on, the bytes are sent over the network in the right order, but it works for you too.
See the man pages of htonl and byteorder. There are various conversion functions available, also for different integer sizes: 16-bit, 32-bit, 64-bit ...
#include <stdio.h>
#include <stdlib.h>
#include <arpa/inet.h>

int main(void) {
    int *int_pointer = (int *) malloc(sizeof(int));

    // open output file in binary mode
    FILE *outptr = fopen("test_output", "wb");
    if (outptr == NULL) {
        fprintf(stderr, "Could not create %s.\n", "test_output");
        return 1;
    }

    *int_pointer = htonl(0xabcdef); // <====== This ensures correct byte order
    fwrite(int_pointer, sizeof(int), 1, outptr);

    // clean up
    fclose(outptr);
    free(int_pointer);
    return 0;
}
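With this change, xxd should now show the bytes in big-endian order, something like:
$ xxd -c 12 -g 3 test_output
0000000: 00abcd ef ....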

Compare 2 different hexadecimal values

I have a hexadecimal value in unsigned char *hex_1 that contains:
hex_1[0] = 0x5b
hex_1[1] = 0x83
hex_1[2] = 0xb6
hex_1[3] = 0xe9
and I want to compare it with a hex value: 1ca0aaf9.
What should I do? Should I create a new character array, split 1ca0aaf9 into 1c a0 aa f9, then do memcpy()?
EDIT: I actually want the comparison to tell me either "THEY ARE THE SAME!" or "THEY ARE NOT THE SAME!".
EDIT 2: I want it to be like hex_1[0] being compared with 1c, etc...
You probably want this:
uint32_t val = *(uint32_t*)(hex_1); // uint32_t is available by #include <stdint.h>
if (val == 0x1ca0aaf9)
{
}
On a big-endian architecture, you are done. On Intel and other little-endian architectures you need to decide whether that byte array is logically meant to be interpreted in network byte order as 0x5b83b6e9 (decimal 1535358697), or in host byte order as 0xe9b6835b (decimal 3921052507). If the byte array is in network byte order, then you'll need to swap the bytes; that's what the ntohl function does.
uint32_t val = *(uint32_t*)(hex_1);
val = ntohl(val); // <arpa/inet.h> or <winsock2.h>
if (val == 0x1ca0aaf9)
{
}
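Given the edits to the question (a byte-by-byte comparison and a printed verdict), a memcmp-based sketch also works and avoids the pointer cast; the expected bytes are spelled out in big-endian order:

#include <stdio.h>
#include <string.h>

static const unsigned char expected[4] = { 0x1c, 0xa0, 0xaa, 0xf9 };

if (memcmp(hex_1, expected, sizeof expected) == 0)
    printf("THEY ARE THE SAME!\n");
else
    printf("THEY ARE NOT THE SAME!\n");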

how to disable fast frames in ath5k wireless driver

By default, fast frames are enabled in ath5k. (http://wireless.kernel.org/en/users/Drivers/ath5k)
I have found the macro which disables it
#define AR5K_EEPROM_FF_DIS(_v) (((_v) >> 2) & 0x1)
The question is what do I do with it?
Do I replace the above line with
#define AR5K_EEPROM_FF_DIS(_v) 1
?
Do I compile it passing some parameter?
The bit-shift expression confuses me. Is _v a variable?
The question is more general as to how to deal with such macros in drivers. I've seen them in other codes too and always got confused.
OK, I'll try to explain with a simplified example.
#include <stdio.h>

/* Just for printing in binary */
char *chartobin(unsigned char c)
{
    static char a[9];
    int i;

    for (i = 0; i < 8; i++)
        a[7 - i] = (c & (1 << i)) == (1 << i) ? '1' : '0';
    a[8] = '\0';
    return a;
}

int main(void)
{
    unsigned char u = 0xf;

    printf("%s\n", chartobin(u));
    u >>= 2; // Shift bits 2 positions (to the right)
    printf("%s\n", chartobin(u));
    printf("%s\n", chartobin(u & 0x1)); // Check if the last bit is on
    return 0;
}
Output:
00001111
00000011
00000001
Do I replace the above line with #define AR5K_EEPROM_FF_DIS(_v) 1?
Nooooo!!
If you initialize u with 0xb instead of 0xf you get:
00001011
00000010
00000000
As you can see, (((_v) >> 2) & 0x1) != 1 in this case: the macro simply extracts bit 2 of _v, which may be 0 or 1 depending on the value.
Fast frames are not enabled or used on ath5k. It's a feature allowing the card to send multiple frames at once (think of it as an early version of 11n frame aggregation) that's implemented on MadWiFi and their proprietary drivers, and it can only be used with an Access Point that also supports it. What you see there is a flag stored in the device's EEPROM that tells the driver whether fast frames can be used; the macro you refer to just checks whether that flag is set. You can modify the header file to always return 1, but that wouldn't make any difference: the driver never uses that information.
