Extracting bytes from a 32-bit number - c

This is not important and should be quite simple; I just don't understand what I'm doing wrong.
The story behind this is that I'm playing with the tinyNeoPixel lib on the attiny85, and I'm trying to dive a bit deeper than I need.
This is traditional ANSI C and I'm using a Raspberry Pi 3 for this test, but for this case that should be irrelevant. The sizeof(c) in the printf just shows that 'c' is 4 bytes, as expected.
I'm trying to extract the Red, Green, and Blue parts of a color that's stored as a 32-bit number.
Obviously I'm failing to return the result as a 1-byte value; can someone please tell me how to do that? Just casting to (uint8_t) produces zero.
Thank you.
pi3:~/src$ cat a.c
#include <stdio.h>

typedef unsigned char uint8_t;
typedef unsigned long int uint32_t;

#define Red(x)   (x & 0xff000000)
#define Green(x) (x & 0x00ff0000)
#define Blue(x)  (x & 0x0000ff00)

void main()
{
    uint32_t c;
    uint8_t r, g, b;

    c = 0xffeecc00;
    r = Red(c);
    g = Green(c);
    b = Blue(c);
    printf("%d - %08x - %02x %02x %02x\n", sizeof(c), c, r, g, b);
    printf("%d - %08x - %02x %02x %02x\n", sizeof(c), c, Red(c), Green(c), Blue(c));
}
pi3:~/src$ gcc a.c -o a
pi3:~/src$ ./a
4 - ffeecc00 - 00 00 00
4 - ffeecc00 - ff000000 ee0000 cc00
The solution is:
#define Red(x) (((x) & 0xff000000) >> 24)
#define Green(x) (((x) & 0x00ff0000) >> 16)
#define Blue(x) (((x) & 0x0000ff00) >> 8)
With these macros the program produces
pi3:~/src$ ./a
4 - ffeecc00 - ff ee cc
4 - ffeecc00 - ff ee cc
as it should.
Thank you guys.

You need to shift as well as mask. That is, try something like
#define Red(x) (((x) & 0xff000000) >> 24)
and similarly for your Green() and Blue() macros.
(Also note that I've added two extra pairs of parentheses to the macro definition, for safety in expansion.)
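For completeness, here is a minimal corrected sketch of the test program, using the standard <stdint.h> types instead of hand-rolled typedefs, and %zu, the portable format specifier for sizeof. This is an illustrative rewrite, not the original poster's exact code:

#include <stdio.h>
#include <stdint.h>

#define Red(x)   (((x) & 0xff000000) >> 24)
#define Green(x) (((x) & 0x00ff0000) >> 16)
#define Blue(x)  (((x) & 0x0000ff00) >> 8)

int main(void)
{
    uint32_t c = 0xffeecc00;
    uint8_t r = Red(c), g = Green(c), b = Blue(c);

    /* prints: 4 - ffeecc00 - ff ee cc */
    printf("%zu - %08x - %02x %02x %02x\n", sizeof(c), (unsigned)c, r, g, b);
    return 0;
}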

Related

Purpose of double underscore pointer operator for C functions

I am writing some C code for a microcontroller and have come across a curious couple of statements in some generated drivers for a peripheral I am using. A function uint8_t gapm_reset_req_handler (void) is supposed to reset a handler and return a status. The function is failing in its purpose, which surprises me, as it seems simple enough. The relevant code I would like to ask about is this function and that INTERFACE_UNPACK_UINT8 line.
uint8_t gapm_reset_req_handler(void)
{
    uint8_t u8Operation, u8Status;

    INTERFACE_MSG_INIT(GAPM_RESET_CMD, TASK_GAPM);
    INTERFACE_PACK_ARG_UINT8(GAPM_RESET);
    INTERFACE_SEND_WAIT(GAPM_CMP_EVT, TASK_GAPM);
    INTERFACE_UNPACK_UINT8(&u8Operation);
    INTERFACE_UNPACK_UINT8(&u8Status);
    INTERFACE_MSG_DONE();

    if (u8Operation != GAPM_RESET)
        return AT_BLE_FAILURE;

    return u8Status;
}
These INTERFACE messages are defined in another file, and I am a bit lost as to what exactly is supposed to be accomplished by the generated code regarding the use of the double underscore on the ptr variable. Does anyone have any intuition as to what is going on? To me, it looks like some operation on the value that is passed to it, but the use of the double underscore confuses me, as I thought that was just for macros. Any thoughts are greatly appreciated!
Specific line
#define INTERFACE_UNPACK_UINT8(ptr)\
*ptr = *__ptr++
Full Definition of INTERFACE Code:
#ifndef __INTERFACE_H__
#define __INTERFACE_H__
#include "event.h"
#define INTERFACE_HDR_LENGTH 9
#define INTERFACE_API_PKT_ID 0x05
#define INTERFACE_SEND_BUF_MAX 600
#define INTERFACE_RCV_BUFF_LEN 500
extern uint8_t interface_send_msg[INTERFACE_SEND_BUF_MAX];
void platform_send_lock_aquire(void);
void platform_send_lock_release(void);
#define INTERFACE_MSG_INIT(msg_id, dest_id) \
do{\
uint8_t* __ptr = interface_send_msg;\
uint16_t __len;\
platform_send_lock_aquire();\
*__ptr++ = (INTERFACE_API_PKT_ID);\
*__ptr++ = ((msg_id) & 0x00FF );\
*__ptr++ = (((msg_id)>>8) & 0x00FF );\
*__ptr++ = ((dest_id) & 0x00FF );\
*__ptr++ = (((dest_id)>>8) & 0x00FF );\
*__ptr++ = ((TASK_EXTERN) & 0x00FF );\
*__ptr++ = (((TASK_EXTERN)>>8) & 0x00FF );\
__ptr += 2
#define INTERFACE_PACK_ARG_UINT8(arg)\
*__ptr++ = (arg)
#define INTERFACE_PACK_ARG_UINT16(arg)\
*__ptr++ = ((arg) & 0x00FF);\
*__ptr++ = (((arg) >> 8) & 0x00FF)
#define INTERFACE_PACK_ARG_UINT32(arg) \
*__ptr++ = (uint8_t)((arg) & 0x00FF );\
*__ptr++ = (uint8_t)(( (arg) >> 8) & 0x00FF) ;\
*__ptr++ = (uint8_t)(( (arg) >> 16) & 0x00FF);\
*__ptr++ = (uint8_t)(( (arg) >> 24) & 0x00FF)
#define INTERFACE_PACK_ARG_BLOCK(ptr,len)\
memcpy(__ptr, ptr, len);\
__ptr += len
#define INTERFACE_PACK_ARG_DUMMY(len)\
__ptr += len
#define INTERFACE_PACK_LEN()\
__len = __ptr - &interface_send_msg[INTERFACE_HDR_LENGTH];\
interface_send_msg[7] = ((__len) & 0x00FF );\
interface_send_msg[8] = (((__len)>>8) & 0x00FF);\
__len += INTERFACE_HDR_LENGTH;
#define INTERFACE_SEND_NO_WAIT()\
INTERFACE_PACK_LEN();\
interface_send(interface_send_msg, __len)
#define INTERFACE_SEND_WAIT(msg, src)\
watched_event.msg_id = msg;\
watched_event.src_id = src;\
INTERFACE_PACK_LEN();\
interface_send(interface_send_msg, __len);\
if(platform_cmd_cmpl_wait()){return AT_BLE_FAILURE;}\
__ptr = watched_event.params;\
#define INTERFACE_MSG_DONE()\
platform_send_lock_release();\
}while(0)
#define INTERFACE_UNPACK_INIT(ptr)\
do{\
uint8_t* __ptr = (uint8_t*)(ptr);\
#define INTERFACE_UNPACK_UINT8(ptr)\
*ptr = *__ptr++
#define INTERFACE_UNPACK_UINT16(ptr)\
*ptr = (uint16_t)__ptr[0]\
| ((uint16_t)__ptr[1] << 8);\
__ptr += 2
#define INTERFACE_UNPACK_UINT32(ptr)\
*ptr = (uint32_t)__ptr[0] \
| ((uint32_t)__ptr[1] << 8) \
| ((uint32_t)__ptr[2] << 16)\
| ((uint32_t)__ptr[3] << 24);\
__ptr += 4
#define INTERFACE_UNPACK_BLOCK(ptr, len)\
memcpy(ptr, __ptr, len);\
__ptr += len
#define INTERFACE_UNPACK_SKIP(len)\
__ptr += (len)
#define INTERFACE_UNPACK_DONE()\
}while(0)
void interface_send(uint8_t* msg, uint16_t u16TxLen);
#endif /* HCI_H_ */
*ptr = *__ptr++ is simply a byte copy followed by an increment of the source pointer. __ptr is a local variable declared inside one of the macros and then re-used by the other macros.
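To make the mechanism concrete, here is roughly what the macro sequence in gapm_reset_req_handler() expands to (a simplified sketch; the header packing and sending steps are elided):

do {
    uint8_t* __ptr = interface_send_msg;   /* INTERFACE_MSG_INIT opens the block */
    uint16_t __len;
    /* ... header bytes packed, message sent, completion awaited ... */
    __ptr = watched_event.params;          /* INTERFACE_SEND_WAIT repoints __ptr */
    *(&u8Operation) = *__ptr++;            /* INTERFACE_UNPACK_UINT8(&u8Operation) */
    *(&u8Status) = *__ptr++;               /* INTERFACE_UNPACK_UINT8(&u8Status) */
    platform_send_lock_release();          /* INTERFACE_MSG_DONE closes the block */
} while(0);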
Notably, it is bad practice to use identifiers starting with an underscore, and particularly ones starting with two underscores or with one underscore followed by an upper-case letter. Those are reserved for the compiler and the standard library, and the library you posted does not appear to belong to either. So there is reason to believe it was badly designed.
The function-like macro nightmare confirms this: this is some horrible code with non-existent type safety and massive potential for undefined behavior from bitwise arithmetic on signed numbers. People used to write macro crap like this before function inlining became industry standard back in the 1980s-1990s. Although stdint.h was introduced in 1999, so more likely they were just incompetent.
As for what the code does, it is much simpler than it looks. There are just various macros for shoveling data from one data type to another, apparently part of some protocol encoding/decoding. They also seem to make various assumptions about endianness that aren't portable.
Please never use or trust code provided to you by some silicon vendor. They have a very long tradition of employing the absolutely worst programmers in the world. If someone wrote microcontroller code like this in a normal company, they would get fired immediately. Similarly, don't trust the average open source barf posted on Github either.
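For contrast, a sketch of the same unpacking written as inline functions instead of macros (the names here are illustrative, not part of the vendor's API):

#include <stdint.h>

/* Reads one byte and advances the cursor - type-safe, no reserved identifiers. */
static inline uint8_t unpack_u8(const uint8_t **pp)
{
    return *(*pp)++;
}

/* Reads a little-endian 32-bit value byte by byte, portably. */
static inline uint32_t unpack_u32(const uint8_t **pp)
{
    const uint8_t *p = *pp;
    uint32_t v = (uint32_t)p[0]
               | ((uint32_t)p[1] << 8)
               | ((uint32_t)p[2] << 16)
               | ((uint32_t)p[3] << 24);
    *pp += 4;
    return v;
}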

Is there any way to speed up be32 encoding in C?

Is there any way to speed up be32enc in C? Here's an example of what I do for uint32_t:
for (int i = 0; i < 19; i++) {
    be32enc(&endiandata[i], pdata[i]);
}
And the function itself:
static inline void be32enc(void *pp, uint32_t x)
{
    uint8_t *p = (uint8_t *)pp;

    p[3] = x & 0xff;
    p[2] = (x >> 8) & 0xff;
    p[1] = (x >> 16) & 0xff;
    p[0] = (x >> 24) & 0xff;
}
I've googled hard but haven't found anything; this topic is not so popular. The target CPU for this is an i3-7350K and I use MSVC 2017. I may use MIT/GPL libraries as well.
There are two modifications that are likely to improve the performance of your be32enc function. First, get rid of the pointer magic and make it a function from uint32_t to uint32_t. Second, if you don't need to be portable to architectures other than x86, implement it using a byte-swap intrinsic.
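For MSVC 2017 specifically, the byte-swap intrinsic is spelled _byteswap_ulong and is declared in <stdlib.h>. A minimal sketch along those lines, assuming the little-endian x86 target named in the question:

#include <stdlib.h>   /* _byteswap_ulong (MSVC) */
#include <stdint.h>
#include <string.h>

static inline void be32enc_msvc(void *pp, uint32_t x)
{
    uint32_t be = _byteswap_ulong(x);  /* host (little-endian) -> big-endian */
    memcpy(pp, &be, sizeof be);        /* well-defined; compiles to a single store */
}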
If you have a decent compiler, you should be able to use builtins (btw there is a BSD standard function that does what you want, htobe32()):
#ifndef I_HAVE_A_CRAP_COMPILER
#define bswap32(x) __builtin_bswap32(x)
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#define htobe32(x) bswap32(x)
#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define htobe32(x) (x)
#else
#error Must be little or big endian
#endif
#else
/*your implementation here*/
#endif
Edit: if you want to try your C library's htobe32() function, you can:
#define _BSD_SOURCE
#include <endian.h>
Though the compiler builtin will likely be faster, since it avoids a function call altogether and inlines efficient assembly (a single bswap instruction on x86 and x86_64).
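A self-contained usage sketch of the builtin route for GCC/Clang (__builtin_bswap32 and the __BYTE_ORDER__ predefines are compiler extensions, so this assumes your toolchain provides them):

#include <stdint.h>
#include <string.h>

static inline void be32enc_fast(void *pp, uint32_t x)
{
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    uint32_t be = __builtin_bswap32(x);  /* one bswap instruction on x86 */
#else
    uint32_t be = x;                     /* already big-endian: no-op */
#endif
    memcpy(pp, &be, sizeof be);          /* avoids strict-aliasing issues */
}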

What does htons() do on a Big-Endian system?

htons() converts host byte order to network byte order.
Network byte order is Big-Endian and host byte order can be either Little-Endian or Big-Endian.
On a Little-Endian system htons() will convert the order of a multi-byte variable to Big-Endian. What will htons() do if the host byte order is also Big-Endian?
What will htons() do if the host byte order is also big endian?
Nothing - quite literally. The purpose of introducing htons() in the first place is to let you write code that does not care about the endianness of your system. The header file where the functions are defined is the only place where endianness comes into play.
Here is one implementation; on a big-endian system it reduces htons() to nothing more than parentheses around its argument expression:
#if BYTE_ORDER == BIG_ENDIAN
#define HTONS(n) (n)
#define NTOHS(n) (n)
#define HTONL(n) (n)
#define NTOHL(n) (n)
#else
#define HTONS(n) (((((unsigned short)(n) & 0xFF)) << 8) | (((unsigned short)(n) & 0xFF00) >> 8))
#define NTOHS(n) (((((unsigned short)(n) & 0xFF)) << 8) | (((unsigned short)(n) & 0xFF00) >> 8))
#define HTONL(n) (((((unsigned long)(n) & 0xFF)) << 24) | \
((((unsigned long)(n) & 0xFF00)) << 8) | \
((((unsigned long)(n) & 0xFF0000)) >> 8) | \
((((unsigned long)(n) & 0xFF000000)) >> 24))
#define NTOHL(n) (((((unsigned long)(n) & 0xFF)) << 24) | \
((((unsigned long)(n) & 0xFF00)) << 8) | \
((((unsigned long)(n) & 0xFF0000)) >> 8) | \
((((unsigned long)(n) & 0xFF000000)) >> 24))
#endif
#define htons(n) HTONS(n)
#define ntohs(n) NTOHS(n)
#define htonl(n) HTONL(n)
#define ntohl(n) NTOHL(n)
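To see the effect end to end, here is a self-contained sketch of the same idea (MY_HTONS is a hypothetical stand-in for the macro above, and the __BYTE_ORDER__ test is a GCC/Clang predefine):

#include <stdio.h>

#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define MY_HTONS(n) (n)   /* big-endian host: identity, no code generated */
#else
#define MY_HTONS(n) ((unsigned short)((((n) & 0xFF) << 8) | (((n) & 0xFF00) >> 8)))
#endif

int main(void)
{
    unsigned short port = 0x1234;
    /* prints 1234 on a big-endian host and 3412 on a little-endian one */
    printf("%04x\n", MY_HTONS(port));
    return 0;
}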

Defining a C macro that expands to a variable number of elements

I'm writing USB report descriptors, which are a sequence of bytes: a tag byte (in which the lower bits tell how many data bytes follow) followed by 0, 1, 2 or 4 data bytes. For example, to define the logical ranges of an input:
uint8_t report_descriptor[] = {
    ...
    0x15, 0x00,        // Logical Minimum (0)
    0x26, 0xFF, 0x03,  // Logical Maximum (1023)
    ...
};
Since 0 fits into one byte, we use tag type 0x15 (Logical Minimum with one data byte). But 1023 requires two bytes, so tag type 0x26 (Logical Maximum with two data bytes).
I had hoped to define some macros to make this more readable (and avoid having to comment every line):
uint8_t report_descriptor[] = {
    ...
    LOGICAL_MINIMUM(0),
    LOGICAL_MAXIMUM(1023),
    ...
};
However, I've hit a snag: that macro needs to expand to a different number of elements depending on the value. I don't see any easy way to achieve this. I've tried tricks like value > 255 ? (value & 0xFF, value >> 8) : value, but it always gets expanded to just one byte (the comma operator discards its left operand, so the whole conditional still yields a single value).
I think the spec allows to just always use the 4-byte tags, but that would be wasteful, so I'd rather not do that.
Is what I'm after possible with the preprocessor?
There is a dirty hack that will achieve the asked-for functionality, but being a dirty hack, it's unlikely to improve readability. It works, though. First, let's define an include file helper.h like this:
#if PARAM > 255
0x26, (PARAM & 0xFF), (PARAM >> 8),
#else
0x15, (PARAM),
#endif
Then in our main we will do:
uint8_t report_descriptor[] = {
#define PARAM 0
#include "helper.h"
#undef PARAM
#define PARAM 1023
#include "helper.h"
#undef PARAM
};
To see it working, here is the test code:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
uint8_t report_descriptor[] = {
#define PARAM 0
#include "helper.h"
#undef PARAM
#define PARAM 1023
#include "helper.h"
#undef PARAM
};
int main(int argc, char** args) {
    int i;
    for (i = 0; i < sizeof(report_descriptor); i++)
        printf("%x\n", report_descriptor[i]);
    return 0;
}
and the output is:
15
0
26
ff
3
I don't think that the C preprocessor is powerful enough to do this in a clean way. If you are willing to resort to the M4 macro processor, it can be done fairly easily. M4 should be readily available on the vast majority of GNU/Linux systems and portable implementations should be available for most platforms.
Let's define the M4 macros in a separate file and name it macros.m4.
define(`EXTRACT_BYTE', `(($1 >> (8 * $2)) & 0xFF)')
dnl You probably don't want to define these as M4 macros but as C preprocessor
dnl macros in your header files.
define(`TAG_1_BYTES', `0x15')
define(`TAG_2_BYTES', `0x26')
define(`TAG_3_BYTES', `0x37')
define(`TAG_4_BYTES', `0x48')
define(`EXPAND_1_BYTES', `TAG_1_BYTES, EXTRACT_BYTE($1, 0)')
define(`EXPAND_2_BYTES', `TAG_2_BYTES, EXTRACT_BYTE($1, 1), EXTRACT_BYTE($1, 0)')
define(`EXPAND_3_BYTES', `TAG_3_BYTES, EXTRACT_BYTE($1, 2), EXTRACT_BYTE($1, 1), EXTRACT_BYTE($1, 0)')
define(`EXPAND_4_BYTES', `TAG_4_BYTES, EXTRACT_BYTE($1, 3), EXTRACT_BYTE($1, 2), EXTRACT_BYTE($1, 1), EXTRACT_BYTE($1, 0)')
define(`ENCODE',
`ifelse(eval($1 < 256), `1', `EXPAND_1_BYTES($1)',
`ifelse(eval($1 < 65536), `1', `EXPAND_2_BYTES($1)',
`ifelse(eval($1 < 16777216), `1', `EXPAND_3_BYTES($1)',
`EXPAND_4_BYTES($1)')')')')
Now, writing your C files is straightforward. Put the following code in a file test.c.m4:
include(`macros.m4')
`static uint8_t report_descriptor[] = {'
ENCODE(50),
ENCODE(5000),
ENCODE(500000),
ENCODE(50000000),
`};'
In your Makefile, add the following rule
test.c: test.c.m4 macros.m4
	${M4} $< > $@
where M4 is set to the M4 processor (usually m4).
If M4 is run on test.c.m4, it will (omitting some excess white space) produce the following test.c file:
static uint8_t report_descriptor[] = {
0x15, ((50 >> (8 * 0)) & 0xFF),
0x26, ((5000 >> (8 * 1)) & 0xFF), ((5000 >> (8 * 0)) & 0xFF),
0x37, ((500000 >> (8 * 2)) & 0xFF), ((500000 >> (8 * 1)) & 0xFF), ((500000 >> (8 * 0)) & 0xFF),
0x48, ((50000000 >> (8 * 3)) & 0xFF), ((50000000 >> (8 * 2)) & 0xFF), ((50000000 >> (8 * 1)) & 0xFF), ((50000000 >> (8 * 0)) & 0xFF),
};
You'll probably find it more convenient to keep the test.c.m4 file as minimal as possible and #include it in an ordinary C file.
If you don't know M4, you can learn the basics rather quickly. If you are already using GNU Autoconf, you might find it convenient to use its M4sugar macro library instead of the plain M4 I've used above.

Little Endian Macros

So I have a new and exciting question that I would dearly like answered. I'm writing a file compressor, basically a tar, and in all honesty that code seems to be going quite well. What I'm stuck on right now is an additional feature that is required of the project: we need to be able to produce the binary files as if they were made on a little-endian machine. I've created a header file, included in my code, that should do the byte swapping for me. It follows:
#ifndef MYLIB_H
#define MYLIB_H
#define BITS_PER_BYTE 8
#define true 1
#define false 0
typedef unsigned char uchar;
typedef unsigned long ulong;
typedef unsigned int uint;
typedef unsigned short ushort;
#ifdef LITTLE_ENDIAN
#define SwapULong(val) (val << 24 | (val << 8 & 0xFF0000) | (val >> 8 & 0xFF00) | val >> 24 & 0xFF)
#define SwapUShort(val) (val << BITS_PER_BYTE | val >> BITS_PER_BYTE)
#else
#define SwapULong(val) (val)
#define SwapUShort(val) (val)
#endif
#endif
So when I compile with gcc and run the program, there are no errors. When I do a hexdump -C of the output, however, the output is still in big-endian order!
I then tried compiling with the -E flag and I got a bunch of lines saying that
./compress line #: typedef: command not found
which became
./compress line #: __extension__ : command not found
until the final lines of the terminal output showed
./compress line 86: syntax error near unexpected '}' token
./compress line 86: __extension__ typedef struct { int __val[2]; } __fsid_t;
So any ideas what might be causing this for me?
Any help would be appreciated.
You need to include the endian.h header.
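A sketch of what the fixed header test might look like (glibc specifics assumed): note that <endian.h> defines both LITTLE_ENDIAN and BIG_ENDIAN as constants on every system, so the check has to compare BYTE_ORDER against them; a bare #ifdef LITTLE_ENDIAN would be true everywhere once the header is included:

#include <endian.h>

#if BYTE_ORDER == LITTLE_ENDIAN
/* 8 here is BITS_PER_BYTE from the original header */
#define SwapULong(val)  ((val) << 24 | ((val) << 8 & 0xFF0000) | \
                         ((val) >> 8 & 0xFF00) | (val) >> 24 & 0xFF)
#define SwapUShort(val) ((val) << 8 | (val) >> 8)
#else
#define SwapULong(val)  (val)
#define SwapUShort(val) (val)
#endif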
