read big-endian data portably without inet.h - c

I'm parsing a file format that represents numbers in a big-endian way.
Normally I would include arpa/inet.h and use ntohl().
In various embedded C environments I use, inet.h does not exist.
Are there any standard equivalents to ntohl() I could use so I don't need to pull in inet.h?

Updated
Not sure why you wouldn't want to link in ntohl(), but this should work, although it may be a little slower than what your platform provides.
The code uses a compile-time check to detect an Intel processor (which is always little-endian). You can amend that #if defined... block to include your own processor if you know its endianness. Otherwise, it falls back to a simple runtime check for little-endian.
#include <stdint.h> // if stdint.h is not available, then "typedef int int32_t" and "typedef unsigned int uint32_t" as appropriate
int compile_time_assert[(sizeof(int32_t) == 4)?1:-1];
int32_t NTOHL(int32_t value)
{
#if defined(_M_X64) || defined(_M_IX86) || defined(__x86_64) || defined(__x86_64__) || defined(__amd64) || defined(__amd64__)
    const int isBigEndian = 0;
#elif defined(sparc)
    const int isBigEndian = 1;
#else
    const int32_t test = 0x01020304;
    const char* ptr = (const char*)&test;
    int isBigEndian = (ptr[0] == 0x01);
#endif
    int32_t result = value;
    if (!isBigEndian)
    {
        uint32_t uvalue = (uint32_t)value; // cast to unsigned for shifting safely
        result = (int32_t)((uvalue >> 24) | ((uvalue >> 8) & 0x0000FF00) | ((uvalue << 8) & 0x00FF0000) | (uvalue << 24));
    }
    return result;
}
All of this assumes that the only two possible endianness architectures are big and little, which should cover 99.9% of everything these days. If you want this to work on an extremely dated architecture - one the old guys with the grey beards will speak about, with 6-bit bytes, EBCDIC, and ones' complement - you will need to modify the above code as appropriate.
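For example, when parsing your file format you might copy four bytes out of the read buffer and run them through NTOHL(). A minimal sketch (the buffer and function names here are just for illustration):
#include <string.h>   /* memcpy */

int32_t read_be32_at(const unsigned char *file_buf)
{
    int32_t raw;
    memcpy(&raw, file_buf, sizeof raw);  /* copy the 4 big-endian bytes unchanged */
    return NTOHL(raw);                   /* byte-swap only on little-endian hosts */
}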

Related

Is there an architecture-independent method to create a little-endian byte stream from a value in C?

I am trying to transmit values between architectures, by creating a uint8_t[] buffer and then sending that. To ensure they are transmitted correctly, the spec is to convert all values to little-endian as they go into the buffer.
I read this article here which discussed how to convert from one endianness to the other, and here where it discusses how to check the endianness of the system.
I am curious if there is a method to read bytes from a uint64_t or other value in little-endian order regardless of whether the system is big- or little-endian (i.e. through some sequence of bitwise operations)?
Or is the only method to first check the endianness of the system, and then if big explicitly convert to little?
That's actually quite easy -- you just use shifts to convert between 'native' format (whatever that is) and little-endian.
/* put a 32-bit value into a buffer in little-endian order (4 bytes) */
void put32(uint8_t *buf, uint32_t val) {
    buf[0] = val;
    buf[1] = val >> 8;
    buf[2] = val >> 16;
    buf[3] = val >> 24;
}

/* get a 32-bit value from a buffer (little-endian) */
uint32_t get32(uint8_t *buf) {
    return (uint32_t)buf[0] + ((uint32_t)buf[1] << 8) +
           ((uint32_t)buf[2] << 16) + ((uint32_t)buf[3] << 24);
}
If you put a value into a buffer, transmit it as a byte stream to another machine, and then get the value from the received buffer, the two machines will have the same 32-bit value regardless of whether they have the same or different native byte ordering. The casts are needed because the default promotions will just convert to int, which might be smaller than a uint32_t, in which case the shifts could be out of range.
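As a quick round-trip illustration (a sketch; 0xDEADBEEFu is just an arbitrary test value):
uint8_t buf[4];
uint32_t v = 0xDEADBEEFu;

put32(buf, v);                 /* buf = { 0xEF, 0xBE, 0xAD, 0xDE } */
uint32_t back = get32(buf);    /* back == 0xDEADBEEF on any host   */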
Be careful if your buffers are char rather than uint8_t (char might or might not be signed) -- you need to mask in that case:
uint32_t get32(char *buf) {
    return ((uint32_t)buf[0] & 0xff) + (((uint32_t)buf[1] & 0xff) << 8) +
           (((uint32_t)buf[2] & 0xff) << 16) + (((uint32_t)buf[3] & 0xff) << 24);
}
You can always serialize a uint64_t value to an array of uint8_t in little-endian order simply as
uint64_t source = ...;
uint8_t target[8];
target[0] = source;
target[1] = source >> 8;
target[2] = source >> 16;
target[3] = source >> 24;
target[4] = source >> 32;
target[5] = source >> 40;
target[6] = source >> 48;
target[7] = source >> 56;
or
for (int i = 0; i < sizeof (uint64_t); i++) {
    target[i] = source >> i * 8;
}
and this will work anywhere that uint64_t and uint8_t exist.
Notice that this assumes that the source value is unsigned. Bit-shifting negative signed values will cause all sorts of headaches and you just don't want to do that.
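If you do need to send a signed 64-bit value, a common approach (a sketch, not part of the original answer; the helper name is made up) is to convert it to uint64_t first, which is well defined, and shift the unsigned value:
#include <stdint.h>

void ser_i64(uint8_t target[8], int64_t value)
{
    uint64_t u = (uint64_t)value;            /* wraps modulo 2^64, no undefined behaviour */
    for (int i = 0; i < 8; i++) {
        target[i] = (uint8_t)(u >> (i * 8));
    }
}
On the receiving side, deserialize into a uint64_t as shown below and convert back to int64_t at the end; that conversion is implementation-defined for values above INT64_MAX, but on mainstream two's-complement targets it restores the original value.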
Deserialization is a bit more complex if reading a byte at a time in order:
uint8_t source[8] = ...;
uint64_t target = 0;
for (int i = 0; i < sizeof (uint64_t); i++) {
    target |= (uint64_t)source[i] << i * 8;
}
The cast to (uint64_t) is absolutely necessary, because the operands of << will undergo integer promotions, and uint8_t would always be converted to a signed int - and "funny" things will happen when you shift a set bit into the sign bit of a signed int.
If you write this into a function
#include <inttypes.h>
void serialize(uint64_t source, uint8_t *target) {
    target[0] = source;
    target[1] = source >> 8;
    target[2] = source >> 16;
    target[3] = source >> 24;
    target[4] = source >> 32;
    target[5] = source >> 40;
    target[6] = source >> 48;
    target[7] = source >> 56;
}
and compile for x86-64 using GCC 11 and -O3, the function will be compiled to
serialize:
        movq    %rdi, (%rsi)
        ret
which just moves the 64-bit value of source into target array as is. If you reverse the indices (7 ... 0; big-endian), GCC will be clever enough to recognize that too and will compile it (with -O3) to
serialize:
        bswap   %rdi
        movq    %rdi, (%rsi)
        ret
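For reference, the reversed-index big-endian variant mentioned above would look like this (the _be suffix is mine):
void serialize_be(uint64_t source, uint8_t *target) {
    target[0] = source >> 56;
    target[1] = source >> 48;
    target[2] = source >> 40;
    target[3] = source >> 32;
    target[4] = source >> 24;
    target[5] = source >> 16;
    target[6] = source >> 8;
    target[7] = source;
}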
Most standardized network protocols specify numbers in big-endian format. In fact, big-endian is also referred to as network byte order, and there are functions specifically for translating integers of various sizes between host and network byte order.
These functions are htons and ntohs for 16-bit values and htonl and ntohl for 32-bit values. However, there is no equivalent for 64-bit values, and you're using little-endian for the network protocol, so these won't help you.
You can still, however, translate between the host byte order and the network byte order (little-endian in this case) without knowing the host order. You can do this by bit-shifting the relevant values into or out of the host numbers.
For example, to convert a 32 bit value from host to little endian and back to host:
uint32_t src_value = *some value*;
uint8_t buf[sizeof(uint32_t)];
int i;

for (i = 0; i < sizeof(uint32_t); i++) {
    buf[i] = (src_value >> (8 * i)) & 0xff;
}

uint32_t dest_value = 0;
for (i = 0; i < sizeof(uint32_t); i++) {
    dest_value |= (uint32_t)buf[i] << (8 * i);
}
For two systems that must communicate, you specify an "intercommunication byte order". Then you have functions that convert between that and the native architecture byte order of each system.
There are three approaches to this problem. In order of efficiency:
Compile time detection of endianness
Run time detection of endianness
Endian agnostic code (corresponding to "sequence of bitwise operations" in your question).
Compile time detection of endianness
On architectures whose byte order is the same as the intercomm byte order, these functions do no transformation, but by using them, the same code becomes portable between systems.
Such functions may already exist on your target platform, for example:
Linux's endian.h be64toh() et al.
POSIX htonl, htons, ntohl, ntohs
Windows' winsock.h (same as POSIX but adds 64-bit htonll() and ntohll())
Where they don't exist, creating them with cross-platform support is trivial. For example:
uint16_t intercom_to_host_16( uint16_t intercom_word )
{
#if __BIG_ENDIAN__
    return intercom_word ;
#else
    return intercom_word >> 8 | intercom_word << 8 ;
#endif
}
Here I have assumed that the intercom order is big-endian, which makes the function compatible with network byte order per ntohs() et al. The macro __BIG_ENDIAN__ is predefined on most compilers. If not, simply define it as a command-line macro when compiling, e.g. -D __BIG_ENDIAN__.
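On GCC and Clang you could instead key the check off the compilers' own predefined byte-order macros; a sketch of the same function using them:
uint16_t intercom_to_host_16( uint16_t intercom_word )
{
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
    return intercom_word ;                              /* host is already big-endian */
#else
    return intercom_word >> 8 | intercom_word << 8 ;    /* swap bytes */
#endif
}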
Run time detection of endianness
It is possible to detect endianness at runtime with minimal overhead:
uint16_t intercom_to_host_16( uint16_t intercom_word )
{
    static const union
    {
        uint16_t word ;
        uint8_t bytes[2] ;
    } test = {.word = 0xff00u } ;

    return test.bytes[0] == 0xffu ?
           intercom_word :
           intercom_word >> 8 | intercom_word << 8 ;
}
Of course you might wrap the test in a function for use in similar functions for other word sizes:
#include <stdbool.h>

bool isBigEndian()
{
    static const union
    {
        uint16_t word ;
        uint8_t bytes[2] ;
    } test = {.word = 0xff00u } ;

    return test.bytes[0] == 0xffu ;
}
Then simply have:
uint16_t intercom_to_host_16( uint16_t intercom_word )
{
    return isBigEndian() ? intercom_word :
                           intercom_word >> 8 | intercom_word << 8 ;
}
Endian agnostic code
It is entirely possible to use endian-agnostic code, but in that case all participants in the communication or file processing pay the software overhead even if the native byte order already matches the intercom byte order.
uint16_t intercom_to_host_16( uint16_t intercom_word )
{
    /* Lay the intercom (big-endian) bytes out in memory, high byte first,
       then read them back as a native uint16_t. memcpy (from <string.h>)
       avoids the alignment/aliasing issues of a pointer cast. */
    uint8_t host_word [2] = { intercom_word >> 8,
                              intercom_word & 0xffu } ;
    uint16_t result ;
    memcpy( &result, host_word, sizeof result ) ;
    return result ;
}

Linux Change IDT, how to read offset?

I am trying to edit the Linux kernel to change the IDT, so I wrote the following helper function:
#include <asm/desc.h>
unsigned long my_get_gate_offset(gate_desc *gate) {
    unsigned long res = 0;
    return res;
}
How can I fill res in the following way: the lower 16 bits should get offset_low, the middle 16 bits offset_middle, and the upper 32 bits offset_high? How can this be done in C?
Also, how can I access offset_low, offset_middle and offset_high? They are declared in gate_struct, not in gate_desc.
There is already a function, gate_offset(), which does exactly what you ask for:
static inline unsigned long gate_offset(const gate_desc *g)
{
#ifdef CONFIG_X86_64
        return g->offset_low | ((unsigned long)g->offset_middle << 16) |
               ((unsigned long)g->offset_high << 32);
#else
        return g->offset_low | ((unsigned long)g->offset_middle << 16);
#endif
}
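So the helper from the question can simply delegate to it; a minimal sketch:
#include <asm/desc.h>

unsigned long my_get_gate_offset(gate_desc *gate)
{
        /* gate_offset() (shown above) combines offset_low, offset_middle
           and, on x86-64, offset_high into one value. */
        return gate_offset(gate);
}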

How can I make an RGB LED work in MPLAB for a PIC18F4550?

I've tried in C to make a WS2812 work in MPLAB with a PIC18F4550. Power reaches the LED, but it doesn't turn on, and I can't select the LED that I want to use. How can I solve this? This is the code that I'm using:
Thanks
main.c
#define _XTAL_FREQ 4000000UL
#include "config.h"
#include "ws2812.h"
void main(void) {
    ADCON1 = 0b00001111;
    TRISB = 0b00000001;
    if (energy_port == 1) {
        pin_strip_led = 1;
        ws2812_setPixelColorLed(1, ws2812_Color(255, 0, 0));
    }
    else {
        pin_strip_led = 0;
    }
    return;
}
This is what I'm using in ws2812.h:
#define STRIP_SIZE 8
#define pin_strip_led PORTBbits.RB1
#define energy_port PORTBbits.RB0
void ws2812_setPixelColorLed(unsigned char pixel, unsigned long color) {
    Strip_RGBData[pixel][0] = (char) (color >> 16);
    Strip_RGBData[pixel][1] = (char) (color >> 8);
    Strip_RGBData[pixel][2] = (char) (color);
}

unsigned long ws2812_Color(unsigned char r, unsigned char g, unsigned char b) {
    return ((unsigned long) r << 16) | ((unsigned long) g << 8) | b;
}
The program shown will read port B0, set B1 to the same state, and if set, populate one row of the array Strip_RGBData (which is neither declared nor used). It will do this one time. You ask, "How can I solve this?"
A good place to start is with the WS2812 datasheet. Use it to determine how to compose the signal required to control the LEDs. In your code set up a timer to clock out the bits of that signal in compliance with the timing requirements.
After that is working, you can add a loop to read the input to select an individual LED.

byte order using GCC struct bit packing

I am using GCC struct bit fields in an attempt to interpret 8-byte CAN message data. I wrote a small program as an example of one possible message layout. The code and the comments should describe my problem. I assigned the 8 bytes so that all 5 signals should equal 1. As the output shows on an Intel PC, that is hardly the case. All CAN data that I deal with is big-endian, and the fact that it is almost never packed 8-bit aligned makes htonl() and friends useless in this case. Does anyone know of a solution?
#include <stdio.h>
#include <netinet/in.h>
typedef union
{
    unsigned char data[8];
    struct {
        unsigned int signal1 : 32;
        unsigned int signal2 : 6;
        unsigned int signal3 : 16;
        unsigned int signal4 : 8;
        unsigned int signal5 : 2;
    } __attribute__((__packed__));
} _message1;

int main()
{
    _message1 message1;
    unsigned char incoming_data[8]; //This is how this message would come in from a CAN bus for all signals == 1
    incoming_data[0] = 0x00;
    incoming_data[1] = 0x00;
    incoming_data[2] = 0x00;
    incoming_data[3] = 0x01; //bit 1 of signal 1
    incoming_data[4] = 0x04; //bit 1 of signal 2
    incoming_data[5] = 0x00;
    incoming_data[6] = 0x04; //bit 1 of signal 3
    incoming_data[7] = 0x05; //bit 1 of signal 4 and signal 5
    for(int i = 0; i < 8; ++i){
        message1.data[i] = incoming_data[i];
    }
    printf("signal1 = %x\n", message1.signal1);
    printf("signal2 = %x\n", message1.signal2);
    printf("signal3 = %x\n", message1.signal3);
    printf("signal4 = %x\n", message1.signal4);
    printf("signal5 = %x\n", message1.signal5);
}
Because struct packing order varies between compilers and architectures, the best option is to use a helper function to pack/unpack the binary data instead.
For example:
static inline void message1_unpack(uint32_t *fields,
                                   const unsigned char *buffer)
{
    const uint64_t data = (((uint64_t)buffer[0]) << 56)
                        | (((uint64_t)buffer[1]) << 48)
                        | (((uint64_t)buffer[2]) << 40)
                        | (((uint64_t)buffer[3]) << 32)
                        | (((uint64_t)buffer[4]) << 24)
                        | (((uint64_t)buffer[5]) << 16)
                        | (((uint64_t)buffer[6]) << 8)
                        |  ((uint64_t)buffer[7]);

    fields[0] = data >> 32;            /* Bits 32..63 */
    fields[1] = (data >> 26) & 0x3F;   /* Bits 26..31 */
    fields[2] = (data >> 10) & 0xFFFF; /* Bits 10..25 */
    fields[3] = (data >> 2) & 0xFF;    /* Bits 2..9 */
    fields[4] = data & 0x03;           /* Bits 0..1 */
}
Note that because the consecutive bytes are interpreted as a single unsigned integer (in big-endian byte order), the above will be perfectly portable.
Instead of an array of fields, you could use a structure, of course; but it does not need to have any resemblance to the on-the-wire structure at all. However, if you have several different structures to unpack, an array of (maximum-width) fields usually turns out to be easier and more robust.
All sane compilers will optimize the above code just fine. In particular, GCC with -O2 does a very good job.
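To tie this back to the question, applying the unpack helper to the incoming_data bytes shown there recovers all five signals as 1; a sketch of how it might be called:
uint32_t fields[5];
const unsigned char incoming_data[8] = { 0x00, 0x00, 0x00, 0x01,
                                         0x04, 0x00, 0x04, 0x05 };

message1_unpack(fields, incoming_data);
/* fields[0] .. fields[4] are now all 1 (signal1 .. signal5). */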
The inverse, packing those same fields to a buffer, is very similar:
static inline void message1_pack(unsigned char *buffer,
                                 const uint32_t *fields)
{
    const uint64_t data = (((uint64_t)(fields[0]         )) << 32)
                        | (((uint64_t)(fields[1] & 0x3F  )) << 26)
                        | (((uint64_t)(fields[2] & 0xFFFF)) << 10)
                        | (((uint64_t)(fields[3] & 0xFF  )) << 2)
                        |  ((uint64_t)(fields[4] & 0x03  ));

    buffer[0] = data >> 56;
    buffer[1] = data >> 48;
    buffer[2] = data >> 40;
    buffer[3] = data >> 32;
    buffer[4] = data >> 24;
    buffer[5] = data >> 16;
    buffer[6] = data >> 8;
    buffer[7] = data;
}
Note that the masks define the field length (0x03 = 0b11 (2 bits), 0x3F = 0b111111 (6 bits), 0xFF = 0b11111111 (8 bits), 0xFFFF = 0b1111111111111111 (16 bits)); and the shift amount depends on the bit position of the least significant bit in each field.
To verify such functions work, pack, unpack, repack, and re-unpack a buffer that should contain all zeros except one of the fields all ones, and verify the data stays correct over two roundtrips. It usually suffices to detect the typical bugs (wrong bit shift amounts, typos in masks).
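A quick self-test along those lines might look like this (a sketch; the field widths are taken from the masks above, and the helper name is made up):
#include <assert.h>
#include <string.h>

static void message1_selftest(void)
{
    static const uint32_t width[5] = { 32, 6, 16, 8, 2 };

    for (int f = 0; f < 5; f++) {
        uint32_t fields[5] = { 0, 0, 0, 0, 0 };
        uint32_t check[5];
        unsigned char buffer[8];

        /* set one field to all ones, the rest stay zero */
        fields[f] = (width[f] < 32) ? ((1u << width[f]) - 1u) : 0xFFFFFFFFu;

        message1_pack(buffer, fields);
        message1_unpack(check, buffer);
        assert(memcmp(fields, check, sizeof fields) == 0);

        /* second round trip */
        message1_pack(buffer, check);
        message1_unpack(check, buffer);
        assert(memcmp(fields, check, sizeof fields) == 0);
    }
}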
Note that documentation will be key to ensure the code remains maintainable. I'd personally add comment blocks before each of the above functions, similar to
/* message1_unpack(): Unpack 8-byte message to 5 fields:
field[0]: Foobar. Bits 32..63.
field[1]: Buzz. Bits 26..31.
field[2]: Wahwah. Bits 10..25.
field[3]: Cheez. Bits 2..9.
field[4]: Blop. Bits 0..1.
*/
with the field "names" reflecting their names in documentation.

Architecturally independent serialization/deserialization of numbers using C

I'm working on a small client-server application written in C.
To send an unsigned 32-bit number to the remote side, I use this code:
u_int32_t number = 123;
send(client_socket_, &number, 4, CLIENT_SOCKET_FLAGS);
To receive this number on the remote side, I use this code:
u_int32_t number;
recv(acceptor, &number, 4, SERVER_DATA_SOCKET_FLAGS);
printf("Number is: %u\n", number);
If the client and server run on the same processor architecture (amd64), everything works fine.
But if my client app is launched on a different processor architecture (mips), I get an invalid number on the server (the byte order is inverted).
How can I serialize my number into a binary format in an architecture-independent way?
An important requirement is good performance of the binary serializer/deserializer.
Also, the solution should not be tied to a library.
Serialization
static void Ser_uint32_t(uint8_t arr [sizeof(uint32_t)], uint32_t value)
{
    arr[0] = (value >> 24u) & 0xFFu;
    arr[1] = (value >> 16u) & 0xFFu;
    arr[2] = (value >> 8u) & 0xFFu;
    arr[3] = (value >> 0u) & 0xFFu;
}
And de-serialization
static uint32_t DeSer_uint32_t(const uint8_t arr [sizeof(uint32_t)])
{
    uint32_t x = 0;

    x |= (uint32_t)(arr[0]) << 24u;
    x |= (uint32_t)(arr[1]) << 16u;
    x |= (uint32_t)(arr[2]) << 8u;
    x |= (uint32_t)(arr[3]) << 0u;

    return x;
}
Depending on the endianness you want, you can adjust the posted functions (as written, they produce big-endian byte order).
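For example, little-endian variants (a sketch; the _le names are mine) just reverse the byte positions:
static void Ser_uint32_t_le(uint8_t arr [sizeof(uint32_t)], uint32_t value)
{
    arr[0] = (value >> 0u) & 0xFFu;
    arr[1] = (value >> 8u) & 0xFFu;
    arr[2] = (value >> 16u) & 0xFFu;
    arr[3] = (value >> 24u) & 0xFFu;
}

static uint32_t DeSer_uint32_t_le(const uint8_t arr [sizeof(uint32_t)])
{
    uint32_t x = 0;

    x |= (uint32_t)(arr[0]) << 0u;
    x |= (uint32_t)(arr[1]) << 8u;
    x |= (uint32_t)(arr[2]) << 16u;
    x |= (uint32_t)(arr[3]) << 24u;

    return x;
}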
I prefer to use hexadecimal encoding.
E.g. 0xDEADBEEF will literally be sent as: <stx>DEADBEEF<etx>.
This method has the advantage of not losing the ability to use ASCII control characters.
This method has the disadvantage that each number byte is doubled.
Conversion is easy: you can use a lookup table after masking a nibble.
It's a bit like using something like json, but more lightweight in execution.
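A sketch of that nibble-lookup conversion for one uint32_t (the function name and fixed-width output are just for illustration):
#include <stdint.h>

/* Writes 8 hex characters, most significant nibble first, no terminator. */
static void u32_to_hex(uint32_t value, char out[8])
{
    static const char lut[] = "0123456789ABCDEF";

    for (int i = 0; i < 8; i++) {
        out[i] = lut[(value >> (28 - 4 * i)) & 0xFu];   /* mask one nibble, look it up */
    }
}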
