Linux: change IDT, how do I read the offset?

I am trying to edit the Linux kernel to change the IDT, so I wrote the following helper function:
#include <asm/desc.h>

unsigned long my_get_gate_offset(gate_desc *gate) {
    unsigned long res = 0;
    /* ... fill res from the gate descriptor ... */
    return res;
}
How can I fill res in the following way: the lower 16 bits should get offset_low, the middle 16 bits offset_middle, and the upper 32 bits offset_high? How can this be done in C?
Also, how can I reach offset_low, offset_middle and offset_high? They are declared in gate_struct, not in gate_desc.

There already is a function, gate_offset(), which does exactly what you ask for:
static inline unsigned long gate_offset(const gate_desc *g)
{
#ifdef CONFIG_X86_64
	return g->offset_low | ((unsigned long)g->offset_middle << 16) |
	       ((unsigned long)g->offset_high << 32);
#else
	return g->offset_low | ((unsigned long)g->offset_middle << 16);
#endif
}

Related

How can I make an RGB LED work in MPLAB for a PIC18F4550?

I'm trying in C to make a WS2812 work in MPLAB with a PIC18F4550. Power reaches the LED, but it doesn't turn on, and I can't select which LED to use. How can I solve this? This is the code that I'm using. Thanks.
main.c
#define _XTAL_FREQ 4000000UL
#include "config.h"
#include "ws2812.h"

void main(void) {
    ADCON1 = 0b00001111;   // all pins digital
    TRISB  = 0b00000001;   // RB0 input, rest of port B output
    if (energy_port == 1) {
        pin_strip_led = 1;
        ws2812_setPixelColorLed(1, ws2812_Color(255, 0, 0));
    }
    else {
        pin_strip_led = 0;
    }
    return;
}
This is what I'm using in ws2812.h:
#define STRIP_SIZE 8
#define pin_strip_led PORTBbits.RB1
#define energy_port   PORTBbits.RB0

void ws2812_setPixelColorLed(unsigned char pixel, unsigned long color) {
    Strip_RGBData[pixel][0] = (char)(color >> 16);
    Strip_RGBData[pixel][1] = (char)(color >> 8);
    Strip_RGBData[pixel][2] = (char)(color);
}

unsigned long ws2812_Color(unsigned char r, unsigned char g, unsigned char b) {
    return ((unsigned long)r << 16) | ((unsigned long)g << 8) | b;
}
The program shown will read port B0, set B1 to the same state, and if set, populate one row of the array Strip_RGBData (which is neither declared nor used). It will do this one time. You ask, "How can I solve this?"
A good place to start is with the WS2812 datasheet. Use it to determine how to compose the signal required to control the LEDs. In your code set up a timer to clock out the bits of that signal in compliance with the timing requirements.
After that is working, you can add a loop to read the input to select an individual LED.

How to get 63 bits out of 64

I have two 32-bit memory areas filled by hardware with this layout:
reg1: bit 63 | bits 62..32       || reg0: bits 31..0
-------------|-------------------||-----------------
val          | time value upper  || time value lower
I'm trying to get the time value and the 'val' bit at once with a struct and a union.
I first tried :
typedef struct
{
    uint64_t time : 63;
    uint64_t value : 1;
} myStruct;
but this does not work: my compiler will not allow myStruct.time to be wider than 32 bits.
I tried several things, even something like this:
typedef union
{
    union {
        struct {
            uint32_t lower : 32;
            uint32_t upper : 31;
        } spare;
        uint64_t value;
    } time;
    struct {
        uint32_t spare_low : 32;
        uint32_t spare_upp : 31;
        uint32_t value : 1;
    } pin;
} myStruct;
But in this case, myStruct.time.value obviously picks up bit 63 as well: I get values like 0x8000_0415_4142_3015 instead of 0x0000_0415_4142_3015.
How can I retrieve the 63-bit time value and the 1-bit pin value easily?
PS: I know that I can write a macro or something like that; I'm looking for a direct method.
Bit-fields should never be used when bit placement is important.
Use bit masking instead:
uint64_t someValue = ...;
uint64_t time = someValue & 0x7fffffffffffffff;
bool pin = (someValue >> 63) & 1;
Bit-fields only give headaches and non-portable code. Always avoid them.
Assuming that 64 bit access is feasible for your specific system and "reg1" and "reg0" aren't actual variables but something placed in memory by hardware, then:
#define reg1 (*(volatile uint32_t*)0x1000) // address of reg1
#define reg0 (*(volatile uint32_t*)0x1004) // address of reg0
const uint32_t VAL_MASK = 1ul << 31;
...
uint64_t time_;
time_ = (uint64_t)(reg1 & (VAL_MASK - 1)) << 32;
time_ |= reg0;
bool val = reg1 & VAL_MASK;
// or alternatively:
uint64_t val = reg1 >> 31;
(Please don't name your variable time, since that collides with the standard library.)
If you don't have 64-bit types, or they are simply too slow to use, then you have to access each 32-bit half individually; just don't OR them together as done above.
int getval(uint64_t reg)
{
    return !!(reg & ((uint64_t)1 << 63));
}

uint64_t gettime(uint64_t reg)
{
    return reg & (~((uint64_t)1 << 63));
}

byte order using GCC struct bit packing

I am using GCC struct bit fields in an attempt to interpret 8-byte CAN message data. I wrote a small program as an example of one possible message layout. The code and the comments should describe my problem. I assigned the 8 bytes so that all 5 signals should equal 1. As the output shows on an Intel PC, that is hardly the case. All CAN data that I deal with is big endian, and the fact that the signals are almost never packed 8-bit aligned makes htonl() and friends useless in this case. Does anyone know of a solution?
#include <stdio.h>
#include <netinet/in.h>
typedef union
{
    unsigned char data[8];
    struct {
        unsigned int signal1 : 32;
        unsigned int signal2 : 6;
        unsigned int signal3 : 16;
        unsigned int signal4 : 8;
        unsigned int signal5 : 2;
    } __attribute__((__packed__));
} _message1;

int main()
{
    _message1 message1;
    unsigned char incoming_data[8]; // how this message would come in from a CAN bus for all signals == 1
    incoming_data[0] = 0x00;
    incoming_data[1] = 0x00;
    incoming_data[2] = 0x00;
    incoming_data[3] = 0x01; // bit 1 of signal 1
    incoming_data[4] = 0x04; // bit 1 of signal 2
    incoming_data[5] = 0x00;
    incoming_data[6] = 0x04; // bit 1 of signal 3
    incoming_data[7] = 0x05; // bit 1 of signal 4 and signal 5
    for (int i = 0; i < 8; ++i) {
        message1.data[i] = incoming_data[i];
    }
    printf("signal1 = %x\n", message1.signal1);
    printf("signal2 = %x\n", message1.signal2);
    printf("signal3 = %x\n", message1.signal3);
    printf("signal4 = %x\n", message1.signal4);
    printf("signal5 = %x\n", message1.signal5);
}
Because struct bit-field packing order varies between compilers and architectures, the best option is to use helper functions to pack/unpack the binary data instead.
For example:
static inline void message1_unpack(uint32_t *fields,
const unsigned char *buffer)
{
const uint64_t data = (((uint64_t)buffer[0]) << 56)
| (((uint64_t)buffer[1]) << 48)
| (((uint64_t)buffer[2]) << 40)
| (((uint64_t)buffer[3]) << 32)
| (((uint64_t)buffer[4]) << 24)
| (((uint64_t)buffer[5]) << 16)
| (((uint64_t)buffer[6]) << 8)
| ((uint64_t)buffer[7]);
fields[0] = data >> 32; /* Bits 32..63 */
fields[1] = (data >> 26) & 0x3F; /* Bits 26..31 */
fields[2] = (data >> 10) & 0xFFFF; /* Bits 10..25 */
fields[3] = (data >> 2) & 0xFF; /* Bits 2..9 */
fields[4] = data & 0x03; /* Bits 0..1 */
}
Note that because the consecutive bytes are interpreted as a single unsigned integer (in big-endian byte order), the above will be perfectly portable.
Instead of an array of fields, you could use a structure, of course; but it does not need to have any resemblance to the on-the-wire structure at all. However, if you have several different structures to unpack, an array of (maximum-width) fields usually turns out to be easier and more robust.
All sane compilers will optimize the above code just fine. In particular, GCC with -O2 does a very good job.
The inverse, packing those same fields to a buffer, is very similar:
static inline void message1_pack(unsigned char *buffer,
const uint32_t *fields)
{
const uint64_t data = (((uint64_t)(fields[0] )) << 32)
| (((uint64_t)(fields[1] & 0x3F )) << 26)
| (((uint64_t)(fields[2] & 0xFFFF )) << 10)
| (((uint64_t)(fields[3] & 0xFF )) << 2)
| ( (uint64_t)(fields[4] & 0x03 ) );
buffer[0] = data >> 56;
buffer[1] = data >> 48;
buffer[2] = data >> 40;
buffer[3] = data >> 32;
buffer[4] = data >> 24;
buffer[5] = data >> 16;
buffer[6] = data >> 8;
buffer[7] = data;
}
Note that the masks define the field length (0x03 = 0b11 (2 bits), 0x3F = 0b111111 (6 bits), 0xFF = 0b11111111 (8 bits), 0xFFFF = 0b1111111111111111 (16 bits)); and the shift amount depends on the bit position of the least significant bit in each field.
To verify such functions work, pack, unpack, repack, and re-unpack a buffer that should contain all zeros except one of the fields all ones, and verify the data stays correct over two roundtrips. It usually suffices to detect the typical bugs (wrong bit shift amounts, typos in masks).
Note that documentation will be key to ensure the code remains maintainable. I'd personally add comment blocks before each of the above functions, similar to
/* message1_unpack(): Unpack 8-byte message to 5 fields:
field[0]: Foobar. Bits 32..63.
field[1]: Buzz. Bits 26..31.
field[2]: Wahwah. Bits 10..25.
field[3]: Cheez. Bits 2..9.
field[4]: Blop. Bits 0..1.
*/
with the field "names" reflecting their names in documentation.

read big-endian data portably without inet.h

I'm parsing a file format that represents numbers in a big-endian way.
Normally I would include arpa/inet.h and use ntohl().
In various embedded C environments I use, inet.h does not exist.
Are there any standard equivalents to ntohl() I could use, so I don't need to pull in inet.h?
Not sure why you wouldn't want to link in ntohl, but this should work, although a little slower than what your platform provides.
The code is optimized to use a compile-time check to detect an Intel processor (which is always little endian). You can amend that #if defined... block to include your own processor if you know its endianness. Otherwise, it falls back to a simple runtime check for little endian.
#include <stdint.h> // if stdint.h is not available, then "typedef int int32_t" and "typedef unsigned int uint32_t" as appropriate
int compile_time_assert[(sizeof(int32_t) == 4)?1:-1];
int32_t NTOHL(int32_t value)
{
#if defined(_M_X64) || defined(_M_IX86) || defined(__x86_64) || defined(__x86_64__) || defined(__amd64) || defined(__amd64__)
const int isBigEndian = 0;
#elif defined(sparc)
const int isBigEndian = 1;
#else
const int32_t test = 0x01020304;
const char* ptr = (const char*)&test;
int isBigEndian = (ptr[0] == 0x01);
#endif
int32_t result = value;
if (!isBigEndian)
{
uint32_t uvalue = (uint32_t)value; // cast to unsigned for shifting safely
result = (uvalue >> 24) | ((uvalue >> 8) & 0x0000FF00) | ((uvalue << 8) & 0x00FF0000) | (uvalue << 24);
}
return result;
}
All of this assumes there are only two possible endianness architectures (big and little), which should cover 99.9% of everything these days. If you want this to work on an extremely dated architecture, one where the old guys with the grey beards speak of 6-bit bytes, EBCDIC, and ones' complement, you will need to modify the above code as appropriate.

Convert Bytes to Int / uint in C

I have an unsigned char array[248]; filled with bytes, like 2F AF FF 00 EB AB CD EF .....
This array is my byte stream, a buffer for the data I receive over the UART (RS232).
Now I want to convert the bytes back to my uint16's and int32's.
In C# I used the BitConverter class to do this, e.g.:
byte[] bytes = { 0x0A, 0xAB, 0xCD, 0x25 /* , ... */ };
int myint1 = BitConverter.ToInt32(bytes, 0);
int myint2 = BitConverter.ToInt32(bytes, 4);
int myint3 = BitConverter.ToInt32(bytes, 8);
int myint4 = BitConverter.ToInt32(bytes, 12);
//...
Console.WriteLine("int: {0}", myint1); // output data...
Is there a similar function in C? (No .NET; I use the Keil compiler because the code runs on a microcontroller.)
With Regards
Sam
There's no standard function to do it for you in C. You'll have to assemble the bytes back into your 16- and 32-bit integers yourself. Be careful about endianness!
Here's a simple little-endian example:
extern uint8_t *bytes;
uint32_t myInt1 = bytes[0] + (bytes[1] << 8) + (bytes[2] << 16) + (bytes[3] << 24);
For a big-endian system, it's just the opposite order:
uint32_t myInt1 = (bytes[0] << 24) + (bytes[1] << 16) + (bytes[2] << 8) + bytes[3];
You might be able to get away with:
uint32_t myInt1 = *(uint32_t *)bytes;
if you're careful about alignment and strict-aliasing issues.
Yes there is. Assume your bytes are in:
uint8_t bytes[N] = { /* whatever */ };
We know that a 16-bit integer is just two 8-bit integers concatenated, i.e. one of them is multiplied by 256, or equivalently shifted left by 8:
uint16_t sixteen[N/2];
size_t i;
for (i = 0; i < N; i += 2)
    sixteen[i/2] = bytes[i] | (uint16_t)bytes[i+1] << 8;
// assuming you have read your bytes little-endian
Similarly for 32 bits:
uint32_t thirty_two[N/4];
for (i = 0; i < N; i += 4)
    thirty_two[i/4] = bytes[i] | (uint32_t)bytes[i+1] << 8
        | (uint32_t)bytes[i+2] << 16 | (uint32_t)bytes[i+3] << 24;
// same assumption
If the bytes are read big-endian, of course you reverse the order:
bytes[i+1] | (uint16_t)bytes[i] << 8
and
bytes[i+3] | (uint32_t)bytes[i+2] << 8
| (uint32_t)bytes[i+1] << 16 | (uint32_t)bytes[i] << 24
Note that there's a difference between the endian-ness in the stored integer and the endian-ness of the running architecture. The endian-ness referred to in this answer is of the stored integer, i.e., the contents of bytes. The solutions are independent of the endian-ness of the running architecture since endian-ness is taken care of when shifting.
In the little-endian case (matching the host), can't you just use memcpy?
memcpy(&myint1, &array[startindex], sizeof myint1);
A related trick: you can replicate a single byte across all eight bytes of a 64-bit word by repeated shift-and-OR doubling:
char letter = 'A';
uint64_t filter = (unsigned char)letter;
filter = (filter << 8) | filter;
filter = (filter << 16) | filter;
filter = (filter << 32) | filter;
printf("filter: %#" PRIx64 "\n", filter); // needs <inttypes.h> for PRIx64
result: "filter: 0x4141414141414141"
