C header to Delphi

I have a piece of C code. I need help to translate it to Delphi code.
1)
/*
 * Color is packed into a 16-bit word as follows:
 *
 *   15       8 7        0
 *   XXggbbbb   XXrrrrgg
 *
 * Note that green bits 12 and 13 are the lower bits of the green
 * component and bits 0 and 1 are the higher ones.
 */
#define CLR_RED(spec)   (((spec) >> 2) & 0x0F)
#define CLR_GREEN(spec) ((((spec) & 0x03) << 2) | (((spec) & 0x3000) >> 12))
#define CLR_BLUE(spec)  (((spec) >> 8) & 0x0F)
2)
#define CDG_GET_SCROLL_COMMAND(scroll) (((scroll) & 0x30) >> 4)
#define CDG_GET_SCROLL_HOFFSET(scroll) ((scroll) & 0x07)
#define CDG_GET_SCROLL_VOFFSET(scroll) ((scroll) & 0x0F)

These are parameterized macros. Since Delphi doesn't support these, you'll need to use functions instead, which is cleaner anyway.
>> is a right-shift, shr in Delphi
<< is a left-shift, shl in Delphi
& is "bitwise and", and in Delphi
Delphi uses bitwise operators when working on integers and logical operators when working on booleans, so a single and operator replaces both & and &&.
| is "bitwise or", or in Delphi
0x is the prefix for hex literals, $ in Delphi
So #define CLR_GREEN(spec) ((((spec) & 0x03) << 2) | (((spec) & 0x3000) >> 12)) becomes something like:
function CLR_GREEN(spec: Word): Byte;
begin
  Result := Byte(((spec and $03) shl 2) or ((spec and $3000) shr 12));
end;
(I don't have Delphi at hand, so there might be minor bugs.)
Convert the other macros in a similar manner.

Related

What is the purpose of bitshifting by 0

I am reviewing the open source AMD GPU drivers for Linux. I noticed something I haven't seen before, and I would like to know the purpose. On line 1441 of the sid.h file, there are a series of defines that are integers being bitshifted left by 0. Wouldn't this just result in the original integer being operated on?
Here is an excerpt and a link to the header:
#define VGT_EVENT_INITIATOR 0xA2A4
#define SAMPLE_STREAMOUTSTATS1 (1 << 0)
#define SAMPLE_STREAMOUTSTATS2 (2 << 0)
#define SAMPLE_STREAMOUTSTATS3 (3 << 0)
https://github.com/torvalds/linux/blob/master/drivers/gpu/drm/amd/amdgpu/sid.h#L1441
Also, I am learning to access the performance counter registers of AMD GPUs in order to calculate the GPU load. Any tips on that would be appreciated as well.
Things like that could be done just for the sake of consistency (not necessarily applicable to your specific case). For example, I can describe a set of single-bit flags as
#define FLAG_1 0x01
#define FLAG_2 0x02
#define FLAG_3 0x04
#define FLAG_4 0x08
or as
#define FLAG_1 (1u << 0)
#define FLAG_2 (1u << 1)
#define FLAG_3 (1u << 2)
#define FLAG_4 (1u << 3)
In the first line of the latter approach I did not have to shift by 0. But it just looks more consistent that way and emphasizes the fact that FLAG_1 has the same nature as the rest of the flags. And 0 acts as a placeholder for a different value, if I some day decide to change it.
You can actually see exactly that in the linked code with shift by 0 in the definitions of DYN_OR_EN and DYN_RR_EN macros.
The approach can be extended to multi-bit fields within a word, like in the following (contrived) example
// Bits 0-3 - lower counter, bits 4-7 - upper counter
#define LOWER_0 (0u << 0)
#define LOWER_1 (1u << 0)
#define LOWER_2 (2u << 0)
#define LOWER_3 (3u << 0)
#define UPPER_0 (0u << 4)
#define UPPER_1 (1u << 4)
#define UPPER_2 (2u << 4)
#define UPPER_3 (3u << 4)
unsigned packed_counters = LOWER_2 + UPPER_3; /* or `LOWER_2 | UPPER_3` */
Again, shifts by 0 bits are present purely for visual consistency. As well as shifts of 0 values.
You can actually see exactly that in the linked code with shift by 0 in the definitions of LC_XMIT_N_FTS and LC_XMIT_N_FTS_MASK macros.

Most efficient way to check if flags are set in an integer

I have 11 flags defined as:
#define F1 1
#define F2 2
#define F3 3
#define F4 4
...
#define F11 11
In some function I then create an integer which can include either of those flags, for example:
int a = (1 << F1) | (1 << F5) | (1 << F11) | (1 << F8);
This then gets passed into a function which needs to decode which flags are set in order to set specific bits in specific registers. So my question is, what is the most efficient way to check which flags are set. Right now I have 11 if's like:
void foo(int a)
{
if ((a & (1 << F1)) >> F1) {
// Set bit 5 in register A.
}
if ((a & (1 << F2)) >> F2) {
// Set bit 3 in register V.
}
if ((a & (1 << F3)) >> F3) {
// Set bit 2 in register H.
}
if ((a & (1 << F4)) >> F4) {
// Set bit 1 in register V.
}
// And so on, for all 11 flags.
}
P.S.
This is for an 8-bit microcontroller.
Just use:
typedef enum{
FLAG1 = 1, // or 0x01
FLAG2 = 2,
FLAG3 = 4,
...
FLAG8 = 0x80
} flags;
Then in main just check
if(value & FLAGN)
In C there is no difference between 1 and any other non-zero number in an if statement; it just checks whether the value is zero or non-zero.
And setting is the same:
value = FLAG1 | FLAG2 | FLAG8;
You can also use defines, of course.
And for clarification: the maximum number of flags an N-bit type can hold is N, so for more flags you need a larger type (if the compiler supports bigger data types), like uint16_t.
C's if statement and logical operators do not make a difference between 1 and other non-zeros (although logical operators produce 1 for true). Therefore, there is no difference between (a & (1 << F3)) >> F3 and a & (1 << F3) in the context of a logical expression: if one evaluates to true, so does the other one, and vice versa. Hence, this should work:
if (a & (1 << F1)) {
// Set bit 5 in register A.
}
Note: I assume you didn't mean to write #define F11 1024, but rather #define F11 10, because you use your Fs as the second operand of <<.

Specific C shifting operations. Have a look and tell me what it is about

SYSCFG->EXTICR[EXTI_PinSourcex >> 0x02] &=
~((uint32_t)0x0F) <<
(0x04 * (EXTI_PinSourcex & (uint8_t)0x03));
SYSCFG->EXTICR[EXTI_PinSourcex >> 0x02] |=
(((uint32_t)EXTI_PortSourceGPIOx) <<
(0x04 * (EXTI_PinSourcex & (uint8_t)0x03)));
This is a piece of code from the STM32F4 board standard library. I understand every single operation but the entire code is really messy. Please accept the challenge and tell me what it is about in a more intuitive way.
And for simplicity, try to explain the situation when EXTI_PinSourcex is 0x01, and the EXTI_PortSourceGPIOx is 0x01 as well .
Any comments is appreciated, thanks in advance.
Ah, bitwise operator math.
It's easier to understand if you break it apart into pieces and "unoptimize" some of the language syntax. Let's also make the bigger variable convolutions easier to read:
SYSCFG->EXTICR[EXTI_PinSourcex >> 0x02] &=
~((uint32_t)0x0F) <<
(0x04 * (EXTI_PinSourcex & (uint8_t)0x03));
becomes:
#define cfgval_shift_2r (SYSCFG->EXTICR[EXTI_PinSourcex >> 0x02])
cfgval_shift_2r = (cfgval_shift_2r) & ~((uint32_t)0x0F) << (0x04 * (EXTI_PinSourcex & (uint8_t)0x03));
Unraveling some of the constant bitwise math (~((uint32_t)0x0F) is 0xFFFFFFF0):
cfgval_shift_2r = (cfgval_shift_2r) & (0xFFFFFFF0 << (0x04 * (EXTI_PinSourcex & 0x03)));
Now we have something that's a little easier to read. (Note that << binds tighter than &, so it is the mask, not the register value, that gets shifted.)
EXTI_PinSourcex == 0x00:
// cfgval_shift_2r = SYSCFG->EXTICR[0], because 0 shifted any number of bits is always 0
SYSCFG->EXTICR[0] = (SYSCFG->EXTICR[0]) & (0xFFFFFFF0 << (0x04 * (0 & 0x03)));
// \ == 0 /
SYSCFG->EXTICR[0] = (SYSCFG->EXTICR[0]) & 0xFFFFFFF0;
So this takes the value of SYSCFG->EXTICR[0], clears the lowest 4 bits (the port-selection field for EXTI line 0), and assigns it back.
EXTI_PinSourcex == 0x01:
// cfgval_shift_2r = SYSCFG->EXTICR[0], because (0x01 >> 0x02) == 0
SYSCFG->EXTICR[0] = (SYSCFG->EXTICR[0]) & (0xFFFFFFF0 << (0x04 * (0x01 & 0x03)));
// \ == 0x04 * 0x01 == 0x04 /
SYSCFG->EXTICR[0] = (SYSCFG->EXTICR[0]) & (0xFFFFFFF0 << 0x04);
So this shifts the mask left by 4 bits and ANDs it into SYSCFG->EXTICR[0], clearing the field that holds the port selection for EXTI line 1. (In the library source the complement is applied after the shift, as ~(0x0F << shift), which is what makes the cleared field exactly the 4 bits at positions 7..4.)
You can apply a similar breakdown to the second operation.

What does "#define BLABLABLABLA (1 << 2)" mean? [duplicate]

This question already has answers here:
What does '<<' mean in C?
(5 answers)
Closed 9 years ago.
I am reviewing some C code, and found a header file full of defines of the style:
#define BLABLABLABLA (1 << 2)
#define XXXXXXXXXXXX (1 << 3)
#define YYYYYYYYYYYY (1 << 4)
What do they mean? What do they do?
<< is the shift operator in C.
So BLABLABLABLA is defined as a binary 1 shifted 2 digits to the left.
The resulting value is then :
...00000100
You would normally do this to mask things.
So, say you have one status byte where every bit is a flag.
And if the 3rd bit being set means BLABLABLABLA, you would do:
int blablaFlag = statusByte & BLABLABLABLA;
If this is greater than 0, your flag was set.
These defines can be used when storing information (flags) in bits:
#define HAS_SOMETHING (1 << 2)
#define HAS_ANOTHER (1 << 3)
int flags = 0;
if (has_something())
flags |= HAS_SOMETHING;
if (has_another())
flags |= HAS_ANOTHER;
// ... later:
if (flags & HAS_SOMETHING)
do_something();
Using a macro preprocessor directive to set or unset these flags makes the code way more readable than this would:
if (flags & 4) // 4 is 1 << 2
do_something();
They are a way of defining constants using the C preprocessor. So each time you use BLABLABLABLA in your code it will be replaced with (1 << 2) by the preprocessor.
#define simply means that whenever BLABLABLABLA is seen, it's replaced with (1 << 2).
So if you write int x=BLABLABLABLA;, it's as if you wrote int x=(1 << 2);.
<< is the shift left operator.
<< and >> are shift operators, they work on a binary scale.
42 is written 42 in decimal and 101010 in binary.
When you use the operators:
The binary representation of 42 is : 101010
42 << 1 : 101010 is "shifted" to the left, becoming 1010100, thus 84.
42 >> 1 : 101010 is "shifted" to the right, becoming 10101, thus 21.
This is used for flags for readability purposes: it's easier to read 1 << 1, 1 << 2, 1 << 3 than 1, 2, 4.

How to break a 32-bit value and assign it to three different data types?

I have a
#define PROT_EN_DAT 0x140
// (320 in decimal)
It is loaded into a 64-bit register (e.g. setup_data[39:8] = PROT_EN_DATA).
Now I want to put this value (0x140) into
uint8_t bRequest
uint16_t wValue
uint16_t wIndex
How can I load the value so that I don't have to do it manually for other values again?
I think we can do it with shift operators but don't know how.
EDIT: Yes, it's related to USB: bRequest(8:15), wValue(16:31), wIndex(32:47), but setup_data is a 64-bit value. I want to know how I can load the proper values into these fields.
For example, say next time I am using #define PROT_EN2_REG 0x1D8,
and say setup_data[39:8] = PROT_EN2_DATA.
General read form:
aField = (aRegister >> kBitFieldLSBIndex) & ((1 << kBitFieldWidth) - 1)
General write form:
mask = ((1 << kBitFieldWidth) - 1) << kBitFieldLSBIndex;
aRegister = (aRegister & ~mask) | ((aField << kBitFieldLSBIndex) & mask);
where:
aRegister is the value you read from the bit-field-packed register,
kBitFieldLSBIndex is the index of the least significant bit of the bit field, and
kBitFieldWidth is the width of the bit field, and
aField is the value of the bit field
These are generalized, and some operations (such as bit-masking) may be unnecessary in your case. Use 1ULL instead of 1 if the field extends past bit 31, since shifting a plain int by 32 or more bits is undefined.
EDIT: In your example case (setup_data[39:8]=PROT_EN_DATA):
Read:
aField = (setup_data >> 8) & ((1ULL << 32) - 1)
Write:
#define PROT_EN_MASK (((1ULL << 32) - 1) << 8) // 0x000000FFFFFFFF00
setup_data = (setup_data & ~PROT_EN_MASK) | (((uint64_t)PROT_EN_DATA << 8) & PROT_EN_MASK);