BYTE, WORD and DWORD macros definition - c

I am trying to understand what would be the best way to define the BYTE, WORD and DWORD macros mentioned in the answers to this question.
#define LOWORD(l) ((WORD)(l))
#define HIWORD(l) ((WORD)(((DWORD)(l) >> 16) & 0xFFFF))
#define LOBYTE(w) ((BYTE)(w))
#define HIBYTE(w) ((BYTE)(((WORD)(w) >> 8) & 0xFF))
Would it be correct to assume that:
BYTE is a macro defined as
#define BYTE __uint8_t
WORD is a macro defined as
#define WORD __uint16_t
DWORD is a macro defined as
#define DWORD __uint32_t
If yes, why cast to another macro instead of casting directly to __uint8_t, __uint16_t or __uint32_t? Is it written like that for clarity?
I also found another question whose answers include typedef, and with a little more research I found answers to a question comparing #define and typedef. Would typedef be better to use in this case?

This is a portable solution:
#include <stdint.h>
typedef uint32_t DWORD; // DWORD = unsigned 32 bit value
typedef uint16_t WORD; // WORD = unsigned 16 bit value
typedef uint8_t BYTE; // BYTE = unsigned 8 bit value

These types are documented at https://msdn.microsoft.com/en-us/library/windows/desktop/aa383751(v=vs.85).aspx, and they are already defined in the Windows data type headers for the WinAPI:
typedef unsigned short WORD;
typedef unsigned char BYTE;
typedef unsigned long DWORD;
and each one is a type, not a macro.
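To see how the typedefs and the extraction macros from the question fit together, here is a minimal sketch (the sample value is arbitrary):
#include <stdint.h>
#include <stdio.h>

typedef uint32_t DWORD;
typedef uint16_t WORD;
typedef uint8_t BYTE;

#define LOWORD(l) ((WORD)(l))
#define HIWORD(l) ((WORD)(((DWORD)(l) >> 16) & 0xFFFF))
#define LOBYTE(w) ((BYTE)(w))
#define HIBYTE(w) ((BYTE)(((WORD)(w) >> 8) & 0xFF))

int main(void) {
    DWORD value = 0x12345678;
    /* Split the 32-bit value into two 16-bit halves. */
    printf("HIWORD: 0x%04X\n", HIWORD(value)); /* 0x1234 */
    printf("LOWORD: 0x%04X\n", LOWORD(value)); /* 0x5678 */
    /* Split the low word into two 8-bit halves. */
    printf("HIBYTE: 0x%02X\n", HIBYTE(LOWORD(value))); /* 0x56 */
    printf("LOBYTE: 0x%02X\n", LOBYTE(LOWORD(value))); /* 0x78 */
    return 0;
}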

About data alignment in C [duplicate]

This question already has answers here:
Why isn't sizeof for a struct equal to the sum of sizeof of each member? (13 answers)
I defined a struct:
#include <stdint.h>
#include <stdio.h>
#define O(type, field) (size_t)(&(((type *)0)->field))
struct byname {
    int16_t int16;
    int32_t int32;
    int64_t int64;
};
Then I use sizeof(struct byname) and it returns 16, which I can understand.
However, when I define it like this, adding an int8_t:
#include <stdint.h>
#include <stdio.h>
#define O(type, field) (size_t)(&(((type *)0)->field))
struct byname {
    int16_t int16;
    int32_t int32;
    int64_t int64;
    int8_t int8;
};
It returns 24. I think an int8_t only takes 1 byte, and there are 3 bytes of padding according to data alignment, so I think the answer should be 20.
Can anyone kindly explain where the 24 comes from?
The structure contains an int64_t. If the compiler requires int64_t to be aligned to an 8-byte boundary, it is reasonable to make the size of the structure a multiple of 8 (therefore 24 bytes instead of 20), so that the int64_t int64; member of every element in an array of struct byname lands on an 8-byte boundary.
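You can watch the padding happen by printing the member offsets with offsetof. A minimal sketch (the offsets in the comments assume a typical ABI where int64_t has 8-byte alignment):
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct byname {
    int16_t int16; /* offset 0, then 2 bytes of padding */
    int32_t int32; /* offset 4 */
    int64_t int64; /* offset 8 */
    int8_t int8;   /* offset 16, then 7 bytes of tail padding */
};

int main(void) {
    printf("int16 at %zu\n", offsetof(struct byname, int16));
    printf("int32 at %zu\n", offsetof(struct byname, int32));
    printf("int64 at %zu\n", offsetof(struct byname, int64));
    printf("int8  at %zu\n", offsetof(struct byname, int8));
    printf("sizeof: %zu\n", sizeof(struct byname)); /* 24 = 15 data bytes + 9 padding bytes */
    return 0;
}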

I need to write a C program that uses a Union structure to set the bits of an int

Here are the instructions:
Define a union data struct WORD_T for a uint16_t integer so that a value can be assigned to a WORD_T integer in three ways:
(1) To assign the value to each bit of the integer,
(2) To assign the value to each byte of the integer,
(3) To assign the value to the integer directly.
I know I need to do something different with that first struct, but I'm pretty lost in general. Here is what I have:
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#define BIT(n) (1 << (n))
#define BIT_SET(var, mask) ((var) |= (mask))
#define BIT_CLEAR(var, mask) ((var) &= ~(mask))
#define BIT_FLIP(var, mask) ((var) ^= (mask))
union WORD_T{
    struct{
        uint16_t integerVar:16;
    };
    struct{
        unsigned bit1:0;
        unsigned bit2:0;
        unsigned bit3:0;
        unsigned bit4:0;
        unsigned bit5:0;
        unsigned bit6:0;
        unsigned bit7:0;
        unsigned bit8:0;
        unsigned bit9:0;
        unsigned bit10:0;
        unsigned bit11:0;
        unsigned bit12:0;
        unsigned bit13:0;
        unsigned bit14:0;
        unsigned bit15:0;
        unsigned bit16:0;
    };
    void setIndividualBit(unsigned value, int bit) {
        mask = BIT(value) | BIT(bit);
        BIT_SET(n, mask);
    }
};
The most obvious problem is that :0 means "a bit-field of zero bits", which doesn't make any sense for a named member. You should change this to 1; then you can assign individual bits through code like the_union.the_struct.bit1 = 1;.
This format is an alternative to the bit set/clear macros you wrote.
Your union should look like:
typedef union
{
    uint16_t word;
    uint8_t byte[2];
    struct{
        uint16_t bit1:1;
        ...
However, this is a really bad idea, and your teacher should know better than to give such assignments. Bit-fields in C are a bag of worms: they are very poorly specified by the standard. Problems:
You can't know which bit is the MSB; it is not defined.
You can't know if there will be padding bits/bytes somewhere.
You can't even count on uint16_t or uint8_t, because bit-fields are only guaranteed to work with int.
And so on. In addition, you have to deal with endianness and alignment. See Why bit endianness is an issue in bitfields?.
Essentially, code using bit-fields like this relies heavily on the specific compiler. It will be completely non-portable.
All of these problems can be avoided by dropping the bit-fields and using bitwise operators on a uint16_t instead, like you did in your macros. With bitwise operators only, your code becomes deterministic and 100% portable. You can even use them to dodge endianness, by using bit shifts.
Here's the union definition:
union WORD_T
{
    uint16_t word;
    struct
    {
        uint8_t byte0;
        uint8_t byte1;
    };
    struct
    {
        uint8_t bit0:1;
        uint8_t bit1:1;
        uint8_t bit2:1;
        uint8_t bit3:1;
        uint8_t bit4:1;
        uint8_t bit5:1;
        uint8_t bit6:1;
        uint8_t bit7:1;
        uint8_t bit8:1;
        uint8_t bit9:1;
        uint8_t bit10:1;
        uint8_t bit11:1;
        uint8_t bit12:1;
        uint8_t bit13:1;
        uint8_t bit14:1;
        uint8_t bit15:1;
    };
};
To assign to the individual components, do something similar to the following:
union WORD_T data;
data.word = 0xF0F0; // (3) Assign to entire word
data.byte0 = 0xAA; // (2) Assign to individual byte
data.bit0 = 0; // (1) Assign to individual bits
data.bit1 = 1;
data.bit2 = 1;
data.bit3 = 0;
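If you instead go the bitwise-operator route recommended above, the same three assignment modes can be written against a plain uint16_t. A minimal sketch (the helper names are mine, not part of the assignment):
#include <stdint.h>

/* (3) Assign to the entire word. */
static inline void word_set(uint16_t *w, uint16_t value) {
    *w = value;
}

/* (2) Assign to one byte; byte 0 is the low byte regardless of endianness. */
static inline void byte_set(uint16_t *w, int byte, uint8_t value) {
    *w = (uint16_t)((*w & ~(0xFFu << (8 * byte))) | ((unsigned)value << (8 * byte)));
}

/* (1) Assign to one bit, numbered 0 (LSB) through 15 (MSB). */
static inline void bit_assign(uint16_t *w, int bit, int value) {
    if (value)
        *w |= (uint16_t)(1u << bit);
    else
        *w &= (uint16_t)~(1u << bit);
}
Usage mirrors the union example: word_set(&data, 0xF0F0); byte_set(&data, 0, 0xAA); bit_assign(&data, 0, 0);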

Is this code declaring a type?

#ifdef _CPU_8BIT_INT_
// unsigned 8 bit
typedef unsigned _CPU_8BIT_INT_ u8;
What is the code above doing? Is it trying to declare a type? (type as in integer, char etc.)
Yes, typedef is used to declare a type. From then on,
u8 x;
/* Equivalent to: */
unsigned _CPU_8BIT_INT_ x;
Are you sure you're not better off using uint8_t from <stdint.h>?
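If <stdint.h> is available, the same alias can be built on top of it, with a compile-time width check (a sketch; _Static_assert requires C11):
#include <stdint.h>

typedef uint8_t u8; /* exact-width alias; no macro machinery needed */

_Static_assert(sizeof(u8) == 1, "u8 must be exactly one byte");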

Really simple error; I've probably forgotten a semicolon

I'm getting loads of errors like these:
gfx.h:48: error: syntax error before 'buffer'
gfx.h:48: warning: type defaults to 'int' in declaration of 'buffer'
gfx.h:48: warning: data definition has no type or storage class
gfx.h:73: error: syntax error before 'uint16_t'
gfx.h:73: warning: no semicolon at end of struct or union
gfx.h:74: warning: type defaults to 'int' in declaration of 'visible_lines_per_frame'
gfx.h:74: warning: data definition has no type or storage class
...
I'm a bit tired, so I can't figure out what could be causing these.
This is the definition of buffer (starting at line 43, going to line 57):
/* 8-bit architecture (not yet used.) */
#if PROC_BIT_SIZE == 8
uint8_t buffer[GFX_SIZE];
# define GFX_PIXEL_ADDR(x,y) (x / 8) + (y * (GFX_WIDTH / 8))
/* 16-bit architecture: dsPIC */
#elif PROC_BIT_SIZE == 16
uint16_t buffer[GFX_SIZE / 2];
# define GFX_PIXEL_ADDR(x,y) (x / 16) + (y * (GFX_WIDTH / 16))
/* 32-bit architecture: AVR32(?), STM32 */
#elif PROC_BIT_SIZE == 32
uint32_t buffer[GFX_SIZE / 4];
# define GFX_PIXEL_ADDR(x,y) (x / 32) + (y * (GFX_WIDTH / 32))
/* Other, unknown bit size.*/
#else
# error "processor bit size not supported"
#endif
(It's designed to support multiple architectures, from 8-bit MCUs to 32-bit MCUs.)
I've defined uint8_t etc. because the GCC I'm using doesn't seem to have a stdint.h header.
Here is how I have defined uint8_t etc.:
/*
* stdint.h support.
* If your compiler has stdint.h, uncomment HAS_STDINT.
*/
//#define HAS_STDINT
#ifndef HAS_STDINT
// D'oh, compiler doesn't support STDINT, so create our own,
// 'standard' integers.
# if PROC_BIT_SIZE == 8 || PROC_BIT_SIZE == 16
typedef char int8_t;
typedef int int16_t;
typedef long int32_t;
typedef long long int64_t;
typedef unsigned char uint8_t;
typedef unsigned int uint16_t;
typedef unsigned long uint32_t;
typedef unsigned long long uint64_t;
# elif PROC_BIT_SIZE == 32
typedef char int8_t;
typedef short int16_t;
typedef int int32_t; // usually int is 32 bits on 32 bit processors, but this may need checking
typedef long int64_t;
typedef unsigned char uint8_t;
typedef unsigned short uint16_t;
typedef unsigned int uint32_t;
typedef unsigned long uint64_t;
# endif
#else
# include <stdint.h>
#endif
The warning: no semicolon at end of struct or union implies that you've left a semicolon off at the end of a struct or union defined earlier in the same file, or in another header that was included earlier. This would result in malformed statements such as:
struct S { ... } uint8_t buffer[GFX_SIZE];
I have solved my problem.
My mistake was not defining PROC_BIT_SIZE before the typedef definitions of uint8_t and friends. Because the macro was undefined, neither #if branch was taken, the typedefs were skipped, and the errors followed.
Just a blunder of my own making.
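One way to make that kind of mistake fail loudly at compile time (a sketch along the lines of the header above) is to add an #error branch, since an undefined macro silently evaluates to 0 in an #if:
# if PROC_BIT_SIZE == 8 || PROC_BIT_SIZE == 16
/* 8/16-bit typedefs as above ... */
# elif PROC_BIT_SIZE == 32
/* 32-bit typedefs as above ... */
# else
#  error "PROC_BIT_SIZE is undefined or unsupported; define it before this point"
# endif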

Tips on redefining a register bitfield in C

I am struggling to come up with a clean way to redefine some register bit-fields to be usable on a chip I am working with.
For example, this is what one of the CAN configuration registers is defined as:
extern volatile near unsigned char BRGCON1;
extern volatile near struct {
    unsigned BRP0:1;
    unsigned BRP1:1;
    unsigned BRP2:1;
    unsigned BRP3:1;
    unsigned BRP4:1;
    unsigned BRP5:1;
    unsigned SJW0:1;
    unsigned SJW1:1;
} BRGCON1bits;
Neither of these definitions is all that helpful, as I need to assign BRP and SJW as whole fields, like the following:
struct
{
    unsigned BRP:6;
    unsigned SJW:2;
} GoodBRGbits;
Here are two attempts that I have made:
Attempt #1:
union
{
    byte Value;
    struct
    {
        unsigned Prescaler:6;
        unsigned SynchronizedJumpWidth:2;
    };
} BaudRateConfig1 = {NULL};

BaudRateConfig1.Prescaler = 5;
BRGCON1 = BaudRateConfig1.Value;
Attempt #2:
static volatile near struct
{
    unsigned Prescaler:6;
    unsigned SynchronizedJumpWidth:2;
} *BaudRateConfig1 = (volatile near void*)&BRGCON1;

BaudRateConfig1->Prescaler = 5;
Are there any "cleaner" ways to accomplish what I am trying to do? Also, I am slightly annoyed by the volatile near cast in Attempt #2. Is it necessary to specify that a variable is near?
Personally, I try to avoid using bit fields for portability reasons. Instead, I tend to use bit masks so that I can explicitly control which bits are used.
For example (assuming the bit order is correct) ...
#define BRP0 0x80
#define BRP1 0x40
#define BRP2 0x20
#define BRP3 0x10
#define BRP4 0x08
#define BRP5 0x04
#define SJW0 0x02
#define SJW1 0x01
Masks can then be combined as appropriate and values assigned, read, or tested. You can pick better names for the macros.
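For example, a read-modify-write of the 6-bit prescaler field might look like the following sketch (assuming, per the layout above, that the BRP bits occupy bits 7..2 of the register; check the datasheet's bit numbering before relying on this):
#define BRP_MASK (BRP0 | BRP1 | BRP2 | BRP3 | BRP4 | BRP5) /* 0xFC */
#define BRP_SHIFT 2 /* lowest BRP bit under this layout */

/* Clear the old field, then insert the new 6-bit value. */
BRGCON1 = (unsigned char)((BRGCON1 & ~BRP_MASK) | ((5u << BRP_SHIFT) & BRP_MASK));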
Hope this helps.
I suggest that you don't mix up the bit-field declaration with the addressing of the hardware register.
Your union/struct declares how the bit-fields are arranged; you then specify addressing and access restrictions when declaring a pointer to such a structure.
// foo.h
// Declare struct, declare pointer to hw reg
struct com_setup_t {
    unsigned BRP:6;
    unsigned SJW:2;
};
extern volatile near struct com_setup_t *BaudRateConfig1;

// foo.c
// Initialise pointer
volatile near struct com_setup_t *BaudRateConfig1 =
    (volatile near struct com_setup_t *)0xfff...;

// access hw reg
void foo(void) {
    ...
    BaudRateConfig1->BRP = 3;
    ...
}
Regarding near/far: I assume the default is near unless far is specified, unless you can set the default pointer size to far using compiler switches.
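A compile-time check can also catch layout surprises early. This is my own addition (C11 static_assert via <assert.h>), not part of the original answer: on a compiler that pads the bit-field struct beyond one byte, the overlay on BRGCON1 would be wrong, and this makes the build fail instead:
#include <assert.h>

struct com_setup_t {
    unsigned BRP:6;
    unsigned SJW:2;
};

/* The register is one byte, so the overlay struct must be too. */
static_assert(sizeof(struct com_setup_t) == 1, "com_setup_t must be exactly one byte");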
