I encountered a statement like "char var : 3". What does this C statement do? [duplicate]

This question already has answers here:
What does a colon in a struct declaration mean, such as :1, :7, :16, or :32?
(3 answers)
Closed 2 years ago.
While going through some C code, I encountered statements like
char var1 : num1;
char var2 : num2;
From the context, it seems like the number, i.e. num1, is the byte size, but I am unable to find any explanation.

This could be part of what is called a bit-field in the C programming language.
Bit-fields can only be declared inside a struct (or union), e.g.:
struct {
    unsigned int flag  : 1;  /* A one bit flag */
    unsigned int value : 5;  /* A 5 bit value */
} option;

if (option.flag == 1)
    option.value = 7;
Almost everything about bit-fields is implementation-defined. The intention is for the compiler to arrange bit-fields as compactly as possible; e.g., the above could well fit in one byte.
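For illustration, here is a minimal compilable sketch along the lines of the above (the struct tag options and the printed size are additions for this sketch, not part of the original answer):

#include <stdio.h>

struct options {
    unsigned int flag  : 1;  /* a one-bit flag   */
    unsigned int value : 5;  /* a five-bit value */
};

int main(void)
{
    struct options opt = { 1, 7 };
    /* How the 6 bits are packed is implementation-defined; on many
       compilers the struct occupies the size of one unsigned int,
       and some pack it tighter still. */
    printf("flag=%u value=%u sizeof=%zu\n",
           (unsigned)opt.flag, (unsigned)opt.value, sizeof(struct options));
    return 0;
}

On a typical compiler this prints flag=1 value=7 followed by a small size such as 1 or 4.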

Related

sizeof() in C conditional operator [duplicate]

This question already has answers here:
How to compare signed and unsigned (and avoiding issues)
(1 answer)
Why is this happening with the sizeof operator when comparing with a negative number? [duplicate]
(2 answers)
Closed 11 months ago.
#include <stdio.h>

int main() {
    int i;
    // printf("%d", sizeof(i));
    printf("%d", (sizeof(i) > (-1)));
    return 0;
}
Why does the code print 0 when sizeof(i) gives 4 on a 64-bit OS?
Why does (sizeof(i) > (-1)) give false (0)?
Use a better compiler and enable warnings. Under any sane compiler you should have gotten a warning about comparing an unsigned and a signed value.
This should be closer to what you want:
printf("%d", (int)sizeof(i) > -1);
Or at least this:
printf("%d", sizeof(i) >= 0);
However, your code is a no-op anyway, because it's impossible for a type to have a negative size.
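To make the hidden conversion visible: sizeof yields a size_t, which is unsigned, so in sizeof(i) > -1 the -1 is converted to size_t and becomes a huge value. A small sketch (the variable names are illustrative):

#include <stdio.h>

int main(void)
{
    int i;
    size_t huge = (size_t)-1;  /* -1 converted to size_t, i.e. SIZE_MAX */
    printf("sizeof(i)  = %zu\n", sizeof(i));
    printf("(size_t)-1 = %zu\n", huge);
    /* 4 > SIZE_MAX is false, hence the 0 */
    printf("%d\n", sizeof(i) > (size_t)-1);
    return 0;
}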

Explain the following output [duplicate]

This question already has answers here:
for (unsigned char i = 0; i<=0xff; i++) produces infinite loop
(4 answers)
Closed 5 years ago.
#include <stdio.h>

int main() {
    unsigned char var = -100;
    for (var = 0; var <= 255; var++) {
        printf("%d ", var);
    }
}
The output (run on the Code::Blocks IDE, version 16.01) is an endless repetition of the numbers 0 through 255.
Why is the output an infinite loop?
This condition var <= 255 is always true for an unsigned char, assuming CHAR_BIT is 8 on your platform. So the loop is infinite since the increment will cause the variable to wrap (that's what unsigned arithmetic does in C).
This initialization:
unsigned char var = -100;
is not an error, it's simply annoying code. The compiler will convert the int -100 to unsigned char, following the rules in the language specification.
You are using an unsigned char, and its possible range is 0 to 255.
You are running your loop from 0 to 255 (inclusive). The moment your variable would reach 256, it wraps back to 0, so the condition var <= 255 never becomes false. Likewise, the initial value -100 is treated as +156 because of this range.
So this leads to an infinite loop.
This is because of unsigned char wrap-around. Remove the = in the for loop condition:
for (var = 0; var < 255; var++) {
}
(Note that the loop then stops before printing 255.) For more information, see this Stack Overflow question.
unsigned char
Its range is 0 to 255. When var is 255 and is incremented, the result 256 cannot be stored in an unsigned char, so the value wraps around; that is why the loop never ends. And when you initialize var as -100, it does not produce an error, because the compiler converts -100 to binary and keeps the lowest eight bits; the corresponding value, 156, becomes the value of var.
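If the intent is to visit all 256 values exactly once, one common fix (a sketch, not taken from the answers above) is to use a loop counter wide enough to hold 256:

#include <stdio.h>

int main(void)
{
    /* An int counter can reach 256, so the condition eventually fails. */
    for (int i = 0; i <= 255; i++) {
        unsigned char var = (unsigned char)i;
        printf("%d ", var);
    }
    printf("\n");
    return 0;
}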

why is sizeof() returning 4 bytes rather than 2 bytes of short int? [duplicate]

This question already has answers here:
Why does sizeof(char + char) return 4?
(2 answers)
Why must a short be converted to an int before arithmetic operations in C and C++?
(4 answers)
sizeof operator returns 4 for (char + short ) [duplicate]
(3 answers)
What happens here? sizeof(short_int_variable + char_variable)
(5 answers)
How does sizeof work for different data types when added and calculated? [duplicate]
(1 answer)
Closed 5 years ago.
Here is a program to print the sizes of variables using sizeof():
#include <stdio.h>

int main()
{
    int a = 10, b = 20;
    short int c;
    short int d = sizeof(c = a + b);
    int e = sizeof(c * d);  // e holds the value of 4 rather than 2
    double f = sizeof(e * f);
    printf("d:%d\ne:%d\nf:%lf\n", d, e, f);
}
Why is sizeof() returning the size of int rather than short int, which is meant to be 2 bytes?
The expression
sizeof(c = a + b)
doesn't measure the size of the variable c but the size of the value of the expression c = a + b. An assignment expression has the type of its left operand, so its type here is short int and sizeof yields 2 on typical platforms; d is therefore 2, not 4.
sizeof(c * d) is the case that yields 4. Integer values whose type is smaller than int are promoted to int (or unsigned int) when they appear in an arithmetic expression, so the type of c * d is int. This is not affected by the fact that you store the result in a short int variable; sizeof reports the type of the expression itself.
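A short sketch confirming both cases (typical sizes assumed: short is 2 bytes, int is 4):

#include <stdio.h>

int main(void)
{
    short int c = 0, d = 2;
    /* Assignment has the type of its left operand: short int. */
    printf("sizeof(c = d) = %zu\n", sizeof(c = d));  /* 2 */
    /* Arithmetic promotes short operands to int. */
    printf("sizeof(c * d) = %zu\n", sizeof(c * d));  /* 4 */
    return 0;
}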

C Union and simultaneous assignment to members [duplicate]

This question already has answers here:
Floating point numbers do not work as expected
(6 answers)
Closed 8 years ago.
In the following code
#include <stdio.h>

int main()
{
    union myUnion
    {
        int intVar;
        char charVar;
        float floatVar;
    };

    union myUnion localVar;
    localVar.intVar = 10;
    localVar.charVar = 'A';
    localVar.floatVar = 20.2;
    printf("%d ", localVar.intVar);
    printf("%c ", localVar.charVar);
    printf("%f ", localVar.floatVar);
}
I understand that a union can hold only one value at a time. So when I assign the char value, the int is overwritten, and then when I assign the float value, the char is overwritten. So I was expecting some garbage values for the int and char members and 20.200000 for the float member, as it was the last value assigned. But the following is the output I get on VS Express as well as gcc:
1101109658 Ü 20.200001
I am unable to understand why the float value changed.
This has nothing to do with union, and the float value was not changed.
It simply doesn't have enough bits to represent 20.2 exactly as a binary float. But that's okay, nobody has that many bits.
You should read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
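A tiny sketch, independent of the union, showing the same rounding (the float nearest to 20.2 is slightly larger than 20.2):

#include <stdio.h>

int main(void)
{
    float f = 20.2f;
    printf("%f\n", f);     /* 20.200001 */
    printf("%.10f\n", f);  /* 20.2000007629: the value actually stored */
    return 0;
}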

What is the difference between macro constants and constant variables in C? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
“static const” vs “#define” in C
I started to learn C and couldn't clearly understand the differences between macros and constant variables.
What changes when I write
#define A 8
and
const int A = 8;
?
Macros are handled by the pre-processor: the pre-processor does text replacement in your source file, replacing all occurrences of 'A' with the literal 8.
Constants are handled by the compiler. They have the added benefit of type safety.
For the actual compiled code, with any modern compiler, there should be zero performance difference between the two.
Macro-defined constants are replaced by the preprocessor. Constant 'variables' are managed just like regular variables.
For example, the following code:
#define A 8
int b = A + 10;
Would appear to the actual compiler as
int b = 8 + 10;
However, this code:
const int A = 8;
int b = A + 10;
Would appear as:
const int A = 8;
int b = A + 10;
:)
In practice, the main thing that changes is scope: constant variables obey the same scoping rules as ordinary variables in C, meaning that they can be restricted to, or even redefined within, a specific block without leaking out. It's similar to the local vs. global variable situation.
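A short sketch of the scoping difference (names are illustrative):

#include <stdio.h>

#define MAX 8              /* visible from here to the end of the file */

int main(void)
{
    const int max = 42;    /* ordinary block scope */
    {
        const int max = 7; /* legal: a new constant shadowing the outer one */
        printf("%d\n", max);      /* prints 7 */
    }
    printf("%d %d\n", max, MAX);  /* prints 42 8 */
    return 0;
}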
In C, you can write
#define A 8
int arr[A];
but not, at file scope:
const int A = 8;
int arr[A];
because in C a const variable is not a constant expression. (Inside a function, a C99 compiler accepts the second form, but arr is then a variable-length array.) Note that in C++, both will work.
For one thing, the first causes the preprocessor to replace all occurrences of A with 8 before the compiler does anything, whereas the second doesn't involve the preprocessor at all.
