I need to hide a single field, not several, inside a structure:
struct MyType1
{
    unsigned char Value;
};

struct MyType2
{
    void* Value;        /* "unsigned void*" is not valid C */
};

struct MyType3
{
    signed int Value;   /* a member name is required */
};
What I want is for each struct type to have the same size, if possible, as the corresponding primitive type, but for the compiler to treat it as a new, distinct type.
In some parts of the code I want to cast the structure back to the plain value.
I also want to create arrays of this struct type that take no extra space:
MyType1 MyArray[255];
I already checked previous answers, but didn't find this.
Example:
typedef unsigned int mydatetime;   // rename
// Define the new type as
struct mydatetimetype
{
    unsigned int value;
};
Suppose I have these functions in the same program, but in different include files:
void SomeFunc ( unsigned int /* param */ anyparam );
void SomeFunc ( mydatetime /* param */ anyparam );
void SomeFunc ( mydatetimetype /* param */ anyparam );
My programming editor or IDE confuses the first two functions.
Later, in some parts of the code, I will use the packed type with integer operations, but this should be hidden from the other programmers who use the type.
Note that I also want to apply this technique to other types, like pointers or characters.
And "forwarding" or using an "opaque" structure is not necessary here.
How does a single-field structure get padded or packed?
Should I add an attribute to pack or pad this structure for better performance?
Is there already a name for this trick?
I hope that the code below helps you.
It shows how you can use a union to make several types share the same memory space.
The result of this code may be implementation-dependent, but it demonstrates that all the types specified in the integers union share the same memory space.
A variable declared as integers (k in the code) is always as large as the largest type in the declaration. So here the variable k can hold integer types from 8 bits to 64 bits while always occupying 64 bits.
Although I only used integer types, the members of a union may be of whatever type you want, including struct types and/or pointers.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

typedef union integers {
    int8_t  i8;
    int16_t i16;
    int32_t i32;
    int64_t i64;
} integers;

typedef struct sInt {
    integers a;
    integers b;
} sInt;

int main(void) {
    integers k;
    sInt s;

    k.i64 = 0x1011121314151617;
    printf("Int 08: %" PRIx8  "h\n", k.i8);
    printf("Int 16: %" PRIx16 "h\n", k.i16);
    printf("Int 32: %" PRIx32 "h\n", k.i32);
    printf("Int 64: %" PRIx64 "h\n", k.i64);

    s.a.i64 = 0x1011121314151617;
    s.b.i64 = 0x0102030405060708;
    printf("Int a.08: %" PRIx8  "h\n", s.a.i8);
    printf("Int a.16: %" PRIx16 "h\n", s.a.i16);
    printf("Int a.32: %" PRIx32 "h\n", s.a.i32);
    printf("Int a.64: %" PRIx64 "h\n", s.a.i64);
    printf("Int b.08: %" PRIx8  "h\n", s.b.i8);
    printf("Int b.16: %" PRIx16 "h\n", s.b.i16);
    printf("Int b.32: %" PRIx32 "h\n", s.b.i32);
    printf("Int b.64: %" PRIx64 "h\n", s.b.i64);
    return 0;
}
Note: if your problem is padding inside the structure, this code is not the whole answer. To manage padding you have to use #pragma pack() (GCC and other compilers support #pragma).
Structs can be padded to align to address boundaries, so your first and third structs will most likely not have the same size as the primitive types.
A single-field struct will most likely be padded after the field, but the C standard does not specify how the compiler must carry this out.
You should add a packing attribute if you want to cast your structure to the primitive type, to be sure you are reading the value it stores and not garbage in the padding. It is often possible to cast the structure and get the correct result even without attributes, but I do not recommend relying on that, as it is very implementation-dependent. Note that you may pay a small performance penalty every time the processor loads a misaligned structure from memory.
Also be careful, because packing structs can be dangerous.
I wrote a small piece of code to understand how the offsetof macro works in the background. Here is the code:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    /* Getting the offset of a variable inside a struct */
    typedef struct {
        int a;
        char b[23];
        float c;
    } MyStructType;

    unsigned offset = (unsigned)(&((MyStructType *)NULL)->c);
    printf("offset = %u\n", offset);
    return 0;
}
However, if I run it I get a warning message:
WARNING: cast from pointer to integer of different size [-Wpointer-to-int-cast]
However, if I look at the original offsetof macro in C, the code looks like this:
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>
int main(void)
{
    /* Getting the offset of a variable inside a struct */
    typedef struct {
        int a;
        char b[23];
        float c;
    } MyStructType;

    unsigned offset = offsetof(MyStructType, c);
    printf("offset = %u\n", offset);
    return 0;
}
So why do I get the warning when I cast to unsigned? It appeared to be the right type for the result of the offsetof macro. This is puzzling me.
As mch commented, unsigned is not the right type; it's 32-bit on pretty much all real-world systems. The offsetof macro is supposed to produce a result of type size_t, which is what you "should" be casting to here. I think you're confused by the code you found storing the result into an object of type unsigned; that's okay as long as they're sure the value is small, but it doesn't mean the type of the expression offsetof(MyStructType, c); was unsigned. C allows you to silently assign a larger integer type into a smaller one.
However, no matter what you do, this is not a valid implementation of offsetof and has undefined behavior (via applying -> to an invalid pointer). The way you get a working offsetof without UB is #include <stddef.h>.
While reviewing an old piece of code, I stumbled upon a coding horror like this one:
struct Foo
{
    unsigned int bar;
    unsigned char qux;
    unsigned char xyz;
    unsigned int etc;
};

void horror(const char* s1, const char* s2, const char* s3, const char* s4, Foo* foo)
{
    sscanf(s1, "%u", &(foo->bar));
    sscanf(s2, "%u", (unsigned int*) &(foo->qux));
    sscanf(s3, "%u", (unsigned int*) &(foo->xyz));
    sscanf(s4, "%u", &(foo->etc));
}
So, what is actually happening in the second and third sscanf, where the argument passed is an unsigned char* cast to unsigned int*, but the format specifier is for an unsigned integer? Whatever happens is due to undefined behavior, but why is this even "working"?
As far as I know, the cast effectively does nothing in this case (the actual type of the arguments passed as ... is unknown to the called function). However, this has been in production for years, it has never crashed, and the surrounding values apparently are not overwritten; I suppose that is because the members of the structure are all aligned to 32 bits. It even reads the correct value on the target machine (a little-endian 32-bit ARM), but I think it would no longer work with a different endianness.
Bonus question: what is the cleanest correct way to do this? I know that we now have the %hhu format specifier (introduced in C99), but what about a legacy C89 compiler?
Please note that the original version of this question used uint32_t and uint8_t instead of unsigned int and unsigned char, but that was just misleading and off-topic; the original code I was reviewing uses its own typedefs anyway.
Bonus question: what is the cleanest correct way to do this? I know that we now have the %hhu format specifier (introduced in C99), but what about a legacy C89 compiler?
The <stdint.h> header and its types were introduced in C99, so a C89 compiler won't support them except as an extension.
The correct way to use the *scanf() and *printf() families of functions with the various fixed or minimum-width types is to use the macros from <inttypes.h>. For example:
#include <inttypes.h>
#include <stdlib.h>
#include <stdio.h>
int main(void) {
    int8_t foo;
    uint_least16_t bar;

    puts("Enter two numbers");
    if (scanf("%" SCNd8 " %" SCNuLEAST16, &foo, &bar) != 2) {
        fputs("Input failed!\n", stderr);
        return EXIT_FAILURE;
    }
    printf("You entered %" PRId8 " and %" PRIuLEAST16 "\n", foo, bar);
}
From the pointer's point of view, nothing much happens in this case, since on all modern machines pointers have the same representation for all object types.
But because the wrong formats are used, sscanf will write outside the memory allocated to the variables, and that is undefined behaviour.
First of all, this of course invokes undefined behaviour.
But that kind of horror was quite common in old code, where C was used as a higher-level assembly language. So here are 2 possible behaviours:
The structure has 32-bit alignment. All is (rather) fine on a little-endian machine: the single-byte members will receive the least significant byte of the 32-bit value, and the padding bytes will be zeroed (assuming the program does not try to store a value greater than 255 into them).
The structure does not have 32-bit alignment, but the architecture allows scanf to write into misaligned variables. The least significant byte of the value read for qux will correctly go into qux, and the next 3 zero bytes will erase xyz and etc. On the next line, xyz receives its value and etc receives one more 0 byte. Finally, etc receives its value. This was a rather common hack in the early '80s on 8086-class machines.
For a portable way, I would use a temporary unsigned integer:

unsigned u;
sscanf(s1, "%u", &(foo->bar));
sscanf(s2, "%u", &u);
foo->qux = (unsigned char) u;
sscanf(s3, "%u", &u);
foo->xyz = (unsigned char) u;
sscanf(s4, "%u", &(foo->etc));

and trust the compiler to generate code as efficient as the horror version.
OP's code is UB, as the scan specifiers do not match the arguments.
cleanest correct way to do this?
Cleaner
#include <inttypes.h>

/* assumes the uint32_t/uint8_t members from the original question */
void horror1(const char* s1, const char* s2, const char* s3, const char* s4, Foo* foo) {
    sscanf(s1, "%" SCNu32, &(foo->bar));
    sscanf(s2, "%" SCNu8, &(foo->qux));
    sscanf(s3, "%" SCNu8, &(foo->xyz));
    sscanf(s4, "%" SCNu32, &(foo->etc));
}
Cleanest
Add additional error handling if desired.
void horror2(const char* s1, const char* s2, const char* s3, const char* s4, Foo* foo) {
    foo->bar = (uint32_t) strtoul(s1, 0, 10);
    foo->qux = (uint8_t) strtoul(s2, 0, 10);
    foo->xyz = (uint8_t) strtoul(s3, 0, 10);
    foo->etc = (uint32_t) strtoul(s4, 0, 10);
}
I'm trying to make a function that will accept a float variable and convert it into a byte array. I found a snippet of code that works, but would like to reuse it in a function if possible.
I'm also working with the Arduino environment, but I understand that it accepts most C language.
Currently works:
float_variable = 1.11;
byte bytes_array[4];
*((float *)bytes_array) = float_variable;
What can I change here to make this function work?
float float_test = 1.11;
byte bytes[4];

// Calling the function
float2Bytes(&bytes, float_test);

// Function
void float2Bytes(byte* bytes_temp[4], float float_variable) {
    *(float*)bytes_temp = float_variable;
}
I'm not so familiar with pointers and such, but I read that (float *) is a cast or something?
Any help would be greatly appreciated!
Cheers
EDIT: SOLVED
Here's my final function that works in Arduino for anyone who finds this. There are more efficient solutions in the answers below, however I think this is okay to understand.
Function: converts input float variable to byte array
void float2Bytes(float val, byte* bytes_array) {
    // Create a union of shared memory space
    union {
        float float_variable;
        byte temp_array[4];
    } u;
    // Overwrite the bytes of the union with the float variable
    u.float_variable = val;
    // Copy the bytes to the output array
    memcpy(bytes_array, u.temp_array, 4);
}
Calling the function
float float_example = 1.11;
byte bytes[4];
float2Bytes(float_example,&bytes[0]);
Thanks for everyone's help, I've learnt so much about pointers and referencing in the past 20 minutes, Cheers Stack Overflow!
Easiest is to make a union:
#include <stdio.h>
int main(void) {
    int ii;
    union {
        float a;
        unsigned char bytes[4];
    } thing;

    thing.a = 1.234;
    for (ii = 0; ii < 4; ii++)
        printf("byte %d is %02x\n", ii, thing.bytes[ii]);
    return 0;
}
Output:
byte 0 is b6
byte 1 is f3
byte 2 is 9d
byte 3 is 3f
Note - there is no guarantee about the byte order… it depends on your machine architecture.
To get your function to work, do this:
void float2Bytes(byte bytes_temp[4], float float_variable) {
    union {
        float a;
        unsigned char bytes[4];
    } thing;

    thing.a = float_variable;
    memcpy(bytes_temp, thing.bytes, 4);
}

Or to really hack it:

void float2Bytes(byte bytes_temp[4], float float_variable) {
    memcpy(bytes_temp, (unsigned char*)&float_variable, 4);
}
Note: in either case I make sure to copy the data to the location given as the input parameter. This is crucial, as local variables will not exist after the function returns. (You could declare them static, but let's not teach you bad habits; what if the function gets called again?)
Here's a way to do what you want that won't break if you're on a system with a different endianness from the one you're on now:
byte* floatToByteArray(float f) {
    byte* ret = malloc(4 * sizeof(byte));
    unsigned int asInt = *((int*)&f);

    int i;
    for (i = 0; i < 4; i++) {
        ret[i] = (asInt >> 8 * i) & 0xFF;
    }
    return ret;
}
You can see it in action here: http://ideone.com/umY1bB
The issue with the above answers is that they rely on the underlying representation of floats: C makes no guarantee that the most significant byte comes "first" in memory. The standard allows the underlying system to implement floats however it likes, so if you test your code on a system with one endianness (byte order for numeric types in memory), it may stop working on a processor with another.
That's a really nasty, hard-to-fix bug, and you should avoid it if at all possible.
I would recommend trying a "union".
Look at this post:
http://forum.arduino.cc/index.php?topic=158911.0
typedef union {
    sensorData_t sensor;
    byte I2CPacket[sizeof(sensorData_t)];
} I2C_Packet_t;
In your case, something like:
union {
    float float_variable;
    char bytes_array[4];
} my_union;

my_union.float_variable = 1.11;
Yet another way, without unions:
(Assuming byte = unsigned char)
void floatToByte(byte* bytes, float f) {
    int length = sizeof(float);
    for (int i = 0; i < length; i++) {
        bytes[i] = ((byte*)&f)[i];
    }
}
This seems to work as well:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

float fval = 1.11;
size_t siz = sizeof(float);
uint8_t ures[siz];
memcpy(&ures, &fval, siz);

and back again:

float utof;
memcpy(&utof, &ures, siz);

The same works for double:

double dval = 1.11;
siz = sizeof(double);
uint8_t ures[siz];
memcpy(&ures, &dval, siz);

and back:

double utod;
memcpy(&utod, &ures, siz);
Although the other answers show how to accomplish this using a union, you can use it to implement the function you want like this (assuming my_union above is made a typedef):

byte* float2Bytes(float val)
{
    my_union *u = malloc(sizeof(my_union));  /* caller must free() */
    u->float_variable = val;
    return u->bytes_array;
}
or
void float2Bytes(byte* bytes_array, float val)
{
    my_union u;
    u.float_variable = val;
    memcpy(bytes_array, u.bytes_array, 4);
}
Conversion without a memory reference:

#define FLOAT_U32(x) ((const union {float f; uint32_t u;}) {.f = (x)}.u) // float->u32
#define U32_FLOAT(x) ((const union {float f; uint32_t u;}) {.u = (x)}.f) // u32->float

Usage example:

float_t sensorVal = U32_FLOAT(eeprom_read_dword(&sensor));
First of all, some embedded systems 101:
Anyone telling you to use malloc/new on Arduino has no clue what they are talking about. I wrote a fairly detailed explanation of why here: Why should I not use dynamic memory allocation in embedded systems?
You should avoid float on 8-bit microcontrollers, since it leads to incredibly inefficient code. They have no FPU, so the compiler is forced to pull in a resource-heavy software floating-point library to make your code work. This is general advice.
Regarding pointer conversions:
C allows all manner of wild and crazy pointer casts. However, there are several situations where casting a character byte array's address to float* and then dereferencing it can lead to undefined behavior:
If the address of the byte array is not aligned, the access leads to undefined behavior on systems that require aligned access. (AVR doesn't care about alignment, though.)
If the byte array does not contain a valid binary representation of a float number, it could be a trap representation. Similarly, you must keep endianness in mind. AVR is an 8-bitter, but it is regarded as little-endian since it uses little-endian format for 16-bit addresses.
It goes against the C language "effective type" system, also known as a strict pointer aliasing violation. What is the strict aliasing rule?
Going the other way around is fine, though: taking the address of a float variable, converting it to a character pointer, and then dereferencing that character pointer to access the individual bytes. Multiple special rules in C allow this for serialization purposes and hardware-related programming.
Viable solutions:
memcpy always works fine, and then you don't have to care about alignment or strict aliasing. You still have to care about creating a valid floating-point representation, though.
union "type punning", as demonstrated in other answers. Note that such type punning assumes a certain endianness.
Shifting individual bytes and concatenating them with | (or masking with & as needed). The advantage is that this is endianness-independent in some scenarios.
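Combining the memcpy and bit-shifting approaches gives a serialization that does not depend on host byte order. A sketch, assuming a 4-byte float and 8-bit bytes:

```c
#include <stdint.h>
#include <string.h>

/* Serialize a float to 4 bytes in little-endian order, regardless of
   the host's endianness. */
void float_to_le_bytes(float f, unsigned char out[4]) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);            /* no aliasing violation */
    out[0] = (unsigned char)(u         & 0xFF);
    out[1] = (unsigned char)((u >> 8)  & 0xFF);
    out[2] = (unsigned char)((u >> 16) & 0xFF);
    out[3] = (unsigned char)((u >> 24) & 0xFF);
}

/* Reassemble the float from the 4 little-endian bytes. */
float le_bytes_to_float(const unsigned char in[4]) {
    uint32_t u = (uint32_t)in[0]
               | ((uint32_t)in[1] << 8)
               | ((uint32_t)in[2] << 16)
               | ((uint32_t)in[3] << 24);
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}
```

The round trip is bit-exact, and the byte array has the same layout on big- and little-endian hosts.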
float f = 3.14;
char *c = (char *)&f;
float g = 0;
char *d = (char *)&g;

for (int i = 0; i < 4; i++) d[i] = c[i];
/* Now g == 3.14f */

Cast the address of your float to char *, and assign it to a char pointer.
Now c[0] through c[3] contain the bytes of your float.
http://justinparrtech.com/JustinParr-Tech/c-access-other-data-types-as-byte-array/
I would like to define a large bitfield for the purpose of quickly monitoring the status of a very large structure of elements. Here is what I have so far:
#define TOTAL_ELEMENTS 1021
typedef struct UINT1024_tag
{
    UINT8 byte[128];
} UINT1024;

typedef struct flags_tag
{
    UINT1024:TOTAL_ELEMENTS;
} flags_t;
When I try compiling this, I get the error message, "error: bit-field `<anonymous>' has invalid type"
Can bit-fields only be used on certain types? I thought that if I defined a large-enough variable, the massive bitfield needed for my application could then be defined because the bitfield has to be no larger than the type used to define it.
Any thoughts or suggestions would be appreciated.
Bit-fields must fit within a single int; you can't use arbitrary sizes. Honestly, the ANSI bit-field specification is kind of broken: it also omits a lot of things that real-world applications usually need, like control over padding and layout. I'd consider writing some macros or accessor functions to handle the larger sizes, and giving up on the bit-field syntax.
In standard C language bit-fields can only be defined with a restricted set of types. In C89/90 these types are limited to int, signed int and unsigned int (a little-known detail is that in this context int is not guaranteed to be equivalent to signed int). In C99 type _Bool was added to the supported set. Any other types cannot be used in bit-field declaration.
In practice, as a popular extension, compilers normally allow any integral type (and also enum types) in a bit-field declaration. But a struct type... no, I'm not aware of any compiler that would allow that (and it doesn't seem to make much sense anyway).
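For illustration, a sketch of which bit-field declarations the standard does and does not accept; the commented-out line is the construct the question attempted:

```c
struct flags {
    unsigned int ready : 1;     /* OK: unsigned int is always allowed */
    unsigned int mode  : 3;     /* OK: widths up to the width of int */
    _Bool        armed : 1;     /* OK since C99 */
    /* UINT1024 big : 1021; */  /* error: struct types are not allowed */
};
```

The whole struct still fits in one or two storage units, which is why bit-fields cannot span 1021 bits.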
use
UINT128 blaha;
You're not defining a bitfield there.
I'm not sure you understand what a bitfield is: a bitfield is a number of bits, not an array of structs or similar. What exactly are you expecting your code to do?
Edit: oh, I see now. No, you can't use your own types, just ints.
Try this (untested code):
struct bit1024 {
    unsigned char byte[128];
};

struct bit1024 foo;

void
set(struct bit1024 *lala, int n, int v)
{
    lala->byte[n/8] |= 1 << (n % 8);
    if (!v) {
        lala->byte[n/8] ^= 1 << (n % 8);
    }
}

int
get(struct bit1024 *lala, int n)
{
    return 1 & (lala->byte[n/8] >> (n % 8));
}
As others have said, the C standard doesn't allow bit-fields to exceed the size of their underlying integer type.
I'd suggest using plain arrays with some macro magic:
#include <limits.h>
#include <stdio.h>
#include <string.h>
// SIZE should be a constant expression
// this avoids VLAs and problems resulting from being evaluated twice
#define BITFIELD(SIZE, NAME) \
unsigned char NAME[(SIZE) / CHAR_BIT + ((SIZE) % CHAR_BIT != 0)]
static inline void setbit(unsigned char field[], size_t idx)
{ field[idx / CHAR_BIT] |= 1u << (idx % CHAR_BIT); }
static inline void unsetbit(unsigned char field[], size_t idx)
{ field[idx / CHAR_BIT] &= ~(1u << (idx % CHAR_BIT)); }
static inline void togglebit(unsigned char field[], size_t idx)
{ field[idx / CHAR_BIT] ^= 1u << (idx % CHAR_BIT); }
static inline _Bool isbitset(unsigned char field[], size_t idx)
{ return field[idx / CHAR_BIT] & (1u << (idx % CHAR_BIT)); }
int main(void)
{
    BITFIELD(1025, foo);
    printf("sizeof foo = %zu\n", sizeof foo);
    memset(foo, 0, sizeof foo);

    printf("%i", isbitset(foo, 1011));
    setbit(foo, 1011);
    printf("%i", isbitset(foo, 1011));
    unsetbit(foo, 1011);
    printf("%i", isbitset(foo, 1011));
}
Hopefully, I didn't mess up the bit ops...
I have a data type, say X, and I want to know its size without declaring a variable or pointer of that type and of course without using sizeof operator.
Is this possible? I thought of using standard header files which contain size and range of data types but that doesn't work with user defined data type.
To my mind, this fits into the category of "how do I add two ints without using ++, += or +?". It's a waste of time, but you can try to avoid the monsters of undefined behaviour by doing something like this:
size_t size = (size_t)(1 + ((X*)0));
Note that I don't declare a variable of type or pointer to X.
Look, sizeof is the language facility for this. The only one, so it is the only portable way to achieve this.
For some special cases you could generate un-portable code that used some other heuristic to understand the size of particular objects[*] (probably by making them keep track of their own size), but you'd have to do all the bookkeeping yourself.
[*] Objects in a very general sense rather than the OOP sense.
Well, I am an amateur, but I tried this problem and got the right answer without using sizeof. Hope this helps. Here I find the size of an integer:

int *a, *s, v = 10;
a = &v;
s = a;
a++;
int intsize = (int)a - (int)s;
printf("%d", intsize);
The correct answer to this interview question is "Why would I want to do that, when sizeof() does that for me, and is the only portable method of doing so?"
The possibility of padding prevents all hope without knowledge of the rules used for introducing it, and those are implementation-dependent.
You could puzzle it out by reading the ABI for your particular processor, which explains how structures are laid out in memory. It's potentially different for each processor. But unless you're writing a compiler it's surprising you don't want to just use sizeof, which is the One Right Way to solve this problem.
If X is a data type:
#define SIZEOF(X) (unsigned int)( (X *)0 + 1 )
If X is a variable:
#define SIZEOF(X) (unsigned int)( (char *)(&X + 1) - (char *)(&X) )
Try this:

int a;
printf("%u\n", (unsigned)((char *)(&a + 1) - (char *)&a));
Look into the compiler sources. You will get:
the size of standard data types,
the rules for padding of structs,
and, from this, the expected size of anything.
If you could at least allocate space for the variable and fill it with some sentinel value, you could change it bit by bit and see if the value changes, but this still would not tell you anything about padding.
Try this:

#include <stdio.h>

int main() {
    int *ptr = 0;
    ptr++;
    printf("Size of int: %d", (int)(size_t)ptr);
    return 0;
}
Available since C89, a solution that in user code:
does not declare a variable of type X,
does not declare a pointer to type X,
and does not use the sizeof operator.
It is easy enough to do using standard code, as hinted by @steve jessop:
offsetof(type, member-designator)
which expands to an integer constant expression that has type size_t, the value of which is the offset in bytes, to the structure member ..., from the beginning of its structure ... C11 §7.19 3
#include <stddef.h>
#include <stdio.h>

typedef struct {
    X member;            /* X is the type whose size we want */
    unsigned char uc;
} sud03r_type;

int main() {
    printf("Size X: %zu\n", offsetof(sud03r_type, uc));
    return 0;
}
Note: This code uses "%zu" which requires C99 onward.
This is the code:
The trick is to make a pointer, save its address, increment the pointer, and then subtract the new address from the previous one.
The key point is that when a pointer is incremented, it actually moves by the size of the object it points to, so here by the size of the class.

#include <cstdio>

class abc
{
    int a[5];
    float c;
};

int main()
{
    abc* obj1 = 0;
    long s1 = (long)obj1;
    obj1++;
    long s2 = (long)obj1;
    printf("%ld", s2 - s1);
}
Regards
A lot of these answers assume you know what your structure will look like. I believe this interview question is intended to make you think outside the box. I was looking for the answer but didn't find any solutions I liked here. I will make a better assumption, saying:

struct foo {
    int a;
    banana b;
    char c;
    ...
};

By creating foo[2], I will now have 2 consecutive foo objects in memory. So...

foo* buffer = new foo[2];
char* a = (char*) &buffer[0];
char* b = (char*) &buffer[1];
return b - a;

Assuming I did my pointer arithmetic correctly, this should be the ticket, and it's portable! Unfortunately things like padding, compiler settings, etc. would all play a part too.
Thoughts?
Put this in your code, then check the linker output (the map file):

unsigned int uint_nabil;
unsigned long ulong_nabil;

You will get something like this:

uint_nabil 700089a8 00000004
ulong_nabil 700089ac 00000004

4 is the size!
One simple way of doing this would be using arrays.
Now, we know for a fact that array elements of the same data type are stored in a contiguous block of memory. So, exploiting this fact, I came up with the following:

#include <iostream>
using namespace std;

int main()
{
    int arr[2];
    int* ptr = &arr[0];
    int* ptr1 = &arr[1];
    cout << (size_t)ptr1 - (size_t)ptr;
}
Hope this helps.
Try this,
#define sizeof_type( type ) ((size_t)((type*)1000 + 1 )-(size_t)((type*)1000))
For the following user-defined datatype,
struct x
{
char c;
int i;
};
sizeof_type(x) = 8
(size_t)((x*)1000 + 1 ) = 1008
(size_t)((x*)1000) = 1000
This takes into account that a C++ byte is not always 8 binary bits, and that only unsigned types have well defined overflow behaviour.
#include <iostream>

int main() {
    unsigned int i = 1;
    unsigned int int_bits = 0;
    while (i != 0) {
        i <<= 1;
        ++int_bits;
    }

    unsigned char uc = 1;
    unsigned int char_bits = 0;
    while (uc != 0) {
        uc <<= 1;
        ++char_bits;
    }

    std::cout << "Type int has " << int_bits << " bits.\n";
    std::cout << "This would be " << int_bits/8 << " IT bytes and "
              << int_bits/char_bits << " C++ bytes on your platform.\n";
    std::cout << "Anyways, not all bits might be usable by you. Hah.\n";
}
Surely, you could also just #include <climits> (or <limits> in C++).
#include <stdio.h>

int main()
{
    int n;
    float x, *a, *b;   /* line 1 */

    a = &x;
    b = a + 1;
    printf("size of x is %d",
           n = (int)((char*)b - (char*)a));
}

With this code the size of any data type can be calculated without the sizeof operator. Just change the float in line 1 to the type whose size you want to calculate.
#include <stdio.h>

struct node {
    int a;
    char c;
};

int main() {
    struct node *temp = 0;
    printf("%d", (int)((char*)(temp + 1) - (char*)temp));
}
#include <stdio.h>

struct node
{
    int a;
    char c;
};

int main()
{
    struct node *ptr = (struct node*)0;
    ++ptr;
    printf("%d", (int)(size_t)ptr);
}
#include <stdio.h>

int main()
{
    // take any data type here
    char *a = 0; // prints 1
    int  *b = 0; // prints 4 (typically)
    long *c = 0; // prints 8 (typically)

    a++;
    b++;
    c++;
    printf("%d", (int)(size_t)a);
    printf("%d", (int)(size_t)b);
    printf("%d", (int)(size_t)c);
    return 0;
}