What is the more recommended or advisable way to convert an array of uint8_t at offset i to uint64_t, and why?
uint8_t * bytes = ...
uint64_t const v = ((uint64_t *)(bytes + i))[0];
or
uint64_t const v = ((uint64_t)(bytes[i+7]) << 56)
| ((uint64_t)(bytes[i+6]) << 48)
| ((uint64_t)(bytes[i+5]) << 40)
| ((uint64_t)(bytes[i+4]) << 32)
| ((uint64_t)(bytes[i+3]) << 24)
| ((uint64_t)(bytes[i+2]) << 16)
| ((uint64_t)(bytes[i+1]) << 8)
| ((uint64_t)(bytes[i]));
There are two primary differences.
One, the behavior of ((uint64_t *)(bytes + i))[0] is not defined by the C standard (unless certain prerequisites about what bytes point to are met). Generally, an array of bytes should not be accessed using a uint64_t type.
When memory defined as one type is accessed with another type, it is called aliasing, and the C standard only defines certain combinations of aliasing. Some compilers may support some aliasing beyond what the standard requires, but using it is not portable. Additionally, if bytes + i is not suitably aligned for a uint64_t, the access may cause an exception or otherwise malfunction.
Two, loading the bytes through aliasing, if it is defined (by the standard or by a compiler extension), interprets the bytes using the byte ordering of the C implementation. Some C implementations store the bytes representing integers in memory from low address to high address for low-position-value bytes to high-position-value bytes, and some store them from high address to low address. (They can be stored in non-consecutive orders too, although this is rare.) So loading the bytes this way will produce different values from the same bytes in memory depending on which order the C implementation uses.
But loading the bytes and using shifts to combine them will always produce the same value from the same bytes in memory regardless of what order the C implementation uses.
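A small illustration (my example, not from the question): suppose these eight bytes sit at bytes + i:

uint8_t bytes[8] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88 };

/* Shift-based combination (little-endian interpretation):
   always 0x8877665544332211, on every implementation.     */
/* Native-order load (e.g. via memcpy into a uint64_t):
   0x8877665544332211 on a little-endian machine, but
   0x1122334455667788 on a big-endian machine.             */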
The first method should be avoided, because there is no need for it. If one desires to interpret the bytes using the C implementation’s ordering, this can be done with:
uint64_t t;
memcpy(&t, bytes+i, sizeof t);
const uint64_t v = t;
Using memcpy provides a portable way of aliasing the uint64_t to store bytes into it. Good compilers recognize this idiom and will optimize the memcpy to a load from memory, if suitable for the target architecture (and if optimization is enabled).
If one desires to interpret the bytes using little-endian ordering, as shown in the code in the question, then the second method may be used. (Sometimes platforms will have routines that may provide more efficient code for this.)
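If the little-endian interpretation is needed in several places, the shifts can be wrapped in a small helper. A sketch (the name load_le64 is mine, not a standard routine):

#include <stdint.h>

/* Read 8 bytes starting at p as a little-endian uint64_t. Works
   regardless of host byte order and has no alignment requirement. */
static uint64_t load_le64(const uint8_t *p)
{
    uint64_t v = 0;
    for (int k = 7; k >= 0; k--)
        v = (v << 8) | p[k];
    return v;
}

The load in the question then becomes uint64_t const v = load_le64(bytes + i);.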
You can also use memcpy:
uint64_t n;
memcpy(&n, bytes + i, sizeof(uint64_t));
const uint64_t v = n;
The first option has two big problems that qualify as undefined behavior (anything can happen):
A uint8_t* or array of uint8_t is not necessarily aligned the same way as required by a larger type like uint64_t. Simply casting to uint64_t* leads to misaligned access. This can cause hardware exceptions, program crashes, slower code etc., all depending on the alignment requirements of the specific target.
It violates the internal type system of C, where each object in memory known by the compiler has an "effective type" that the compiler keeps track of. Based on this, the compiler is allowed to make certain assumptions during optimization regarding whether a certain memory region has been accessed or not. If your code violates these type rules, as it would in this case, incorrect machine code could get generated.
This is most commonly referred to as the strict aliasing rule and your cast followed by dereferencing would be a so-called "strict aliasing violation".
The second option is sound code, because:
When doing shifts or other forms of bitwise arithmetic, a large integer type should be used. That is, unsigned int or larger - depending on system. Using signed types or small integer types can lead to undefined behavior or unexpected results. See Implicit type promotion rules regarding problems with small integer types implicitly changing signedness in some expressions.
If not for the cast to uint64_t, then the bytes[i+7] << 56 shift would involve an implicit promotion of the left operand from uint8_t to int, which would be a bug. Because if the most significant bit (MSB) of the byte is set and we shift into/beyond the sign bit, we invoke undefined behavior - again, anything can happen.
And naturally we need to use a 64 bit type in this specific case or otherwise we wouldn't be able to shift as far as 56 bits. Shifting beyond the range of the type of the left operand is also undefined behavior.
Note that whether to pick the order of bytes[i+7] << 56 versus the alternative bytes[i+0] << 56 depends on the byte order of the source data, not on the CPU. Bit shifts are nice since they operate on values, so the actual shift works the same whether the destination type is stored big or little endian. But you must know in advance which byte in the source array you want to correspond to the most significant. The code you have here will work if the array was built using little-endian formatting, since the last byte of the array is shifted into the most significant position.
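For illustration (my sketch, not from the question): if the array had instead been built big-endian, bytes[i] would be the most significant byte and the shifts would run the other way:

uint64_t const v = ((uint64_t)(bytes[i])   << 56)
                 | ((uint64_t)(bytes[i+1]) << 48)
                 | ((uint64_t)(bytes[i+2]) << 40)
                 | ((uint64_t)(bytes[i+3]) << 32)
                 | ((uint64_t)(bytes[i+4]) << 24)
                 | ((uint64_t)(bytes[i+5]) << 16)
                 | ((uint64_t)(bytes[i+6]) << 8)
                 | ((uint64_t)(bytes[i+7]));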
As for uint64_t const v =, the const qualifier is a bit unusual at local scope like that. It's harmless, but it doesn't really add anything of value there. I would just drop it.
Related
When writing to a GPIO output register (BCM2711 chip on the Raspberry Pi 4), the location of my clear_register function call changes the result completely. The code is below.
void clear_register(long reg) {
*(unsigned int*)reg = (unsigned int)0;
}
int set_pin(unsigned int pin_number) {
long reg = SET_BASE; //0xFE200001c
clear_register(SET_BASE);
unsigned int curval = *(unsigned int*)reg;
unsigned int value = (1 << pin_number);
value |= curval;
gpio_write(reg, value);
return 1;
}
When written like this, the register gets cleared. However, if I call clear_register(SET_BASE) before long reg = SET_BASE, the registers don't get cleared. I can't tell why this would make a difference; any help is appreciated.
I can't reproduce it since I don't have the target hardware that allows writing to that address, but you have several bugs and dangerous style issues:
0xFE200001c has 9 hex digits (36 bits, so the constant ends up as a 64-bit signed type); it does not necessarily fit inside a long (possibly 32 bits) and certainly does not fit inside an unsigned int (16 or 32 bits).
Even if that was a typo and you meant to write 0xFE20001c (32 bits), your program still suffers from "sloppy typing", which is when the programmer just types out long, int etc. without giving it any deeper thought. The outcome is strange, subtle and intermittent bugs. I think everyone can relate to the situation "just cast it to unsigned int and it works", but you have no idea why. Bugs like that are always caused by "sloppy typing".
You should be using the stdint.h types instead: uint32_t and so on.
Also related to "sloppy typing", there exist very few cases where you actually want to use signed numbers in embedded systems - certainly not when dealing with addresses and bits. Negative addresses in Linux are a thing - kernel space - but accidentally changing an address to a negative value is a bug.
Signed types only cause problems when doing bitwise arithmetic, and they come with overflow hazards. So you need to nuke every signed variable in your code unless it explicitly needs to be signed.
This means no sloppy long but the correct type, which is either uint32_t or uintptr_t if it should hold an address, as is the case in your example. It also means no sloppy hex constants: they must always have a u suffix, 0xFE20001Cu, or they easily boil down to the wrong type.
For these reasons 1 << is always a bug in a C program, because 1 is of type signed int and you may end up shifting into the sign bit of that signed int, which is undefined behavior. There is no reason not to write 1u <<, always.
Whenever you are dealing with hardware registers, you must always use volatile qualified pointers or you might end up with very strange bugs caused by the optimizer. No exceptions here either.
(unsigned int)0 is a pointless cast. If you wish to be explicit you could write 0u, but that doesn't change anything in case of assignment, since an "lvalue conversion" to the type of the left operand always happens upon assignment. 0u does make some static analysers happy though, MISRA C checkers etc.
With all bug fixes, your clear function should for example look something like this:
void clear_register (uintptr_t reg) {
*(volatile uint32_t*)reg = 0;
}
Or better yet, don't write functions for such very basic things! *(volatile uint32_t*)SET_BASE = 0; is perfectly clear, readable and self-documented code. While clear_register(SET_BASE) suggests that something more advanced than just a simple write is taking place.
However, you could have placed the cast inside the SET_BASE macro. For details and examples see How to access a hardware register from firmware?
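For example, something like this (a sketch, assuming the 32-bit address discussed above):

/* The cast lives inside the macro, so every access is volatile. */
#define SET_BASE (*(volatile uint32_t*)0xFE20001Cu)

SET_BASE = 0u;                /* clear the register */
SET_BASE |= 1u << pin_number; /* set one pin        */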
Let uint8 and uint16 be datatypes for 8-bit and 16-bit unsigned integers.
uint8 a = 1;
uint16 b = a << 8;
I tested this program on a 32-bit architecture with the result
b = 256
Would the same program on a system with 8-bit registers yield the result
b = 0
because all the bits in the register get shifted out by a << 8?
Registers are irrelevant. This is about the width of your types.
When you shift a value by more bits than it possesses, the behaviour is undefined. The compiler, the program, the computer, the tax office can legally manifest any results accordingly. And, no, that's not just theoretical.
However, operands in C are promoted before interesting things are done on them. So, your uint8_t becomes an int before the left-shift.
Now it depends on your architecture (as determined by your compiler configuration) as to what happens: is int on your implementation only 8-bit? No, it's not! The standard requires int to be at least 16 bits wide. The result, then, regardless of any "register size", must abide by the rules of the language, yielding the mathematically appropriate answer (256). And even if int were that narrow, you'd hit that undefined behaviour, so the question would be moot.
Under the bonnet, if more than one register is needed to hold a variable, then that's what will and must happen (at whatever performance cost is implied as a result). That's if a register is used at all; remember, you're programming in an abstraction, not hand-crafting machine code. The program snippet you showed can be completely optimised away during compilation and doesn't require any runtime instructions at all.
Would the same program on a system with registers of 8-bit length yield the result b = 0?
No.
In the expression a << 8 the variable a will get promoted to an int before the bit shift. And an int is guaranteed to be at least 16 bits.
b will have the value 256 on all platforms unless there's a bug in the compiler.
However, if you changed the second line to uint32 b = a << 16; you might get strange results. a would still get promoted to an int, but if int is two bytes long, then a << 16 will invoke undefined behavior.
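The usual fix is to cast before shifting, so that the whole expression is evaluated in an unsigned 32-bit type (a one-line sketch using the question's hypothetical uint32):

uint32 b = (uint32)a << 16; /* cast first, then shift: well-defined */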
I'm trying to get a little fancier with how I write my drivers for peripherals in embedded applications.
Naturally, reading and writing to predefined memory mapped areas is a common task, so I try to wrap as much stuff up in a struct as I can.
Sometimes, I want to write to the whole register, and sometimes I want to manipulate a subset of bits in this register. Lately, I've read some stuff that suggests making a union that contains a single uintX type that's big enough to hold the whole register (usually 8 or 16 bits), as well as a struct that has a collection of bitfields in it to represent the specific bits of that register.
After reading a few comments on some of these posts that have this strategy outlined for managing multiple control/status registers for a peripheral, I concluded that most people with experience in this level of embedded development dislike bitfields, largely due to lack of portability and endianness issues between different compilers... Not to mention that debugging can be confounded by bitfields as well.
The alternative that most people seem to recommend is to use bit shifting to ensure that the driver will be portable between platforms, compilers and environments, but I have had a hard time seeing this in action.
My question is:
How do I take something like this:
typedef union data_port
{
    uint16_t CCR1;
    struct
    {
        uint16_t data1 : 5;
        uint16_t data2 : 3;
        uint16_t data3 : 4;
        uint16_t data4 : 4;
    };
} data_port;
And get rid of the bitfields and convert to a bit-shifting scheme in a sane way?
Part 3 of this guy's post here describes what I'm talking about in general... Notice at the end, he puts all the registers (wrapped up as unions) in a struct and then suggests doing the following:
define a pointer to refer to the CAN base address and cast it as a pointer to the (CAN) register file, like the following.
#define CAN0 (*(CAN_REG_FILE *)CAN_BASE_ADDRESS)
What the hell is this cute little move all about? CAN0 is a pointer to a pointer to a function of a...number that's #defined as CAN_BASE_ADDRESS? I don't know...He lost me on that one.
The C standard does not specify how much memory a sequence of bit-fields occupies or what order the bit-fields are in. In your example, some compilers might decide to use 32 bits for the bit-fields, even though you clearly expect it to cover 16 bits. So using bit-fields locks you down to a specific compiler and specific compilation flags.
Using types larger than unsigned char also has implementation-defined effects, but in practice it is a lot more portable. In the real world, there are just two choices for an uintNN_t: big-endian or little-endian, and usually for a given CPU everybody uses the same order because that's the order that the CPU uses natively. (Some architectures such as mips and arm support both endiannesses, but usually people stick to one endianness across a large range of CPU models.) If you're accessing a CPU's own registers, its endianness may be part of the CPU anyway. On the other hand, if you're accessing a peripheral, you need to take care.
The documentation of the device that you're accessing will tell you how big a memory unit to address at once (apparently 2 bytes in your example) and how the bits are arranged. For example, it might state that the register is a 16-bit register accessed with 16-bit load/store instructions whatever the CPU's endianness is, that data1 encompasses the 5 low-order bits, data2 the next 3, data3 the next 4 and data4 the next 4. In this case, you would declare the register as a uint16_t.
typedef volatile uint16_t data_port_t;
data_port_t *port = GET_DATA_PORT_ADDRESS();
Memory addresses in devices almost always need to be declared volatile, because it matters that the compiler reads and writes to them at the right time.
To access the parts of the register, use bit-shift and bit-mask operators. For example:
#define DATA2_WIDTH 3
#define DATA2_OFFSET 5
#define DATA2_MAX (((uint16_t)1 << DATA2_WIDTH) - 1) // in binary: 0000000000000111
#define DATA2_MASK (DATA2_MAX << DATA2_OFFSET) // in binary: 0000000011100000
void set_data2(data_port_t *port, unsigned new_field_value)
{
assert(new_field_value <= DATA2_MAX);
uint16_t old_register_value = *port;
// First, mask out the data2 bits from the current register value.
uint16_t new_register_value = (old_register_value & ~DATA2_MASK);
// Then mask in the new value for data2.
new_register_value |= (new_field_value << DATA2_OFFSET);
*port = new_register_value;
}
Obviously you can make the code a lot shorter. I separated it out into individual tiny steps so that the logic should be easy to follow. I include a shorter version below. Any compiler worth its salt should compile to the same code except in non-optimizing mode. Note that above, I used an intermediate variable instead of doing two assignments to *port because doing two assignments to *port would change the behavior: it would cause the device to see the intermediate value (and another read, since |= is both a read and a write). Here's the shorter version, and a read function:
void set_data2(data_port_t *port, unsigned new_field_value)
{
assert(new_field_value <= DATA2_MAX);
    *port = (*port & ~((((uint16_t)1 << DATA2_WIDTH) - 1) << DATA2_OFFSET))
          | (new_field_value << DATA2_OFFSET);
}
unsigned get_data2(data_port_t *port)
{
    return (*port >> DATA2_OFFSET) & DATA2_MAX;
}
#define CAN0 (*(CAN_REG_FILE *)CAN_BASE_ADDRESS)
There is no function here. A function declaration would have a return type followed by an argument list in parentheses. This takes the value CAN_BASE_ADDRESS, which is presumably a pointer of some type, then casts the pointer to a pointer to CAN_REG_FILE, and finally dereferences the pointer. In other words, it accesses the CAN register file at the address given by CAN_BASE_ADDRESS. For example, there may be declarations like
void *CAN_BASE_ADDRESS = (void*)0x12345678;
typedef struct {
const volatile uint32_t status;
volatile uint16_t foo;
volatile uint16_t bar;
} CAN_REG_FILE;
#define CAN0 (*(CAN_REG_FILE *)CAN_BASE_ADDRESS)
and then you can do things like
CAN0.foo = 42;
printf("CAN0 status: %d\n", (int)CAN0.status);
1.
The problem when getting rid of bitfields is that you can no longer use simple assignment statements: you must shift the value to write, create a mask, use an AND to wipe out the previous bits, and use an OR to write the new bits. Reading is similar, reversed. For example, let's take an 8-bit register defined like this:
val2.val1
0000.0000
val1 is the lower 4 bits, and val2 is the upper 4. The whole register is named REG.
To read val1 into tmp, one should issue:
tmp = REG & 0x0F;
and to read val2:
tmp = (REG >> 4) & 0xF; // AND redundant in this particular case
or
tmp = (REG & 0xF0) >> 4;
But to write tmp to val2, for example, you need to do:
REG = (REG & 0x0F) | (tmp << 4);
Of course some macro can be used to facilitate this, but the problem, for me, is that reading and writing require two different macros.
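For example, a pair of hypothetical helper macros taking the mask and shift as parameters (my sketch, not from the original post):

/* Generic field accessors: mask is the field's mask in place,
   shift is the field's bit offset within the register. */
#define REG_READ_FIELD(reg, mask, shift) (((reg) & (mask)) >> (shift))
#define REG_WRITE_FIELD(reg, mask, shift, val) \
    ((reg) = ((reg) & ~(mask)) | (((val) << (shift)) & (mask)))

tmp = REG_READ_FIELD(REG, 0xF0u, 4); /* read val2  */
REG_WRITE_FIELD(REG, 0xF0u, 4, tmp); /* write val2 */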
I think that bitfields are the best way, and a serious compiler should have options to define the endianness and bit ordering of such bitfields. Anyway, this is the future, even if, for now, not every compiler has full support.
2.
#define CAN0 (*(CAN_REG_FILE *)CAN_BASE_ADDRESS)
This macro defines CAN0 as a dereferenced pointer to the base address of the CAN register(s), no function declaration is involved. Suppose you have an 8-bit register at address 0x800. You could do:
#define REG_BASE 0x800 // address of the register
#define REG (*(uint8_t *) REG_BASE)
REG = 0; // becomes *REG_BASE = 0
tmp = REG; // tmp=*REG_BASE
Instead of uint8_t you can use a struct type, and all the bits, and probably all the bytes or words, go magically to their correct place, with the right semantics. Using a good compiler of course - but who doesn't want to deploy a good compiler?
Some compilers have/had extensions to assign a given address to a variable; for example old turbo pascal had the ABSOLUTE keyword:
var CAN: byte absolute 0x800:0000; // seg:ofs...!
The semantics are the same as before, only more straightforward because no explicit pointer is involved; everything is managed by the compiler automatically.
I'm trying to translate this snippet :
ntohs(*(UInt16*)VALUE) / 4.0
and some other ones, looking alike, from C to Swift.
Problem is, I have very little knowledge of Swift and I just can't understand what this snippet does... Here's all I know:
ntohs swaps from network byte order to host byte order
VALUE is a char[32]
I just discovered that the Swift expression (UInt(data.0) << 6) + (UInt(data.1) >> 2) does the same thing. Could someone please explain?
I want to return a Swift UInt (UInt64).
Thanks!
VALUE is a pointer to 32 bytes (char[32]).
The pointer is cast to UInt16 pointer. That means the first two bytes of VALUE are being interpreted as UInt16 (2 bytes).
* dereferences the pointer, so we get the first two bytes of VALUE as a 16-bit number. However, it is in network byte order, so we cannot do integer operations on it directly.
ntohs then swaps it to host byte order, and we get a normal integer.
We divide the integer by 4.0.
To do the same in Swift, let's just compose the byte values to an integer.
let host = (UInt(data.0) << 8) | UInt(data.1)
Note that to divide by 4.0 you will have to convert the integer to Float.
The C you quote is technically incorrect, although it will be compiled as intended by most production C compilers.¹ A better way to achieve the same effect, which should also be easier to translate to Swift, is
unsigned int val = ((((unsigned int)(unsigned char)VALUE[0]) << 8) | // ² ³
                    (((unsigned int)(unsigned char)VALUE[1]) << 0)); // ⁴
double scaledval = ((double)val) / 4.0; // ⁵
The first statement reads the first two bytes of VALUE, interprets them as a 16-bit unsigned number in network byte order, and converts them to host byte order (whether or not those byte orders are different). The second statement converts the number to double and scales it.
¹ Specifically, *(UInt16*)VALUE provokes undefined behavior because it violates the type-based aliasing rules, which are asymmetric: a pointer with character type may be used to access an object with any type, but a pointer with any other type may not be used to access an object with (array-of-)character type.
² In C, a cast to unsigned int here is necessary in order to make the subsequent shifting and or-ing happen in an unsigned type. If you cast to uint16_t, which might seem more appropriate, the "usual arithmetic conversions" would then convert it to int, which is signed, before doing the left shift. This would provoke undefined behavior on a system where int was only 16 bits wide (you're not allowed to shift into the sign bit). Swift almost certainly has completely different rules for arithmetic on types with small ranges; you'll probably need to cast to something before the shift, but I cannot tell you what.
³ I have over-parenthesized this expression so that the order of operations will be clear even if you aren't terribly familiar with C.
⁴ Left shifting by zero bits has no effect; it is only included for parallel structure.
⁵ An explicit conversion to double before the division operation is not necessary in C, but it is in Swift, so I have written it that way here.
It looks like the code is taking the single byte value[0]: the pointer is dereferenced, which should retrieve a number from a low memory address, 1 to 127 (possibly 255).
Whatever number is there is then divided by 4.
I genuinely can't believe my interpretation is correct, and I can't check it because I have no laptop. I really think there may be a typo in your code, as it is not a good thing to do, portability- and reusability-wise.
I must stress that the string is not converted to a number which is then used.
In an Open Source program I wrote, I'm reading binary data (written by another program) from a file and outputting ints, doubles, and other assorted data types. One of the challenges is that it needs to run on 32-bit and 64-bit machines of both endiannesses, which means that I end up having to do quite a bit of low-level bit-twiddling. I know a (very) little bit about type punning and strict aliasing and want to make sure I'm doing things the right way.
Basically, it's easy to convert from a char* to an int of various sizes:
int64_t snativeint64_t(const char *buf)
{
/* Interpret the first 8 bytes of buf as a 64-bit int */
return *(int64_t *) buf;
}
and I have a set of support functions to swap byte orders as needed, such as:
int64_t sswappedint64_t(const int64_t wrongend)
{
/* Change the endianness of a 64-bit integer */
return (((wrongend & 0xff00000000000000LL) >> 56) |
((wrongend & 0x00ff000000000000LL) >> 40) |
((wrongend & 0x0000ff0000000000LL) >> 24) |
((wrongend & 0x000000ff00000000LL) >> 8) |
((wrongend & 0x00000000ff000000LL) << 8) |
((wrongend & 0x0000000000ff0000LL) << 24) |
((wrongend & 0x000000000000ff00LL) << 40) |
((wrongend & 0x00000000000000ffLL) << 56));
}
At runtime, the program detects the endianness of the machine and assigns
one of the above to a function pointer:
int64_t (*slittleint64_t)(const char *);
if(littleendian) {
slittleint64_t = snativeint64_t;
} else {
slittleint64_t = sswappedint64_t;
}
Now, the tricky part comes when I'm trying to cast a char* to a double. I'd
like to re-use the endian-swapping code like so:
union
{
double d;
int64_t i;
} int64todouble;
int64todouble.i = slittleint64_t(bufoffset);
printf("%lf", int64todouble.d);
However, some compilers could optimize away the "int64todouble.i" assignment and break the program. Is there a safer way to do this, while considering that this program must stay optimized for performance, and also that I'd prefer not to write a parallel set of transformations to cast char* to double directly? If the union method of punning is safe, should I be rewriting my functions like snativeint64_t to use it?
I ended up using Steve Jessop's answer, because the conversion functions rewritten to use memcpy, like so:
int64_t snativeint64_t(const char *buf)
{
/* Interpret the first 8 bytes of buf as a 64-bit int */
int64_t output;
memcpy(&output, buf, 8);
return output;
}
compiled into the exact same assembler as my original code:
snativeint64_t:
movq (%rdi), %rax
ret
Of the two, the memcpy version more explicitly expresses what I'm trying to do and should work on even the most naive compilers.
Adam, your answer was also wonderful and I learned a lot from it. Thanks for posting!
I highly suggest you read Understanding Strict Aliasing. Specifically, see the sections labeled "Casting through a union". It has a number of very good examples. While the article is on a website about the Cell processor and uses PPC assembly examples, almost all of it is equally applicable to other architectures, including x86.
Since you seem to know enough about your implementation to be sure that int64_t and double are the same size, and have suitable storage representations, you might hazard a memcpy. Then you don't even have to think about aliasing.
Since you're using a function pointer for a function that might easily be inlined if you were willing to release multiple binaries, performance must not be a huge issue anyway. But you might like to know that some compilers can be quite fiendish optimising memcpy - for small integer sizes, a set of loads and stores can be inlined, and you might even find the variables are optimised away entirely and the compiler does the "copy" simply by reassigning the stack slots it's using for the variables, just like a union.
int64_t i = slittleint64_t(bufoffset);
double d;
memcpy(&d,&i,8); /* might emit no code if you're lucky */
printf("%lf", d);
Examine the resulting code, or just profile it. Chances are even in the worst case it will not be slow.
In general, though, doing anything too clever with byteswapping results in portability issues. There exist ABIs with middle-endian doubles, where each word is little-endian, but the big word comes first.
Normally you could consider storing your doubles using sprintf and sscanf, but for your project the file formats aren't under your control. But if your application is just shovelling IEEE doubles from an input file in one format to an output file in another format (not sure if it is, since I don't know the database formats in question, but if so), then perhaps you can forget about the fact that it's a double, since you aren't using it for arithmetic anyway. Just treat it as an opaque char[8], requiring byteswapping only if the file formats differ.
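A sketch of that opaque treatment (the helper name swap8 is mine):

/* Reverse 8 bytes in place; the double's bit pattern is never
   interpreted, only copied, so no aliasing issues arise. */
static void swap8(char buf[8])
{
    for (int k = 0; k < 4; k++) {
        char t = buf[k];
        buf[k] = buf[7 - k];
        buf[7 - k] = t;
    }
}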
The standard says that writing to one field of a union and then immediately reading from a different field is undefined behaviour. So if you go by the rule book, the union-based method won't work.
Macros are usually a bad idea, but this might be an exception to the rule. It should be possible to get template-like behaviour in C using a set of macros using the input and output types as parameters.
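A sketch of what such a macro might look like (my hypothetical PUN; both arguments must be lvalues of the same size):

#include <stdint.h>
#include <string.h>

/* "Template-like" reinterpretation: copy the bytes of one lvalue
   into another of a different type. */
#define PUN(to, from) \
    do { memcpy(&(to), &(from), sizeof(to)); } while (0)

int64_t i = slittleint64_t(bufoffset); /* from the question */
double d;
PUN(d, i); /* d now holds the bit pattern read from the file */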
As a very small sub-suggestion, I suggest you investigate whether you can swap the masking and the shifting in the 64-bit case. Since the operation is swapping bytes, you should be able to always get away with a mask of just 0xff. This should lead to faster, more compact code, unless the compiler is smart enough to figure that one out itself.
In brief, changing this:
(((wrongend & 0xff00000000000000LL) >> 56)
into this:
((wrongend >> 56) & 0xff)
should generate the same result.
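Applied to every term, and going through an unsigned intermediate to sidestep signed-shift pitfalls (my addition, an untested sketch), the whole function might look like:

int64_t sswappedint64_t(const int64_t wrongend)
{
    const uint64_t w = (uint64_t)wrongend; /* unsigned: shifts are well-defined */
    return (int64_t)(((w >> 56) & 0xffu)
                   | (((w >> 48) & 0xffu) << 8)
                   | (((w >> 40) & 0xffu) << 16)
                   | (((w >> 32) & 0xffu) << 24)
                   | (((w >> 24) & 0xffu) << 32)
                   | (((w >> 16) & 0xffu) << 40)
                   | (((w >> 8) & 0xffu) << 48)
                   | ((w & 0xffu) << 56));
}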
Edit:
Removed comments regarding how to effectively store data always as big-endian and swap to machine endianness, as the questioner hasn't mentioned that another program writes his data (which is important information). Still, if the data needs conversion from any endianness to big-endian, and from big-endian to host endianness, then ntohs/ntohl/htons/htonl are the best methods, most elegant and unbeatable in speed (as they will perform the task in hardware if the CPU supports that; you can't beat that).
Regarding double/float, just store them to ints by memory casting:
double d = 3.1234;
printf("Double %f\n", d);
int64_t i = *(int64_t *)&d;
// Now i contains the double value as int
double d2 = *(double *)&i;
printf("Double2 %f\n", d2);
Wrap it into a function
int64_t doubleToInt64(double d)
{
return *(int64_t *)&d;
}
double int64ToDouble(int64_t i)
{
return *(double *)&i;
}
Questioner provided this link:
http://cocoawithlove.com/2008/04/using-pointers-to-recast-in-c-is-bad.html
as proof that casting is bad... unfortunately I can only strongly disagree with most of this page. Quotes and comments:
As common as casting through a pointer is, it is actually bad practice and potentially risky code. Casting through a pointer has the potential to create bugs because of type punning.
It is not risky at all and it is also not bad practice. It has only a potential to cause bugs if you do it incorrectly, just like programming in C has the potential to cause bugs if you do it incorrectly, so does any programming in any language. By that argument you must stop programming altogether.
Type punning: A form of pointer aliasing where two pointers refer to the same location in memory but represent that location as different types. The compiler will treat both "puns" as unrelated pointers. Type punning has the potential to cause dependency problems for any data accessed through both pointers.
This is true, but unfortunately totally unrelated to my code.
What he refers to is code like this:
int64_t * intPointer;
:
// Init intPointer somehow
:
double * doublePointer = (double *)intPointer;
Now doublePointer and intPointer both point to the same memory location, but treat that location as different types. This is the situation you should indeed solve with a union; anything else is pretty bad. But that is not what my code does!
My code copies by value, not by reference. I cast a double to an int64 pointer (or the other way round) and immediately dereference it. Once the functions return, there is no pointer held to anything. There is an int64 and a double, and these are totally unrelated to the input parameter of the functions. I never copy any pointer to a pointer of a different type (if you saw this in my code sample, you strongly misread the C code I wrote); I just transfer the value to a variable of a different type (in its own memory location). So the definition of type punning does not apply at all, as it says "refer to the same location in memory" and nothing here refers to the same memory location.
int64_t intValue = 12345;
double doubleValue = int64ToDouble(intValue);
// The statement below will not change the value of doubleValue!
// They are not pointing to the same memory location; both have their
// own storage space on the stack and are totally unrelated.
intValue = 5678;
My code is nothing more than a memory copy, just written in C without an external function.
int64_t doubleToInt64(double d)
{
return *(int64_t *)&d;
}
Could be written as
int64_t doubleToInt64(double d)
{
int64_t result;
memcpy(&result, &d, sizeof(d));
return result;
}
It's nothing more than that, so there is no type punning even in sight anywhere. And this operation is also totally safe, as safe as an operation can be in C. In practice, a double is always 64 bits (unlike int, it does not vary in size on mainstream platforms, since it follows the IEEE 754 binary64 format), hence it will always fit into an int64_t-sized variable.