I have been generating a C function from the MATLAB Coder environment in order to implement it in another piece of software called MAX/MSP.
Given my poor level in C programming, I'm trying to make sense of it, and there are some syntax elements I can't understand: the use of unsigned values like 0U or 1U to pass arrays.
The following example doesn't do anything useful; pasting the entire generated code wouldn't help much, unless you think it would.
void function1(const double A[49], double B[50])
{
    /* pass the array to function2 via a pointer to its first element */
    function2((double *)&A[0U]);
}

static void function2(const double A[])
{
}
While doing some math, Matlab wrote something like:
b = 2;
f[1U] += b;
I don't understand the use of the unsigned value here either...
Thanks a lot!
For a[n], array indexes are always non-negative values, from 0 to n-1. Appending a u to a decimal constant poses no problem for indexing an array, yet does offer a benefit: it ensures that the value is of minimal type width and of some unsigned type.
Automated generation of fixed indexes, as with MATLAB Coder, benefits from using the u suffix.
Consider small and large values on a system with 32-bit unsigned/int/long/size_t:
aweeu[0u]; // `0u` is 32-bit `unsigned`.
aweenou[0]; // `0` is 32-bit `int`.
abigu[3000000000u]; // `3000000000u` is a 32-bit `unsigned`.
abignou[3000000000]; // `3000000000` is a 64-bit `long long`.
Is this of value? Perhaps. Some compilers may look at the value first, see that all of the above are in range of size_t, and not complain. Others may complain about an index of type long long or possibly even int. By appending the u, such rare complaints do not occur.
The U suffix is not strictly necessary here. It can be useful to force unsigned arithmetic in certain situations, with surprising side effects:
if (-1 < 1U) {
    printf("surprise!\n");  /* never printed: -1 converts to UINT_MAX, so the comparison is false */
}
On some rare occasions, it is necessary to avoid certain type changes. On many current architectures, the following comparisons hold, and the difference between the type of 2147483648 and that of 2147483648U is more than just signedness:
For example, on 32-bit Linux and on 32- and 64-bit Windows:
sizeof(2147483648) == sizeof(long long) // 8 bytes
sizeof(2147483648U) == sizeof(unsigned) // 4 bytes
On many embedded systems with 16-bit ints:
sizeof(2147483648) == sizeof(long long) // 8 bytes
sizeof(2147483648U) == sizeof(unsigned long) // 4 bytes
sizeof(32768) == sizeof(long) // 4 bytes
sizeof(32768U) == sizeof(unsigned int) // 2 bytes
Depending on implementation details, array index values can exceed the range of both type int and type unsigned, and pointer offset values can be even larger. Just specifying U is no guarantee of anything.
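As a side note, hand-written indexing code usually sidesteps these width questions by using size_t. A minimal sketch (the function and names are mine, not from the question):

#include <stddef.h>

double sum_first(const double *a, size_t n)
{
    double s = 0.0;
    /* size_t can represent the size of any object, and hence any
       valid index, which int or unsigned may not on some implementations */
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}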
I am trying to do a left shift greater than or equal to 64, but I am not sure which data type exists to help me out here.
I'm working on an existing project where 63 macros are already taken; the next one is 64 (which is my change), for which I have to do the left-shift operation.
Note: I just want to understand how I can set a particular bit beyond 64 bits. "I am not sure which data type exists to help me out here."
The code below is just a sample. We know no data type wider than 64 bits exists, but can there be any solution for this?
#include <stdio.h>

#define PEAK 64

int main()
{
    unsigned long int a;
    a = (1ULL << PEAK);
    printf("%lu", a);
    return 0;
}
main.c:8:10: warning: left shift count >= width of type [-Wshift-count-overflow]
a= (1ULL << PEAK);
^~
I just want to understand how I can set a particular bit beyond 64 bits.
You can't.
Old answer:
You can do a left shift greater than or equal to 64 bits by doing exactly what you're doing.
This, of course, won't result in anything usable (either the original value, zero, or something else), and is undefined behavior, so don't do it.
If you want a data type that can do this, you're mostly out of luck. There is no guarantee that a 128-bit data type exists in C, and any compiler extensions you may see are not portable. This may be possible with SIMD instructions, but those aren't portable across processors either.
That said, there is unsigned __int128 in GCC and Clang, which allows such shifts (through emulation of wider integers). However, it isn't available in MSVC. Also note that you won't be able to print this number with printf directly, so it's of limited use anyway.
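For completeness, the usual portable workaround is to keep the flags in an array of 64-bit words and index into it; a minimal sketch (all names here are hypothetical):

#include <stdint.h>

#define NUM_FLAGS 128                          /* hypothetical: room for bits 0..127 */

static uint64_t flags[(NUM_FLAGS + 63) / 64];  /* 64 flags per word */

static void set_flag(unsigned bit)
{
    flags[bit / 64] |= (uint64_t)1 << (bit % 64);  /* shift count stays in 0..63 */
}

set_flag(64) then sets the lowest bit of the second word, and no shift count ever leaves the 0..63 range.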
You can shift a 64-bit unsigned type left by zero to 63 bits; anything else leads to undefined behaviour. The largest unsigned integer type is uintmax_t, but on most common implementations it is unsigned long long, which is 64 bits, so the shift is equally undefined there. In practice it will produce zero, the original value, or something completely unpredictable.
Why do you think you need to do this?
I'm trying to make a hash function for int16_t. The function prototype looks like this:
uint64_t hash_int16_t(const void *key);
So far I've gotten this but I don't know if this is the correct approach:
uint64_t hash_int16_t(const void *key)
{
    // key is expected to be an int16_t
    const int16_t *e = (const int16_t *)key;
    uint64_t x = (uint64_t)*e;
    x = (x ^ (x >> 30)) * UINT64_C(0xbf58476d1ce4e5b9);
    x = (x ^ (x >> 27)) * UINT64_C(0x94d049bb133111eb);
    x = x ^ (x >> 31);
    return x;
}
Is there a hash function for signed types? Should I mix the bits using 16-bit unsigned integers, or will 64-bit unsigned integers do fine? Will I be losing information when I cast it to an unsigned type if the integer is negative? Will this generate undefined behavior?
P.S. The code is in C and I've taken the hash function from here.
Edit 1: The argument is const void *key because the user is allowed to store keys as other values, like structs or strings. The above function will add support for int16_t keys.
Edit 2: What I'm trying to accomplish is a generic hash table. The user will have to provide a hash function when initializing the hash table and the example above is bundled with the hash table.
Is there a hash function for signed types?
Sure. A good hash function that works on unsigned types can also work just fine on signed types. If the hash function is good, then it has good uniformity, and so it doesn't matter whether you call a particular bit a "sign bit" or "just another bit." For the purposes of this answer, I'll take it as given that the algorithm you found in the linked thread is "good."
Should I mix the bits using 16 bit unsigned integers or 64 bit unsigned integers will do fine?
Bit-shifting a uint16_t promotes it only to int, not to a 64-bit type, so you can't rely on the shift operators to widen the intermediate results for you; you must work with uint64_t, as in the code you posted.
Will I be losing information when I cast it to an unsigned type if the integer is negative?
No, because each possible value of an int16_t maps to a distinct value when converted to a uint64_t: the range [0, 32767] maps to [0, 32767] and the range [-32768, -1] maps to [18446744073709518848, 18446744073709551615] (see below for explanation).
Will this generate undefined behavior?
No. The C standard (C11) specifies the following for signed-to-unsigned integer conversion (§6.3.1.3):
[...] if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
Thus, -32768 converts to -32768 + 2^64 = 18446744073709518848, and -1 converts to -1 + 2^64 = 18446744073709551615.
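A quick way to check those two conversions (a minimal sketch, assuming a C99-or-later toolchain for inttypes.h):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int16_t neg_one = -1;
    int16_t min16 = INT16_MIN;                   /* -32768 */
    /* the conversion adds 2^64 until the value is in range of uint64_t */
    printf("%" PRIu64 "\n", (uint64_t)neg_one);  /* 18446744073709551615 */
    printf("%" PRIu64 "\n", (uint64_t)min16);    /* 18446744073709518848 */
    return 0;
}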
As for the algorithm itself... if the hash value is only being used to create a hash table, then it isn't necessary for the hash function to have any cryptographic properties like dispersion. As such, this trivial algorithm might work just fine for an int16_t x:
return (uint64_t) x;
This function has no dispersion, but (trivially) optimal uniformity for the input and output range. Whether this is acceptable will depend on the hash table implementation. If it naively uses only certain bits of the hash value to select a bin to place the value in, and it doesn't do any mixing of its own, you'll need to focus the uniformity of your output on those bits, wherever/whichever they are.
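For instance, a naive table might derive the bin index from the low bits only; a hypothetical sketch, where NUM_BINS is an assumed power of two:

#include <stdint.h>
#include <stddef.h>

#define NUM_BINS 1024u   /* hypothetical table size, power of two */

static size_t bin_for(uint64_t hash)
{
    /* keeps only the low 10 bits, so those bits must be well distributed */
    return (size_t)(hash & (NUM_BINS - 1u));
}

With the trivial identity hash above, consecutive int16_t keys land in consecutive bins, which is often perfectly acceptable.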
This is a bit of a general question and not completely specific to the C programming language, but it's what I'm studying at the moment.
Why does an integer take up 4 bytes, or however many bytes it does, depending on the system?
Why does it not take up 1 byte per integer?
For example why does the following take up 8 bytes:
int a = 1;
int b = 1;
Thanks
I am not sure whether you are asking why int objects have fixed sizes instead of variable sizes or whether you are asking why int objects have the fixed sizes they do. This answers the former.
We do not want the basic types to have variable lengths. That makes it very complicated to work with them.
We want them to have fixed lengths, because then it is much easier to generate instructions to operate on them. Also, the operations will be faster.
If the size of an int were variable, consider what happens when you do:
b = 3;
b += 100000;
scanf("%d", &b);
When b is first assigned, only one byte is needed. Then, when the addition is performed, the compiler needs more space. But b might have neighbors in memory, so the compiler cannot just grow it in place. It has to release the old memory and allocate new memory somewhere.
Then, when we do the scanf, the compiler does not know how much data is coming. scanf will have to do some very complicated work to grow b over and over again as it reads more digits. And, when it is done, how does it let you know where the new b is? The compiler has to have some mechanism to update the location for b. This is hard and complicated and will cause additional problems.
In contrast, if b has a fixed size of four bytes, this is easy. For the assignment, write 3 to b. For the addition, add 100000 to the value in b and write the result to b. For the scanf, pass the address of b to scanf and let it write the new value to b. This is easy.
The basic integral type int is guaranteed to have at least 16 bits; "at least" means that compilers/architectures may also provide more bits. On today's 32- and 64-bit systems, int will most likely comprise 32 bits, i.e. 4 bytes (cf., for example, cppreference.com):
Integer types
... int (also accessible as signed int): This is the most optimal
integer type for the platform, and is guaranteed to be at least 16
bits. Most current systems use 32 bits (see Data models below).
If you want an integral type with exactly 8 bits, use int8_t or uint8_t.
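To see the sizes on your own system, a minimal sketch (the printed values are implementation-defined; the comments show typical desktop results):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    printf("sizeof(int)    = %zu\n", sizeof(int));    /* usually 4 */
    printf("sizeof(long)   = %zu\n", sizeof(long));   /* 4 or 8 */
    printf("sizeof(int8_t) = %zu\n", sizeof(int8_t)); /* always 1 */
    return 0;
}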
It doesn't; the size is implementation-defined. A signed int compiled with gcc for an Atmel 8-bit microcontroller, for example, is a 16-bit integer. An unsigned int is also 16 bits, but ranges from 0 to 65535 since it's unsigned.
The fact that an int uses a fixed number of bytes (such as 4) is a compiler/CPU efficiency and limitation, designed to make common integer operations fast and efficient.
There are types (such as BigInteger in Java) that take a variable amount of space. These types would have 2 fields, the first being the number of words being used to represent the integer, and the second being the array of words. You could define your own VarInt type, something like:
struct VarInt {
    char length;
    char bytes[];   /* flexible array member: variable length */
};

/* static initialization of a flexible array member is a GNU extension;
   bytes are little-endian here */
struct VarInt one    = {1, {1}};     /* 2 bytes */
struct VarInt v257   = {2, {1,1}};   /* 3 bytes: 257 = 1 + 1*256 */
struct VarInt v65537 = {3, {1,0,1}}; /* 4 bytes: 65537 = 1 + 0*256 + 1*65536 */
and so on, but this would not be very fast to perform arithmetic on. You'd have to decide how you would want to treat overflow; resizing the storage would require dynamic memory allocation.
I'm trying to work with the Tiny Encryption Algorithm, but here is the thing:
I don't get why the array is of type unsigned long yet holds hex values, for example:
unsigned long v[] = {0xe15034c8, 0x260fd6d5};
Also, I want to pass plaintext (ASCII) read from a stream; in what data format should I pass it to this type of array?
The encryption function is:
void encipher(unsigned long *const v, unsigned long *const w,
              const unsigned long *const k)
{
    register unsigned long y = v[0], z = v[1], sum = 0, delta = 0x9E3779B9,
                           a = k[0], b = k[1], c = k[2], d = k[3], n = 32;

    while (n-- > 0)
    {
        sum += delta;
        y += (z << 4)+a ^ z+sum ^ (z >> 5)+b;
        z += (y << 4)+c ^ y+sum ^ (y >> 5)+d;
    }

    w[0] = y; w[1] = z;
}
Integral constants can be represented in octal, decimal, or hex. The following lines are equivalent and initialize a to 10.
int a = 012;
int a = 10;
int a = 0xA;
In your case,
unsigned long v[] = {0xe15034c8, 0x260fd6d5};
is equivalent to:
unsigned long v[] = {3780129992, 638572245};
Regarding
I want to pass a plaintext (ASCII) read from stream, in what data format should I pass it to this type of array?
You'll need to parse the contents of the string using one of several functions from the standard library, such as sscanf, atoi, or strtol, and assign the resulting number to an element of v.
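For example, a minimal sketch using strtoul (my choice here, not prescribed above; base 0 lets it accept decimal, octal, and 0x-prefixed hex):

#include <stdlib.h>

int main(void)
{
    unsigned long v[2];
    char *end;
    /* base 0: the 0x prefix selects hexadecimal automatically */
    v[0] = strtoul("0xe15034c8", &end, 0);
    v[1] = strtoul("0x260fd6d5", &end, 0);
    return 0;
}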
It uses long because 0xe15034c8 is 3780129992 in base 10, and on some platforms int is too small to hold such a big number. Moreover, hexadecimal representations often have fewer digits and consume less space in source code than decimal ones. The maximum value of a signed 32-bit int is 2,147,483,647: your first number (3,780,129,992) is greater, while the second one (638,572,245) is smaller.
unsigned long is guaranteed to be at least 4 bytes (32 bits) on any platform.
So storing a 32-bit value in this array is totally fine.
You can do sizeof(unsigned long) to see the size of unsigned long on your platform and work out the maximum value it can hold.
Also, I want to pass a plaintext (ASCII) read from stream, in what data format should I pass it to this type of array?
Presuming that you want to read the input as a string and store the value in this array, there are APIs like atoi() which you can use.
This is a common confusion. Hexadecimal is not a data type; it's a representation. For example, a "table" is called different things in other spoken languages (e.g. "mesa" in Spanish), but it is still a table. In the same way, values (e.g. 10) can be represented in different ways; the most common for us humans is base 10, but computers use binary (and related representations like hexadecimal), so what is 10 for us is 1010 in binary (A in hexadecimal) for the computer. In the end, the long is just a 32-bit value; how you want to represent it is up to you, but the value is the same.
For characters: to the computer they are still numbers, so you can pass their values packed into chunks of longs (or, for digit strings, parse them with atoi()).
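To make that concrete, here is a minimal sketch of packing 8 ASCII bytes into the two-word block that encipher() expects (the function name is hypothetical, and the byte order within each word is whatever the host uses):

#include <stdint.h>
#include <string.h>

/* Pack 8 plaintext bytes into two 32-bit words for encipher().
   Assumes unsigned long holds at least 32 bits. */
void pack_block(const char text[8], unsigned long v[2])
{
    uint32_t w[2];
    memcpy(w, text, 8);   /* copy the raw bytes; byte order is the host's */
    v[0] = w[0];
    v[1] = w[1];
}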
I have two 64-bit integers and I would like to concatenate them into a single 128-bit integer.
uint64_t len_A;
uint64_t len_C;
len_AC= (len_A << 64) | len_C;
GCC doesn't support uint128_t.
Is there any other ways to do it?
First of all you should decide how you would store that 128-bit integer.
There is no built-in integer type of that width.
You can store the integer, for example, as a struct consisting of two 64-bit integers:
typedef struct { uint64_t high; uint64_t low; } int128;
Then the answer will be quite simple.
The question is what are you going to do with this integer next.
As Inspired said:
The question is what are you going to do with this integer next.
You probably want to use an arbitrary-precision library that handles this for you in a portable and reliable way. Why? Because you may find yourself dealing with endianness issues, such as choosing the high or low end of the integer on given hardware.
Even if you know for sure where your code will run, you will still need to develop an entire set of functions that deals with your 128-bit integer, because not all compilers support a 128-bit type (it seems GCC does); for instance, you will need a set of functions for basic mathematical operations.
It's probably better if you use the GMP library; visit the following link for more: http://gmplib.org/
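As an illustration, a minimal sketch of the concatenation using GMP's mpz_import (assuming GMP is installed; link with -lgmp, and out must already be initialized with mpz_init):

#include <gmp.h>
#include <stdint.h>

/* Build (hi << 64) | lo as a GMP integer. */
void concat128(mpz_t out, uint64_t hi, uint64_t lo)
{
    uint64_t words[2] = { hi, lo };
    /* 2 words, most-significant word first, native byte order, no nail bits */
    mpz_import(out, 2, 1, sizeof(uint64_t), 0, 0, words);
}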
If your GCC does not have a 128-bit integer type (where it does, it is spelled unsigned __int128, not uint128_t), you need to represent the value yourself, e.g. with a structure like:
struct my128int_st {
    uint64_t hi, lo;
} ac;

ac.hi = a;
ac.lo = c;
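Where the compiler does provide the 128-bit extension (GCC and Clang on 64-bit targets), the concatenation from the question becomes a one-liner; a sketch, assuming that extension is available:

/* widen before shifting so the shift count 64 is valid */
unsigned __int128 ac128 = ((unsigned __int128)len_A << 64) | len_C;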