I understand that I can assign a variable to a bigger type if the value fits, and it's OK to do it. For example:
short s = 2;
int i = s;
long l = i;
long long ll = l;
When I try to do it with pointers it fails and I don't understand why. I have integers that I pass as arguments to functions expecting a pointer to a long long. And it hasn't failed, yet.
The other day I was going from short to int, and something weird happened; I hope someone can explain it to me. This would be the minimal code to reproduce it.
short s = 2;
int* ptr_i = &s; // here ptr_i is a pointer to s, OK, but *ptr_i is definitely not 2
When I try to do it with pointers it fails and I don't understand why.
A major purpose of the type system in C is to reduce programming mistakes. A default conversion may be disallowed or diagnosed because it is symptomatic of a mistake, not because the value cannot be converted.
In int *ptr_i = &s;, &s is the address of a short, typically a 16-bit integer. If ptr_i is set to point to the same memory and *ptr_i is used, it attempts to refer to an int at that address, typically a 32-bit integer. This is generally an error; loading a 32-bit integer from a place where there is a 16-bit integer, and we do not know what is beyond it, is not usually a desired operation. The C standard does not define the behavior when this is attempted.
In fact, there are multiple things that can go wrong with this:
As described above, using *ptr_i when we only know there is a short there may produce undesired results.
The short object may have alignment that is not suitable for an int, which can cause a problem either with the pointer conversion or with using the converted pointer.
The C standard does not define the result of converting short * to int * except that, if it is properly aligned for int, the result can be converted back to short * to produce a value equal to the original pointer.
Even if short and int are the same width, say 32 bits, and the alignment is good, the C standard has rules about aliasing that allow the compiler to assume that an int * never accesses an object that was defined as short. In consequence, optimization of your program may transform it in unexpected ways.
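If the goal is simply to have the value of s available as an int, the defined route is an ordinary value conversion, not a pointer conversion; and if you really need to inspect the bytes of the short object, copying them with memcpy is defined. A minimal sketch:

#include <stdio.h>
#include <string.h>

int main(void) {
    short s = 2;

    int i = s;                       /* defined: value conversion, i == 2 */

    unsigned char bytes[sizeof s];   /* defined: examine the object representation */
    memcpy(bytes, &s, sizeof s);

    printf("i = %d, first byte = %u\n", i, (unsigned)bytes[0]);
    return 0;
}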
I have integers that I pass as arguments to functions expecting a pointer to a long long.
C does allow default conversions of integers to integers that are the same width or wider, because these are not usually mistakes.
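A small sketch of the distinction (takes_value and takes_pointer are hypothetical functions for illustration): passing the integer's value to a function taking long long is an ordinary, defined conversion, while passing its address to a function taking long long * is not.

#include <stdio.h>

/* Hypothetical functions for illustration. */
static void takes_value(long long v)    { printf("%lld\n", v); }
static void takes_pointer(long long *p) { printf("%lld\n", *p); }

int main(void) {
    int i = 42;
    long long ll = i;       /* defined: the int value is converted to long long */

    takes_value(i);         /* OK: the same value conversion happens at the call */
    takes_pointer(&ll);     /* OK: the pointer really does point at a long long */
    /* takes_pointer((long long *)&i);  NOT OK: would read past the int and
                                        break the aliasing rules */
    return 0;
}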
Because in C the array length has to be stated when the array is defined, would it be acceptable practice to use the first element as the length, e.g.
int arr[9]={9,0,1,2,3,4,5,6,7};
Then use a function such as this to process the array:
void printarr(int *ARR) {
    for (int i = 1; i < ARR[0]; i++) {
        printf("%d ", ARR[i]);
    }
}
I can see no problem with this but would prefer to check with experienced C programmers first. I would be the only one using the code.
Well, it's bad in the sense that you have an array where the elements do not all mean the same thing. Storing metadata with the data is not a good thing. Just to extrapolate your idea a little bit: we could use the first element to denote the element size and then the second for the length. Try writing a function utilizing both ;)
It's also worth noting that with this method, you will have problems if the array is bigger than the maximum value an element can hold, which for char arrays is a very significant limitation. Sure, you can solve that by using the first two elements. And you can also use casts if you have floating-point arrays. But I can guarantee you that you will run into hard-to-trace bugs due to this. Among other things, endianness could cause a lot of issues.
And it would certainly confuse virtually every seasoned C programmer. This is not really a logical argument against the idea as such, but rather a pragmatic one. Even if this were a good idea (which it is not), you would have to have a long conversation with EVERY programmer who will have anything to do with your code.
A reasonable way of achieving the same thing is using a struct.
struct container {
    int *arr;
    size_t size;
};
int arr[10];
struct container c = { .arr = arr, .size = sizeof arr/sizeof *arr };
But in any situation where I would use something like above, I would probably NOT use arrays. I would use dynamic allocation instead:
const size_t size = 10;
int *arr = malloc(sizeof *arr * size);
if(!arr) { /* Error handling */ }
struct container c = { .arr = arr, .size = size };
However, do be aware that if you compute the size this way from a pointer instead of an array, you're in for "interesting" results.
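A small sketch of what goes wrong (the sizes shown assume a typical 64-bit platform where a pointer is 8 bytes and an int is 4):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int arr1[10];
    int *arr2 = malloc(sizeof *arr2 * 10);
    if (!arr2) { return 1; }

    /* Real array: sizeof covers the whole array, so this prints 10. */
    printf("%zu\n", sizeof arr1 / sizeof *arr1);

    /* Pointer: sizeof gives the size of the pointer itself,
       so this is typically 8 / 4 = 2 -- not the element count. */
    printf("%zu\n", sizeof arr2 / sizeof *arr2);

    free(arr2);
    return 0;
}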
You can also use flexible array members, as Andreas wrote in his answer.
In C you can use flexible array members. That is, you can write
struct intarray {
    size_t count;
    int data[]; // flexible array member needs to be last
};
You allocate with
size_t count = 100;
struct intarray *arr = malloc( sizeof(struct intarray) + sizeof(int)*count );
arr->count = count;
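A minimal usage sketch, repeating the struct so the snippet stands alone (the helper name make_intarray is made up for illustration):

#include <stdlib.h>

struct intarray {
    size_t count;
    int data[];   // flexible array member needs to be last
};

// Hypothetical helper: header and elements live in one allocation.
static struct intarray *make_intarray(size_t count) {
    struct intarray *arr = malloc(sizeof *arr + sizeof(int) * count);
    if (arr) arr->count = count;
    return arr;
}

int main(void) {
    struct intarray *arr = make_intarray(100);
    if (!arr) return 1;
    for (size_t i = 0; i < arr->count; i++)
        arr->data[i] = (int)i;        // the size travels with the data
    free(arr);
    return 0;
}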
That can be done for all types of data.
It makes the use of C-arrays a bit safer (not as safe as the C++ containers, but safer than plain C arrays).
Unfortunately, C++ does not support this idiom in the standard.
Many C++ compilers provide it as an extension, though, but it is not guaranteed.
On the other hand, this C flexible-array idiom may be more explicit and perhaps more efficient than C++ containers, as it does not use an extra indirection and/or need two allocations (think of new vector<int>).
If you stick to C, I think this is a very explicit and readable way of handling variable length arrays with an integrated size.
The only drawback is that the C++ guys do not like it and prefer C++ containers.
It is not bad (I mean it will not invoke undefined behavior or cause other portability issues) when the elements of the array are integers, but instead of writing the magic number 9 directly you should have the compiler calculate the length of the array to avoid typos.
#include <stdio.h>

int main(void) {
    int arr[9] = { sizeof(arr)/sizeof(*arr), 0, 1, 2, 3, 4, 5, 6, 7 };
    for (int i = 1; i < arr[0]; i++) {
        printf("%d ", arr[i]);
    }
    return 0;
}
Only a few datatypes are suitable for that kind of hack. Therefore, I would advise against it, as this will lead to inconsistent implementation styles across different types of arrays.
A similar approach is used very often with character buffers, where the buffer's actual length is stored at its beginning.
Dynamic memory allocation in C typically uses this approach as well: the allocated block is prefixed with an integer that keeps the size of the allocation.
However, in general this approach is not suitable for arrays. For example, a character array can be much larger than the maximum positive value (127) that can be stored in an object of type char. Moreover, it is difficult to pass a sub-array of such an array to a function; most functions designed to deal with arrays will not work in such a case.
The general approach for declaring a function that deals with an array is to give it two parameters: the first has a pointer type and designates the initial element of the array or sub-array, and the second specifies the number of elements in the array or sub-array.
C also allows you to declare functions that accept variable length arrays whose sizes can be specified at run time.
It is suitable in rather limited circumstances. There are better solutions to the problem it solves.
One problem with it is that if it is not universally applied, then you would have a mix of arrays that used the convention and those that didn't, and you have no way of telling whether an array uses the convention or not. For arrays used to carry strings, for example, you would have to continually pass &arr[1] in calls to the standard string library, or define a new string library that uses "Pascal string" rather than "ASCIZ string" conventions (such a library would be more efficient, as it happens).
In the case of a true array rather than simply a pointer to memory, sizeof(arr) / sizeof(*arr) will yield the number of elements without having to store it in the array in any case.
It only really works for integer-type arrays; for char arrays it would limit the length to a rather small maximum, and it is not practical for arrays of other object types or data structures.
A better solution would be to use a structure:
typedef struct
{
    size_t length ;
    int* data ;
} intarray_t ;
Then:
int data[9] ;
intarray_t array = { sizeof(data) / sizeof(*data), data } ;
Now you have an array object that can be passed to functions and retains the size information, and the data member can be accessed directly for use in third-party or standard library interfaces that do not accept the intarray_t. Moreover, the type of the data member can be anything.
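For example, a function that consumes such an object only needs the one argument (print_intarray is a hypothetical name; the typedef is repeated so the snippet stands alone):

#include <stdio.h>

typedef struct
{
    size_t length ;
    int* data ;
} intarray_t ;

// Hypothetical example: the length always travels with the data.
void print_intarray( const intarray_t* array )
{
    for( size_t i = 0; i < array->length; i++ )
    {
        printf( "%d ", array->data[i] ) ;
    }
    printf( "\n" ) ;
}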
Obviously NO is the answer.
All programming languages have predefined functions or properties stored along with the variable type. Why not use them?
In your case it is more suitable to access a count/length method instead of testing the first value.
An if clause sometimes takes more time than a predefined function.
At first look it seems OK to store the counter, but imagine you will have to update the array. You will have to do 2 operations: one to insert, another to update the counter. So 2 operations means 2 variables to be changed.
For static arrays it might be OK to have the counter before the list, but for dynamic ones NO NO NO.
On the other hand, please read about basic programming concepts and you will find your idea is a bad one, not complying with programming principles.
I'm a little confused as to how to use size_t when other data types like int, unsigned long int and unsigned long long int are present in a program. I will try to illustrate my confusion minimally. Imagine a program where I use
void *calloc(size_t nmemb, size_t size)
to allocate an array (one- or multidimensional). Let the call to calloc() be dependent on mrow and sizeof(unsigned long int). sizeof(unsigned long int) is obviously fine because it returns size_t. But let mrow be such that it needs to have type unsigned long int. What do I do in such a case? Do I cast mrow in the call to calloc() from unsigned long int to size_t?
Another case would be
char *fgets(char *s, int size, FILE *stream)
fgets() expects type int as its second parameter. But what if I pass it an array, let's say save, as its first parameter and use sizeof(save) to pass it the size of the array? Do I cast the result of sizeof() to int? That would be dangerous, since int isn't guaranteed to hold all possible values returned by sizeof().
What should I do in these two cases? Cast, or just ignore possible warnings from tools such as splint?
Here is an example regarding calloc() (I explicitly omit error-checking for clarity!):
long int **arr;
unsigned long int mrow;
unsigned long int ncol;

arr = calloc(mrow, sizeof(long int *));
for (unsigned long int i = 0; i < mrow; i++) {
    arr[i] = calloc(ncol, sizeof(long int));
}
Here is an example for fgets() (error-handling again omitted for clarity!):
char save[22];
char *ptr_save;
unsigned long int mrow;

if (fgets(save, sizeof(save), stdin) != NULL) {
    save[strcspn(save, "\n")] = '\0';
    mrow = strtoul(save, &ptr_save, 10);
}
I'm a little confused as to how to use size_t when other data types like int, unsigned long int and unsigned long long int are present in a program.
It is never a good idea to ignore warnings. Warnings are there to direct your attention to areas of your code that may be problematic. It is much better to take a few minutes to understand what the warning is telling you -- and fix it -- than to get bitten by it later when you hit a corner-case and stumble off into undefined behavior.
size_t itself is just a data type like any other. While its exact width can vary between platforms, it is an unsigned integer type large enough to represent the size of any object (the type exists so that code stays consistent across platforms even though the actual width on each may differ). Your choice of data type is a basic and fundamental part of programming. You choose the type based on the range of values your variable can represent (or should be limited to representing). So if whatever you are dealing with can't be negative, then an unsigned type or size_t is the proper choice. The choice then allows the compiler to help identify areas where your code would cause that to be violated.
When you compile with warnings enabled (e.g. -Wall -Wextra) which you should use on every compile, you will be warned about possible conflicts in your data-type use. (i.e. comparison between signed and unsigned values, etc...) These are important!
Virtually all modern x86 & x86_64 computers use the two's complement representation for signed values. In simple terms it means that if the leftmost bit of a signed number is 1, the value is negative. Herein lie the subtle traps you may fall into when mixing, casting or comparing numbers of varying type. If you choose to cast an unsigned number to a signed number and that number happens to have the most significant bit set, your large number just became a negative one.
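A tiny illustration of that trap (assuming a 32-bit int and two's complement; the conversion of an out-of-range unsigned value to a signed type is implementation-defined):

#include <stdio.h>

int main(void) {
    unsigned int u = 4294967295u;  /* all 32 bits set */
    int s = (int)u;                /* implementation-defined; on a two's
                                      complement machine this becomes -1 */
    printf("%u -> %d\n", u, s);
    return 0;
}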
What should I do in these two cases? Cast, or just ignore possible warnings...
You do what you do each time you are faced with warnings from the compiler. You analyze what is causing the warning, and then you fix it (or, if you can't fix it -- e.g. it comes from some library you don't have access to -- you understand the warning well enough that you can make an educated decision to disregard it, knowing you will not hit any corner-cases that would lead to undefined behavior).
In your examples (while neither should produce a warning, they may on some compilers):
arr = calloc (mrow, sizeof(long int *));
What is the range of sizeof(long int *)? Well -- it's the range of what the pointer size can be. So, what's that? (4 bytes on x86 or 8 bytes on x86_64.) So the range of values is 4-8; yes, that can be properly handled with a cast to size_t if needed, or better, just:
arr = calloc (mrow, sizeof *arr);
Looking at the next example:
char save[22];
...
fgets(save, sizeof(save), stdin)
Here again, what is the possible range of sizeof save? From 22 to 22. So yes, if a warning is produced complaining about the fact that sizeof returns long unsigned while fgets calls for int, 22 can safely be cast to int.
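In code that might look like this (a sketch; the cast is safe only because sizeof save is known to be 22, which certainly fits in an int):

#include <stdio.h>

int main(void) {
    char save[22];
    /* The cast silences a possible size_t -> int warning. */
    if (fgets(save, (int)sizeof save, stdin) != NULL) {
        printf("%s", save);
    }
    return 0;
}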
When to cast size_t
You shouldn't.
Use it where it's appropriate.
(As you already noticed) the libc-library functions tell you where this is the case.
Additionally use it to index arrays.
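For instance, a trivial sketch of a loop indexed with size_t:

#include <stddef.h>

void zero_all(double *values, size_t count) {
    /* size_t is the natural type for array sizes and indices */
    for (size_t i = 0; i < count; i++) {
        values[i] = 0.0;
    }
}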
If in doubt whether the type suits your program's needs, you might go for the useful assert statement as per Steve Summit's answer, and if it fails, start over with your program's design.
More on this here by Dan Saks: "Why size_t matters" and "Further insights into size_t"
My other answer got waaaaaaay too long, so here's a short one.
Declare your variables of natural and appropriate types. Let the compiler take care of most conversions. If you have something that is or might be a size, go ahead and use size_t. (Similarly, if you have something that's involved in file sizes or offsets, use off_t.)
Try not to mix signed and unsigned types.
If you're getting warnings about possible data loss because of larger types getting downconverted to possibly smaller types, and if you can't change the types to make the warnings go away, first (a) convince yourself that the values, in practice, will not ever actually overflow the smaller type, then (b) add an explicit downconversion cast to make the warning go away, and for extra credit (c) add an assertion to document and enforce your assumption:
assert(size_i_need <= SIZE_MAX);
char *buf = malloc((size_t)size_i_need);
In general, you're right, you should not ignore the warnings! And in general, if you can, you should shy away from explicit casts, because they can make your code less reliable, or silence warning which are really trying to tell you something important.
Most of the time, I believe, the compiler should do the right thing for you. For example, malloc() expects a size_t, and the compiler knows from the function prototype that it does, so if you write
int size_i_need = 10;
char *buf = malloc(size_i_need);
the compiler will insert the appropriate conversion from int to size_t, as necessary. (I don't believe I've had warnings here I had to worry about, either.)
If the variables you're using are already unsigned, so much the better!
Similarly, if you were to write
fgets(buf, sizeof(buf), ifp);
the compiler will again insert an appropriate conversion. Here, I guess I see what you're getting at, a 64-bit compiler might emit a warning about the downconversion from long to int. Now that I think about it, I'm not sure why I haven't had that problem, because this is a common idiom.
(You also asked about passing unsigned long to malloc, and on a machine where size_t is smaller than long, I suppose that might get you warnings, too. Is that what you were worried about?)
If you've got a downsize that you can't avoid, and your compiler or some other tool is warning about it, and you want to get rid of the warning safely, you could use a cast and an assertion. That is, if you write
unsigned long long size_i_need = 23;
char *buf = malloc(size_i_need);
this might get a warning on a machine where size_t is 32 bits. So you could silence the warning with a cast (on the assumption that your unsigned long long values will never actually be too big), and then back up your assumption with a call to assert:
unsigned long long size_i_need = 23;
assert(size_i_need <= SIZE_MAX);
char *buf = malloc((size_t)size_i_need);
In my experience, the biggest nuisance is printing these things out. If you write
printf("int size = %d\n", sizeof(int));
or
printf("string length = %d\n", strlen("abc"));
on a 64-bit machine, a modern compiler will typically (and correctly) warn you that "format specifies type 'int' but the argument has type 'unsigned long'", or something to that effect. You can fix this in two ways: cast the value to match the printf format, or change the printf format to match the value:
printf("int size = %d\n", (int)sizeof(int));
printf("string length = %lu\n", strlen("abc"));
In the first case, you're assuming that sizeof's result will fit in an int (which is probably a safe bet). In the second case, you're assuming that size_t is in fact unsigned long, which may be true on a 64-bit compiler but may not be true on some other. So it's actually safer to use an explicit cast in the second case, too:
printf("string length = %lu\n", (unsigned long)strlen("abc"));
The bottom line is that abstract types like size_t don't work so well with printf; this is where we can see that the C++ output style of cout << "string length = " << strlen("abc") << endl has its advantages.
To solve this problem, there are some special printf modifiers that are guaranteed to match size_t and I think off_t and a few other abstract types, although they're not so well known. (I wasn't sure where to look them up, but while I've been composing this answer, some commenters have already reminded me.) So the best way to print one of these things (if you can remember, and unless you're using old compilers) would be
printf("string length = %zu\n", strlen("abc"));
Bottom line:
You obviously don't have to worry about passing plain int or plain unsigned to a function like calloc that expects size_t.
When calling something that might result in a downcast, such as passing a size_t to fgets where size_t is 64 bits but int is 32, or passing an unsigned long long to calloc where size_t is only 32 bits, you might get warnings. If you can't make the passed-in types smaller (which in the general case you're not going to be able to do), you'll have little choice but to insert a cast to silence the warnings. In this case, to be strictly correct, you might want to add some assertions.
With all of that said, I'm not sure I've actually answered your question, so if you'd like further clarification, please ask.
I have an array of unsigned integers in C and a java array of longs. I want to copy the contents of the unsigned integers to the java array. So far, the only function that I've found to do this is SetLongArrayRegion(), but this takes an entire buffer array. Is there a function to set only the individual elements of the java array?
You can also set individual elements of a primitive long array in JNI by writing a region of length 1 with SetLongArrayRegion(). So I believe what you want to have is something like this
unsigned int* cIntegers = getFromSomewhere();
unsigned int elements = getElementCountFromSomewhere(); // sizeof(cIntegers) would only give the size of the pointer

jfieldID jLongArrayId = env->GetFieldID(javaClass, "longArray", "[J");
jlongArray jLongArray = (jlongArray) env->GetObjectField(javaObject, jLongArrayId);
for (unsigned int i = 0; i < elements; ++i) {
    unsigned int cInteger = cIntegers[i];
    jlong cLong = (jlong) doSomehowConvert(cInteger);
    env->SetLongArrayRegion(jLongArray, i, 1, &cLong); // writes the single element at index i
}
if the long array in java is called longArray and the java class is saved in a JNI jclass variable javaClass.
There is a SetObjectArrayElement() function that works on non-primitive types. If you really, really wanted to use this approach, I think you could create an array of Longs. You may still have problems with type conversion, though.
I think your big problem here is that you are trying to cast unsigned integers to Java longs. Java longs are signed 64-bit numbers. Once you have your conversion right, you can create an array of type jlong in C, and then use the SetLongArrayRegion() method to get the numbers back to Java.
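A sketch of that approach in C-style JNI (copy_to_jlong_array and the way the count is obtained are assumptions for illustration; NewLongArray and SetLongArrayRegion are the standard JNI calls):

#include <jni.h>
#include <stdlib.h>

/* Copy 'count' unsigned ints into a new Java long[] with one region call. */
jlongArray copy_to_jlong_array(JNIEnv *env, const unsigned int *values, jsize count) {
    jlongArray result = (*env)->NewLongArray(env, count);
    if (result == NULL) return NULL;           /* OutOfMemoryError is pending */

    jlong *tmp = malloc(sizeof *tmp * (size_t)count);
    if (tmp == NULL) return NULL;

    for (jsize i = 0; i < count; i++) {
        tmp[i] = (jlong)values[i];             /* every unsigned int fits in a signed 64-bit jlong */
    }
    (*env)->SetLongArrayRegion(env, result, 0, count, tmp);
    free(tmp);
    return result;
}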
In C, in a Unix environment (Plan 9), I have an array that serves as memory.
uchar mem[32*1024];
I need that array to contain different fields, such as an int (integer) to indicate the size of the memory that is free and available. So, I've tried this:
uchar* memp=mem;
*memp=(int)250; //An example of size I want to assign.
I know the size of an int is 4, so I have to force, with a cast or something like that, the content of the first four slots of mem to hold the number 250 in this case; it's big endian.
But the problem is that when I try to do what I've explained, it doesn't work. I suppose there is a mistake in the conversion of types. So I ask you: how could I force mem[0] to mem[3] to hold the size indicated, represented as an int and not as an uchar?
Thanks in advance
Like this:
*((int*) memp) = 250;
That says "Even though memp is a pointer to characters, I want you treat it as a pointer to integers, and put this integer where it points."
Have you considered using a union, as in:
union mem_with_size {
    int size;
    uchar mem[32*1024];
};
Then you don't have to worry about the casting. (You still have to worry about byte-ordering, of course, but that's a different issue.)
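A brief usage sketch of the union approach (uchar is assumed to be a typedef for unsigned char, as it is on Plan 9):

#include <stdio.h>

typedef unsigned char uchar;   /* assumed typedef, as on Plan 9 */

union mem_with_size {
    int size;
    uchar mem[32*1024];
};

int main(void) {
    union mem_with_size m;
    m.size = 250;   /* the first sizeof(int) bytes of m.mem now hold 250 */
    /* Which of those bytes holds 250 depends on endianness, as noted above. */
    printf("%d %d %d %d\n", m.mem[0], m.mem[1], m.mem[2], m.mem[3]);
    return 0;
}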
As others have pointed out, you need to cast to a pointer to int. You also need to make sure you take the alignment of the pointer into consideration: on many architectures, an int needs to start at a memory location that is divisible by sizeof(int), and if you try to access an unaligned int, you get a SIGBUS. On other architectures it works, but slowly. On yet others, it works quickly.
A portable way of doing this might be:
int x = 250;
memcpy(mem + offset, &x, sizeof(x));
Using unions may make this easier, though, so +1 to JamieH.
Cast the pointer to int *, not to unsigned char again!
int *memp = (int *)mem;
*memp = 250; // An example of the size I want to assign.