Copying C Array into Java Array Using JNI

I have an array of unsigned integers in C and a java array of longs. I want to copy the contents of the unsigned integers to the java array. So far, the only function that I've found to do this is SetLongArrayRegion(), but this takes an entire buffer array. Is there a function to set only the individual elements of the java array?

JNI has no per-element setter for primitive arrays, but you can set an individual element by calling SetLongArrayRegion() with a region length of 1. So I believe what you want to have is something like this:

unsigned int* cIntegers = getFromSomewhere();
jsize elements = getElementCountFromSomewhere(); /* note: sizeof(cIntegers)/sizeof(int) would only measure the pointer */
jfieldID jLongArrayId = env->GetFieldID(javaClass, "longArray", "[J");
jlongArray jLongArray = (jlongArray) env->GetObjectField(javaObject, jLongArrayId);
for (jsize i = 0; i < elements; ++i) {
    jlong cLong = (jlong) cIntegers[i]; /* an unsigned int always fits in the signed 64-bit jlong */
    env->SetLongArrayRegion(jLongArray, i, 1, &cLong);
}

if the long array in Java is called longArray and the Java class is saved in a JNI jclass variable javaClass.

There is a SetObjectArrayElement() function that works on object (non-primitive) types. If you really, really wanted to use this approach, I think you could create an array of Longs. You may still have problems with type conversion though.
I think your big problem here is that you are trying to cast unsigned integers to Java longs. Java longs are signed 64-bit numbers, so every unsigned 32-bit value actually fits. Once you have your conversion right, you can create an array of type jlong in C, and then use the SetLongArrayRegion() method to get the numbers back to Java.
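If you take the buffer route, here is a minimal sketch in C-style JNI; the helper name copyToJavaLongs and the assumption that the element count is already known are made up for illustration:

#include <jni.h>
#include <stdlib.h>

/* Hypothetical helper: copy 'count' unsigned ints into an existing Java long[]. */
static void copyToJavaLongs(JNIEnv *env, jlongArray dest,
                            const unsigned int *src, jsize count) {
    jlong *buf = malloc((size_t) count * sizeof *buf);
    if (buf == NULL) return;      /* real code should report the failure */
    for (jsize i = 0; i < count; ++i)
        buf[i] = (jlong) src[i];  /* unsigned 32-bit values always fit in a signed 64-bit jlong */
    (*env)->SetLongArrayRegion(env, dest, 0, count, buf);
    free(buf);
}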

Related

Cast elements during copying

I just solved a C task and now wonder if there is a quick way to copy an array while casting the data type. Specifically, I had an array of integer values and wanted to copy this into a new array with long long int values. The function memcpy copies only bytes without considering data types. I have now solved this with a loop and wonder if there is a faster way.
void myfunction(int arr_count, int* arr) {
    long long int arr_max[arr_count];
    long long int arr_min[arr_count];
    for (int j = 0; j < arr_count; j++) {
        arr_max[j] = (long long int) *arr;
        arr_min[j] = (long long int) *arr;
    }
}
memcpy copies raw bytes and performs no type conversion, so it won't work here. I'm not quite sure why you wish to change the type on the fly at run-time - it is more sensible to use the most suitable type to begin with.
That being said, the code here does what you ask (except it should naturally say arr[j]), although the cast is strictly speaking not necessary, since the integer will get converted to the type of the left operand during assignment.
You can't really optimize this code further without a specific system and use-case in mind. Though as already mentioned, the best optimization is to pick the correct type to begin with and then don't copy a thing.
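For reference, a minimal corrected sketch of the loop from the question, with arr[j] instead of *arr (the explicit cast is kept only for clarity, since the assignment converts anyway):

void myfunction(int arr_count, const int *arr) {
    long long int arr_max[arr_count];  /* VLA, as in the question */
    for (int j = 0; j < arr_count; j++) {
        arr_max[j] = (long long int) arr[j];
    }
}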

Is it good programming practice in C to use first array element as array length?

Because in C the array length has to be stated when the array is defined, would it be acceptable practice to use the first element as the length, e.g.
int arr[9]={9,0,1,2,3,4,5,6,7};
Then use a function such as this to process the array:
void printarr(int *ARR) {
    for (int i = 1; i < ARR[0]; i++) {
        printf("%d ", ARR[i]);
    }
}
I can see no problem with this but would prefer to check with experienced C programmers first. I would be the only one using the code.
Well, it's bad in the sense that you have an array whose elements do not all mean the same thing. Storing metadata with the data is not a good thing. Just to extrapolate your idea a little bit: we could use the first element to denote the element size and then the second for the length. Try writing a function utilizing both ;)
It's also worth noting that with this method you will have problems if the array is bigger than the maximum value an element can hold, which for char arrays is a very significant limitation. Sure, you can solve it by using the first two elements. And you can also use casts if you have floating-point arrays. But I can guarantee you that you will run into hard-to-trace bugs due to this. Among other things, endianness could cause a lot of issues.
And it would certainly confuse virtually every seasoned C programmer. This is not really a logical argument against the idea as such, but rather a pragmatic one. Even if this was a good idea (which it is not) you would have to have a long conversation with EVERY programmer who will have anything to do with your code.
A reasonable way of achieving the same thing is using a struct.
struct container {
    int *arr;
    size_t size;
};

int arr[10];
struct container c = { .arr = arr, .size = sizeof arr / sizeof *arr };
But in any situation where I would use something like above, I would probably NOT use arrays. I would use dynamic allocation instead:
const size_t size = 10;
int *arr = malloc(sizeof *arr * size);
if(!arr) { /* Error handling */ }
struct container c = { .arr = arr, .size = size };
However, do be aware that if you initialize the size this way when arr is a pointer instead of an array, you're in for "interesting" results: sizeof arr / sizeof *arr would then measure the pointer, not the allocation.
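To show what the struct buys you, a function can now take the container and always know how many elements it may touch (a small sketch; the function name is made up):

#include <stdio.h>

void print_all(struct container c) {
    for (size_t i = 0; i < c.size; i++)  /* the size travels with the data */
        printf("%d ", c.arr[i]);
    printf("\n");
}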
You can also use flexible arrays, as Andreas wrote in his answer
In C you can use flexible array members. That is, you can write

struct intarray {
    size_t count;
    int data[]; // flexible array member needs to be last
};
You allocate with
size_t count = 100;
struct intarray *arr = malloc( sizeof(struct intarray) + sizeof(int)*count );
arr->count = count;
That can be done for all types of data.
It makes the use of C-arrays a bit safer (not as safe as the C++ containers, but safer than plain C arrays).
Unfortunately, C++ does not support this idiom in the standard.
Many C++ compilers provide it as an extension, but it is not guaranteed.
On the other hand, this C flexible-array idiom may be more explicit and perhaps more efficient than C++ containers, as it does not use an extra indirection and/or need two allocations (think of new vector<int>).
If you stick to C, I think this is a very explicit and readable way of handling variable length arrays with an integrated size.
The only drawback is that the C++ guys do not like it and prefer C++ containers.
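Usage is then ordinary array access, with the count always travelling with the data (a sketch continuing the allocation above):

for (size_t i = 0; i < arr->count; i++)
    arr->data[i] = (int) i;  /* the size is always available as arr->count */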
It is not bad (I mean it will not invoke undefined behavior or cause other portability issues) when the elements of the array are integers, but instead of writing the magic number 9 directly you should let the compiler calculate the length of the array to avoid typos.
#include <stdio.h>

int main(void) {
    int arr[9] = {sizeof(arr) / sizeof(*arr), 0, 1, 2, 3, 4, 5, 6, 7};
    for (int i = 1; i < arr[0]; i++) {
        printf("%d ", arr[i]);
    }
    return 0;
}
Only a few datatypes are suitable for that kind of hack. Therefore, I would advise against it, as this will lead to inconsistent implementation styles across different types of arrays.
A similar approach is used very often with character buffers, where the buffer's actual length is stored at its beginning.
Dynamic memory allocation in C typically uses this approach too: the allocated memory is prefixed with an integer that keeps the size of the allocation.
However, in general this approach is not suitable for arrays. For example, a character array can be much larger than the maximum positive value (127) that can be stored in an object of type char. Moreover, it is difficult to pass a sub-array of such an array to a function. Most functions designed to deal with arrays will not work in such a case.
A general approach to declaring a function that deals with an array is to declare two parameters: the first has a pointer type and specifies the initial element of the array or sub-array, and the second specifies the number of elements in the array or sub-array.
C also allows you to declare functions that accept variable length arrays, whose sizes can be specified at run-time.
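For example, the size parameters can be declared before the array parameter and then used in its declaration (a sketch of what C99 allows):

#include <stdio.h>

void print_matrix(size_t rows, size_t cols, int m[rows][cols]) {
    for (size_t r = 0; r < rows; r++) {
        for (size_t c = 0; c < cols; c++)
            printf("%d ", m[r][c]);
        printf("\n");
    }
}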
It is suitable in rather limited circumstances. There are better solutions to the problem it solves.
One problem with it is that if it is not universally applied, you end up with a mix of arrays that use the convention and arrays that don't, with no way of telling which is which. For arrays used to carry strings, for example, you would have to continually pass &arr[1] in calls to the standard string library, or define a new string library that uses "Pascal string" rather than "ASCIZ string" conventions (such a library would be more efficient, as it happens).
In the case of a true array, rather than simply a pointer to memory, sizeof(arr) / sizeof(*arr) will yield the number of elements without having to store it in the array in any case.
It only really works for integer-type arrays, and for char arrays it would limit the length to a rather small value. It is not practical for arrays of other object types or data structures.
A better solution would be to use a structure:
typedef struct
{
    size_t length;
    int *data;
} intarray_t;
Then:
int data[9];
intarray_t array = { sizeof(data) / sizeof(*data), data };
Now you have an array object that can be passed to functions and retains the size information, and the data member can be accessed directly for use in third-party or standard library interfaces that do not accept the intarray_t. Moreover, the type of the data member can be anything.
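For instance, the data member can be handed straight to a standard library routine such as qsort() (a sketch; the comparator is made up here):

#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);  /* avoids the overflow risk of x - y */
}

/* ... */
qsort(array.data, array.length, sizeof *array.data, cmp_int);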
Obviously NO is the answer.
Most languages store the length along with the array and expose it through a predefined count/length function. Why not use that?
In your case it is more suitable to access the count/length mechanism than to test the first value.
An if clause can sometimes take more time than a predefined function.
At first look it seems OK to store the counter, but imagine you have to update the array: you then need two operations, one to insert the element and one to update the counter. So one change means two variables to keep in sync.
For static arrays it might be OK to keep the counter before the list, but for dynamic ones NO NO NO.
On the other hand, please read up on basic programming concepts and you will find this idea to be a bad one, not complying with programming principles.

How to transform a double into an array of bytes in C

I am writing a program in C that runs on a 32-bit microcontroller. I have a double and I want to transform it into an array of bytes, which will later be sent to another microcontroller. But I do not know which method I should use to perform this task. Any suggestion?
The problem is that in C a function cannot return an array. So I would have to define an array outside the function and then fill it inside the function, which does not satisfy my needs.
You can use type-punning through a union:
union {
    double d;
    unsigned char bytes[sizeof(double)];
} d2b;

d2b.d = 3.14;
You can access the bytes through d2b.bytes. Note that this assumes both micro-controllers use the same (internal) representation for doubles. If not, use some sort of serialization.
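For example, to inspect the bytes before sending them (a sketch; the order you see depends on the machine's byte order):

/* assumes <stdio.h>; d2b is the union filled in above */
for (size_t i = 0; i < sizeof d2b.bytes; i++)
    printf("%02x ", d2b.bytes[i]);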
Edit:
Seeing your edit. You could also just memcpy and 'return' it through an output-parameter:
#include <string.h>

unsigned char *doubleToBytes(unsigned char bytes[sizeof(double)], double d) {
    return memcpy(bytes, &d, sizeof d);
}
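Calling it then looks like this (sketch):

unsigned char bytes[sizeof(double)];
doubleToBytes(bytes, 3.14);
/* 'bytes' now holds the object representation of the double, ready to transmit */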

C memory allocation and arrays of arrays

I am currently writing a terminal-based hex editor, and I have a few questions regarding memory allocation.
To track the changes the user has made, I write them to an array of arrays like so, where [i][0] is the absolute offset of the change from the beginning of the file and [i][1] is the change itself:
unsigned long long writebuffer[10000][2];
but I have two problems with this. The first column (writebuffer[i][0]) NEEDS to be the size of an unsigned long long, but the second one ([i][1]) can be as small as an unsigned char. Is it possible to do something like this?
Also, can I dynamically allocate the first index of writebuffer, so that I wouldn't initialize it like above but more like:
unsigned long long **writebuffer;
and then grow the first index with malloc() and realloc(), while the second index would stay 2 but have the size of an unsigned char?
Why not use a struct?
typedef struct {
    long long offset;
    int change; /* or unsigned short, or whatever you feel is right */
} t_change;
Be aware that the struct will likely get padded by the compiler to a different size if you choose to use unsigned char for the change element. What it gets padded to depends on your compiler, the compiler settings, and the target architecture.
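To cover the dynamic-allocation half of the question, an array of t_change can be grown with malloc() and realloc(). A sketch, assuming <stdlib.h>; the growth policy is an arbitrary choice here:

size_t capacity = 1024;
size_t used = 0;
t_change *writebuffer = malloc(capacity * sizeof *writebuffer);
if (writebuffer == NULL) { /* handle allocation failure */ }

/* when full, double the capacity */
if (used == capacity) {
    t_change *tmp = realloc(writebuffer, 2 * capacity * sizeof *writebuffer);
    if (tmp != NULL) { writebuffer = tmp; capacity *= 2; }
}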
You may define an array of void pointers:
void **writebuffer;
and then allocate and use each element as you like, for example:
writebuffer = malloc(10000 * sizeof *writebuffer); /* room for the pointers themselves */
writebuffer[0] = malloc(sizeof(char));

Length of an `int` array in Objective C

I am passing an Integer array to a function.
Next I want to traverse through the array.
In C and C++ this worked by simply using arrayname.length, which gave the number of elements in the array. What is the way to find that in Objective-C?
[NSArrayObject length] works for the NSArray type, but I want it for int[]. Not even [XYZ count] works (an already-suggested answer), so I'm looking for another way here.
You can use [XYZ count] to get the length of an NSArray, but that does not apply to a plain C int[].
There isn't anything specific to Objective-C with an array of ints. You would use the same technique as if you were using C.
sz = (sizeof foo) / (sizeof foo[0]);
There is no such thing as array.length in C. An int array in Objective-C is exactly the same as an int array in C. If it's statically defined like int foo[5], then you can do sizeof(foo) to get the size — but only in the same function foo is defined in (to other functions, it's just an int pointer, not an array per se). Otherwise, there is no inherent way to get this information. You need to either pass the size around or use sentinel values (like the terminating '\0' in C strings, which are just arrays of char).
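A quick sketch of that pitfall: inside f() the parameter has decayed to a pointer, so sizeof no longer reflects the array:

#include <stdio.h>

void f(int *foo) {               /* 'int foo[]' would decay the same way */
    printf("%zu\n", sizeof foo); /* size of a pointer, e.g. 8 */
}

int main(void) {
    int foo[5];
    printf("%zu\n", sizeof foo); /* 5 * sizeof(int), e.g. 20 */
    f(foo);
    return 0;
}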
Huh? In C, there's no such thing as "arrayname.length". An array of a primitive type, such as int[], does not have any length information associated with it at runtime.
[array count] appears to work the easiest in Objective-C, but only for NSArray objects, not for a plain int[].
This code can be used when the total number of elements in the array is not known:

#include <stdio.h>

int main(void)
{
    int a[] = {1, 2, 3, 4, 5, 6, 7};
    for (size_t i = 0; i < sizeof(a) / sizeof(a[0]); i++)
    {
        printf("%d\n", a[i]);
    }
    return 0;
}
There is no such thing as arrayname.length in C. That is why so many functions take argument pairs: one argument with the array and one argument with the length of the array. The most obvious case of this is the main function. You can find this function in all your iPhone projects in a file named main.m, and it will look something like this:
int main(int argc, char *argv[]) {
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    int retVal = UIApplicationMain(argc, argv, nil, nil);
    [pool release];
    return retVal;
}
Notice how the argv argument is a vector, or array of strings, and the argc argument is a count of how many items are in this array.
You will have to do the same thing for all primitive types; this is the C part of Objective-C. If you stay in the Objective-C parts, using NSArray or its subclasses works fine, but requires all elements to be objects.
Looks like a job for NSMutableArray. Is there a reason why you need to work with a C array? If not, consider NSMutableArray.
I may be wrong (so please correct me if I am), but there's no easy way in C to get the size of an int[] at runtime. You would have to do something like create a struct to hold your array and an int where you keep track of how many items are in your array yourself.
Even fancier, you can make your struct hold your array and a block; then you can design the block to do the tracking for you.
But this is more a C question than an Objective-C question.
If you want an array of integers, wrap them in NSNumber objects and place them into an NSArray.
What you are asking to do is not really supported well in C, though there is a method on the Mac you could try (malloc_size):
determine size of dynamically allocated memory in c
You should try the following approach:
float ptArray[] = {97.8, 92.3, 89.4, 85.7, 81.0, 73.4, 53.0, 42.3, 29.4, 14.1};
int x = sizeof(ptArray) / sizeof(ptArray[0]);
NSLog(@"array count = %d", x);
Output:
array count = 10
