What is the main difference between these two:
val array: Array<Double> = arrayOf()
vs
val array: DoubleArray = doubleArrayOf()
I know that one of them uses the primitive type double and the other its boxed counterpart, Double.
Is there any penalty or disadvantage in using plain DoubleArray?
Why I want to know:
I am using JNI, and for Double I have to call:
jclass doubleClass = env->FindClass("java/lang/Double");
jmethodID doubleCtor = env->GetMethodID(doubleClass, "<init>", "(D)V");
jobjectArray res = env->NewObjectArray(elementCount, doubleClass, nullptr);
for (int i = 0; i < elementCount; i++) {
    jobject javaDouble = env->NewObject(doubleClass, doubleCtor, array[i]);
    env->SetObjectArrayElement(res, i, javaDouble);
    env->DeleteLocalRef(javaDouble);
}
vs
jdoubleArray res = env->NewDoubleArray(elementCount);
env->SetDoubleArrayRegion(res, 0, elementCount, array);
There is no penalty; in fact, it will be faster due to the absence of boxing. But, as with primitive types in Java, it forces you to create specialized overloads of certain methods if you want to be able to use them with [Int/Double/etc]Array.
This has actually been discussed over at the Kotlin forums:
the memory layout of an array of integers is quite different from that of an array of object pointers.
Norswap's comment in that discussion summarizes the tradeoff quite well:
The native one [int[]/IntArray] is faster to read/write, but the wrapped [Integer[]/Array<Int>] does not need to be entirely converted each time it crosses a generic boundary.
#7, norswap
For example, a function accepting Array<Int> (Integer[] on the JVM) will not accept an IntArray (int[]).
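On the JVM, this shows up as two unrelated array types that need separate overloads; a minimal Java sketch (class and method names are illustrative):

```java
public class ArrayKinds {
    // Overload for the primitive array -- what Kotlin's DoubleArray compiles to.
    static double sum(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s;
    }

    // Overload for the boxed array -- what Kotlin's Array<Double> compiles to.
    static double sum(Double[] xs) {
        double s = 0;
        for (Double x : xs) s += x;   // each element is unboxed here
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sum(new double[]{1.0, 2.0, 3.0})); // prints 6.0
        System.out.println(sum(new Double[]{1.0, 2.0, 3.0})); // prints 6.0
    }
}
```

Neither overload accepts the other's argument type; converting between them (e.g. Kotlin's toTypedArray()/toDoubleArray()) copies the whole array.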
You have already listed the only real difference: one is compiled to the primitive double[] and the other to Double[]. However, Double[] is an array of objects, so any time you set a double value in the array, or retrieve one, boxing or unboxing is performed, respectively.
It is usually recommended to use DoubleArray instead, for speed and memory reasons.
As an example of speed penalties due to the object wrappers, take a look at the start of this post, taken from Effective Java:
public static void main(String[] args) {
    Long sum = 0L; // uses Long, not long
    for (long i = 0; i <= Integer.MAX_VALUE; i++) {
        sum += i;
    }
    System.out.println(sum);
}
Replacing Long with long brings runtime from 43 seconds down to 8 seconds.
(There are some edits below.)
Well, I wrote essentially the same code in Swift and in C: a program that finds prime numbers and prints them.
I expected the Swift code to be much faster than the C program, but it isn't.
Is there any reason the Swift code is so much slower than the C code?
When finding primes up to the 4000th, the C program finished the calculation in only one second.
But Swift finished in 38.8 seconds.
That's much, much slower than I expected.
Here is the code I wrote.
Is there any way to speed up the Swift code?
(Sorry for the Japanese text in the code.)
Swift
import CoreFoundation
/*
var calendar = Calendar.current
calender.locale = .init(identifier: "ja.JP")
*/
var primeCandidate: Int
var prime: [Int] = []
var countMax: Int
print("いくつ目まで?(最小2、最大100000まで)\n→ ", terminator: "")
countMax = Int(readLine()!)!
var flagPrint: Int
print("表示方法を選んでください。(1:全て順番に表示、2:\(countMax)番目の一つだけ表示)\n→ ", terminator: "")
flagPrint = Int(readLine()!)!
prime.append(2)
prime.append(3)
var currentMaxCount: Int = 2
var numberCount: Int
primeCandidate = 4
var flag: Int = 0
var ix: Int
let startedTime = clock()
//let startedTime = time()
//.addingTimeInterval(0.0)
while currentMaxCount < countMax {
    for ix in 2..<primeCandidate {
        if primeCandidate % ix == 0 {
            flag = 1
            break
        }
    }
    if flag == 0 {
        prime.append(primeCandidate)
        currentMaxCount += 1
    } else if flag == 1 {
        flag = 0
    }
    primeCandidate += 1
}
let endedTime = clock()
//let endedTime = Time()
//.timeIntervalSince(startedTime)
if flagPrint == 1 {
    print("計算された素数の一覧:", terminator: "")
    let completedPrimeNumber = prime.map {
        $0
    }
    print(completedPrimeNumber)
    //print("\(prime.map)")
    print("\n\n終わり。")
} else if flagPrint == 2 {
    print("\(currentMaxCount)番目の素数は\(prime[currentMaxCount - 1])です。")
}
print("\(countMax)番目の素数まで計算。")
print("計算経過時間: \(round(Double((endedTime - startedTime) / 100000)) / 10)秒")
C
#include <stdio.h>
#include <time.h> // 経過時間計算のため (for elapsed-time measurement)
int main(void)
{
    int primeCandidate;
    unsigned int prime[100000];
    int countMax;
    printf("いくつ目まで?(最小2、最大100000まで)\n→ ");
    scanf("%d", &countMax);
    int flagPrint;
    printf("表示方法を選んでください。(1:全て順番に表示、2:%d番目の一つだけ表示)\n→ ", countMax);
    scanf("%d", &flagPrint);
    prime[0] = 2;
    prime[1] = 3;
    int currentMaxCount = 2;
    int numberCount;
    primeCandidate = 4;
    int flag = 0;
    int ix;
    int startedTime = time(NULL);
    for(; currentMaxCount < countMax; primeCandidate++){
        /*
        for(numberCount = 0; numberCount < currentMaxCount - 1; numberCount++){
            if(primeCandidate % prime[numberCount] == 0){
                flag = 1;
                break;
            }
        }
        */
        for(ix = 2; ix < primeCandidate; ++ix){
            if(primeCandidate % ix == 0){
                flag = 1;
                break;
            }
        }
        if(flag == 0){
            prime[currentMaxCount] = primeCandidate;
            currentMaxCount++;
        } else if(flag == 1){
            flag = 0;
        }
    }
    int endedTime = time(NULL);
    if(flagPrint == 1){
        printf("計算された素数の一覧:");
        for(int i = 0; i < currentMaxCount - 1; i++){
            printf("%d, ", prime[i]);
        }
        printf("%d.\n\n終わり", prime[currentMaxCount - 1]);
    } else if(flagPrint == 2){
        printf("%d番目の素数は「%d」です。\n", currentMaxCount, prime[currentMaxCount - 1]);
    }
    printf("%d番目の素数まで計算", countMax);
    printf("計算経過時間: %d秒\n", endedTime - startedTime);
    return 0;
}
**Edit**
I found one of the reasons myself.
for ix in 0..<currentMaxCount - 1 {
    if primeCandidate % prime[ix] == 0 {
        flag = 1
        break
    }
}
My original code tested divisibility against every number, which was a mistake.
But even after fixing it with the code above, Swift finished the calculation in 4.7 seconds.
That's still about 4 times slower than C.
The fundamental cause
As with most of these "why does this same program in 2 different languages perform differently?", the answer is almost always: "because they're not the same program."
They might be similar in high-level intent, but they're implemented differently enough that you can distinguish their performance.
Sometimes they're different in ways you can control (e.g. you use an array in one program and a hash set in the other) or sometimes in ways you can't (e.g. you're using CPython and you're experiencing the overhead of interpretation and dynamic method dispatch, as compared to compiled C function calls).
Some example differences
In this case, there's a few notable differences I can see:
The prime array in your C code uses unsigned int, which is typically akin to UInt32. Your Swift code uses Int, which is typically equivalent to Int64. It's twice the size, which doubles memory usage and decreases the efficacy of the CPU cache.
Your C code pre-allocates the prime array on the stack, whereas your Swift code starts with an empty Array, and repeatedly grows it as necessary.
Your C code doesn't pre-initialize the contents of the prime array. Any junk that might be leftover in the memory is still there to be observed, whereas the Swift code will zero-out all the array memory before use.
All Swift arithmetic operations are checked for overflow. This introduces a branch within every single +, %, etc. That's good for program safety (overflow bugs will never be silent and will always be detected), but sub-optimal in performance-critical code where you're certain that overflow is impossible. There are unchecked variants of these operators, such as &+, &-, and &*.
The general trend
In general, you'll notice a trend that Swift optimizes for safety and developer experience, whereas C optimizes for being close to the hardware. Swift optimizes for allowing the developer to express their intent about the business logic, whereas C optimizes for allowing the developer to express their intent about the final machine code that runs.
There are typically "escape hatches" in Swift that let you sacrifice safety or convenience for C-like performance. This sounds bad, but arguably, you can view C as exclusively using these escape hatches: there's no Array, Dictionary, automatic reference counting, Sequence algorithms, etc. For example, what Swift calls UnsafePointer is just a "pointer" in C. "Unsafe" comes with the territory.
Improving the performance
You could get pretty far toward performance parity by:
Pre-allocating a sufficiently large array with [Array.reserveCapacity(_:)](https://developer.apple.com/documentation/swift/array/reservecapacity(_:)). See this note in the Array documentation:
Growing the Size of an Array
Every array reserves a specific amount of memory to hold its contents. When you add elements to an array and that array begins to exceed its reserved capacity, the array allocates a larger region of memory and copies its elements into the new storage. The new storage is a multiple of the old storage’s size. This exponential growth strategy means that appending an element happens in constant time, averaging the performance of many append operations. Append operations that trigger reallocation have a performance cost, but they occur less and less often as the array grows larger.
If you know approximately how many elements you will need to store, use the reserveCapacity(_:) method before appending to the array to avoid intermediate reallocations. Use the capacity and count properties to determine how many more elements the array can store without allocating larger storage.
For arrays of most Element types, this storage is a contiguous block of memory. For arrays with an Element type that is a class or #objc protocol type, this storage can be a contiguous block of memory or an instance of NSArray. Because any arbitrary subclass of NSArray can become an Array, there are no guarantees about representation or efficiency in this case.
Use UInt32 or Int32 instead of Int.
If necessary, drop down to UnsafeMutableBufferPointer<UInt32> instead of Array<UInt32>. This is closer to the simple pointer implementation used in your C example.
You can use unchecked arithmetic operators like &+, &-, &* and so on. Obviously, you should only do this when you're absolutely certain that overflow is impossible. Given how many thousands of silent overflow-related bugs have come and gone, this is almost always a bad bet, but the loaded gun is available for you if you insist.
These aren't things you should generally do. They're merely possibilities that exist if they're necessary to improve performance of critical code.
For example, the Swift convention is to use Int unless you have a good reason to use something else: Array.count returns an Int, even though it can never be negative and is unlikely ever to exceed UInt32.max.
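A minimal sketch of how these suggestions might combine (Int32 storage, reserveCapacity, unchecked operators, and dividing only by the primes found so far, as in the question's edit; all names are illustrative):

```swift
var primes: [Int32] = []
primes.reserveCapacity(100_000)   // avoid intermediate reallocations
primes.append(2)

var candidate: Int32 = 3
while primes.count < 1_000 {
    var isPrime = true
    for p in primes {
        if p &* p > candidate { break }               // &* is the unchecked multiply
        if candidate % p == 0 { isPrime = false; break }
    }
    if isPrime { primes.append(candidate) }
    candidate &+= 2                                    // &+ likewise skips the overflow check
}
print(primes[999])                                     // prints 7919, the 1000th prime
```

Stopping the trial division at the square root of the candidate is itself a large algorithmic win, independent of any of the language-level tweaks.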
You've forgotten to turn on the optimizer. Swift is much slower than C without optimization, but on things like this it is roughly the same when optimized:
➜ x swift -O prime.swift
いくつ目まで?(最小2、最大100000まで)
→ 40000
表示方法を選んでください。(1:全て順番に表示、2:40000番目の一つだけ表示)
→ 2
40000番目の素数は479909です。
40000番目の素数まで計算。
計算経過時間: 5.9秒
➜ x clang -O3 prime.c && ./a.out
いくつ目まで?(最小2、最大100000まで)
→ 40000
表示方法を選んでください。(1:全て順番に表示、2:40000番目の一つだけ表示)
→ 2
40000番目の素数は「479909」です。
40000番目の素数まで計算計算経過時間: 6秒
This is without doing any work to improve your code (probably the most significant change would be to pre-allocate the buffer as you do in C, but in practice that doesn't matter much here).
I have a lot of constants that can be separated into groups, so I used some static const arrays of doubles.
But I need to do some calculations with these arrays, so I created another array that stores the calculated values, because I use them a lot.
However, I index the arrays a great deal, so this gets very slow too; my code by now is O(6^n²), with n between 1 and 12.
My question is: which is faster, repeating the same calculations many times, or indexing the array that stores the calculated values?
I thought about using many #defines (because I know they're preprocessed), but defines can't be indexed, which would make the code extremely big and unclear.
short code (example)
const static double array1[12] = {2,4,6,8,...};
const static double array2[12] = {1,2,3,4,...};
...
// in some function
{
    ...
    double stored1[12];
    for (int i = start1; i < end1; i++)
        stored1[i] = array1[i] + i*array1[i-1];
    for (int i = start2; i < end2; i++)
        stored1[i] = array2[i] + i*array2[i-1];
    // Then I have to index stored1 many times (or create 12 auxiliary variables)
    // whenever I need these stored values.
    // I use them in loops of loops of loops (they are summations).
    // I actually use recursive functions rather than loops, but either way
    // I index this array (or redo the same calculations) many times.
    ...
}
My current code is like:
double[][][] result = new double[1000][][];
for (int i = 0; i < result.length; i++){
result[i] = somemethod();
}
Now I want to use a stream to do the same thing. I did a lot of research, but could not find out how to fill the first dimension with the return value of another method.
For your task, the 3D nature of the array is irrelevant. You can replace double[][] with X in your mind and follow the same generic steps, you would do when producing the array X[]:
X[] array = IntStream.range(0, size).mapToObj(i -> somemethod()).toArray(X[]::new);
Using double[][] for X and 1000 for size yields:
double[][][] result
= IntStream.range(0, 1000).mapToObj(i -> somemethod()).toArray(double[][][]::new);
If somemethod() is expensive, but has no interference (so multiple concurrent calls are safe), you can use .parallel() with the stream. You could also use
double[][][] result = new double[1000][][];
Arrays.parallelSetAll(result, i -> somemethod());
in that case.
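Putting the stream version together with a stub somemethod() (the stub body is illustrative):

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class Fill3D {
    // Hypothetical stand-in for the real somemethod().
    static double[][] somemethod() {
        return new double[][] {{1.0, 2.0}, {3.0, 4.0}};
    }

    public static void main(String[] args) {
        // Fill the first dimension with 1000 results of somemethod().
        double[][][] result = IntStream.range(0, 1000)
                .mapToObj(i -> somemethod())
                .toArray(double[][][]::new);

        System.out.println(result.length);                  // prints 1000
        System.out.println(Arrays.deepToString(result[0])); // prints [[1.0, 2.0], [3.0, 4.0]]
    }
}
```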
There is some pseudocode that I want to implement in C, but I am unsure how to implement part of it. The pseudocode is:
for every pair of states qi and qj, i < j, do
    D[i,j] := 0
    S[i,j] := notzero
end for
i and j in qi and qj are subscripts.
How do I represent D[i,j] or S[i,j]? Which data structure should I use so that it's simple and fast?
You can use something like
int length = 10;
int i = 0, j = 0;
int res1[10][10] = {0}; // dimensions based on the "length" value
int res2[10][10] = {0}; // dimensions based on the "length" value
and then
for (i = 0; i < length; i++)
{
    for (j = 0; j < length; j++)
    {
        res1[i][j] = 0;
        res2[i][j] = 1; // notzero
    }
}
Here D[i,j] and S[i,j] are represented by res1[10][10] and res2[10][10], respectively. These are called two-dimensional arrays.
I guess a struct will be your friend here, depending on what you actually want to work with.
A struct would be fine if, say, a pair of states forms some kind of entity.
Otherwise, you could use a two-dimensional array.
After the accepted answer:
Depending on coding goals and platform, to get "simple and fast", a pointer-to-pointer-to-number may be faster than a 2-D array in C.
// 2-D array
double x[MAX_ROW][MAX_COL];
// Code computes the address in `x`, often involving a i*MAX_COL, if not in a loop.
// Slower when multiplication is expensive and random array access occurs.
x[i][j] = f();
// pointer to pointer of double
double **y = calloc(MAX_ROW, sizeof *y);
for (i=0; i<MAX_ROW; i++) y[i] = calloc(MAX_COL, sizeof *(y[i]));
// Code computes the address in `y` by a lookup of y[i]
y[i][j] = f();
Flexibility
The first form makes print(x) easy when the array size is fixed, but becomes challenging otherwise.
The second form makes print(y, rows, columns) easy when the array size is variable, and of course it works well with fixed sizes too.
The second form also allows row swapping simply by swapping pointers.
So if the code uses a fixed array size, use double x[MAX_ROW][MAX_COL]; otherwise I recommend double **y. YMMV
I'm new to C from many years of Matlab for numerical programming. I've developed a program to solve a large system of differential equations, but I'm pretty sure I've done something stupid as, after profiling the code, I was surprised to see three loops that were taking ~90% of the computation time, despite the fact they are performing the most trivial steps of the program.
My question is in three parts based on these expensive loops:
Initialization of an array to zero. When J is declared to be a double array are the values of the array initialized to zero? If not, is there a fast way to set all the elements to zero?
void spam() {
    double J[151][151];
    /* Other relevant variables declared */
    calcJac(data, J, y);
    /* Use J */
}

static void calcJac(UserData data, double J[151][151], N_Vector y)
{
    /* The first expensive loop */
    int iter, jter;
    for (iter = 0; iter < 151; iter++) {
        for (jter = 0; jter < 151; jter++) {
            J[iter][jter] = 0;
        }
    }
    /* More code to populate J from data and y that runs very quickly */
}
During the course of solving I need to solve matrix equations defined by P = I - gamma*J. The construction of P is taking longer than solving the system of equations it defines, so I'm likely doing something in error. In the relatively slow loop below, is accessing a matrix that is contained in a structure 'data' the slow component, or is it something else about the loop?
for (iter = 1; iter < 151; iter++) {
    for (jter = 1; jter < 151; jter++) {
        P[iter-1][jter-1] = - gamma*(data->J[iter][jter]);
    }
}
Is there a best practice for matrix multiplication? In the loop below, Ith(v,iter) is a macro for getting the iter-th component of a vector held in the N_Vector structure 'v' (a data type used by the Sundials solvers). Particularly, is there a best way to get the dot product between v and the rows of J?
Jv_scratch = 0;
int iter, jter;
for (iter = 1; iter < 151; iter++) {
    for (jter = 1; jter < 151; jter++) {
        Jv_scratch += J[iter][jter]*Ith(v,jter);
    }
    Ith(Jv,iter) = Jv_scratch;
    Jv_scratch = 0;
}
1) No, they're not. You can memset the array as follows:
memset( J, 0, sizeof( double ) * 151 * 151 );
or you can use an array initialiser:
double J[151][151] = { 0.0 };
2) Well, you are using a fairly complex calculation to compute the position in P and the position in J.
You may well get better performance by stepping through with pointers:
for (iter = 1; iter < 151; iter++)
{
    double* pP = &P[iter - 1][0];
    double* pJ = &data->J[iter][1];
    for (jter = 1; jter < 151; jter++, pP++, pJ++)
    {
        *pP = - gamma * *pJ;
    }
}
This way you move much of the array index calculation outside of the inner loop.
3) The best practice is to try and move as many calculations out of the loop as possible. Much like I did on the loop above.
First, I'd advise you to split up your question into three separate questions. It's hard to answer all three; I, for example, have not worked much with numerical analysis, so I'll only answer the first one.
First, variables on the stack are not initialized for you. But there are faster ways to initialize them. In your case I'd advise using memset:
static void calcJac(UserData data, double J[151][151], N_Vector y)
{
    memset((void*)J, 0, sizeof(double) * 151 * 151);
    /* More code to populate J from data and y that runs very quickly */
}
memset is a fast library routine to fill a region of memory with a specific pattern of bytes. It just so happens that setting all bytes of a double to zero sets the double to zero, so take advantage of your library's fast routines (which will likely be written in assembler to take advantage of things like SSE).
Others have already answered some of your questions. On the subject of matrix multiplication: it is difficult to write a fast algorithm for this unless you know a lot about cache architecture and so on; the slowness is caused by the order in which you access array elements, which can cause thousands of cache misses.
You can try Googling for terms like "matrix-multiplication", "cache", "blocking" if you want to learn about the techniques used in fast libraries. But my advice is to just use a pre-existing maths library if performance is key.
Initialization of an array to zero. When J is declared to be a double array, are the values of the array initialized to zero? If not, is there a fast way to set all the elements to zero?
It depends on where the array is allocated. If it is declared at file scope, or as static, then the C standard guarantees that all elements are set to zero. The same is guaranteed if you set the first element to a value upon initialization, ie:
double J[151][151] = {0}; /* set first element to zero */
By setting the first element to something, the C standard guarantees that all other elements in the array are set to zero, as if the array were statically allocated.
Practically, for this specific case, I very much doubt it is wise to allocate 151*151*sizeof(double) bytes on the stack, no matter which system you are using. You will likely have to allocate it dynamically, and then none of the above applies. You must then use memset() to set all bytes to zero.
In the relatively slow loop below, is accessing a matrix that is contained in a structure 'data' the slow component, or is it something else about the loop?
You should ensure that the function called from it is inlined. Otherwise there isn't much else you can do to optimize the loop: what is optimal is highly system-dependent (ie how the physical cache memories are built). It is best to leave such optimization to the compiler.
You could of course obfuscate the code with manual micro-optimizations, such as counting down towards zero rather than up, or using ++i rather than i++, etc. But the compiler really should be able to handle such things for you.
As for matrix addition, I don't know the mathematically most efficient way, but I suspect it is of minor relevance to the efficiency of the code. The big time thief here is the double type: unless you really need the accuracy, consider using float or int to speed up the algorithm.