I know a similar question has already been asked here, for example:
Malloc a 2D array in C
However, my question is not how to create one, but whether, for a mathematical 2D matrix, I should prefer a "real" 2D array (built with pointers to pointers) or a flattened 1-dimensional array with proper indexing.
I think the only case where it really matters is when you perform operations that depend on a cell's neighbors. In that case the flattened 1-dimensional array is a bit better, because its contiguous layout avoids the cache misses that a pointer-to-pointer layout can cause.
This is especially important for solutions that use dynamic programming.
I believe it can also matter for image processing, where many operations are applied over a rectangle of pixels.
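As a concrete sketch of the flattened approach (the `Matrix` name and its members are my own, not from the question), a row-major 1D buffer with computed indexing keeps all elements contiguous in memory:

```cpp
#include <cstddef>
#include <vector>

// A 2D matrix stored as one contiguous row-major buffer.
struct Matrix {
    std::size_t rows, cols;
    std::vector<double> data;  // rows * cols elements; row i starts at i * cols

    Matrix(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c) {}

    double& at(std::size_t i, std::size_t j) { return data[i * cols + j]; }
};
```

Horizontal neighbors are adjacent in memory and vertical neighbors are a fixed `cols` stride apart, so row-wise traversals stay cache-friendly; with a pointer-to-pointer layout each row may live in a different part of the heap.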
I was trying to write a library for linear algebra operations in Haskell. In order to be able to define safe operations for matrices and vectors I wanted to encode their dimensions in their types. After some research I found that using DataKinds one is able to do that, similar to the way it's done here. For example:
data Vector (n :: Nat) a
dot :: Num a => Vector n a -> Vector n a -> a
In the aforementioned article, as well as in some libraries, the size of the vector is a phantom type and the vector type itself is a wrapper around an Array. While trying to figure out whether there is an array type with its size at the type level in the standard library, I started wondering about the underlying representation of arrays. From what I could gather from this commentary on GHC memory layout, arrays need to store their size on the heap, so a 3-dimensional vector would take up one more word than necessary. Of course we could use the following definition:
data Vector3 a = Vector3 a a a
which might be fine if we only care about 3D geometry, but it doesn't allow for vectors of arbitrary size, and it also makes indexing awkward.
So, my question is this: wouldn't it be useful, and a potential memory optimization, to have an array type with a statically known size in the standard library? As far as I understand, the only thing it would need is a different info table, which would store the size, instead of the size being stored in each heap object. Also, the compiler could choose between Array and SmallArray automatically.
Wouldn't it be useful and a potential memory optimization to have an array type with statically known size in the standard library?
Sure. I suspect if you wrote up your use case carefully and implemented this, GHC HQ would accept a patch. You might want to do the writeup first and double-check that they're into it to avoid wasting time on a patch they won't accept, though; I certainly don't speak for them.
Also, the compiler could choose between Array and SmallArray automatically.
I'm not an expert here, but I kinda doubt this. Usually supporting polymorphism means you need a uniform representation.
I need to transpose a 3D matrix in Fortran. I have a 3D array: V(111,222,333); I need to transpose it to V(333,111,222). Is there any function to do that in Fortran?
I hesitate to disagree with @IanBush, and perhaps what follows is neither easy nor clear. The following statement will return a permutation of the array V. If I have got it right, the element at V(i,j,k) is sent to V_prime(k,i,j).
V_prime = RESHAPE(source=V, shape=[size(V,3),size(V,1),size(V,2)], order=[2,3,1])
Whether this creates the permutation the OP asks for is a bit unclear; I'm not aware of a single accepted definition of the transpose of an array of rank other than 2. Changing the order will produce different permutations.
This question is probably a duplicate of Fortran reshape - N-dimensional transpose. It is certainly worth reading the answers to that question which explain the use of reshape with order very well.
There is no routine that will do this simply. I would write some loops, that is often the easiest and clearest way.
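For reference, the loop version is only a few lines. Assuming V and V_prime are already allocated with the shapes from the question, a sketch looks like this:

```fortran
! Send V(i,j,k) to V_prime(k,i,j) with explicit loops.
do k = 1, size(V, 3)
   do j = 1, size(V, 2)
      do i = 1, size(V, 1)
         V_prime(k, i, j) = V(i, j, k)
      end do
   end do
end do
```

With V(111,222,333), this fills V_prime(333,111,222), and the loop order makes the intended permutation explicit instead of encoding it in a RESHAPE order argument.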
I'm programming the 3D Ising model in C++. This model consists of a 3D lattice containing the spins, which are then updated to simulate different random configurations (Monte Carlo simulation). To speed up the code I also store the nearest neighbors of each site in a vector/array.
Since I'm not a C++ expert, I would like to ask you what the most performant way / best practice is for storing such sequences. Personally, I would initialize the lattice and the nearest neighbors in a static STL array (i.e. on the stack), since the size of these arrays never changes and the arrays are also never destroyed until the program ends.
Is this good or bad practice? Or would it be better to store them as STL vectors?
You can tell std::vector how big it will be in advance.
Alternatively, you can use std::array now if it will remain fixed size.
For example,
std::array<int, 10> neighbours;
for 10 ints.
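To make that concrete for the Ising use case, here is a minimal sketch (the side length `L` and all function names are assumptions of mine, not from the question) that stores the lattice flat and precomputes the six nearest neighbours of every site in a std::array:

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t L = 8;          // lattice side length (chosen for the sketch)
constexpr std::size_t N = L * L * L;  // total number of sites

// Flatten (x, y, z) to one index, with periodic boundary conditions.
constexpr std::size_t site(std::size_t x, std::size_t y, std::size_t z) {
    return (x % L) + L * ((y % L) + L * (z % L));
}

// Precompute the six nearest neighbours of every site once, up front.
std::array<std::array<std::size_t, 6>, N> make_neighbours() {
    std::array<std::array<std::size_t, 6>, N> nb{};
    for (std::size_t x = 0; x < L; ++x)
        for (std::size_t y = 0; y < L; ++y)
            for (std::size_t z = 0; z < L; ++z)
                nb[site(x, y, z)] = {
                    site(x + 1, y, z), site(x + L - 1, y, z),
                    site(x, y + 1, z), site(x, y + L - 1, z),
                    site(x, y, z + 1), site(x, y, z + L - 1)};
    return nb;
}
```

One caveat: a large std::array declared inside a function lives on the stack, so for big lattices prefer static storage duration or a std::vector sized once at startup.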
(This is a general question; I am not referring to a specific language, as almost all languages have pointers.)
I have been looking around a lot, but I cannot see why using 2D arrays cannot replace pointers.
For example, if a program does not use pointers, why is it dangerous to rely on 2D arrays instead? What is the actual difference between 2D arrays and pointers?
If 2D arrays will do the job, what is the need for pointers?
For example, we could use two arrays plus an integer to keep track of how many records have been used.
I want to know what kinds of problems I (as a programmer) may face if I use 2D arrays instead of pointers.
In languages that don't have pointers (e.g. older versions of FORTRAN), programmers will often use arrays to build more complex data structures. This is a technique of last resort in most cases, because pointer notation is more natural and avoids a lot of cognitive load: with arrays you have to keep track of multiple arrays, multiple index variables, and the like, just to implement a simple list or tree structure.
In C and many related languages, arrays have a fixed size, set at initialization, which means that if you want a data structure that can grow arbitrarily large, you pretty much have to use pointers. On the other hand, if you know that a particular data set actually has a fixed size, then using an array to represent it can be somewhat more efficient than the equivalent pointer-based structure.
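The array-based technique from the first paragraph can be sketched as a singly linked list built from two parallel arrays and integer indices instead of pointers (the `IndexList` type and its capacity are illustrative, not a standard idiom):

```cpp
#include <array>

constexpr int NIL = -1;   // plays the role of a null pointer
constexpr int CAP = 100;  // fixed pool size, set at initialization

// A singly linked list without pointers: links are indices into `next`.
struct IndexList {
    std::array<int, CAP> value{};
    std::array<int, CAP> next{};
    int head = NIL;
    int used = 0;  // the integer that tracks how many slots are taken

    // Prepend a value; returns false once the fixed-size pool is exhausted.
    bool push_front(int v) {
        if (used == CAP) return false;
        value[used] = v;
        next[used] = head;
        head = used++;
        return true;
    }
};
```

This works, but the programmer must now track head, used, and the parallel arrays by hand, which is exactly the extra cognitive load described above; it also caps the structure at CAP elements, whereas a pointer-based list can grow as long as memory lasts.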
I know that Intel Fortran has libraries with functions and subroutines for working with sparse matrices, but I'm wondering if there is also some sort of data type or automated method for creating the sparse matrices in the first place.
BACKGROUND: I have a program that uses some 3- and 4-dimensional arrays that can be very large in the first 2 dimensions (~10k to ~100k elements in each dimension, maybe more). In the first 2 dimensions, each array is mostly (95% or so) populated with zeroes. To make the program friendly to machines with a "normal" amount of RAM available, I'd like to convert to sparse matrices. The manner in which the current conventional arrays are handled and updated throughout the code is pretty dependent on the application, so I'm looking for a way to convert to sparse matrix storage without significant modification to the code. Basically, I'm lazy, and I don't want to revise the entire memory management implementation or write an entire new module where my arrays live and are managed. Is there a library or something else for Fortran that would implement a data type or something similar, so that I can use sparse matrix storage without re-engineering each array and how it is handled? Thanks for the help. Cheers.
There are many different sparse formats and many different libraries for handling sparse matrices in Fortran (e.g. SPARSKIT, PETSc, ...). However, none of them offers the compact array-handling formalism that Fortran provides for intrinsic dense arrays (especially the subarray notation). So you'll have to touch your code in several places when you change it to use sparse matrices.