I'm currently testing several Linpack apps as well as the original Linpack from netlib.org.
While most currently available implementations of the Linpack benchmark only calculate a matrix of a given size (e.g. 500x500), the original Linpack was meant to calculate two matrices for a given dimension n: one of size n and one of size n+1 (e.g. if n is 500, first 500x500 and then 501x501), as far as I understand how these things work.
But why does it calculate the second matrix with dimension n+1?
That is what I wondered when I produced a version for PCs (which had low resolution timers in the early days). It was accepted by Netlib in 1996 and can be found here:
http://www.netlib.no/netlib/benchmark/linpack-pc.c
I believe that it was looking for memory address alignment performance issues that must have been important at some time. The following page has results for this on PCs, plus other versions for Windows, Linux and Android, including Java.
http://www.roylongbottom.org.uk/linpack%20results.htm
This version is unsuitable for multiprocessors but users are allowed to implement their own linear equation solver. See:
http://netlib.org/benchmark/performance.pdf
Related
I'm testing the performance of DGEMM and SGEMM from multiple libraries on the Apple M1 with a program that does the following: set the dimensions to 1000x1000, call cblas_dgemm with alpha and beta set to 2, then repeat with dimensions 2000x2000, 3000x3000, etc. This means that in every iteration dgemm computes alpha * (A x B) and adds the result to beta * C. My idea would be to do the big workload (alpha * (A x B)) on a high-performance core and the beta * C on an efficiency core, then join the work of both cores (in case the efficiency core takes longer than the performance core, the performance core would start on the next iteration, 2000x2000).
My question is: is there any real way to do this? I'm a bit of a noob and not sure whether it's doable. Another approach I thought of would be to divide the workload between the two cores in real time, but Apple doesn't make selecting cores in C particularly easy. Thanks in advance.
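For reference, here is a minimal sketch of the kind of call described above, assuming Apple's Accelerate CBLAS interface; the matrix buffers and the helper name run_dgemm are purely illustrative, not code from the question:

    #include <Accelerate/Accelerate.h>  /* provides cblas_dgemm on macOS */
    #include <stdlib.h>

    /* One iteration of the benchmark described above:
       C = alpha * (A x B) + beta * C for an n x n problem, alpha = beta = 2. */
    static void run_dgemm(int n)
    {
        double *A = calloc((size_t)n * n, sizeof *A);
        double *B = calloc((size_t)n * n, sizeof *B);
        double *C = calloc((size_t)n * n, sizeof *C);
        const double alpha = 2.0, beta = 2.0;

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, alpha, A, n, B, n, beta, C, n);

        free(A); free(B); free(C);
    }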
Following my former post about comparing the time required to do a simple array addition job (C[i]=A[i]+B[i]) on different devices, I improved the code a little to repeat the process for different array lengths and report the time required:
The X axis is the array length on a log2 scale and the Y axis is the time on a log10 scale. As can be seen, somewhere between 2^13 and 2^14 the GPUs become faster than the CPU. I guess this is because the memory allocation becomes negligible in comparison to the calculation. (GPI1 is a typo; I meant GPU1.)
Now, hoping my C-OpenCL code is correct, I can estimate the time required to do an array addition on different devices: f1(n) for the CPU, f2(n) for the first GPU and f3(n) for the second GPU. If I have an array job of length n, I should theoretically be able to divide it into three parts with n1+n2+n3=n, in a way that satisfies f1(n1)=f2(n2)=f3(n3), and distribute them over the three devices on my system to get the fastest possible calculation. I think I could do this using, say, OpenMP or any other multithreading method and use the cores of my CPU to host three different OpenCL tasks. That's not what I would like to do, because:
It is a waste of resources. Two of the cores are just hosting work while they could be used for the calculation.
It makes the code more complicated.
I'm not sure how to do it. I'm now using the Apple Clang compiler with -framework OpenCL to compile the code, but for OpenMP I have to use the GNU compiler. I don't know how to use both OpenMP and OpenCL with either of these compilers.
Now I'm wondering whether there is any way to do this distribution without multithreading, for example by having one CPU core assign the tasks to the three devices consecutively, then collect the results in the same (or a different) order and concatenate them. It probably needs a little experimenting to tune the timing of the subtask assignments, but I guess it should be possible.
I'm a total beginner with OpenCL, so I would appreciate it if you could help me figure out whether this is possible and how to do it. Maybe there are already some examples doing this; please let me know. Thanks in advance.
P.S. I have also posted this question here and here on Reddit.
The problem as stated implicitly tells you the solution should be concurrent (asynchronous): you need to collect the results from three different devices at the same time, otherwise what you will do is run the work first on device A, then on device B and then on device C (in which case it is better to run the whole job on the fastest device). If you plan to learn to exploit OpenCL programming efficiently (on multicore CPUs or GPUs), you should be comfortable with asynchronous (indeed multithreaded) programming.
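As a rough sketch of that asynchronous pattern (not a drop-in solution): give each device its own in-order command queue, size the three chunks from the measured f1, f2, f3 so the devices finish at roughly the same time, enqueue the work without blocking, and wait only once at the end. The arrays queues, kernels, c_bufs, offsets and chunks below are assumed to have been set up beforehand, and error checking is omitted:

    #include <OpenCL/opencl.h>   /* <CL/cl.h> on non-Apple platforms */

    /* Launch one chunk of the array addition on each of three devices and
       block only once, after all three reads have been enqueued. */
    void run_split(cl_command_queue queues[3], cl_kernel kernels[3],
                   cl_mem c_bufs[3], float *C,
                   const size_t offsets[3], const size_t chunks[3])
    {
        cl_event done[3];

        for (int d = 0; d < 3; ++d) {
            /* Non-blocking kernel launch on this device's queue. */
            clEnqueueNDRangeKernel(queues[d], kernels[d], 1, NULL,
                                   &chunks[d], NULL, 0, NULL, NULL);
            /* Non-blocking read of the result chunk into its slot in C;
               the in-order queue guarantees it runs after the kernel. */
            clEnqueueReadBuffer(queues[d], c_bufs[d], CL_FALSE, 0,
                                chunks[d] * sizeof(float),
                                C + offsets[d], 0, NULL, &done[d]);
        }

        /* All three devices are now working concurrently; wait here only. */
        clWaitForEvents(3, done);
    }

A single host thread is enough for this, which is essentially the "one core assigns the tasks and catches the results" idea from the question.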
I want to write a traffic generator that replicates the primitive read and write demands that are made on memory by a running computer.
But running computers also show (very strong) locality in their memory references, and across a 64-bit address space only a very small range of addresses will be referenced (in fact I have tested this on one benchmark and only about 9000 of the billions of pages on offer are touched).
What is a good way to model such a sparse probability density function (ideally in C or C++)? I have probabilities for the benchmark but don't need to follow them too closely (I could just use the benchmark references in any case, but I want something a bit more flexible).
To clarify: I also have data about how many reads should come from each page, but what I am interested in is picking the sequence of pages. (The Markov chain idea suggested in the comments might be the way to do this.)
For what it's worth, I decided to use a pretty crude hack along these lines: pick a random number between 0 and 1, then find the elements of the cumulative distribution whose frequency/probability is equal to or greater than this number, and take the one with the minimum probability of that set. It seems to work (I did this in R).
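For illustration only, here is a minimal C sketch of that hack, with an invented table of per-page probabilities; for a large sparse table you would precompute the cumulative sums once and binary-search them instead of scanning:

    #include <stdio.h>
    #include <stdlib.h>

    /* Invented per-page probabilities; replace with measured values. */
    static const double prob[] = { 0.50, 0.25, 0.15, 0.07, 0.03 };
    enum { NPAGES = sizeof prob / sizeof prob[0] };

    /* Draw u in [0,1) and return the first page whose cumulative
       probability meets or exceeds u (inverse-CDF sampling). */
    static int sample_page(void)
    {
        double u = (double)rand() / ((double)RAND_MAX + 1.0);
        double cum = 0.0;
        for (int i = 0; i < NPAGES; ++i) {
            cum += prob[i];
            if (u < cum)
                return i;
        }
        return NPAGES - 1;   /* guard against floating-point round-off */
    }

    int main(void)
    {
        for (int i = 0; i < 10; ++i)
            printf("page %d\n", sample_page());
        return 0;
    }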
According to my measurements of dgemm from both cublas and atlas, atlas severely beats cublas in terms of speed. Is this to be expected for a system with an Intel i7 950 and an Nvidia GTX470?
I tested matrices from size 10x10 up to 6000x6000 in increments of 50. Atlas always wins. I measured both total application execution time and just the multiplication step.
Anyone else have experience with this? Are these the expected results?
Thanks in advance.
edit: (same code, same results on a Xeon X5670 and Nvidia Tesla C2050)
edit2: It appears a great deal of the slowness is attributable to initialisation of the cublas library. I am continuing to work on it and will update here when I learn more.
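A common way to separate that initialisation cost from the measurement (a sketch only, not the benchmark code above; d_A, d_B and d_C are assumed to be device buffers that already hold data) is to create the handle and run one throwaway call before starting the timer:

    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    /* Time only the GEMM work, excluding cublas initialisation. */
    void timed_dgemm(const double *d_A, const double *d_B, double *d_C, int n)
    {
        cublasHandle_t handle;
        const double alpha = 1.0, beta = 0.0;

        cublasCreate(&handle);                    /* pays the one-off init cost */
        cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &alpha, d_A, n, d_B, n, &beta, d_C, n);   /* warm-up call */
        cudaDeviceSynchronize();

        /* ... start the timer here, run the measured cublasDgemm calls,
           synchronise again, then stop the timer ... */

        cublasDestroy(handle);
    }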
Did you use the single-threaded versions of both libraries? As far as I understand, both GotoBLAS and Atlas tend to sneakily use multiple threads when working on large matrices.
That said, at large matrix sizes the algorithm used tends to matter much more than the low-level implementation. Naive matrix multiplication is O(N^3), whereas the Strassen algorithm scales much better, at about O(N^2.81). The Strassen algorithm also happens to vectorize very nicely onto the much larger SSE and AVX registers, yielding an almost 2- to 8-fold increase in efficiency, depending on floating-point format and register size.
I am not sure how well the two GPUs you mentioned handle double-precision math. Typically they're optimized for single precision (32-bit floats), dropping to a third or a quarter of that speed when handling doubles.
There are other factors in your tests that may skew the results. For example, you may be including the time to transfer the matrices between the CPU and the GPU. Whether that matches real-world use cases, I don't know; I don't have an Nvidia GPU to test, but I suspect not. Usually there are multiple operations, and the matrix does not need to be transferred between operations.
I've been writing my own low-level SSE3 matrix functions using the SSE/AVX vector built-ins provided by the GCC and ICC C99 compilers; early testing indicates they beat the current Fortran implementations by a wide margin, especially at the very small (say up to 8x8, optimized for each size) and very large (above 1000x1000, using the Strassen algorithm) sizes for dense matrices.
We are looking for exemplar problems and codes that will run on any or all of shared memory, distributed memory, and GPGPU architectures. The reference platform we are using is LittleFe (littlefe.net), an open-design, low cost educational cluster currently with six dual core CPUs, each with an nVidia chipset.
These problems and solutions will be good for teaching parallelism to any newbie by providing working examples and opportunities to roll up your sleeves and code. Stackoverflow experts have good insight and are likely to have some favorites.
Calculating area under a curve is interesting, simple and easy to understand, but there are bound to be ones that are just as easily expressed and chock full of opportunities to practice and learn.
Hybrid examples using more than one of the memory architectures are most desirable, and reflective of where parallel programming seems to be trending.
On LittleFe we have predominantly been using three applications. The first is an analysis of optimal targets on a dartboard, which is highly parallel with little communication overhead. The second is Conway's Game of Life, which is typical of problems sharing boundary conditions; it has moderate communication overhead. The third is an n-body model of galaxy formation, which requires heavy communication overhead.
The CUDA programming guide (PDF) contains a detailed analysis of the implementation of matrix multiplication on a GPU. That seems to be the staple "hello world" example for learning GPU programming.
Furthermore, the CUDA SDK contains dozens of other well-explained examples of GPU programming in CUDA and OpenCL. My favorite is the colliding balls example (a demo with a few thousand balls colliding in real time).
Two of my favorites are numerical integration and finding prime numbers. For the first, we code the midpoint rectangle rule on the function f(x) = 4.0 / (1.0 + x*x). Integrating the function between 0 and 1 gives an approximation of the constant pi, which makes checking the correctness of the answer easy. The parallelism is across the range of the integration (computing the areas of the rectangles).
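A minimal shared-memory sketch of this exercise (OpenMP; the step count is an arbitrary choice):

    #include <stdio.h>

    /* Midpoint rectangle rule for f(x) = 4 / (1 + x*x) on [0,1];
       the sum approximates pi. Each rectangle is independent work. */
    int main(void)
    {
        const long nsteps = 10000000;            /* arbitrary resolution */
        const double width = 1.0 / (double)nsteps;
        double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < nsteps; ++i) {
            double x = (i + 0.5) * width;        /* midpoint of rectangle i */
            sum += 4.0 / (1.0 + x * x);
        }

        printf("pi approx = %.12f\n", sum * width);
        return 0;
    }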
For the second, we input an integer range and then identify and save the prime numbers in that range. We use brute-force division of each value by all possible factors; if any divisor is found that is not 1 or the number itself, then the value is composite. If a prime is found, we count it and store it in a shared array. The parallelism is in dividing up the range, since testing N for primality is independent of testing M. There is some trickiness needed to share the prime store between threads or to gather distributed partial answers.
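And a sketch of the second exercise in the same style; the input range is a placeholder, and the shared prime store is reduced to a simple count to keep the example short:

    #include <stdio.h>
    #include <stdbool.h>

    /* Brute-force primality test: trial division by every candidate factor. */
    static bool is_prime(long n)
    {
        if (n < 2) return false;
        for (long d = 2; d * d <= n; ++d)
            if (n % d == 0) return false;    /* divisor other than 1 and n */
        return true;
    }

    int main(void)
    {
        const long lo = 2, hi = 100000;      /* placeholder input range */
        long count = 0;

        /* Each value is tested independently, so the range is simply split up. */
    #pragma omp parallel for reduction(+:count) schedule(dynamic)
        for (long n = lo; n <= hi; ++n)
            if (is_prime(n)) ++count;

        printf("%ld primes between %ld and %ld\n", count, lo, hi);
        return 0;
    }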
These are very basic and simple problems to solve, which allows students to focus on the parallel implementation and not so much on the computation involved.
One of the more complex but still easily understood example problems is the BLAS routine sgemm or dgemm (C = alpha * A x B + beta * C), where A, B and C are matrices of valid sizes and alpha and beta are scalars. The types may be single-precision floating point (sgemm) or double-precision floating point (dgemm).
The implementation of this simple routine on different platforms and architectures provides some insight into their functionality and working principles. For more details on BLAS and the ?gemm routine, have a look at http://www.netlib.org/blas.
You only need to keep in mind that for a double-precision implementation on the GPU, the GPU must have double-precision capabilities.
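As a starting point for comparing library and GPU versions, a naive CPU reference for the ?gemm operation is only a few lines; this is a deliberately unoptimised sketch assuming square, row-major matrices, useful mainly as a correctness check:

    /* Reference for C = alpha * A x B + beta * C with n x n row-major matrices. */
    void dgemm_ref(int n, double alpha, const double *A, const double *B,
                   double beta, double *C)
    {
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j) {
                double acc = 0.0;
                for (int k = 0; k < n; ++k)
                    acc += A[i * n + k] * B[k * n + j];
                C[i * n + j] = alpha * acc + beta * C[i * n + j];
            }
        }
    }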