Can the announced Tegra K1 be a contender against x86 and x64 chips in supercomputing applications? - mobile

To clarify, can this RISC-based processor (the Tegra K1) be used without significant changes to today's supercomputer programs, and perhaps be a game changer because of its power, size, cost, and energy usage? I know it's going up against some x64 or x86 processors. Can the code used for current supercomputers be easily converted to code that will run well on these mobile chips? Thanks.

Can the code used for current supercomputers be easily converted to code that will run well on these Mobile chips?
It depends what you call "supercomputer code". Usually supercomputers run high-level functional code (usually fully compiled code like C++, sometimes VM-dependent code like Java) on top of lower-level code and technologies such as OpenCL or CUDA for accelerators, or MPICH for communication between nodes.
All these technologies have ARM implementations, so the real task is to make the functional code ARM-compatible. This is usually straightforward, as code written in a high-level language is mostly hardware-independent. So the short answer is: yes.
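As a minimal sketch of that hardware independence (assuming MPICH or another MPI implementation and a C toolchain are installed on both the x86 and ARM nodes), the very same source compiles unchanged for either architecture:

    /* hello_mpi.c - build with: mpicc hello_mpi.c -o hello_mpi
       The identical source builds on x86-64 and on ARM; only the toolchain differs. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }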
However, what may be more complicated is to scale this code to these new processors.
Tegra K1 is nothing like the GPUs embedded in supercomputers. It has far less memory, runs slightly slower and has only 192 cores.
Its price and power consumption make it possible, however, to build supercomputers with hundreds of them inside.
So code which has been written for traditional supercomputers (a few high-performance GPUs embedded in each node) will not reach peak performance on 'new' supercomputers built from a lot of cheap, weak GPUs. There will be a price to pay to port existing code to these new architectures.

For modern supercomputing needs, you'd need to ask whether a processor performs well for the energy it consumes. Intel's current architecture, together with GPUs, fulfils those needs, and the Tegra architecture does not match Intel processors in terms of power-performance.

The question is: should it? Intel keeps proving that ARM is inferior, and the only factor speaking for RISC-based processors is their price, which I highly doubt is a concern when building a supercomputer.

Related

Do processors have optimizations and architecture preferences targeted firstly or mainly to C/C++ languages?

I have read the article C Is Not a Low-level Language, which contains this paragraph:
Unfortunately, simple translation providing fast code is not true for C. In spite of the heroic efforts that processor architects invest in trying to design chips that can run C code fast, the levels of performance expected by C programmers are achieved only as a result of incredibly complex compiler transforms. The Clang compiler, including the relevant parts of LLVM, is around 2 million lines of code. Even just counting the analysis and transform passes required to make C run quickly adds up to almost 200,000 lines (excluding comments and blank lines).
What does the bolded sentence mean? Does it mean that manufacturers design processors with some optimizations and architecture decisions targeted firstly, or even specifically, at C (C++) code? Or does it just mean that they are trying to design processors that execute any code faster, including code written in C?
If such preferences for C exist, what are they?
A couple of my thoughts:
a branch prediction algorithm tuned to patterns occurring mainly in C code.
instructions which are useful and used in C but aren't useful in other languages (otherwise other languages' compilers would use them too).
I know about language-specific processors like Jazelle or Lisp machines for Java and Lisp respectively, but similar technologies can't be applied to C, because there is no bytecode.
Processors don't necessarily have optimizations targeted at C, but they do provide features to make C (and other procedural languages in general) map more cleanly to the platform.
Take cache-coherency in a multi-threaded environment as an example. From a C perspective, a global variable shared by two threads should look the same to both threads. If one thread writes to it the other should be able to see those modifications. But in a multi-core CPU with independent caches, that takes extra effort to support. Core 1 has to be able to detect that core 2 is accessing an address it has modified in cache and flush that out to memory (or somehow share it directly to core 2's cache).
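As a small illustration (a sketch using pthreads and C11 atomics, not tied to any particular CPU), this is the kind of sharing the coherency hardware has to make work:

    /* Two threads share one global; the cache-coherency hardware is what
       makes the writer's update visible on the reader's core. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    atomic_int shared = 0;          /* one cache line, touched by both cores */

    void *writer(void *arg)
    {
        (void)arg;
        atomic_store(&shared, 42);  /* may land in core 1's cache first... */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, writer, NULL);
        pthread_join(t, NULL);
        /* ...yet the thread running on core 2 must observe the new value. */
        printf("%d\n", atomic_load(&shared));
        return 0;
    }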
That's essentially the thesis of that entire article. C's abstract machine model doesn't necessarily map cleanly to real modern high-performance processors like it did to the (by comparison extremely simple) PDP-11, and CPUs and compilers have to take great pains to paper over those differences.
The "heroic efforts" of the processor architects is largely referring to the design of cache and memory subsystems on the CPUs.
For a very long time now, the instruction execution circuits inside the CPUs have been far, far quicker than the electronics that look after fetching/writing data from/to memory, largely because the technology we have for RAM chips hasn't really got better. Where the cores have sped up, the memory hasn't, and so the cache and memory subsystem has to get ever more elaborate in order to be able to pre-fetch data and move it towards the execution circuits ahead of time. Needless to say, this doesn't always pan out well.
It's also partly because of the physical distance between the CPU and RAM chips. Though only a few inches (if that) of track on a motherboard, that distance is significant; a signal travels down the track at roughly 1ns per 8 inches. For signals clocked in the GHz range (1 cycle << 1ns), a short track is a long way. This is partly why Apple have gone down the route of putting RAM onto the same package as the CPU in their home-grown M1 silicon.
Back to caches - the likes of Intel (and AMD, ARM) have strived to make CPUs that have good, general purpose performance, so that they run pretty much any code well. Modern compilers help a lot - if they know what the cache in the CPU is likely to do in any particular circumstance, the compilers can arrange code to fit in with what the hardware is likely to do.
A reasonable question then is: is that effective? Well, yes and no. Yes, because compiled code does run quite well, but no for a couple of reasons. The first is that ultimate performance for any given algorithm is rarely reached by the compiler / CPU, and the second is that all this complexity makes it nigh on impossible for a good programmer to do their own optimisation.
Some CPUs help out the hero-programmer here. PowerPC (at least some variants) has instructions where the programmer can give the cache system a hint that the programme will shortly need data from such-and-such a location in RAM. The CPU uses that instruction to pre-load the L1 cache with that data, so that when the program actually starts to perform operations on data at that address it's already in cache.
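On PowerPC that hint is the dcbt (data cache block touch) instruction; with GCC or Clang the portable way to ask for the same thing is the __builtin_prefetch intrinsic. A small sketch (the prefetch distance of 16 elements is an arbitrary illustration):

    /* Walk a large array, asking the cache to start fetching data a few
       iterations ahead of where we are actually working. */
    #include <stddef.h>

    double sum_with_prefetch(const double *data, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&data[i + 16], 0, 1); /* read, low temporal locality */
            sum += data[i];
        }
        return sum;
    }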
The IBM Cell processor took this to a whole new level. The SPE maths cores (there were 8 of them) had no cache, and no way of addressing data in CPU RAM at all. What there was instead was 256K of static RAM per core into which all code and data had to fit, and a way for code to push code and data in and out of that static RAM very quickly (256Gbyte/sec at the time, which was very, very quick). The developer was completely on their own; they had to write code to load code and data into a core, get that executed, and then write more code to get the results out to wherever they were needed. This was actually pretty liberating; instead of having a cache and memory subsystem trying to automatically deliver data to execution cores, getting in the way or (worse) just hiding inefficiencies from you, one had the freedom to break down an algorithm into core-sized lumps, knowing that if it fitted it'd be very quick, or knowing for sure that it didn't fit.
Miles Budnek's answer addresses the issues that arise from multi-core CPUs with cache-coherency in a Symmetric Multi-Processing (SMP) environment. It's even harder for the cache designer to get it right if there are multiple cores that might very well start tampering with a value. The difficulties involved have led to vulnerabilities like Meltdown and Spectre.
SMP could be said to be an "optimisation" put into CPUs by designers to aid the C (or other) developer in transitioning code from single to multiple threads. It's an attractive thought - in the same way that a single-threaded programme can see all of its data merely by addressing it, why not extend the same visibility of data to all threads in the programme?
Turns out that this is what makes it very difficult to design modern CPUs. However the reasons why the industry went this way are plain enough - the smallest possible delta between single and multicore CPUs was going to be the least troublesome for the existing software community to adopt. That's perfectly reasonable.
But it is running out of steam, fast. A better approach (if the goal is the outright pursuit of performance) would be to go back to the old Transputer architectures from Inmos of the 1980s and early 1990s. In such architectures, data held by one core could only be processed by another if the software was written to explicitly transfer the data. Sounds familiar? Yes - the Cell processor was a bit like that.
Interestingly, languages such as Rust, Go, Erlang have all implemented Communicating Sequential Processes (CSP) as a parallel processing paradigm. The irony is that, these days, CSP has to be implemented on top of an SMP environment, which is itself an artificial construct brought about by the interconnect between CPUs, cores and memory (e.g. QPI, Hypertransport). Basically, if the whole software world got fully comfortable with CSP then CPU designers wouldn't have to design cache-coherency into their multi-core CPUs. Rust in particular is very well suited, as it already has a strong concept of data ownership in its syntax (which could be leveraged to shovel data around between cores automatically).
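To make the contrast concrete, here is a minimal CSP-style channel sketched in C on top of pthreads (Go's channels or Rust's std::sync::mpsc give you the same thing with far less ceremony): data crosses between the threads only through an explicit send/receive, never through shared mutable state.

    /* A one-slot channel: the only way data moves between the two threads is
       an explicit chan_send()/chan_recv() pair. */
    #include <pthread.h>
    #include <stdio.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        int             value;
        int             full;   /* 1 while the slot holds an unread value */
    } channel;

    static void chan_send(channel *c, int v)
    {
        pthread_mutex_lock(&c->lock);
        while (c->full)
            pthread_cond_wait(&c->cond, &c->lock);
        c->value = v;
        c->full = 1;
        pthread_cond_signal(&c->cond);
        pthread_mutex_unlock(&c->lock);
    }

    static int chan_recv(channel *c)
    {
        pthread_mutex_lock(&c->lock);
        while (!c->full)
            pthread_cond_wait(&c->cond, &c->lock);
        int v = c->value;
        c->full = 0;
        pthread_cond_signal(&c->cond);
        pthread_mutex_unlock(&c->lock);
        return v;
    }

    static channel ch = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 5; i++)
            chan_send(&ch, i * i);       /* hand each result over explicitly */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        for (int i = 0; i < 5; i++)
            printf("received %d\n", chan_recv(&ch));
        pthread_join(t, NULL);
        return 0;
    }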
The article referred to by the OP seems to me to have it in for C for some reason. There were so many points in it I felt triggered by, but I don't want to go addressing each one point by point. Maybe there is some bias or special interest that has not been declared. As a C programmer, with a particular interest in writing high performance programs, I thought I'd give my two cents on some of the issues raised. Hopefully, this might be of interest to others in the industry with or without a programming background.
From my point of view, the strengths of C are mainly as follows....
C allows you to do things you just can't do in 'higher level' languages.
A well-written C program (see weakness no.1) is hard to beat on performance by one written in another language on the same hardware.
C is comfortable handling binary data allowing for memory conservation.
C is well established with lots of libraries and programmers.
Objects in memory can be made easy to process from anywhere in the program by using pointers so the data itself doesn't need to be passed around.
Multi-threaded and multi-process programs are quite easy to implement.
It has Read-Write shared memory between threads (and processes with some fancy low-level stuff?)
Assembly can be inlined where needed (though it's not C then I know!).
... and main weaknesses...
Utilising SIMD capabilities is not possible in standard C, and is difficult to implement in a portable way with intrinsics (see the sketch after this list).
It takes a lot of code to do simple things for which there are no library functions.
Buffer overflow potential is easily missed, even for experienced programmers.
C pointers can be confusing.
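To illustrate the first weakness, here is a sketch of a simple loop written with x86 SSE intrinsics: fast on x86, but the same source will not build for ARM without being rewritten against NEON (or hidden behind a wrapper library):

    /* Sum of two float arrays, 4 lanes at a time, using SSE intrinsics.
       n is assumed to be a multiple of 4 to keep the sketch short. */
    #include <xmmintrin.h>
    #include <stddef.h>

    void add_arrays_sse(const float *a, const float *b, float *out, size_t n)
    {
        for (size_t i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);   /* load 4 floats (unaligned is fine) */
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));
        }
    }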
The C programming language has a special place in the evolution of programming languages and I for one, would welcome a replacement that is a better fit to what is possible with modern hardware if it doesn't tie the hands of the programmer and offers better security and performance. From the article,...
'A processor designed purely for speed, not for a compromise between speed and C support, would likely support large numbers of threads, have wide vector units, and have a much simpler memory model. Running C code on such a system would be problematic, so, given the large amount of legacy C code in the world, it would not likely be a commercial success.'
Such things exist already: GPUs! Modern CPUs are much more like GPUs than they used to be, now that core counts can be 100+. I have used OpenCL C to write programs with amazing computational performance, but they can't do everything well. Some applications cannot be efficiently parallelised, if at all. OpenCL C program performance can become terrible when there is even a small amount of branching. Also, it is so much easier to exhaust your memory bandwidth and fast cache when running many threads that it might be judged not worth the added complexity over a good single-threaded implementation.
In OpenCL C, the programmer has somewhat more control over where data is stored in memory, which can definitely aid performance. Maybe it's a costly mistake to try to make programming languages too hardware-independent. Might it be better to revisit some (LLVM-like) intermediate standard, as in OpenCL C, where one can define 'private', 'local' and 'constant' memory objects to get performance improvements over 'global' memory objects (a kernel sketch showing these address spaces follows the list below)? Such a standard wouldn't need to be tied to an instruction set. As a programmer, I welcome fast CPU instructions, but it would be nice if they could be much more easily utilised in portable code AND compiled to portable binaries. Maybe this is something compiler writers could look into, along with using SIMD vector registers rather than memory for pushing and popping. As I see it, there are four levels of portability.
Hardware independent source code to run on any hardware conforming to the intermediate standard. The burden is on the compiler to create binaries that will run correctly and efficiently on any hardware conforming to the intermediate standard.
Hardware independent source code to run on any hardware conforming to the intermediate standard. The burden is on the host compiler to create binaries that will run on the host's hardware configuration conforming to the intermediate standard, but may not run on other hardware conforming to the same.
Hardware dependent source code where the logical execution path through the source depends on the architecture of the hardware on which it is run. Programs need to 'query' the hardware configuration.
Hardware specific source code.
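Coming back to the address-space point above the list, here is a sketch of an OpenCL C kernel that uses the different memory spaces explicitly (kernel name and structure invented for the example):

    /* Per-work-group partial sums: each work-item stages a value in fast
       __local memory, then work-item 0 writes the group's sum to __global.
       The host allocates the __local buffer via clSetKernelArg with a size
       and a NULL pointer. */
    __kernel void partial_sums(__global const float *in,
                               __global float *group_sums,
                               __local  float *scratch)   /* on-chip, per work-group */
    {
        const size_t gid = get_global_id(0);
        const size_t lid = get_local_id(0);
        const size_t lsz = get_local_size(0);

        float x = in[gid];          /* 'x' lives in private (register) memory */
        scratch[lid] = x;
        barrier(CLK_LOCAL_MEM_FENCE);

        if (lid == 0) {
            float sum = 0.0f;
            for (size_t i = 0; i < lsz; i++)
                sum += scratch[i];
            group_sums[get_group_id(0)] = sum;
        }
    }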
In a fantasy world where one can just imagine new standards, hardware, and programming languages, one could choose which level of portability to aim for. I think that C was supposed to be hardware-independent, but it isn't really if you want to get the best performance out of your hardware. OpenCL C tries as well, but doesn't quite make it, though with run-time kernel compilation it does a pretty good job. The host program has the same issues as any other, though. I don't think there are any 'Level 1' portable languages currently.
Sorry my response is a bit rambling. It's unfortunate that it's difficult to have an objective constructive discussion about the pros and cons of different ideas about future changes in software and hardware. Personally, I think FPGAs have huge potential but are still a long way from where they would need to be to go mainstream. Any new computing language will probably become out of date when hardware changes occur and software trends change. It's remarkable that C still occupies such a prominent space. In another 10 or 20 years time, C will probably still be going strong. How many other modern languages will still be commonplace then?

What's the advantage of running OpenCL code on a CPU? [closed]

I am learning OpenCL programming and am noticing something weird.
Namely, when I list all OpenCL enabled devices on my machine (Macbook Pro), I get the following list:
Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz
Iris Pro
GeForce GT 750M
The first is my CPU, the second is the onboard graphics solution by Intel and the third is my dedicated graphics card.
Research shows that Intel has made their hardware OpenCL compatible so that I can tap into the power of the onboard graphics unit. That would be the Iris Pro.
With that in mind, what is the purpose of the CPU being OpenCL compatible? Is it merely for convenience so that kernels can be run on a CPU as backup should no other cards be found or is there any kind of speed advantage when running code as OpenCL kernels instead of regular (C, well threaded) programs on top of a CPU?
See https://software.intel.com/sites/default/files/m/d/4/1/d/8/Writing_Optimal_OpenCL_28tm_29_Code_with_Intel_28R_29_OpenCL_SDK.pdf for basic info.
Basically the Intel OpenCL compiler performs horizontal autovectorization for certain types of kernels. That means that with SSE4 you get 8 threads running in parallel in a single core, in a similar fashion to how an Nvidia GPU runs 32 threads in a single 32-wide SIMD unit.
There are 2 major benefits to this approach. First, what happens if in 2 years they increase the SSE vector width to 16? Then you will instantly get autovectorization across 16 threads when you run on that CPU - no need to recompile your code. The second benefit is that it's far easier to write an OpenCL kernel that autovectorizes well than to write it in ASM or C and get your compiler to produce efficient code.
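For example, a kernel as simple as the following (a generic sketch, not taken from Intel's guide) maps each work-item onto one lane, so the CPU implementation can pack 4, 8 or 16 work-items into whatever vector width the hardware offers:

    /* One work-item per element; the CPU implementation packs adjacent
       work-items into SIMD lanes automatically. */
    __kernel void saxpy(const float a,
                        __global const float *x,
                        __global float *y)
    {
        const size_t i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }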
As OpenCL implementations mature, it's possible to achieve good levels of performance portability for your kernels across a wide range of devices. Some recent work in my research group shows that, in some cases, OpenCL codes achieve a similar fraction of hardware peak performance on the CPU and the GPU. On the CPU, the OpenCL kernels were being very effectively auto-vectorised by Intel's OpenCL CPU implementation. On the GPU, efficient code was being generated for HPC and desktop devices from Nvidia (whose OpenCL still works surprisingly well) and AMD.
If you want to develop your OpenCL code anyway in order to exploit the GPU, then you're often getting a fast multi-core+SIMD version "for free" by running the same code on the CPU.
For two recent papers from my group detailing the performance portability results we've achieved across four different real applications with OpenCL, see:
"On the performance portability of structured grid codes on many-core computer architectures", S.N. McIntosh-Smith, M. Boulton, D. Curran and J.R. Price. ISC, Leipzig, June 2014. DOI: 10.1007/978-3-319-07518-1_4
"High Performance in silico Virtual Drug Screening on Many-Core Processors", S. McIntosh-Smith, J. Price, R.B. Sessions, A.A. Ibarra, IJHPCA 2014. DOI: 10.1177/1094342014528252
I have considered this for a while. You can get most of the advantages of OpenCL for the CPU without using OpenCL and without too much difficulty in C++. To do this you need:
Something for multi-threading - I use OpenMP for this
A SIMD library - I use Agner Fog's Vector Class Library (VCL) for this, which covers SSE2-AVX512.
A SIMD math library. Once again I use Agner Fog's VCL for this.
A CPU dispatcher. Agner Fog's VCL has an example to do this.
Using the CPU dispatcher you determine what hardware is available and choose the best code path based on the hardware. This provides one of the advantages of OpenCL.
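As a rough sketch of that dispatch idea using the GCC/Clang x86 builtins __builtin_cpu_init and __builtin_cpu_supports (the VCL ships its own, more complete dispatch example; the kernel_* functions here are hypothetical stand-ins for the real code paths):

    /* Pick the best implementation at run time based on what the CPU supports. */
    #include <stdio.h>

    static void kernel_avx2(void)   { puts("running AVX2 path"); }   /* hypothetical */
    static void kernel_sse2(void)   { puts("running SSE2 path"); }   /* hypothetical */
    static void kernel_scalar(void) { puts("running scalar path"); } /* hypothetical */

    int main(void)
    {
        __builtin_cpu_init();                 /* populate CPU feature info */
        if (__builtin_cpu_supports("avx2"))
            kernel_avx2();
        else if (__builtin_cpu_supports("sse2"))
            kernel_sse2();
        else
            kernel_scalar();
        return 0;
    }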
This gives you most of the advantages of OpenCL on the CPU without all its disadvantages. You never have to worry that a vendor stops supporting a driver. Nvidia has only a minimal amount of support for OpenCL - including several year old bugs it will likely never fix (which I wasted too much time on). Intel only has Iris Pro OpenCL drivers for Windows. Your kernels using my suggested method can use all C++ features, including templates, instead of OpenCL's restricted and extended version of C (though I do like the extensions). You can be sure your code does what you want this way and are not at the whim of some device driver.
The one disadvantage with my suggested method is that you can't just install a new driver and have it optimize for new hardware. However, the VCL already supports AVX512 so it's already built for hardware that is not out yet and won't be superseded for several years. And in any case to get the most use of your hardware you will almost certainly have to rewrite your kernel in OpenCL for that hardware - a new driver can only help so much.
More info on the SIMD math library: you could use Intel's expensive closed-source SVML for this (which is what the Intel OpenCL driver uses - search for svml after you install the Intel OpenCL drivers; don't confuse the SDK with the drivers). Or you could use AMD's free but closed-source LibM. However, neither of these works well on the competitor's processors. Agner Fog's VCL works well on both, is open source, and free.

Pure C OpenCL vs Python OpenCL performance

I am looking for a performance comparison between a Python wrapper to OpenCL and pure C OpenCL. Performance measurements can vary with time, memory, etc.
- Are there any benchmarks available?
- What should be the expectation about the time performance differences?
- What kind of tasks (parallel of course...) should make a difference?
It is likely that PyOpenCL is your best choice. I would choose to use C only in very specific situations (a super-critical need for speed/low-latency on the host). For most casual parallel programs, it is fine for the host side to have plenty of slack, because all the real work gets done on the device.
You can consider PyOpenCL and OpenCL to have identical performance on the device.
Maybe use C if you are, like... designing a self-driving car, and every millisecond/amp matters. But even in that situation, it is likely that Python could be used effectively.
The best way to figure out if your specific program is slowed down is to time your code. For PyOpenCL that means:
import time
and
cl.command_queue_properties.PROFILING_ENABLE
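For the pure-C side of the comparison, the equivalent measurement uses event profiling in the OpenCL host API. A minimal sketch (default platform/device, error checking omitted, trivial kernel inlined; not a benchmark, just the plumbing):

    /* Time one kernel launch with OpenCL event profiling (link with -lOpenCL). */
    #define CL_TARGET_OPENCL_VERSION 120
    #include <CL/cl.h>
    #include <stdio.h>

    static const char *src =
        "__kernel void scale(__global float *v) {"
        "    v[get_global_id(0)] *= 2.0f;"
        "}";

    int main(void)
    {
        enum { N = 1024 * 1024 };
        static float data[N];

        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        /* Profiling must be enabled on the queue, just like
           PROFILING_ENABLE in PyOpenCL. */
        cl_command_queue q = clCreateCommandQueue(ctx, device,
                                                  CL_QUEUE_PROFILING_ENABLE, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "scale", NULL);

        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    sizeof data, data, NULL);
        clSetKernelArg(k, 0, sizeof buf, &buf);

        size_t global = N;
        cl_event ev;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, &ev);
        clWaitForEvents(1, &ev);

        cl_ulong start, end;   /* nanoseconds on the device timeline */
        clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                                sizeof start, &start, NULL);
        clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                                sizeof end, &end, NULL);
        printf("kernel time: %.3f ms\n", (end - start) * 1e-6);
        return 0;
    }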
Many smart companies and individuals choose to code first in Python, because they can build a flexible, working prototype quickly. If they end up needing more host performance later, it is relatively easy to port Python to C.
Hope that helps!
OpenCL uses precompiled programs that are later sent to the device for execution. These are the so-called "kernels", and they are deployed to be executed on the end device. This means the main cost that must be measured is the OpenCL implementation's API I/O. Therefore, you can't rely on memory/CPU measurements alone, as the real OpenCL part will be the same in either case.
AFAIK, no benchmarks are available, but it is not hard to make one if you need it (matrix multiplication is the hello-world example, after all).
OpenCL is not the kind of API you call on every CPU cycle. Its field of use is really big data processing: one big input, a lot of processing operations, and one output (no matter whether small or big). No one says that OpenCL can't be used with many I/O calls and only minimal calculation, but then the implementation's API overhead is not worth it.
The expectation should be that the API I/O is about the same speed from either language, to a first approximation, relative to overall application performance.
There is a benchmark here: https://github.com/bennylp/saxpy-benchmark, comparing PyOpenCL against OpenCL as well as other frameworks/methods such as CUDA, plain C++, Numpy, R, Octave, and even TensorFlow (disclaimer: I'm the author)
According to the benchmark results, the performance difference between OpenCL and PyOpenCL varies wildly. The PyOpenCL GPU target is almost 7x slower than OpenCL, but for the CPU target PyOpenCL is actually more than 2x faster than OpenCL!

Embedded Systems Bit Count

I do apologize if this is a duplicate even though I did search around here for a similar question, I only found one.
So my programming team in my Engineering class currently use a 32-bit 72MHz ARM Cortex-M3 microprocessor. We're all seniors in high school, and we're struggling to use the libraries and whatnot, mostly due to poor docs from the manufacturer of the Bioloid Premium we're using. However we are about to purchase an 8-bit 16MHz AVR microcontroller because it has a wider range of support online and an easier-to-use library + more documentation. My question here is, would the decreased bit-count as well as the lower processor speed really matter to us? We're not going to be doing a lot of process-intensive programming, but more like a basic robotics class.
So, main differences between an 8-bit 16MHz AVR microprocessor and a 32-bit 72MHz ARM Cortex-M3 microprocessor?
Also, (if it holds any relevancy):
We're using a Bioloid Premium by Robotis w/ CM530 (ARM), about to switch to CM510 (AVR).
We'll be using Embedded C instead of Robotis' RoboPlus IDE as our instruction set.
I have googled around, found out what a bit count is, and read more about its impact on processor speed, but not a lot of documents about it give a clear and concise answer, and that's why I came here, because it's for clear and concise answers. (So please don't tell me to Google it when I've spent the past twenty minutes doing so.)
We're using a Bioloid Premium by Robotis w/ CM530 (ARM), about to switch to CM510 (AVR). We'll be using Embedded C instead of Robotis' RoboPlus IDE as our instruction set.
I looked around at the products you refer to, and your question seems to be missing the issues you should really be concerned with.
The Bioloid Premium kit looks pretty sweet, with all the parts put together and configured for you already. Much of a robotics course is usually concerned with designing the hardware. You are not going to be doing any of that, so your tasks really come down to programming the hardware you are given.
That said, there is a world of difference between the RoboPlus IDE, which seems similar to the Lego Mindstorms drag and drop interface, and writing code in C using AVR Studio!
I have used AVR Studio before, but there was a major change in versions recently. You might need to modify the example programs to work in the latest version, and you will probably need some help with that.
It looks like they supply you with enough example code to use the peripherals, but I don't see right away how to write a main() function to do something like follow a plan. Perhaps there are some examples online.
But to answer your question, you are probably not going to run into any limitations in terms of processor capacity. They switched to a cheaper and more powerful processor to write the newer version of their control software, but the old hardware will be great, too. Working in C, you will become familiar with how to actually use an MCU, and that knowledge will transfer to other chips. The AVR family is a great one to start with. It has lots of features and is pretty sensible in how it works, with lots of documentation and third-party support. Definitely download the datasheet from Atmel for the chip you are using, although it is a dense and difficult read. You will only need to read parts of it. Also, check out the AVR Freaks forums.
This sounds like a fantastic high school course. Have fun with it!
My question here is, would the decreased bit-count as well as the lower processor speed really matter to us? [...] So, main differences between an 8-bit 16MHz AVR microprocessor and a 32-bit 72MHz ARM Cortex-M3 microprocessor?
What a cool project! This is a great opportunity to learn a bit about how processors work and what bit-width and clock speed mean.
Clock speed is conceptually the easiest to understand. Microcontrollers like the AVR and ARM use a clock crystal that sets the speed the circuitry operates at. With a faster clock, the processor can execute more instructions in the same amount of time. The 72MHz clock is more than 4x the 16MHz one, so the ARM processor is going to be able to run 4x faster than the AVR. But what does "run faster" really mean? Processors execute instructions. At the basic level, these are instructions like "add two numbers" and "make the voltage on this pin high". The ARM processor is going to be a lot faster here, but consider what hardware it's going to be talking to: servos. Servo motors listen to a fairly low-speed PWM signal, so at that speed the difference between 72MHz and 16MHz isn't going to become that relevant.
But what about bit-width? This one is a bit more tricky. It doesn't really affect the speed at which your processor runs, but it affects the complexity of the instructions it executes. Let's say that you want to add two really big numbers together - numbers like 100,000 and 200,000. When we add those together on paper, it's just one step. But an 8-bit processor like the AVR works natively on small values (even its 16-bit C int only goes up to 65,535), so in order to operate on numbers that large it'll need to break the addition up into several smaller steps. The 32-bit ARM, on the other hand, can work on numbers that large directly, so it does the addition in one step. I hope that makes sense.
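For example, this single C statement compiles to one 32-bit add on the Cortex-M3, but on the 8-bit AVR the compiler emits a short chain of 8-bit add / add-with-carry instructions (a sketch; the exact count depends on the compiler):

    #include <stdint.h>

    /* One C statement; the number of machine instructions it becomes
       depends on the processor's bit width. */
    uint32_t add_big(uint32_t a, uint32_t b)
    {
        return a + b;   /* ~1 instruction on a 32-bit ARM, ~4 on an 8-bit AVR */
    }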
Anyway, I've done a lot of work with servos on even slower processors than your 16MHz AVR. It'll most likely be just fine for what you want to do, and like you found it has a much more active hobbyist community. And if you're looking for quick examples of code, the Cornell 4760 page has some great projects that you could learn from.

Intel-based hardware speed-ups for DCT?

We are writing an image processing algorithm targeting some Intel hardware. Generally we prefer generic C implementations, but we have identified an algorithm that at its core does a ton of Discrete Cosine Transforms (DCT's) that works extremely well. Unfortunately, our throughput requirements are such that a generic C implementation is about 2 orders of magnitude too slow. I can get one order of magnitude through some other tricks, so if I can improve my DCT's by about an order of magnitude I have a path towards success.
Is Intel's MMX a way to get hardware acceleration for these DCT's? Are there other Intel-specific libraries and/or hardware features that I can exploit to speed these bad boys up?
Where do I start to look? This is a new job for me, and my first time digging hard into Intel hardware, so any pointers would be most appreciated.
Take a look at Intel's Integrated Performance Primitives (IPP) library. It contains a wealth of routines that are heavily optimized to take advantage of the Intel architecture, specifically MMX and SSE. Among many other things, IPP also contains routines for the DCT (documentation here).
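For a sense of what those routines are replacing, here is a naive 8x8 forward DCT-II written directly from the textbook definition (a reference sketch only; an optimized implementation such as IPP's DCT routines, or a hand-written SSE version exploiting the transform's separability, does the same work in a small fraction of the time):

    /* Naive 8x8 forward DCT-II with JPEG-style normalisation. */
    #include <math.h>

    #define N  8
    #define PI 3.14159265358979323846

    void dct8x8_naive(const float in[N][N], float out[N][N])
    {
        for (int u = 0; u < N; u++) {
            for (int v = 0; v < N; v++) {
                double sum = 0.0;
                for (int x = 0; x < N; x++) {
                    for (int y = 0; y < N; y++) {
                        sum += in[x][y]
                             * cos((2 * x + 1) * u * PI / (2.0 * N))
                             * cos((2 * y + 1) * v * PI / (2.0 * N));
                    }
                }
                double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
                double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
                out[u][v] = (float)(0.25 * cu * cv * sum);
            }
        }
    }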

Resources