Can I use a GPU for GPGPU on any system - C

I wish to use the GPU of a system for GPGPU. The machine is remote, I don't have administrative rights, and I don't know anything about its drivers. What I do know is that it has a Matrox VGA card. Can I use it for GPGPU with C code and the gcc compiler, or do I need some kind of driver? Or can I only use OpenGL and twist the logic to suit my purpose?

There is no easy way to do this using OpenGL. It would be easier if you knew the graphics card supported a GPGPU framework, e.g. CUDA, OpenCL, or AMD Stream. Then you could use one of these APIs to write a program that uses the GPU for computation; for this you will need the corresponding SDK. But even with these APIs it is non-trivial to use a GPU for complex calculations.

A few Matrox video cards support OpenGL and/or DirectX, so you might get away with what you want to do through shaders written in OpenGL/GLSL or DirectX/HLSL.
Check the specification of your video card.
Warning: these cards are not known to have particularly good GPUs.

Using the OpenCL/CUDA/Stream capabilities of a graphics card requires drivers that expose the functionality. Aside from that, older cards (like, say, the ATI X800 series) lack the hardware needed to do GPGPU work efficiently, and thus are unusable for such purposes.
I doubt Matrox VGA cards have any support for GPGPU whatsoever.
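If you can compile and run code on that machine, one quick way to find out whether any GPGPU-capable driver is installed is to probe for OpenCL platforms at runtime. A minimal sketch, assuming an OpenCL ICD loader is even present to link against (if the link step fails, that is itself your answer):

    /* probe_opencl.c -- check whether any OpenCL platform/driver is installed.
     * Build (assuming an OpenCL ICD loader is available): gcc probe_opencl.c -lOpenCL
     */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_uint nplatforms = 0;
        /* Ask the ICD loader how many OpenCL platforms (vendor drivers) exist. */
        cl_int err = clGetPlatformIDs(0, NULL, &nplatforms);

        if (err != CL_SUCCESS || nplatforms == 0) {
            printf("No OpenCL platforms found -- no usable GPGPU driver here.\n");
            return 1;
        }
        printf("Found %u OpenCL platform(s); GPGPU may be possible.\n", nplatforms);
        return 0;
    }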

Related

How to create a graphics canvas in pure C to display graphic figures, without any libraries and platform independent

I think it must be possible to create a graphics library without using any other library,
just drawing circles, triangles, and rectangles with basic math. If so, where do I start? How do I make the "drawable area"?
Is it possible to draw to the screen in pure C, or is assembly required?
Graphics programming is inherently platform dependent. Let's say, for the sake of argument, there were only two operating systems: Linux and Windows. You can use platform-specific functions on both of them to create windows and draw something. For your application to be "platform independent" in this context would mean detecting which OS you are running on (say, using preprocessor defines at compile time) and using different system calls based on that. However, this gets really messy, really fast.
It gets even worse when you're talking about 3D (or hardware accelerated 2D), because different graphics cards again behave differently. So, again, even if there were only two graphics cards (plus the two operating systems), you're already at four different cases for the same basic operation of, say, drawing a circle inside a window.
Can you do it?
Technically, yes. But graphics libraries exist precisely because most people wouldn't want to.
What I would personally recommend, if you don't want to rely too heavily on third-party libraries, is to use OpenGL. Yes, it's a library, but it comes preinstalled on most systems.
If you actually want to create your own platform-independent graphics library, I would suggest getting comfortable with the existing ones first, just to get a feel for what is involved in making something like that work.
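That said, the platform-independent half really is just math writing pixels into memory. Here is a minimal sketch of that half: the "drawable area" is nothing but an in-memory array, and actually displaying it is exactly the platform-specific step described above (this sketch sidesteps that by dumping the buffer as text):

    /* Rasterize a circle into an in-memory framebuffer using the midpoint
     * circle algorithm -- pure C, no libraries. Getting the buffer onto the
     * screen is the platform-dependent part this sketch deliberately omits. */
    #include <stdio.h>
    #include <string.h>

    #define W 40
    #define H 40

    static char fb[H][W];  /* our "drawable area": just memory */

    static void put_pixel(int x, int y)
    {
        if (x >= 0 && x < W && y >= 0 && y < H)
            fb[y][x] = '#';
    }

    static void draw_circle(int cx, int cy, int r)
    {
        int x = r, y = 0, err = 1 - r;
        while (x >= y) {
            /* plot all eight symmetric octants */
            put_pixel(cx + x, cy + y); put_pixel(cx - x, cy + y);
            put_pixel(cx + x, cy - y); put_pixel(cx - x, cy - y);
            put_pixel(cx + y, cy + x); put_pixel(cx - y, cy + x);
            put_pixel(cx + y, cy - x); put_pixel(cx - y, cy - x);
            y++;
            if (err < 0) err += 2 * y + 1;
            else { x--; err += 2 * (y - x) + 1; }
        }
    }

    int main(void)
    {
        memset(fb, '.', sizeof fb);
        draw_circle(W / 2, H / 2, 15);
        for (int y = 0; y < H; y++)   /* "display": dump the buffer as text */
            printf("%.*s\n", W, fb[y]);
        return 0;
    }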

Changing display modes from the command line

Way, way back in the day I tried to learn C from a game programming book. If I recall correctly, one of the first things your game "engine" would do was switch display modes to render. This involved a bit of asm to switch to a 640x480 display mode (mode 13, maybe?) so you could draw directly to the screen. Something like that.
My question is, what is the modern equivalent of this? I'm interested in writing a command-line program that does something similar: drops into some kind of raster mode for me to draw to. But I do not assume that my program would be running under some kind of window manager like KDE, Unity, Aqua, etc.
Would this be something that OpenGL could provide (or does OpenGL assume a window manager too)? My proposed program isn't a game, but would ideally start with a basic clear screen that I can draw primitives on (2D lines, circles, rects, etc.).
Cheers!
Modern operating systems don't give programmers as convenient access to low-level graphics routines as they used to. Partly this is due to the advent of the GPU, which makes utilizing the graphics hardware a much more significant challenge than if you only had a CPU. The other reason is that, as window managers have gotten more and more complex, the graphical sandbox each operating system gives a programmer has become more constrained.
That being said, OpenGL is definitely worth looking at. It's cross-platform and versatile, and it automatically utilizes any hardware available (including the graphics card). OpenGL itself doesn't directly provide a windowing context, but you can easily create one with the OpenGL Utility Toolkit (GLUT). OpenGL is, however, very low-level: you'll have to deal with frame buffers and flushing and bit masks and all sorts of low-level nonsense that can make OpenGL development a nightmare if you haven't done it before.
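To make the GLUT route concrete, here is roughly the smallest program that gets you a cleared window with a couple of 2D primitives drawn into it (a sketch assuming GLUT and OpenGL development headers are installed; link flags vary by platform):

    /* Minimal GLUT window: clear the screen, draw a line and a rectangle.
     * Build e.g.: gcc demo.c -lglut -lGL   (library names vary by platform) */
    #include <GL/glut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);       /* start from a blank screen */

        glColor3f(1.0f, 1.0f, 1.0f);
        glBegin(GL_LINES);                  /* a 2D line */
        glVertex2f(-0.8f, -0.8f);
        glVertex2f(0.8f, 0.8f);
        glEnd();

        glBegin(GL_LINE_LOOP);              /* a rectangle outline */
        glVertex2f(-0.5f, -0.25f);
        glVertex2f(0.5f, -0.25f);
        glVertex2f(0.5f, 0.25f);
        glVertex2f(-0.5f, 0.25f);
        glEnd();

        glFlush();                          /* push commands to the screen */
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
        glutInitWindowSize(640, 480);
        glutCreateWindow("raster demo");
        glutDisplayFunc(display);
        glutMainLoop();                     /* hand control to GLUT */
        return 0;
    }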
If I were starting a project, I would probably want a more robust graphics environment that provides drawing functions and windowing out of the box. Both SDL and SFML provide low-level graphics APIs that will be a little more friendly to start with. They are both implemented on top of OpenGL so you can use any of the OpenGL features when you want to, but you don't have to worry about some of the more tedious details.
As a side note, C might not be the best language to get started with graphics programming these days. If you want a really simple graphics environment that is becoming more relevant every day, you might want to check out what the web has to offer: JavaScript and the HTML5 canvas provide a very simple interface for drawing primitives, images, etc.

OpenCL: which SDK is best?

I am a beginner in OpenCL programming. My PC has Windows 8.1 with both Intel graphics and an AMD Radeon 7670. When I searched for an OpenCL SDK and sample "hello world" programs to download, I found that there are separate SDKs, with programs in entirely different formats. I have to use C, not C++. Can anyone suggest which SDK I should install? Please help.
At the lowest level, the various OpenCL SDKs are the same; they all include cl.h from the Khronos website. Once you've included that header you can write to the OpenCL API, and then you need to link to OpenCL.lib, which is also supplied in the SDK. At runtime, your application will load the OpenCL.dll that your GPU vendor has installed in /Windows/System32.
Alternatively, you can include cl.hpp and use the C++ wrapper, but since you said you're a C programmer, and because most of the books use the C API, stick with cl.h. I think this might account for the "programs in entirely different formats" observation you made, which is why I bring it up here.
The benefit of one SDK over another typically is for profiling and debugging. The AMD SDK, for example, includes APP Profiler (or now CodeXL) which will help you figure out how to make your kernels faster. NVIDIA supplies Parallel Nsight for the same purpose, and Intel also has performance tools.
So you might choose your SDK based on the hardware in your machine, but understand that once you've coded to the OpenCL API, your application can run on other GPUs from other vendors -- that is the benefit of OpenCL. You should even be able to get samples from one vendor to execute on hardware from another.
One thing to be careful of is versions: If you code to an OpenCL 1.2 SDK you might not run on OpenCL 1.1 hardware.
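To make the "code to the OpenCL API once, run on any vendor's runtime" point concrete, here is a minimal host-side sketch in plain C. It only lists the first GPU it finds, and it assumes an ICD loader (OpenCL.dll / libOpenCL) is installed:

    /* Minimal OpenCL host program: find the first GPU device and print its
     * name. Compile against any vendor's SDK (include cl.h, link OpenCL):
     *   gcc hello_cl.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        char name[256];

        /* Take the first platform the ICD loader reports. */
        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
            fprintf(stderr, "no OpenCL platform\n");
            return 1;
        }
        /* Ask that platform for a GPU device. */
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL)
                != CL_SUCCESS) {
            fprintf(stderr, "no GPU device on this platform\n");
            return 1;
        }
        clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof name, name, NULL);
        printf("OpenCL GPU device: %s\n", name);
        return 0;
    }

The same binary behaves identically whichever vendor's SDK you compiled it with; at runtime the ICD loader dispatches to whatever drivers are installed.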
For me, the best thing about OpenCL is that you do not need an SDK at all, because it abstracts the different vendor implementations behind a common interface (see the answer in this thread: Do I really need an OpenCL SDK?).

Is it possible to work with the sound card of a system and produce notes using the C language?

I was wondering if it's possible to use a sound card and produce various notes from it using assembly or the C programming language.
See this SO answer: Streaming Data to Sound Card Using C on Windows,
which points you towards http://www.portaudio.com/
PortAudio is a free, cross-platform, open-source, audio I/O library. It lets you write simple audio programs in 'C' or C++ that will compile and run on many platforms including Windows, Macintosh OS X, and Unix (OSS/ALSA). It is intended to promote the exchange of audio software between developers on different platforms. Many applications use PortAudio for Audio I/O.
PortAudio provides a very simple API for recording and/or playing sound using a simple callback function or a blocking read/write interface. Example programs are included that play sine waves, process audio input (guitar fuzz), record and playback audio, list available audio devices, etc.
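For a taste of that callback API, here is a minimal sketch that plays a two-second 440 Hz sine "note" through the default output device (assuming PortAudio is installed; error checking omitted for brevity):

    /* Play a 440 Hz sine "note" for two seconds via PortAudio's callback API.
     * Build (assuming PortAudio is installed): gcc note.c -lportaudio -lm */
    #include <math.h>
    #include <portaudio.h>

    #define SAMPLE_RATE 44100.0
    #define FREQ        440.0   /* concert A */

    static double phase = 0.0;

    /* PortAudio calls this whenever it needs more audio samples. */
    static int callback(const void *in, void *out, unsigned long frames,
                        const PaStreamCallbackTimeInfo *t,
                        PaStreamCallbackFlags flags, void *user)
    {
        float *buf = (float *)out;
        (void)in; (void)t; (void)flags; (void)user;
        for (unsigned long i = 0; i < frames; i++) {
            buf[i] = (float)(0.2 * sin(phase));          /* quiet sine wave */
            phase += 2.0 * M_PI * FREQ / SAMPLE_RATE;
        }
        return paContinue;
    }

    int main(void)
    {
        PaStream *stream;
        Pa_Initialize();
        /* mono float output on the default device */
        Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, SAMPLE_RATE,
                             paFramesPerBufferUnspecified, callback, NULL);
        Pa_StartStream(stream);
        Pa_Sleep(2000);                                  /* let the note ring */
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        Pa_Terminate();
        return 0;
    }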
Yes, it is possible. Implementation of that will greatly depend on the system you will be coding for.
You will generally have a choice between working with the DAC (digital-to-analog converter) output or, possibly, accessing MIDI.
I'm no expert in sound generation, but... Of course it's possible. Something is needed to make calls to the sound card at some point in any application that uses audio, after all.
However, in almost all cases it's better to make calls to an API, and let the existing sound card driver of the system do all the busywork. Much more portable (...to an extent, at least), and much easier.
http://www.linux.com/archive/feature/113775 might have some good info. For Windows Vista/7, you can check out http://msdn.microsoft.com/en-us/library/dd370784%28v=vs.85%29.aspx.
Oh, and in many cases you'd be better off using an existing software library that can produce the notes rather than trying to generate the waveforms yourself. (See Fredrik's answer.)

How to activate nVidia cards programmatically on new MacBookPros for CUDA programming?

The new MacBook Pros come with two graphics adapters, the Intel HD Graphics and the NVIDIA GeForce GT 330M. OS X switches back and forth between them, depending on the workload, detection of an external monitor, or activation of Rosetta.
I want to get my feet wet with CUDA programming, and unfortunately the CUDA SDK doesn't seem to take care of this back-and-forth switching. When the Intel adapter is active, no CUDA device is detected; when the NVIDIA card is active, it is detected. So my current workaround is to use the little tool gfxCardStatus (http://codykrieger.com/gfxCardStatus/) to force the card on or off, just as I need it, but that's not satisfactory.
Does anybody here know what the Apple-blessed, Apple-recommended way is to (1) detect the presence of a CUDA card, (2) to activate this card when present?
Well, supposedly Mac OS X should switch back and forth when needed, and apparently it doesn't consider CUDA.
In Snow Leopard, Apple introduced OpenCL, which is meant to be used by any application to program the GPU. This is probably Apple's recommended way of achieving that, instead of CUDA.
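For part (1), detecting the card, the CUDA runtime API itself can report whether a CUDA-capable device is currently visible; on these MacBook Pros the answer will change depending on which adapter is active. A minimal sketch (this only detects; it does not activate the card):

    /* Detect whether a CUDA device is currently visible to the runtime.
     * Build with NVIDIA's toolchain: nvcc detect.c -o detect */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);

        if (err != cudaSuccess || count == 0) {
            /* e.g. the Intel HD Graphics is active, so the GT 330M is hidden */
            printf("No CUDA device visible: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("CUDA device %d: %s\n", i, prop.name);
        }
        return 0;
    }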
I am testing CUDA and OpenCL on the NVIDIA platform. All my applications (I have to write them with both the CUDA and OpenCL frameworks) achieve the same performance (measured in MFLOPS).
But: if you use local-memory optimizations tuned for NVIDIA, there can be problems running the application on an ATI GPU. So this is not really cross-platform. :(
