Easy way to display a continuously updating image in C/Linux

I'm a scientist who is quite comfortable with C for numerical computation, but I need some help with displaying the results. I want to be able to display a continuously updated bitmap in a window, computed from real-time data. I'd like to be able to update the image quickly (e.g. faster than 1 frame/second, preferably 100 fps). For example:
char image_buffer[width*height*3]; /* RGB data */
initializewindow();
for (t = 0; t < t_end; t++)
{
    getdata(data);               /* get some real-time data */
    docalcs(image_buffer, data); /* process the data into an image */
    drawimage(image_buffer);     /* draw the image */
}
What's the easiest way to do this on linux (Ubuntu)? What should I use for initializewindow() and drawimage()?

If all you want to do is display the data (i.e. no need for a GUI), you might want to take a look at SDL: it's straightforward to create a surface from your pixel data and then display it on screen.
Inspired by Artelius' answer, I also hacked up an example program:
#include <SDL/SDL.h>
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#define WIDTH 256
#define HEIGHT 256
static _Bool init_app(const char * name, SDL_Surface * icon, uint32_t flags)
{
    atexit(SDL_Quit);
    if(SDL_Init(flags) < 0)
        return 0;
    SDL_WM_SetCaption(name, name);
    SDL_WM_SetIcon(icon, NULL);
    return 1;
}

static uint8_t * init_data(uint8_t * data)
{
    for(size_t i = WIDTH * HEIGHT * 3; i--; )
        data[i] = (i % 3 == 0) ? (i / 3) % WIDTH :
                  (i % 3 == 1) ? (i / 3) / WIDTH : 0;
    return data;
}

static _Bool process(uint8_t * data)
{
    for(SDL_Event event; SDL_PollEvent(&event);)
        if(event.type == SDL_QUIT) return 0;

    for(size_t i = 0; i < WIDTH * HEIGHT * 3; i += 1 + rand() % 3)
        data[i] -= rand() % 8;

    return 1;
}

static void render(SDL_Surface * sf)
{
    SDL_Surface * screen = SDL_GetVideoSurface();
    if(SDL_BlitSurface(sf, NULL, screen, NULL) == 0)
        SDL_UpdateRect(screen, 0, 0, 0, 0);
}

static int filter(const SDL_Event * event)
{ return event->type == SDL_QUIT; }

#define mask32(BYTE) (*(uint32_t *)(uint8_t [4]){ [BYTE] = 0xff })

int main(int argc, char * argv[])
{
    (void)argc, (void)argv;
    static uint8_t buffer[WIDTH * HEIGHT * 3];

    _Bool ok =
        init_app("SDL example", NULL, SDL_INIT_VIDEO) &&
        SDL_SetVideoMode(WIDTH, HEIGHT, 24, SDL_HWSURFACE);
    assert(ok);

    SDL_Surface * data_sf = SDL_CreateRGBSurfaceFrom(
        init_data(buffer), WIDTH, HEIGHT, 24, WIDTH * 3,
        mask32(0), mask32(1), mask32(2), 0);

    SDL_SetEventFilter(filter);

    for(; process(buffer); SDL_Delay(10))
        render(data_sf);

    return 0;
}

I'd recommend SDL too. However, there's a bit of understanding you need to gather if you want to write fast programs, and that's not the easiest thing to do.
I would suggest this O'Reilly article as a starting point.
But I shall boil down the most important points from a computational perspective.
Double buffering
What SDL calls "double buffering" is generally called page flipping.
This basically means that on the graphics card, there are two chunks of memory called pages, each one large enough to hold a screen's worth of data. One is made visible on the monitor, the other one is accessible by your program. When you call SDL_Flip(), the graphics card switches their roles (i.e. the visible one becomes program-accessible and vice versa).
The alternative is, rather than swapping the roles of the pages, to copy the data from the program-accessible page to the monitor page (using SDL_UpdateRect()).
Page flipping is fast, but has a drawback: after page flipping, your program is presented with a buffer that contains the pixels from 2 frames ago. This is fine if you need to recalculate every pixel every frame.
However, if you only need to modify smallish regions on the screen every frame, and the rest of the screen does not need to change, then UpdateRect can be a better way (see also: SDL_UpdateRects()).
This of course depends on what it is you're computing and how you're visualising it. Analyse your image-generating code - maybe you can restructure it to get something more efficient out of it?
Note that if your graphics hardware doesn't support page flipping, SDL will gracefully use the other method for you.
Software/Hardware/OpenGL
This is another question you face. Basically, software surfaces live in RAM, hardware surfaces live in Video RAM, and OpenGL surfaces are managed by OpenGL magic.
Depending on your hardware, OS, and SDL version, programmatically modifying the pixels of a hardware surface can involve a LOT of memory copying (VRAM to RAM, and then back!). You don't want this to happen every frame. In such cases, software surfaces work better. But then, you can't take advantage of double buffering, nor hardware-accelerated blits.
Blits are block-copies of pixels from one surface to another. This works well if you want to draw a whole lot of identical icons on a surface. Not so useful if you're generating a temperature map.
OpenGL lets you do much more with your graphics hardware (3D acceleration for a start). Modern graphics cards have a lot of processing power, but it's kind of hard to use unless you're making a 3D simulation. Writing code for a graphics processor is possible but quite different to ordinary C.
Demo
Here's a quick demo SDL program that I made. It's not supposed to be a perfect example, and may have some portability problems. (I will try to edit a better program into this post when I get time.)
#include "SDL.h"
#include <assert.h>
#include <stdlib.h>

/* This macro simplifies accessing a given pixel component on a surface. */
#define pel(surf, x, y, rgb) ((unsigned char *)((surf)->pixels))[(y)*((surf)->pitch)+(x)*3+(rgb)]

int main(int argc, char *argv[])
{
    int x, y, t;

    /* Event information is placed in here */
    SDL_Event event;

    /* This will be used as our "handle" to the screen surface */
    SDL_Surface *scr;

    SDL_Init(SDL_INIT_VIDEO);

    /* Get a 640x480, 24-bit software screen surface */
    scr = SDL_SetVideoMode(640, 480, 24, SDL_SWSURFACE);
    assert(scr);

    /* Ensures we have exclusive access to the pixels */
    SDL_LockSurface(scr);

    for(y = 0; y < scr->h; y++)
        for(x = 0; x < scr->w; x++)
        {
            /* This is what generates the pattern based on the xy co-ord */
            t = ((x*x + y*y) & 511) - 256;
            if (t < 0)
                t = -(t + 1);

            /* Now we write to the surface */
            pel(scr, x, y, 0) = 255 - t; /* red */
            pel(scr, x, y, 1) = t;       /* green */
            pel(scr, x, y, 2) = t;       /* blue */
        }

    SDL_UnlockSurface(scr);

    /* Copies the `scr' surface to the _actual_ screen */
    SDL_UpdateRect(scr, 0, 0, 0, 0);

    /* Now we wait for an event to arrive */
    while(SDL_WaitEvent(&event))
    {
        /* Any of these event types will end the program */
        if (event.type == SDL_QUIT
            || event.type == SDL_KEYDOWN
            || event.type == SDL_KEYUP)
            break;
    }

    SDL_Quit();
    return EXIT_SUCCESS;
}

GUI stuff is a regularly-reinvented wheel, and there's no reason not to use a framework.
I'd recommend using either Qt 4 or wxWidgets. If you're using Ubuntu, GTK+ will suffice: it integrates with GNOME and, being a C library, may be more comfortable for you (Qt and wxWidgets both require C++).
Have a look at GTK+, Qt, and wxWidgets.
Here are tutorials for all three:
Hello World, wxWidgets
GTK+ 2.0 Tutorial, GTK+
Tutorials, QT4

In addition to Jed Smith's answer, there are also lower-level frameworks, like OpenGL, which is often used for game programming. Given that you want to use a high frame rate, I'd consider something like that. GTK and the like aren't primarily intended for rapidly updating displays.

In my experience, Xlib via the MIT-SHM extension was significantly faster than SDL surfaces, though I'm not sure I used SDL in the most optimal way.

How to efficiently draw to plain win32 windows using Direct2D and GDI

I've been working on a GUI toolkit for my future programming needs. It's basically reinventing the wheel and implementing many controls found in Windows' common controls, QT and other frameworks. It's going to be used by me mainly.
Its main design guidelines are:
implemented in plain C (not C++) and Win32 (GDI + Direct2D) (no other external dependencies)
easy to look at (even for a long time)
customization similar to Qt's CSS-based stylesheets
easy to render (not much complex geometry)
really good performance (no performance issues, even in large GUI projects)
It's been going quite well for now and I have managed to implement quite a few important and trivial controls. Right now, I am building my slider control that can be a rotary slider (like QDial), or a horizontal or vertical bar slider.
While there are no obvious bugs that I have noticed during my testing, I am questioning the way I am rendering the control (using Direct2D and GDI).
Below you can find the commented draw code and the result it produces. I know it's not perfect by any means, but it works flawlessly for me. Please don't judge my coding style; this question is really not about that.
static int __Slider_Internal_DCDBufferDraw(Slider *sSlider) {
if (!sSlider->_Base.sDraw)
return ERROR_OK;
/* start timer */
LARGE_INTEGER t1, t2;
QueryPerformanceCounter(&t1);
/* appearance depends on enabled state of the control */
_Bool blIsEnabled = IsWindowEnabled(sSlider->_Base.hwWindow /* control's HWND instance */);
D2D1_ELLIPSE sInnerCircle = {
.point = { __SlR_C /* center of circle, essentially width / 2 */, __SlR_C + __ClH(sSlider) / 6.0f /* center + some offset */ },
.radiusX = __SlR_IR + 0.5f, /* IR = inner radius */
.radiusY = __SlR_IR + 0.5f
};
D2D1_ELLIPSE sOuterCircle = {
.point = { __SlR_C, __SlR_C },
.radiusX = __SlR_OR + 0.5f, /* OR = outer radius */
.radiusY = __SlR_OR + 0.5f
};
D2D1_BEGIN:
/*
Global struct "gl_sD2D1Renderer" contains a ID2D1Factory, a ID2D1DCRenderTarget (.sDCTarget), and a ID2D1SolidColorBrush (.sDCSCBrush).
Every control uses this DC to draw Direct2D content. Before anything is drawn, the DC is bound. Right now, I draw everything to a control-instance-specific
HDC "sSlider->_Base.sDraw->hMemDC" in this function. In my actual WM_PAINT handler, I just BitBlt the memory bitmap. This
(1) removes flickering,
(2) improves draw speed for normal WM_PAINT commands, for example, when the client area of the window is uncovered/moved/etc.
In these cases, I just use the most recent representation without redrawing everything because the control
only changes its appearance in reaction to user input.
The brush is used to basically draw all the color information. It just gets its color changed every time it's needed.
The reason I am using a DC render target is because
(1) of its reusability (can be used for drawing all controls, without having to create separate render targets for each control instance)
(2) GDI compatibility (see "__Slider_Internal_DrawNumbersAndText"'s comment below to learn why I need it)
*/
ID2D1DCRenderTarget_BindDC(gl_sD2D1Renderer.sDCTarget, sSlider->_Base.sDraw->hMemDC, &sSlider->_Base.sClientRect);
ID2D1DCRenderTarget_BeginDraw(gl_sD2D1Renderer.sDCTarget);
ID2D1DCRenderTarget_Clear(gl_sD2D1Renderer.sDCTarget, &colBkgnd);
/* rotate the smaller circle by the current slider position (min ... max) */
D2D1_MATRIX_3X2_F sMatrix;
D2D1MakeRotateMatrix(sSlider->flPos, (D2D1_POINT_2F){ __SlR_C, __SlR_C}, &sMatrix);
ID2D1DCRenderTarget_SetTransform(gl_sD2D1Renderer.sDCTarget, &sMatrix);
/* draw the outer circle */
ID2D1SolidColorBrush_SetColor(gl_sD2D1Renderer.sDCSCBrush, blIsEnabled ? &colBtnSurf : &colBtnDisSurf);
ID2D1DCRenderTarget_FillEllipse(gl_sD2D1Renderer.sDCTarget, &sOuterCircle, (ID2D1Brush *)gl_sD2D1Renderer.sDCSCBrush);
ID2D1SolidColorBrush_SetColor(gl_sD2D1Renderer.sDCSCBrush, &colOutline);
ID2D1DCRenderTarget_DrawEllipse(gl_sD2D1Renderer.sDCTarget, &sOuterCircle, (ID2D1Brush *)gl_sD2D1Renderer.sDCSCBrush, 1.0f, NULL);
/* draw the inner circle */
ID2D1SolidColorBrush_SetColor(gl_sD2D1Renderer.sDCSCBrush, blIsEnabled ? (sSlider->_Base.wState & STATE_CAPTURE || sSlider->_Base.wState & STATE_MINSIDE ? &colBtnSelSurf : &colMark) : &colMarkDis);
ID2D1DCRenderTarget_FillEllipse(gl_sD2D1Renderer.sDCTarget, &sInnerCircle, (ID2D1Brush *)gl_sD2D1Renderer.sDCSCBrush);
ID2D1SolidColorBrush_SetColor(gl_sD2D1Renderer.sDCSCBrush, &colOutline);
ID2D1DCRenderTarget_DrawEllipse(gl_sD2D1Renderer.sDCTarget, &sInnerCircle, (ID2D1Brush *)gl_sD2D1Renderer.sDCSCBrush, 1.0f, NULL);
/* reset the transform */
ID2D1DCRenderTarget_SetTransform(gl_sD2D1Renderer.sDCTarget, &gl_sD2D1Renderer.sIdentityMatrix);
/* draw ticks using Direct2D */
__Slider_Internal_DrawTicks(sSlider, 0); /* draw small ticks */
__Slider_Internal_DrawTicks(sSlider, 1); /* draw big ticks */
/* Call EndDraw, check for render target errors, drop the render target if necessary, recreate it and "goto D2D1_BEGIN;" */
ID2D1DCRenderTarget_SafeEndDraw(gl_sD2D1Renderer.sDCTarget, NULL, NULL);
/*
Draw text using plain GDI (no DirectWrite because there is no functioning C-API.)
I have to do this here because I need to finish rendering the D2D content first. If I render GDI content in between Direct2D calls,
it would just be overdrawn because drawing is actually done in "EndDraw", rather than in "DrawEllipse", "Clear", etc. These calls just
build a batch, while GDI functions like "Ellipse" or "ExtTextOut" draw immediately.
*/
__Slider_Internal_DrawNumbersAndText(sSlider);
/* end timer */
QueryPerformanceCounter(&t2);
LARGE_INTEGER freq;
QueryPerformanceFrequency(&freq);
double elapsed = (double)(t2.QuadPart - t1.QuadPart) / (freq.QuadPart / 1000.0);
printf("Draw call of \"%s\" took: %g ms\n", sSlider->_Base.strID, elapsed);
return ERROR_OK; /* 0 */
}
static int __Slider_Internal_DrawTicks(Slider *sSlider, int dwType) {
/* BTC = big tick count */
/* STC = small tick count */
/* check if ticks can be drawn, return if, for instance, not all data is present or tick drawing is disabled */
if (!(dwType ? sSlider->wBTC : sSlider->wSTC) || !(sSlider->wType & (dwType ? SLO_BIGTICKS : SLO_SMALLTICKS)))
return ERROR_OK;
/* tick color */
ID2D1SolidColorBrush_SetColor(gl_sD2D1Renderer.sDCSCBrush, &colOutline); /* RGB(0, 0, 0) */
float flCurrPos = sSlider->sPosRange.flMin; /* start at the minimum possible angle for this slider */
/* calculate the step, i.e. angle to advance based on requested tick count and valid position (angle) range */
float flStep = (sSlider->sPosRange.flMax - sSlider->sPosRange.flMin) / (dwType ? sSlider->wBTC : sSlider->wSTC);
D2D1_MATRIX_3X2_F sMatrix; /* rotation matrix */
D2D1_POINT_2F sP1, sP2; /* start and end point of the line representing a tick */
D2D1_POINT_2F sCenter = {
__SlR_C + 0.5f,
__SlR_C + 0.5f
};
/* calculate tick dimensions given the type (= small or large) */
__Slider_getTickDimensions(sSlider, &sP1, &sP2, dwType);
int dwCount = 0;
do {
/* prevent drawing over big ticks */
if (!dwType && !(sSlider->wSTC % sSlider->wBTC))
if (!(dwCount % (sSlider->wSTC / sSlider->wBTC)))
goto ADD_STEP; /* just advance, do not draw */
if (sSlider->wType & SLT_RADIAL) { /* only do this if our slider is a rotary knob */
/* use the rotation matrix to draw the ticks in the same manner the inner circle of the slider is drawn */
D2D1MakeRotateMatrix(flCurrPos, sCenter, &sMatrix);
ID2D1DCRenderTarget_SetTransform(gl_sD2D1Renderer.sDCTarget, &sMatrix);
}
ID2D1DCRenderTarget_DrawLine(gl_sD2D1Renderer.sDCTarget, sP1, sP2, (ID2D1Brush *)gl_sD2D1Renderer.sDCSCBrush, 1.0f, NULL);
ADD_STEP:
flCurrPos += flStep; /* advance current position by the previously computed step */
} while (dwCount++ < (dwType ? sSlider->wBTC : sSlider->wSTC));
return ERROR_OK;
}
static int __Slider_Internal_DrawNumbersAndText(Slider *sSlider) {
/* only draw numbers if the option is specified */
if (sSlider->wType & SLO_NUMBERS) {
float flPosX, flPosY;
CHAR strString[8] = { 0 }; /* number string buffer */
SIZE sExtends = { 0 };
/* the same as in "DrawTicks" */
float flAngle = sSlider->sPosRange.flMin;
int dwNumber = sSlider->sNRange.dwMin; /* first number in the number range */
float flAStep = (sSlider->sPosRange.flMax - sSlider->sPosRange.flMin) / sSlider->wBTC; /* angle step */
int dwNStep = (sSlider->sNRange.dwMax - sSlider->sNRange.dwMin) / sSlider->wBTC; /* number step */
do {
/* this should be clear what it does */
sprintf_s(strString, 7, "%i", dwNumber);
GetTextExtentPoint32A(sSlider->_Base.sDraw->hMemDC, strString, (int)strlen(strString), &sExtends);
/* calculate text position around the outer circle */
/* gl_flBTL = big tick length, gl_flTDP = pitch between outer circle edge and tick start */
flPosX = cosf(__toRad(flAngle - 90.0f)) /* deg to rad */ * (__SlR_OR + gl_flTDP + gl_flBTL + 10.0f);
flPosY = sinf(__toRad(flAngle - 90.0f)) * (__SlR_OR + gl_flTDP + gl_flBTL + 10.0f);
TextOutA(
sSlider->_Base.sDraw->hMemDC,
(int)(__SlR_C - flPosX),
(int)(__SlR_C - flPosY - sExtends.cy / 2.0f),
strString,
(int)strlen(strString)
);
flAngle += flAStep;
dwNumber += dwNStep;
/* prevent overdrawing first number when 360 degrees range */
if (dwNumber == sSlider->sNRange.dwMax && sSlider->sPosRange.flMin == 0.0f && sSlider->sPosRange.flMax == 360.0f)
break;
} while (dwNumber <= sSlider->sNRange.dwMax);
}
/* draw the main slider text in the middle at the bottom edge of the control */
/* __Cl* = extends of the client area of the window (X = left, Y = top, W = right, H = bottom) */
TextOut(
sSlider->_Base.sDraw->hMemDC,
(__ClW(sSlider) - __ClX(sSlider)) / 2,
(__ClH(sSlider) - __ClY(sSlider)) / 2 + (int)__SlR_OR + 10,
sSlider->_Text.strText,
sSlider->_Text.dwLengthInChars
);
return TEGTK_ERROR_OK; /* 0 */
}
With certain exemplary values given, it produces this result:
While I find the result visually pleasing and its rendering procedure relatively simple, I think it's slow. I have not noticed any performance issues yet; therefore, I have measured the time it takes to complete an entire draw call. Note that this is done every time the slider's appearance changes due to user input.
I have also found that when I move the mouse slowly, the draw calls are way slower than when I move the mouse quickly.
Slow mouse movement:
Fast mouse movement:
The issue is now that I create a separate memory DC for every control instance, which I later draw to using the code above. I have heard that I can only use 10k GDI objects per process. I already use at least 2 per control (a DC and a bitmap). What if I have a really large GUI project with a lot going on? I really do not ever want to run into the limits.
That's why I was thinking of moving the paint code entirely into WM_PAINT and using the DC I get from "BeginPaint()" (so no extra memory DC and bitmap needed). Basically forcing an entire repaint when it gets called. That's where the speed issue comes into play as WM_PAINT can be sent really frequently.
I know I can smartly repaint only what's needed, but the atomic primitive draw calls do not do a lot when it comes to performance. What takes a lot of time is binding the DC and EndDraw.
I now have a dilemma: I want to be fast, but also not use more GDI objects than I absolutely have to. So not using a separate memory buffer is an option, if the draw call described above is in fact not objectively slow.
These are my questions:
Is my drawing code actually slow or is it okay if redrawing the control takes like 1-5 ms on average?
What can I do to improve its performance if it's actually slow? (I have tried to buffer as much computational data as I can -- while it essentially doubles the control's memory footprint, it does not really do anything for performance.)
How is the actual redrawing done in commercially available frameworks such as Qt and wxWidgets?
I really hope it's clear what I want. If there are any questions, feel free to ask. It's not only about good code, but also about good design. I want to make sure I do not implement major design flaws that early in the project.

C - how to read color of a screen pixel (FAST)? (in Windows)

So I am looking for a way to read a color of a screen pixel in C code.
I already found an implementation in C for *nix (which uses the X11/Xlib library, which as I understand is for *nix systems only), and I tried the code on a Linux machine; it ran pretty fast (it reads 8K pixels in about 1 second).
Here's the code in C that I've found and forked:
#include <X11/Xlib.h>

void get_pixel_color (Display *d, int x, int y, XColor *color)
{
    XImage *image;
    image = XGetImage (d, RootWindow (d, DefaultScreen (d)), x, y, 1, 1, AllPlanes, XYPixmap);
    color->pixel = XGetPixel (image, 0, 0);
    XDestroyImage (image); /* frees the image data too, unlike plain XFree */
    XQueryColor (d, DefaultColormap(d, DefaultScreen (d)), color);
}

// Your code
XColor c;
get_pixel_color (display, 30, 40, &c);
printf ("%d %d %d\n", c.red, c.green, c.blue);
And I was looking for equivalent solution for Windows as well.
I came across this code (I've put the code about reading screen pixel in a 'for' loop):
typedef COLORREF (WINAPI *GetPixelFn)(HDC, int, int);

GetPixelFn pGetPixel;
HINSTANCE _hGDI = LoadLibrary("gdi32.dll");
if(_hGDI)
{
    pGetPixel = (GetPixelFn)GetProcAddress(_hGDI, "GetPixel");
    HDC _hdc = GetDC(NULL);
    if(_hdc)
    {
        int i;
        int _red = 0;
        int _green = 0;
        int _blue = 0;
        COLORREF _color;
        for (i = 0; i < 8000; i++)
        {
            _color = pGetPixel(_hdc, 30, 40);
            _red   = GetRValue(_color);
            _green = GetGValue(_color);
            _blue  = GetBValue(_color);
        }
        ReleaseDC(NULL, _hdc); /* release only after the loop is done with the DC */
        printf("Red: %d, Green: %d, Blue: %d", _red, _green, _blue);
    }
    FreeLibrary(_hGDI);
}
(using gdi32.dll and windows.h...)
and the 'for' portion of the code (where we read 8K pixels) runs A LOT slower than the X11 solution:
it takes 15 seconds to finish, compared to 1 second with the X11/Xlib library!
So, how can I make it better? Or is there any other, faster implementation for reading pixel colors with C code on a Windows machine?
Thanks ahead!
I would suggest using loop unrolling (also called loop unwinding). Basically, what this does is execute multiple cycles of your loop in a single iteration:
// Loop the equivalent of `n` cycles, ignoring the least significant bit
for (unsigned int i = 0; i < (n & ~0x01); i += 2)
{
    do_some_operation(i);
    do_some_operation(i + 1);
}

// Perform the last cycle manually, if one needs to be completed
if (n & 0x01)
{
    do_some_operation(n - 1);
}
In this code, the loop ignores the least significant bit of n (which determines the parity of n) so that it is safe to increment i by 2 and perform the equivalent of 2 cycles in a single iteration, roughly halving the loop overhead (counter updates and branches) of a conventional for (unsigned int i = 0; i < n; i++) loop. The final if statement checks the parity of n; if n is odd, the last cycle is performed separately.
Of course, this could be reimplemented to increment i by more than 2, but this would become increasingly complex. There is also an alternative to this, Duff's Device. It is basically the same idea, but uses a switch/case block.
After many tests, I've found that just about anything you do to read pixels off the screen in Windows using GDI takes ~16 ms (about 1 frame), whether it is reading a single pixel or reading even a small area with BitBlt. There doesn't seem to be any clean solution. I will be experimenting with the media libraries to see if I can get anywhere, but the Internet is pretty sure that doing anything like this in Windows is just an awful mess, and that really terrible things have to be done to implement things like VNC or Fraps.

C how to draw a point / set a pixel without using graphics library or any other library functions

I am trying to understand how I can draw a set of points (/set the pixels) that form a circle without using the library functions.
Now, getting the (x,y) co-ordinates of the points given the radius is straightforward.
for (x = -r; x < r; x = x + 0.1) {
    y = sqrt(r*r - x*x);
    draw(x, y, 0, 0);
}
But once I have the points, how to actually draw the circle is what confuses me. I could use a graphics library, but I want to understand how it can be done without one.
void draw(float x, float y, float center_x, float center_y) {
    // logic to set pixel given x, y and the circle's center_x and center_y
    // basically print x and y on the screen, say print as a dot '.'
    // you'd need some sort of a 2D char array, but how do you translate x and y
    // to pixel positions?
}
Could someone share any links/references or explain how this works?
#include <stdio.h>
#include <string.h>

char display[80][26];

/* One possible mapping: world (0, 0) lands in the middle of the 80x26 grid. */
int normalize_x(float x) { return (int)(x + 40.0f); }
int normalize_y(float y) { return (int)(y + 13.0f); }

int visible(float x, float y) {
    return normalize_x(x) >= 0 && normalize_x(x) < 80 &&
           normalize_y(y) >= 0 && normalize_y(y) < 26;
}

void clearDisplay(void) {
    memset(display, ' ', sizeof(display));
}

void printDisplay(void) {
    for (short row = 0; row < 26; row++) {
        for (short col = 0; col < 80; col++) {
            printf("%c", display[col][row]);
        }
        printf("\n");
    }
}

void draw(float x, float y, float center_x, float center_y) {
    if (visible(x, y)) {
        display[normalize_x(x)][normalize_y(y)] = '.';
    }
}
EDIT:
You changed your comment to incorporate more of your question, so I will expand my answer a bit.
You have two sets of coordinates:
world coordinates (like scaled longitude and latitude on a world map, or femtometers in an electron microscope image): these are mostly your x and y
display coordinates: these are the representation on your displaying device, like a Nexus 7 or a Nexus 10 tablet, with its physical dimensions (pixels or pels or dots per inch)
You need a metric that transforms your world coordinates into display coordinates. To make things more complicated, you need a window (the part of the world you want to show the user) to clip the things you cannot show (like Africa, when you want to show Europe). And you may want to zoom your world coordinates to match your display coordinates (how much of Europe you want to display).
These metrics and clippings are simple algebraic transformations:
zoom the world coordinate to the display coordinate: display-x = world-x * factor (femtometers or kilometers to pixels)
translate the world center to the display center: display-x + adjustment
and so on. Just search Wikipedia for "algebraic transformation" or "geometric transformation".
It's a tough question because technically C doesn't have any built-in input/output capability. Even writing to a file requires a library. On certain systems, like real-mode DOS, you could directly access the video memory and set pixels by writing values into that memory, but modern OSes really get in the way of doing that. You could try writing a bootloader to launch the program in a more permissive CPU mode without an OS, but that's an enormous can of worms.
So, using the bare minimum, the stdio library, you can write to stdout using ASCII graphics as the other answer shows, or you can output a simple graphics format like XBM, which can be viewed with a separate image-viewing program. A better choice might be the netpbm formats.
For the circle-drawing part, take a look at the classic Bresenham algorithm. Or, for way too much information, chapter 1 of Jim Blinn's A Trip Down the Graphics Pipeline describes 15 different circle-drawing algorithms!
Here is a program that takes advantage of Unicode braille characters.
#include <stdio.h>
#include <wchar.h>
#include <locale.h>

#define PIXEL_TRUE 1
#define PIXEL_FALSE 0

// Core function: prints one braille character (U+2800 plus the dot pattern).
void drawBraille(int type) {
    if(type < 0 || type > 255) {
        return;
    }
    setlocale(LC_CTYPE, "");
    wchar_t c = 0x2800 + type;
    wprintf(L"%lc", c);
}

/*
    Pixel array: bit values of the 8 braille dots, in visual order
    (left column, then right column):
    0x01, 0x08,
    0x02, 0x10,
    0x04, 0x20,
    0x40, 0x80
*/
int pixelArr[8] = {
    0x01, 0x08,
    0x02, 0x10,
    0x04, 0x20,
    0x40, 0x80
};

typedef int cell[8];

void drawCell(cell cell) {
    int total = 0; /* must be initialized */
    for(int i = 0; i < 8; i++) {
        if(cell[i] == 1) {
            total += pixelArr[i];
        }
    }
    drawBraille(total);
}

// Main.
int main(void) {
    cell a = {
        0, 1,
        0, 0,
        1, 1,
        0, 1
    };
    drawCell(a);
    return 0;
}
You're Welcome!

NDS Homebrew: Multiple Animation Speeds for Sprites

I have been experimenting with the devkitARM toolchain for NDS homebrew recently. Something I would like to better understand, however, is how to control sprite animation speeds. The only way I know of doing this is by 'counting frames'. For example, the following code could be placed into the "animate_simple" example included with devkitPro:
int main(void) {
    int frame = 0;
    ...
    while(1) {
        ...
        if(++frame > 9)
            frame = 0;
    }
    return 0;
}
This is generally fine, but it means that every animation driven by the main loop runs at the same set speed. How would I go about having two different sprites, each animating at a different speed? Any insight would be greatly appreciated.
Use a separate frame counter for each sprite. For example, you can create a sprite struct:
typedef struct _Sprite Sprite;

struct _Sprite
{
    int frame;
    int count;
    int delay; /* handles the speed of animation */
    int limit;
};

// initialize all fields

void sprite_update(Sprite* s)
{
    if ( ( s->count++ % s->delay ) == 0 )
    {
        if ( s->frame++ > s->limit ) s->frame = 0;
    }
}
Give delay a small value for fast animation, and a larger value for slow animation.
Sample Usage:
Sprite my_spr, my_spr_2;

/* initialize the fields, or write some function for initializing them. */
my_spr.delay = 5;    /* fast */
my_spr_2.delay = 10; /* slow */

/* < in main loop > */
while(1){
    ...
    sprite_update(&my_spr);
    sprite_update(&my_spr_2);
}
Note:
Since you are targeting only one platform, the best way to control animation speed is to monitor the frame rate (or "counting frames", as you call it). The good thing about programming for consoles is that you don't have to set timed delays: all console devices of the same model usually run at the same speed, so the animation speed you get on your machine (or emulator) is the same everyone gets. Differing processor speeds are a real headache when programming for the PC.

How can this function be optimized? (Uses almost all of the processing power)

I'm in the process of writing a little game to teach myself OpenGL rendering as it's one of the things I haven't tackled yet. I used SDL before and this same function, while still performing badly, didn't go as over the top as it does now.
Basically, there is not much going on in my game yet, just some basic movement and background drawing. When I switched to OpenGL, it appears as if it's way too fast. My frames per second exceed 2000 and this function uses up most of the processing power.
What is interesting is that the program in its SDL version used 100% CPU but ran smoothly, while the OpenGL version uses only about 40-60% CPU but seems to tax my graphics card in such a way that my whole desktop becomes unresponsive. Bad.
It's not a very complex function: it renders a 1024x1024 background tile according to the player's X and Y coordinates to give the impression of movement while the player graphic itself stays locked in the center. Because it's a small tile for a bigger screen, I have to render it multiple times to stitch the tiles together into a full background. The two for loops in the code below iterate 12 times combined, so I can see why this is inefficient when called 2000 times per second.
So to get to the point, this is the evil-doer:
void render_background(game_t *game)
{
    int bgw;
    int bgh;
    int x, y;

    glBindTexture(GL_TEXTURE_2D, game->art_background);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &bgw);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &bgh);

    glBegin(GL_QUADS);

    /*
     * Start one background tile too early and end one too late
     * so the player can not outrun the background
     */
    for (x = -bgw; x < root->w + bgw; x += bgw)
    {
        for (y = -bgh; y < root->h + bgh; y += bgh)
        {
            /* Offsets */
            int ox = x + (int)game->player->x % bgw;
            int oy = y + (int)game->player->y % bgh;

            /* Top Left */
            glTexCoord2f(0, 0);
            glVertex3f(ox, oy, 0);

            /* Top Right */
            glTexCoord2f(1, 0);
            glVertex3f(ox + bgw, oy, 0);

            /* Bottom Right */
            glTexCoord2f(1, 1);
            glVertex3f(ox + bgw, oy + bgh, 0);

            /* Bottom Left */
            glTexCoord2f(0, 1);
            glVertex3f(ox, oy + bgh, 0);
        }
    }

    glEnd();
}
If I artificially limit the speed by calling SDL_Delay(1) in the game loop, the FPS drops to ~660 ± 20 and I get no "performance overkill". But I doubt that is the correct way to go about this.
For the sake of completion, these are my general rendering and game loop functions:
void game_main()
{
    long current_ticks = 0;
    long elapsed_ticks;
    long last_ticks = SDL_GetTicks();
    game_t game;
    object_t player;

    if (init_game(&game) != 0)
        return;
    init_player(&player);
    game.player = &player;

    /* game_init() */
    while (!game.quit)
    {
        /* Update number of ticks since last loop */
        current_ticks = SDL_GetTicks();
        elapsed_ticks = current_ticks - last_ticks;
        last_ticks = current_ticks;

        game_handle_inputs(elapsed_ticks, &game);
        game_update(elapsed_ticks, &game);
        game_render(elapsed_ticks, &game);

        /* Lagging stops if I enable this */
        /* SDL_Delay(1); */
    }

    cleanup_game(&game);
    return;
}

void game_render(long elapsed_ticks, game_t *game)
{
    game->tick_counter += elapsed_ticks;
    if (game->tick_counter >= 1000)
    {
        game->fps = game->frame_counter;
        game->tick_counter = 0;
        game->frame_counter = 0;
        printf("FPS: %d\n", game->fps);
    }

    render_background(game);
    render_objects(game);
    SDL_GL_SwapBuffers();

    game->frame_counter++;
    return;
}
According to gprof profiling, even when I limit the execution with SDL_Delay(), it still spends about 50% of the time rendering my background.
Turn on VSync. That way you'll calculate graphics data exactly as fast as the display can present it to the user, and you won't waste CPU or GPU cycles calculating extra frames in between that will just be discarded because the monitor is still busy displaying a previous frame.
First of all, you don't need to render the tile x*y times - you can render it once for the entire area it should cover and use GL_REPEAT to have OpenGL cover the entire area with it. All you need to do is to compute the proper texture coordinates once, so that the tile doesn't get distorted (stretched). To make it appear to be moving, increase the texture coordinates by a small margin every frame.
Now down to limiting the speed. What you want to do is not to just plug a sleep() call in there, but measure the time it takes to render one complete frame:
void FrameCap (time_t desiredFPS, time_t actualFrameTime)
{
    time_t desiredFrameTime = 1000 / desiredFPS; /* target frame time in ms */
    if (desiredFrameTime > actualFrameTime)
        SDL_Delay (desiredFrameTime - actualFrameTime); /* there is a small imprecision here */
}

time_t startTime = (time_t) SDL_GetTicks ();
// render frame
FrameCap (60, (time_t) SDL_GetTicks () - startTime);
There are ways to make this more precise (e.g. by using the performance counter functions on Windows 7, or using microsecond resolution on Linux), but I think you get the general idea. This approach also has the advantage of being driver independent and - unlike coupling to V-Sync - allowing an arbitrary frame rate.
At 2000 FPS it only takes 0.5 ms to render the entire frame. If you want to get 60 FPS then each frame should take about 16 ms. To do this, first render your frame (about 0.5 ms), then use SDL_Delay() to use up the rest of the 16 ms.
Also, if you are interested in profiling your code (which isn't needed if you are getting 2000 FPS!) then you may want to use High Resolution Timers. That way you could tell exactly how long any block of code takes, not just how much time your program spends in it.
