I have run into a problem that I'm rather stumped on, because every solution I can think of has an issue that keeps it from working fully. I am working on a game on the MSP430F5529 that, when first powered up, draws two images to the screen endlessly using a loop and cycle delays. I would like it so that when the user presses the start button (a simple high-edge trigger on a port) the program immediately stops drawing those screens, no matter what part of the process it's in, and starts executing the rest of the code that runs the game.
I could put the function that draws the images on screen in a do-while loop, but then it wouldn't be asynchronous, as the current image being drawn would have to finish before it moved on.
I'd use the break statement, but that only works when it's directly inside the loop, and I don't think it works from an ISR.
I could put the entire rest of the program in the ISR I use for the start button press, so that the screen drawing is essentially never returned to, but that's really messy, poor coding, and would cause a lot of problems later.
Essentially, I want to make it so that when the button is pressed the program will immediately jump to the part of the program that is the actual game and forget about drawing those images on the screen. Is it possible to somehow have an ISR that doesn't return to what was currently happening after the code in the routine is executed? Basically, once the program starts moving forward (the start button is pressed) I don't want to come back to the function that draws the images unless I explicitly call it again.
The only thing I can think of is goto, which in this particular instance I feel would not actually be too bad, though I want to avoid using it for fear of it becoming a habit, since it's a poor solution in most cases. However, it might not even work, because I have a feeling that using goto in an ISR would really mess up the stack.
Any ideas? Any suggestions are appreciated.
What you want is basically a "context switch". You need to modify the saved program counter and stack pointer that will be restored when you return from the ISR, and then do the normal ISR return so the interrupt mask is cleared, the stack is restored, etc. As noted in the comments to your question, this likely requires some manual assembly code.
I'm not familiar with the MSP430, but on other architectures these saved values live in a structure of saved registers on the kernel stack or interrupt-context stack (or maybe just "the stack" on some microcontrollers), or possibly in special registers; they are saved automatically by the CPU when it jumps to your ISR. So you have to find where they are stored and change them there.
If you relax your requirement from "immediately" to "so fast that the user doesn't notice", you can put an if (button_pressed) check into some loop in the image drawing routine.
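For example, a minimal sketch of that approach (the pin P1.1, the CCS-style interrupt syntax, and the draw_image_chunk helper are all assumptions, not your actual code):

#include <msp430.h>
#include <stdbool.h>

extern void draw_image_chunk(void);    // placeholder for a small piece of your drawing code

volatile bool start_pressed = false;   // set in the ISR, polled by the drawing loop

void draw_intro_images(void)
{
    while (!start_pressed) {
        draw_image_chunk();            // keep each chunk short so the flag is checked often
        __delay_cycles(1000);
    }
}

#pragma vector = PORT1_VECTOR
__interrupt void port1_isr(void)
{
    if (P1IFG & BIT1) {                // assumed: start button on P1.1
        start_pressed = true;
        P1IFG &= ~BIT1;                // clear the interrupt flag
    }
}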
If you really want to abort the image drawing immediately, you can do so by resetting the MCU (for example, by writing a wrong password to the WDT). In the application initialization code, check if one of the causes of the reset was your own software:
// requires <msp430.h> for SYSRSTIV and <stdbool.h> for bool
bool start_button = false;
for (;;) {
    // Each read of SYSRSTIV pops the highest-priority pending reset cause.
    int cause = SYSRSTIV;
    if (cause == SYSRSTIV_WDTKEY)    // WDT password violation: our own forced reset
        start_button = true;
    if (cause == SYSRSTIV_NONE)      // no more pending causes
        break;
    // you might handle debugging of other reset causes here ...
}
if (!start_button)
    draw_images();
else
    actual_game();
(This assumes that your code never accidentally writes a wrong WDT password, but even if that happens, you're only skipping the intro images.)
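The forced reset itself can be issued straight from the button ISR by writing WDTCTL without the password; a sketch (again assuming the same port interrupt and CCS-style syntax as above):

#include <msp430.h>

#pragma vector = PORT1_VECTOR
__interrupt void start_button_isr(void)
{
    // Any write to WDTCTL without the WDTPW key (0x5A) in the upper byte
    // causes an immediate PUC; on the next boot SYSRSTIV reports a
    // WDT password violation, which the startup loop above checks for.
    WDTCTL = 0xDEAD;
}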
Related
I have a simple OpenGL test app in C which draws different things in response to key input. (Mesa 8.0.4, tried with Mesa-EGL and with GLFW, Ubuntu 12.04LTS on a PC with NVIDIA GTX650). The draws are quite simple/fast (rotating triangle type of stuff). My test code does not limit the framerate deliberately in any way, it just looks like this:
while (true)
{
    draw();
    swap_buffers();
}
I have timed this very carefully, and I find that the time from one eglSwapBuffers() (or glfwSwapBuffers) call to the next is ~16.6 milliseconds. The time from after a call to eglSwapBuffers() to just before the next call is only a little bit less than that, even though what is drawn is very simple. The time that the swap buffers call takes is well under 1ms.
However, the time from the app changing what it's drawing in response to the key press to the change actually showing up on screen is >150ms (approx 8-9 frames worth). This is measured with a camera recording of the screen and keyboard at 60fps. (Note: It is true I do not have a way to measure how long it takes from key press to the app getting it. I am assuming it is <<150ms).
Therefore, the questions:
Where are graphics buffered between a call to swap buffers and actually showing up on screen? Why the delay? It sure looks like the app is drawing many frames ahead of the screen at all times.
What can an OpenGL application do to cause an immediate draw to screen? (ie: no buffering, just block until draw is complete; I don't need high throughput, I do need low latency)
What can an application do to make the above immediate draw happen as fast as possible?
How can an application know what is actually on screen right now? (Or, how long/how many frames the current buffering delay is?)
Where are graphics buffered between a call to swap buffers and actually showing up on screen? Why the delay? It sure looks like the app is drawing many frames ahead of the screen at all times.
The swap command is queued along with whatever was drawn to the back buffer; it waits until the next vsync if you have set a swap interval, and at that vsync the buffer is displayed.
What can an OpenGL application do to cause an immediate draw to screen? (ie: no buffering, just block until draw is complete; I don't need high throughput, I do need low latency)
Using glFinish will ensure everything is drawn before the call returns, but you have no control over when it actually gets to the screen other than the swap-interval setting.
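For illustration, a sketch of those two knobs with EGL (display and surface are whatever EGLDisplay/EGLSurface you already created; draw() stands in for the drawing code from the question):

#include <EGL/egl.h>
#include <GLES2/gl2.h>                   // or whichever GL header you already use

extern void draw(void);                  // placeholder for the existing drawing code

void configure_swap(EGLDisplay display)
{
    // 1 = wait for vsync; 0 = swap as soon as possible (may tear, lowest latency)
    eglSwapInterval(display, 1);
}

void render_one_frame(EGLDisplay display, EGLSurface surface)
{
    draw();
    glFinish();                          // block until the GPU has actually finished rendering
    eglSwapBuffers(display, surface);
}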
What can an application do to make the above immediate draw happen as fast as possible?
How can an application know what is actually on screen right now? (Or, how long/how many frames the current buffering delay is?)
Generally you can use a sync extension (something like http://www.khronos.org/registry/egl/extensions/NV/EGL_NV_sync.txt) to find this out.
Are you sure the method of measuring latency is correct? What if the key input actually has significant delay on your PC? Have you measured the latency from the event being received in your code to the point just after swapbuffers?
You must understand that the GPU has its own dedicated memory on board. At the most basic level this memory is used to hold the encoded pixels you see on your screen (it is also used for graphics hardware acceleration and other things, but that is unimportant now). Because it takes time to load a frame from your main RAM into GPU RAM, you can get a flickering effect: for a brief moment you see the background instead of what is supposed to be displayed. Although this copying happens extremely fast, it is noticeable to the human eye and quite annoying.
To counter this, we use a technique called double buffering. Double buffering works by having an additional frame buffer in your GPU RAM (there can be one or many, depending on the graphics library and the GPU, but two is enough) and a pointer indicating which frame should be displayed. While the first frame is being displayed, you are already creating the next one in your code using some draw() function on an image structure in main RAM; this image is then copied to GPU RAM (while the previous frame is still being displayed), and when you call eglSwapBuffers() the pointer switches to your back buffer (I guessed this from your question; I'm not familiar with OpenGL, but the idea is quite universal). You can imagine that this pointer switch takes very little time. I hope you see now that writing an image directly to the screen actually causes much more delay (and annoying flickering).
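A purely conceptual sketch of that pointer switch (made-up names, nothing like real driver code):

#include <stdint.h>

#define WIDTH  640
#define HEIGHT 480
typedef uint32_t pixel_t;

// Two buffers (conceptually in GPU RAM) plus an index telling the scanout
// hardware which one is being displayed right now.
static pixel_t buffers[2][WIDTH * HEIGHT];
static int front = 0;

static pixel_t *back_buffer(void)        // where the next frame is drawn
{
    return buffers[front ^ 1];
}

static void swap_buffers(void)
{
    front ^= 1;                          // the old back buffer becomes visible;
                                         // drawing continues into the other one
}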
Also, ~16.6 milliseconds does not sound like that much. I think most of the time is lost creating/setting up the required data structures and not in the drawing computations themselves (you could test this by drawing just the background).
Lastly, I'd like to add that I/O is usually pretty slow (the slowest part of most programs), and 150 ms is not that long at all (still twice as fast as the blink of an eye).
Ah yes, you've discovered one of the peculiarities of the interaction between OpenGL and display systems that only a few people actually understand (and to be frank, I didn't fully understand it until about 2 years ago either). So what is happening here:
SwapBuffers does two things:
it queues a (private) command into the same command queue that is used for OpenGL drawing calls, which essentially flags a buffer swap to the graphics system
it makes OpenGL flush all queued drawing commands (to the back buffer)
Apart from that, SwapBuffers does nothing by itself. But those two things have interesting consequences. One is that SwapBuffers returns immediately. But as soon as the "the back buffer is to be swapped" flag is set (by the queued command), the back buffer becomes locked against any operation that would alter its contents. So as long as no call is made that would alter the contents of the back buffer, nothing blocks. Any command that would alter the back buffer's contents will stall the OpenGL command queue until the back buffer has been swapped and released for further commands.
Now the length of the OpenGL command queue is an abstract thing. But the usual behavior is that one of the OpenGL drawing commands will block, waiting for the queue to flush once the buffer swap has happened.
I suggest you spray your program with logging statements using some high performance, high resolution timer as clock source to see where exactly the delay happens.
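For example, a minimal helper using clock_gettime(CLOCK_MONOTONIC) on Linux (the message text is just a placeholder):

#include <stdio.h>
#include <time.h>

// Monotonic timestamp in milliseconds.
static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

// Then around each suspect spot:
//   double t0 = now_ms();
//   draw();
//   fprintf(stderr, "draw: %.3f ms\n", now_ms() - t0);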
Latency will be determined both by the driver, and by the display itself. Even if you wrote directly to the hardware, you would be limited by the latter.
The application can only do so much (i.e. draw fast, process inputs as closely as possible to or during drawing, perhaps even modify the buffer at the time of flip) to mitigate this. After that you're at the mercy of other engineers, both hardware and software.
And you can't tell what the latency is without external monitoring, as you've done.
Also, don't assume your input (keyboard to app) is low latency either!
So I am trying to read sensor data from an IMU and update angles accordingly. I open the sensor data file, read it line by line, convert each quaternion to rotations, and then update my model. The problem is that when calling glutPostRedisplay() from the while loop, the loop continues while glutPostRedisplay() appears to operate in parallel. This makes it look as if everything happens instantly. What I want is to force the program to halt until the display has been updated.
I can't think of another way to do this because I don't want to constantly open and close a file or keep track of where in the file I currently am. It would be easier if I could just read the line, process it, then force OpenGL to render, then read the next line etc.
Does anyone have any suggestions?
Note: currently the while loop fully executes by the time I am able to render. I have tried using glutSwapBuffers() directly after the glutPostRedisplay().
GLUT doesn't work that way. You will have to stop your loop and return from whatever function you are in so that GLUT can issue a rendering command. Generally you put this kind of stuff in an idle func.
Or you could use GLFW, which allows you to have ownership of the rendering loop. Or you could use FreeGLUT's glutMainLoopEvent, which allows you to have ownership of the rendering loop (though you need a bit of code rewriting to make this work).
Instead of using a while loop, you can register a function to be called when GLUT isn't busy with glutIdleFunc. Read from the IMU in this idle function and call glutPostRedisplay when you need to force a redisplay.
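A rough sketch of that structure (the file name, the parsing, and the model update are placeholders for what your code already does):

#include <stdio.h>
#include <GL/glut.h>

static FILE *imu_file;                   // opened once in main()

static void display(void)
{
    // ... draw the model in its current orientation ...
    glutSwapBuffers();
}

static void idle(void)
{
    char line[256];
    if (imu_file && fgets(line, sizeof line, imu_file)) {
        // parse the quaternion from 'line' and update the model (placeholder)
        glutPostRedisplay();             // request exactly one redraw for this sample
    }
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("IMU viewer");
    imu_file = fopen("imu_data.txt", "r");   // assumed file name
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}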
Hello, I am new to C programming, but I am making a menu for a game. I have a fish in ASCII art displayed, and it gets moved over one character every 0.5 s. I accomplish this with a simple loop, and the fish keeps moving across the screen; when it reaches the end, it is cleared and the animation repeats. Now, while this animation is going on, I would like to prompt the user for input; however, when I do that with getchar or scanf, for example, the fish loop waits and the animation stops until I press a key. Could someone please shed some light on my problem?
Thank you
You can't do this with any of the standard input methods. You're going to either have to use something like ncurses, or put the terminal into raw mode and do some pretty fancy manipulations. I have no idea what platform you're on, but raw mode is difficult under Linux, and even harder under Windows, so I'd stick with a library if you can.
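If you do go with ncurses, a minimal sketch of a non-blocking animation loop looks like this (the fish drawing is reduced to a placeholder):

#include <curses.h>
#include <unistd.h>

int main(void)
{
    initscr();
    cbreak();
    noecho();
    nodelay(stdscr, TRUE);         // getch() returns ERR instead of blocking

    int x = 0;
    for (;;) {
        int ch = getch();
        if (ch == 'q')             // handle real key presses here
            break;
        clear();
        mvprintw(10, x, "><>");    // placeholder for the ASCII-art fish
        refresh();
        x = (x + 1) % COLS;
        usleep(500000);            // move one column every 0.5 s
    }

    endwin();
    return 0;
}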
Welcome to the world of Threads.
To understand threads, think of how your computer works. If your computer ran without threads, you would not be able to run multiple applications at the same time. Threads allow multiple parts of a program or interface to run at the same time without depending on each other.
In your case, you will want a thread for the input and a separate thread for the animation. Thus allowing both to run separately.
I need to be able to access the X event loop to add clipboard support to a game API. The problem is that the game API does not know which API it will use for display (it could use SDL or something else). As a result, I do not have direct access to the X event loop. Is there a function in Xlib to get a pointer to my display so that I can process messages and add clipboard support?
Thanks
If it runs on X11, there has to be a Display pointer in the graphics object somewhere. You can allocate a new one with XOpenDisplay(NULL), but that's not likely to achieve what you want. You'd still have to find the Window IDs and other info, which is tricky enough when a program does it once.
You really need to dig through the existing code to find the X11 module. There's likely to be a single function that performs one iteration of the "event loop" as a subroutine of the real main processing loop. If you can't simply add your new code there, you can at least see how the program already accesses this information.
If you're using OpenGL for graphics, you can exploit that. At some point in the program where you know the OpenGL context is current, call glXGetCurrentDisplay. However, you should be careful not to interfere with the program's main event loop.
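A sketch of that approach (assumes the GLX context really is current at the point of the call; the function name is made up):

#include <GL/glx.h>
#include <X11/Xlib.h>

// Call this from code that runs while the OpenGL (GLX) context is current.
Display *display_for_clipboard(void)
{
    Display *dpy = glXGetCurrentDisplay();
    if (dpy == NULL)
        return NULL;               // no GLX context current here

    // dpy can now be used for clipboard/selection calls (XSetSelectionOwner,
    // XConvertSelection, ...), but be careful not to consume events that the
    // program's own event loop expects to see.
    return dpy;
}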
I've been trying to make a game using ncurses.
However, I am stumped on how to make the timing part of my main loop work.
Would someone be able to offer some insight into how I could add framerate code to my main loop while keeping portability and not compromising speed?
Thanks in advance!
The normal way to handle this type of problem, I believe, is to pass the duration since the last loop iteration (often called the delta) as a parameter to the update code. This allows you to update the progress of entities in the game based on the amount of real-world time that has passed. For example:
new_position = old_position + delta*speed
This allows entities in your game to move at a constant speed independent of the frame rate of your program.
Assuming you have the functionality to update your game state after a small period of time, next you need to be able to poll the user for input. If you do not specify otherwise, ncurses will block when you ask for input. To prevent this, look up the init functions here. In particular, the halfdelay() function may be of use to you (it implements a sort of framerate).
Just remember to check for ERR on the return value of getch() when using this mode.
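Putting the two ideas together, a minimal frame loop might look like this (update_game/draw_game are placeholders, and clock_gettime assumes a POSIX system):

#include <curses.h>
#include <time.h>

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    initscr();
    cbreak();
    noecho();
    halfdelay(1);                    // getch() waits at most 0.1 s, then returns ERR

    double last = now_seconds();
    int running = 1;
    while (running) {
        int ch = getch();            // also paces the loop at roughly 10 Hz
        if (ch != ERR && ch == 'q')  // only react to real key presses
            running = 0;

        double now = now_seconds();
        double delta = now - last;   // seconds since the previous frame
        last = now;

        // update_game(delta);  draw_game();   (placeholders)
        mvprintw(0, 0, "delta = %.3f s ", delta);
        refresh();
    }

    endwin();
    return 0;
}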