glutTimerFunc() not limiting framerate - c

glutTimerFunc isn't introducing any delay; the program just loops forever, like a while(1).
Did I do something wrong, or is it a compatibility issue?
I am using Arch Linux x64 with gcc, and I've been mixing 32-bit programs with 64-bit ones.
I am trying to make a program that checks for input while redrawing frames continuously at a fixed interval.
My includes are:
#include <GL/glut.h>
#include <GL/glu.h>
#include <stdio.h>
#include <string.h>
And my main functions are:
void timer(void)
{
    glutPostRedisplay();
    glutTimerFunc(30, mainloop, 0);
}
int main() {
    loadconfiguration();
    char *myargv[1];
    int myargc = 1;
    myargv[0] = strdup("./file");
    glutInit(&myargc, myargv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(displayx, displayy);
    printf("Making a window\n");
    winIDMain = glutCreateWindow("GL Game");
    mainloop();
}
void mainloop(void) {
    Initilize();
    glutSetWindow(winIDMain);
    glutDisplayFunc(render);
    glutReshapeFunc(reshape);
    glutKeyboardFunc(keyboard);
    glutMouseFunc(mouse);
    glutIdleFunc(timer);
    glutMainLoop();
}
Don't worry, the other functions are clean :)
The code worked earlier; I don't know why it doesn't work now.

Your mainloop should really be called init; all it does is set GLUT callbacks. Rather than calling glutPostRedisplay() from an idle function, you should call it from a timer function. In other words, don't call glutIdleFunc(timer). Instead, call timer() once yourself and have it re-register itself with glutTimerFunc(30, timer, 0).
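A minimal sketch of that timer-driven setup might look like this (the window size and the render body are placeholders, not code from the question):

#include <GL/glut.h>

static void render(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw the scene here ... */
    glutSwapBuffers();
}

/* GLUT timer callbacks take an int parameter, unlike the void timer() above */
static void timer(int value)
{
    (void)value;
    glutPostRedisplay();             /* request one redraw */
    glutTimerFunc(30, timer, 0);     /* re-arm the timer: roughly 33 fps */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("GL Game");
    glutDisplayFunc(render);
    /* register reshape/keyboard/mouse callbacks here as in the question */
    glutTimerFunc(30, timer, 0);     /* start the timer chain once */
    glutMainLoop();
    return 0;
}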
However, I would recommend doing the timing for a frame limiter yourself, as it will be much more accurate. I wrote this answer for exactly that.
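If you prefer to keep an idle callback and limit the rate yourself, the general idea (a sketch, not the code from the linked answer) is to check glutGet(GLUT_ELAPSED_TIME) and only post a redisplay once enough milliseconds have passed:

/* self-made frame limiter: redraw only when at least 30 ms have passed;
   glutGet(GLUT_ELAPSED_TIME) returns milliseconds since glutInit() */
static int lastFrame = 0;

static void idle(void)
{
    int now = glutGet(GLUT_ELAPSED_TIME);
    if (now - lastFrame >= 30)
    {
        lastFrame = now;
        glutPostRedisplay();
    }
}

Registered with glutIdleFunc(idle), this caps the redraw rate but still spins the CPU between frames; adding a short sleep is the usual way to avoid that.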

Related

Would pausing with sleep for a second be better than constantly calling the time function to see if the next second has passed?

#include <time.h>
#include <stdio.h>

void TemperatureCtrl(float curTemp, float TargetTemp, float errTemp);
float TemperatureGet();

int direction = 0;

int main()
{
    float CurTemp;
    time_t tim = 0;
    struct tm ttm;
    time_t tim2 = 0;
    while (1)
    {
        time(&tim2);
        if (tim != tim2)
        {
            tim = tim2;
            localtime_r(&tim, &ttm);
            CurTemp = TemperatureGet();
            printf("%02d:%02d:%02d Temp:%.1f℃\r\n", ttm.tm_hour, ttm.tm_min, ttm.tm_sec, CurTemp);
            TemperatureCtrl(CurTemp, 23.0, 0.5);
        }
    }
}
Maybe using sleep(1000) would be better?
Use sleep() to pause for a second and let the thread sit idle for that second. It will be better for CPU usage.
Maybe using sleep(1000) would be better?
Yes.
Calling sleep() suspends the current thread for the specified interval, so other threads can use the CPU to do their work; it's more efficient.
Constantly calling the time function is wasteful: the current thread occupies the CPU even though there is no meaningful work to do.
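As a sketch, the loop above restructured around sleep() might look like this (POSIX sleep() takes seconds, so the one-second pause is sleep(1), not sleep(1000); the temperature calls are reduced to a comment to keep the example self-contained):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    for (;;)
    {
        time_t now = time(NULL);
        struct tm ttm;
        localtime_r(&now, &ttm);

        /* do the once-per-second work here (read temperature, control, ...) */
        printf("%02d:%02d:%02d tick\n", ttm.tm_hour, ttm.tm_min, ttm.tm_sec);

        sleep(1);   /* suspend this thread; other threads and processes get the CPU */
    }
}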

Correct sleep function of libltc

I am playing around with libLTC to generate timecode. I have a rough working example below:
#include <curses.h>
#include <time.h>
#include <ltc.h>

int main() {
    initscr();
    nodelay(stdscr, TRUE);
    LTCFrame frame;
    LTCFrameExt Frame;
    SMPTETimecode stime;
    do {
        clear();
        ltc_frame_increment(&frame, 25, LTC_TV_625_50, LTC_USE_DATE);
        ltc_frame_to_time(&stime, &frame, LTC_USE_DATE);
        printw("%02d:%02d:%02d%c%02d | %8lld %8lld%s\n",
               stime.hours,
               stime.mins,
               stime.secs,
               (Frame.ltc.dfbit) ? '.' : ':',
               stime.frame,
               Frame.off_start,
               Frame.off_end,
               Frame.reverse ? " R" : ""
        );
        refresh();
    } while (getch() != 'q');
    endwin();
    return 0;
}
The issue I have currently is that the loop runs too fast, and as a result so does the timecode. What is the correct way to slow this down so that it runs at the right rate? There is the sleep() function, but the delay would need to change for each frame rate.
There are many ways to approach this problem. The easiest is the nanosleep() function: calculate how many nanoseconds you want to wait before the next iteration and sleep for that long at the bottom of your do loop.
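For instance, at the 25 fps used above each frame lasts 40 ms, so a small helper along these lines (a sketch; frame_sleep is just a made-up name) could be called at the bottom of the do loop:

#include <time.h>

/* sleep for one frame period at the given frame rate,
   e.g. 25 for LTC_TV_625_50 (EINTR is ignored for brevity) */
static void frame_sleep(long fps)
{
    struct timespec ts;
    ts.tv_sec  = 0;
    ts.tv_nsec = 1000000000L / fps;   /* nanoseconds per frame */
    nanosleep(&ts, NULL);
}

For long-running accuracy you would also subtract the time the loop body itself takes (e.g. measured with clock_gettime() and CLOCK_MONOTONIC) before sleeping, since the drawing isn't free.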
A more sophisticated approach would use the setitimer() function to have SIGALRM raised at the appropriate rate. Because this is done with signal handling, there is no need for a busy do/while loop at all.
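A rough sketch of that signal-driven variant, assuming a fixed 25 fps (the curses/libltc work is reduced to a comment):

#include <signal.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static void on_alarm(int sig) { (void)sig; }   /* its only job is to interrupt pause() */

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    /* raise SIGALRM every 40 ms, i.e. 25 frames per second */
    struct itimerval it = { { 0, 40000 }, { 0, 40000 } };
    setitimer(ITIMER_REAL, &it, NULL);

    for (;;)
    {
        pause();   /* sleep until the next SIGALRM arrives */
        /* increment and display one timecode frame here */
    }
}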

How can you get a random number on GBDK?

I'm new to C and to GBDK, and I want to code a random number generator that decides between 0 and 1, like a 'hacker' simulator.
I have tried a lot of examples from the Internet, but none of them worked.
Screenshot from the output of the last attempt I made: https://i.ibb.co/f8G39vX/bgberrors.png
Last attempt code:
#include <gb/gb.h>
#include <stdio.h>
#include <rand.h>

void init();

void main()
{
    init();
    while(1)
    {
        UINT8 r = ((UINT8)rand()) % (UINT8)4;
        printf(r);
    }
}

void init()
{
    DISPLAY_ON;
}
How can I accomplish it?
#include <gb/gb.h>
#include <stdint.h>
#include <stdio.h>
#include <rand.h>

void init();

void main()
{
    init();
    printf(" \n\n\n\n\n\n\n\n PRESS START!\n");

    // abuse user input for seed generation
    waitpad(J_START);
    uint16_t seed = LY_REG;
    seed |= (uint16_t)DIV_REG << 8;
    initrand(seed);

    while(1)
    {
        UINT8 r = ((UINT8)rand()) % (UINT8)2;
        printf("%d", r);
    }
}

void init()
{
    DISPLAY_ON;
}
Tested with GBDK-2020 4.0.3
Also check the "rand" example in GBDK-2020.
Regarding the comments:
Yes, GBDK has its own lib (including an stdlib); it's probably a fork of SDCC's lib from 20 years ago. Current SDCC has rand() in stdlib.h, but GBDK-2020 doesn't. The maximum value is 0xFF; I don't know of a define for that.
Float should be avoided as much as possible: it's done entirely in software, as there is no hardware support for it. Double isn't really supported by the compiler and falls back to float.
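For instance, mapping the 8-bit rand() result onto a small range needs nothing but integer math (rand_range is just a hypothetical helper name, not part of GBDK):

#include <gb/gb.h>
#include <rand.h>

/* map rand()'s 0..0xFF result to 0..n-1 using integer math only, no float */
UINT8 rand_range(UINT8 n)
{
    return (UINT8)rand() % n;
}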
There are no man pages; documentation is available at https://gbdk-2020.github.io/gbdk-2020/docs/api/rand_8h.html or in the gbdk_manual.pdf that comes with GBDK-2020.

Context switching - are makecontext and swapcontext working here? (OS X)

I'm having some fun with context switching. I've copied the example code from
http://pubs.opengroup.org/onlinepubs/009695399/functions/makecontext.html
into a file and defined the macro _XOPEN_SOURCE for OS X.
#define _XOPEN_SOURCE
#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx[3];

static void
f1 (void)
{
    puts("start f1");
    swapcontext(&ctx[1], &ctx[2]);
    puts("finish f1");
}

static void
f2 (void)
{
    puts("start f2");
    swapcontext(&ctx[2], &ctx[1]);
    puts("finish f2");
}

int
main (void)
{
    char st1[8192];
    char st2[8192];

    getcontext(&ctx[1]);
    ctx[1].uc_stack.ss_sp = st1;
    ctx[1].uc_stack.ss_size = sizeof st1;
    ctx[1].uc_link = &ctx[0];
    makecontext(&ctx[1], f1, 0);

    getcontext(&ctx[2]);
    ctx[2].uc_stack.ss_sp = st2;
    ctx[2].uc_stack.ss_size = sizeof st2;
    ctx[2].uc_link = &ctx[1];
    makecontext(&ctx[2], f2, 0);

    swapcontext(&ctx[0], &ctx[2]);
    return 0;
}
I build it with
gcc -o context context.c -g
which whinges at me about getcontext, makecontext, and swapcontext being deprecated. Meh.
When I run it, it just hangs. It doesn't seem to crash; it just hangs.
I tried using gdb, but once I step into swapcontext it just goes blank. It doesn't jump into f1; I keep hitting enter and it just moves the cursor to a new line on the console.
Any idea what's happening? Something to do with working on the Mac, or with the deprecated functions?
Thanks
It looks like your code is just copy/pasted from the ucontext documentation, which must make it frustrating that it's not working...
As far as I can tell, your stacks are just too small. I couldn't get it to work with any less than 32KiB for your stacks.
Try making these changes:
#define STACK_SIZE (1<<15) // 32KiB
// . . .
char st1[STACK_SIZE];
char st2[STACK_SIZE];
Yup, that fixed it. Why did it fix it, though?
Well, let's dig into the problem a bit more. First, let's find out what's actually going on.
When I run it it just hangs. It doesn't seem to crash. It just hangs.
If you use some debugger-fu (be sure to use lldb; gdb just doesn't work right on OS X), you will find that when the app is "hanging", it's actually spinning in a weird loop in your main function, illustrated by the arrow in the comments below.
int
main (void)
{
    char st1[8192];
    char st2[8192];

    getcontext(&ctx[1]);
    ctx[1].uc_stack.ss_sp = st1;
    ctx[1].uc_stack.ss_size = sizeof st1;
    ctx[1].uc_link = &ctx[0];
    makecontext(&ctx[1], f1, 0);

    getcontext(&ctx[2]);                       // <-------------------+ back to here
    ctx[2].uc_stack.ss_sp = st2;               //                     |
    ctx[2].uc_stack.ss_size = sizeof st2;      //                     |
    ctx[2].uc_link = &ctx[1];                  //                     |
    makecontext(&ctx[2], f2, 0);               //                     |
                                               //                     |
    puts("about to swap...");                  //                     |
                                               //                     |
    swapcontext(&ctx[0], &ctx[2]);             // --------------------+ jumps from here

    return 0;
}
Note that I added an extra puts call above in the middle of the loop. If you add that line and compile/run again, then instead of the program just hanging you'll see it start spewing out the string "about to swap..." ad infinitum.
Obviously something screwy is going on based on the given stack size, so let's just look for everywhere that ss_size is referenced...
(Note: The authoritative source code for the Apple ucontext implementation is at https://opensource.apple.com/source/, but there's a GitHub mirror that I'll use since it's nicer for searching and linking.)
If we take a look at makecontext.c, we see something like this:
if (ucp->uc_stack.ss_size < MINSIGSTKSZ) {
    // fail without an error code since makecontext is a void function
    return;
}
Well, that's nice! What is MINSIGSTKSZ? Well, let's take a look in signal.h:
#define MINSIGSTKSZ 32768 /* (32K)minimum allowable stack */
#define SIGSTKSZ 131072 /* (128K)recommended stack size */
Apparently these values are actually part of the POSIX standard. Although I don't see anything in the ucontext documentation that references these values, I guess it's kind of implied since ucontext preserves the current signal mask.
Anyway, this explains the screwy behavior we're seeing. Since the makecontext call is failing due to the stack size being too small, the call to getcontext(&ctx[2]) is what is setting up the contents of ctx[2], so the call to swapcontext(&ctx[0], &ctx[2]) just ends up swapping back to that line again, creating the infinite loop...
Interestingly, MINSIGSTKSZ is 32768 bytes on OS X but only 2048 bytes on my Linux box, which explains why it worked on Linux but not on OS X.
Based on all of that, it looks like a safer option is to use the recommended stack size from sys/signal.h:
char st1[SIGSTKSZ];
char st2[SIGSTKSZ];
That, or switch to something that isn't deprecated. You might take a look at Boost.Context if you're not averse to C++.

Waiting in DOS using djgpp -- alternatives to busy wait?

I recently wrote a little curses game, and since all it needs to work is some timer mechanism and a curses implementation, the idea of trying to build it for DOS came kind of naturally. Curses is provided by PDCurses on DOS.
Timing is already different between POSIX and Win32, so I have defined this interface:
#ifndef CSNAKE_TICKER_H
#define CSNAKE_TICKER_H
void ticker_init(void);
void ticker_done(void);
void ticker_start(int msec);
void ticker_stop(void);
void ticker_wait(void);
#endif
The game calls ticker_init() and ticker_done() once, ticker_start() with a millisecond interval as soon as it needs ticks, and ticker_wait() in its main loop to wait for the next tick.
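For reference, the intended usage looks roughly like this (a sketch with stubbed game functions, not the actual game code; the header name is assumed from its include guard):

#include "csnake_ticker.h"

static int  game_running(void) { return 1; }
static void game_step(void)    { /* read input, update state, draw */ }

int main(void)
{
    ticker_init();
    ticker_start(100);           /* request a tick every 100 ms */
    while (game_running())
    {
        game_step();
        ticker_wait();           /* returns when the next tick is due */
    }
    ticker_stop();
    ticker_done();
    return 0;
}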
Using the same implementation on DOS as on the POSIX platforms, based on setitimer(), didn't work. One reason was that the C lib coming with djgpp doesn't implement waitsig(). So I created a new implementation of my interface for DOS:
#undef __STRICT_ANSI__
#include <time.h>

uclock_t tick;
uclock_t nextTick;
uclock_t tickTime;

void
ticker_init(void)
{
}

void
ticker_done(void)
{
}

void
ticker_start(int msec)
{
    tickTime = msec * UCLOCKS_PER_SEC / 1000;
    tick = uclock();
    nextTick = tick + tickTime;
}

void
ticker_stop()
{
}

void
ticker_wait(void)
{
    while ((tick = uclock()) < nextTick);
    nextTick = tick + tickTime;
}
This works like a charm in DOSBox (I don't have a real DOS system right now). But my concern is: is busy waiting really the best I can do on this platform? I'd like a solution that allows the CPU to at least save some energy.
For reference, here's the whole source.
OK, I think I can finally answer my own question (thanks, Wyzard, for the helpful comment!).
The obvious solution, since there doesn't seem to be any library call doing this, is putting a hlt instruction in inline assembly. Unfortunately, this crashed my program. The reason is that the default DPMI server runs the program in ring 3, and hlt is reserved to ring 0. To use it, you have to modify the loader stub so it loads a DPMI server that runs your program in ring 0; see below.
Browsing through the docs, I came across __dpmi_yield(). If we are running in a multitasking environment (Windows 3.x or 9x, ...), there will already be a DPMI server provided by the operating system, and in that case we of course want to give up our time slice while waiting instead of attempting the privileged hlt.
So, putting it all together, the source for DOS now looks like this:
#undef __STRICT_ANSI__
#include <time.h>
#include <dpmi.h>
#include <errno.h>

static uclock_t nextTick;
static uclock_t tickTime;
static int haveYield;

void
ticker_init(void)
{
    errno = 0;
    __dpmi_yield();
    haveYield = errno ? 0 : 1;
}

void
ticker_done(void)
{
}

void
ticker_start(int msec)
{
    tickTime = msec * UCLOCKS_PER_SEC / 1000;
    nextTick = uclock() + tickTime;
}

void
ticker_stop()
{
}

void
ticker_wait(void)
{
    if (haveYield)
    {
        while (uclock() < nextTick) __dpmi_yield();
    }
    else
    {
        while (uclock() < nextTick) __asm__ volatile ("hlt");
    }
    nextTick += tickTime;
}
In order for this to work on plain DOS, the loader stub in the compiled executable must be modified like this:
<path to>/stubedit bin/csnake.exe dpmi=CWSDPR0.EXE
CWSDPR0.EXE is a dpmi server running all code in ring 0.
Still to be tested is whether yielding messes with the timing when running under Windows 3.x / 9x; maybe the time slices are too long. Update: it works great in Windows 95 with the code above.
Using the hlt instruction breaks compatibility with DOSBox 0.74 in a weird way: the program seems to hang forever when trying to do a blocking getch() through PDCurses. This doesn't happen on a real MS-DOS 6.22 in VirtualBox. Update: this is a bug in DOSBox 0.74 that is fixed in the current SVN tree.
Given those findings, I assume this is the best way to wait "nicely" in a DOS program.
Update: It's possible to do even better by checking all available methods and picking the best one. I found a DOS idle call that should be considered as well. The strategy (a combined ticker_wait() is sketched right after this list):
1. If yield is supported, use it (we are running in a multitasking environment).
2. If idle is supported, use it. Optionally, if we're in ring 0, do a hlt each time before calling idle, because idle is documented to return immediately when no other program is ready to run.
3. Otherwise, in ring 0, just use plain hlt instructions.
4. Busy-waiting as a last resort.
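A ticker_wait() combining these methods might look roughly like this (a sketch: haveIdle, ring and dosidle() are assumed to come from the detection logic shown in the program below, alongside the existing haveYield flag):

/* dosidle() is a hypothetical wrapper issuing the same int 0x28 / AX=0x1680 call */
void
ticker_wait(void)
{
    while (uclock() < nextTick)
    {
        if (haveYield)
        {
            __dpmi_yield();                      /* multitasking host: give up the slice */
        }
        else if (haveIdle)
        {
            if (!ring) __asm__ volatile ("hlt"); /* ring 0: halt until the next interrupt */
            dosidle();                           /* then let DOS and TSRs do their idle work */
        }
        else if (!ring)
        {
            __asm__ volatile ("hlt");
        }
        /* else: plain busy-waiting as a last resort */
    }
    nextTick += tickTime;
}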
Here's a little example program (DJGPP) that tests for all possibilities:
#include <stdio.h>
#include <dpmi.h>
#include <errno.h>

static unsigned int ring;

static int
haveDosidle(void)
{
    __dpmi_regs regs;
    regs.x.ax = 0x1680;
    __dpmi_int(0x28, &regs);
    return regs.h.al ? 0 : 1;
}

int main()
{
    puts("checking idle methods:");
    fputs("yield (int 0x2f 0x1680): ", stdout);
    errno = 0;
    __dpmi_yield();
    if (errno)
    {
        puts("not supported.");
    }
    else
    {
        puts("supported.");
    }

    fputs("idle (int 0x28 0x1680): ", stdout);
    if (!haveDosidle())
    {
        puts("not supported.");
    }
    else
    {
        puts("supported.");
    }

    fputs("ring-0 HLT instruction: ", stdout);
    __asm__ ("mov %%cs, %0\n\t"
             "and $3, %0" : "=r" (ring));
    if (ring)
    {
        printf("not supported. (running in ring-%u)\n", ring);
    }
    else
    {
        puts("supported. (running in ring-0)");
    }
}
The code in my github repo reflects the changes.
