I am playing around with libLTC to generate timecode. I have a rough working example below:
#include <curses.h>
#include <string.h>   /* memset */
#include <time.h>
#include <ltc.h>

int main(void) {
    initscr();
    nodelay(stdscr, TRUE);               /* make getch() non-blocking */

    LTCFrame frame;
    LTCFrameExt Frame;
    SMPTETimecode stime;
    memset(&frame, 0, sizeof(frame));    /* start from a zeroed frame */
    memset(&Frame, 0, sizeof(Frame));    /* zero-initialized; only its display fields are used here */

    do {
        clear();
        ltc_frame_increment(&frame, 25, LTC_TV_625_50, LTC_USE_DATE);
        ltc_frame_to_time(&stime, &frame, LTC_USE_DATE);
        printw("%02d:%02d:%02d%c%02d | %8lld %8lld%s\n",
               stime.hours,
               stime.mins,
               stime.secs,
               (Frame.ltc.dfbit) ? '.' : ':',
               stime.frame,
               Frame.off_start,
               Frame.off_end,
               Frame.reverse ? " R" : "");
        refresh();
    } while (getch() != 'q');

    endwin();
    return 0;
}
The issue I currently have is that the loop runs too fast, and as a result so does the timecode. What is the correct way to slow it down so that it runs at the correct rate? There is the sleep() function, but wouldn't the delay need to change for each frame rate?
There are many ways to approach this problem. The easiest is the nanosleep() function: at the bottom of your do loop you can calculate how many nanoseconds to wait before starting the next iteration.
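For example, here is a minimal sketch assuming a fixed 25 fps (match whatever rate you pass to ltc_frame_increment):

#include <stdio.h>
#include <time.h>

int main(void)
{
    const int fps = 25;                              /* an assumption; use your frame rate         */
    struct timespec period = { 0, 1000000000L / fps };

    for (int i = 0; i < 50; i++) {                   /* 50 frames, roughly two seconds             */
        printf("frame %d\n", i);                     /* increment/convert/print the LTC frame here */
        nanosleep(&period, NULL);                    /* wait approximately one frame period        */
    }
    return 0;
}

Note that nanosleep() sleeps for at least the requested time, so over a long run the display will slowly drift behind wall-clock time; a kernel interval timer such as the setitimer() approach below does not accumulate that error.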
A more sophisticated approach would use the setitimer() function, which has the kernel raise SIGALRM at the appropriate interval. Because this is driven by signal handling, there is no need for a timed do/while loop at all.
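A minimal sketch of that approach, again assuming 25 fps. The handler only bumps a counter, and the printing stays in main, since curses calls are not async-signal-safe:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;

static void on_alarm(int sig)
{
    (void)sig;
    ticks++;                                    /* one tick per frame period */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sigemptyset(&sa.sa_mask);
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval tv;
    tv.it_interval.tv_sec  = 0;
    tv.it_interval.tv_usec = 1000000 / 25;      /* 40 ms = one 25 fps frame (an assumption) */
    tv.it_value = tv.it_interval;               /* first expiry after one period            */
    setitimer(ITIMER_REAL, &tv, NULL);

    sig_atomic_t shown = 0;
    while (1) {
        pause();                                /* sleep until the next SIGALRM             */
        if (ticks != shown) {
            shown = ticks;
            printf("tick %d\n", (int)shown);    /* increment and print the LTC frame here   */
        }
    }
}

pause() returns each time the handler runs, so the loop wakes exactly once per frame without any explicit sleep arithmetic.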
#include <time.h>
#include <stdio.h>   /* printf (the original included <iostream>, but only printf is used) */

void TemperatureCtrl(float curTemp, float TargetTemp, float errTemp);
float TemperatureGet();

int direction = 0;

int main()
{
    float CurTemp;
    time_t tim = 0;
    struct tm ttm;
    time_t tim2 = 0;

    while (1)
    {
        time(&tim2);
        if (tim != tim2)          /* act once per second */
        {
            tim = tim2;
            localtime_r(&tim, &ttm);
            CurTemp = TemperatureGet();
            printf("%02d:%02d:%02d Temp:%.1f℃\r\n", ttm.tm_hour, ttm.tm_min, ttm.tm_sec, CurTemp);
            TemperatureCtrl(CurTemp, 23.0, 0.5);
        }
    }
}
Maybe using sleep(1000) is better?
Use sleep() to pause the thread instead of letting it spin; it is better for CPU usage. Note that the POSIX sleep() takes seconds, so sleep(1) pauses for one second, while sleep(1000) would pause for 1000 seconds (it is the Windows Sleep() that takes milliseconds).
Maybe using sleep(1000) is better?

Yes. Calling sleep() suspends the current thread for the specified time interval, so other threads can take the CPU and do their work; it is more efficient. Constantly calling the time function in a tight loop is wasteful: the thread occupies the CPU without doing any meaningful work.
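A minimal sketch of the same loop with sleep(1), so the thread wakes roughly once per second instead of spinning (TemperatureGet is stubbed out here; it stands in for the asker's real function):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

static float TemperatureGet(void) { return 22.5f; }   /* stub for illustration only */

int main(void)
{
    while (1) {
        time_t now = time(NULL);
        struct tm ttm;
        localtime_r(&now, &ttm);
        printf("%02d:%02d:%02d Temp:%.1f\n",
               ttm.tm_hour, ttm.tm_min, ttm.tm_sec, TemperatureGet());
        sleep(1);                       /* yield the CPU for about one second */
    }
}

Because sleep(1) can wake slightly late, an occasional second may be skipped in the output; if every second must be shown, keep the original time comparison and sleep for a shorter interval inside the loop.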
glutTimerFunc isn't creating a delay; the program just loops forever, like e.g. while(1). Did I do something wrong, or is it a compatibility issue? I am using Arch Linux x64 with gcc, and I've been mixing 32-bit programs with 64-bit ones. I am trying to make a program that checks for input while constantly updating frames with a delay between them.
My includes are:
#include <GL/glut.h>
#include <GL/glu.h>
#include <stdio.h>
#include <string.h>
And my main functions are:
void timer(void)
{
    glutPostRedisplay();
    glutTimerFunc(30, mainloop, 0);
}
int main() {
    loadconfiguration();

    char *myargv[1];
    int myargc = 1;
    myargv[0] = strdup("./file");

    glutInit(&myargc, myargv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(displayx, displayy);

    printf("Making a window\n");
    winIDMain = glutCreateWindow("GL Game");

    mainloop();
}

void mainloop(void) {
    Initilize();
    glutSetWindow(winIDMain);
    glutDisplayFunc(render);
    glutReshapeFunc(reshape);
    glutKeyboardFunc(keyboard);
    glutMouseFunc(mouse);
    glutIdleFunc(timer);
    glutMainLoop();
}
Don't worry, the other functions are clean :) The code worked earlier; I don't know why it doesn't work now.
Your mainloop should be called init; all it does is set GLUT callbacks. Rather than calling glutPostRedisplay in an idle function, you should call it from a timer callback. In other words, don't call glutIdleFunc(timer); instead, register the timer once yourself with glutTimerFunc(30, timer, 0) and have timer re-register itself the same way. Note that glutTimerFunc callbacks take an int argument, so the signature needs to be void timer(int value).
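A minimal, self-contained sketch of that structure (the window size, the 30 ms period and the stub render function are placeholders, not the asker's code; compile with -lglut -lGL):

#include <GL/glut.h>

static void render(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw the scene here ... */
    glutSwapBuffers();
}

static void timer(int value)                 /* glutTimerFunc callbacks take an int */
{
    (void)value;
    glutPostRedisplay();                     /* request a redraw                    */
    glutTimerFunc(30, timer, 0);             /* re-arm: fire again in ~30 ms        */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("GL Game");
    glutDisplayFunc(render);
    glutTimerFunc(30, timer, 0);             /* start the timer chain once          */
    glutMainLoop();                          /* never returns                       */
    return 0;
}

Keyboard and mouse callbacks register exactly as before; the key change is that nothing runs in an idle function, so the process sleeps between frames instead of spinning.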
However, I would recommend doing the timing for a frame limiter yourself as it will be much more accurate. I wrote this answer for exactly that.
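The core of such a frame limiter is sleeping until an absolute deadline rather than for a relative interval, so the loop body's own runtime does not accumulate as drift. A rough sketch (the 25 fps figure is just an example):

#include <stdio.h>
#include <time.h>

int main(void)
{
    const long period_ns = 1000000000L / 25;      /* one frame at 25 fps            */
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < 50; i++) {
        printf("frame %d\n", i);                  /* render / update here           */
        next.tv_nsec += period_ns;                /* advance the deadline           */
        if (next.tv_nsec >= 1000000000L) {        /* carry nanoseconds into seconds */
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}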
I'm producing a game in C on a microprocessor. The score is controlled by how long you can survive; the score increases by 1 every 3 seconds. The score is an integer which is declared globally, but displayed from a function.
int score = 0;   // globally declared

void draw_score(int score_d)
{
    char score_draw[99];
    sprintf(score_draw, "%d", score_d);
    draw_string(score_draw, 9, 0);
}
I was thinking of a function which just increases the score by one with a delay in it; however, that has not worked:
void score_increaser(int score)
{
    score++;
    _delay_ms( 3000 );
}
Does it need to be in a while loop? The function itself would go into a while loop in main anyway.
C is pass-by-value, so score_increaser() as shown in your question increments just a copy of what is passed in. To fix this there are (mainly) two options.

Since score is defined globally, you can avoid passing anything in:
void score_increaser(void) {
    score++;
    _delay_ms( 3000 );
}
This modifies the global score directly.

Alternatively, pass in the address of score and dereference it inside the function:
void score_increaser(int *pscore) {
    (*pscore)++;
    _delay_ms( 3000 );
}
Call it like this:

    ...
    score_increaser(&score);
    ...
A third, a bit more complex approach (which assumes signals are supported on the target platform) is to set up a signal with a corresponding handler, then set up a timer that raises the signal every N seconds. The handler then increases the global score and restarts the timer.
This might look like:
#include <signal.h> /* for signal() and sig_atomic_t */
#include <unistd.h> /* for alarm() */

#define DURATION (3) /* Increase score every 3 seconds. */

volatile sig_atomic_t score = 0;

void set_alarm(unsigned);

void handler_alarm(int sig)
{
    ++score;
    set_alarm(DURATION);
}

void set_alarm(unsigned duration)
{
    signal(SIGALRM, handler_alarm);
    alarm(duration);
}

int main(void)
{
    set_alarm(DURATION);

    ... /* The game's code goes here. */
}
This latter approach has the advantage that your game's code does not need to take care of increasing score; score is simply increased every 3 seconds for as long as the program runs.
I'd recommend using a timer interrupt, configured to fire every 3 seconds.
volatile int score = 0;  // global

void Intr_Init(peripheral_t per)
{
    // Initialize the timer peripheral and its interrupt here
}

void draw_score(int score_d)
{
    char score_draw[99];
    sprintf(score_draw, "%d", score_d);
    draw_string(score_draw, 9, 0);
}

int main(void)
{
    Intr_Init(TIMER);
    while (1)
    {
        // Code that makes your game run
        draw_score(score);
    }
}

ISR(TIMER1_COMPA_vect)
{
    // Interrupts are disabled automatically while this ISR runs
    score++;
}
In embedded code you should rely on timers for time-critical tasks and accuracy. Delay routines are usually implemented as a busy loop or an up/down counter, whereas a timer counts hardware ticks (e.g. SysTicks). Another major advantage of interrupts is that the processor keeps doing its other tasks instead of blocking in a delay loop.
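As a concrete sketch of what Intr_Init could do, here is one way to get a 3-second compare-match interrupt on an ATmega328P-class AVR running at 16 MHz (the part, the clock, and therefore the OCR1A value are assumptions, and the signature is simplified to take no argument):

#include <avr/io.h>
#include <avr/interrupt.h>

volatile int score = 0;

void Intr_Init(void)
{
    TCCR1A = 0;
    TCCR1B = (1 << WGM12) | (1 << CS12) | (1 << CS10);  /* CTC mode, prescaler 1024       */
    OCR1A  = 46874;                                     /* 3 s * 16 MHz / 1024 - 1        */
    TIMSK1 = (1 << OCIE1A);                             /* enable compare-match interrupt */
    sei();                                              /* enable global interrupts       */
}

ISR(TIMER1_COMPA_vect)
{
    score++;                                            /* fires once every 3 seconds     */
}

Since score is a 16-bit value on AVR, reading it in the main loop is not atomic; if exact values matter, copy it with interrupts briefly disabled (cli()/sei()) before drawing it.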
score is a global variable, so you do not need to pass it into the function; the function already has access to that global:
void score_increaser() {
    score++;
    _delay_ms( 3000 );
}
Here is a good method for handling the score.

In the 'start game' function:
-- clear 'score' to 0
-- set up a timer:
   -- to expire once every 3 seconds
   -- enable the automatic reload feature
   -- enable the timer interrupt
   -- enable the timer counter

In the timer interrupt handler function:
-- increment 'score'
-- clear the timer interrupt pending flag

In the 'end game' function:
-- disable the timer counter
-- disable the timer interrupt
-- display the 'score' value
You don't need a parameter for the score since it's declared globally:
// global
int score = 0;

void score_increaser()
{
    _delay_ms(3000);
    score++;
}
Calling it like score_increaser(); should do the work. I also suggest you check every other line/function that touches score; maybe you have redeclared it or accidentally changed its value. Hope this helped.
I am making a program in which I get data from a serial device. The problem I am facing is that the device gives me wrong data unless I run while(1) and then read the data. I thought of running a for loop 100000 times and then reading the data, but it still gave wrong data. I can only use while(1). So is there any way I can stop while(1) after some time, say 7-10 seconds? Please help, thanks!
I think this will help:

int i = 0;
while (1) {
    // do your work.
    if (i == 100) break;  // for example: stop after 100 iterations (iterations, not seconds)
    i++;
}
printf("After While\n");
Is it necessary for your while loop to iterate on 1? Perhaps you could loop on time(NULL) instead, for example:
time_t t = time(NULL) + 10;
while (time(NULL) < t) {
    /* ... */
}
This is not exactly precise; the loop could run for anywhere between 9 and 10 seconds, perhaps even longer depending on how much other tasks saturate your CPU. It doesn't look like you need anything precise, however, and this should give you some idea. If for whatever silly reason you must use while (1), then you can use the same idea together with if and break like so:
time_t t = time(NULL) + 10;
while (1) {
    if (time(NULL) >= t) {
        break;
    }
    /* ... */
}
To exit the loop you have to use the break statement:

while (1)
{
    // your code...
    sleep(7);   // sleep for 7 seconds
    break;      // then leave the loop (note: the body runs only once this way)
}
#include <time.h>
#include <stdio.h>

int main()
{
    time_t end = time(NULL) + 7;  // 7 s
    while (1) {
        // your code...
        printf("running...\n");
        if (time(NULL) >= end) {
            break;
        }
        // your code...
    }
    return 0;
}
while (1) {
    delay(10000);  // delay for 10 seconds (assuming delay() takes milliseconds)
    break;
}

If you can't use delay(), then use some loop to burn a significant amount of time and break out of the while loop afterwards.
How would I output text one letter at a time like it's typing without using Sleep() for every character?
Sleep is the best option, since it doesn't waste CPU cycles. The other option is busy-waiting, meaning you spin in a loop executing no-ops. You can do that with any loop structure that does absolutely nothing. I'm not sure what this is for, but it seems like you might also want to randomize the time you wait between characters to give it a natural feel. I would have a Tick() method that loops through the letters and only advances when a random number falls below a threshold I set. Some pseudocode may look like:
int escapeIndex = 0;
int escapeMax = 1000000;
bool exportCharacter = false;
int letterIndex = 0;
float someThresh = 0.000001;
std::string typedText = "somethingOrOther...";
int letterMax = typedText.length();

while (letterIndex < letterMax) {
    escapeIndex++;
    if (random(1.0) < someThresh) {        // random(1.0): some uniform random value in [0, 1)
        exportCharacter = true;
    }
    if (escapeIndex > escapeMax) {         // escape hatch: force a character out eventually
        exportCharacter = true;
    }
    if (exportCharacter) {
        cout << typedText[letterIndex];
        escapeIndex = 0;
        exportCharacter = false;
        letterIndex++;
    }
}
If I were doing this in a video game, say to simulate a player typing text into a terminal, this is how I would do it. It will be different every time, and its escape mechanism puts an upper bound on how long the whole operation can take.
Sleeping is the best way to do what you're describing, as the alternative, busy-waiting, just wastes CPU cycles. From the comments, it sounds like you've been trying to manually hard-code a sleep call for every single character you want printed, instead of using a loop.

Since there's been no indication that this is homework after ~20 minutes, I thought I'd post this code. It uses usleep from <unistd.h>, which sleeps for the given number of microseconds; if you're on Windows, try Sleep() instead.
#include <stdio.h>
#include <unistd.h>

void type_text(char *s, unsigned ms_delay)
{
    unsigned usecs = ms_delay * 1000; /* 1000 microseconds per ms */

    for (; *s; s++) {
        putchar(*s);
        fflush(stdout); /* alternatively, do once: setbuf(stdout, NULL); */
        usleep(usecs);
    }
}

int main(void)
{
    type_text("hello world\n", 100);
    return 0;
}
Since stdout is buffered, you have to either flush it after printing each character (fflush(stdout)) or turn buffering off entirely with a single setbuf(stdout, NULL) call. The code above prints "hello world\n" with a 100 ms delay between characters; extremely basic.