Creating a second-counter in C for Gameboy

So I am new to C and have been assigned the task of making a game. I'll be using a Gameboy emulator and have been discouraged from importing any libraries beyond the basics.
I would like to come up with a way to run a second-counter (that will display on screen), but without the time.h library I feel a bit stuck.
Is there any way I could go about doing this?
I was told that Gameboy runs rather slow, and if I could get it caught in a 'busy loop' I could 'approximate a second' and count that way.
I've considered the sleep function to do this, but that's in unistd.h library.
I've also considered setting up a loop and counting up to ten thousand (or whatever number takes about a second to run), but all of this will be happening simultaneously with the game at hand, and I am afraid a loop like that will delay the gameplay and everything else going on.
Any recommendations?
Edit: I think anything beyond stdlib.h and stdio.h is disallowed.

The gameboy has a hardware timer.
Sometimes it's useful to have a timer that interrupts at regular intervals for routines that require periodic or precise updates. The timer in the GameBoy has a selectable frequency of 4096, 16384, 65536, or 262144 Hertz. This frequency increments the Timer Counter (TIMA). When it overflows, it generates an interrupt and is then reloaded with the contents of Timer Modulo (TMA). The following are examples:
;This interval timer interrupts 4096 times per second
ld a,-1
ld ($FF06),a ;Set TMA to divide clock by 1
ld a,4
ld ($FF07),a ;Set clock to 4096 Hertz
So write your interrupt handler to keep track of the number of interrupts and update the displayed clock every 4096 (=0x1000) interrupts.
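In C that bookkeeping is just a pair of counters. Here is a minimal sketch of the counting logic; note that how the handler actually gets attached to the timer interrupt is toolchain-specific (for example GBDK's add_TIM()), so treat the registration part as an assumption:
/* Sketch only: count timer interrupts and derive seconds.
   Assumes the timer is configured for 4096 Hz as in the asm above, and
   that timer_isr() is attached to the timer interrupt by your toolchain
   (e.g. GBDK's add_TIM(timer_isr)) -- that part is an assumption. */
volatile unsigned int tick_count = 0;
volatile unsigned int seconds = 0;

void timer_isr(void)
{
    tick_count++;
    if (tick_count >= 4096) {   /* 4096 overflows = 1 second */
        tick_count = 0;
        seconds++;              /* main loop reads this and redraws the clock */
    }
}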

Implementations of some (not all) delay functions are blocking: nothing else in your code runs until the delay function returns. They are also susceptible to run-time conditions, such as whether you build in debug or release mode (i.e. the loop will take different amounts of time depending on those settings).
An implementation that is not blocking, i.e. one in which other work gets time slices during the delay, would likely require multithreading. But since the C you are allowed to use has no built-in threading support, you would have to pull in additional non-standard libraries that you are not allowed to use.
Given you understand this, and would be OK with a simple technique (blocking, sensitive to build settings), just use a simple timing loop. Follow these steps:
1) Create a development (test) function that is simply a loop with a hard-coded iteration count:
void sec_delay(void)
{
    /* Loop counter: change this value during testing to adjust the duration
       of a call, e.g. 60 calls should take about 60 seconds in total.
       This value, run on the PC shown below, gave a 1 second duration.
       volatile keeps the optimizer from deleting the empty loop. */
    volatile int i = 334000000;
    while (i-- > 0)
        ;
}
2) Characterize sec_delay().
Run a program that calls sec_delay() 60 times and time its execution against a clock or stopwatch.
#include <stdio.h>  /* printf, getchar */

void sec_delay(void);

int main(void)
{
    int i;
    for (i = 0; i < 60; i++)  /* target 60 seconds; change to 600 for 10 min, or 10 for 10 sec */
    {
        sec_delay();
    }
    printf("Done");
    getchar();
    return 0;
}
3) Use the execution time of this executable to adjust the loop incrementer value so you get as close to 1 minute as possible. For higher accuracy, loop the main program 600 times and adjust the incrementer in sec_delay() so that the elapsed time is 10 minutes.
Once you have characterized the sec_delay() function as described, you essentially have something you can use as a 1 second timer.
4) Now that you have a value for 1 second of elapsed time, create a new prototype:
void delay(float);
And create a #define:
#define SEC 334000000.0 //enter your value here in float
Finally, define delay() function:
void delay(float secs) /* note: fractions of a second can be passed */
{
    if (secs < 0)
        return;                          /* ignore negative values */
    volatile long i = (long)(secs * SEC);
    while (i-- > 0)
        ;
}
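Usage is then just a call wherever the pause is needed, for example:
delay(0.5f); /* roughly half a second, given the calibrated SEC value */
delay(2.0f); /* roughly two seconds */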

Related

Subtracting and getting seconds as input in C

I have the following code:
#include <stdio.h>
#include <time.h>

int main(){
    clock_t timerS;
    int i = 1, targetTime = 2;
    scanf("%d", &targetTime);
    while (i != 0) {
        timerS = clock();
        while ((double)((clock() - timerS) / CLOCKS_PER_SEC) < targetTime) {
            //do something
        }
        //do another thing but delayed by the given time
        if (targetTime >= 0.5)
            targetTime -= 0.02;
        else
            i = 0;
    }
    return 0;
}
What I want is a loop that does something for (initially) an inputted number of seconds, and that also does another thing after targetTime seconds have passed.
But after the first iteration, the speed at which these operations happen should change (more specifically, by -0.02 seconds in this case).
An example would be collecting user input for 2 seconds, and displaying all the inputs made in those 2 seconds afterwards.
First problem:
If the initial given time is smaller than 1 second (for example 0.6), the other thing isn't delayed by 0.6 seconds but is done immediately.
Second problem:
Similar to the first: if I subtract 0.02 seconds (in this case) from targetTime, it again does the other thing immediately and not in targetTime - 0.02 seconds as I intend it to.
I'm new to this "clock" and "time" topic in C so I guess I'm doing something wrong regarding how these operations should be done. Also, please don't give an overly-complicated explanation/solution because of the above-mentioned reason.
Thanks!
Don't use the clock(2) system call, as it is obsolete and has been fully superseded by machine-independent replacements.
You can use, if your system supports it, clock_gettime(2), which will give you up to nanosecond precision (depending on the platform, but at least on Linux on Intel architectures it is almost guaranteed) or, if you cannot use it, at least you'll have gettimeofday(2), which is derived from BSD systems and provides you with a clock with microsecond resolution.
If you want to stop your program for some delay, you also have sleep(2) (second based), usleep(2) (microsecond based), or even nanosleep(2) (nanosecond based).
Anyway, none of these calls has a tick based on the system heartbeat, and their resolution is uniform and not system dependent.
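A minimal sketch of timing an interval with clock_gettime() and CLOCK_MONOTONIC, in the spirit of the suggestion above (on older glibc you may need to link with -lrt):
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, now;
    double elapsed = 0.0;
    clock_gettime(CLOCK_MONOTONIC, &start);
    while (elapsed < 2.0) {                 /* e.g. wait for targetTime = 2 s */
        clock_gettime(CLOCK_MONOTONIC, &now);
        elapsed = (now.tv_sec - start.tv_sec)
                + (now.tv_nsec - start.tv_nsec) / 1e9;
    }
    printf("%.3f seconds elapsed\n", elapsed);
    return 0;
}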
I mistakenly initialized targetTime as an int instead of a double. Changing it to double solves the issue easily. Sorry!
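For completeness, here is what the fixed loop might look like. This is a sketch: the type change is the fix described above; casting before dividing by CLOCKS_PER_SEC is an extra tweak of mine so fractional seconds survive the division:
#include <stdio.h>
#include <time.h>

int main(void){
    clock_t timerS;
    int i = 1;
    double targetTime = 2.0;              /* double, not int */
    scanf("%lf", &targetTime);
    while (i != 0) {
        timerS = clock();
        while ((double)(clock() - timerS) / CLOCKS_PER_SEC < targetTime) {
            /* do something */
        }
        /* do another thing, delayed by targetTime seconds */
        if (targetTime >= 0.5)
            targetTime -= 0.02;
        else
            i = 0;
    }
    return 0;
}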

Is there a standard function in C that allows for setting a specific delay in nSec?

In my application I need to write a function in C that provides a specific time delay in nanoseconds. This delay must be done in software, as I don't have any hardware timers left in my AVR MCU. My problem is that I would like to be able to set the value in nanoseconds. My MCU clock is 20 MHz (50 ns period). I thought of a quick "for" loop, like:
for (n = 0; n < value; n++)
but that won't take into account how many cycles each trip around the loop costs once compiled. Has anyone got any suggestions? I really don't want to write the code in assembler.
You give us too little information, by the way, but I think I can answer without it; it just makes the answer longer. Let's start with the easier case: your action needs to run less often than your most frequent ISR fires. For example, you need to send a byte every 1 s but your ISR runs every 1 ms. In short, you need to send the byte every 1000 ISR executions, so you keep a counter in the ISR that increments on every run, and when it reaches 1000 you send the byte and reset the counter to 0:
ISR()
{
    cnt++;
    if (cnt >= 1000)
    {
        execute(Z);
        cnt = 0;
    }
}
When you have the opposite problem, i.e. the ISR is slower than the desired period of your action, then I suggest redesigning your use of timers. Make the ISR execute faster and then divide the time down by counting ISR executions, as described above. This was mentioned in the comments.
My suggestion is that you rethink the way you use timers.
Since you are using an AVR, you should look into using the AVR-Libc delay functions, _delay_us and _delay_ms, which are documented here:
https://www.nongnu.org/avr-libc/user-manual/group__util__delay.html
They are standard in the context of AVRs, but not standard for all C environments in general.
Some example code to get you started:
#define F_CPU 20000000
#include <util/delay.h>
int main() {
    while (1) {
        _delay_us(0.05);
    }
}
Note that even though the _delay_us and _delay_ms functions each take a double as an argument, all floating point arithmetic is done at compile time if possible in order to produce efficient code for your delay.
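One caveat (an assumption of mine, not part of the answer above): _delay_us() only stays efficient when its argument is a compile-time constant, so if you need a run-time-variable delay you can wrap a fixed call in a loop, at the cost of a few cycles of loop overhead per iteration, which makes it unsuitable at the 50 ns scale the question asks about:
#define F_CPU 20000000
#include <util/delay.h>

/* Hypothetical helper: run-time-variable microsecond delay built from
   fixed 1 us delays. Each iteration adds loop overhead, so this is only
   approximate and not usable for nanosecond-level precision. */
static void delay_us_var(unsigned int us)
{
    while (us--) {
        _delay_us(1);
    }
}

int main(void) {
    while (1) {
        delay_us_var(100);   /* roughly 100 us, plus loop overhead */
    }
}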

Linux timer interval

I want to run a timer with an interval of 5 ms. I created a Linux timer, and when sigalrm_handler is called I check the time elapsed since the previous call. I'm getting times like 4163, 4422, 4266, 4443, 4470, 4503, 4288 microseconds, when I want the intervals to be about 5000 microseconds with the least possible error. I don't know why the interval is not constant, and why it varies so far below what it should be.
Here is my code:
static int time_count;
static int counter;
struct itimerval timer = {0};

void sigalrm_handler(int signum)
{
    Serial.print("SIGALRM received, time: ");
    Serial.println(micros() - time_count);
    time_count = micros();
}

void setup() {
    Serial.begin(9600);
    timer.it_value.tv_sec = 1;
    timer.it_interval.tv_usec = 5000;
    signal(SIGALRM, &sigalrm_handler);
    setitimer(ITIMER_REAL, &timer, NULL);
    time_count = micros();
}
I want to run a timer with interval of 5 ms.
You probably cannot get that period reliably, because it is shorter than what reasonable PC-class hardware can handle.
As a rule of thumb, 50 Hz (or perhaps 100 Hz) is probably the highest frequency you can rely on. And it is not a matter of software, but of hardware.
Think of your typical processor cache (a few megabytes): you could need a few milliseconds just to refill it. Or think of the time to handle a page fault; it can easily exceed a millisecond.
And the Intel Edison is not a top-fast processor. I won't be surprised if converting a number to a string and displaying that string on some screen takes about a millisecond (but I leave you to check that). This could explain your figures.
Regarding software, see also time(7) (or consider perhaps some busy-waiting approach inside the kernel; I don't recommend that).
Look also into /proc/interrupts (see proc(5)) by running a cat /proc/interrupts command in a shell a few times. You'll probably see that the kernel gets interrupted less frequently than once every one or a few milliseconds.
BTW, your signal handler calls non-async-signal-safe functions (which is undefined behavior). Read signal(7) & signal-safety(7).
So it looks like your entire approach is wrong.
Maybe you want some RTOS, at least if you need some hard real-time (and then, you might consider upgrading your hardware to something faster and more costly).
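On the async-signal-safety point: the usual pattern (a generic sketch, not specific to the Edison's Arduino-style Serial API) is to have the handler only set a flag, and do the measuring and printing in the main loop:
#include <signal.h>

static volatile sig_atomic_t got_alarm = 0;

void sigalrm_handler(int signum)
{
    (void)signum;
    got_alarm = 1;     /* writing a volatile sig_atomic_t is async-signal-safe */
}

/* In the main loop (or Arduino-style loop()): */
void loop(void)
{
    if (got_alarm) {
        got_alarm = 0;
        /* do the timing measurement and printing here, outside the handler */
    }
}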

Generic Microcontroller Delay Function

Can someone please tell me how this function works? I'm using it in code and have an idea how it works, but I'm not 100% sure exactly. I understand the concept of an input variable N counting down, but how the heck does it work? Also, if I am using it repeatedly in my main() for different delays (different inputs for N), do I have to "zero" the function if I used it somewhere else?
Reference: MILLISEC is a constant defined as Fcy/10000, i.e. the system clock divided by 10000.
Thanks in advance.
// DelayNmSec() gives a 1 ms to 65.5 second delay
/* Note that FCY is used in the computation. Please make the necessary
   changes (PLLx4 or PLLx8 etc.) to compute the right FCY as in the define
   statement above. */
void DelayNmSec(unsigned int N)
{
    unsigned int j;
    while (N--)
        for (j = 0; j < MILLISEC; j++);
}
This is referred to as busy waiting, a concept that just burns some CPU cycles thus "waiting" by keeping the CPU "busy" doing empty loops. You don't need to reset the function, it will do the same if called repeatedly.
If you call it with N=3, it will repeat the while loop 3 times, every time counting with j from 0 to MILLISEC, which is supposedly a constant that depends on the CPU clock.
The original author of the code has timed it and looked at the generated assembly to get the exact number of instructions executed per millisecond, and has configured the constant MILLISEC so that the for loop busy-waits for that long.
The input parameter N is then simply the number of milliseconds the caller wants to wait, i.e. the number of times the for loop is executed.
The code will break if
it is used on a different or faster microcontroller (depending on how Fcy is maintained), or
the optimization level of the C compiler is changed, or
the C compiler version is changed (as it may generate different code),
so, if the person who wrote it was clever, there may be a calibration program which defines and configures the MILLISEC constant.
This is what is known as a busy wait in which the time taken for a particular computation is used as a counter to cause a delay.
This approach does have problems in that the computation needs to be adjusted for processors running at different speeds. Old games used this approach; I remember a simulation using this busy-wait technique, targeting an old 8086-class processor, to make an animation move smoothly. When the game was run on a Pentium PC, instead of the rocket majestically rising up the screen over several seconds, the entire animation flashed before your eyes so fast that it was difficult to see what it was.
This sort of busy wait means that in the thread running, the thread is sitting in a computation loop counting down for the number of milliseconds. The result is that the thread does not do anything else other than counting down.
If the operating system is not a preemptive multi-tasking OS, then nothing else will run until the count down completes which may cause problems in other threads and tasks.
If the operating system is preemptive multi-tasking the resulting delays will have a variability as control is switched to some other thread for some period of time before switching back.
This approach is normally used for small pieces of software on dedicated processors where a computation has a known amount of time and where having the processor dedicated to the countdown does not impact other parts of the software. An example might be a small sensor that performs a reading to collect a data sample then does this kind of busy loop before doing the next read to collect the next data sample.

Precise Linux Timing - What Determines the Resolution of clock_gettime()?

I need to do precision timing to the 1 us level to time a change in duty cycle of a pwm wave.
Background
I am using a Gumstix Overo Water COM (https://www.gumstix.com/store/app.php/products/265/) that has a single-core ARM Cortex-A8 processor running at 499.92 BogoMIPS (the Gumstix page claims up to 1 GHz with 800 MHz recommended) according to /proc/cpuinfo. The OS is an Angstrom image version of Linux based on kernel version 2.6.34, stock on the Gumstix Water COM.
The Problem
I have done a fair amount of reading about precise timing in Linux (and have tried most of it), and the consensus seems to be that using clock_gettime() with CLOCK_MONOTONIC is the best way to do it. (I would have liked to use the RDTSC register for timing since I have one core with minimal power-saving abilities, but this is not an Intel processor.) So here is the odd part: while clock_getres() reports a resolution of 1 ns, actual timing tests suggest a minimum resolution of 30517 ns, or (it can't be a coincidence) exactly the time between ticks of a 32.768 kHz clock. Here's what I mean:
// Stackoverflow example
#include <stdio.h>
#include <time.h>

#define SEC2NANOSEC 1000000000

int main(int argc, const char* argv[])
{
    // //////////////// Min resolution test //////////////////////
    struct timespec resStart, resEnd, ts;
    ts.tv_sec  = 0; // s
    ts.tv_nsec = 1; // ns
    int iters = 100;
    double resTime, sum = 0;
    int i;
    for (i = 0; i < iters; i++)
    {
        clock_gettime(CLOCK_MONOTONIC, &resStart);       // start timer
        // clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, &ts);
        clock_gettime(CLOCK_MONOTONIC, &resEnd);         // end timer
        resTime = ((double)resEnd.tv_sec * SEC2NANOSEC + (double)resEnd.tv_nsec)
                - ((double)resStart.tv_sec * SEC2NANOSEC + (double)resStart.tv_nsec);
        sum = sum + resTime;
        printf("resTime = %f\n", resTime);
    }
    printf("Average = %f\n", sum / (double)iters);
    return 0;
}
(Don't fret over the double casting; tv_sec is a time_t and tv_nsec is a long.)
Compile with:
gcc soExample.c -o runSOExample -lrt
Run with:
./runSOExample
With the nanosleep commented out as shown, the result is either 0ns or 30517ns with the majority being 0ns. This leads me to believe that CLOCK_MONOTONIC is updated at 32.768kHz and most of the time the clock has not been updated before the second clock_gettime() call is made and in cases where the result is 30517ns the clock has been updated between calls.
When I do the same thing on my development computer (AMD FX(tm)-6100 Six-Core Processor running at 1.4 GHz) the minimum delay is a more constant 149-151ns with no zeros.
So, let's compare those results to the CPU speeds. For the Gumstix, that 30517 ns (32.768 kHz) equates to roughly 15,256 cycles of the 499.93 MHz CPU. For my dev computer, that 150 ns equates to 210 cycles of the 1.4 GHz CPU.
With the clock_nanosleep() call uncommented the average results are these:
Gumstix: Avg value = 213623 and the result varies, up and down, by multiples of that min resolution of 30517ns
Dev computer: 57710-68065 ns with no clear trend. In the case of the dev computer I expect the resolution to actually be at the 1 ns level and the measured ~150ns truly is the time elapsed between the two clock_gettime() calls.
So, my question's are these:
What determines that minimum resolution?
Why is the resolution of the dev computer 30000X better than the Gumstix when the processor is only running ~2.6X faster?
Is there a way to change how often CLOCK_MONOTONIC is updated and where? In the kernel?
Thanks! If you need more info or clarification just ask.
As I understand it, the difference between the two environments (Gumstix and your dev computer) is probably the underlying timer hardware they are using.
Commented nanosleep() case:
You are using clock_gettime() twice. To give you a rough idea of what clock_gettime() ultimately gets mapped to in the kernel:
clock_gettime --> clock_get() --> posix_ktime_get_ts --> ktime_get_ts() --> timekeeping_get_ns() --> clock->read()
clock->read() basically reads the value of the counter provided by the underlying timer driver and corresponding hardware. Taking the difference between the current counter value and a stored past value, then doing the nanoseconds-conversion math, yields the nanoseconds elapsed and updates the timekeeping data structures in the kernel.
For example, if you have an HPET timer giving you a 10 MHz clock, the hardware counter is updated at 100 ns intervals.
Let's say that on the first clock->read() you get a counter value of X.
The Linux timekeeping data structures read this value X, get the difference 'D' compared to some old stored counter value, do the counter-difference-'D'-to-nanoseconds-'n' conversion math, and advance the data structures by 'n'.
This new time value is handed to user space.
When the second clock->read() is issued, it again reads the counter and updates the time.
Now, for an HPET timer, this counter is updated every 100 ns, and hence you see that difference reported to user space.
Now, let's replace the HPET timer with a slow 32.768 kHz clock. The counter read by clock->read() is now updated only every 30517 ns, so if your second call to clock_gettime() comes before that period has elapsed you get 0 (which is the majority of cases), and in some cases the second call lands after the counter has incremented by 1, i.e. 30517 ns have elapsed. Hence the occasional value of 30517 ns.
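To make the arithmetic concrete, here is a simplified sketch of the counter-to-nanoseconds conversion described above (the real kernel code uses precomputed mult/shift factors rather than a divide, so treat this as illustration only):
/* elapsed ns = ticks * (1e9 / frequency) */
unsigned long long counter_to_ns(unsigned long long delta_ticks,
                                 unsigned long long freq_hz)
{
    return delta_ticks * 1000000000ULL / freq_hz;
}
/* 1 tick of a 32768 Hz clock -> 30517 ns
   1 tick of a 10 MHz HPET    -> 100 ns  */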
Uncommented nanosleep() case:
Let's trace clock_nanosleep() for monotonic clocks:
clock_nanosleep() --> nsleep --> common_nsleep() --> hrtimer_nanosleep() --> do_nanosleep()
do_nanosleep() will simply put the current task in the INTERRUPTIBLE state, wait for the timer to expire (which is 1 ns here), and then set the current task back to the RUNNING state. As you can see, there are a lot of factors involved now, mainly when your kernel thread (and hence the user-space process) will be scheduled again. Depending on your OS, you will always face some latency when doing a context switch, and this is what we observe in the average values.
Now, your questions:
What determines that minimum resolution?
I think the resolution/precision of your system depends on the underlying timer hardware being used (assuming your OS is able to provide that precision to the user-space process).
Why is the resolution of the dev computer 30000X better than the Gumstix when the processor is only running ~2.6X faster?
Sorry, you lost me here: how is it 30000x faster? To me it looks more like about 200x (30517 ns / 150 ns ≈ 200). But anyway, as I understand it, CPU speed may or may not have anything to do with the timer resolution/precision. So that assumption may hold on some architectures (when you are using the TSC hardware), but fail on others (HPET, PIT, etc.).
Is there a way to change how often CLOCK_MONOTONIC is updated and where? In the kernel?
You can always look into the kernel code for details (that's how I looked into it).
In the Linux kernel code, look at these source files and documentation:
kernel/posix-timers.c
kernel/hrtimer.c
Documentation/timers/hrtimers.txt
I do not have a Gumstix on hand, but it looks like your clocksource is slow.
run:
$ dmesg | grep clocksource
If you get back
[ 0.560455] Switching to clocksource 32k_counter
This might explain why your clock is so slow.
In recent kernels there is a directory /sys/devices/system/clocksource/clocksource0 with two files: available_clocksource and current_clocksource. If you have this directory, try switching to a different source by echoing its name into the second file.
