Digital clock in C - how to update the seconds? [closed]

I was making a digital clock in C.
It takes the current time as input from the user, then updates the seconds to show the time in the format HH:MM:SS.
I am confused by the for loop that sits inside the seconds loop:
for(i=0;i<89999900;i++)
    i=i+1-1;
I have tried to dry run the code.
Suppose I gave the input 10:30:20 as hh:min:sec respectively.
Now the for loops will start: the loop for hr, then the loop for min, then the loop for sec... then the loop for i.
When sec is 20, the loop for i will run 89999900 times and do i=i+1-1, i.e. update the value of i...
Then sec will be 21...
What I am surprised by is how the "i" loop has an impact on the sec value, and how it works that fast.
[code]
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int h = 0, m = 0, s = 0;
    double i;
    printf("Enter time in format of HH MM SS\n");
    scanf("%d%d%d", &h, &m, &s);
start:;
    for (h; h < 24; h++) {
        for (m; m < 60; m++) {
            for (s; s < 60; s++) {
                system("cls");
                printf("\n\n\n\n\n\n\t\t\t\t\t\t%d:%d:%d\n", h, m, s);
                for (i = 0; i < 89999900; i++) {
                    i = i + 1 - 1;
                }
            }
            s = 0;
        }
        m = 0;
    }
    goto start;
}
[/code]

These kinds of dirty "burn-away" loops were common a long time ago, and they still exist to some extent in embedded systems. They aren't professional, since they are very inaccurate and also tightly coupled to a certain compiler build on a certain system. You cannot get accurate timing out of one on a PC, that's for sure.
First of all, programmers always tend to write them wrong. In the 1980s you could write loops like this because compilers were crap, but nowadays any half-decent compiler will simply remove the whole loop from the executable, because it doesn't contain any side effects. So for the code to work at all, you must declare i as volatile to prevent that optimization.
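For illustration, a minimal sketch of how such a loop must at least be written so the optimizer cannot delete it (the bound is still just a guess tied to one compiler and CPU):
[code]
/* Crude busy-wait: volatile forces the compiler to perform every
   load/store of the counter instead of removing the loop. */
void crude_delay(void)
{
    volatile long i;
    for (i = 0; i < 89999900; i++)
        ; /* empty body; counting is the only "work" */
}
[/code]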
Once that severe bug is fixed, you'll have to figure out how long the loop actually takes. It is the sum of all the CPU instructions needed to run it: a compare, some calculation, and then incrementing the iterator by 1. If you disassemble the program you can calculate this by adding together the number of cycles needed by all the instructions, then multiplying that by the time it takes to execute one cycle, which is normally 1/CPU frequency (ignoring pipelining, potential RAM access time losses, multiple cores, etc.).
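As a rough worked example (assumed numbers, not measurements): if the compiled loop body costs about 4 cycles per iteration on a 3 GHz CPU, then 89999900 iterations take roughly 89999900 × 4 / (3 × 10⁹) ≈ 0.12 seconds, nowhere near the full second a clock tick needs.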
In the end you'll come to the conclusion that whoever wrote this code just pulled a number out of their... hat, then at best benchmarked the loop execution time with all optimizations enabled. But far more likely, I'd say this code was written by someone who didn't know what they were doing, which we can already tell from the missing volatile.

You should not use goto. To get an infinite loop, you can use a while(1) loop as shown below.
You should use the system-provided sleep or delay function rather than writing your own for loop, because the number of iterations would change if you moved to a different machine.
The program given below is a crude clock and will accumulate drift, as the time required for executing the rest of the code is not accounted for in the design.
The code below assumes the Linux sleep function; for the Windows Sleep function you can refer to Sleep function in Windows, using C. If you are using an embedded system, there will be delay functions available, or you can write your own using one of the timers.
while (1)
{
    for (h; h < 24; h++)
    {
        for (m; m < 60; m++)
        {
            for (s; s < 60; s++)
            {
                system("cls"); /* use "clear" on Linux */
                printf("\n\n\n\n\n\n\t\t\t\t\t\t%d:%d:%d\n", h, m, s);
                sleep(1); /* POSIX sleep() takes seconds, not milliseconds */
            }
            s = 0;
        }
        m = 0;
    }
    h = 0;
}
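For reference, a sketch of the Windows counterpart mentioned above (assuming the Win32 API: Sleep() from <windows.h> takes milliseconds, unlike POSIX sleep(), which takes seconds):
[code]
#include <windows.h>

/* One clock tick on Windows: Sleep() takes milliseconds. */
void tick_one_second(void)
{
    Sleep(1000); /* 1000 ms = 1 s */
}
[/code]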

Related

Is there any advantage to using for loop with no conditions (for (;;)) instead of a while(true) in embedded C code? [duplicate]

I recently took over an ARM Cortex-M4 C code base from another engineer who unexpectedly quit. As I was looking through the code to understand what was going on, I came across several uses of the for(;;) construct. I understand it to mean an infinite loop, but is it advantageous to use this over while(1) or while(true)? It just seems strange to me to use an infinite for-loop construct when you don't want a starting and stopping condition.
I'm not sure whether I should leave it be or convert it to a while(true) construct instead. I don't know what's going on at the lower level when you use for(;;), since normally a for loop keeps track of a counting variable. In the past, when a counting variable like i was declared as a signed 16-bit int, you could have an index overrun when reaching 32,768. Maybe for(;;) just gets substituted with while(1) anyway, so there's no such overrun.
The firmware produced from this code had issues with unexpectedly stopping ("hanging up") after running for a long time (>4 months), but that could be from various other problems such as a memory overrun somewhere else. The code doesn't seem to do memory management too well; I don't see any memory clean-up or freeing of arrays while the loop runs indefinitely. Anyway, I don't know whether for(;;) could contribute to this, but maybe it's better to use while(true) instead?
The compiler is IAR Embedded Workbench for ARM, compiling the C code for an STM32L4R9x chip.
is it advantageous to use this over while(1) or while(True)?
No. Code for clarity*1 and save your time for larger issues, like the "code had issues with unexpectedly stopping".
BTW: the question has both while(True) and while(true). Case is important in C, and exactly how were True/true defined? Sticking with for(;;) avoids that issue.
I don't know whether for(;;) would contribute to this,
It does not contribute.
Tip: Posting code is far more informative than only describing code.
*1 Best to follow your group's coding standard.
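For what it's worth, a minimal sketch of the three spellings; a mainstream compiler will typically emit the same unconditional branch for each of them:
[code]
#include <stdbool.h> /* provides the standard lowercase `true` */

/* Three equivalent spellings of an infinite loop. for(;;) has no
   loop variable at all, so there is nothing that could overflow. */
void loop_a(void) { for (;;)     { /* work */ } }
void loop_b(void) { while (1)    { /* work */ } }
void loop_c(void) { while (true) { /* work */ } }
[/code]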

How do function parameters and local variables affect the performance of a C program? [closed]

Is it worth reducing the number of function parameters and local variables to enhance the performance of a C program?
In the code provided below, maybe the example doesn't make a difference if the function is called only a few times during execution, but maybe it matters if it is called n times. So is there any performance benefit in this case?
int n[4];
// read numbers ...
do_sum1(n[0], n[1], n[2], n[3]); // note: n[3], not n[4] (out of bounds)
do_sum2(n);

// Function definitions
// --------------------
void do_sum1(int a, int b, int c, int d)
{
    printf("%d\n", a + b + c + d);
}

void do_sum2(int n[4])
{
    printf("%d\n", n[0] + n[1] + n[2] + n[3]);
}
The question is trickier than it seems.
First off, let's assume the function is not inlined (otherwise the two are more than likely to be compiled to the same code) and analyze the effect.
On one hand, the number of parameters to a function affects performance. All other things being equal, the more parameters are passed, the worse the performance, since copying the parameters to the place the function expects to find them (be it the stack, a register, or any other storage) takes nonzero time.
On the other hand, semantics matter, and in this particular case things are not equal! In the first case, you are passing 4 integer parameters. Assuming the AMD64 ABI, they will be passed in CPU registers, which are super fast to access for both reading and writing.
However, in the second case you are effectively passing a pointer to a memory location, which means accessing the values through that pointer requires indirection. At best the values will be found in the L1 CPU cache (most likely), but at worst they will be read from main memory (super slow!). While the L1 cache is fast, it is still much slower than register access.
Bottom line:
I expect the second case to be slower, than the first one.
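If you want to verify this rather than take it on faith, here is a rough benchmark sketch (an assumption-laden illustration: the printf is replaced by a volatile sink so I/O doesn't dominate, and with optimization enabled the compiler may inline both calls and erase the difference entirely):
[code]
#include <stdio.h>
#include <time.h>

static volatile int sink; /* volatile sink keeps the calls from being optimized away */

static void do_sum1(int a, int b, int c, int d) { sink = a + b + c + d; }
static void do_sum2(const int n[4]) { sink = n[0] + n[1] + n[2] + n[3]; }

int main(void)
{
    int n[4] = {1, 2, 3, 4};
    enum { RUNS = 100000000 }; /* arbitrary repeat count */
    clock_t t0;

    t0 = clock();
    for (long i = 0; i < RUNS; i++)
        do_sum1(n[0], n[1], n[2], n[3]);
    printf("do_sum1: %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (long i = 0; i < RUNS; i++)
        do_sum2(n);
    printf("do_sum2: %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);
    return 0;
}
[/code]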
is it worth to reduce the number of function parameters and local variables to enhance the performance of a C program?
The first question to ask is, will it enhance the performance of your program? Will it make a measurable difference at all? Don't blindly assume that it will.
The second question to ask is, what are the tradeoffs of doing so? How will it affect your ability to debug and maintain your code? Yes, you could make everything global to eliminate passing parameters and using locals, but the effect of that will be to make your code much harder to understand and maintain.
When thinking about improving performance, you should start from the highest level and work your way down:
Are you using the right data structures and/or algorithms for the problem at hand? For example, an unoptimized quicksort will still beat the pants off an aggressively tuned bubble sort (in the average case), binary searches are usually faster than linear searches, etc.
Are you using the appropriate tools or libraries, or are you hand-hacking everything yourself? It's possible there's already a solution out there that's been tested and tuned, such that you don't have to write from scratch.
Have you implemented your design well? For example, do you have any loop-invariant computations sitting inside your loop bodies that could be hoisted out? (See the sketch at the end of this answer.)
If the answer to all of those questions is "yes", then the next step is to let the compiler do some optimizing (such as using the -O2 flag with gcc). C compilers have gotten very good at optimizing code, and depending on the program you can see some significant speedup.
If at this point you still feel your code is too slow, then you need to do some analysis. Run the code through a profiler to find where the bottlenecks are. At this point, you can start to look at micro-optimizations like reducing the number of parameters being passed to a function. Just be aware that it may not make enough of a difference to be worth the effort.
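To illustrate the loop-invariant point from the checklist above, a small hypothetical sketch (modern compilers often perform this hoist themselves, but it costs nothing to write it explicitly):
[code]
/* Hypothetical example: `scale * offset` never changes inside the
   loop, so compute it once instead of on every iteration. */
void scale_all(double *out, const double *in, int len,
               double scale, double offset)
{
    const double k = scale * offset; /* hoisted loop invariant */
    for (int i = 0; i < len; i++)
        out[i] = in[i] * k;
}
[/code]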

How to measure the quality of my code?

I have some experience with coding, but one of the biggest questions that keeps bugging me is how to improve my code.
I always check the complexity, readability and correctness of the code, but my question is how I can measure the size and the execution time of specific constructs.
For example, take the following problem:
A is an integer
B is an integer
C is an integer
if A is bigger than B, assign C=A
else, assign C=B
For that problem we have 2 simple solutions:
1. use an if-else statement
2. use the ternary operator
For a dry check of the size of the file before compilation, I found that the second solution's file is about half the size of the first (for 1000000 operations I get a difference of some MB).
My question is how I can measure the time difference between pieces of code that do the same operation with different constructs, and how much the compiler optimizes constructs as similar as the two in the example.
The best and most direct way is to check the assembly code generated by your compiler at different optimization levels.
//EDIT
I didn't mention benchmarking, because your question is about checking the difference between two source codes using different language constructs to do the same job.
Don't get me wrong, benchmarking is the recommended way of assuring general software performance, but in this particular scenario it might be unreliable, because of the extremely small execution time frames basic operations have.
Even when you calculate amortized time from multiple runs, the difference might depend too much on the OS and environment and thus pollute your results.
To learn more on the subject I recommend this talk from CppCon; it's actually kind of interesting.
But most importantly,
a quick peek under the hood by exploring the assembly code can tell you whether two statements have been optimized into exactly the same code. That might not be so clear from benchmarking the code.
In the case you asked about (if vs. the ternary operator), it should always lead to the same machine code, because the ternary operator is just syntactic sugar for if, and physically it's the same operation.
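If you want to check this yourself, here is a minimal pair to feed to the compiler (e.g. gcc -S file.c and compare the output; the function names are just for illustration):
[code]
/* Two spellings of "C = max(A, B)"; semantically identical. */
int max_if(int a, int b)
{
    int c;
    if (a > b)
        c = a;
    else
        c = b;
    return c;
}

int max_ternary(int a, int b)
{
    return (a > b) ? a : b;
}
[/code]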
Analyse the time complexity of the two algorithms. If they seem competitive, benchmark:
Develop two programs that solve the same problem, but with the different approaches.
Provide a sufficiently large input for your problem, so that the timing is not affected by other (OS) overheads.
I have some methods in Time measurements to time code. Example:
#include <sys/time.h>
#include <time.h>
#include <stdio.h> /* needed for printf */

typedef struct timeval wallclock_t;

void wallclock_mark(wallclock_t *const tptr)
{
    gettimeofday(tptr, NULL);
}

double wallclock_since(wallclock_t *const tptr)
{
    struct timeval now;
    gettimeofday(&now, NULL);
    return difftime(now.tv_sec, tptr->tv_sec)
         + ((double)now.tv_usec - (double)tptr->tv_usec) / 1000000.0;
}

int main(void)
{
    wallclock_t t;
    double s;

    wallclock_mark(&t);
    /*
     * Solve the problem with Algorithm 1
     */
    s = wallclock_since(&t);
    printf("That took %.9f seconds wall clock time.\n", s);
    return 0;
}
You will get a time measurement. Then you solve the problem with "Algorithm 2", for example, and compare the measurements.
PS: Or you could check the assembly code of each approach, for a more low-level view.
One way is to use the time command in the bash shell and repeat the execution a large number of times; this will show which is better. Also make a template that does neither of the two, so you know the baseline overhead.
Please take the measurements for many cases and compare averages before drawing any conclusions.

Looping timer function in avr?

I recently had to make an Arduino project using the avr library and without the delay library. For it, I had to create my own implementation of a delay function.
After searching on the internet I found this particular code in many, many places, and the only explanation I got was that it "kills time in a calibrated manner":
void delay_ms(int ms) {
    int delay_count = F_CPU / 17500; // where is this 17500 coming from?
    volatile int i;
    while (ms != 0) {
        for (i = 0; i != delay_count; i++)
            ;
        ms--;
    }
}
I am not able to understand how this works (though it did do the job), i.e., how the delay count was determined to be F_CPU / 17500. Where is this number coming from?
Delay functions are better done in assembly, because you must know how many instruction cycles your code takes in order to know how many repetitions achieve the total delay.
I didn't test your code, but the value 17500 is calibrated so that the inner loop takes 1 ms.
For example, if F_CPU = 1000000 then delay_count = 1000000 / 17500 ≈ 57. To reach 1 ms the loop counts to 57, and a simple calculation (1 ms / 57 ≈ 17.5 µs) shows that every count takes about 17.5 µs, which is the time one loop iteration takes when compiled to assembly.
But of course different compiler versions will produce different assembly code, which means an inaccurate delay.
My advice is to use the standard avr/delay.h library; I cannot see any reason why you can't use it. But if you must create another one, then you should learn assembly!
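For completeness, a minimal sketch of the library route (an assumption based on avr-libc, where the delay helpers live in <util/delay.h> and require F_CPU to be defined before the include; compile with optimization enabled for accurate delays):
[code]
#define F_CPU 1000000UL /* must match your actual clock frequency */
#include <util/delay.h>

/* avr-libc computes the busy-wait from F_CPU at compile time,
   so this stays accurate across compiler versions. */
void wait_ms(int ms)
{
    while (ms--)
        _delay_ms(1); /* _delay_ms wants a compile-time constant */
}
[/code]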

How to figure out the performance of small expressions?

I want to figure out the performance of small expressions, so I can decide what to use. Consider the code below; several recursive calls to it may happen.
void foo(void) {
    i++;
    if (etc(ch)) {
        //..
    }
    else if (ch == TOKX) {
        p = 1;
        baa();
        c = 0;
        p = 0;
    }
    //more ifs
}
Question:
Recursive calls may happen to foo(), and i should be incremented only if p has a non-zero value (meaning it will be used in another part of the code). Should I put if(p) i++; or just leave i++;?
It is to answer questions like this (myself) that I'm looking for some tool. Some may think it's a "waste of time" or say "optimization is the root of all evil"... but for cases like this I don't believe that applies to my situation, IMHO. Tell us your opinion if you think otherwise.
An "ideal" tool could show how long each expression takes to run.
It makes me wonder how software debugging is done at the biggest software companies like IBM, Microsoft, Sun, etc. Maybe that's a theme for another thread... more useful than this one, I think.
Platform: should be Linux and MS Windows.
The old adage is something like "don't optimize until you're sure, absolutely positive, you need to".. and there are reasons for that.
That said, here are a few thoughts:
avoid recursion if you can
at a macro level, something like the "time" command in Linux can tell you how long your app is running. Put the method in a loop that runs 10k times and measure that, to average out the numbers
if you want to measure time spent in individual functions, profiling is what you want. Visual Studio has some good built-in stuff for this in Windows, but there are many, many options.
http://en.wikipedia.org/wiki/List_of_performance_analysis_tools
First please understand which measurements matter: there's total wall-clock time taken by the program, and there's percent of time each statement is active, where "active" means "on the stack".
Total wall-clock time is easily measured by subtracting system time after from system time before. If it is very short, just loop the code 1000 times, or whatever. You don't need many digits of precision.
Percent of time each statement is active is best measured by means of stack samples taken on wall-clock time (not CPU-only time). Any good profiler based on wall-clock stack sampling will work, such as Zoom or maybe Oprofile. It's not just the taking of samples that's important, but what is presented to you. It is best if it tells you "inclusive percent by line of code", which is simply the percent of stack samples containing the line of code. Again, you don't need many digits of precision, which means you don't need an enormous number of samples.
The reason inclusive percent by line of code is important, as opposed to other measurements (like self-time, function measurements, invocation counts, milliseconds, and so on) is that it represents the fraction of total wall clock time that line is responsible for, and would not be spent if it were not there.
If you could get rid of it, that tells you how much time it would save.
