How to determine how much stack space an embedded program is using - C

I am wondering what the best way is to determine how much stack space a program is using. Are there any techniques or tools to generate the statistics, rather than counting by hand?
The program I am hoping to analyze is a C program in Code Composer, if that makes a difference.
Thank you

You can fill the stack RAM with some pattern (0xDEADBEEF, for example), run for a while, and then examine the stack to see how much of it was used. You would still have to do the analysis to find the deepest call paths, and then generate the deepest nesting of interrupts on top of that, if that is ever possible in the application.
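As a rough sketch of that fill-and-inspect technique (illustrative only: the linker symbol names here are hypothetical and depend on your linker command file, and the word size and stack direction depend on your target):
#include <stdint.h>
#include <stddef.h>

#define STACK_FILL 0xDEADBEEFu

/* Hypothetical linker symbols marking the stack region. */
extern uint32_t __stack_start[];   /* lowest address of the stack */
extern uint32_t __stack_end[];     /* one past the highest address */

/* Call very early, before the stack has grown deep: paint everything
   below the current stack pointer with the fill pattern. */
void stack_paint(void)
{
    uint32_t marker;                        /* sits near the current top of use */
    for (uint32_t *p = __stack_start; p < &marker; ++p)
        *p = STACK_FILL;
}

/* Call after the program has run a while: with a descending stack, any
   words at the low end still holding the pattern were never touched. */
size_t stack_unused_bytes(void)
{
    const uint32_t *p = __stack_start;
    while (p < __stack_end && *p == STACK_FILL)
        ++p;
    return (size_t)(p - __stack_start) * sizeof(uint32_t);
}
Peak usage is then the stack size minus stack_unused_bytes(); you can also simply inspect the painted region in the debugger's memory window.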

There is some info about running the static analysis tool on TI's website. Generally, static analysis will tell you how much stack is used by the deepest call tree from main(), but it won't include the ISRs. You need to manually look at the call tree and add in the ISR call depth. If you have several priority levels of ISRs, don't forget that a higher-priority ISR can interrupt a lower-priority one.

Related

What is __latent_entropy used for in C?

I would like to understand in which cases the keyword __latent_entropy is used in a C function signature.
I saw some Google results talking about a GCC plugin, but I still don't understand what its impact is.
Thanks
You can have a look at the Kconfig description of what enabling the latent_entropy GCC plugin does (it also mentions its impact on Linux's performance):
config GCC_PLUGIN_LATENT_ENTROPY
    bool "Generate some entropy during boot and runtime"
    help
      By saying Y here the kernel will instrument some kernel code to
      extract some entropy from both original and artificially created
      program state. This will help especially embedded systems where
      there is little 'natural' source of entropy normally. The cost
      is some slowdown of the boot process (about 0.5%) and fork and
      irq processing.

      Note that entropy extracted this way is not cryptographically
      secure!

      This plugin was ported from grsecurity/PaX. More information at:
       * https://grsecurity.net/
       * https://pax.grsecurity.net/
Here you'll find a more detailed description of the latent_entropy GCC plugin. Some content taken from the link:
...
this is where the new gcc plugin comes in: we can instrument the kernel's
boot code to do some hash-like computation and extract some entropy from
whatever program state we decide to mix into that computation. a similar
idea has in fact been implemented by Larry Highsmith of Subreption fame
in http://www.phrack.org/issues.html?issue=66&id=15 where he (manually)
instrumented the kernel's boot code to extract entropy from a few kernel
variables such as time (jiffies) and context switch counts.
the latent entropy plugin takes this extraction to a whole new level. first,
we define a new global variable that we mix into the kernel's entropy pools
on each initcall. second, each initcall function (and all other boot-only
functions they call) gets instrumented to compute a 'random' number that
gets mixed into this global variable at the end of the function (you can
think of it as an artificially created return value that each instrumented
function computes for our purposes). the computation is a mix of add/xor/rol
(the happy recovery Halvar mix :) with compile-time chosen random constants
and the sequence of these operations follows the instrumented functions's
control flow graph. for the rest of the gory details see the source code ;).
...
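To make that concrete, here is a rough hand-written analogue of what the instrumentation amounts to (this is not the plugin's actual generated code; the constants, names, and mixing steps are made up for illustration): each instrumented function mixes compile-time-chosen constants into a local value along its control-flow path and folds the result into a global variable on exit:
#include <stdint.h>

/* Stand-in for the plugin's global latent_entropy variable, which the
   kernel mixes into its entropy pools on each initcall. */
static volatile uint64_t latent_entropy;

#define ROL64(x, r) (((x) << (r)) | ((x) >> (64 - (r))))

/* Hand-written analogue of an instrumented boot-time function. */
static int example_initcall(int flag)
{
    uint64_t local = 0x9E3779B97F4A7C15ull;    /* made-up "compile-time" constant */
    if (flag) {
        local += 0xBF58476D1CE4E5B9ull;        /* one mixing step per basic block */
    } else {
        local ^= 0x94D049BB133111EBull;
        local = ROL64(local, 17);
    }
    latent_entropy ^= local;                   /* folded in at function exit */
    return 0;
}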

Profiling a Single Function Predictably

I need a better way of profiling numerical code. Assume that I'm using GCC in Cygwin on 64 bit x86 and that I'm not going to purchase a commercial tool.
The situation is this: I have a single function running in one thread. There are no code dependencies or I/O beyond memory accesses, with the possible exception of some math libraries linked in. But for the most part, it's all table look-ups, index calculations, and numerical processing. I've cache-aligned all arrays on the heap and stack. Due to the complexity of the algorithm(s), loop unrolling, and long macros, the assembly listing can become quite lengthy -- thousands of instructions.
I have been resorting to either the tic/toc timer in Matlab, the time utility in the bash shell, or the time stamp counter (rdtsc) used directly around the function. The problem is this: the variance of the timing (which might be as much as 20% of the runtime) is larger than the size of the improvements I'm making, so I have no way of knowing if the code is better or worse after a change. You might think then it's time to give up. But I would disagree. If you are persistent, many incremental improvements can lead to a two or three times performance increase.
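For reference, this is roughly what I mean by putting rdtsc directly around the function (a minimal sketch; kernel_under_test is just a stand-in for my routine):
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc / __rdtscp with GCC */

/* Stand-in for the numerical routine being timed. */
static void kernel_under_test(double *data, int n)
{
    for (int i = 0; i < n; ++i)
        data[i] = data[i] * 1.000001 + 0.5;
}

int main(void)
{
    static double data[4096];
    unsigned int aux;
    uint64_t start = __rdtsc();
    kernel_under_test(data, 4096);
    uint64_t stop = __rdtscp(&aux);   /* rdtscp waits for earlier instructions to finish */
    printf("cycles: %llu\n", (unsigned long long)(stop - start));
    return 0;
}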
One problem I have had multiple times that is particularly maddening is that I make a change and the performance seems to improve consistently by say 20%. The next day, the gain is lost. Now it's possible I made what I thought was an innocuous change to the code and then completely forgot about it. But I'm wondering if it's possible something else is going on. Like maybe GCC doesn't yield a 100% deterministic output as I believe it does. Or maybe it's something simpler, like the OS moved my process to a busier core.
I have considered the following, but I don't know if any of these ideas are feasible or make any sense. If yes, I would like explicit instructions on how to implement a solution. The goal is to minimize the variance of the runtime so I can meaningfully compare different versions of optimized code.
Dedicate a core of my processor to run only my routine.
Direct control over the cache(s) (load it up or clear it out).
Ensuring my dll or executable always loads to the same place in memory. My thinking here is that maybe the set-associativity of the cache interacts with the code/data location in RAM to alter performance on each run.
Some kind of cycle accurate emulator tool (not commercial).
Is it possible to have a degree of control over context switches? Or does it even matter? My thinking is the timing of the context switches is causing variability, maybe by causing the pipeline to be flushed at an inopportune time.
In the past I have had success on RISC architectures by counting instructions in the assembly listing. This only works, of course, if the number of instructions is small. Some compilers (like TI's Code Composer for the C67x) will give you a detailed analysis of how it's keeping the ALU busy.
I haven't found the assembly listings produced by GCC/GAS to be particularly informative. With full optimization on, code is moved all over the place. There can be multiple location directives for a single block of code dispersed about the assembly listing. Further, even if I could understand how the assembly maps back into my original code, I'm not sure there's much correlation between instruction count and performance on a modern x86 machine anyway.
I made a weak attempt at using gcov for line-by-line profiling, but due to an incompatibility between the version of GCC I built and the MinGW compiler, it wouldn't work.
One last thing you can do is average over many, many trial runs, but that takes forever.
EDIT (RE: Call Stack Sampling)
The first question I have is, practically, how do I do this? In one of your PowerPoint slides, you showed using Visual Studio to pause the program. What I have is a DLL compiled by GCC with full optimizations in Cygwin. This is then called by a MEX DLL compiled by Matlab using the VS2013 compiler.
The reason I use Matlab is because I can easily experiment with different parameters and visualize the results without having to write or compile any low level code. Further, I can compare my optimized DLL to the high level Matlab code to ensure my optimizations have not broken anything.
The reason I use GCC is that I have a lot more experience with it than with Microsoft's compiler. I'm familiar with many flags and extensions. Further, Microsoft has been reluctant, at least in the past, to maintain and update the native C compiler (C99). Finally, I've seen GCC kick the pants off commercial compilers, and I've looked at the assembly listing to see how it's actually done. So I have some intuition of how the compiler actually thinks.
Now, with regards to making guesses about what to fix: this isn't really the issue; it's more like making guesses about how to fix it. In this example, as is often the case in numerical algorithms, there is really no I/O (excluding memory). There are no function calls. There's virtually no abstraction at all. It's like I'm sitting on top of a piece of saran wrap. I can see the computer architecture below, and there's really nothing in between. If I rolled all the loops back up, I could probably fit the code on about one page, and I could almost count the resulting assembly instructions. Then I could do a rough comparison to the theoretical number of operations a single core is capable of doing to see how close to optimal I am. The trouble then is that I lose the auto-vectorization and instruction-level parallelism I got from unrolling. Unrolled, the assembly listing is too long to analyze in this way.
The point is that there really isn't much to this code. However, due to the incredible complexity of the compiler and modern computer architecture, there is quite a bit of optimization to be had even at this level. But I don't know how small changes are going to affect the output of the compiled code. Let me give a couple of examples.
This first one is somewhat vague, but I'm sure I've seen it happen a few times. You make a small change and get a 10% improvement. You make another small change and get another 10% improvement. You undo the first change and get another 10% improvement. Huh? Compiler optimizations are neither linear nor monotonic. It's possible the second change required an additional register, which broke the first change by forcing the compiler to alter its register allocation algorithm. Maybe the second optimization somehow occluded the compiler's ability to do optimizations, which was fixed by undoing the first one. Who knows. Unless the compiler is introspective enough to dump its full analysis at every level of abstraction, you'll never really know how you ended up with the final assembly.
Here is a more specific example which happened to me recently. I was hand-coding AVX intrinsics to speed up a filter operation. I thought I could unroll the outer loop to increase instruction-level parallelism. So I did, and the result was that the code was twice as slow. What happened was that there were not enough 256-bit registers to go around. So the compiler was temporarily saving results on the stack, which killed performance.
As I was alluding to in this post, which you commented on, it's best to tell the compiler what you want, but unfortunately, you often have no choice and are forced to hand tweak optimizations, usually via guess and check.
So I guess my question would be, in these scenarios (the code is effectively small until unrolled, each incremental performance change is small, and you're working at a very low level of abstraction), would it be better to have "precision of timing" or is call stack sampling better at telling me which code is superior?
I faced a similar problem some time ago, but that was on Linux, which made it easier to tweak. Basically the noise introduced by the OS (called "OS jitter") was as big as 5-10% in SPEC2000 tests (I can imagine it's much higher on Windows due to the much bigger amount of bloatware).
I was able to bring the deviation to below 1% by a combination of the following:
disable dynamic frequency scaling (better to do this both in the BIOS and in the Linux kernel, as not all kernel versions do it reliably)
disable memory prefetching and other fancy settings like "Turbo boost", etc. (BIOS, again)
disable hyperthreading
enable high-performance process scheduler in kernel
bind the process to a core to prevent thread migration (use core 0 - for some reason it was more reliable on my kernel, go figure); see the sketch at the end of this answer
boot to single-user mode (in which no services are running) - this isn't as easy in modern systemd-based distros
disable ASLR
disable network
drop OS pagecache
There may be more to it but 1% noise was good enough for me.
I might put detailed instructions to github later today if you need them.
-- EDIT --
I've published my benchmarking script and instructions here.
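For the core-binding item above, the programmatic equivalent on Linux is sched_setaffinity (taskset -c 0 ./bench does the same from the shell); a minimal sketch:
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to core 0 so the scheduler cannot migrate it. */
static int pin_to_core0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}

int main(void)
{
    if (pin_to_core0() == 0)
        puts("pinned to core 0");
    /* ... run the benchmark here ... */
    return 0;
}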
Am I right that what you're doing is making an educated guess of what to fix, fixing it, and then trying to measure to see if it made any difference?
I do it a different way, which works especially well as the code gets large.
Rather than guess (which I certainly can) I let the program tell me how the time is spent, by using this method.
If the method tells me that roughly 30% is spent doing such-and-so, I can concentrate on finding a better way to do that.
Then I can run it and just time it.
I don't need a lot of precision.
If it's better, that's great.
If it's worse, I can undo the change.
If it's about the same, I can say, "Oh well, maybe it didn't save much, but let's do it all again to find another problem."
I need not worry.
If there's a way to speed up the program, this will pinpoint it.
And often the problem is not just a simple statement like "line or routine X spends Y% of the time", but "the reason it's doing that is Z in certain cases" and the actual fix may be elsewhere.
After fixing it, the process can be done again, because a different problem, which was small before, is now larger (as a percent, because the total has been reduced by fixing the first problem).
Repetition is the key, because each speedup factor multiplies all the previous, like compound interest.
When the program no longer points out things I can fix, I can be sure it is nearly optimal, or at least nobody else is likely to beat it.
And at no point in this process did I need to measure the time with much precision.
Afterwards, if I want to brag about it in a PowerPoint, maybe I'll do multiple timings to get a smaller standard error, but even then, what people really care about is the overall speedup factor, not the precision.

Can you know your max stack depth after you compile?

And is there a way to easily monitor your stack depth in a Linux environment?
Consider the case of a basic app in C, compiled with gcc, in Ubuntu.
How about if you do NOT allow dynamic memory allocation (no malloc/free-ing)?
Can you know your max stack depth after you compile?
No. Consider a recursive function that might call itself any number of times, depending on input. You can't know how many times the function might be invoked, one inside the last, without knowing what the input to the program is.
I expect that it might be possible to determine the max stack depth for some programs, but you can't determine that for all programs.
And is there a way to easily monitor your stack depth in a Linux environment?
I don't know about an easy way to monitor the stack depth continuously, but you can determine the stack depth at any point using gdb's -stack-info-depth command.
How about if you do NOT allow dynamic memory allocation (no malloc/free-ing)?
That doesn't make a difference. Consider a recursive Fibonacci function -- that wouldn't need to allocate any memory dynamically, but the number of stack frames will still vary depending on input.
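For instance, a minimal illustration: the number of simultaneously live frames for fib(n) grows with n, which is only known at run time.
/* Recursion depth -- and therefore stack usage -- depends on the run-time input n. */
unsigned long fib(unsigned int n)
{
    if (n < 2)
        return n;
    return fib(n - 1) + fib(n - 2);
}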
It is possible to do a call-graph analysis. The longest path in the graph is the maximum stack depth. However, there are limitations.
With recursive functions, all bets are off: the depth of recursion depends on the run-time input, and it is not possible to deduce it in compile-time analysis. [It is possible to detect the presence of recursive functions by analyzing the call graph and looking for self-edges, i.e. edges with the same source and destination.]
Furthermore, the same issue is present if there are loops/cycles in the call graph. [As mentioned by @Caleb: a()->b()->c()->d()->e()->f()->g()->c()] [Using algorithms from graph theory, it is possible to detect the presence of cycles as well.]
References for call graph:
http://en.wikipedia.org/wiki/Call_graph
Tools to get a pictorial function call graph of code

gettimeofday/settimeofday for Making a Function Appear to Take No Time

I've got an auxiliary function that does some operations that are pretty costly.
I'm trying to profile the main section of the algorithm, but this auxiliary function gets called a lot within it. Consequently, the measured time includes the auxiliary function's time.
To solve this, I decided to save and restore the time so that the auxiliary function appears to be instantaneous. I defined the following macros:
#define TIME_SAVE struct timeval _time_tv; gettimeofday(&_time_tv,NULL);
#define TIME_RESTORE settimeofday(&_time_tv,NULL);
. . . and used them as the first and last lines of the auxiliary function. For some reason, though, the auxiliary function's overhead is still included!
So, I know this is kind of a messy solution, and so I have since moved on, but I'm still curious as to why this idea didn't work.
Can someone please explain why?
If you insist on profiling this way, do not set the system clock. This will break all sorts of things, if you have permission to do it. Basically you should forget you ever heard of settimeofday. What you want to do is call gettimeofday both before and after the function you want to exclude from measurement, and compute the difference. You can then exclude the time spent in this function from the overall time.
With that said, this whole method of "profiling" is highly flawed, because gettimeofday probably (1) takes a significant amount of time compared to what you're trying to measure, and (2) probably involves a transition into kernelspace, which will do some serious damage to your program's cache coherency. This second problem, whereby in attempting to observe your program's performance characteristics you actually change them, is the most problematic.
What you really should do is forget about this kind of profiling (gettimeofday or even gcc's -pg/gmon profiling) and instead use oprofile or perf or something similar. These modern profiling techniques work based on statistically sampling the instruction pointer and stack information periodically; your program's own code is not modified at all, so it behaves as closely as possible to how it would behave with no profiler running.
There are a couple possibilities that may be occurring. One is that Linux tries to keep the clock accurate and adjustments to the clock may be 'smoothed' or otherwise 'fixed up' to try to keep a smooth sense of time within the system. If you are running NTP, it will also try to maintain a reasonable sense of time.
My approach would have been not to modify the clock but instead to track the time consumed by each portion of the process. The time spent in the expensive part would be accumulated (by taking the difference between gettimeofday at entry and exit, and adding it up), and that total would then be subtracted from the overall time. There are other possibilities for fancier approaches, I'm sure.
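A minimal sketch of that accumulate-and-subtract approach (the function names here are hypothetical):
#include <stdio.h>
#include <sys/time.h>

/* Total time (microseconds) spent inside the auxiliary function. */
static long long aux_usec = 0;

static long long usec_now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000000LL + tv.tv_usec;
}

/* The expensive helper, wrapped with entry/exit timestamps. */
static void auxiliary(void)
{
    long long t0 = usec_now();
    /* ... expensive work ... */
    aux_usec += usec_now() - t0;
}

int main(void)
{
    long long t0 = usec_now();
    auxiliary();                      /* the main algorithm would call this many times */
    long long total = usec_now() - t0;
    printf("total: %lld us, excluding auxiliary: %lld us\n",
           total, total - aux_usec);
    return 0;
}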

Self-modifying code for trace hooks?

I'm looking for the least-overhead way of inserting trace/logging hooks into some very performance-sensitive driver code. This logging stuff has to be compiled in at all times, but most of the time it should do nothing (and do that nothing very fast).
There isn't anything much simpler than just having a global on/off word, doing an if(enabled){log()}. However, if possible I'd like to even avoid the cost of loading that word every time I hit one of my hooks. It occurs to me that I could potentially use self-modifying code for this -- i.e. everywhere I have a call to my trace function, I overwrite the jump with a NOP when I want to disable the hooks, and replace the jump when I want to enable them.
A quick Google search doesn't turn up any prior art on this -- has anyone done it? Is it feasible, and are there any major stumbling blocks that I'm not foreseeing?
(Linux, x86_64)
Yes, this technique has been implemented within the Linux kernel, for exactly the same purpose (tracing hooks).
See the LWN article on Jump Labels for a starting point.
There aren't really any major stumbling blocks, just a few minor ones: multithreaded processes (you will have to stop all other threads while you're enabling or disabling the code) and an incoherent instruction cache (you'll need to ensure the I-cache is flushed on every core).
Does it matter if your compiled driver is suddenly twice as large?
Build two code paths -- one with logging, one without. Use global function pointer(s) to jump into the performance-sensitive section(s), and overwrite them as appropriate.
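A minimal sketch of that idea, with illustrative names (trace_log stands in for whatever the real logging sink is):
#include <stdio.h>

/* Stand-in for the real trace sink. */
static void trace_log(const void *buf, int len)
{
    fprintf(stderr, "trace: %d bytes at %p\n", len, buf);
}

/* Fast path: no logging, and no per-call flag test. */
static void process_fast(const void *buf, int len)
{
    (void)buf; (void)len;
    /* ... performance-sensitive work ... */
}

/* Slow path: identical work plus the trace hook. */
static void process_traced(const void *buf, int len)
{
    trace_log(buf, len);
    process_fast(buf, len);
}

/* All callers go through this pointer; flipping it switches paths. */
void (*process)(const void *buf, int len) = process_fast;

void set_tracing(int enabled)
{
    process = enabled ? process_traced : process_fast;
}
The trade is an indirect call at each entry point instead of a flag test at every hook.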
If there were a way to somehow declare a register global, you could load the register with the value of your word at every entry point into your driver from the outside and then just check the register. Of course, then you'd be denying the use of that register to the optimizer, which might have some unpleasant performance consequences.
I'm writing not so much about whether this is possible but about whether you gain anything significant.
On the one hand you don't want to test "logging enabled" every time a logging opportunity presents itself; on the other hand you need to test "logging enabled" and overwrite code with either the yes-case or the no-case code. Or does your driver "remember" that it was off the last time, so that if off is requested again this time nothing needs to be done?
The logic required does not appear to be trivial compared to just testing every time.
