Are global variables faster than local variables in C? [closed]

I had a couple of thoughts on this. The first is that allocating global variables may be faster, since they are allocated only once, when the program first starts, whereas local variables must be allocated every time a function is called. My second thought is that since local variables are on the stack, they are accessed through the base pointer register, so the value stored in the base pointer must be decremented every time a local variable is accessed; global variables are accessed directly through their static addresses in the data segment. Is my thinking accurate?

It's rather inaccurate.
If you study computer architecture, you'll find that the fastest storage is the registers, followed by the caches, followed by RAM. The thing about local variables is that the compiler keeps them in registers whenever possible, and otherwise on the stack, which tends to stay in the cache. This is why local variables are faster.
For embedded systems, sure, it might be possible to compile with a tiny memory model, in which case your data segment may fit entirely in a modern controller's on-chip SRAM. But in such cases your local variable usage would also be very tight, so the locals are probably held entirely in registers anyway.
Conclusion: In most cases, local variables will be faster than global variables.
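For instance (a minimal sketch; the function and variable names are made up), with optimizations enabled a typical compiler keeps the local accumulator in a register for the whole loop, while the global one has a fixed address in the data segment and its updates generally have to go through memory - here the compiler cannot even be sure that the array does not alias the global:

#include <stdio.h>

int g_sum;                        /* global: fixed address in the data segment */

int sum_local(const int *a, int n)
{
    int sum = 0;                  /* local: typically kept in a register */
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

void sum_global(const int *a, int n)
{
    g_sum = 0;                    /* global: updates generally go through memory */
    for (int i = 0; i < n; i++)
        g_sum += a[i];
}

int main(void)
{
    int a[] = { 1, 2, 3, 4 };
    printf("%d\n", sum_local(a, 4));   /* prints 10 */
    sum_global(a, 4);
    printf("%d\n", g_sum);             /* prints 10 */
    return 0;
}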

Related

Importance of using functions [closed]

It has been on my mind for a while now.
How is space managed when we use a function?
In particular, compare this:
#include <stdio.h>

int main(void)
{
    printf("Helloworld");
    return 0;
}
and this:
#include <stdio.h>

void fun(void)
{
    printf("Helloworld");
}

int main(void)
{
    fun();
    return 0;
}
So in terms of memory consumption, are both of these the same, or does one of them consume less memory?
I understand that in a large program functions help us avoid repeating the same code again and again, and that a function releases its space every time it ends. But I want to know what happens in a small program, where memory consumption is insignificant and the memory released when a function ends has no noticeable effect.
What are the pros and cons of using a function in this case?
The C standard doesn't say anything about memory consumption when using a function. An implementation (i.e. a specific compiler on a specific computer system) is free to handle function calls the way it wants. It's even allowed to suppress function calls and put the function's code directly where the call was (this is called inlining). So there is no answer that will cover all systems in all situations.
Most systems use a stack for handling function calls. A stack is a pre-allocated memory block that is assigned to the program at start-up. The running program keeps track of the memory used within that block using a stack pointer. When a function is called, the stack pointer is changed according to the memory requirements of the function. When the function returns, the stack pointer is changed back to its original value. This allows fast allocation and deallocation of variables local to the function, as well as of any overhead memory used for the call itself (e.g. for storing the return address, CPU registers, etc.).
Since the stack is a pre-allocated fixed memory block, there is really no extra memory consumption involved in a function call. It's only a matter of using the already allocated memory.
However, if you do many nested function calls, you may run out of stack memory but that's another issue.
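As a rough illustration (a sketch only; the depth at which this fails depends entirely on how large the pre-allocated stack is on your system), each nested call just moves the stack pointer further into the fixed-size block until it runs out:

#include <stdio.h>

/* Each call pushes a new frame (return address, saved registers, the local
   buffer) onto the pre-allocated stack. No memory is requested from the
   operating system per call, but a deep enough recursion eventually exhausts
   the fixed-size stack and the program crashes. */
static int recurse(int depth)
{
    char local[1024];                      /* 1 KiB of locals per frame */
    local[0] = (char)depth;
    printf("depth %d, frame at %p\n", depth, (void *)local);
    return recurse(depth + 1) + local[0];  /* no base case, on purpose */
}

int main(void)
{
    return recurse(0);                     /* ends with a stack overflow */
}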

Embedded Software - Why is 'const' necessary in lookup table? [closed]

I was just watching a video on LinkedIn Learning about lookup tables. It mentioned that without the 'const' qualifier, the array will be allocated in RAM, the initial assignment takes place during startup, and the whole table ends up stored twice - in both RAM and ROM.
Can someone explain this to me in a bit more detail? Why does it get stored twice? Does this mean that all variables/arrays without 'const' get stored twice? Would a switch statement be better than a lookup table without const?
Thanks in advance.
Microcontrollers usually have (except for the flashless ones) much more flash than RAM. It would be a waste to place constant data in RAM.
When you use the const keyword, most toolchains place the data in the .rodata section, which is located in the read-only memory - flash. Some microcontroller types (AVRs, for example) need special mechanisms to access this data; for most modern ones there is almost no difference (fast microcontrollers need to slow down read and write operations using wait states, as flash is slower than SRAM).
You can also force static const variables to be placed in ROM by using attributes and pragmas, e.g. with gcc:
static const char __attribute__((section(".rodata"))) x;
(sections may have different names - check your toolchain documentation)
But this works only for global (or static) variables - most implementations place automatic const variables on the stack, which is located in RAM.
EDIT
A static const may also be stored in ROM only, but several years ago I had a bad experience with one of the microcontroller gcc branches. To be sure, check what your toolchain does with these variables.
So const is not necessary for lookup tables, but it is logical to save the (usually) very limited resource - the SRAM.
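A minimal sketch of the difference (the table contents, sizes and section names here are placeholders; the exact behaviour depends on the toolchain and linker script):

#include <stdint.h>

/* Without const: the table lives in RAM (.data), while its initializer image
   lives in flash; startup code copies flash -> RAM, so the table effectively
   occupies both memories. */
uint8_t table_ram[256] = { 0, 3, 6, 9 /* ... */ };

/* With const and static storage duration: most toolchains put the table in
   .rodata, which stays in flash; no RAM copy is made. */
static const uint8_t table_rom[256] = { 0, 3, 6, 9 /* ... */ };

uint8_t lookup(uint8_t i)
{
    return table_rom[i];   /* read directly from flash (possibly via wait states) */
}

A switch statement would also end up in flash (it is code), but for a dense table an indexed const array is usually both smaller and faster.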

Finding unused memory in process memory [closed]

I'm looking for a reliable way to find unused memory in a C program's process, since I need to "inject" some data somewhere without corrupting anything.
Whenever I find an area containing only zeros, that's a good sign. However, there are no guarantees: it can still crash. All the non-zero memory is most likely in use, so it cannot be overwritten reliably (most memory has some kind of data in it).
I understand that you can't really know (without having the application's source code, for instance), but are there any heuristics that make sense, such as choosing certain segments or memory that looks a certain way? Since the data can be 200 KB, it is rather large, and finding an appropriate address range can be difficult/tedious.
Allocating memory via OS functions doesn't work in this context.
Without deep knowledge of a remote process you cannot know that any memory that is actually allocated to that process is 'unused'.
Just finding writable memory (regardless of current contents) is asking to crash the process or worse.
Asking the OS to allocate some more memory in the other process is the way to go; that way you know the memory is not used by the process, and the process won't receive that address through an allocation of its own.
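On Windows, for example, that would look roughly like this (a sketch only, with minimal error handling; pid, payload and payload_size are placeholders you would supply):

#include <windows.h>
#include <stdio.h>

/* Ask the OS to reserve fresh pages inside the target process and copy the
   data there. Because the allocation comes from the target's own virtual
   memory manager, it cannot collide with memory the process already uses. */
int inject(DWORD pid, const void *payload, SIZE_T payload_size)
{
    HANDLE h = OpenProcess(PROCESS_VM_OPERATION | PROCESS_VM_WRITE | PROCESS_VM_READ,
                           FALSE, pid);
    if (h == NULL)
        return -1;

    void *remote = VirtualAllocEx(h, NULL, payload_size,
                                  MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (remote == NULL) {
        CloseHandle(h);
        return -1;
    }

    SIZE_T written = 0;
    WriteProcessMemory(h, remote, payload, payload_size, &written);
    printf("wrote %lu bytes at %p in process %lu\n",
           (unsigned long)written, remote, (unsigned long)pid);

    CloseHandle(h);
    return 0;
}

On Linux the same idea applies with different machinery, e.g. driving an mmap call in the target via ptrace.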

Loading characters from an array in MIPS [closed]

When you load a character from an array in MIPS, does the data still exist at that position in the array? If not, how can you loop through the array and get each character? Thanks (:
Though your question may seem silly, it is actually a very legitimate one!
From an outside perspective, modern memories have a non-destructive readout.
This means that reading a memory location doesn't destroy the data held there.
So reading from an array won't destroy the item read.
As a curiosity, it is interesting to note that internally, depending on the memory technology, reading may be a destructive operation (common DRAM and the old magnetic-core memory are examples1), and that destructive-readout memories exist (and have existed).
MIPS could run in a system with destructive readout, but that would be tricky: since MIPS is a von Neumann architecture, instructions are read from the same memory as data, so reading an instruction would also destroy it.
Though one could arrange a mixed system where code runs from a non-destructive memory and data lives in a destructive one, such a configuration is so unusual that you can safely assume it will never happen.
1 Read-only memories like ROM and PROM, and non-volatile memories in general, have non-destructive reads (so do flash ROMs). In general, memories that store "charge" have destructive readouts.

How do static functions in C reduce the memory footprint? [closed]

I've recently learned that declaring functions and/or variables as static can reduce the memory footprint, but I can't figure out why. Most articles online focus on scope and readability but don't mention any benefit related to memory allocation. Does "static" really improve performance?
The static keyword is primarily about semantics, not about performance. If you want a variable that persists through multiple calls of the same function, declare it static. If you don't want that, don't declare it static. That said, static variables do have one performance benefit: they are initialized only once instead of every time the function is called. In other cases, static variables are slower, as they are more likely not to be in the cache. Automatic variables are usually cached, as they are typically allocated on the stack, which is generally a hot area, caching-wise. For this reason, you should make lookup tables or constant variables static unless there is a special reason not to (e.g. some people use automatic constant variables as a token to pass to another function).
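For example (names made up), the difference in semantics looks like this - the static counter is initialized once and keeps its value across calls, while the automatic one is re-created on every call:

#include <stdio.h>

void tick(void)
{
    static int call_count = 0;   /* static storage duration: initialized once, persists */
    int tmp = 0;                 /* automatic storage duration: re-initialized every call */

    call_count++;
    tmp++;
    printf("call_count=%d tmp=%d\n", call_count, tmp);
}

int main(void)
{
    tick();   /* prints call_count=1 tmp=1 */
    tick();   /* prints call_count=2 tmp=1 */
    return 0;
}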
For functions, the same thing applies: Make a function static when you don't want to call it from other translation units. You should most definitely make all functions static for which this applies. On ABIs that need to preserve the ability for symbol interposition (i.e. the ability to exchange at load time the definition of a global symbol), compilers can only inline static functions. Also, the compiler can remove unused static functions from the binary. This is only possible if the entire translation unit is unused when the function is not static.
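A small sketch of the function case (the helper names are made up): because square below has internal linkage, the compiler knows every caller, can inline it freely, and can drop the stand-alone copy from the binary if nothing ends up calling it:

/* Internal linkage: visible only inside this translation unit. */
static int square(int x)
{
    return x * x;
}

int sum_of_squares(int a, int b)
{
    /* Likely inlined; if every call is inlined, no separate copy of square()
       needs to be emitted at all, which is where the footprint saving comes from. */
    return square(a) + square(b);
}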
Whether using a static variable would improve efficiency depends on many things, including the usage of the variable and the hardware architecture of your processor. The C standard does not specify this. If you need the variable to retain its value between one call of the function and the next, it must have static storage duration. Otherwise the only way to find out for sure is to code it both ways and do timing tests.
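"Code it both ways and do timing tests" can be as simple as the sketch below (the table contents and the iteration count are arbitrary; substitute your real code and compile with your usual optimization flags):

#include <stdio.h>
#include <time.h>

/* Version A: static lookup table, initialized once for the whole program. */
static int work_static(int i)
{
    static const int table[4] = { 1, 2, 3, 4 };
    return table[i & 3];
}

/* Version B: automatic lookup table, conceptually re-created on every call. */
static int work_auto(int i)
{
    const int table[4] = { 1, 2, 3, 4 };
    return table[i & 3];
}

int main(void)
{
    volatile long sink = 0;   /* keeps the optimizer from deleting the loops */
    clock_t t0, t1;

    t0 = clock();
    for (long i = 0; i < 100000000L; i++)
        sink += work_static((int)i);
    t1 = clock();
    printf("static:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (long i = 0; i < 100000000L; i++)
        sink += work_auto((int)i);
    t1 = clock();
    printf("automatic: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    (void)sink;
    return 0;
}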
