Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
If you go down to the assembler level and basic CPU instructions, what is a function? A function is just a block of code surrounded by JMP (jump) instructions. The instruction pointer (which tracks the instruction currently being executed) jumps from some other place in the program to the start of the function, executes its code, and then jumps somewhere else.
In this sense, a loop is definitely a function. The only slight difference is that it usually jumps back to its own beginning to check some condition and execute itself again, instead of jumping to some other place (usually the place of its call).
The key aspect of a function (or procedure) is that when it is called, the address it is called from (the return address) is recorded, so the function can jump back to the caller when it finishes. Many processors have special instructions for these two common tasks; for example, x86 has call and ret.
A loop does not generally do any of that. Hence, it is not a function in this sense.
I think you're minimizing the dynamic nature of returning to its place of call. I can agree that it's all branching, sure. However, loops are not functions.
We can't call a loop from multiple other code locations, but we can call a function from anywhere in the code and it will return to its caller, whichever one that is.
Loops can reference free/unbound variables, but functions can be parameterized, which is important as functions support multiple callers & call sites.
There's no such thing as a dynamic loop stack (loops nest statically), but there is a dynamic call stack and a dynamic call chain of arbitrary depth.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
It has been on my mind for a while now.
How is space managed when we use a function?
Particularly this:
#include <stdio.h>

int main(void)
{
    printf("Helloworld");
    return 0;
}
and this:
#include <stdio.h>

void fun(void)
{
    printf("Helloworld");
}

int main(void)
{
    fun();
    return 0;
}
So, in terms of memory consumption, are both of these the same, or does one of them consume less memory?
I understand that in a large program, functions help us avoid repeating the same code again and again, and that a function releases its space every time it ends. But I want to know what happens in a small program, where memory consumption is insignificantly small and the release of a function's memory after it ends has no significant effect.
What are the pros and cons of a function in this case?
The C standard doesn't say anything about memory consumption when using a function. An implementation (i.e. a specific compiler on a specific computer system) is free to perform function calls however it wants. It is even allowed to suppress a function call and put the function's code directly where the call was (this is called inlining). So there is no answer that will cover all systems in all situations.
Most systems use a stack for handling function calls. A stack is a pre-allocated memory block that is assigned to the program at start-up. The running program keeps track of the memory used within that block using a stack pointer. When a function is called, the stack pointer is changed according to the memory requirements of the function. When the function returns, the stack pointer is changed back to its original value. This allows fast allocation and deallocation of variables local to the function, as well as of any overhead memory used for the call itself (e.g. for storing the return address, CPU registers, etc.).
Since the stack is a pre-allocated fixed memory block, there is really no extra memory consumption involved in a function call. It's only a matter of using the already allocated memory.
However, if you do many nested function calls, you may run out of stack memory but that's another issue.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
Which C code is more CPU-expensive:
while(*pointer){
pointer++;
}
or
while(counter > 0){
pointer++;
counter--;
}
?
*pointer nominally requires a fetch from memory, and that is generally the most expensive of the operations shown in your code.
If we assume your code is compiled directly to the obvious assembly corresponding to the operations as they are described in C’s abstract machine, with no optimization, modern CPUs for desktop computers are typically capable of executing one loop iteration per cycle, except for the memory access. That is, they can increment a pointer or counter, test its value, and branch, with a throughput of one set of those per cycle.
When these operations are used in real programs, they will usually be dwarfed by the other operations being performed. Compilers are generally so good at optimization that the method used to express the loop iteration and termination has little effect on the performance—optimization will likely produce equivalent code regardless of variations in expression for differences like incrementing a counter versus iterating a pointer to some end value. (This excludes using a pointer to fetch a value from memory for testing. That does raise complications.)
If you already happen to know the size, I'd expect it to be faster to iterate for some known number of times rather than having to test a pointer each iteration to know whether or not to loop again.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I want to write a function in C that checks millions of parameters; if all of them are true, the function returns true as well, otherwise it returns false.
However, estimating the time of this operation is important, meaning that we need to know how many milliseconds it takes. (An approximate time would be enough.) We need to know this time to know the throughput of this function.
Note: these parameters are read locally from a file, and we use ordinary computers.
Rather than estimating the time, measure it. Modern CPU architecture performs optimizations so complex that a simple change in the data ordering could increase the running time of your function by a factor of six or more.
In your case it is very important to run a realistic benchmark: all parameters that you check need to be placed in memory at the same positions as in the actual program, and your code should be checking them in the same order. This way you would see the effect of caching. Since your function is all-or-nothing, branch prediction would have almost no effect on your code, because prediction would fail at most once before the loop exits.
Since you are reading your parameters from a file, it is important to use the same API in the test program as you plan to use in the actual program. Depending on the platform, I/O APIs may exhibit significant difference in performance, so your test code should test what you plan to use in production.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
While implementing a communication protocol, we have an encoder that traverses some structs recursively and encodes them into a binary message.
So far so good, but now the buffer has to be split into multiple chunks of a fixed size, e.g. the upper size of the receiving buffer. Since allocating memory for the full message and then cutting it up seems too wasteful (the size of the message is, in theory, not bounded), the idea is now to implement a coroutine by means of setjmp/longjmp.
At the moment, I have a prototype with two jump buffers - one buffer for resuming the encode function and the second one for simulating the return behavior of the function to jump back to its caller.
Well, it seems to work, but the code looks like it came straight from hell. Are there any 'conventions' for implementing interruptible recursive functions, maybe a set of macros or something? I would like to use only standardized functions, with no inline asm, in order to stay portable.
Addition:
The prototype is here: https://github.com/open62541/open62541/compare/master...chunking_longjmp
The 'usage' is shown inside of the unit-test.
Currently, coroutine behavior is implemented for a non-recursive function Array_encodeBinary. However, the 'coroutine' behavior should be extended to the general recursive UA_encodeBinary function located here: https://github.com/open62541/open62541/blob/master/src/ua_types_encoding_binary.c#L1029
As pointed out by Olaf, the easiest way would be to use an iterative algorithm. However, if for some reason this is difficult, you can always simulate the recursive algorithm with a stack container and a while loop. This at least makes the function easier to interrupt. A pretty good article on how to implement this can be found here. The article is written for C++, but it should not be difficult to convert it to C.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I have the following recursive method. I get a stack overflow error; it stops at -9352. My question: is a stack overflow the same as an infinite loop? Because this will keep calling itself.
But if I write an infinite loop with while, until, do, etc., it doesn't give me the same stack overflow error. It just keeps going until my system runs out of memory.
This is using Ruby
def recursion(n)
print n
recursion(n-1)
end
recursion(3)
output:
3
2
1
0
.
.
.
-9352 stack overflow stops
Recursion and looping are techniques that can solve similar problems in different ways (as mentioned in the comments, they are Turing equivalent, but this is not my field).
Each function call adds a frame to the call stack. This requires additional memory, and as your call chain goes deeper, it requires more memory, until a certain limit is crossed, which makes your stack overflow and your program crash.
Your recursive code adds more and more frames to the call stack and, given a finite amount of memory, will cause it to overflow. You need some way to tell the recursion when to stop, and to do so before the memory is exhausted. Such a condition is analogous to the base case in mathematical induction, and is therefore usually referred to as such.
Another option, pointed out in the comments, is utilizing tail call optimization, which replaces the current frame in the stack and therefore may prevent the stack from overflowing.
Your iterative solution only requires a fixed amount of memory.
Only the value of a counter or other predefined variables changes, so it does not incur any memory overhead.
If you do not limit the output, it could theoretically go on indefinitely, but some other resource exhaustion or error will most likely kill it. However, that will not be due to the memory consumed by the variables used in the loop itself.