C: using a lot of structs can make a program slow? - c

I am coding a breakout clone. I had one version in which the structures were only one level deep. That version runs at 70 fps.
For more clarity in the code I decided it should have more abstractions, so I created more structs. Most of the time I now have structures two or three levels deep. This version runs at 30 fps.
Since there are some other differences besides the structures, I ask you: can using a lot of structs in C slow down the code significantly?
For example, in the second version I am using:
struct Breakout
{
    Ball ball;
    Paddle paddle;
    Level* levels;
};

struct Level
{
    Bricks* bricks;
};
So I end up writing breakout.levels[level_in_play].bricks[i].visible, for example, in lots of places. Could this be a possible cause?
Thanks.

Doing a lot of pointer dereferences can be a performance hit. When you split a big struct up into smaller structs, two things happen:
Accessing a member of a sub-struct requires an additional pointer dereference and memory fetch, which is slightly slower, and
You can reduce the locality of reference, which causes more cache misses and page faults and can drastically reduce performance.
The locality of reference one is probably what is biting you here. If possible, try to allocate related structs in the same malloc block, which increases the likelihood that they will be cached together.
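For example, one way to do that with the question's types is a flexible array member (C99), so a level and its bricks come from a single malloc call. A rough sketch of a variant of the question's Level, where the Bricks fields and brick_count are made up:

#include <stdlib.h>

typedef struct { int x, y, visible; } Bricks;   /* illustrative fields */

/* The level header and its bricks share one malloc block via a flexible
   array member, so they end up adjacent in memory. */
typedef struct {
    size_t brick_count;
    Bricks bricks[];
} Level;

static Level *level_create(size_t count)
{
    Level *lvl = malloc(sizeof *lvl + count * sizeof lvl->bricks[0]);
    if (lvl != NULL)
        lvl->brick_count = count;
    return lvl;
}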

Adding extra layers of dereferencing can add a small (very small) amount of overhead. The reason is that each -> the compiler sees means an extra memory lookup and offset. For instance, c->b->a requires the compiler to load pointer c from memory, dereference it, offset to b, dereference that, offset to a, dereference that, then load a. That's quite a bit of memory work. Doing c.b.a requires the initial load of c, a single add, then a direct load of a from memory. That is 2 loads vs 5.
Unless this type of work is being done a ton in small, tight loops, it won't amount to squat for time. If you are doing this in heavy inner loops though (and your compiler isn't helping you), then it could add up. For those cases, consider caching the lowest level struct pointer and working from there.
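For the breakout code in the question, that might look roughly like this (level_in_play, bricks and visible come from the question; brick_count and draw_brick are hypothetical):

/* Hoist the innermost pointer out of the loop instead of re-walking
   breakout.levels[level_in_play].bricks[i] on every iteration. */
Bricks *bricks = breakout.levels[level_in_play].bricks;
for (size_t i = 0; i < brick_count; i++)
{
    if (bricks[i].visible)
        draw_brick(&bricks[i]);   /* hypothetical rendering call */
}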
That said, anytime you bring up performance, step one is to profile. Without a profile, you are guessing. You have made an assertion that struct dereferencing is the root of your performance problem, but without an up-to-date and valid profile (on a release build) you are guessing and probably wasting time.

In the first place, it's easy and tempting to guess what the problem is. The sneaky thing about guesses is - they are sometimes right. But why guess, when you can find out for drop-dead sure what's taking the time. I recommend this approach.
That said, here's my guess: malloc and free, if you single-step through them at the assembly-language level, are probably doing a lot more than you thought. I only allocate memory for structures if I know I will not be doing it at particularly high frequency. If I must allocate/deallocate them dynamically at high frequency, it helps to keep a free list of freed copies, so I can just grab them off the list rather than going to malloc all the time.
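A minimal free-list sketch for a fixed-size node type (the Node type and its fields are made up for illustration):

#include <stdlib.h>

typedef struct Node {
    struct Node *next;   /* doubles as the free-list link while the node is idle */
    double payload;
} Node;

static Node *free_list = NULL;

static Node *node_acquire(void)
{
    if (free_list != NULL) {          /* reuse a previously released node */
        Node *n = free_list;
        free_list = n->next;
        return n;
    }
    return malloc(sizeof(Node));      /* only hit malloc when the list is empty */
}

static void node_release(Node *n)
{
    n->next = free_list;              /* push back onto the free list instead of calling free() */
    free_list = n;
}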
Nevertheless, take some stackshots. Chances are you can fix a series of problems and make it a whole lot faster.

Not "lots of structs" in an of itself, other that the potential for a greater number of memory accesses. "lots of indirection" however is more likely to be a cause. You have to consider how many memory accesses are generated in order to get to the actual data. Data proximity and size may also affect caching, but that is much harder to analyse.
Also since you mentioned in a comment that you are performing dynamic memory allocation, the time taken to find an allocate a block is non-deterministic and variable. If you are repeatedly allocating and freeing blocks during execution of the algorithm (rather than pre-allocating at initialisation for example), this can cause both degradation and variability in performance.
If you have a profiling tool, profile the code to see where the performance hit occurs.

Related

What is the optimal amount to malloc at a given time when the total needed is not known?

I've implemented a multi-level cache simulator that needs to store the values currently in the simulator. With current configurations, the maximum size of all values being stored could reach 2G. Obviously I'm not going to assume this worst case scenario and allocate all of that memory up-front. Instead, I have the program set to allocate memory as needed in chunks. The expense of this allocation is exacerbated by the fact that I'm callocing in order to provide 0 values when no write has occurred previously at the specified location.
My question is, is there a good heuristic for how much memory should be allocated each time more is needed? Currently I'm using an arbitrary value, and I considered some solution that would use some ratio of the total system memory (I presume it's possible to detect this dynamically at compile and/or run time), but even with the latter I'm using an arbitrary ratio, which still doesn't sit well with me.
Any insight into best practices for this kind of situation would be appreciated!
A common rule of thumb is to grow geometrically, for example by doubling, on each reallocation.
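A rough sketch of that pattern with realloc (the buffer/length/capacity names are illustrative; the newly grown region is zeroed to mirror the question's calloc behaviour):

#include <stdlib.h>
#include <string.h>

/* Append one value, doubling the capacity whenever the buffer is full. */
static int append(double **buf, size_t *len, size_t *cap, double value)
{
    if (*len == *cap) {
        size_t new_cap = (*cap == 0) ? 16 : *cap * 2;        /* geometric growth */
        double *p = realloc(*buf, new_cap * sizeof *p);
        if (p == NULL)
            return -1;
        memset(p + *cap, 0, (new_cap - *cap) * sizeof *p);   /* zero the new region */
        *buf = p;
        *cap = new_cap;
    }
    (*buf)[(*len)++] = value;
    return 0;
}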
It's best to understand allocation patterns of your program, if this is a problem you need to optimize for. This comes by understanding the program's implementation, the architecture(s) it runs within, and by observation (e.g. time and memory profiling).
The truth is, you can optimize from many perspectives, but things change over time (inputs change, environments change). In user-land, your memory usage is already second-guessed.
Given your allocation sizes, I assume you are already depending on a system which will default to a backing store as needed. As such, you don't have much control over what is paged or when. Peeking at available physical memory is not worth consideration in this case, and you will have to work hard to do better than the system's existing virtual memory implementation. Several of these systems try to use all available memory (e.g. "Unused RAM is wasted RAM").
Having said that and if those assumptions are correct: It's often better to just reduce your allocation sizes and working sets and do I/O yourself as needed.
Your OS probably uses disk caching as well; reads and writes are probably faster than you suspect for large blocks of memory.
Even deeper: Use virtual memory or memory mapped files for these large data sets. Your kernel will likely handle these cases very well.
Obviously I'm not going to assume this worst case scenario and allocate all of that memory up-front.
Then you will likely be surprised to learn that a 2 GB calloc alone may be better than other alternatives people come up with in some environments because a large calloc could just reserve a domain in virtual memory, loading/initializing pages only when you access them. Depending on your usage, this approach will be much better than some alternatives you may be given.
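A sketch of what that looks like in practice (the element count is a placeholder):

#include <stdlib.h>

/* On many virtual-memory systems a large calloc mostly reserves address
   space; zeroed physical pages are only faulted in on first access. */
size_t n = (size_t)1 << 28;                   /* ~268M doubles, about 2 GB */
double *values = calloc(n, sizeof *values);
if (values == NULL) {
    /* fall back to chunked allocation or report the failure */
}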
A good starting point for many problems when understanding a program or input's allocation patterns is to start out conservative, and then make the most beneficial adjustments based on observation. In many cases, you will need little more information than a) accurately determining how much to resize by when resizing is necessary b) reusing allocations where appropriate c) designing your data well for the problem at hand.

Is making smaller functions generally more efficient memory-wise since variables get deallocated more frequently?

Is dividing the work into 5 functions, as opposed to one big function, more memory efficient in C, since at a given time there are fewer variables in memory because the stack frame gets deallocated more often? Does it depend on the compiler and on optimization? If so, with which compilers is it faster?
Answer given that there are a lot of local variables, and that the stack frames come from a centralized main rather than being created on top of each other.
I know other advantages of breaking out the function into smaller functions. Please answer this question, only in respect to memory usage.
It might reduce "high water mark" of stack usage for your program, and if so that might reduce the overall memory requirement of the program.
Yes, it depends on optimization. If the optimizer inlines the function calls, you might well find that all the variables of all the functions inlined are wrapped into one big stack frame. Any compiler worth using is capable of inlining[*], so the fact that it can happen doesn't depend on the compiler. Exactly when it happens will differ.
If your local variables are small, though, then it's fairly rare for your program to use more stack than has been automatically allocated to you at startup. Unless you go past what you're given initially, how much you use makes no difference to overall memory requirements.
If you're putting great big structures on the stack (multiple kilobytes), or if you're on a machine where a kilobyte is a lot of memory, then it might make a difference to overall memory usage. So, if by "a lot of local variables" you mean a few dozen ints and pointers, then no, nothing you do makes any significant difference. If by "a lot of local variables" you mean a few dozen 10k buffers, or if your function recurses very deeply so that you have hundreds of levels of your few dozen ints, then it's at least possible it could make a difference, depending on the OS and configuration.
The model that stack and heap grow towards each other through general RAM, and the free memory in the middle can be used equally by either one of them, is obsolete. With the exception of a very few, very restricted systems, memory models are not designed that way any more. In modern OSes, we have so-called "virtual memory", and stack space is allocated to your program one page at a time. Most of them automatically allocate more pages of stack as it is used, up to a configured limit that's usually very large. A few don't automatically extend stack (Symbian last I used it, which was some years ago, didn't, although arguably Symbian is not a "modern" OS). If you're using an embedded OS, check what the manual says about stack.
Either way, the only thing that affects total memory use is how many pages of stack you need at any one time. If your system automatically extends stack, you won't even notice how much you're using. If it doesn't, you'll need to ensure that the program is given sufficient stack for its high-water mark, and that's when you might notice excessive stack use.
In short, this is one of those things that in theory makes a difference, but in practice that difference is almost always insignificant. It only matters if your program uses massive amounts of stack relative to the resources of the environment it runs in.
[*] People programming in C for PICs or something, using a C compiler that is basically a non-optimizing assembler, are allowed to be offended that I've called their compiler "not worth using". The stack on such devices is so different from "typical" systems that the answer is different anyway.
I think in most cases the area of memory allocated for the stack (for the entire program) remains constant. The amount in use will change based on the depth of call stack and that amount would be less when fewer variables are used (but note that function calls push the return address and stack pointer also).
Also, it depends on how the functions are called. If two functions are called in series, for example, and the stack of the first is popped before the call to the second, then you'll be using less of the stack... but if the first function calls the second, then you're back to where you were with one big function (plus the function call overhead).
There's no memory allocation on the stack - just moving the stack pointer towards the next value, while the stack size itself is predefined. So there's no difference in memory usage (apart from situations where you get a stack overflow).
Yes, in the same vein that using a finer coat of paint on a jet plane increases its aerodynamic properties. Ok, that's a bad analogy, but the point is that if there is ever a question of making things clear and telegraphic or trying to use more functions, go with telegraphic. In most cases these are not mutually exclusive anyway as the beginners tend to give subroutines or functions too much to do.
In terms of memory, I think that if you are truly splitting up the work (f, then g, then h) then you will see some minute increases in available memory, but if these are interdependent then you will not.
As @Joel Burget says, memory management is not really a consideration in code structuring.
Just my take.
Splitting a huge function into smaller ones does have its benefits, among them is potentially more optimized memory usage.
Say, you have this function.
void huge_func(int input) {
    char a[1024];
    char b[1024];
    // do something with input and a
    // do something with input and b
}
And you split it into two:
void func_a(int input) {
    char a[1024];
    // do something with input and a
}

void func_b(int input) {
    char b[1024];
    // do something with input and b
}
Calling huge_func will take at least 2048 bytes of stack, while calling func_a and then func_b achieves the same outcome with about half the memory. However, if func_a calls func_b, the amount of memory used is about the same as with huge_func. Essentially, as @sje397 wrote.
I might be wrong to say this but I do not think there is any compiler optimization that could help you reduce the usage of stack memory. I believe the layout of stack memory must ensure that sufficient memory is reserved for all declared variables, whether used or not.

Fortran global work array vs. local dynamically allocated arrays

I am working with an older F77 code that has been upgraded to F9X. It still has some older "legacy" code structure and I'm curious on the performance aspect towards adding code in the legacy way or modern way. We have a separate F9x code that we are trying to integrate into this older code and use as many of their procedures as possible instead of rewriting our own versions. Also note, assume that all of these procedures are NOT explicitly interfaced.
Specifically, the old code has one large rank-1 work array that is allocated in the main program and as this array is passed deeper into procedures, it is split apart and used where it is needed. Essentially there is one allocation/deallocation and the only overhead with this array involves finding the starting indices (trivial) of needed temporary arrays and passing these sections of the work array into the procedure.
Our new code generally uses lower level procedures from the old code in which multiple dummy arrays originated from the older code's global work array. Instead of the hassle of creating our own work array, finding starting indices, and passing all these array sections with their starting indices, I could just create dynamically allocated arrays where they are needed. However, these procedures can be called thousands (possibly millions for some lower level routines) of times during the code execution and I am concerned with the overhead of allocating and deallocating each time any of these procedures are used. Also, these temporary arrays could contain many millions of double precision elements.
I've also dabbled with automatic arrays but stopped when I started encountering stack overflow issues and now almost exclusively use dynamic arrays. I've heard different things about the stack and heap with regards to how memory for different kinds of arrays is stored but I really don't know the difference and which is better (performance, efficiency, etc.).
Long story short, are these dynamically allocated (or automatic) arrays going to be significantly less efficient due to overhead issues? I also realize that dynamically allocated arrays are more robust in the life span of the code but what I am really after is performance. A 5% performance gain could mean many hours saved in code execution.
I realize I might not get a definitive answer to this due to differences in compiler optimizations and other factors but I'm curious if anyone might have some knowledge/experience with anything similar. Thanks for your help.
I think that any answers are going to be guesses and speculation. My guess: array creation is going to be a very low CPU load. Unless these subroutines are doing a negligible amount of computations, the differing overhead of differing arrays types won't be noticeable. But the only way to be sure would be to try two different methods and to time them, e.g., with the Fortran intrinsic cpu_time.
Automatic arrays are usually placed on the stack, but some compilers place large automatic arrays on the heap. Some compilers have an option to change this behavior. Allocatable arrays are probably on the heap.

In C, does using static variables in a function make it faster?

My function will be called thousands of times. If I want to make it faster, will changing the local function variables to static be of any use? My logic behind this is that, because static variables are persistent between function calls, they are allocated only the first time, and thus every subsequent call will not allocate memory for them and will become faster, because the memory allocation step is not done.
Also, if the above is true, would using global variables instead of parameters be faster for passing information to the function every time it is called? I think space for parameters is also allocated on every function call, to allow for recursion (that's why recursion uses more memory), but since my function is not recursive, if my reasoning is correct then removing the parameters would in theory make it faster.
I know these things I want to do are horrible programming habits, but please, tell me if it is wise. I am going to try it anyway but please give me your opinion.
The overhead of local variables is zero. Each time you call a function, you are already setting up the stack for the parameters, return values, etc. Adding local variables means that you're adding a slightly bigger number to the stack pointer (a number which is computed at compile time).
Also, local variables are probably faster due to cache locality.
If you are only calling your function "thousands" of times (not millions or billions), then you should be looking at your algorithm for optimization opportunities after you have run a profiler.
Re: cache locality:
Frequently accessed global variables probably have temporal locality. They also may be copied to a register during function execution, but will be written back into memory (cache) after a function returns (otherwise they wouldn't be accessible to anything else; registers don't have addresses).
Local variables will generally have both temporal and spatial locality (they get that by virtue of being created on the stack). Additionally, they may be "allocated" directly to registers and never be written to memory.
The best way to find out is to actually run a profiler. This can be as simple as executing several timed tests using both methods and then averaging out the results and comparing, or you may consider a full-blown profiling tool which attaches itself to a process and graphs out memory use over time and execution speed.
Do not perform random micro code-tuning because you have a gut feeling it will be faster. Compilers all have slightly different implementations of things and what is true on one compiler on one environment may be false on another configuration.
To tackle that comment about fewer parameters: the process of "inlining" functions essentially removes the overhead related to calling a function. Chances are a small function will be automatically in-lined by the compiler, but you can suggest a function be inlined as well.
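In C, that suggestion is usually spelled with the inline keyword; a minimal sketch (the function itself is made up):

/* Small helper the compiler will very likely inline at call sites,
   removing the call overhead; 'inline' is a hint, not a guarantee. */
static inline int clamp_to_byte(int v)
{
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return v;
}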
In a different language, C++, the new standard coming out supports perfect forwarding, and perfect move semantics with rvalue references which removes the need for temporaries in certain cases which can reduce the cost of calling a function.
I suspect you're prematurely optimizing, however, you should not be this concerned with performance until you've discovered your real bottlenecks.
Absolutely not! The only "performance" difference is in when the variables are initialised:
int anint = 42;
vs
static int anint = 42;
In the first case the integer will be set to 42 every time the function is called; in the second case it will be set to 42 when the program is loaded.
However, the difference is so trivial as to be barely noticeable. It's a common misconception that storage has to be allocated for "automatic" variables on every call. This is not so: C uses the already allocated space in the stack for these variables.
Static variables may actually slow you down, as some aggressive optimisations are not possible on static variables. Also, as locals are in a contiguous area of the stack, they are easier to cache efficiently.
There is no one answer to this. It will vary with the CPU, the compiler, the compiler flags, the number of local variables you have, what the CPU's been doing before you call the function, and quite possibly the phase of the moon.
Consider two extremes: if you have only one or a few local variables, it/they might easily be stored in registers rather than being allocated memory locations at all. If register "pressure" is sufficiently low, this may happen without executing any instructions at all.
At the opposite extreme there are a few machines (e.g., IBM mainframes) that don't have stacks at all. In this case, what we'd normally think of as stack frames are actually allocated as a linked list on the heap. As you'd probably guess, this can be quite slow.
When it comes to accessing the variables, the situation's somewhat similar -- access to a machine register is pretty well guaranteed to be faster than anything allocated in memory can possibly hope for. OTOH, it's possible for access to variables on the stack to be pretty slow -- it normally requires something like an indexed indirect access, which (especially with older CPUs) tends to be fairly slow. OTOH, access to a global (which a static is, even though its name isn't globally visible) typically requires forming an absolute address, which some CPUs penalize to some degree as well.
Bottom line: even the advice to profile your code may be misplaced -- the difference may easily be so tiny that even a profiler won't detect it dependably, and the only way to be sure is to examine the assembly language that's produced (and spend a few years learning assembly language well enough to say anything when you do look at it). The other side of this is that when you're dealing with a difference you can't even measure dependably, the chances that it'll have a material effect on the speed of real code are so remote that it's probably not worth the trouble.
It looks like static vs non-static has been completely covered, but on the topic of global variables: often these will slow down a program's execution rather than speed it up.
The reason is that tightly scoped variables make it easy for the compiler to heavily optimise; if the compiler has to look all over your application for instances of where a global might be used, then its optimising won't be as good.
This is compounded when you introduce pointers, say you have the following code:
int myFunction()
{
    SomeStruct a, b;
    SomeStruct *A = &a, *B = &b;
    FillOutSomeStruct(B);
    memcpy(A, B, sizeof(*A));
    return A->result;
}
the compiler knows that the pointers A and B can never overlap and so it can optimise the copy. If A and B are global then they could possibly point to overlapping or identical memory; this means the compiler must 'play it safe', which is slower. The problem is generally called 'pointer aliasing' and can occur in lots of situations, not just memory copies.
http://en.wikipedia.org/wiki/Pointer_alias
Using static variables may make a function a tiny bit faster. However, this will cause problems if you ever want to make your program multi-threaded. Since static variables are shared between function invocations, invoking the function simultaneously in different threads will result in undefined behaviour. Multi-threading is the type of thing you may want to do in the future to really speed up your code.
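A sketch of that hazard (the functions are made up):

/* Not thread-safe: all callers share the one static counter, so two
   threads calling next_id() concurrently can race on it. */
int next_id(void)
{
    static int counter = 0;
    return ++counter;
}

/* Reentrant variant: the caller owns the state, so each thread can
   keep its own counter without interfering with the others. */
int next_id_r(int *counter)
{
    return ++(*counter);
}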
Most of the things you mentioned are referred to as micro-optimizations. Generally, worrying about these kind of things is a bad idea. It makes your code harder to read, and harder to maintain. It's also highly likely to introduce bugs. You'll likely get more bang for your buck doing optimizations at a higher level.
As M2tM suggests, running a profiler is also a good idea. Check out gprof for one which is quite easy to use.
You can always time your application to truly determine what is fastest. Here is what I understand: (all of this depends on the architecture of your processor, btw)
C functions create a stack frame, which is where passed parameters and local variables are put, as well as the return pointer back to where the caller called the function. There is no memory management allocation here. It's usually a simple pointer movement and that's it. Accessing data off the stack is also pretty quick. Penalties usually come into play when you're dealing with pointers.
As for global or static variables, they're the same...from the standpoint that they're going to be allocated in the same region of memory. Accessing these may use a different method of access than local variables, depends on the compiler.
The major difference between your scenarios is memory footprint, not so much speed.
Using static variables can actually make your code significantly slower. Static variables must exist in a 'data' region of memory. In order to use that variable, the function must execute a load instruction to read from main memory, or a store instruction to write to it. If that region is not in the cache, you lose many cycles. A local variable that lives on the stack will most surely have an address that is in the cache, and might even be in a cpu register, never appearing in memory at all.
I agree with the other comments about profiling to find out stuff like that, but generally speaking, function static variables should be slower. If you want them, what you are really after is a global. Function statics insert code/data to check whether the thing has been initialized already, and that check gets run every time your function is called.
Profiling may not see the difference, disassembling and knowing what to look for might.
I suspect you are only going to see a variation of a few clock cycles per loop (on average, depending on the compiler, etc). Sometimes the change will be a dramatic improvement or dramatically slower, and that won't necessarily be because the variable's home has moved to/from the stack. Let's say you save four clock cycles per function call for 10000 calls on a 2 GHz processor. Very rough calculation: 20 microseconds saved. Is 20 microseconds a lot or a little compared to your current execution time?
You will likely get more of a performance improvement by making all of your char and short variables into ints, among other things. Micro-optimization is a good thing to know but takes lots of time experimenting, disassembling, and timing the execution of your code, and understanding that fewer instructions does not necessarily mean faster, for example.
Take your specific program, disassemble both the function in question and the code that calls it. With and without the static. If you gain only one or two instructions and this is the only optimization you are going to do, it is probably not worth it. You may not be able to see the difference while profiling. Changes in where the cache lines hit could show up in profiling before changes in the code for example.

Code optimization

I have a big structure (with a lot of member variables). A pointer to this structure is passed to many functions in my code. Some member variables of this structure are used very often, in almost all functions.
If I put those frequently used member variables at the beginning of the structure declaration, will it optimize the code for MCPS - million cycles per second (time consumed by the code)? That is, will frequently accessed members be accessed more efficiently / in less time if they are put at the top than if they are placed randomly or at the bottom of the structure declaration? If yes, what is the logic?
If I have a structure member being accessed in some function as follows:
structurepointer1->member_variable
Will it help in optimizing it in MCPS aspect if I assign it to a local variable and then access the local variable, as shown below?
local_variable = structurepointer1->member_variable;
If yes, then how does it help?
1) The position of a field in a structure should have no effect on its access time except to the extent that, if your structure is very large and spans multiple pages, it may be a good idea to position members that are often used in quick succession close together in order to increase locality of reference and try to decrease cache misses.
2) Maybe / maybe not. In fact it may make things slower. If the variable is not volatile, your compiler may be smart enough to store the field in a register anyway. Even if not, your processor will cache its value, but this may not help if its uses are somewhat far apart, with lots of other memory accesses in between. If the value would either have been stored in a register or have stayed in your processor's cache, then assigning it to a local will only be unnecessary extra work.
Standard Optimizations Disclaimer: Always profile before optimizing. Make sure that what you are trying to optimize is worth optimizing. Always profile your attempted optimizations and make sure they actually made things faster (and not slower).
First, the obligatory disclaimer: for all performance questions, you must profile the code to see where improvements can be made.
In general though, anything you can do to keep your data in the processor cache will help. Putting the most commonly accessed items close together will facilitate this.
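A hedged sketch of that layout idea (the struct and its fields are invented):

/* Fields touched on every frame are grouped at the front so they tend to
   share a cache line; rarely used data sits at the end of the struct. */
struct Entity {
    /* hot: read/written in the inner loop */
    float x, y;
    float vx, vy;
    int   visible;

    /* cold: touched only occasionally */
    char  name[64];
    int   debug_flags;
};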
I know this is not really answering your question, but before you delve into super-optimizing your code, go through this presentation http://dl.fefe.de/optimizer-isec.pdf. I saw it live and it was a good eye opening experience showing compilers are getting far more advanced in optimization than we tend to think and readable code is more important than small optimizations.
On 2, you are most likely better off not declaring a local variable. The compiler is usually smart enough to figure out when and how a variable is used, and to utilize registers to keep it around.
Also, I would second Mark Ransom's suggestion, profile the code before making assumptions about bottlenecks.
I think your question is related to data alignment and data structure padding. In modern compilers this is handled automatically most of the time, trying to avoid the alignment faults that could happen on memory access. You can read about this here. Of course, you can change the alignment of your data, but I think you would need to specify some compiler options to disable auto-alignment and rearrange the fields in the structure to match the architecture you are aiming at.
I would say this is a very low level optimization.
The location of the field in the structure is irrelevant as that will be calculated by the compiler. A more promising optimization is to make sure that your most-used fields are byte-aligned with the word size of your processor.
If you are using the variable local to a function, this should have no impact. If you are passing it to other functions (separate from the larger structure) than that might help a bit.
As with all of the other answers, you need to run a profile baseline before optimizing, to make sure changes are effective. If you're worried about execution time, profile your algorithms and optimize them before you worry about the code a compiler creates, more bang for the buck.
Also, if you want to know what is going to happen, you should consider compiling your c code into assembly output. This will give you an idea of what the compiler is going to do and how you may go about further "fine tuning".
Structure access is almost always indexed indirect access. The assembly code will effectively pull memory using the pointer to the structure as the base plus an index to get to the right field. This is usually an expensive operation, but for modern CPUs it's probably not that slow.
This depends on the locality of the data being accessed. First and foremost, accessing the structure the first time will be the most expensive. Accessing the data afterwards can be quick if the data is already in a processor register; however, this may not be the case depending on the processor used. Storing to a local variable should be less expensive, since the memory access instructions for such an operation are less expensive. Again, I think nowadays processors are fast enough that this optimization is minimal.
I still think that there are probably better places to optimize your code. It is good though that there is someone out there that thinks about this still, in a world of code bloat ;) Embedded computing, you still need to worry about these things.
This depends on the size of your fields and caching details. Look at using valgrind for profiling this.
If you are doing this dereferencing a lot, it will cost time. A decent optimizing compiler will effectively do the "store the pointer into a local variable" optimization you described. It will do a better job than you will, and it will do it in an architecture-specific way.
What you want to do in this situation, overall, is make sure that you test the correctness and the performance of each optimization you are trying. Otherwise you are poking around in the dark.
Remember that fine optimizations at the C line level will virtually never trump higher-order algorithm/design optimizations.
Yes, it can help. But as people have already stated, it depends and can even be counter productive.
The reason I think it can help has to do with pointer aliasing. If you access your variables via a pointer, and the compiler cannot guarantee that the structure was not changed elsewhere (via your pointer or another), it will generate code to reload or store the variable even if it could have kept the value in a register. Here is an example to show what I mean:
calc = structurepointer1->member_variable * x + c;
/* Do something in function which doesn't involve member_variable; */
function(structurepointer1);
calc2 = structurepointer1->member_variable * y;
The compiler will make a memory access for both references to member_variable, because it can not be sure that the called function has modified that field.
If you're sure the function doesn't change that value, doing this would save one memory access:
int temp = structurepointer1->member_variable;
calc = temp * x + c;
function(structurepointer1);
calc2 = temp * y;
There's also another reason you can use a local variable for your member variables, it can make the code much more readable.