I am in the process of restructuring an existing application's code. One requirement of this restructuring is that I need to store a thread-specific variable which will be accessed fairly often, for both reads and writes. There will be approximately 50 such threads. The thread-specific variable would basically be a pointer to a structure.
I am not able to decide exactly how I should store this variable. Should I make it a thread-specific key accessed via pthread_getspecific/pthread_setspecific? I came across some posts which say that calls to these are fairly slow. Another approach could be a global structure which stores all these thread-specific pointers, either as a sorted array (for binary search) or as a hash table of key-value elements. The key would be essentially constant (the thread ID) while the value could change frequently. Which would be the better approach here?
I know the fastest access to the required value would be to actually pass this pointer to each function and keep propagating it, but that would require a lot of code rewriting, which I want to avoid.
Thanks in advance for your response.
If you are using the gcc toolchain (and some other compilers as well), you have a third option: the __thread storage-class specifier. This is very efficient. The implementation gives each thread its own block of storage for these variables, reached through a per-thread base that is set up when the thread is scheduled, so each thread transparently sees its own copy. An access is then just a fixed offset from that base, with none of the per-key lookup cost of the other approaches.
If your threads are static (that is, you launch them and they do not exit until the program is exiting), then you can simply use any mapping structure you care about. The only trick is that the map needs to be populated before the threads are allowed to run, so you probably need a mutex and condition variable to block all the threads until the map is populated. After that, you can broadcast to the waiting threads to go. Since the map never changes after that point, each thread can read from it without any contention to retrieve its thread-specific information.
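A minimal sketch of that gating, assuming a fixed-size array indexed by a thread slot (all names here are placeholders, not from the original code):

#include <pthread.h>

struct thread_info { int id; /* ... per-thread data ... */ };

static struct thread_info *info_map[50];   /* populated once, before the threads run */
static int map_ready = 0;
static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  map_cond = PTHREAD_COND_INITIALIZER;

static void wait_for_map(void)            /* called by each worker thread at startup */
{
    pthread_mutex_lock(&map_lock);
    while (!map_ready)
        pthread_cond_wait(&map_cond, &map_lock);
    pthread_mutex_unlock(&map_lock);
}

static void publish_map(void)             /* called by the launcher after filling info_map */
{
    pthread_mutex_lock(&map_lock);
    map_ready = 1;
    pthread_cond_broadcast(&map_cond);
    pthread_mutex_unlock(&map_lock);
}

After publish_map() has run, the map is effectively read-only, so each worker can look up its own entry without taking the lock again.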
If you are using GCC, then you can use a compiler-specific extension. The __thread storage-class extension places a global variable in a thread-specific area, so that each thread has its own copy of that global.
__thread struct info_type *info;
Don't optimize prematurely; measure the performance of the standard approach before you do anything. pthread_getspecific/pthread_setspecific shouldn't cost more than on the order of 100 clock cycles to hand you the thread-specific pointer. In many applications that is barely distinguishable from noise.
Then, I doubt that any portable solution you can come up with that goes through some sort of global variable or function can be faster than the POSIX functions. Basically they don't do much more than what you propose, but they are probably better optimized.
The best option that you have is to realize your data on the stack of each thread and pass a pointer to that data through to the functions that need it.
If you have a C11-compliant compiler (I think clang already implements that part), you can use the _Thread_local storage-class specifier, which provides exactly the type of variable that you want. Other, pre-C11 compilers have such features as extensions; e.g. the gcc family of compilers has it as __thread.
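For illustration, a hedged sketch of what that looks like (info and info_type are just example names, not anything from the original code):

struct info_type { int value; };

_Thread_local struct info_type *info;   /* C11: each thread gets its own copy of this pointer */

/* pre-C11 gcc/clang spelling of the same thing:
 *   __thread struct info_type *info;
 */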
I don't understand. Is the structure your pointer is pointing at meant to be thread-specific?
If yes, then what is the problem with having a thread-specific structure? If it is meant to be shared (by 50 threads simultaneously!), you can have a global variable, although synchronising it could get messy: which thread's update wins?
Why do you want a pointer to all thread-specific data?
Looking around the internet, I couldn't find consistent and helpful information about this, so here's the issue:
Why are local static variables in C said to be thread-unsafe? I mean, static local variables are stored in the data segment, which is shared by all threads, but isn't internal linkage supposed to stop threads from stepping on each other's static variables?
This forum post seems to suggest that threads do in fact step on each other's data segment occasionally, but wouldn't such behavior clearly violate every C standard since the '90s? If such behavior were to be expected, wouldn't use of the data segment (i.e. all variables with static storage duration, including global variables) have been deprecated long ago in successive C standards?
I really don't get this, since everyone seems to have something against local static variables, but people can't seem to agree on why, and researching some of the arguments shows them to be ill-conceived.
I, for one, think local static variables are a very good way to communicate information between function calls; they can really improve readability and limit scope (compared to, say, passing the information back and forth as arguments and writing it back on each call).
As far as I can see, there are completely legitimate uses of local static variables. But maybe I am missing something? I would really like to know if that were the case.
[EDIT]: The answers here were pretty helpful. Thanks to everyone for the insight.
but isn't internal linkage supposed to stop threads from stepping on each other's static variables?
No, linkage has nothing to do with thread safety. Internal linkage merely stops other translation units from referring to the variable by name, which is a different and unrelated matter.
Let's assume you have a function like this:
int do_stuff (void)
{
    static int x = 0;
    ...
    return x++;
}
and then this function is called by multiple threads, thread 1 and thread 2. The thread callback functions cannot access x directly, because it has local scope. However, they can call do_stuff() and they can do so simultaneously. And then you will get scenarios like this:
Thread 1 has executed do_stuff up to the point where it is about to return 0 to its caller.
Thread 1 is about to write the value 1 back to x, but before it does:
A context switch occurs; thread 2 steps in and executes do_stuff.
Thread 2 reads x; it is still 0, so it returns 0 to its caller and then increases x by 1.
x is now 1.
Thread 1 gets scheduled again. It was about to store 1 to x, so that's what it does.
Now x is still 1, although if the program had behaved correctly, it should have been 2.
This gets even worse when the access to x is done in multiple instructions, so that one thread reads "half of x" and then gets interrupted.
This is a "race condition" and the solution here is to protect x with a mutex or similar protection mechanism. Doing so will make the function thread-safe. Alternatively, do_stuff can be rewritten to not use any static storage variables or similar resources - it would then be re-entrant.
isn't internal linkage supposed to stop threads from stepping on each other's static variables?
Linkage has nothing to do with concurrency: internal linkage stops translation units, not threads, from seeing each other's variables.
I, for one, think local static variables are a very good way to communicate information between function calls, that can really improve readability and limit scope
Communicating information between calls through static variables is not much different from communicating it through globals, and it is a problem for the same reason: when you do that, your function becomes non-reentrant, severely limiting its uses.
The root cause of the problem is that read/write use of variables with static storage duration transforms a function from stateless to stateful. Without static variables, any state controlled by the function must be passed to it from the outside; static variables, on the other hand, let functions keep "hidden" state.
To see the consequences of keeping hidden state, consider the strtok function: you cannot use it concurrently, because multiple threads would step on each other's state. Moreover, you cannot even use it from a single thread if you wish to tokenize a second string while the first one is still being parsed, because the nested invocation would interfere with your own outer invocation.
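A small illustration of the difference, using POSIX strtok_r, which keeps its state in a caller-supplied pointer instead of a hidden static (the input string is just an example):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>

int main(void)
{
    char text[] = "a=1;b=2";
    char *outer_save, *inner_save;

    /* nested parsing works because each level owns its own save pointer;
     * with plain strtok the inner loop would destroy the outer loop's state */
    for (char *pair = strtok_r(text, ";", &outer_save); pair != NULL;
         pair = strtok_r(NULL, ";", &outer_save)) {
        char *key   = strtok_r(pair, "=", &inner_save);
        char *value = strtok_r(NULL, "=", &inner_save);
        printf("%s -> %s\n", key, value);
    }
    return 0;
}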
From my point of view, the premise is wrong, or at least static locals are only as unsafe as any other badly designed construct.
A bad (and thread-unsafe) software practice is sharing resources without any criteria or protection; there are good mechanisms for communication between threads, such as queues, mailboxes, etc., or semaphores and mutexes if the resource really has to be shared. But that is the developer's fault for not using the proper mechanisms.
Actually, I cannot see your point: a static local variable has a well-defined scope and cannot be accessed outside it (and, even better, in embedded applications it is useful for avoiding memory overflows), so I see no general relation between unsafe code and static local variables.
If you are talking about a static local variable which can be written/read from two different tasks without protection (through a callback or whatever), that is a horrible design (and, again, the developer's fault), not evidence that static local variables are in general unsafe.
The behaviour of simultaneously reading from and writing to any non-atomic object is undefined in C.
A static variable makes the possibility of this happening substantially greater than with an automatic or dynamically allocated variable. And that is the crux of the problem.
So if you don't control the concurrent access (using mutual exclusion, for example), you could put your program into an undefined state.
A sort of half-way house: thread-local storage has long been available as an extension in some C compilers, and C11 standardised it as _Thread_local (cf. thread_local of C++11). See, for example, https://gcc.gnu.org/onlinedocs/gcc-3.3/gcc/Thread-Local.html
isn't internal linkage supposed to stop threads from stepping on each other's static variables?
Your question is tagged c. There are no threads in the C programming language (at least there were none before C11's optional threads library). If your program creates any new threads, it does so by calling into some library at run-time. The C tool chain does not know what threads are, it has no way of knowing that the library routines you call create threads, and it has no way of knowing if you consider any particular static variable to be "owned" by one thread or another thread.
Every thread in your program runs in the same virtual address space as every other thread. Every thread potentially has access to all of the same variables that can be accessed by any other thread. If a variable in the program actually is used by more than one thread, it is the programmer's responsibility (not the tool chain's responsibility) to ensure that the threads use it in a safe way.
everyone seems to have something against local static variables,
Software developers who work in teams to develop large, long-lived software systems (think, tens of years and hundreds of thousands to tens of millions of lines of code) have some very well understood reasons to avoid using static variables. Not everyone works on systems like that, but you will meet some folk here who do.
people can't seem to agree on why
Not all software systems need to be maintained and upgraded for tens of years, and not all have tens of millions of lines of code. It's a big world. There are people out there writing code for many different reasons. They do not all have the same needs.
and researching some of the argument shows them to be ill-conceived
There are people out there writing code for many different reasons... What seems "ill-conceived" to you might be something that some other group of developers have thought long and hard about. Perhaps you do not fully understand their needs.
As far as I can see, there are completely legitimate uses of local static variables
Yes. That is why they exist. The C programming language, like many other programming languages, is a general tool that can be used in many different ways.
I need to implement a variation of a reference-counting interface: say we are given a large memory space to manage; there should be getmem(size) and free(pointer to block) functions, and free(pointer to block) must actually release the memory if and only if all processes using that block are done with it.
What I was thinking of doing is to define a Collectable struct holding the pointer to the block, its size, and a count of the processes using it. Then, whenever a process uses a Collectable instance for the first time, it has to explicitly increment the count, and whenever the process free()'s it, the count is decremented.
The problem with this approach is that all processes must honour that interface and make it work explicitly: whenever a collectable pointer is assigned, the process must explicitly increment that counter, which does not satisfy me. I was thinking maybe there is a way to create a macro so this happens implicitly on every assignment?
I've been looking for ways to approach this problem for a while, so other approaches and ideas would be great...
EDIT: the above approach doesn't satisfy me, not only because it doesn't look nice, but mostly because I can't assume a running process's code will bother to update my count. I need a way to make sure it's done without changing the process's code...
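For reference, a rough sketch of the explicit scheme described above, i.e. the one the edit is rejecting (collectable, collectable_retain and collectable_release are made-up names):

#include <stdlib.h>

struct collectable {
    void  *block;     /* the managed memory block                */
    size_t size;      /* its size                                */
    int    refcount;  /* number of processes currently using it  */
};

void collectable_retain(struct collectable *c)
{
    c->refcount++;                /* each user must call this explicitly */
}

void collectable_release(struct collectable *c)
{
    if (--c->refcount == 0) {     /* the last user actually frees the block */
        free(c->block);
        free(c);
    }
}

(In a truly concurrent setting the count would additionally need to be atomic or lock-protected.)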
An early problem with reference counting is that it is relatively easy to count the initial reference by putting code in a custom malloc / free implementation, but it is quite a bit harder to determine if the initial recipient passes that address around to others.
Since C lacks the ability to override the assignment operator (to count the new reference), basically you are left with a limited number of options. The only one that can possibly override the assignment is macrodef, as it has the ability to rewrite the assignment into something that inlines the increment of the reference count value.
So you need to "expand" a macro that looks like
a = b;
into
if (b is a pointer) { // this might be optional, if lookupReference does this work
    struct ref_record* ref_r = lookupReference(b);
    if (ref_r) {
        ref_r->count++;
    } else {
        // error
    }
}
a = b;
The real trick will be in writing a macro that can identify the assignment, and insert the code cleanly without introducing other unwanted side-effects. Since macrodef is not a complete language, you might run into issues where the matching becomes impossible.
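In practice, the closest the preprocessor gets is a macro that client code must be rewritten to call instead of plain assignment; something like this sketch (ref_retain and ASSIGN_REF are hypothetical names):

void ref_retain(void *p);   /* hypothetical: finds the block containing p and bumps its count */

#define ASSIGN_REF(dst, src)  \
    do {                      \
        ref_retain((src));    \
        (dst) = (src);        \
    } while (0)

/* every  a = b;  in client code must then become  ASSIGN_REF(a, b);
 * which is exactly the intrusive change the question hopes to avoid */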
(jokes about seeing nails where you learn how to use a hammer have an interesting parallel here, except that when you only have a hammer, you had better learn how to make everything a nail).
Another option (perhaps more sane, perhaps not) is to keep track of all address values handed out by malloc, and then scan the program's stack and heap for matching addresses. If you find a match, you might have found a valid pointer, or you might have found a string with a lucky encoding; however, if you don't find a match, you can certainly free the address, provided the processes aren't storing an address + offset calculated from the original address. (Perhaps you can use macros to detect such offsets, and add the offset as extra addresses to scan for the same block.)
In the end, there isn't going to be a foolproof solution without building a referencing system, where you pass back references (pretend addresses); hiding the real addresses. The down side to such a solution is that you must use the library interface every time you want to deal with an address. This includes the "next" element in the array, etc. Not very C-like, but a pretty good approximation of what Java does with its references.
Semi-serious answer
#include "Python.h"
Python has a great reference counting memory manager. If I had to do this for real in production code, not homework, I'd consider embedding the python object system in my C program which would then make my C program scriptable in python too. See the Python C API documentation if you are interested!
Such a system in C requires some discipline on the part of the programmer but ...
You need to think in terms of ownership. All things that hold references are owners and must keep track of the objects to which they hold references, e.g. through lists. When a reference-holding thing is destroyed, it must loop over its list of referred objects and decrement their reference counters, and if one drops to zero, destroy it in turn.
Functions are also owners and should keep track of referenced objects, e.g. by setting up a list at the start of the function and looping through it when returning.
So you need to determine in which situations objects should be transferred or shared with new owners and wrap the corresponding situations in macros/functions that add or remove owned objects to owning objects' lists of referenced objects (and adjust the reference counter accordingly).
Finally you need to deal with circular references somehow by checking for objects that are no longer reachable from objects/pointers on the stack. That could be done with some mark and sweep garbage collection mechanism.
I don't think you can do it automatically without overridable destructors/constructors.
You can look at HDF5 ref counting but those require explicit calls in C:
http://www.hdfgroup.org/HDF5/doc/RM/RM_H5I.html
I use 2 pthreads, where one thread "notifies" the other one of an event, and for that there is a variable (a normal integer) which is set by the second thread.
This works, but my question is: is it possible that the update is not seen immediately by the first (reading) thread, i.e. that the cache is not updated directly? And if so, is there a way to prevent this behaviour, something like the volatile keyword in Java?
(The frequency at which the event occurs is roughly in the microsecond range, so a more or less immediate update needs to be enforced.)
Edit: second question: is it possible to ensure that the variable is held in the cache of the core where thread 1 runs, since that thread is reading it all the time?
It sounds to me as though you should be using a pthread condition variable as your signaling mechanism. This takes care of all the issues you describe.
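A bare-bones sketch of that (the flag and function names are placeholders):

#include <pthread.h>

static int event_occurred = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

void notify(void)                 /* thread 2: announce the event */
{
    pthread_mutex_lock(&lock);
    event_occurred = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

void wait_for_event(void)         /* thread 1: block until the event arrives */
{
    pthread_mutex_lock(&lock);
    while (!event_occurred)
        pthread_cond_wait(&cond, &lock);
    event_occurred = 0;
    pthread_mutex_unlock(&lock);
}

The mutex/condition-variable pair gives you both the visibility and the ordering guarantees, so there is nothing to reason about regarding caches or volatile.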
It may not be immediately visible to the other processors, but not because of a lack of cache coherence. The biggest visibility problems will be due to your processor's out-of-order execution or to your compiler reordering instructions while optimizing.
In order to avoid both of these problems, you have to use memory barriers. I believe that most pthread primitives are natural memory barriers, which means that you shouldn't expect loads or stores to be moved beyond the boundaries formed by the lock and unlock calls. The volatile keyword can also be used to disable a certain class of compiler optimizations, which can help when writing lock-free algorithms, but it is not a substitute for memory barriers.
That being said, I recommend you don't do this manually; there are quite a few pitfalls associated with lock-free algorithms. Leaving these headaches to library writers should make you a happier camper (unless you're like me and you love headaches :) ). So my final recommendation is to ignore everything I said and use what vromanov or David Heffman suggested.
The most appropriate way to pass a signal from one thread to another should be to use the runtime library's signalling mechanisms, such as mutexes, condition variables, semaphores, and so forth.
If these have too high an overhead, my first thought would be that there was something wrong with the structure of the program. If it turned out that this really was the bottleneck, and restructuring the program was inappropriate, then I would use atomic operations provided by the compiler or a suitable library.
Using plain int variables, or even volatile-qualified ones, is error-prone unless the compiler guarantees they have the appropriate semantics. For example, MSVC makes particular guarantees about the atomicity and ordering constraints of plain loads and stores to volatile variables, but gcc does not.
A better way is to use atomic variables. For example, you can use libatomic. The volatile keyword is not enough.
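If a C11 compiler is available, one portable way to spell this (a sketch, not necessarily what the answer above had in mind) is with <stdatomic.h>:

#include <stdatomic.h>

static atomic_int event_flag;     /* zero-initialised at file scope */

void raise_event(void)            /* writer thread */
{
    atomic_store_explicit(&event_flag, 1, memory_order_release);
}

int poll_event(void)              /* reader thread, polling */
{
    return atomic_load_explicit(&event_flag, memory_order_acquire);
}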
In C I have a pointer that is declared volatile and initialized to NULL.
void* volatile pvoid;
Thread 1 is occasionally reading the pointer value to check if it is non-null. Thread 1 will not set the value of the pointer.
Thread 2 will set the value of a pointer just once.
I believe I can get away without using a mutex or condition variable.
Is there any reason thread 1 will read a corrupted value or thread 2 will write a corrupted value?
To make it thread-safe, you have to make the reads/writes to the variable atomic; being volatile is not safe in all timing situations. Under Win32 there are the Interlocked functions; under Linux you can build the equivalent yourself with assembly if you do not want to use the heavyweight mutexes and condition variables.
If you are not against GPL then http://www.threadingbuildingblocks.org and its atomic<> template seems promising. The lib is cross platform.
In the case where the value fits in a single register, such as a memory-aligned pointer, this is safe. In other cases, where it might take more than one instruction to read or write the value, the reading thread could get corrupted data. If you are not sure whether the read and write will take a single instruction in all usage scenarios, use atomic reads and writes.
It depends on your compiler, architecture and operating system. POSIX (since this question was tagged pthreads I'm assuming we're not talking about Windows or some other threading model) and C don't give enough constraints for a portable answer to this question.
The safe assumption is of course to protect the access to the pointer with a mutex. However based on your description of the problem I wonder if pthread_once wouldn't be a better way to go. Granted there's not enough information in the question to say one way or the other.
Unfortunately, you cannot portably make any assumptions about what is atomic in pure C.
GCC, however, does provide some atomic built-in functions that take care of using the proper instructions for many architectures for you. See Chapter 5.47 of the GCC manual for more information.
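As a sketch of what that can look like for the pointer in this question (using the __atomic built-ins available in newer gcc/clang; the older __sync family is similar):

void *volatile pvoid;

void publish(void *p)             /* thread 2: sets the pointer exactly once */
{
    __atomic_store_n(&pvoid, p, __ATOMIC_RELEASE);
}

void *check(void)                 /* thread 1: polls for it */
{
    return __atomic_load_n(&pvoid, __ATOMIC_ACQUIRE);
}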
Well, this seems fine. The only problem will happen in this case:
Let thread A be your checking thread and B the modifying one.
The thing is that checking for equality is technically not atomic: first the value is copied to a register, then checked. Let's assume thread A has copied the (still null) value to a register; now B decides to change the variable, so its value changes. When control goes back to A, it will still report the pointer as null even though it has just been set. This stale read seems harmless in this program but might cause problems elsewhere.
Use a mutex; it's simple enough, and you can be sure you don't have synchronisation errors!
On most platforms, where a pointer value can be read/written in a single instruction, it is either set or it isn't set yet. It can't be interrupted in the middle and contain a corrupted value. A mutex isn't needed on that kind of platform.
Apparently there's a lot of variety in opinions out there, ranging from, "Never! Always encapsulate (even if it's with a mere macro!)" to "It's no big deal – use them when it's more convenient than not."
So.
Specific, concrete reasons (preferably with an example)
Why global variables are dangerous
When global variables should be used in place of alternatives
What alternatives exist for those that are tempted to use global variables inappropriately
While this is subjective, I will pick one answer (the one that to me best represents the love/hate relationship every developer should have with globals) and the community can vote theirs up to just below it.
I believe it's important for newbies to have this sort of reference, but please don't clutter it up if another answer exists that's substantially similar to yours – add a comment or edit someone else's answer.
Variables should always have the smallest scope possible. The argument behind that is that every time you increase the scope, more code can potentially modify the variable, so more complexity is induced in the solution.
It is thus clear that avoiding global variables is preferred if the design and implementation naturally allow it. For this reason, I prefer not to use global variables unless they are really needed.
I cannot agree with the 'never' statement either. Like any other concept, global variables are something that should be used only when needed. I would rather use a global variable than some artificial construct (like passing pointers around) that would only mask the real intent.
Some good examples where global variables are used are singleton pattern implementations or register access in embedded systems.
On how to actually detect excessive usages of global variables: inspection, inspection, inspection. Whenever I see a global variable I have to ask myself: Is that REALLY needed at a global scope?
The only way you can make global variables work is to give them names that ensure they're unique.
That name usually has a prefix associated with some "module" or collection of functions for which the global variable is particularly focused or meaningful.
This means that the variable "belongs" to those functions -- it's part of them. Indeed, the global can usually be "wrapped" with a little function that goes along with the other functions -- in the same .h file, with the same name prefix.
Bonus.
When you do that, suddenly, it isn't really global any more. It's now part of some module of related functions.
This can always be done. With a little thinking every formerly global variable can be assigned to some collection of functions, allocated to a specific .h file, and isolated with functions that allow you to change the variable without breaking anything.
Rather than say "never use global variables", you can say "assign the global variable's responsibilities to some module where it makes the most sense."
Global variables in C are useful to make code more readable if a variable is required by multiple methods (rather than passing the variable into each method). However, they are dangerous because all locations have the ability to modify that variable, making it potentially difficult to track down bugs. If you must use a global variable, always ensure it is only modified directly by one method and have all other callers use that method. This will make it much easier to debug issues relating to changes in that variable.
Consider this koan: "if the scope is narrow enough, everything is global".
It is still very possible in this age to need to write a very quick utility program to do a one-time job.
In such cases, the energy required to create safe access to variables is greater than the energy saved by debugging problems in such a small utility.
This is the only case I can think of offhand where global variables are wise, and it is relatively rare. Useful, novel programs so small they can be held completely within the brain's short-term memory are increasingly infrequent, but they still exist.
In fact, I could boldly claim that if the program is not this small, then global variables should be illegal.
If the variable will never change, then it is a constant, not a variable.
If the variable requires universal access, then two subroutines should exist for getting and setting it, and they should be synchronized.
If the program starts small, and might be larger later, then code as if the program is large today, and abolish global variables. Not all programs will grow! (Although of course, that assumes the programmer is willing to throw away code at times.)
When you're not worried about thread-safe code: use them wherever it makes sense, in other words wherever it makes sense to express something as a global state.
When your code may be multi-threaded: avoid at all costs. Abstract global variables into work queues or some other thread-safe structure, or if absolutely necessary wrap them in locks, keeping in mind that these are likely bottlenecks in the program.
I came from the "never" camp, until I started working in the defense industry. There are some industry standards that require software to use global variables instead of dynamic (malloc in the C case) memory. I'm having to rethink my approach to dynamic memory allocation for some of the projects that I work on. If you can protect "global" memory with the appropriate semaphores, threads, etc. then this can be an acceptable approach to your memory management.
Code complexity is not the only optimization of concern. For many applications, performance optimization has a far greater priority. But more importantly, use of global variables can drastically REDUCE code complexity in many situations. There are many, perhaps specialized, situations in which global variables are not only an acceptable solution, but preferred. My favorite specialized example is their use to provide communication between the main thread of an application with an audio callback function running in a real-time thread.
It is misleading to suggest that global variables are a liability in multi-threaded applications as ANY variable, regardless of scope, is a potential liability if it is exposed to change on more than one thread.
Use global variables sparingly. Data structures should be used whenever possible to organize and isolate the use of the global namespace.
Variable scope avails programmers very useful protection -- but it can have a cost. I came to write about global variables tonight because I am an experienced Objective-C programmer who often gets frustrated with the barriers object-orientation places on data access. I would argue that anti-global zealotry comes mostly from younger, theory-steeped programmers experienced principally with object-oriented APIs in isolation without a deep, practical experience of system level APIs and their interaction in application development. But I have to admit that I get frustrated when vendors use the namespace sloppily. Several linux distros had "PI" and "TWOPI" predefined globally, for example, which broke much of my personal code.
When Not to Use: Global variables are dangerous because the only way to ever know how a global variable changed is to trace the entire source code of the .c file in which it is declared (or all .c files, if it is extern as well). If your code goes buggy, you have to search your entire source file(s) to see which functions change it, and when. It is a nightmare to debug when it goes wrong. We often take for granted the ingenuity behind the concept of local variables gracefully going out of scope - they are easy to trace.
When to Use: Global variables should be used when their use is not excessively masked and where the cost of using local variables is excessively complex, to the point where it compromises readability. By this I mean the necessity of adding extra parameters to function arguments and returns, passing pointers around, and so on. Three classic examples: First, a pop/push stack - the stack is shared between functions. Of course I could use local variables, but then I would have to pass pointers around as an additional parameter. A second classic example can be found in K&R's "The C Programming Language", where they define getch() and ungetch() functions which share a character buffer array at file scope. Once again, we don't need to make it shared that way, but is the added complexity worth it, when it's pretty hard to mess up the use of the buffer? A third example is something you'll find in the embedded space among Arduino hobbyists. A lot of functions within the main loop use the millis() function, which returns the time at the instant it is invoked. Because clock speed isn't infinite, millis() will differ within a single loop iteration. To make the time consistent, take a snapshot of it before every loop iteration and save it in a global variable; the snapshot will then be the same whenever it is accessed by the many functions.
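For reference, the K&R getch/ungetch pair mentioned above looks roughly like this (a small shared buffer at file scope, paraphrased from "The C Programming Language"):

#include <stdio.h>

#define BUFSIZE 100

static char buf[BUFSIZE];   /* buffer shared by getch and ungetch */
static int  bufp = 0;       /* next free position in buf          */

int getch(void)             /* get a (possibly pushed-back) character */
{
    return (bufp > 0) ? buf[--bufp] : getchar();
}

void ungetch(int c)         /* push a character back onto the input */
{
    if (bufp >= BUFSIZE)
        printf("ungetch: too many characters\n");
    else
        buf[bufp++] = c;
}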
Alternatives: Not much. Stick to local scoping as much as possible, especially at the beginning of a project, rather than vice versa. As the project grows, if you feel complexity can be lowered by using global variables, then do so, but only if it meets the requirements of the second point. And remember, using local scope and having more complicated code is the lesser evil compared to irresponsibly using global variables.
You need to consider the context in which the global variable will be used as well. Will you want to duplicate this code in the future?
For example, suppose you are using a socket within the system to access a resource. If you will want to access more than one of these resources in the future, I would stay away from globals in the first place, so that a major refactor will not be required.
Global variables should be used when multiple functions need to access the data or write to an object. For example, if you had to pass data or a reference to multiple functions such as a single log file, a connection pool, or a hardware reference that needs to be accessed across the application. This prevents very long function declarations and large allocations of duplicated data.
You should typically not use global variables unless absolutely necessary because global variables are only cleaned up when explicitly told to do so or your program ends. If you are running a multi-threaded application, multiple functions can write to the variable at the same time. If you have a bug, tracking that bug down can be more difficult because you don't know which function is changing the variable. You also run into the problem of naming conflicts unless you use a naming convention that explicitly gives global variables a unique name.
It's a tool like any other, usually overused, but I don't think globals are evil.
For example I have a program that really acts like an online database. The data is stored in memory but other programs can manipulate it. There are internal routines that act much like stored procedures and triggers in a database.
This program has hundreds of global variables, but if you think about it, what is a database but a huge number of global variables?
This program has been in use for about ten years now through many versions and it's never been a problem and I'd do it again in a minute.
I will admit that in this case the global vars are objects that have methods used for changing the object's state. So tracking down who changed the object while debugging isn't a problem since I can always set a break point on the routine that changes the object's state. Or even simpler I just turn on the built in logging that logs the changes.
When you declare constants.
I can think of several reasons:
debugging/testing purposes (warning - haven't tested this code):
#include <stdio.h>

#define MAX_INPUT 46

int runs = 0;   /* global: counts calls across both implementations */

int fib1(int n)
{
    ++runs;
    return n > 2 ? fib1(n - 1) + fib1(n - 2) : 1;
}

int fib2(int n, int *cache, int *len)
{
    ++runs;
    if (n <= 2) {
        if (*len == 2)
            return 1;
        *len = 2;
        return cache[0] = cache[1] = 1;
    } else if (*len >= n) {
        return cache[n - 1];
    } else {
        if (*len != n - 1)
            fib2(n - 1, cache, len);
        *len = n;
        return cache[n - 1] = cache[n - 2] + cache[n - 3];
    }
}

int main(void)
{
    int n;
    int cache[MAX_INPUT];
    int len = 0;

    if (scanf("%i", &n) != 1 || !n || n > MAX_INPUT)
        return 0;
    printf("fib1(%i)==%i", n, fib1(n));
    printf(", %i run(s)\n", runs);
    runs = 0;
    printf("fib2(%i)==%i", n, fib2(n, cache, &len));
    printf(", %i run(s)\n", runs);
    main();   /* quick-and-dirty loop: read the next input */
    return 0;
}
I used scoped variables for fib2, but that's one more scenario where globals might be useful (pure mathematical functions which need to store data to avoid taking forever).
programs used only once (e.g. for a contest), or when development time needs to be shortened
globals are useful as typed constants, where a function somewhere requires an int * instead of an int (a small sketch follows below).
I generally avoid globals if I intend to use the program for more than a day.
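A small sketch of the typed-constant point above (names invented): a typed constant at file scope has an address, which a bare #define literal does not.

static const int max_retries = 5;       /* typed, addressable constant */

void configure(const int *limit);       /* hypothetical API that wants a pointer */

void set_up(void)
{
    configure(&max_retries);            /* works; you cannot write &5 */
}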
I believe we have an edge case in our firm, which prevents me from entering the "never use global variables camp".
We need to write an embedded application which runs in our box and pulls medical data from devices in a hospital.
It should run indefinitely, even when the medical device is unplugged, the network is gone, or the settings of our box change. Settings are read from a .txt file, which can be changed during runtime, preferably without trouble.
That is why the Singleton pattern is of no use to me. So we go back from time to time (after 1000 readings) and reload the settings, like so:
public static SettingsForIncubator settings;

public static void main(String[] args) {
    while (true) {
        settings = getSettings(args);   // reload the shared settings periodically
        int counter = 0;
        while (medicalDeviceIsGivingData && counter < 1000) {
            readData();                 // uses settings
            // ... a lot of other functions that use settings
            counter++;
        }
    }
}
Global constants are useful - you get more type safety than pre-processor macros and it's still just as easy to change the value if you decide you need to.
Global variables have some uses, for example if the operation of many parts of a program depends on a particular state in the state machine. As long as you limit the number of places that can MODIFY the variable, tracking down bugs involving it isn't too bad.
Global variables become dangerous almost as soon as you create more than one thread. In that case you really should limit the scope to, at most, a file-level global (by declaring it static) plus getter/setter functions that protect it from concurrent access where that could be dangerous.
I'm in the "never" camp here; if you need a global variable, at least use a singleton pattern. That way, you reap the benefits of lazy instantiation, and you don't clutter up the global namespace.