How to manage a large number of variables in C?

In an implementation of the Game of Life, I need to handle user events, perform some regular (as in periodic) processing and draw to a 2D canvas. The details are not particularly important. Suffice it to say that I need to keep track of a large(-ish) number of variables. These are things like: a structure representing the state of the system (live cells), pointers to structures provided by the graphics library, current zoom level, coordinates of the origin and I am sure a few others.
In the main function, there is a game loop like this:
// Setup stuff
while (!finished) {
    while (get_event(&e) != 0) {
        if (e.type == KEYBOARD_EVENT) {
            switch (e.key.keysym) {
            case q:
            case x:
                // More branching and nesting follows
The maximum level of nesting at the moment is 5. It quickly becomes unmanageable and difficult to read, especially on a small screen. The solution then is to split this up into multiple functions. Something like:
while (!finished) {
    while (get_event(&e) != 0) {
        handle_event(state, origin_x, origin_y, &canvas, e, ...); // More parameters
This is the crux of the question. The subroutine must necessarily have access to the state (represented by the origin, the canvas, the live cells, etc.) in order to function. Passing them all explicitly is error-prone (in which order does the subroutine expect them?) and can also be difficult to read. Aside from that, having functions with potentially 10+ arguments strikes me as a symptom of other design flaws. However, the alternatives that I can think of don't seem any better.
To summarise:
Accept deep nesting in the game loop.
Define functions with very many arguments.
Collate (somewhat) related arguments into structs - This really only hides the problem, especially since the arguments are only loosely related (see the sketch below).
Define variables that represent the application state with file scope (static int origin_x; for example). If it weren't for the fact that it has been drummed into me that global variables are usually a terrible idea, this would be my preferred option. But if I want to display two views of the same instance of the Game of Life in the future, then file scope no longer looks so appealing.
The question also applies in slightly more general terms, I suppose: how do you pass state around a complicated program safely and in a readable way?
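For concreteness, a minimal sketch of where option 3 leads when taken all the way. CellSet is a hypothetical name, and Canvas, Event, get_event and KEYBOARD_EVENT stand in for whatever the graphics library provides:

#include <stdbool.h>

typedef struct {
    CellSet *live_cells;     /* state of the system */
    Canvas *canvas;          /* provided by the graphics library */
    double zoom;
    int origin_x, origin_y;
} View;

static void handle_event(View *v, const Event *e)
{
    if (e->type == KEYBOARD_EVENT) {
        /* the branching from the game loop moves here,
           but every handler now takes a single View* */
    }
}

int main(void)
{
    View view = {0};
    Event e;
    bool finished = false;
    /* Setup stuff */
    while (!finished) {
        while (get_event(&e) != 0)
            handle_event(&view, &e);
    }
}

Since the state travels as a pointer, two views of the same game would just be two View values whose live_cells point at the same CellSet, which is exactly what file-scope variables cannot express.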
EDIT:
My motivations here are not speed or efficiency or performance or anything like that. If the code takes 20% longer to run as a result of the choice made here, that's just fine. I'm primarily interested in what is less likely to confuse me and cause the least headache in 6 months' time.

I would consider the canvas as one variable, containing a large 2D array...
consider static allocation
bool canvas[ROWS][COLS];
or dynamic
bool (*canvas)[COLS] = malloc(ROWS * sizeof *canvas);
In both cases you can refer to the cell at position i,j as canvas[i][j],
though for dynamic allocation, do not forget to free(canvas) at the end. You can then use a nested loop to update your state.
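For instance, a sketch of one update pass, assuming the static form above and a hypothetical next_state() that applies the Game of Life rules to cell (i, j); note that Life needs a second buffer, because each new generation must be computed entirely from the old one:

#include <string.h>  /* memcpy */

bool next[ROWS][COLS];
for (size_t i = 0; i < ROWS; i++)
    for (size_t j = 0; j < COLS; j++)
        next[i][j] = next_state(canvas, i, j);  /* hypothetical rule function */
memcpy(canvas, next, sizeof next);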
Search for allocating/handling a 2D array in C for examples and tutorials, for instance: https://www.geeksforgeeks.org/nested-loops-in-c-with-examples/
Also consider: Fastest way to zero out a 2d array in C?

Related

Why do filter method implementations create another array instead of modifying the current array?

From what I know, many popular implementations of filter collection methods, e.g. JavaScript's Array#filter method, tend to create a new array rather than modify the existing one. (As #Berthur mentioned, this is also generally useful in functional programming.)
However, from what I've seen in homemade filter implementations, sometimes the author chooses to use a while/for loop over a dynamically allocated array (e.g. an ArrayList in Java) and remove elements instead.
I have a general idea of why this is the case: removing an element requires the rest of the array's elements to be shifted over, which is O(n), whereas adding an element is O(1). But I also know that, in the same case, if an element is added to the end of an array when the array is full, new memory has to be allocated, which in Java's case requires the array to be copied.
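(For the record, the standard amortized argument, assuming capacity doubling: growing from capacity 1 up to n copies 1 + 2 + 4 + ... + n/2 < n elements in total across all reallocations, so n appends cost O(n) overall, i.e. amortized O(1) each, whereas each removal from an arbitrary position shifts n/2 elements on average, every time.)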
Thus, is there some mathematical reason why creating a new array for filtering is (generally) faster than removing and shifting elements over, or is it just for the guaranteed immutability of the original array?
It's not generally faster, and it's not done for performance reasons. It's more of a programming paradigm, as well as being a convenient tool.
While in-place algorithms are often faster for performance and/or memory critical applications, they need to know about the underlying implementation of the data structure, and become more specific. This immutable approach allows for more general functionality, apart from being convenient. The approach is common in functional programming. As you say, it guarantees immutability, which makes it compatible with this way of thinking.
In your JavaScript example, for instance, notice that you can call filter on a regular array, but you could also call it on a TypedArray. Now, typed arrays cannot be resized, so performing an in-place filter would not be possible in the first place. But the filter method behaves in the same way through their common interface, following the principles of polymorphism.
Ultimately, these functions are just available to you and while they can be very convenient for many cases, it is up to you as a programmer to decide whether they cover your specific need or whether you must implement your own custom algorithm.
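To make the out-of-place approach concrete in C (the language of the main question on this page), here is a minimal sketch with a hard-coded predicate; the names are hypothetical:

#include <stdlib.h>

/* Returns a freshly allocated array holding the elements of in that satisfy
   the predicate; the input array is left untouched. The caller frees the result. */
int *filter_evens(const int *in, size_t n, size_t *out_n)
{
    int *out = malloc(n * sizeof *out);  /* upper bound; shrink with realloc if desired */
    if (out == NULL)
        return NULL;
    size_t k = 0;
    for (size_t i = 0; i < n; i++)
        if (in[i] % 2 == 0)              /* the predicate */
            out[k++] = in[i];
    *out_n = k;
    return out;
}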

Is it better to create new variables or using pointers in C? [duplicate]

In Go there are various ways to return a struct value or slice thereof. For individual ones I've seen:
type MyStruct struct {
    Val int
}

func myfunc() MyStruct {
    return MyStruct{Val: 1}
}

func myfunc() *MyStruct {
    return &MyStruct{}
}

func myfunc(s *MyStruct) {
    s.Val = 1
}
I understand the differences between these. The first returns a copy of the struct, the second a pointer to the struct value created within the function, the third expects an existing struct to be passed in and overwrites the value.
I've seen all of these patterns be used in various contexts, I'm wondering what the best practices are regarding these. When would you use which? For instance, the first one could be ok for small structs (because the overhead is minimal), the second for bigger ones. And the third if you want to be extremely memory efficient, because you can easily reuse a single struct instance between calls. Are there any best practices for when to use which?
Similarly, the same question regarding slices:
func myfunc() []MyStruct {
    return []MyStruct{MyStruct{Val: 1}}
}

func myfunc() []*MyStruct {
    return []*MyStruct{&MyStruct{Val: 1}}
}

func myfunc(s *[]MyStruct) {
    *s = []MyStruct{MyStruct{Val: 1}}
}

func myfunc(s *[]*MyStruct) {
    *s = []*MyStruct{&MyStruct{Val: 1}}
}
Again: what are best practices here. I know slices are always pointers, so returning a pointer to a slice isn't useful. However, should I return a slice of struct values, a slice of pointers to structs, should I pass in a pointer to a slice as argument (a pattern used in the Go App Engine API)?
tl;dr:
Methods using receiver pointers are common; the rule of thumb for receivers is, "If in doubt, use a pointer."
Slices, maps, channels, strings, function values, and interface values are implemented with pointers internally, and a pointer to them is often redundant.
Elsewhere, use pointers for big structs or structs you'll have to change, and otherwise pass values, because getting things changed by surprise via a pointer is confusing.
One case where you should often use a pointer:
Receivers are pointers more often than other arguments. It's not unusual for methods to modify the thing they're called on, or for named types to be large structs, so the guidance is to default to pointers except in rare cases.
Jeff Hodges' copyfighter tool automatically searches for non-tiny receivers passed by value.
Some situations where you don't need pointers:
Code review guidelines suggest passing small structs like type Point struct { latitude, longitude float64 }, and maybe even things a bit bigger, as values, unless the function you're calling needs to be able to modify them in place.
Value semantics avoid aliasing situations where an assignment over here changes a value over there by surprise.
Passing small structs by value can be more efficient by avoiding cache misses or heap allocations. In any case, when pointers and values perform similarly, the Go-y approach is to choose whatever provides the more natural semantics rather than squeeze out every last bit of speed.
So, Go Wiki's code review comments page suggests passing by value when structs are small and likely to stay that way.
If the "large" cutoff seems vague, it is; arguably many structs are in a range where either a pointer or a value is OK. As a lower bound, the code review comments suggest slices (three machine words) are reasonable to use as value receivers. As something nearer an upper bound, bytes.Replace takes 10 words' worth of args (three slices and an int). You can find situations where copying even large structs turns out a performance win, but the rule of thumb is not to.
For slices, you don't need to pass a pointer to change elements of the array. io.Reader.Read(p []byte) changes the bytes of p, for instance. It's arguably a special case of "treat little structs like values," since internally you're passing around a little structure called a slice header (see Russ Cox (rsc)'s explanation). Similarly, you don't need a pointer to modify a map or communicate on a channel.
For slices you'll reslice (change the start/length/capacity of), built-in functions like append accept a slice value and return a new one. I'd imitate that; it avoids aliasing, returning a new slice helps call attention to the fact that a new array might be allocated, and it's familiar to callers.
It's not always practical to follow that pattern. Some tools like database interfaces or serializers need to append to a slice whose type isn't known at compile time. They sometimes accept a pointer to a slice in an interface{} parameter.
Maps, channels, strings, and function and interface values, like slices, are internally references or structures that contain references already, so if you're just trying to avoid getting the underlying data copied, you don't need to pass pointers to them. (rsc wrote a separate post on how interface values are stored).
You still may need to pass pointers in the rarer case that you want to modify the caller's struct: flag.StringVar takes a *string for that reason, for example.
Where you use pointers:
Consider whether your function should be a method on whichever struct you need a pointer to. People expect a lot of methods on x to modify x, so making the modified struct the receiver may help to minimize surprise. There are guidelines on when receivers should be pointers.
Functions that have effects on their non-receiver params should make that clear in the godoc, or better yet, the godoc and the name (like reader.WriteTo(writer)).
You mention accepting a pointer to avoid allocations by allowing reuse; changing APIs for the sake of memory reuse is an optimization I'd delay until it's clear the allocations have a nontrivial cost, and then I'd look for a way that doesn't force the trickier API on all users:
For avoiding allocations, Go's escape analysis is your friend. You can sometimes help it avoid heap allocations by making types that can be initialized with a trivial constructor, a plain literal, or a useful zero value like bytes.Buffer.
Consider a Reset() method to put an object back in a blank state, like some stdlib types offer. Users who don't care or can't save an allocation don't have to call it.
Consider writing modify-in-place methods and create-from-scratch functions as matching pairs, for convenience: existingUser.LoadFromJSON(json []byte) error could be wrapped by NewUserFromJSON(json []byte) (*User, error). Again, it pushes the choice between laziness and pinching allocations to the individual caller.
Callers seeking to recycle memory can let sync.Pool handle some details. If a particular allocation creates a lot of memory pressure, you're confident you know when the alloc is no longer used, and you don't have a better optimization available, sync.Pool can help. (CloudFlare published a useful (pre-sync.Pool) blog post about recycling.)
Finally, on whether your slices should be of pointers: slices of values can be useful, and save you allocations and cache misses. There can be blockers:
The API to create your items might force pointers on you, e.g. you have to call NewFoo() *Foo rather than let Go initialize with the zero value.
The desired lifetimes of the items might not all be the same. The whole slice is freed at once; if 99% of the items are no longer useful but you have pointers to the other 1%, all of the array remains allocated.
Copying or moving the values might cause you performance or correctness problems, making pointers more attractive. Notably, append copies items when it grows the underlying array: pointers taken to slice items before the append may not point to where the item lives afterwards, copying can be slower for huge structs, and for e.g. sync.Mutex copying isn't allowed at all. Insert/delete in the middle and sorting also move items around, so similar considerations can apply.
Broadly, value slices can make sense if either you get all of your items in place up front and don't move them (e.g., no more appends after initial setup), or if you do keep moving them around but you're confident that's OK (no/careful use of pointers to items, and items are small or you've measured the perf impact). Sometimes it comes down to something more specific to your situation, but that's a rough guide.
If you can (e.g. for a non-shared resource that does not need to be passed by reference), use a value, for the following reasons:
Your code will be nicer and more readable, avoiding pointer operators and nil checks.
Your code will be safer against nil pointer panics.
Your code will often be faster: yes, faster! Why?
Reason 1: you will allocate fewer items on the heap. Allocating/deallocating on the stack is immediate, while allocating/deallocating on the heap may be very expensive (allocation time + garbage collection). You can see some basic numbers here: http://www.macias.info/entry/201802102230_go_values_vs_references.md
Reason 2: especially if you store returned values in slices, your objects will be more compact in memory: looping over a slice where all the items are contiguous is much faster than iterating over a slice where all the items are pointers to other parts of memory. Not so much because of the indirection step as because of the increase in cache misses.
Myth breaker: a typical x86 cache line is 64 bytes. Most structs are smaller than that. The time it takes to copy a cache line in memory is comparable to copying a pointer.
Only if a critical part of your code is slow would I try some micro-optimization and check whether using pointers improves the speed somewhat, at the cost of readability and maintainability.
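The locality argument sketched in C for concreteness (the point is language-agnostic, and the names are made up): summing over contiguous values touches memory sequentially, while a slice of pointers may touch a different cache line per element.

#include <stddef.h>

typedef struct { double x, y, z; } Point;

/* Contiguous values: sequential, prefetch-friendly access. */
double sum_values(const Point *pts, size_t n)
{
    double s = 0;
    for (size_t i = 0; i < n; i++)
        s += pts[i].x;
    return s;
}

/* Pointers: each element may live anywhere on the heap. */
double sum_pointers(const Point *const *pts, size_t n)
{
    double s = 0;
    for (size_t i = 0; i < n; i++)
        s += pts[i]->x;
    return s;
}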
Three main reasons when you would want to use method receivers as pointers:
"First, and most important, does the method need to modify the receiver? If it does, the receiver must be a pointer."
"Second is the consideration of efficiency. If the receiver is large, a big struct for instance, it will be much cheaper to use a pointer receiver."
"Next is consistency. If some of the methods of the type must have pointer receivers, the rest should too, so the method set is consistent regardless of how the type is used"
Reference : https://golang.org/doc/faq#methods_on_values_or_pointers
Edit: Another important thing is to know the actual "type" that you are sending to the function. The type can either be a 'value type' or a 'reference type'.
Even though slices and maps act as references, we might still want to pass them as pointers in scenarios like changing the length of the slice inside the function.
A case where you generally need to return a pointer is when constructing an instance of some stateful or shareable resource. This is often done by functions prefixed with New.
Because they represent a specific instance of something and they may need to coordinate some activity, it doesn't make a lot of sense to generate duplicated/copied structures representing the same resource -- so the returned pointer acts as the handle to the resource itself.
Some examples:
func NewTLSServer(handler http.Handler) *Server -- instantiate a web server for testing
func Open(name string) (*File, error) -- return a file access handle
In other cases, pointers are returned just because the structure may be too large to copy by default:
func NewRGBA(r Rectangle) *RGBA -- allocate an image in memory
Alternatively, returning pointers directly could be avoided by instead returning a copy of a structure that contains the pointer internally, but maybe this isn't considered idiomatic:
No such examples found in the standard libraries...
Related question: Embedding in Go with pointer or with value
Regarding struct vs. pointer return values, I was confused after reading many highly starred open source projects on GitHub, as there are many examples of both, until I found this amazing article:
https://www.ardanlabs.com/blog/2014/12/using-pointers-in-go.html
"In general, share struct type values with a pointer unless the struct type has been implemented to behave like a primitive data value.
If you are still not sure, this is another way to think about. Think of every struct as having a nature. If the nature of the struct is something that should not be changed, like a time, a color or a coordinate, then implement the struct as a primitive data value. If the nature of the struct is something that can be changed, even if it never is in your program, it is not a primitive data value and should be implemented to be shared with a pointer. Don’t create structs that have a duality of nature."
Completely convinced.

Code quality question about handling multiple functions with same signature in C

My program answers incoming messages and does some logic based on the IDs and data included in the messages.
I have a different function for each ID.
The project is pure C.
To make the code easy to work with, I have adjusted all functions to the same style (same return type and parameters).
I also want to avoid long switch-case constructions and make the code easier to edit later, so I have created the following function:
AnswerStruct IDHandler(Request Message)
{
    AnswerStruct ANS;
    SIDHandler = IDfunctions[Message.ID];  /* assumes 0 <= Message.ID < IDMax and a non-NULL entry */
    ANS = SIDHandler(Message);
    return ANS;
}
AnswerStruct is the struct for answer messages.
Request is the struct for incoming messages.
IDfunctions is an array of pointers to functions, which looks like this:
AnswerStruct func1(Request);
AnswerStruct func4(Request);
...
typedef AnswerStruct (*f)(Request);
AnswerStruct (*SIDHandler)(Request);
static f IDfunctions[IDMax] = {0, func1, 0, 0, func4, ...};
Function pointers are placed in the array cells matching their IDs, for example:
func1 handles messages with ID=1.
func4 handles messages with ID=4.
I think that by using this array I make my life much easier.
I can call the function I need in one step (just go to IDfunctions[ID]).
Also, adding new functions becomes a two-step operation (just add the function to IDfunctions and write the logic).
I doubt the efficiency of the selected solution, it seems clunky to me.
The question is - Is this a good architecture?
If no, how can I edit my solution to make it better?
Thanks.
I doubt the efficiency of the selected solution, it seems clunky to me.
It can be less efficient to call a function via a function pointer than to call it directly by name, because the former denies the compiler any opportunity to optimize the call. But you have to consider whether that actually matters. In a system that dispatches function calls based on messages received from an external source, the I/O involved in receiving the messages is likely to be much more expensive than the indirect function calls, so the difference in call performance is unlikely to be significant.
On the other hand, your approach affords simpler logic and many fewer lines of code, which is a different and potentially more valuable kind of efficiency.
The question is - Is this a good architecture?
The general approach is perfectly good, and I don't see much to complain about in the implementation sketch provided.
Personally, I would declare the array IDfunctions to be const (supposing, of course, that you don't intend to replace any of its members after initialization), but that's a minor safety / performance detail, where again the performance dimension is probably irrelevant.
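For illustration, a sketch of that const declaration using C99 designated initializers, which also keep the ID-to-handler mapping explicit. The bounds/NULL guard and the UnknownIDAnswer fallback are optional hardening added here, not part of the original code:

#include <stddef.h>  /* NULL */

static AnswerStruct (* const IDfunctions[IDMax])(Request) = {
    [1] = func1,
    [4] = func4,
    /* ... */
};

AnswerStruct IDHandler(Request Message)
{
    if (Message.ID >= IDMax || IDfunctions[Message.ID] == NULL)
        return UnknownIDAnswer(Message);  /* hypothetical fallback; also check ID < 0 if it is signed */
    return IDfunctions[Message.ID](Message);
}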

Julia: Best practice for mutating type-stable arrays

This is my first try at Julia, so please forgive me if this sounds trivial to you. My Julia code is already performing way better than my Python code, but I am left with a question concerning typing.
For a scientific program I am working with a multitude of arrays that are type-stable and of fixed dimensionality.
The program aims to update these arrays by a mathematically non-trivial scheme in order to minimize an energy function. I have defined these arrays in the global scope by
const A = Array{Complex{Float32}}(dim)
where dim is the dimensionality. I noticed adding the const caused my calculations to speed up considerably (3x faster). Subsequently, the contents of these arrays are initialized and updated in functions by A[:] = ....
Is defining type-stable arrays of fixed dimensionality as consts globally, and updating by accessing them as A[:] considered bad practice?
My best shot at an alternative method would be typing the input arguments of all my functions and passing a lot of variables around. Would this be more desirable?
My (subjective) opinion is that defining them as const and then mutating the contents is, by itself, not necessarily a bad practice. It's clear in Julia that the const declaration is about the variable-value binding, not the internals of the value.
However, if the same variable A is used to hold disparate unconnected values (and not different forms of the same matrix, e.g. reduced forms), that's certainly bad practice. A[:] .= A .* 2 is fine, A[:] .= X is not.
Also, having multiple global variables that are mutated in different places is usually a code smell, and often leads to subtle and not-so-subtle bugs. It also makes the code hard to reason about.
How about encapsulating the variables in a single struct type, e.g.
struct ArrayVars
    A::Array{Complex{Float32}, dim}
    B::Array{Float64, dim}
    ...
end
and creating an instance of that in an init style function? (Hopefully you can come up with a better name for the type than ArrayVars, taking into account the semantics of the arrays involved.) Then, you could pass this single variable of this type to functions and manipulate the arrays inside it, instead of passing around a lot of variables to each function.

Does using lists of structs make sense in cocoa?

This question has spawned out of this one. Working with lists of structs in Cocoa is not simple. Either use NSArray and encode/decode the structs, or use a C-style array and lose the conveniences of NSArray. Structs are supposed to be simple, but when a list is needed, one would tend to build a class instead.
When does using lists of structs make sense in cocoa?
I know there are already many questions regarding structs vs classes, and I've read users argue that it's the same answer for every language, but at least cocoa should have its own specific answers to this, if only because of KVC or bindings (as Peter suggested on the first question).
Cocoa has a few common types that are structs, not objects: NSPoint, NSRect, NSRange (and their CG counterparts).
When in doubt, follow Cocoa's lead. If you find yourself dealing with a large number of small, mostly-data objects, you might want to make them structs instead for efficiency.
Using NSArray/NSMutableArray as the top-level container, and wrapping the structs in an NSValue will probably make your life a lot easier. I would only go to a straight C-type array if you find NSArray to be a performance bottleneck, or possibly if the array is essentially read-only.
It is convenient and useful at times to use structs, especially when you have to drop down to C, such as when working with an existing library or doing system level stuff. Sometimes you just want a compact data structure without the overhead of a class. If you need many instances of such structs, it can make a real impact on performance and memory footprint.
Another way to do an array of structs is to use the NSPointerArray class. It takes a bit more thought to set up, but after that it works pretty much just like an NSArray, and you don't have to bother with boxing/unboxing or wrapping in a class, so accessing the data is more convenient and it doesn't take up the extra memory of a class.
// Configure an NSPointerArray to malloc-copy raw structs in and treat them as opaque memory.
NSPointerFunctions *pf = [[NSPointerFunctions alloc] initWithOptions:NSPointerFunctionsMallocMemory |
                                                                     NSPointerFunctionsStructPersonality |
                                                                     NSPointerFunctionsCopyIn];
pf.sizeFunction = keventSizeFunction;  // tells the array how many bytes each struct occupies
self.pending = [[NSPointerArray alloc] initWithPointerFunctions:pf];
In general, the use of a struct implies the existence of a relatively simple data type that has no logic associated with it, nor should have any logic associated with it. Take an NSPoint for instance - it is merely an (x,y) representation. Given this, there are also some issues that arise from its use. In general, this is OK for this type of data, as we usually observe a change in the point as a whole rather than in the y-coordinate of the point (fundamentally, (0,1) isn't the same as (1,1) shifted down by 1 unit). If this is undesirable behavior, it may be a better idea to use a class.
