Threading OR multiple flows of control in Pacman - C

I am planning to write a Pacman game in C, right from scratch. The most basic challenge that I am facing is how to maintain multiple flows of control at the same time.
I mean: how do Pacman and the ghosts move, and how does the score get updated, all at the same time? In general this is common to all games. Is any kind of threading involved here?
If so, can anyone please explain how to make a program do many things at the same time (it would be helpful if you explain it for C)?
Thanks in advance

One of the fundamental principles in real-time game development is the game tick. It represents a small unit of time for things to happen in. So you might have a tick every 0.100 seconds; the smaller the tick, the finer the control you have.
You can think of ticks as really fast turns with a time limit on them. If you don't do anything on a turn, you forfeit that turn.
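A minimal sketch of such a tick loop, assuming POSIX clock_gettime/nanosleep; the commented-out helpers (handle_input, update_world, render) are hypothetical placeholders for your own game logic:

```c
#include <time.h>

#define TICK_MS 100  /* one tick every 0.100 seconds */

int main(void)
{
    for (;;) {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);

        /* handle_input();   read whatever keys are pending           */
        /* update_world();   move Pacman and the ghosts, update score */
        /* render();         redraw the screen                        */

        clock_gettime(CLOCK_MONOTONIC, &end);
        long elapsed_ms = (end.tv_sec - start.tv_sec) * 1000
                        + (end.tv_nsec - start.tv_nsec) / 1000000;
        if (elapsed_ms < TICK_MS) {
            /* sleep off the rest of the tick ("forfeit the turn") */
            struct timespec rest = { 0, (TICK_MS - elapsed_ms) * 1000000L };
            nanosleep(&rest, NULL);
        }
    }
}
```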

I think it's pretty unlikely that the original version of Pac-Man was multithreaded in the sense we use the term today. It was more likely implemented as a simple loop with some kind of interrupt support. You can do the same to implement rudimentary multithreading - write your program in a while (1) or for (;;) loop, and set up a timer to interrupt your loop at regular intervals to perform the screen updates.
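For example, here is a minimal sketch of that idea, assuming a POSIX system with setitimer and SIGALRM (on the original arcade hardware the interrupt source would differ, but the structure is the same):

```c
#include <signal.h>
#include <stddef.h>
#include <sys/time.h>

/* The handler only raises a flag; the main loop does the real work. */
static volatile sig_atomic_t tick_pending = 0;

static void on_tick(int sig)
{
    (void)sig;
    tick_pending = 1;
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_handler = on_tick;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval tv = {
        .it_interval = { .tv_sec = 0, .tv_usec = 100000 }, /* every 100 ms */
        .it_value    = { .tv_sec = 0, .tv_usec = 100000 },
    };
    setitimer(ITIMER_REAL, &tv, NULL);

    for (;;) {                 /* the main game loop */
        if (tick_pending) {
            tick_pending = 0;
            /* move Pacman, move the ghosts, update the score, redraw */
        }
        /* poll input and do other per-iteration work here */
    }
}
```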

Related

About Dijkstra with OpenMP

Recently I downloaded some source code from the internet for an OpenMP implementation of Dijkstra's algorithm.
But I found that the parallel time is always larger than when it is run by one thread (whether I use two, four or eight threads).
Since I'm new to OpenMP I really want to figure out what happens.
This is due to the overhead of setting up the threads. The execution time of the work itself is theoretically the same, but the system has to set up the threads that manage the work (even if there's only one). For a small amount of work, or with only one thread, this overhead makes your time-to-solution slower than the serial time-to-solution.
Alternatively, if you see the time increasing dramatically as you increase the thread count, you may have only one core available while asking for 2, 4, 8, etc. threads, so they just time-slice on that single core.
Finally, it's possible that the way you're implementing Dijkstra's algorithm is largely serial. But without looking at your code it would be too hard to say.
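A minimal sketch of how to see the start-up overhead for yourself, assuming a compiler with OpenMP support (build with -fopenmp); the workload is deliberately tiny so thread setup dominates:

```c
#include <omp.h>
#include <stdio.h>

int main(void)
{
    enum { N = 1000 };          /* deliberately small workload */
    static double a[N];

    double t0 = omp_get_wtime();
    for (int i = 0; i < N; i++)
        a[i] = i * 0.5;
    double serial = omp_get_wtime() - t0;

    t0 = omp_get_wtime();
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = i * 0.5;
    double parallel = omp_get_wtime() - t0;

    /* For N this small, the parallel version usually loses: creating
     * and synchronising the threads costs more than the loop saves. */
    printf("serial %.6f s, parallel %.6f s (a[0] = %g)\n",
           serial, parallel, a[0]);
    return 0;
}
```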

Understanding pthreads a little more in C

So I only very recently heard about these pthreads, and my understanding of them is very limited so far, but I just wanted to know if they would be able to do what I want before I get deep into learning about them.
I have written a program that generates two output pulses from a micro-controller which happen with different frequencies, periods and duty cycles. At the moment the functions to output the pulses are happening in a loop and it works well because the timings I am using are multiples of each other so stopping one while not interrupting the other is not too much hassle.
However I want it to be a lot more dynamic, so I can change the duty cycles or periods easily without having to write some complicated loop specific to those timings. Below is a quick sketch of what I am trying to achieve and I hope you can understand it...
So basically my question is: is something like this possible with pthreads in C, i.e. do they run simultaneously, so one could be pulsing on and off while the other is waiting for a delay to finish?
If not is there anything that I could use for this instead?
In general, it's not worth using threads for such functionality on a uC. The cost of the extra stacks etc. for such limited operations is not worth it, tempting as it might be from a simplicity POV.
A hardware timer, interrupt and a delta-queue of events is probably the best you could do.
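To answer the direct question, though: yes, pthreads do run concurrently, so on a hosted OS (not a bare micro-controller) the idea works. A minimal sketch, assuming POSIX and using printf as a stand-in for driving a pin (build with -pthread):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* One thread per output; each has its own period and duty cycle. */
struct pulse { const char *name; useconds_t high_us, low_us; };

static void *pulse_loop(void *arg)
{
    struct pulse *p = arg;
    for (;;) {
        printf("%s high\n", p->name);
        usleep(p->high_us);
        printf("%s low\n", p->name);
        usleep(p->low_us);
    }
    return NULL;
}

int main(void)
{
    struct pulse a = { "A", 100000, 400000 };  /* 2 Hz, 20% duty    */
    struct pulse b = { "B", 300000, 300000 };  /* ~1.7 Hz, 50% duty */
    pthread_t ta, tb;
    pthread_create(&ta, NULL, pulse_loop, &a);
    pthread_create(&tb, NULL, pulse_loop, &b);
    pthread_join(ta, NULL);  /* never returns; both threads run forever */
    return 0;
}
```

On a uC, the timer-interrupt/delta-queue approach above achieves the same effect without the per-thread stack cost.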

Implementing general timeouts

I'm porting some code from C# to C. In the C# code there are three timers that fire if particular events take too long and they set flags that are checked next time a thread runs a bit of housekeeping.
The C is pure C, not C++, and will eventually be used on both Linux and embedded targets, so I can't use any OS-oriented stuff: simple soft timers only. I started off using just an "enabled" flag and a due time for each timer, in ms, and when I call the housekeeping function I pass the current ms timer value to it.
Then I started thinking about the wraparound issue and decided I wanted the start time as well, so if the present time isn't between the start time and the due time I know the timer has expired. I want the default duration to be there too, so it ends up being worth making a structure to represent a timer, and then functions that work with pointers to these structures. And then it started me thinking I may be reinventing the wheel.
I don't see anything in the standard libraries that looks like this. Am I missing something? Is this just something that's just easier to do than to look for? :)
Ta for commenting. That's the way I went, just wanted to make sure I wasn't wasting work. Yeah, embedded stuff tends to have a timer interrupt, but three is probably asking a bit much and adds hardware dependencies; I'm just passing the current ms timer value to my code, and then it doesn't have to care about where that value comes from. – Craig Graham
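For reference, a minimal sketch of such a soft-timer structure, assuming a free-running unsigned 32-bit millisecond counter supplied by the caller; unsigned subtraction makes the expiry check immune to counter wraparound, so the start time plus a duration is all that's needed:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     enabled;
    uint32_t start_ms;     /* counter value when the timer was armed */
    uint32_t duration_ms;  /* how long until it expires              */
} soft_timer;

static void timer_start(soft_timer *t, uint32_t now_ms, uint32_t duration_ms)
{
    t->enabled     = true;
    t->start_ms    = now_ms;
    t->duration_ms = duration_ms;
}

static bool timer_expired(const soft_timer *t, uint32_t now_ms)
{
    /* (now - start) is computed modulo 2^32, so the comparison is
     * correct even after now_ms wraps past start_ms. */
    return t->enabled && (now_ms - t->start_ms) >= t->duration_ms;
}
```

The housekeeping function can then call timer_expired() for each of the three timers with whatever current ms value the platform provides.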

Parallel Demonstration Program

An assignment that I've just now completed requires me to create a set of scripts that can configure random Ubuntu machines as nodes in an MPI computing cluster. This has all been done and the nodes can communicate with one another properly. However, I would now like to demonstrate the efficiency of said MPI cluster by throwing a parallel program at it. I'm just looking for a straight brute force calculation that can divide up work among the number of processes (=nodes) available: if one node takes 10 seconds to run the program, 4 nodes should only take about 2.5.
With that in mind I looked for prime calculation programs written in C. For any purists, the program is not actually part of my assignment, as the course I'm taking is purely systems management. I just need anything that will show that my cluster is working. I have some programming experience but little in C and none with MPI. I've found quite a few sample programs, but none of them seem to actually run in parallel. They do distribute all the steps among my nodes, so if one node has a faster processor the overall time will go down, but adding additional nodes does nothing to speed up the calculation.
Am I doing something wrong? Are the programs that I've found simply not parallel? Do I need to learn C programming for MPI to write my own program? Are there any other parallel MPI programs that I can use to demonstrate my cluster at work?
EDIT
Thanks to the answers below I've managed to get several MPI scripts working, among them the summation of the first N natural numbers (which isn't very useful, as it quickly runs into data type limits), the counting and generation of prime numbers, and the Monte Carlo calculation of Pi. Interestingly, only the prime number programs realise a (sometimes dramatic) performance gain with multiple nodes/processes.
The issue that caused most of my initial problems with getting scripts working was rather obscure and apparently due to issues with hosts files on the nodes. Running mpiexec with the -disable-hostname-propagation parameter solved this problem, which may manifest itself in a variety of ways: MPI(R) barrier errors, TCP connect errors and other generic connection failures. I believe it may be necessary for all nodes in the cluster to know one another by hostname, which is not really an issue in classic Beowulf clusters that have DHCP/DNS running on the server node.
The usual proof of concept application in parallel programming is simple raytracing.
That being said, I don't think that raytracing is a good example to show off the power of OpenMPI. I'd put the emphasis on scatter/gather or even better scatter/reduce, because that's where MPI gets the true power :)
The most basic example for that would be calculating the sum over the first N integers. You'll need a master process that fills an array with the value ranges to sum over, and scatters these ranges over the number of workers.
Then you'll need to do a reduction and check your result against the explicit formula, to get a free validation test.
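A minimal sketch of that idea, assuming mpicc and an MPI runtime are installed; for simplicity each rank derives its own slice instead of receiving an explicitly scattered range, but the reduce-and-validate step is as described:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long long N = 1000000;            /* sum 1..N          */
    long long chunk = N / size;
    long long lo = rank * chunk + 1;        /* this rank's slice */
    long long hi = (rank == size - 1) ? N : lo + chunk - 1;

    long long partial = 0, total = 0;
    for (long long i = lo; i <= hi; i++)
        partial += i;

    /* combine the partial sums on rank 0 */
    MPI_Reduce(&partial, &total, 1, MPI_LONG_LONG, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)  /* free validation against the closed formula */
        printf("sum = %lld, expected = %lld\n", total, N * (N + 1) / 2);

    MPI_Finalize();
    return 0;
}
```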
If you're looking for a weaker spot of MPI, a parallel grep might work, where IO is the bottleneck.
EDIT
You'll have to keep in mind that MPI is based on a shared-nothing architecture where the nodes communicate using messages, and that the number of nodes is fixed. These two factors set a very tight frame for the programs that run on it. To make a long story short, this kind of parallelism is great for data-parallel applications, but sucks for task-parallel applications, because you can usually distribute data better than tasks if the number of nodes changes.
Also, MPI has no concept of implicit work-stealing. If a node is finished working, it just sits around waiting for the other nodes to finish. That means you'll have to figure out weakest-link handling yourself.
MPI is very customizable when it comes to performance details; there are numerous different variants of MPI_SEND, for example. That leaves much room for performance tweaking, which is important for high performance computing (which MPI was designed for), but mostly confuses "ordinary" programmers, leading to programs that actually get slower when run in parallel. Maybe your examples just suck :)
And on the scaleup / speedup problem, well...
I suggest that you read up on Amdahl's Law, and you'll see that it's impossible to get linear speedup by just adding more nodes :)
I hope that helped. If you still have questions, feel free to drop a comment :)
EDIT2
Maybe the best-scaling problem that integrates perfectly with MPI is the empirical estimation of Pi.
Imagine a quarter circle with radius 1 inside a square with sides of length 1; you can then estimate Pi by firing random points into the square and counting how many fall inside the quarter circle.
Note: this is equivalent to generating tuples (x, y) with x, y in [0, 1] and measuring how many of them satisfy x² + y² <= 1.
Pi is then roughly equal to
4 * Points in Circle / total Points
In MPI you'd just have to gather the ratios generated by all processes, which is very little overhead and thus makes a perfect proof-of-concept problem for your cluster.
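A minimal sketch, assuming mpicc and using the C library's rand() for brevity (a real benchmark would want a better per-rank random stream):

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long long shots = 10000000;  /* points fired per rank     */
    srand(rank + 1);                   /* different stream per rank */

    long long hits = 0, total_hits = 0;
    for (long long i = 0; i < shots; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)      /* inside the quarter circle? */
            hits++;
    }

    /* one reduction at the very end: minimal communication */
    MPI_Reduce(&hits, &total_hits, 1, MPI_LONG_LONG, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %f\n", 4.0 * total_hits / (shots * (double)size));

    MPI_Finalize();
    return 0;
}
```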
As with any other computing paradigm, there are certain well-established patterns in use with distributed-memory programming. One such pattern is the "bag of jobs" or "controller/worker" (previously known as "master/slave", but now that name is considered politically incorrect). It is best suited for your case because:
under the right conditions it scales with the number of workers;
it is easy to implement;
it has built-in load balancing.
The basic premises are very simple. The "controller" process has a big table/queue of jobs and practically executes one big loop (possibly an infinite one). It listens for messages from "worker" processes and responds back. In the simplest case workers send only two types of messages: job requests or computed results. Consequently, the controller process sends two types of messages: job descriptions or termination requests.
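A skeletal sketch of that loop, with hypothetical message tags and the actual job payload reduced to an integer index; workers ask for work until the controller's queue runs dry:

```c
#include <mpi.h>

enum { TAG_REQUEST = 1, TAG_JOB = 2, TAG_STOP = 3 };

static void controller(int nworkers, int njobs)
{
    int next_job = 0, running = nworkers;
    while (running > 0) {
        int dummy;
        MPI_Status st;
        MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, TAG_REQUEST,
                 MPI_COMM_WORLD, &st);
        if (next_job < njobs) {             /* hand out the next job  */
            MPI_Send(&next_job, 1, MPI_INT, st.MPI_SOURCE, TAG_JOB,
                     MPI_COMM_WORLD);
            next_job++;
        } else {                            /* queue empty: terminate */
            MPI_Send(&next_job, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                     MPI_COMM_WORLD);
            running--;
        }
    }
}

static void worker(void)
{
    for (;;) {
        int job = 0;
        MPI_Status st;
        MPI_Send(&job, 1, MPI_INT, 0, TAG_REQUEST, MPI_COMM_WORLD);
        MPI_Recv(&job, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
        if (st.MPI_TAG == TAG_STOP)
            break;
        /* compute_job(job);  the real work (and result send) goes here */
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) controller(size - 1, 100);
    else           worker();
    MPI_Finalize();
    return 0;
}
```

Note that the load balancing falls out automatically: a fast worker simply comes back for more jobs sooner.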
And the canonical non-trivial example of this pattern is colouring the Mandelbrot set. Computing each pixel of the final image is done completely independently of the other pixels, so it scales very well even on clusters with high-latency, slow network connections (e.g. GigE). In the extreme case each worker could compute a single pixel, but that would result in very high communication overhead, so it is better to split the image into small rectangles. One can find many ready-made MPI codes that colour the Mandelbrot set. For example, this code uses row decomposition, i.e. a single job item is to fill one row of the final image. If the number of MPI processes is big, one has to have fairly large image dimensions, otherwise the load won't balance well enough.
MPI also has mechanisms that allow spawning additional processes or attaching externally started jobs in client/server fashion. Implementing them is not rocket science, but still requires some understanding of advanced MPI concepts like intercommunicators, so I would skip that for now.

Make a play within a time limit in C

Well, as the title suggests, I am coding a game called Morabaraba. It will be a computer-vs-human game. I will use game trees and alpha-beta cut-offs in order to generate the computer's plays, and every computer play should be made within a specific amount of time (let's say 15 seconds).
What is the best way to count the seconds elapsed since the beginning of the computer's turn and check that it has not exceeded the time limit, without overloading the system? As we know, time is precious when generating game trees, and exceeding the time limit will result in forfeit.
I want to know how this could be done with a lightweight algorithm. How can I check, every x seconds, that the time limit has not been reached? Is the cost of this check negligible?
Thanks in advance.
Yes, the time for this check will be mostly negligible, because you will poll the elapsed time at discrete intervals and compare it against the start time to know how much time has passed so far.
I see two solutions here:
embed the time check in the same thread that is computing the alpha-beta pruning, and stop it accordingly, returning the best solution found so far;
place the AI code on a separate thread and interrupt it when the time goes over the threshold, ensuring that the best solution so far is always stored somewhere so that you can pick it up.
The second approach may sound scary, but in certain situations it's just impractical to check the elapsed time within the algorithm itself, because the code cannot easily be adapted to interrupt itself from the inside (for example, if you have a non-modular algorithm made of many different steps, which is not your case though).
Start by just checking the time after every "step" or every N steps, in some way that you believe you are checking at least once a second. If the nature of your computation makes this impractical, you can use something like POSIX timers to signal your process when a given amount of time has passed.
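A minimal sketch of the first approach, assuming POSIX clock_gettime and checking the clock only every few thousand nodes so the check itself stays cheap (out_of_time and start_turn are hypothetical names; the alpha-beta search would call out_of_time() at the top of its recursion and unwind with the best move found so far):

```c
#include <stdbool.h>
#include <time.h>

#define CHECK_EVERY 4096   /* nodes searched between clock reads */

static struct timespec deadline;
static long nodes_since_check;

static void start_turn(int limit_seconds)
{
    clock_gettime(CLOCK_MONOTONIC, &deadline);
    deadline.tv_sec += limit_seconds;   /* e.g. 15 seconds per move */
    nodes_since_check = 0;
}

static bool out_of_time(void)
{
    if (++nodes_since_check < CHECK_EVERY)
        return false;                   /* cheap path: no syscall   */
    nodes_since_check = 0;

    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return now.tv_sec > deadline.tv_sec ||
          (now.tv_sec == deadline.tv_sec && now.tv_nsec >= deadline.tv_nsec);
}
```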
