I'm currently working on a mini game project, and I have a problem with this specific mechanic:
void monsterMove() {
    // monster moves randomly
}

void playerMove() {
    // accept input as player movement using W, A, S, D
}
However, the project requires the monsters to keep moving at all times, even when the player is not moving.
After some research, I figured out that multithreading is needed to implement this mechanic, since both monsterMove() and playerMove() need to run concurrently even when playerMove() hasn't received any input from the user.
Specifically, there are two questions I want to address:
Which function needs to be made as a thread?
How to build the thread?
Sadly, I found no resource addressing this specific type of question; everything on the Internet seems to explain how multithreading enables parallelism in general, but not how to apply it in an implementation like this.
I was thinking that monsterMove() would run repeatedly while playerMove() would be made the thread, since monsterMove() needs to run every n seconds even when the playerMove() thread is not finished yet (no input yet). Although I might be wrong for the most part.
Thank you in advance!
(P.S. Just to avoid misunderstandings, I am specifically asking about how threads and multithreading work, not about the logic of the mechanic.)
Edit: Program is now working! However, any answer/code related to how this program is done with multithreading is immensely appreciated. :)
You don't need multithreading for this:
The basic structure is depicted in this pseudocode:
while (1)
{
    if (input is available, checked using kbhit)
    {
        read user input using getch
        move player depending on user input
    }
    move monsters
}
You could use multithreading with the CreateThread function, but your code would just become overly complex.
I asked this question on the EE forum. You guys on Stack Overflow know more about coding than we do on EE, so maybe you can give more detailed information about this :)
When I learned about microcontrollers, my teachers taught me to always end the code with while(1); with no code inside that loop.
This was to make sure the software gets "stuck" there, so that interrupt handling keeps working. When I asked them whether it was possible to put some code in this infinite loop, they told me it was a bad idea. Knowing that, I have tried my best to keep this loop empty.
I now need to implement a finite state machine in a microcontroller. At first view, it seems that this code belongs in that loop. That makes coding easier.
Is that a good idea? What are the pros and cons?
This is what I plan to do :
void main(void)
{
    // init phase
    while (1)
    {
        switch (current_State)
        {
        case 1:
            if (...)
            {
                current_State = 2;
            }
            else if (...)
            {
                current_State = 3;
            }
            else
            {
                current_State = 4;
            }
            break;
        case 2:
            if (...)
            {
                current_State = 3;
            }
            else if (...)
            {
                current_State = 1;
            }
            else
            {
                current_State = 5;
            }
            break;
        }
    }
}
Instead of:
void main(void)
{
    // init phase
    while (1);
}
and manage the FSM with interrupts.
It is like the rule "return from all functions in one place", or other such habits. There is one type of design where you might want to do this: one that is purely interrupt/event driven. There are products that go completely the other way, polled and not event driven. And anything in between.
What matters is doing your system engineering; that's it, end of story. Interrupts add complication and risk; they have a higher price than not using them. Automatically making any design interrupt driven is automatically a bad decision; it simply means no effort was put into the design, the requirements, the risks, etc.
Ideally you want most of your code in the main loop; you want your interrupts lean and mean in order to keep latency down for other time-critical tasks. Not all MCUs have a sophisticated interrupt priority system that would let you burn a lot of time in, or run all of your application from, handlers. These are inputs into your system engineering and may help choose the MCU, but here again you are adding risk.
You have to ask yourself what tasks your MCU has to do; what latency, if any, there is for each task from when an event happens until it has to start responding and until it has to finish; and, per event/task, what portion of it, if any, can be deferred. Can any be interrupted while doing the task? Can there be a gap in time? These are all the questions you would ask for a hardware, CPLD, or FPGA design, except that there you have real parallelism.
What you are likely to end up with in real world solutions are some portion in interrupt handlers and some portion in the main (infinite) loop. The main loop polling breadcrumbs left by the interrupts and/or directly polling status registers to know what to do during the loop. If/when you get to where you need to be real time you can still use the main super loop, your real time response comes from the possible paths through the loop and the worst case time for any of those paths.
Most of the time you are not going to need to do this much work. Maybe some interrupts, maybe some polling, and a main loop doing some percentage of the work.
As you should know from the EE world, if a teacher or anyone else says there is one and only one way to do something and everything else is by definition wrong, it is time to find a new teacher, or to pretend to drink the kool-aid, pass the class, and move on with your life. Also note that the classroom experience is not the real world. There are so many things that can go wrong in MCU development that a class is really a controlled sandbox with ideally only a few variables you can play with, so that you don't have to spend years getting through a few-month class. Some percentage of the rules stated in class are there just to get you (or the teacher) through the class; it is easier to grade papers if you tell folks a function can't be bigger than X, or no gotos, or whatever. The first thing you should do when the class is over, or add to your lifetime bucket list, is to question all of these rules. Research and try on your own; fall into the traps and dig out.
When doing embedded programming, one commonly used idiom is the "super loop": an infinite loop that begins after initialization is complete and dispatches the separate components of your program as they need to run. Under this paradigm, you could run the finite state machine within the super loop as you're suggesting, and continue to run the hardware management functions from the interrupt context, as it sounds like you're already doing.
One of the disadvantages of doing this is that your processor will always be in a high-power-draw state: since you're always running that loop, the processor can never go to sleep. This would actually be a problem in any of the code you had written, however; even an empty infinite while loop will keep the processor running. The usual solution is to end your while loop with a series of instructions that put the processor into a low-power state (completely architecture dependent) from which an incoming interrupt will wake it to be processed. If there are things happening in the FSM that are not driven by any interrupts, a commonly used approach to keep the processor waking up at periodic intervals is to initialize a timer that interrupts on a regular basis, causing your main loop to continue execution.
One other thing to note, if you were previously executing all of your code from the interrupt context - interrupt service routines (ISRs) really should be as short as possible, because they literally "interrupt" the main execution of the program, which may cause unintended side effects if they take too long. A normal way to handle this is to have handlers in your super loop that are just signalled to by the ISR, so that the bulk of whatever processing that needs to be done is done in the main context when there is time, rather than interrupting a potentially time critical section of your main context.
Which approach you implement is your choice, and it affects how easy your code is to debug.
There are times when it is right to use the while(1); statement at the end of the code, if your uC handles everything in interrupts (ISRs). In other applications the uC runs code inside an infinite loop (called the polling method):
while (1)
{
    // code here;
}
And at some other application, you might mix the ISR method with the polling method.
Regarding "ease of debugging": using only the ISR method (putting the while(1); statement at the end) will make debugging harder, since when an interrupt event triggers, the debugger of your choice will not give you step-by-step register reading and following. Also, please note that writing completely ISR-driven code is not recommended, since ISRs should do minimal work (increment a counter, raise/clear a flag, etc.) and exit swiftly.
It belongs in one thread that executes it in response to input messages from a producer-consumer queue. All the interrupts etc. feed input into the queue, and the thread processes them through its FSM serially.
It's the only way I've found to avoid undebuggable messes whilst retaining the low latency and efficient CPU use of interrupt-driven I/O.
'while(1);' UGH!
I am a bit of a novice with pthreads, and I was hoping someone could help with a problem I've been having. Say you have a collection of threads, all being passed the same function, which looks something like this:
void *func(void *args) {
    ...
    while (...) {
        ...
        switch (...)
        {
        case one:
            do stuff;
            break;
        case two:
            do other stuff;
            break;
        case three:
            do more stuff;
            break;
        }
        ...
    }
}
In my situation, if "case one" is triggered by ANY of the threads, I need for all of the threads to exit the switch and return to the start of the while loop. That said, none of the threads are ever waiting for a certain condition. If it happens that only "case two" and "case three" are triggered as each thread runs through the while loop, the threads continue to run independently without any interference from each other.
Since the above is so vague, I should probably add some context. I am working on a game server that handles multiple clients via threads. The function above corresponds to the game code, and the cases are various moves a player can make. The game has a global and a local component -- the first case corresponds to the global component. If any of the players choose case (move) one, it affects the game board for all of the players. In between the start of the while loop and the switch is the code that visually updates the game board for a player. In a two player game, if one player chooses move one, the second player will not be able to see this move until he/she makes a move, and this impacts the gameplay. I need the global part of the board to dynamically update.
Anyway, I apologize if this question is trivial, but some preliminary searching on the internet didn't produce anything valuable. It may just be that I need to change the whole structure of the code, but I'm kind of clinging to this because it's so close to working.
You need to have an atomic variable that acts as a counter for the switch-case.
Atomic variables are guaranteed to perform math operations atomically, which is what multithreaded environments require.
Initialize the atomic variable to 1 in the main/dispatcher thread and pass it via args or make it global.
The atomic counter can be declared as:
volatile LONG counter = 1;
For Windows, use InterlockedExchangeAdd; its return value is the previous value.
Each thread does:
LONG val = InterlockedExchangeAdd(&counter, 1);
switch(val)
...
For GCC:
long val = __sync_fetch_and_add(&counter, 1);
switch(val)
...
May I also suggest that you first inform yourself about standard task synchronization schemes?
You might get an idea of how to change your program structure to be more flexible.
Task synchronization basics: (binary semaphores, mutexes)
http://www.chibios.org/dokuwiki/doku.php?id=chibios:articles:semaphores_mutexes
(If you are interested in Inter-Process Communication (IPC) in more detail, like message passing, queues, ..., just ask!)
Furthermore I recommend reading about implementation of state machines, which could help making your player code more flexible!
A bit complex (I only know easier resources in German; maybe a native speaker can help):
http://johnsantic.com/comp/state.html
Is there a typical state machine implementation pattern?
If you want to stick with what you have, a global variable, which can be changed & read by any other task will do.
Regards, Florian
What would be the correct way to prevent a soft lockup/unresponsiveness in a long-running while loop in a C program?
(dmesg is reporting a soft lockup)
Pseudo code is like this:
while (worktodo) {
    worktodo = doWork();
}
My code is of course far more complex, and it also includes a printf statement that executes once a second to report progress, but the problem is that the program stops responding to Ctrl+C at this point.
Things I've tried which do work (but I want an alternative):
doing a printf every loop iteration (I don't know why, but the program becomes responsive again that way (???)) - this wastes a lot of performance on unneeded printf calls (each doWork() call does not take very long)
using sleep/usleep/... - this also seems like a waste of (processing) time to me, as the whole program will already be running for several hours at full speed
What I'm thinking about is some kind of process_waiting_events() function or the like, and normal signals seem to be working fine as I can use kill on a different shell to stop the program.
Additional background info: I'm using GWAN and my code is running inside the main.c "maintenance script", which seems to be running in the main thread as far as I can tell.
Thank you very much.
P.S.: Yes I did check all other threads I found regarding soft lockups, but they all seem to ask about why soft lockups occur, while I know the why and want to have a way of preventing them.
P.P.S.: Optimizing the program (making it run shorter) is not really a solution, as I'm processing a 29GB bz2 file which extracts to about 400GB xml, at the speed of about 10-40MB per second on a single thread, so even at max speed I would be bound by I/O and still have it running for several hours.
While the posted answer using threads might be an option, in reality it would just shift the problem to a different thread. My solution in the end was using
sleep(0)
I also tested sched_yield / pthread_yield, neither of which really helped. Unfortunately I've been unable to find a good resource documenting sleep(0) on Linux, but for Windows the documentation states that using a value of 0 lets the thread yield its remaining part of the current CPU slice.
It turns out that sleep(0) is most probably relying on what is called timer slack in linux - an article about this can be found here: http://lwn.net/Articles/463357/
Another possibility is using nanosleep(&(struct timespec){0}, NULL) which seems to not necessarily rely on timer slack - linux man pages for nanosleep state that if the requested interval is below clock granularity, it will be rounded up to clock granularity, which on linux depends on CLOCK_MONOTONIC according to the man pages. Thus, a value of 0 nanoseconds is perfectly valid and should always work, as clock granularity can never be 0.
Hope this helps someone else as well ;)
Your scenario is not really a soft lockup; it is a process that is busy doing something.
How about this pseudo code:
void workerThread()
{
    while (workToDo)
    {
        if (threadSignalled)
            break;
        workToDo = DoWork();
    }
}

void sighandler()
{
    signal worker thread to finish
    waitForWorkerThreadFinished;
}

void main()
{
    InstallSignalHandler;
    CreateSemaphore;
    StartThread;
    waitForWorkerThreadFinished;
}
Clearly a timing issue. Using a signalling mechanism should remove the problem.
The use of printf "solves" the problem because printf accesses the console, which is an expensive and time-consuming operation; in your case it yields enough time for other pending work, such as signal delivery, to be processed.
I have a program written in C and running on Linux which acquires streaming data from a serial port device every 16 or so ms. This is a time critical piece of code that works fine. Another piece of code plots this data, also in real time, but its timely execution is less important to me than the data acquisition part. That is, I don't want to wait until all the plotting and drawing functions have finished before polling the serial port again. So I was thinking of having a separate thread do the plotting part of the application, or perhaps have the data acquisition part be the separate thread. I really have next to no experience when it comes to low-level programming, so could someone point me in the right direction? The pseudo-code with which I am working looks something like this:
int xyz; // global variable

int main() {
    do_some_preliminary_stuff();
    while (1) {
        poll_serial_port_and_fill_xyz_with_new_position_and_repeat();
    }
    while (1) { // never reached
        plot_xyz();
    }
    return 0;
}
Obviously as written, the code will be stuck in the first while loop, so yeah, threads?
Thanks.
Take care! Can your plotting routine keep up, on average, with the rate at which your data arrives on the serial port? If not, what should happen to xyz? Should un-plotted values be overwritten, or something else? If you can't keep up, this question needs to be answered first.
If you can keep up on average, then as you say you have little experience in low-level (i.e. threaded) programming, you might consider using two processes connected by a shell pipe:
poll_for_serial_data | plot_data
The first process is your while loop, writing the polled data to stdout in some convenient format. The second process reads data from stdin and plots it. This achieves the same end as the multithreaded approach, but is simpler and easier to write, as the OS handles the synchronisation and protection problems for you. And on Linux it's pretty efficient.
If this isn't performant enough for you, it could still act as a model for a multithreaded version.
Yup, that is the way to go. Have a non-main thread be the data acquisition thread, which posts the buffered response to the main/UI thread; the main thread should consume this data and do the plotting.
I found the tsc2007 driver and modified it according to our needs. Our firm produces its own TI DM365 board. On this board we used a TSC2007 and connected its PENIRQ pin to GPIO0 of the DM365. It is detected OK by the driver. When I touch the touchscreen the cursor moves, but at the same time I get the
BUG: scheduling while atomic: swapper /0x00000103/0, CPU#0
warning and the embedded Linux crashes. There are 2 files that I modified and uploaded to http://www.muhendislikhizmeti.com/touchscreen.zip - one with a timer, the other without. It gives this error in either case.
I found a solution on the web saying that I need to use a work queue and call it using the schedule_work() API, but that is still unclear to me. Does anybody have an idea how to solve this problem, and can you give me some advice on where to start with work queues?
"Scheduling while atomic" indicates that you've tried to sleep somewhere that you shouldn't - like within a spinlock-protected critical section or an interrupt handler.
Common examples of things that can sleep are mutex_lock(), kmalloc(..., GFP_KERNEL), get_user() and put_user().
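On the asker's follow-up about work queues: the usual pattern is to keep the interrupt handler minimal and defer anything that may sleep (such as I2C transfers) to a work item running in process context. A sketch against the classic workqueue API (names like ts_work are illustrative; this is not the actual tsc2007 driver code):

```
#include <linux/interrupt.h>
#include <linux/workqueue.h>

static struct work_struct ts_work;

/* Runs in process context: sleeping, mutexes, and I2C traffic are allowed here. */
static void ts_work_handler(struct work_struct *w)
{
    /* read the TSC2007 over I2C, report coordinates, etc. */
}

static irqreturn_t penirq_handler(int irq, void *dev)
{
    /* Atomic context: do not sleep; just hand off to the work queue. */
    schedule_work(&ts_work);
    return IRQ_HANDLED;
}

/* In the probe/init path: */
/*   INIT_WORK(&ts_work, ts_work_handler); */
```

This way the PENIRQ handler returns immediately, and anything that might schedule happens safely outside atomic context.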
Exactly as said in the first answer, "scheduling while atomic" happens when the scheduler is asked to do something it cannot do properly: a schedule() was attempted from a section where schedulable code runs inside a non-schedulable one.
For example, using sleeps inside a section protected by a spinlock. Trying to take another lock (semaphore, mutex, ...) inside spinlock-protected code may also disturb the scheduler. In addition, using spinlocks in user space can drive the scheduler to behave this way. Hope this helps.
For anyone else with a similar error - I had this problem because I had a function, called from an atomic context, that used kzalloc(..., GFP_KERNEL) when it should have used GFP_NOWAIT or GFP_ATOMIC.
This is just one example of a function sleeping when you don't want it to, which is something you have to be careful of in kernel programming.
Hope this is useful to somebody else!
Thanks to the former two answers; in my case it was enough to disable preemption:
preempt_disable();
// Your code with locks and schedule()
preempt_enable();