Pseudocode for what I want to accomplish:

    // gets the id of the digital signal processor this code runs on
    int dsp_id = get_dsp_id();
    if (dsp_id == 0) {
        // code run only once
        // IRQ-start all other DSPs, including DSP 0
    } else {
        // code run multiple times
    }
The problem is that when I send the start IRQ to all DSPs, including id 0, I keep entering the if branch over and over. I tried to guard it with a global static bool flag, but that did not work.
You have a race condition. I imagine that the other DSPs you kick off hit the if statement before your global variable is set. You need to protect the flag with a mutex. In pseudocode this would be something like:
    if (dsp_id == 0) {
        get mutex lock
        if (!alreadyRun)
        {
            // code run only once
            // IRQ-start all other DSPs, including DSP 0
            set alreadyRun to true
        }
        release mutex lock
    } else {
        // code run multiple times
    }
where alreadyRun is your boolean flag. You cannot, by the way, just write alreadyRun = true, because there is no guarantee that other processors will see the change if the cache of the processor setting it has not been flushed back to main memory. Your threading library will have appropriate functions to do the mutex locking and to set alreadyRun safely. For example, C11 defines atomic types and operations in stdatomic.h for your flag, and mutex functions in threads.h.
First of all, I don't even know how to address this issue, nor how to google a solution for it, so if you have a better title that describes my issue, that would be great.
Intro: I'm currently developing C code for a school project. The project is not really ambitious, but I would like to do it the right way. The project is a hardware password manager.
One part of the project is the feature of the device locking itself if no input is received for X seconds. I'm planning to achieve this through timers and interrupts (since I'm working on a microcontroller), but this has little to do with the real question here.
My initial design was: in the main() function, call the function lock() so the device boots locked; once the correct PIN is provided, the function MainMenu() is called (inside the lock() scope). However, MainMenu() contains a while(1) { some code } loop that never returns, because it doesn't need to. The next time the user is AFK, an interrupt will trigger and lock() will be called, effectively saving the program counter of MainMenu(). Once the user inputs the correct PIN, lock() will call MainMenu() again. In other words, function A calls function B, which calls A again, and so on.
See the problem? I will be eternally saving local variables (the saved program counter, at the very least) that will never be used. I have solved the problem with some tweaks to the design; however, the question persists.
Is there a way to call a function without saving the current environment I am in?
Is having a cyclic model of functions a real problem, or can it be avoided by designing a good model and then implementing it? If so, what kind of solutions do developers use?
Edit 1: One comment suggested break. That is not what I want, because the problem is not inside a loop but in two functions calling each other. Another comment suggested setjmp() and longjmp(). I think those functions are useful if you want to save the environment you are currently running in; in my case it's precisely the opposite: I do not want to save the environment I'm in.
A state machine sounds nice and sophisticated, but the following design is a bit simpler. This is how I would do it:
    #include <stdbool.h>

    /* locks screen
     *
     * Does not return until valid keys for unlock have been entered.
     */
    void lockScreen(void);

    /* prints menu (choice may be used to highlight the respective item) */
    void printMenu(int choice);

    /* reads from keyboard
     *
     * returns: < 0 ... timeout
     *         >= 0 ... selected menu item
     */
    int keyInput(void);

    /* main function */
    int main(void)
    {
        int lastChoice = -1;
        for (bool lock = true;;) { /* never-ending run-time loop */
            if (lock) { lockScreen(); lock = false; }
            printMenu(lastChoice);
            int choice = keyInput();
            switch (choice) {
                /* regular choices */
                /* exit required?
                case 0: return 0;
                */
                case 1: /* do 1st command */ break;
                case 2: /* do 2nd command */ break;
                default: /* should not happen */ break;
                /* timeout */
                case -1: lock = true;
            }
            if (choice >= 0) lastChoice = choice;
        }
    }
Notes:
Please consider this a sketch (not an MCVE). As the questioner didn't expose code or specific details of the intended platform, I tried to answer generically.
lockScreen() could use keyInput() as well. Thereby, keys could be collected until they form a complete password (which might be correct or wrong), and the timeout could be used to reset an incomplete password.
I have a small doubt regarding synchronization in the Linux kernel: what kind of locking technique is suitable for protecting a critical region shared between interrupt context and process context?
Thanks in advance.
Hiya. By definition, solutions like semaphores (or mutexes), test-and-set, and of course spinlocks protect critical code.
They work either by guaranteeing an atomic operation (for example, acquiring the lock takes exactly one operation to complete), by protocols such as the bakery protocol, or by disabling preemption of the process (which is what you want here). Once locked, no one else can enter the critical code (say, code that uses shared memory), so even if there IS a context switch and two threads run concurrently, we are promised that only one of them can access that code. The assumption is that all threads which use that memory and have a critical region acquire the same lock.
For more info on spinlocks (which disable preemption of the CPU), refer to this: http://www.linuxinternals.org/blog/2014/05/07/spinlock-implementation-in-linux-kernel/
Notice that a spinlock does a "busy wait": while preemption is disabled and you haven't acquired the lock, the CPU "wastes" computation time spinning in a useless loop.
You can also use the irq/preempt commands directly, but that's pretty dangerous, e.g.:

    preempt_enable()             decrement the preempt counter
    preempt_disable()            increment the preempt counter
    preempt_enable_no_resched()  decrement, but do not immediately preempt
    preempt_check_resched()      if needed, reschedule
    preempt_count()              return the preempt counter
Since I don't really know what you're trying to achieve, it's kind of hard to get specific and answer your needs, but I really like sleeping semaphores:
http://www.makelinux.net/books/lkd2/ch09lev1sec4
Unlike the other solutions I've offered, they won't busy-wait, which saves computation time.
I really hope this helped... good luck!
I want to implement something on an ARM Cortex-M3 processor (with NVIC). I have limited knowledge of embedded systems, but I know that an ISR routine should be as simple as possible.
Now I have the following problem: I have an interrupt routine that is invoked when a CAN message is received. In one case I have to perform a time-consuming task on the CAN message, a task that must not be interrupted by another CAN message, although other interrupts may be served during the operation. My idea is the following:
CAN message received, ISR started, do some simple jobs (e.g. setting flags).
Disable CAN interrupts in the CAN ISR routine, and save the CAN message into a global variable so it can be reached from the main loop.
In the main loop, do the time-consuming task.
After this task, enable CAN interrupt handling again.
Is this a good (or at least not totally bad) idea, or should I do it another way?
It is generally not a good idea to disable all (CAN) interrupts. It seems that what you want to protect against is the same message arriving a second time before you are done serving the first.
In that case you should take advantage of the ISR itself being non-interruptible. You can create a simple semaphore with a bool variable, and since the interrupt that sets it is non-interruptible, you don't even have to worry about atomic access to that boolean. C-like pseudocode for the CAN handler:
    typedef struct
    {
        bool busy;
        can_msg_t msg;
    } special_msg_t;

    // must be volatile, to prevent optimizer-related bugs:
    static volatile special_msg_t special = { false, {0} };

    interrupt void can_isr (void)
    {
        // non-interruptible ISR, no other interrupt can fire
        if (id == SPECIAL && !special.busy)
        {
            special.busy = true;
            // right here you can open up for more interrupts if you want
            memcpy(&special.msg, &new_msg, size);
        }
    }

    result_t do_things_with_special (void) // called from main loop
    {
        if (!special.busy)
            return error; // no new message received, no can do

        // do things with the message

        special.busy = false; // flag that processing is done
        return ok;
    }
an ISR routine should be as simple as possible.
Exactly.
There is a Linux kernel concept called the bottom half, i.e. keeping the ISR as simple as possible; the rest of the interrupt-handling work is deferred to a later time.
There are many ways to implement bottom halves, such as tasklets, workqueues, etc.
I suggest reading the following links:
http://www.makelinux.net/ldd3/chp-10-sect-4
http://www.ibm.com/developerworks/library/l-tasklets/
Keeping interrupts disabled for a long time can lead to missed interrupts (and hence data loss). After the interrupt flag is set, create a deferred work item and get out of interrupt context.
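The deferral pattern described above can be sketched with a Linux workqueue; this is a hedged, non-buildable fragment, and names like can_work and handle_can_msg are invented for illustration:

```
#include <linux/workqueue.h>
#include <linux/interrupt.h>

/* bottom half: runs later in process context, interrupts enabled,
 * so the time-consuming processing is safe here */
static void handle_can_msg(struct work_struct *work)
{
    /* do the heavy lifting on the saved message ... */
}
static DECLARE_WORK(can_work, handle_can_msg);

/* top half: keep it minimal */
static irqreturn_t can_isr(int irq, void *dev)
{
    /* copy the message somewhere safe, set flags ... */
    schedule_work(&can_work);   /* defer the rest */
    return IRQ_HANDLED;
}
```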
select() returns -1 on error, 0 on timeout, and the number of ready descriptors in the set on success.
Suppose we have the following pseudocode:
    while (1) {
        int s = select(..., &timeout); // timeout = 5 sec
        if (s < 0) { perror(...); }
        else if (s == 0) { /* timeout */ }
        else {
            // some recv event or STDIN input is ready
        }
    }
I understand that the process waits either until the timeout expires or until some recv event occurs.
I need it to keep counting down the specified time while receiving from an arbitrary number of peers, using only select(). How can I achieve this?
On Linux, the select system call decrements the timeout value by the amount of time elapsed. POSIX allows but does not require this behaviour, which makes it hard to rely on; portable code should assume that timeout's contents are unspecified when the select call returns.
The only really portable solution is to start by computing the absolute time at which you want the timeout to expire, and then check the time before each subsequent call to select in order to compute the correct timeout value. Beware of clocks that might run backwards (or skip forwards); CLOCK_MONOTONIC is usually your best bet.
I need a case where an established TCP connection gives some errors, e.g. sendto() or recv() fails, while the socket connection stays in place.
That way I want to check how my application behaves if sending or receiving data fails once or twice.
Initially, I tested this by hardcoding those return values, but now I want to see it in a real-time scenario.
Thanks in advance.
I don't think you can make send/receive act exactly as you describe, but there may be a workaround.
You can define a global flag and set up a signal handler to change the flag's value. Then, from a shell, you can send the signal to your app to flip the flag. By checking the flag's value, your program can enter the error test case in a real-time scenario.
The global flag and the signal handler (the flag should be volatile sig_atomic_t so it is safe to modify from a handler):

    static volatile sig_atomic_t link_error = 0;

    static void handler(int sig)
    {
        link_error = 1; /* indicate that an error should be injected */
    }
In main(), set up a signal handler, for example for SIGUSR1 (signal number 10 on Linux x86):

    struct sigaction sa = {0};
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sa.sa_handler = handler;
    if (sigaction(SIGUSR1, &sa, NULL) == -1)
        return -1;
Then wrap the function under test, such as send(), in a version that checks the flag:

    int send_test(...)
    {
        /* a link error was requested */
        if (link_error) {
            link_error--;
            return -1;
        }
        return send(...);
    }
While your program is running, you can trigger the test at any time with kill -s 10 xxx (where xxx is your program's pid).
I'm not entirely sure I follow you, but...
Try unplugging the network cable from the device you're talking to, not from the machine running your code; that's one failure case. You could also write a test app for the other end that deliberately stalls, or shuts down only the write or read side of the connection; shrinking the socket's tx and rx buffer sizes will let you fill them quickly and see stalls as a result. You could probably also do other things, such as making your MTU very small, which usually tests a bunch of assumptions in code. You could also put something like WANem in the mix to stress your code.
There are a lot of failure cases in networking that need testing; there's no simple answer to this.
If you get any error on a socket connection other than a read timeout, the connection is broken: it does not 'remain in place'. Ergo, you cannot induce such a condition in your application. All you can do is hold up the sending end long enough to induce read timeouts.