Preventing memory leak in dependent functions [closed] - c

First of all, I don't even know how to address this issue or how to google a solution for it, so if you have a better title that describes my issue better, that would be great.
Intro: I'm currently developing C code for a project at school. The project is not really ambitious, but I would like to do it the right way. The project is about a hardware password manager.
One part of the project is the feature of the device locking itself if no input is received in X seconds. I'm planning to achieve this through timers and interrupts (since I'm working on a microcontroller), but this has little to do with the real question here.
My initial design was: in the main() function, call the function lock() so the device boots locked; once the correct PIN is provided, the function MainMenu() is called (inside the lock() scope). However, MainMenu() has a while(1) { some code } which never returns anything, because it doesn't need to. The next time the user is AFK, an interrupt will trigger and lock() will be called, effectively saving the program counter of MainMenu(). Once the user inputs the correct PIN, lock() will call MainMenu() again. In other words, function A will call function B, which will again call A, and so on.
See the problem? I will be eternally saving local variables that will never be used (the PC at the very least). I have solved the problem with some tweaks in the design. However, the question persists.
Is there a way to call a function without saving the current environment I am in?
Is having a cyclic model of functions a real problem, or can it be solved by designing a good model and then implementing it? If so, what kind of solutions do developers use?
Edit 1: One comment suggested break. This is not what I want, because the problem is not inside a loop but in two functions calling each other. Another comment suggested setjmp() and longjmp(). I think these functions are useful if you want to save the environment you are currently running in. In my case, however, it's precisely the opposite: I do not want to save the environment I'm in.
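For reference, a minimal sketch of what the setjmp()/longjmp() suggestion looks like when used to discard the nested environment rather than save it (request_lock() is a hypothetical helper; note that calling longjmp() directly from a real ISR is unsafe on most MCUs, so the ISR should only set a flag that the main context checks):
#include <setjmp.h>

static jmp_buf main_env;     /* restart point in main() */

void lock(void);             /* blocks until the correct PIN is entered, then returns */
void MainMenu(void);         /* the while(1) menu from the question */

void request_lock(void)      /* what the inactivity timeout would trigger */
{
    /* NOTE: must not be called from inside a real ISR; defer to main context */
    longjmp(main_env, 1);    /* discard the MainMenu() frame, land back in main() */
}

int main(void)
{
    setjmp(main_env);        /* longjmp() returns here with the stack unwound */
    for (;;) {
        lock();              /* device boots (and re-locks) here */
        MainMenu();          /* runs until request_lock() throws us back */
    }
}
This way the stack never grows: each re-lock unwinds back to main() instead of nesting another lock()/MainMenu() pair.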

A state machine sounds nice and sophisticated. The following design is a bit simpler. This is how I would do it:
#include <stdbool.h>

/* Locks the screen.
 *
 * Does not return until valid keys for unlock have been entered.
 */
void lockScreen(void);

/* Prints the menu (choice may be used to highlight the respective item). */
void printMenu(int choice);

/* Reads from the keyboard.
 *
 * returns: < 0  ... timeout
 *          >= 0 ... selected menu item
 */
int keyInput(void);

/* main function */
int main(void)
{
    int lastChoice = -1;
    for (bool lock = true;;) { /* never-ending run-time loop */
        if (lock) { lockScreen(); lock = false; }
        printMenu(lastChoice);
        int choice = keyInput();
        switch (choice) {
        /* regular choices */
        /* exit required?
        case 0: return 0;
        */
        case 1: /* do 1st command */ break;
        case 2: /* do 2nd command */ break;
        /* timeout */
        case -1: lock = true; break;
        default: /* should not happen */ break;
        }
        if (choice >= 0) lastChoice = choice;
    }
}
Notes:
Please consider this a sketch (not an MCVE). As the questioner didn't expose code or specific details of the intended platform, I tried to answer this "generically".
lockScreen() could use keyInput() as well: keys could be collected until they form a complete password (which might be correct or wrong), and the timeout could be used to reset an incomplete password.
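For illustration, a sketch of such a lockScreen() (PW_LEN and checkPassword() are assumed helpers, not part of the original answer):
#include <stdbool.h>

#define PW_LEN 4                      /* assumed fixed PIN length */

int keyInput(void);                   /* from the sketch above */
bool checkPassword(const int *keys);  /* assumed helper: validates a complete PIN */

void lockScreen(void)
{
    int entered[PW_LEN];
    int len = 0;
    for (;;) {
        int key = keyInput();
        if (key < 0) {                /* timeout: discard the incomplete password */
            len = 0;
            continue;
        }
        entered[len++] = key;
        if (len == PW_LEN) {
            if (checkPassword(entered))
                return;               /* correct PIN: unlock */
            len = 0;                  /* wrong PIN: start over */
        }
    }
}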

Related

Make code run once with multi processors [closed]

Pseudo code for what I want to accomplish:
// gets the currently running digital signal processor
int dsp_id = get_dsp_id();
if (dsp_id == 0) {
    // code run only once
    // irq start all other dsps including dsp_id 0
} else {
    // code run multiple times
}
The problem is that when I send the start IRQ to all DSPs, including id 0, I get into the if statement over and over. I tried to flag it with a global static bool, but that did not work.
You have a race condition. I imagine that the other threads you kick off hit the if statement before your global variable is set. You need to protect the flag with a mutex. In pseudo code this would be something like:
if (dsp_id == 0) {
    get mutex lock
    if (!alreadyRun)
    {
        // code run only once
        // irq start all other dsps including dsp_id 0
        set alreadyRun to true
    }
    release mutex lock
} else {
    // code run multiple times
}
where alreadyRun is your boolean variable. You cannot, by the way, just write alreadyRun = true, because there is no guarantee that other processors will see the change if the cache of the processor setting it has not been flushed back to main memory. Your threading library will have appropriate functions to do the mutex locking and to set alreadyRun safely. For example, C11 defines atomic types and operations in stdatomic.h for your flag, and mutex functions in threads.h.
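In C11 terms, the run-once check could look like this sketch (assuming the DSPs are cache-coherent and the toolchain implements C11 atomics; dsp_entry is a hypothetical name for the code each DSP runs):
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool already_run = false;

void dsp_entry(int dsp_id)
{
    if (dsp_id == 0) {
        /* atomic test-and-set: only the first arrival reads 'false' */
        if (!atomic_exchange(&already_run, true)) {
            /* code run only once */
            /* irq start all other dsps, including dsp_id 0 */
        }
    } else {
        /* code run multiple times */
    }
}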

What is wrong from the performance standpoint? [closed]

I just wrote a simple program to display the time in hh:mm:ss format. The code is:
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t curtime;
    int h, m, s, ps;

    time(&curtime);                  /* initialize curtime before using it */
    struct tm *x = localtime(&curtime);
    ps = x->tm_sec;
    while (1)
    {
        time(&curtime);
        x = localtime(&curtime);
        h = x->tm_hour;
        m = x->tm_min;
        s = x->tm_sec;
        if (s != ps)
        {
            ps = s;
            printf("%02d:%02d:%02d\n", h, m, s);
        }
    }
    return 0;
}
The code compiles and runs as expected. However, the CPU usage seems to go really high: when I use 'top' to check, it shows the CPU% as 96-100% (and I can hear the computer fan getting loud). How can I improve the code from a performance angle while keeping it simple and short?
The reason is that your loop contains hardly anything to be waited on (the only candidate is the printf, but I assume you redirect that, or that printf completes quickly for some other reason). This means that the program is always eligible to run.
All other programs that run on your computer, on the other hand, often wait for something: user input, network messages, or whatever. That means they are not eligible to run most of the time.
What the operating system does, then, is this: because your program has work to do but no other process (currently) has, it schedules your program to run most of the time (96-100%). Consequently it consumes that much CPU.
This is normally not a bad thing. If your program has work to do, it should be given the opportunity to do it if it's the only program that has any. It's not really a problem; or, put another way, it is exactly about performance: the OS gives your program the opportunity to finish as fast as possible (although it has no idea that, in this case, it will never finish at all).
One thing often done with this kind of process (i.e. one that is CPU bound) is to lower its priority. This may seem counterintuitive at first, but in effect it tells the OS to assign to that process all the processing power that's not used for anything else. This means there will be processing power available whenever some other program has to handle a mouse click or keyboard input (so you wouldn't notice as much that a CPU-heavy computation is going on). Some OSes try to do this automatically (Linux, for example, gives precedence to processes that wait a lot).
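A minimal POSIX sketch of lowering a process's own priority with nice() (the value 10 is an arbitrary example):
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    errno = 0;
    /* raise our nice value by 10, i.e. lower our own priority;
       nice() may legitimately return -1, so errno must be checked */
    if (nice(10) == -1 && errno != 0)
        perror("nice");
    /* ... the CPU-bound loop would go here ... */
    return 0;
}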
Generally speaking, since you have an infinite loop, your program will use all the processor power available to execute itself as fast as possible. Most simple C programs terminate within seconds, so this isn't a problem. Yours, however, doesn't.
To curb the CPU usage heavily, you can add a sleep() call at the end of every loop iteration, giving the system time to do other things in between.
Here is an example:
#include <stdio.h>
#include <unistd.h>

int main(void) {
    while (1) {
        printf("Aha\n");
        sleep(1);   /* 1 s sleep (POSIX) */
        /* on Windows: Sleep(500); for a 500 ms sleep (needs windows.h) */
    }
    return 0;
}
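Applied to the clock program from the question, a sub-second nap keeps the display accurate to the second while cutting CPU usage to near zero. A sketch assuming a POSIX system with nanosleep():
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t curtime;
    int ps = -1;
    while (1) {
        time(&curtime);
        struct tm *x = localtime(&curtime);
        if (x->tm_sec != ps) {
            ps = x->tm_sec;
            printf("%02d:%02d:%02d\n", x->tm_hour, x->tm_min, x->tm_sec);
        }
        struct timespec nap = { 0, 50 * 1000 * 1000 }; /* 50 ms */
        nanosleep(&nap, NULL); /* ~20 polls per second instead of millions */
    }
}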

Is it a good embedded programming practice to disable an interrupt in an interrupt? [closed]

I want to implement something on an ARM Cortex-M3 processor (with NVIC). I have limited knowledge about embedded systems, but I know that an ISR routine should be as simple as possible.
Now I have the following problem: I have an interrupt routine which is invoked when a CAN message is received. In one case I have to perform a time-consuming computation on the CAN message, and that task must not be interrupted by another CAN message, although other interrupts may be served during the operation. My idea is the following:
CAN message received, ISR starts, does some simple jobs (e.g. setting flags)
Disable CAN interrupts in the CAN ISR routine. Also save the CAN message into a global variable so it can be reached from the main loop.
In the main loop, do the time-consuming task.
After this task, enable CAN interrupt handling again.
Is it a good (or at least not totally bad) idea, or should I do this another way?
It is generally not a good idea to disable all (CAN) interrupts. It seems that what you want to protect yourself against is the same message arriving a second time before you are done serving the first.
In that case you should take advantage of the ISR itself being non-interruptible. You can create a simple semaphore with a bool variable, and since the interrupt that sets it is non-interruptible, you don't even have to worry about atomic access to that boolean. C-like pseudo-code for the CAN handler:
typedef struct
{
    bool busy;
    can_msg_t msg;
} special_msg_t;

/* must be volatile, to prevent optimizer-related bugs: */
static volatile special_msg_t special = { false, {0} };

interrupt void can_isr (void)
{
    /* non-interruptible ISR, no other interrupt can fire */
    /* 'id' and 'new_msg' come from the CAN hardware/driver (pseudo-code) */
    if (id == SPECIAL && !special.busy)
    {
        special.busy = true;
        /* right here you can open up for more interrupts if you want */
        memcpy(&special.msg, &new_msg, sizeof special.msg);
    }
}

result_t do_things_with_special (void) /* called from main loop */
{
    if (!special.busy)
        return error;         /* no new message received, no can do */
    /* do things with the message */
    special.busy = false;     /* flag that processing is done */
    return ok;
}
an ISR routine should be as simple as possible.
Exactly.
There is a Linux kernel concept called the bottom half: keep the ISR itself as small as possible and defer the rest of the interrupt handling to a later time.
There are many ways to implement bottom halves, such as tasklets, work queues, etc.
I suggest reading the following links:
http://www.makelinux.net/ldd3/chp-10-sect-4
http://www.ibm.com/developerworks/library/l-tasklets/
Keeping an interrupt disabled for a long time can lead to missed interrupts (and obviously some data loss). After the interrupt flag is set, create deferred work and get out of the interrupt context.
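For reference, a rough kernel-side sketch of that top-half/bottom-half split using a work queue (my_isr and heavy_work are illustrative names; this is Linux kernel code, not bare-metal):
#include <linux/interrupt.h>
#include <linux/workqueue.h>

/* bottom half: runs later in process context, with interrupts enabled */
static void heavy_work(struct work_struct *w)
{
    /* the time-consuming part goes here */
}
static DECLARE_WORK(my_work, heavy_work);

/* top half: acknowledge the hardware, defer the rest, return quickly */
static irqreturn_t my_isr(int irq, void *dev)
{
    schedule_work(&my_work);
    return IRQ_HANDLED;
}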

Forcing communication between threads

I am a bit of a novice with pthreads, and I was hoping someone could help with a problem I've been having. Say you have a collection of threads, all being passed the same function, which looks something like this:
void *func(void *args) {
    ...
    while (...) {
        ...
        switch (...)
        {
            case one:
                do stuff;
                break;
            case two:
                do other stuff;
                break;
            case three:
                do more stuff;
                break;
        }
        ...
    }
}
In my situation, if "case one" is triggered by ANY of the threads, I need for all of the threads to exit the switch and return to the start of the while loop. That said, none of the threads are ever waiting for a certain condition. If it happens that only "case two" and "case three" are triggered as each thread runs through the while loop, the threads continue to run independently without any interference from each other.
Since the above is so vague, I should probably add some context. I am working on a game server that handles multiple clients via threads. The function above corresponds to the game code, and the cases are various moves a player can make. The game has a global and a local component -- the first case corresponds to the global component. If any of the players choose case (move) one, it affects the game board for all of the players. In between the start of the while loop and the switch is the code that visually updates the game board for a player. In a two player game, if one player chooses move one, the second player will not be able to see this move until he/she makes a move, and this impacts the gameplay. I need the global part of the board to dynamically update.
Anyway, I apologize if this question is trivial, but some preliminary searching on the internet didn't produce anything valuable. It may just be that I need to change the whole structure of the code, but I'm kind of clinging to this because it's so close to working.
You need an atomic variable that acts as a counter for the switch-case.
Atomic variables are guaranteed to perform math operations atomically, which is what multithreaded environments require.
Initialize the atomic variable to 1 in the main/dispatcher thread and pass it via args or make it global.
The variable is declared as:
volatile LONG counter = 1;
On Windows, use InterlockedExchangeAdd; the return value is the previous value.
Each thread does:
LONG val = InterlockedExchangeAdd(&counter, 1);
switch (val)
...
For GCC:
long val = __sync_fetch_and_add(&counter, 1);
switch (val)
...
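With a C11 toolchain, the same thing can be written portably with stdatomic.h (a fragment in the same style as the snippets above):
#include <stdatomic.h>

static atomic_long counter = 1;
/* ... in each thread: */
long val = atomic_fetch_add(&counter, 1); /* returns the previous value */
switch (val)
...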
May I also suggest that you inform yourself about standard task synchronization schemes first?
You might get an idea of how to change your program structure to be more flexible.
Task synchronization basics: (binary semaphores, mutexes)
http://www.chibios.org/dokuwiki/doku.php?id=chibios:articles:semaphores_mutexes
(if you are interested in inter-process communication (IPC) in more detail, like message passing, queues, etc., just ask!)
Furthermore, I recommend reading about the implementation of state machines, which could help make your player code more flexible!
A bit complex (I know easier resources only in German; maybe a native speaker can help):
http://johnsantic.com/comp/state.html
Is there a typical state machine implementation pattern?
If you want to stick with what you have, a global variable which can be changed and read by any other task will do; a sketch of that idea follows.
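For instance, a minimal pthreads sketch of that shared-flag idea (all names are illustrative): each thread remembers the board version it last drew and redraws when the global version has moved on.
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t board_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned board_version = 0;   /* bumped whenever "case one" fires */

void global_move(void)               /* called from "case one" */
{
    pthread_mutex_lock(&board_lock);
    board_version++;
    pthread_mutex_unlock(&board_lock);
}

bool board_changed(unsigned *seen)   /* each thread calls this at the top of its while loop */
{
    pthread_mutex_lock(&board_lock);
    bool changed = (*seen != board_version);
    *seen = board_version;
    pthread_mutex_unlock(&board_lock);
    return changed;
}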
Regards, Florian

How can I deny data flowing over an established TCP connection while the socket remains in a valid state? [closed]

I need a case where an established TCP connection gives some errors, like sendto() or recv() failing, but the socket connection should remain in place.
This way I want to check how my application behaves if sending or receiving data fails once or twice.
Initially I tested this by hardcoding the return values, but now I want to see it in a real-time scenario.
Thanks in advance.
I don't think you can make send/receive act exactly the way you describe, but there may be a workaround.
You can define a global flag and set up a signal handler that changes the flag's value. From a shell you can then send the signal to your app to flip the flag. By checking the flag's value, your program can enter the error test case in a real-time scenario.
The global flag and the signal handler:
int link_error = 0;

static void handler(int sig)
{
    link_error = 1; /* indicates that an error should be injected */
}
In main(), set up the signal handler (this needs signal.h), e.g. for SIGUSR1 (a macro with the value 10 on Linux x86):
struct sigaction sa = {0};
sigemptyset(&sa.sa_mask);
sa.sa_flags = 0;
sa.sa_handler = handler;
if (sigaction(SIGUSR1, &sa, NULL) == -1)
    return -1;
Then wrap the function to be tested, such as send(), so that it checks the flag value:
int send_test(...)
{
    /* a link error should be injected */
    if (link_error) {
        link_error--;
        return -1;
    }
    return send(...);
}
While your program is running, you can trigger the test at any time with kill -s 10 xxx (where xxx is your program's pid).
I'm not entirely sure I follow you but...
Try unplugging the network cable from the device you're talking to, not from the machine you're running your code on; that's one failure case. You could also write a test app for the other end that deliberately stalls, or shuts down only the write or read side; shrinking the socket's tx and rx buffers will let you fill them quickly and see stalls as a result. You could also make your MTU very small, which usually tests a bunch of assumptions in code. Finally, you could put something like WANem in the mix to stress your code.
There are a lot of failure cases in networking that need testing; there's no simple answer to this.
If you get any error on a socket connection other than a read timeout, the connection is broken; it does not 'remain in place'. Ergo, you cannot induce such a condition in your application. All you can do is hold up the sending end long enough to induce read timeouts.
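That read-timeout loophole is easy to trigger deliberately: with SO_RCVTIMEO, recv() fails with a timeout error while the connection itself stays valid. A sketch assuming POSIX sockets (set_read_timeout is an illustrative helper):
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>

/* make recv() fail with a timeout while the connection stays valid;
   'sock' is an already-connected TCP socket */
int set_read_timeout(int sock, int seconds)
{
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };
    if (setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv) == -1) {
        perror("setsockopt");
        return -1;
    }
    /* if the peer sends nothing for 'seconds', recv() returns -1 with
       errno == EAGAIN or EWOULDBLOCK; the socket remains usable afterwards */
    return 0;
}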
