Keep counting for timeout with select() while receiving messages? [closed]

select() returns -1 on error, 0 on timeout, and the number of ready descriptors in the set on success.
Suppose we have the following pseudocode:
while (1) {
    int s = select(..., &timeout); /* timeout = 5 sec */
    if (s < 0) { perror(...); }
    else if (s == 0) { /* timeout */ }
    else {
        /* wait for some recv event or STDIN */
    }
}
I recognize that the process waits either until the timeout expires or until some recv event occurs.
I need it to keep counting down the specified time while receiving from an arbitrary number of peers, using only select().
How can I achieve this?

On Linux, the select system call decrements the timeout value by the amount of time elapsed. POSIX allows but does not require this behaviour, which makes it hard to rely on; portable code should assume that timeout's contents are unspecified when the select call returns.
The only really portable solution is to start by computing the absolute time you want the timeout to expire, and then check the time before each subsequent call to select in order to compute the correct timeout value. Beware of clocks which might run backwards (or skip forwards); CLOCK_MONOTONIC is usually your best bet.
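A minimal sketch of that approach, assuming a single connected socket sockfd and a fixed 5-second window (the function name and the window length are illustrative):

#include <stdio.h>
#include <sys/select.h>
#include <time.h>

void recv_until_deadline(int sockfd)
{
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);
    deadline.tv_sec += 5;  /* absolute expiry time, 5 seconds from now */

    for (;;) {
        /* recompute the remaining time from the fixed deadline */
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        long sec  = deadline.tv_sec  - now.tv_sec;
        long nsec = deadline.tv_nsec - now.tv_nsec;
        if (nsec < 0) { sec -= 1; nsec += 1000000000L; }
        if (sec < 0) break;  /* deadline already passed */

        struct timeval tv = { .tv_sec = sec, .tv_usec = nsec / 1000 };
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sockfd, &rfds);

        int s = select(sockfd + 1, &rfds, NULL, NULL, &tv);
        if (s < 0) { perror("select"); break; }
        if (s == 0) break;  /* timed out: the full window has elapsed */

        /* handle recv() on sockfd here; the next iteration shortens
           the timeout so the total window stays at 5 seconds */
    }
}

Because the timeout is recomputed from the fixed deadline on every iteration, it does not matter whether or not the kernel modifies the timeval.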

Related

Preventing memory leak in dependent functions [closed]

First of all, I don't even know how to address this issue, nor how to google a solution to it, so if you have a title that describes my issue better, that would be great.
Intro: I'm currently developing C code for a school project. The project is not really ambitious, but I would like to do it the right way. It is about a hardware password manager.
One part of the project is the feature of the device locking itself if no input is received in X seconds. I'm planning to achieve this through timers and interrupts (since I'm working on a microchip), but this has little to do with the real question here.
My initial design was: in the main() function, call lock() so the device boots locked; once the correct PIN is provided, MainMenu() is called (inside the lock() scope). However, MainMenu() has a while(1) { some code } that never returns, because it doesn't need to. The next time the user is AFK, an interrupt will trigger and lock() will be called, effectively saving the program counter of MainMenu(). Once the user inputs the correct PIN, lock() will call MainMenu() again. In other words, function A calls function B, which again calls A, and so on.
See the problem? I would be eternally saving local state that will never be used (the program counter at the very least). I have solved the problem with some tweaks in the design; however, the question persists.
Is there a way to call a function without saving the current environment I am in?
Is having a cyclic model of functions a real problem, or can it be solved by designing a good model and then implementing it? If so, what kind of solutions do developers use?
Edit1: One comment suggested break. This is not what I want, because the problem is not inside a loop but in two functions calling each other. Another comment suggested setjmp() and longjmp(). I think these functions are useful if you want to save the environment you are currently running in; in my case it's precisely the opposite: I do not want to save the environment I'm in.
A state machine sounds nice and sophisticated. The following design is a bit simpler. This is how I would do it:
#include <stdbool.h>

/* locks screen
 *
 * Function does not return until valid keys for unlock have been entered.
 */
void lockScreen(void);

/* prints menu (may consider choice to highlight resp. item)
 */
void printMenu(int choice);

/* reads from keyboard
 *
 * returns: <  0 ... timeout
 *          >= 0 ... selected menu item
 */
int keyInput(void);

/* main function */
int main(void)
{
    int lastChoice = -1;
    for (bool lock = true;;) { /* never-ending run-time loop */
        if (lock) { lockScreen(); lock = false; }
        printMenu(lastChoice);
        int choice = keyInput();
        switch (choice) {
            /* regular choices */
            /* exit required?
            case 0: return 0;
            */
            case 1: /* do 1st command */ break;
            case 2: /* do 2nd command */ break;
            default: /* should not happen */ break;
            /* timeout */
            case -1: lock = true; break;
        }
        if (choice >= 0) lastChoice = choice;
    }
}
Notes:
Please consider this a sketch (not an MCVE). As the questioner didn't provide code or specific details of the intended platform, I tried to answer "generically".
lockScreen() could use keyInput() as well. Keys could then be collected until they form a complete password (which might be correct or wrong), and the timeout could be used to reset an incomplete password, as in the sketch below.
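For illustration, a hypothetical lockScreen() along those lines might look like this; PIN_LEN and checkPin() are placeholders for the real PIN length and comparison logic:

#define PIN_LEN 4

extern int keyInput(void);
extern int checkPin(const int *digits, int len);  /* assumed helper */

void lockScreen(void)
{
    int digits[PIN_LEN];
    int n = 0;
    for (;;) {
        int key = keyInput();
        if (key < 0) { n = 0; continue; }  /* timeout: discard partial PIN */
        digits[n++] = key;
        if (n == PIN_LEN) {
            if (checkPin(digits, PIN_LEN)) return;  /* unlocked */
            n = 0;  /* wrong PIN: start over */
        }
    }
}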

Make code run once with multiple processors [closed]

Pseudocode for what I want to accomplish:
// gets the currently running digital signal processor
int dsp_id = get_dsp_id();
if (dsp_id == 0) {
    // code run only once
    // irq start all other dsps including dsp_id 0
} else {
    // code run multiple times
}
The problem is that when I send the start IRQ to all DSPs, including id 0, I get into the if statement over and over. I tried to flag it with a global static bool, but that did not work.
You have a race condition. I imagine that the other threads you kick off hit the if statement before your global variable is set. You need to protect the flag with a mutex. In pseudocode this would be something like:
if (dsp_id == 0) {
    get mutex lock
    if (!alreadyRun)
    {
        // code run only once
        // irq start all other dsps including dsp_id 0
        set alreadyRun to true
    }
    release mutex lock
} else {
    // code run multiple times
}
where alreadyRun is your boolean variable. You cannot, by the way, just write alreadyRun = true, because there is no guarantee that other processors will see the change if the cache of the processor setting it has not been flushed back to main memory. Your threading library will have appropriate functions to do the mutex locking and safely set alreadyRun. For example, C11 defines atomic types and operations in stdatomic.h for your flag, and mutex functions in threads.h.
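As a minimal sketch, assuming C11 is available, the mutex-plus-flag pair can even collapse into a single atomic test-and-set from stdatomic.h (startup() and the comment placeholders are illustrative):

#include <stdatomic.h>

static atomic_flag already_run = ATOMIC_FLAG_INIT;

void startup(int dsp_id)
{
    if (dsp_id == 0) {
        /* atomic_flag_test_and_set() returns the previous value, so
           exactly one caller sees false and runs the one-time code */
        if (!atomic_flag_test_and_set(&already_run)) {
            /* code run only once */
            /* irq start all other dsps, including dsp_id 0 */
        }
    } else {
        /* code run multiple times */
    }
}

The default sequentially consistent ordering of the test-and-set provides the visibility guarantee the mutex gave you; on a bare-metal DSP without C11 support, the platform's own test-and-set or spinlock primitive would take its place.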

Ignoring incoming bytes on a Linux TCP socket [closed]

I want to connect to a server, and synchronously write(2) to it.
At some point, buffers are filling up and I need to read(2) to let me continue writing.
read(2) is of course copying lots of bytes unnecessarily, and it blocks if I don't know how many bytes to expect.
How can I discard incoming bytes on a TCP socket?
I've tried ioctl(sockfd, I_SRDOPT, RMSGD), but it returns errno EFAULT (Bad address).
You could use the socket in non-blocking mode to periodically consume incoming data without blocking. To quote a tutorial:
If you call recv() in non-blocking mode, it will return any data that the system has in its read buffer for that socket. But it won't wait for that data. If the read buffer is empty, the system will return from recv() immediately, saying "Operation Would Block!".
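As a sketch of that approach, assuming a Linux socket sockfd: drain whatever is queued into a scratch buffer and throw it away. MSG_DONTWAIT makes each recv() call non-blocking even if the socket itself is in blocking mode, so something like this can be called periodically between write(2) calls:

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

static void discard_pending(int sockfd)
{
    char scratch[4096];
    for (;;) {
        ssize_t n = recv(sockfd, scratch, sizeof scratch, MSG_DONTWAIT);
        if (n > 0)
            continue;  /* data discarded; keep draining */
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            break;     /* receive buffer is empty */
        break;         /* peer closed (0) or real error (-1): caller decides */
    }
}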

How can I deny data flowing over an established TCP connection while the socket remains in a valid state? [closed]

I need a case where an established TCP connection gives some errors, like sendto() or receive() failing, but the socket connection should remain in place.
This way I want to check how my application behaves if sending or receiving data fails once or twice.
Initially, I tested this by hardcoding the error values, but now I want to see it in a real-time scenario.
Thanks in advance.
I don't think you can make send/receive act exactly as you describe, but there may be a workaround.
You can define a global flag and set up a signal handler to change its value. Then, from the shell, you can send the signal to your app to flip the flag. By checking the flag, your program can enter the error test case in a real-time scenario:
The global flag and the signal handler:
volatile sig_atomic_t link_error = 0;

static void handler(int sig)
{
    link_error = 1; /* indicates an error should be injected */
}
In main(), set up a signal handler, for example for SIGUSR1 (a macro with the value 10 on Linux x86):
struct sigaction sa = {0};
sigemptyset(&sa.sa_mask);
sa.sa_flags = 0;
sa.sa_handler = handler;
if (sigaction(SIGUSR1, &sa, NULL) == -1)
    return -1;
Then wrap the function under test, such as send(), so that it checks the flag value:
ssize_t send_test(int sockfd, const void *buf, size_t len, int flags)
{
    /* simulate a link error once after the signal has been received */
    if (link_error) {
        link_error--;
        return -1;
    }
    return send(sockfd, buf, len, flags);
}
While your program is running, you can trigger the test at any time with kill -s 10 xxx (where xxx is your program's pid).
I'm not entirely sure I follow you, but...
Try unplugging the network cable from the device you're talking to, not from the machine you're running your code on; that's one failure case. You could also write a test app for the other end that deliberately stalls, or shuts down only the write or read side of the connection. Shrinking the socket's send and receive buffers will let you fill them quickly and see stalls as a result. You could probably also do other things, like making the MTU very small, which usually tests a bunch of assumptions in code. You could also put something like WANem in the mix to stress your code.
There are a lot of failure cases in networking that need testing; there's no simple answer to this.
If you get any error on a socket connection other than a read timeout, the connection is broken. It does not 'remain in place'. Ergo you cannot induce such a condition in your application. All you can do is hold up the sending end for long enough to induce read timeouts.

C program that waits for user input for a specific number of seconds [closed]

How can you create a C program that waits for user input for a specific number of seconds? After the time limit, the program closes, with or without input. Sample code please, using fork() and sleep(). Sorry, I'm new at this.
Whoa. Sorry guys. This is not my post. Seems like someone used my account. And I can't delete it.
If you just want the program to sit and wait: make a loop that checks for input, save the start time in a variable, update the end time on each pass, and check in the loop whether the time (here 5 seconds) has expired.
clock_t begin_t = clock();
clock_t end_t;
do {
    /* read user input */
    end_t = clock();
} while (end_t - begin_t < 5 * CLOCKS_PER_SEC);
In my opinion, using fork() and sleep() is not the best way to achieve such a result. It's much better to use the select() call, which allows you to wait for data with a timeout.
See the Unix manual page on select() for some example code.
The correct way to do this is to select() STDIN for reading and set the timeout to however long you want. select() will either report STDIN as available for reading or return nothing, which indicates a timeout.
http://linux.die.net/man/2/select
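A minimal sketch of that approach (the 5-second limit and buffer size are illustrative):

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(STDIN_FILENO, &rfds);

    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };  /* 5-second limit */

    int s = select(STDIN_FILENO + 1, &rfds, NULL, NULL, &tv);
    if (s < 0) {
        perror("select");
        return 1;
    }
    if (s == 0) {
        printf("Timed out: no input within 5 seconds.\n");
        return 0;  /* program closes without input */
    }

    char buf[256];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("Read: %s", buf);
    }
    return 0;
}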
