Ignoring incoming bytes on a Linux TCP socket [closed]

I want to connect to a server, and synchronously write(2) to it.
At some point, buffers are filling up and I need to read(2) to let me continue writing.
read(2) is of course copying lots of bytes unnecessarily, and it's blocking if I don't know how many bytes to expect.
How can I discard incoming bytes on a TCP socket?
I've tried ioctl(sockfd, I_SRDOPT, RMSGD), but it returns errno EFAULT (Bad address).

You could use the socket in the non-blocking mode to periodically consume incoming data without blocking. To quote a tutorial:
If you call recv() in non-blocking mode, it will return any data that the system has in its read buffer for that socket. But, it won't wait for that data. If the read buffer is empty, the system will return from recv() immediately saying "Operation Would Block!".
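A minimal sketch of that approach, assuming sockfd is an already connected TCP socket (the function name drain_socket is just for illustration):

int link_error = 0;
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>

/* Discard whatever is currently queued on the socket without blocking. */
static int drain_socket(int sockfd)
{
    char scratch[4096];

    /* Put the socket in non-blocking mode once, up front. */
    int flags = fcntl(sockfd, F_GETFL, 0);
    if (flags == -1 || fcntl(sockfd, F_SETFL, flags | O_NONBLOCK) == -1)
        return -1;

    for (;;) {
        ssize_t n = recv(sockfd, scratch, sizeof(scratch), 0);
        if (n > 0)
            continue;               /* bytes discarded, keep draining */
        if (n == 0)
            return 0;               /* peer closed the connection */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return 0;               /* buffer empty: nothing left to read */
        if (errno == EINTR)
            continue;               /* interrupted by a signal, retry */
        return -1;                  /* real error */
    }
}

Alternatively, passing MSG_DONTWAIT as the recv() flag gives the same non-blocking behaviour per call without changing the socket's mode.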

Related

TCP: Socket send/recv order [closed]

I am wondering if you need to set up the server and client sockets so that they always go
send recv send recv ...
Because I am getting an issue where I send a message, and the first recv() ends up receiving it twice.
I send the message upload foo.c
Server displays: Message received: upload foo.c
But then the server prints the actual file contents, which should have been passed to a later recv() call (since only the first recv() in the while loop has its contents printed)
Message received: This is some text from
the file foo.c
text hello ending
So I get the feeling it's "overflowing" into the next recv iteration.
I'm guessing you use TCP? Then you have to remember that TCP is a streaming protocol, without message boundaries and without any start or end (except connection established and closed).
A single recv call may receive less or more than what was sent in a single send call.
You need to come up with a higher-level protocol which incorporates message boundaries explicitly, for example by sending the length of the data to be received. Then you have to use loops to receive the correct amount of bytes.
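As a sketch of that idea, one common scheme is to prefix each message with a fixed-size length field and loop until exactly that many bytes have arrived. The helper names recv_all and recv_message are made up for this example:

#include <arpa/inet.h>   /* htonl / ntohl */
#include <stdint.h>
#include <sys/socket.h>

/* Read exactly 'len' bytes, looping over short reads. */
static int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;   /* error or peer closed */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Receive one length-prefixed message: a 4-byte big-endian length,
   followed by that many bytes of payload. */
static ssize_t recv_message(int fd, void *buf, size_t bufsize)
{
    uint32_t netlen;
    if (recv_all(fd, &netlen, sizeof(netlen)) == -1)
        return -1;
    uint32_t len = ntohl(netlen);
    if (len > bufsize)
        return -1;       /* message too large for the caller's buffer */
    if (recv_all(fd, buf, len) == -1)
        return -1;
    return (ssize_t)len;
}

The sender does the mirror image: send the 4-byte htonl(length) first, then the payload.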

UDP: Read a NEW frame from client every x seconds [closed]

I have a udp client (I have no control over the source code) that is constantly sending data frames, one frame per 500ms, and I have a udp server that checks the last frame every 5 seconds.
The problem is that this udp server doesn't read the last frame but only the next frame in the udp buffer from the operating system.
n = recvfrom(server_sockfd, buf, BUFSIZE, 0,
             (struct sockaddr *) &new_dax[eqpID].clientaddr,
             &new_dax[eqpID].clientlen);
With this code, if my udp client is sending:
FRAME 1 ->500ms
FRAME 2->500ms
FRAME 3->500ms
FRAME X->500ms
My udp server first receives FRAME 1, and then after 5 seconds, when I try to read the latest frame from the client, the server receives FRAME 2 instead of FRAME X.
How do I get the last frame received? I tried closing the server socket and opening it again whenever I want to receive the last frame, but this consumes too many resources. Is it possible without closing the server socket?
Thanks!
You can use recvmmsg() to receive a whole bunch of messages at once. So in your case, you expect to receive about 10 messages per read, so set up buffers for 12-15 messages and just call recvmmsg() once, then ignore all but the last message.
You'll want to use the MSG_WAITFORONE flag, so that recvmmsg() doesn't block until all 12-15 messages are received--you only expect to receive 9-11 or so.
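A sketch of that approach, assuming every frame fits in FRAMESZ bytes (FRAMESZ and the function name read_latest_frame are placeholders):

#define _GNU_SOURCE      /* recvmmsg() is a GNU extension */
#include <string.h>
#include <sys/socket.h>

#define NMSG    15
#define FRAMESZ 1024     /* assumed maximum frame size */

/* Drain up to NMSG queued datagrams in one call and return the newest one. */
ssize_t read_latest_frame(int sockfd, char *out, size_t outlen)
{
    struct mmsghdr msgs[NMSG];
    struct iovec   iovs[NMSG];
    char           bufs[NMSG][FRAMESZ];

    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < NMSG; i++) {
        iovs[i].iov_base           = bufs[i];
        iovs[i].iov_len            = FRAMESZ;
        msgs[i].msg_hdr.msg_iov    = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    /* MSG_WAITFORONE: block for the first datagram only, then grab
       whatever else is already queued without waiting further. */
    int n = recvmmsg(sockfd, msgs, NMSG, MSG_WAITFORONE, NULL);
    if (n <= 0)
        return -1;

    size_t len = msgs[n - 1].msg_len;   /* the newest frame is the last one */
    if (len > outlen)
        len = outlen;
    memcpy(out, bufs[n - 1], len);
    return (ssize_t)len;
}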

Linux Socket System call "accept" never returning? [closed]

I'm running into a strange issue trying to test a simple socket program. When I call the "accept" function here, my program seems to hang... it prints out "SENPAI PLS" but never "SADDASSDA".
I was getting past this part of my code last night. For context, this is running on a large server with quite a few other students probably trying to do the same project as me, and I'm sure some of them are leaving their server programs running.
Could the service being busy or full cause accept to just never finish?
do {
    printf("SENPAI PLS\n");
    clientFD = accept(serverFD, (struct sockaddr *) &clientAddress, &clientAddressSize);
    printf("SADDASSDA\n");
    if (clientFD == -1) {
        sleep(1);
    }
} while (clientFD == -1);
accept will not return until a connection is accepted (unless the listening socket is in nonblocking mode).
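For contrast, here is a sketch of the non-blocking variant, where accept() returns immediately with EAGAIN/EWOULDBLOCK when no connection is pending. It assumes serverFD is already bound and listening; the function name accept_polling is made up:

#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Poll for a connection instead of blocking inside accept(). */
static int accept_polling(int serverFD)
{
    struct sockaddr_in clientAddress;
    socklen_t clientAddressSize = sizeof(clientAddress);

    /* Switch the listening socket to non-blocking mode. */
    int flags = fcntl(serverFD, F_GETFL, 0);
    fcntl(serverFD, F_SETFL, flags | O_NONBLOCK);

    int clientFD;
    do {
        clientFD = accept(serverFD, (struct sockaddr *) &clientAddress,
                          &clientAddressSize);
        if (clientFD == -1) {
            if (errno != EAGAIN && errno != EWOULDBLOCK) {
                perror("accept");   /* a real error, not "no client yet" */
                return -1;
            }
            sleep(1);               /* nothing pending; try again shortly */
        }
    } while (clientFD == -1);
    return clientFD;
}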

Keep counting for timeout with select() while receiving messages? [closed]

select() returns -1 on error, 0 on timeout, and the number of ready descriptors on success.
Suppose we have the following pseudocode:
while (1) {
    int s = select(..., &timeout); /* timeout = 5 sec */
    if (s < 0) { perror(...); }
    else if (s == 0) { /* timeout */ }
    else {
        /* wait for some recv event or STDIN */
    }
}
I understand that the process waits either until the timeout expires or until some recv event occurs.
I need it to keep counting down the specified time while receiving from an arbitrary number of peers, using only select().
How can I achieve this?
On Linux, the select system call decrements the timeout value by the amount of time elapsed. POSIX allows but does not require this behaviour, which makes it hard to rely on; portable code should assume that timeout's contents are unspecified when the select call returns.
The only really portable solution is to start by computing the absolute time you want the timeout to expire, and then check the time before each subsequent call to select in order to compute the correct timeout value. Beware of clocks which might run backwards (or skip forwards); CLOCK_MONOTONIC is usually your best bet.
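A sketch of that scheme, assuming sockfd is the single descriptor being watched and the deadline is 5 seconds:

#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <time.h>

/* Keep a fixed 5-second deadline across repeated select() calls. */
void wait_with_fixed_deadline(int sockfd)
{
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);
    deadline.tv_sec += 5;                      /* absolute expiry time */

    for (;;) {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);

        /* Recompute the remaining time; stop if the deadline has passed. */
        long sec  = deadline.tv_sec  - now.tv_sec;
        long nsec = deadline.tv_nsec - now.tv_nsec;
        if (nsec < 0) { sec--; nsec += 1000000000L; }
        if (sec < 0) {
            printf("timeout\n");
            break;
        }

        struct timeval tv = { .tv_sec = sec, .tv_usec = nsec / 1000 };
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sockfd, &rfds);

        int s = select(sockfd + 1, &rfds, NULL, NULL, &tv);
        if (s < 0) { perror("select"); break; }
        if (s == 0) { printf("timeout\n"); break; }

        /* Handle the ready descriptor, then loop with the same deadline. */
        char buf[512];
        recv(sockfd, buf, sizeof(buf), 0);
    }
}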

How can I deny data flowing over an established TCP connection while the socket remains in a valid state? [closed]

I need a case where an established TCP connection gives errors, like sendto() or recv() failing, but the socket connection should remain in place.
This way I want to check how my application behaves if sending or receiving data fails once or twice.
Initially, I tested this by hardcoding the return values, but now I want to see it in a real-time scenario.
Thanks in advance.
I don't think you can make send/receive behave exactly as you describe, but there may be a workaround.
You can define a global flag and set up a signal handler that changes the flag's value. Then, from a shell, you can send the signal to your app to flip the flag. By checking the flag's value, your program can enter the error test case in a real-time scenario:
The global flag and the signal handler:
int link_error = 0;

static void handler(int sig)
{
    link_error = 1; /* indicate that an error should be injected */
}
In main(), set up a signal such as SIGUSR1 (a macro with the value 10 on Linux x86):
struct sigaction sa = {0};
sigemptyset(&sa.sa_mask);
sa.sa_flags = 0;
sa.sa_handler = handler;
if (sigaction(SIGUSR1, &sa, NULL) == -1)
    return -1;
Then wrap the function under test, such as send(), so that it checks the flag value:
ssize_t send_test(int sockfd, const void *buf, size_t len, int flags)
{
    /* Simulate a link error once, then clear the flag. */
    if (link_error) {
        link_error--;
        return -1;
    }
    return send(sockfd, buf, len, flags);
}
While your program is running, you can trigger the test at any time with kill -s 10 xxx (where xxx is your program's pid).
I'm not entirely sure I follow you but...
Try unplugging the network cable from the device you're talking to, not from the machine running your code; that's one failure case. You could also write a test app for the other end that deliberately stalls, or shuts down only the write or read side; shrinking the socket's send and receive buffers will let you fill them quickly and see stalls as a result. You could also make your MTU very small, which usually tests a bunch of assumptions in code, or put something like WANem in the mix to stress your code.
There are a lot of failure cases in networking that need testing, there's no simple answer to this.
If you get any error on a socket connection other than a read timeout, the connection is broken. It does not 'remain in place'. Ergo you cannot induce such a condition in your application. All you can do is hold up the sending end for long enough to induce read timeouts.
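For that read-timeout case, a receive timeout can be set on the socket so a stalled peer makes recv() fail with EAGAIN/EWOULDBLOCK while the connection stays up. A minimal sketch, assuming sockfd is a connected TCP socket and 2 seconds is an arbitrary choice:

#include <sys/socket.h>
#include <sys/time.h>

/* Make recv() on sockfd give up after 2 seconds instead of blocking
   forever; the connection itself is unaffected. */
static int set_recv_timeout(int sockfd)
{
    struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };
    return setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}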
