Send UDP packets over a specific period of time - C

I have a UDP client that sends a number of packets to a server. I need to set a period of time between every packet; in other words, I want to control the sending time of each packet.
How can I do it?
Help!

You cannot ask the socket to send the data at a certain point in time. The only control you have over the sending time is to not call send()/sendto() until you want the sending to happen, and even then the TCP/IP stack is free to delay the actual transmission, so you can only hope for the best. Basically, you get the current time from the OS, put the packet into the socket to be sent, sleep until the next packet is due, put the next packet into the socket, and so on.
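A minimal sketch of that loop, assuming POSIX and an already-created UDP socket (the names sock, dest, and send_paced are illustrative, not from the question). Sleeping to an absolute deadline with clock_nanosleep() avoids the drift that accumulates with relative sleeps:

    /* Sketch only: pace datagrams at a fixed interval. The OS may still
       delay the actual transmission after sendto() returns. */
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>

    static void add_ms(struct timespec *t, long ms)
    {
        t->tv_nsec += ms * 1000000L;
        while (t->tv_nsec >= 1000000000L) {
            t->tv_nsec -= 1000000000L;
            t->tv_sec++;
        }
    }

    void send_paced(int sock, const struct sockaddr_in *dest,
                    const char *packets[], size_t count, long gap_ms)
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);   /* current time from the OS */

        for (size_t i = 0; i < count; i++) {
            sendto(sock, packets[i], strlen(packets[i]), 0,
                   (const struct sockaddr *)dest, sizeof *dest);
            add_ms(&next, gap_ms);               /* when the next packet is due */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }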

Related

Streaming audio over UDP in C

I'm writing a program with a client and a server, and I have almost got it working.
At the moment I can run the server on a port, and the client on the same port, giving it the IP address and the name of the .wav file that I want to read.
Now what I'd like to do is add a timeout between each sendto() so that the client receives the packets and reads them properly. Without that, the client receives many packets at once and loses many of them.
So could someone tell me how this works in UDP, and how to do that?
add a timeout between each sendto()
I believe that you are asking how to put a small delay between each sendto(). If you open a raw WAV file and send its bytes, there is a good chance that the data will reach the client much faster than it can be played. If you want to stream data at the same rate as it is played, send data in chunks, then let the client request the next chunk.
If that is not an option, you can send a chunk of data (e.g. 20 ms of audio), let the thread sleep for a little less than 20 ms, then send the next chunk. Sleeps are something of a hack; some sort of audio callback on the server would be best. The bottom line is that your client buffer has to be big enough to consume the amount of data your server is sending.
Without that, the client receives many packets at once and loses many of them
I believe that you are asking how to deal with varying packet inter-arrival rates, packet losses, and out-of-order arrivals. It sounds like you were simply sending packets at a faster rate than your client could handle. You might need a larger buffer on the client.
In any case, with UDP/IP you have the following scenarios:
- lost packets
- packets arriving out of order
- packets arriving in bursts (each packet will not arrive exactly X ms apart)
To deal with this, you minimally need what is known as a dejitter buffer: a component that collects packets as they arrive and inserts them, typically, into a ring buffer. The buffer has to be large enough to hold the packets your server is sending, since your client may consume packets from the buffer more slowly than the server sends them (or vice versa). To get packets in the right order and deal with losses, you have to detect them. You can detect losses and out-of-order arrivals simply by numbering each packet that is sent; as packets arrive, you can put them into the buffer at the correct location. If a packet is lost, you need to handle it with some sort of loss concealment (playing silence, estimating the lost packet, etc.), which is beyond the scope of this question.
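A minimal sketch of the numbering and ring-buffer insertion just described (slot count, payload size, and field names are assumptions for illustration):

    #include <stdint.h>

    #define RING_SLOTS  64      /* must cover the worst expected jitter */
    #define PAYLOAD_MAX 1200

    struct packet {
        uint16_t seq;           /* sender numbers every packet it sends */
        uint8_t  data[PAYLOAD_MAX];
    };

    struct slot {
        int filled;             /* 0 => not yet arrived (or lost) */
        struct packet pkt;
    };

    static struct slot ring[RING_SLOTS];

    /* An arriving packet goes to the slot its sequence number dictates,
       so out-of-order arrivals land in the right place automatically. */
    void ring_insert(const struct packet *p)
    {
        struct slot *s = &ring[p->seq % RING_SLOTS];
        s->pkt = *p;
        s->filled = 1;
    }

    /* The consumer walks sequence numbers in order; an empty slot means
       the packet is late or lost, and concealment (e.g. silence) is needed. */
    int ring_take(uint16_t seq, struct packet *out)
    {
        struct slot *s = &ring[seq % RING_SLOTS];
        if (!s->filled || s->pkt.seq != seq)
            return 0;
        *out = s->pkt;
        s->filled = 0;
        return 1;
    }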
The RTP protocol is designed for streaming and is an application-level protocol that works over UDP.
Since you're using UDP, which is connectionless, you don't really have a way to control the flow of packets unless you implement some kind of acknowledgement mechanism... at which point you might as well be using TCP because it already has that built in.
Although I don't have much experience in network programming, this looks a bit more complicated than it might seem at first glance. So UDP is connectionless. That speeds things up a lot, but there is a price to pay -- off the top of my head, packets can get lost or arrive out of order.
Those are situations you need to handle on the client side. Your client needs to be designed so that it accepts packets as they arrive at an arbitrary rate, skips over those that fail to arrive within a certain time (this matters for live streaming; for buffered playback it doesn't), and takes order into consideration, which means each packet needs to carry information about its position relative to previous packets.

Can LoadRunner Receive Data Via UDP Packets?

We want to receive packets from UDP sockets. The UDP packets have variable length, and we don't know how long they really are until we receive them (part of them, to be exact; the length is written in the sixth byte).
We tried the function lrs_set_receive_option with MarkerEnd, only to find it was no help on this issue. The reason why we want to receive packet by packet is that we need to respond to some packets by sending back user-defined UDP packets.
Does anybody know how to achieve that?
UPDATE
The LR version seems to be v10 or v11.
We need to respond to an incoming UDP packet by sending back a UDP packet immediately.
The UDP packet may look like this:
| orc code | packet length | Real DATA |
The issue is that we can't make LoadRunner return data for each packet: sometimes it returns many packets in one buffer, and sometimes it waits until timeout even though an incoming packet is already sitting in the socket buffer. In C, by contrast, each call to recvfrom() on a UDP socket returns exactly one datagram per call, which is what we really want.
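For reference, the one-datagram-per-call behavior the question contrasts with looks like this in plain C (a minimal sketch; socket setup and error handling are omitted):

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    void receive_loop(int sock)
    {
        char buf[2048];
        struct sockaddr_in peer;
        socklen_t peer_len;

        for (;;) {
            peer_len = sizeof peer;
            /* One call returns exactly one datagram - never a fraction,
               never two datagrams merged into one buffer. */
            ssize_t n = recvfrom(sock, buf, sizeof buf, 0,
                                 (struct sockaddr *)&peer, &peer_len);
            if (n < 0)
                break;                 /* error handling omitted in sketch */
            /* the length field is in buf[5] (the sixth byte), per the
               layout above; parse it and send the reply here */
        }
    }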
If you need raw socket support to intercept at the packet level, then you will likely have to jump to a DLL virtual user in Visual Studio with raw socket support.
As to your question on UDP support: yes, a Winsock user supports both core transport types, UDP and TCP, TCP being the more common variant as it is connection-oriented. However, packet examination happens at layer 3 of the OSI model, for the carrier protocol IP. The ACK should come before you receive the data flow for use in your script. When you jump to the TCP and UDP level, you are looking at assembled data flows in data.ws.
Now, you are likely receiving a warning about a receive-buffer size mismatch against the recorded size, which is what is taking you down this path. There is an easy way to address this: if you construct your send buffer using the lrs_set_send_buffer() function, then anything that comes back will be taken as correct, ignoring the previously recorded buffer size, and the script will not have to wait for a match or a timeout before continuing.
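A hedged sketch of that approach inside a Winsock script; the socket name, buffer descriptors, and the build_request() helper are placeholders, and the exact lrs_* signatures should be checked against the function reference for your LoadRunner version:

    /* Sketch only: construct the send buffer in code so the recorded
       buffer size is ignored on receive. All names are placeholders. */
    char request[64];
    int  len;

    len = build_request(request, sizeof request);  /* hypothetical helper */

    lrs_set_send_buffer("socket0", request, len);  /* override recorded buffer */
    lrs_send("socket0", "buf0", LrsLastArg);
    lrs_receive("socket0", "buf1", LrsLastArg);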

Maximum size of Receive Buffer/Queue in a UDP socket

I have developed a UDP server/client application in which the server has one socket on which it continuously receives data from 40 clients.
Now I want to know what happens if all 40 clients send data at the same time.
According to my understanding, the data must be queued in the receive buffer, and each subsequent call to recvfrom() receives data queued in the buffer, i.e. I shall have to call recvfrom() 40 times to receive the data of all 40 clients even if they all sent data simultaneously.
Also, I want to know whether all of the data from the 40 clients will be queued in the receive buffer or whether some of it will be discarded.
Also, what is the maximum buffer size up to which data can be queued in the receive buffer, and after what limit is data dropped?
I shall have to call recvfrom() 40 times to receive the data of all 40 clients even if they all sent data simultaneously.
In other words, you're asking whether separate UDP datagrams can be combined by the network stack. The answer is no: they'll arrive as separate datagrams, each requiring its own call to recvfrom().
Also, I want to know whether all of the data from the 40 clients will be queued in the receive buffer or whether some of it will be discarded.
UDP does not guarantee delivery. Packets can be dropped anywhere along the route: by the sending host, by any devices along the route and by the receiving host.
Also, what is the maximum buffer size up to which data can be queued in the receive buffer, and after what limit is data dropped?
This is OS-dependent and is usually configurable. Bear in mind that packets might get dropped before even reaching the receiving host.
Indeed, it all depends on the size of your OS socket buffers. On Linux it's fairly easy to configure, so all you probably need to do is google how to change it for your Windows system.
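As a concrete illustration, on a POSIX system the per-socket receive buffer can be inspected and raised with setsockopt(); the 4 MB figure below is an arbitrary example, and the kernel clamps the request to its configured maximum (net.core.rmem_max on Linux):

    #include <stdio.h>
    #include <sys/socket.h>

    void grow_rcvbuf(int sock)
    {
        int size = 4 * 1024 * 1024;    /* arbitrary example request: 4 MB */
        socklen_t len = sizeof size;

        setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, sizeof size);

        /* Read the value back: the kernel clamps (and on Linux doubles)
           the requested size, so this shows what you actually got. */
        getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, &len);
        printf("effective receive buffer: %d bytes\n", size);
    }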

Configurable rate and pattern when implementing TCP ping in C

I am trying to implement a TCP ping function, and I would like to make the rate and pattern of sending messages configurable. For example: send 5000 messages in 5 seconds, bursting 2000 at first and then sending 3 msg/ms for 1000 ms.
Any idea how to make that happen? Thanks in advance.
P.S. I am using C socket programming, with write and read to send and receive messages.
I may be missing something, but isn't all you have to do to write a loop that first sends the 2000-message burst, then repeatedly Sleep()s for 1 ms and sends 3 packets per iteration until the remaining 3000 packets have been sent?
One thing you should know is that measuring how long code takes to execute is hard. Also, since you are using TCP, which buffers outgoing data, send will block until there is space for the next message, depending on the amount of data, its size, and the network conditions.
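A minimal sketch of the burst-then-paced pattern from the question, assuming a connected TCP socket and a fixed-size message (names are illustrative); note that write() can block when the send buffer fills, and that a 1 ms sleep usually lasts somewhat longer, both of which distort the intended schedule:

    #include <stddef.h>
    #include <time.h>
    #include <unistd.h>

    void tcp_ping_pattern(int sock, const char *msg, size_t msg_len)
    {
        struct timespec one_ms = { 0, 1000000L };

        for (int i = 0; i < 2000; i++)         /* initial burst of 2000 */
            write(sock, msg, msg_len);

        for (int ms = 0; ms < 1000; ms++) {    /* then 3 msg/ms for 1000 ms */
            for (int i = 0; i < 3; i++)
                write(sock, msg, msg_len);
            nanosleep(&one_ms, NULL);          /* coarse: real sleeps run long */
        }
    }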

C socket: does send wait for recv to end?

I use blocking C sockets on Windows.
I use them to send updates of data from the server to the client and vice versa, at a high frequency (every 100 ms). Will the send() function wait for the recipient's recv() to receive the data before returning?
I assume not, if I understand the man page correctly:
"Successful completion of send() does not guarantee delivery of the message."
So what will happen if one side has performed 10 send() calls while the other has only completed 1 recv()?
Do I need to use some sort of acknowledgement system?
Let's assume you are using TCP. When you call send, the data you are sending is immediately placed in the outgoing queue, and send then completes successfully. If, however, send is unable to place the data in the outgoing queue, it will return with an error.
Since TCP is a guaranteed-delivery protocol, data in the outgoing queue can only be removed once acknowledgement has been received from the remote end, because the data may need to be resent if no ACK arrives in time.
If the remote end is sluggish, the outgoing queue will fill up with data, and send will then block until there is space to place the new data in it.
The connection can, however, fail in such a way that no further data can be sent. Although any send after a TCP connection has been closed will result in an error, the user has no way of knowing how much data actually made it to the other side (I know of no way of retrieving TCP bookkeeping from a socket into the user application). Therefore, if confirmation of receipt is required, you should probably implement it at the application level.
For UDP, I think it goes without saying that some way of reporting what has or has not been received is a must.
send() blocks until the operating system (kernel) has taken the data and put it into a buffer of outgoing data. It does not wait until the other end has received the data.
If you're sending by TCP, you get guaranteed delivery¹ and the other end will receive the data in the order sent. That might, however, be coalesced together, so what you sent as 10 separate updates could be received as a single large packet (or vice versa: a single update could be broken up across an arbitrary number of packets). This means, among other things, that any ACK of any data implicitly acknowledges receipt of all previous data.
If you're using UDP, none of that is true -- data can arrive out of order, or be dropped and never delivered at all. If you care about all the data being received, you just about need to build some sort of acknowledgement system of your own on top of UDP itself.
¹ Of course, there's a limit on the guarantee: if a network cable gets cut (or whatever), packets won't be delivered, but you'll at least get an error message telling you that the connection was lost.
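For the UDP case, a minimal stop-and-wait sketch of such an acknowledgement scheme (assuming a socket already connect()ed to the peer; the timeout, retry count, and framing are illustrative choices, not anything prescribed by the answers above):

    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <sys/types.h>

    /* Returns 0 once the peer has acknowledged this sequence number,
       -1 after giving up. Assumes len <= 1496 and a connect()ed socket. */
    int send_with_ack(int sock, uint32_t seq, const void *data, size_t len)
    {
        char pkt[1500];
        uint32_t ack;
        struct timeval tmo = { 0, 200000 };    /* 200 ms per attempt */

        setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tmo, sizeof tmo);

        memcpy(pkt, &seq, sizeof seq);         /* prepend a sequence number */
        memcpy(pkt + sizeof seq, data, len);

        for (int attempt = 0; attempt < 5; attempt++) {
            send(sock, pkt, sizeof seq + len, 0);
            if (recv(sock, &ack, sizeof ack, 0) == (ssize_t)sizeof ack
                && ack == seq)
                return 0;                      /* receiver confirmed it */
            /* timeout or stray ack: retransmit and try again */
        }
        return -1;                             /* treat the data as lost */
    }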
If you're using TCP, you get the acknowledgements for free, as that is part of what the protocol does under the hood. But it sounds like for this type of application you would probably want to use UDP. In either case, though, send() will not wait for the client to successfully recv().
If it's crucial that the client receive every message, then use TCP. If it's ok for the client to miss one or more messages, then use UDP.
TCP guarantees delivery at a lower level of the TCP stack. It retries delivery until the receiving side acknowledges that the data was received, but your application may never know about that fact.
Let's say you are sending chunks of data and you need to place those chunks somewhere according to some logic. If your application is not prepared to know where each individual block has to be placed, receiving it at the TCP level may be useless. The original post was about application-level logic.
