I have a unicast (one-to-one) UDP communication in which the number of packets received is not the same each run; if I send 1000 packets at an interval of 500 ms, about 9 packets are missed. I am working on the Windows platform with Visual C++ 6.0, using the sendto() call to send the packets. On the host side the packets are lost to checksum or header errors.
Please let me know if you need more details. My goal is to lose no packets on the target side.
Any help regarding this issue will be highly appreciated.
{
    // Initialize local variables
    MAINAPP(pAppPtr);
    int iResult = 0;
    int sRetVal = 0;
    static char cTransmitBuffer[1024];
    unsigned long ulTxPacketLength = 0;
    int in_usTimeOut = 0;
    unsigned short usTimeout = 0;
    S_QJB_POWER_CNTRL S_Out_QJB_Power_Cntrl = {0};

    // Fill in the packet header and command
    pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_ucHeader[0] = QJB_TCP_HEADER_BYTE1;
    pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_ucHeader[1] = QJB_TCP_HEADER_BYTE2;
    pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_usCmdID = QJB_ETH_POWER_ON;
    pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_usCmdResults = 0;
    pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_usDataSize = sizeof(S_QJB_POWER_CNTRL);

    // Fill the controls & delay
    sRetVal = PowerCntrlStructFill(&pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.U_Tcp_Msg.S_QJB_PowerCntrl, &usTimeout);
    if (sRetVal)
    {
        return sRetVal;
    }

    pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_usReserved = 0;
    pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.m_usChecksum = 0;

    // Perform endian swap
    pAppPtr->objEndianConv.EndianSwap(&pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg.U_Tcp_Msg.S_QJB_PowerCntrl, &S_Out_QJB_Power_Cntrl);

    // Frame the transmission packet
    QJB_Frame_TXBuffer(cTransmitBuffer, &(pAppPtr->S_Tcp_Handle.Tcp_Tx_Msg), &ulTxPacketLength, (void *)&S_Out_QJB_Power_Cntrl);

    // Send the data to the target
    iResult = sendto(pAppPtr->sktConnectSocket, cTransmitBuffer, ulTxPacketLength, 0,
                     (struct sockaddr *)&pAppPtr->g_dest_sin, sizeof(pAppPtr->g_dest_sin));
    if (iResult == SOCKET_ERROR)
    {
        return QJB_TARGET_DISCONNECTED;
    }

    memset(&pAppPtr->S_Tcp_Handle.Tcp_Rx_Msg, 0, sizeof(S_QJB_ETHERNET_PKT)); // 1336

    // Send the command and obtain the response
    sRetVal = QJB_ETHResRev(pAppPtr->sktConnectSocket, &pAppPtr->S_Tcp_Handle.Tcp_Rx_Msg, 3);
    return sRetVal;
}
Sathishkumar.
Unfortunately, since UDP makes no delivery guarantees, the network stack can drop your sent packets at any time, for any reason. Note that there is also no guarantee about the order in which packets arrive.
If ordering and delivery are critical to your application, and it sounds like they are, consider switching to TCP.
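A minimal sketch of that switch on Winsock (reusing names from your code where possible; dest stands in for g_dest_sin, and error handling is trimmed):

SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);   // stream socket instead of SOCK_DGRAM
if (s != INVALID_SOCKET &&
    connect(s, (struct sockaddr *)&dest, sizeof(dest)) != SOCKET_ERROR)
{
    // On a connected stream socket, send() replaces sendto();
    // TCP now handles retransmission and ordering for you.
    int sent = send(s, cTransmitBuffer, (int)ulTxPacketLength, 0);
    if (sent == SOCKET_ERROR)
    {
        // Inspect WSAGetLastError() and reconnect if needed.
    }
}

The usual closesocket() and WSAStartup()/WSACleanup() bookkeeping is assumed to happen elsewhere.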
If H1 opens a TCP connection to H2, and H1 then sends a small packet (< MSS) to H2 (H2 is not to respond with data), how long will it take H2 to send a delayed ACK?
How does TCP ensure that the RTO timer at H1 won't expire before the delayed ACK arrives?
Is it correct that Linux has a minimal RTO of 200 ms by default, so that if the network is fast (RTO stays at the minimal 200 ms) and the data packet is lost on the way from H1 to H2, H1 will retry in 200 ms, whereas on a slow network H1 may wait considerably longer than 200 ms?
About delayed ACK timing, RFC 1122 says:
A TCP SHOULD implement a delayed ACK, but an ACK should not
be excessively delayed; in particular, the delay MUST be
less than 0.5 seconds, and in a stream of full-sized
segments there SHOULD be an ACK for at least every second
segment.
So it depends on the implementation, and of course it can reduce application performance.
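As an aside, if delayed ACKs hurt a latency-sensitive request/response workload, Linux lets the receiving side request immediate ACKs; a minimal sketch (fd is assumed to be a connected TCP socket, and note the flag is not permanent):

#include <netinet/tcp.h>
#include <sys/socket.h>

int one = 1;
/* Ask the kernel to ACK immediately in the connection's current state.
 * The kernel may re-enable delayed ACKs later, so this is typically
 * set again after each recv(). */
setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));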
The Linux kernel does not send delayed ACKs on a fixed timer or interval; as you can see in the following code, the behavior varies with conditions.
In net/ipv4/tcp_input.c, the comment says:
There is something which you must keep in mind when you analyze the
behavior of the tp->ato delayed ack timeout interval. When a
connection starts up, we want to ack as quickly as possible. The
problem is that "good" TCP's do slow start at the beginning of data
transmission. The means that until we send the first few ACK's the
sender will sit on his end and only queue most of his data, because
he can only send snd_cwnd unacked packets at any given time. For
each ACK we send, he increments snd_cwnd and transmits more of his
queue. -DaveM
static void tcp_event_data_recv(struct sock *sk, struct sk_buff *skb)
{
    struct tcp_sock *tp = tcp_sk(sk);
    struct inet_connection_sock *icsk = inet_csk(sk);
    u32 now;

    inet_csk_schedule_ack(sk);

    tcp_measure_rcv_mss(sk, skb);

    tcp_rcv_rtt_measure(tp);

    now = tcp_time_stamp;

    if (!icsk->icsk_ack.ato) {
        /* The _first_ data packet received, initialize
         * delayed ACK engine.
         */
        tcp_incr_quickack(sk);
        icsk->icsk_ack.ato = TCP_ATO_MIN;
    } else {
        int m = now - icsk->icsk_ack.lrcvtime;

        if (m <= TCP_ATO_MIN / 2) {
            /* The fastest case is the first. */
            icsk->icsk_ack.ato = (icsk->icsk_ack.ato >> 1) + TCP_ATO_MIN / 2;
        } else if (m < icsk->icsk_ack.ato) {
            icsk->icsk_ack.ato = (icsk->icsk_ack.ato >> 1) + m;
            if (icsk->icsk_ack.ato > icsk->icsk_rto)
                icsk->icsk_ack.ato = icsk->icsk_rto;
        } else if (m > icsk->icsk_rto) {
            /* Too long gap. Apparently sender failed to
             * restart window, so that we send ACKs quickly.
             */
            tcp_incr_quickack(sk);
            sk_mem_reclaim(sk);
        }
    }
    icsk->icsk_ack.lrcvtime = now;

    tcp_ecn_check_ce(tp, skb);

    if (skb->len >= 128)
        tcp_grow_window(sk, skb);
}
I am using WinUSB on the Windows host side to communicate with my WinUSB USB device.
My USB device is a full-speed device.
I am able to get the device handle and do OUT and IN data transfers.
I am facing an issue with Bulk IN transfers on the full-speed WinUSB device. When I loop data back from the PC to the device and back to the PC, sizes from 1 to 64 bytes work fine. When I transfer 65 bytes, I can read back the first 64 bytes on the PC, but the last byte is missing.
Has anybody faced the same kind of issue, or can anyone suggest a solution?
Regards,
Nisheedh
First you should read out MAXIMUM_TRANSFER_SIZE. For sending, WinUSB "divides the buffer into appropriately sized chunks, if necessary" (source).
Also check the remarks of WinUsb_ReadPipe:
If the data returned by the device is greater than a maximum transfer
length, WinUSB divides the request into smaller requests of maximum
transfer length and submits them serially. If the transfer length is
not a multiple of the endpoint's maximum packet size (retrievable
through the WINUSB_PIPE_INFORMATION structure's MaximumPacketSize
member), WinUSB increases the size of the transfer to the next
multiple of MaximumPacketSize.
USB packet size does not factor into the transfer for a read request.
If the device responds with a packet that is too large for the client
buffer, the behavior of the read request corresponds to the type of
policy set on the pipe. If policy type for the pipe is
ALLOW_PARTIAL_READS, WinUSB adds the remaining data to the beginning
of the next transfer. If ALLOW_PARTIAL_READS is not set, the read
request fails. For more information about policy types, see WinUSB
Functions for Pipe Policy Modification.
Check your settings and whether the last byte is sent in a second transfer.
You should also check how many bytes were actually written/read.
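A minimal sketch of both checks (hWinUsb and pipeInId are placeholder names for your interface handle and Bulk IN pipe ID):

ULONG maxTransfer = 0;
ULONG valueLen = sizeof(maxTransfer);
// Read the MAXIMUM_TRANSFER_SIZE pipe policy mentioned above.
WinUsb_GetPipePolicy(hWinUsb, pipeInId, MAXIMUM_TRANSFER_SIZE, &valueLen, &maxTransfer);

UCHAR buf[128];
ULONG transferred = 0;
// Always use the reported count rather than the requested size:
// the missing byte may show up in a follow-up transfer, depending
// on the ALLOW_PARTIAL_READS policy.
if (WinUsb_ReadPipe(hWinUsb, pipeInId, buf, sizeof(buf), &transferred, NULL)) {
    printf("read %lu bytes\n", transferred);
}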
I recently met this issue when programming my STM32F4 Discovery board as a USB device, with an application using WinUSB to exchange loopback messages over USB bulk transfers.
I did three things to get packets larger than 64 bytes sent by the PC and looped back from the device:
1. In the application, set the pipe policy to allow partial reads:
BOOL policy_allow_partial = true;
if (WinUsb_SetPipePolicy(deviceData.WinusbHandle, pipe_id.PipeInId, ALLOW_PARTIAL_READS,
                         sizeof(UCHAR), &policy_allow_partial)) {
    printf("WinUsb_SetPipePolicy for ALLOW_PARTIAL_READS OK\n");
}
else {
    printf("WinUsb_SetPipePolicy for ALLOW_PARTIAL_READS failed:%s\n", GetLastErrorAsString().c_str());
}
2. In the firmware, let the USB receive handler do only its read task and maintain the read pointer offset:
static int8_t CDC_Receive_FS(uint8_t* Buf, uint32_t *Len)
{
    /* USER CODE BEGIN 6 */
    extern void on_CDC_Receive_FS(uint32_t len);
    extern volatile int8_t usb_rxne;

    USBD_CDC_SetRxBuffer(&hUsbDeviceFS, &Buf[0]);
    USBD_CDC_ReceivePacket(&hUsbDeviceFS);

    uint32_t len = *Len;
    if ((ir + len) >= RX_DATA_LEN) {
        len = RX_DATA_LEN - 1 - ir;
        if (len > 0) {
            memcpy(usb_rx + ir, Buf, len);
        }
        ir = RX_DATA_LEN - 1;
    }
    else {
        memcpy(usb_rx + ir, Buf, len);
        ir += len;
    }
    usb_rxne = SET;
    on_CDC_Receive_FS(len);
    return (USBD_OK);
}
void on_CDC_Receive_FS(uint32_t len)
{
    extern int8_t CDC_is_busy(void);
    if (CDC_is_busy()) return;

    // USB loopback method 2: transmit later, but can support more than 64 bytes
    if (iw < 512) {
        memcpy(usb_tx + iw, usb_rx, len);
        iw += len;
        tx_len += len;
    }
    else {
        memset(usb_tx, 0, 512);
        iw = 0;
        tx_len = 0;
    }
}
3. Run the transmit task in the main loop:
int main(void)
{
    while (1) {
        if (iw != 0) {
            usb_txe = RESET;
            ret_tran = CDC_Transmit_FS(usb_tx, tx_len);
            iw = 0;
            tx_len = 0;
            usb_txe = SET;
        }
        else {
            usb_txe = SET;
        }
    }
}
For more details on this question, here is the source code in my GitHub project.
Using MPLAB X 1.70 with a dsPIC33FJ128GP802 microcontroller.
I've got an application which collects data from two sensors at different sampling rates (one at 50 Hz, the other at 1000 Hz); the two sensor packets are also different sizes (one is 5 bytes, the other is 21 bytes). Up until now I've used manual UART transmission as seen below:
void UART_send(char *txbuf, char size) {
    // Loop variable.
    char i;
    // Loop through the size of the buffer until all data is sent. The while
    // loop inside checks for the buffer to be clear.
    for (i = 0; i < size; i++) {
        while (U1STAbits.UTXBF);
        U1TXREG = *txbuf++;
    }
}
The variable-sized arrays (5 or 21 bytes) are passed to this function with their size, and a simple for loop outputs each byte through the UART TX register U1TXREG.
Now I want to implement DMA to relieve some pressure on the system when transmitting the larger amount of data. I've used DMA for my UART receive and the ADC, but I'm having trouble with transmit. I've tried ping-pong mode both on and off, and one-shot and continuous modes, but whenever it comes to sending the 21-byte packet it messes up, with strange values and zero-value padding.
I'm initialising the DMA as seen below.
void UART_TX_DMA_init() {
    DMA2CONbits.SIZE = 0;     // 0: word; 1: byte
    DMA2CONbits.DIR = 1;      // 0: peripheral to RAM; 1: RAM to peripheral
    DMA2CONbits.AMODE = 0b00; // register indirect with post-increment
    DMA2CONbits.MODE = 1;     // 0: continuous, no ping-pong; 1: one-shot, no ping-pong;
                              // 2: continuous, ping-pong; 3: one-shot, ping-pong
    DMA2PAD = (volatile unsigned int) &U1TXREG;
    DMA2REQ = 12;             // Select UART1 transmitter
    IFS1bits.DMA2IF = 0;      // Clear DMA interrupt flag
    IEC1bits.DMA2IE = 1;      // Enable DMA interrupt
}
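In the DMA interrupt I'm just clearing the flag; a minimal sketch of such a handler, using the standard XC16/C30 vector name for channel 2, would be:

void __attribute__((interrupt, no_auto_psv)) _DMA2Interrupt(void)
{
    IFS1bits.DMA2IF = 0; // clear the DMA channel 2 interrupt flag
}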
To build the DMA arrays I've got the following function:
char TXBufferADC[5] __attribute__((space(dma)));
char TXBufferIMU[21] __attribute__((space(dma)));

void UART_send(char *txbuf, char size) {
    // Loop variable.
    int i;

    DMA2CNT = size - 1; // x DMA requests
    if (size == ADCPACKETSIZE) {
        DMA2STA = __builtin_dmaoffset(TXBufferADC);
        for (i = 0; i < size; i++) {
            TXBufferADC[i] = *txbuf++;
        }
    } else if (size == IMUPACKETSIZE) {
        DMA2STA = __builtin_dmaoffset(TXBufferIMU);
        for (i = 0; i < size; i++) {
            TXBufferIMU[i] = *txbuf++;
        }
    } else {
        NOTIFICATIONLED ^= 1;
    }
    DMA2CONbits.CHEN = 1;  // Re-enable DMA2 channel
    DMA2REQbits.FORCE = 1; // Manual mode: kick-start the first transfer
}
This example is with ping-pong turned off. I'm using the same DMA2STA register but changing the array depending on which packet type I have. I determine the packet type from the data to be sent, change the number of bytes the DMA sends (DMA2CNT), build the array with a for loop as before, then force the first transfer along with re-enabling the channel.
It takes much longer to process the data for the large packet, and I'm starting to think the DMA is missing these packets and sending null/weird packets in their place. It seems to be triggered before I build the buffer and force the first transfer. Perhaps the force isn't necessary for every transfer; I don't know...
Any help would be great.
After a few days of working on this I think I've got it.
The main issue I experienced was that new transmissions were being triggered faster than the previous transmission could finish, so I was only getting segments of packets before the next packet overwrote the previous one. This was solved by simply waiting for the end of the UART transmission with:
while (!U1STAbits.TRMT);
I avoided the redundancy of copying each packet into a separate DMA buffer by simply making the original data array the one the DMA reads from.
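For that to work, the shared array has to live in DMA RAM, just like the buffers in the question; a declaration along these lines (the name data and its size here are assumptions, chosen to match the snippet below):

// Single shared buffer in dual-port DMA RAM; this is the `data`
// array referenced by __builtin_dmaoffset() in sendData().
unsigned char data[32] __attribute__((space(dma)));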
In the end the process was pretty minimal; the function called every time a packet is ready is:
void sendData() {
    // Check that the last transmission has completed.
    while (!U1STAbits.TRMT);

    DMA2CNT = bufferSize - 1;
    DMA2STA = __builtin_dmaoffset(data);
    DMA2CONbits.CHEN = 1;  // Re-enable DMA2 channel
    DMA2REQbits.FORCE = 1; // Manual mode: kick-start the first transfer
}
Regardless of the packet size, the DMA2CNT register tells the DMA how much to send; then it's simply a matter of re-enabling the channel and forcing the first transfer.
Setting up the DMA was:
DMA2CONbits.SIZE = 1;     // byte-sized transfers
DMA2CONbits.DIR = 1;      // RAM to peripheral
DMA2CONbits.AMODE = 0b00; // register indirect with post-increment
DMA2CONbits.MODE = 1;     // one-shot, no ping-pong
DMA2PAD = (volatile unsigned int) &U1TXREG;
DMA2REQ = 12;             // Select UART1 transmitter
IFS1bits.DMA2IF = 0;      // Clear DMA interrupt flag
IEC1bits.DMA2IE = 1;      // Enable DMA interrupt
Hope this helps someone in the future; the general principle can be applied to most PIC microcontrollers.
I have a question about how the bonding driver takes RX packets from enslaved interfaces. I found that bonding uses dev_add_pack() to set handlers for LACPDU and ARP packets, but I didn't find handlers for other packet types.
Could you please help me understand this?
The bonding driver registers its own RX handler when a slave interface is enslaved to a bond master; in bond_enslave() you can see:
res = netdev_rx_handler_register(slave_dev, bond_handle_frame,
new_slave);
So bond_handle_frame() hijacks the packets received on the slave interface, so that the bond master receives them instead:
static rx_handler_result_t bond_handle_frame(struct sk_buff **pskb)
{
    struct sk_buff *skb = *pskb;
    struct slave *slave;
    struct bonding *bond;
    int (*recv_probe)(const struct sk_buff *, struct bonding *,
                      struct slave *);
    int ret = RX_HANDLER_ANOTHER;

    skb = skb_share_check(skb, GFP_ATOMIC);
    if (unlikely(!skb))
        return RX_HANDLER_CONSUMED;

    *pskb = skb;

    slave = bond_slave_get_rcu(skb->dev);
    bond = slave->bond;

    if (bond->params.arp_interval)
        slave->dev->last_rx = jiffies;

    recv_probe = ACCESS_ONCE(bond->recv_probe);
    if (recv_probe) {
        ret = recv_probe(skb, bond, slave);
        if (ret == RX_HANDLER_CONSUMED) {
            consume_skb(skb);
            return ret;
        }
    }

    if (bond_should_deliver_exact_match(skb, slave, bond)) {
        return RX_HANDLER_EXACT;
    }

    skb->dev = bond->dev;

    if (bond->params.mode == BOND_MODE_ALB &&
        bond->dev->priv_flags & IFF_BRIDGE_PORT &&
        skb->pkt_type == PACKET_HOST) {

        if (unlikely(skb_cow_head(skb,
                                  skb->data - skb_mac_header(skb)))) {
            kfree_skb(skb);
            return RX_HANDLER_CONSUMED;
        }
        memcpy(eth_hdr(skb)->h_dest, bond->dev->dev_addr, ETH_ALEN);
    }

    return ret;
}
I checked the bonding code and found that the driver does not inspect incoming RX packets itself, apart from a few types (LACPDU, ARP) in the modes that need them; the driver installs handlers for those packet types with dev_add_pack().
To install a global hook in practice, you can use nf_register_hook(), which provides an interface for installing your own netfilter function to intercept packets.
nf_register_hook() seems more powerful than dev_add_pack(), but it needs more care, because a wrong condition in the hook can drop a lot of packets.
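A minimal sketch of such a hook, for older kernels that still export nf_register_hook() (the hook function's signature varies between kernel versions; this one matches roughly the 4.x era):

#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>

static unsigned int my_hook(void *priv, struct sk_buff *skb,
                            const struct nf_hook_state *state)
{
    /* Inspect or filter skb here. NF_ACCEPT passes the packet on;
     * returning NF_DROP discards it -- the risk mentioned above. */
    return NF_ACCEPT;
}

static struct nf_hook_ops my_ops = {
    .hook     = my_hook,
    .pf       = NFPROTO_IPV4,
    .hooknum  = NF_INET_PRE_ROUTING,
    .priority = NF_IP_PRI_FIRST,
};

/* In module init: nf_register_hook(&my_ops);
 * newer kernels use nf_register_net_hook(&init_net, &my_ops) instead. */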
I'm writing a network analyzer and I need to filter packets saved in a file. I have written some code to filter HTTP packets, but I'm not sure it works as it should, because on one pcap dump my code finds 5 packets, while typing http into the Wireshark filter gives me 2 packets, and using:
tcpdump port http -r trace-1.pcap
gives me 11 packets.
Well, three different results; that's a little confusing.
The filter and the packet processing in my code are:
...
if (pcap_compile(handle, &fcode, "tcp port 80", 1, netmask) < 0)
...

while ((packet = pcap_next(handle, &header))) {
    u_char *pkt_ptr = (u_char *)packet;

    // Parse the first (Ethernet) header, grabbing the type field.
    int ether_type = ((int)(pkt_ptr[12]) << 8) | (int)pkt_ptr[13];
    int ether_offset = 0;

    if (ether_type == ETHER_TYPE_IP)         // Ethernet II
        ether_offset = 14;
    else if (ether_type == ETHER_TYPE_8021Q) // 802.1Q tagged
        ether_offset = 18;
    else
        fprintf(stderr, "Unknown ethernet type, %04X, skipping...\n", ether_type);

    // Parse the IP header.
    pkt_ptr += ether_offset; // skip past the Ethernet header
    struct ip_header *ip_hdr = (struct ip_header *)pkt_ptr;
    int packet_length = ntohs(ip_hdr->tlen);

    printf("\n%d - packet length: %d, and the capture length: %d\n",
           cnt++, packet_length, header.caplen);
}
My question is: why are there three different results when filtering HTTP? And if I'm filtering it wrong, how can I do it right? Also, is there a way to filter HTTP (or SSH, FTP, Telnet, ...) packets using something other than port numbers?
Thanks
So I have figured it out. It took a little searching and understanding, but I did it.
Wireshark's http display filter matches packets on TCP port 80 that also have the PSH and ACK flags set. After realizing this, it was easy to write tcpdump parameters that produce the same number of packets.
So now Wireshark and tcpdump give the same results.
What about my code? Well, I realized I actually had an error in my question: the filter
if (pcap_compile(handle, &fcode, "tcp port 80", 1, netmask) < 0)
indeed gives 11 packets (source or destination port 80, whatever the TCP flags are).
Filtering out exactly the desired packets is now a question of understanding the filter syntax well, or of filtering only on the port (80, 21, 22, ...) and then, in the callback function or in the while loop, reading the TCP header, extracting the flags, and masking them to see whether it is the right kind of packet (PSH, ACK, SYN, ...). The flag values are listed, for example, here.
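For example, a minimal sketch of that flag check inside the capture loop above (pkt_ptr points at the IP header as before; the masks are the standard TCP flag bit values):

#define TCP_FLAG_PSH 0x08
#define TCP_FLAG_ACK 0x10

int ip_hdr_len = (pkt_ptr[0] & 0x0F) * 4;   /* IHL field, in 32-bit words */
const u_char *tcp_hdr = pkt_ptr + ip_hdr_len;
u_char flags = tcp_hdr[13];                 /* flags byte of the TCP header */

if ((flags & (TCP_FLAG_PSH | TCP_FLAG_ACK)) == (TCP_FLAG_PSH | TCP_FLAG_ACK)) {
    /* This is the subset Wireshark's http filter was counting. */
}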