On an STM32H7, I run LwIP to learn more about the raw API (NO_SYS = 1).
UDP (and TCP) server and client already work just fine.
Next, I'm setting up an NTP client.
I initialize it this way (in sntp_client.c):
#define SNTP_CONF_IPADDR0 185
#define SNTP_CONF_IPADDR1 255
#define SNTP_CONF_IPADDR2 55
#define SNTP_CONF_IPADDR3 20
sntp_setoperatingmode(SNTP_OPMODE_POLL);
sntp_setservername(0, (char*)"pool.ntp.org"); /* needs SNTP_SERVER_DNS == 1 and LWIP_DNS == 1 */
sntp_init();
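(For reference, the numeric SNTP_CONF_IPADDR* defines above are not used by this snippet; a sketch of wiring them up instead, via LwIP's sntp_setserver():)
ip_addr_t ntp_server;
IP4_ADDR(&ntp_server, SNTP_CONF_IPADDR0, SNTP_CONF_IPADDR1, SNTP_CONF_IPADDR2, SNTP_CONF_IPADDR3);
sntp_setserver(0, &ntp_server); /* numeric address instead of a DNS name */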
After startup, I get the error
Assertion "sys_timeout: timeout != NULL, pool MEMP_SYS_TIMEOUT is empty"
failed at line 212 in
..\..\..\Middlewares\Third_Party\LwIP\src\core\timeouts.c
In Wireshark, I see that no NTP packets are captured.
Inside sntp_init(), sntp_request(NULL) is called, after which the timeout assertion shows up.
What do I need to check next?
My source is here; the important part is in sntp_client.c:
https://github.com/bkht/LAN8742A_KSZ8851SNL_LwIP
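For reference, the pool named in the assertion is sized by MEMP_NUM_SYS_TIMEOUT in lwipopts.h, and SNTP needs a free slot there for its poll/retry timer, plus DNS support for the hostname. A sketch of the relevant options (the +2 margin is illustrative):
/* lwipopts.h */
#define LWIP_DNS             1   /* sntp_setservername() resolves a hostname */
#define SNTP_SERVER_DNS      1   /* compile in sntp_setservername() */
#define MEMP_NUM_SYS_TIMEOUT (LWIP_NUM_SYS_TIMEOUT_INTERNAL + 2) /* headroom for app timers such as SNTP */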
I'm using a CANable CAN-to-USB adapter to communicate with a CAN bus. To initialize the SocketCAN device, I'm using the following command:
slcand -c -o -f -s6 /dev/ttyACM0
and
ifconfig slcan0 up
The CAN bus is running at 500 kbit/s (slcand's -s6). I can transmit and receive messages fine for a while, but after sending and receiving for a few minutes I can no longer send messages, although I can still receive.
This is the output of ip -details -statistics link show slcan0:
469: slcan0: <NOARP,UP,LOWER_UP> mtu 16 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 10
link/can promiscuity 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
RX: bytes packets errors dropped overrun mcast
179974 24437 0 0 0 0
TX: bytes packets errors dropped carrier collsns
136 17 0 0 0 0
If I disconnect the device, plug it back in, and run the slcand command again, it works.
I have similar problems with my CANable. I've been having better luck with the CandleLight firmware; see "CANable Getting Started", about halfway down the page, under "Alternative Firmware".
I still have problems with my CANable not sending messages (receiving still works) every once in a while, but it's much less frequent after the firmware upgrade. When that happens, I just need to take the interface down and bring it back up; no need to unplug.
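Bouncing the interface looks like this (same ifconfig tooling as above):
sudo ifconfig slcan0 down
sudo ifconfig slcan0 up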
Also, you can try increasing the txqueuelen to something like 10,000. That seems to work for some people, but I still get occasional lockups.
sudo ifconfig slcan0 txqueuelen 10000
I generated code for an STM32F103C8T6 with CubeMX for USB VCP. When I add a CDC_Transmit_FS call to send data, the port isn't recognized by Windows 10!
What should I do? Here is the code, which compiles without error:
#include "stm32f1xx_hal.h"
#include "usb_device.h"
#include "usbd_cdc_if.h"
int main(void)
{
  uint8_t Text[] = "Hello\r\n";
  HAL_Init();
  SystemClock_Config();   /* CubeMX-generated clock setup */
  MX_USB_DEVICE_Init();   /* CubeMX-generated USB device init */
  while (1)
  {
    CDC_Transmit_FS(Text, 6); /* when commented out, the port is recognized */
    HAL_Delay(1000);
  }
}
There are three things you need to check, in my experience:
startup_stm32f405xx.s --> increase the heap size. I use a heap size of 800 and a stack size of 800 as well.
usbd_cdc_if.c --> set APP_RX_DATA_SIZE and APP_TX_DATA_SIZE to 64.
usbd_cdc_if.c --> add the code below to the CDC_Control_FS() function:
case CDC_SET_LINE_CODING:
  /* store the 7-byte line coding (baud rate, stop bits, parity, data bits) sent by the host */
  tempbuf[0] = pbuf[0];
  tempbuf[1] = pbuf[1];
  tempbuf[2] = pbuf[2];
  tempbuf[3] = pbuf[3];
  tempbuf[4] = pbuf[4];
  tempbuf[5] = pbuf[5];
  tempbuf[6] = pbuf[6];
  break;

case CDC_GET_LINE_CODING:
  /* hand the stored line coding back so the host reads the same values it set */
  pbuf[0] = tempbuf[0];
  pbuf[1] = tempbuf[1];
  pbuf[2] = tempbuf[2];
  pbuf[3] = tempbuf[3];
  pbuf[4] = tempbuf[4];
  pbuf[5] = tempbuf[5];
  pbuf[6] = tempbuf[6];
  break;
and define uint8_t tempbuf[7]; in the user private variables section.
Without the increased heap size, Windows does not react at all.
Without point 3, Windows sends the baud-rate information and then reads it back, expecting to get the same values. Since you do not return any values, the virtual COM port remains in the driver-not-loaded state.
If you do all of that, the out-of-the-box Windows 10 VCP driver can be used; there is no need to install the very old ST VCP driver on your system.
PS: I read somewhere that turning on VSense causes problems, too. I don't know; I have not configured it, and everything works like a charm.
Put a delay before the CDC_Transmit_FS call - it waits for the initialization to complete. Your code should look like this:
int main(void)
{
  uint8_t Text[] = "Hello\r\n";
  HAL_Init();
  SystemClock_Config();
  MX_USB_DEVICE_Init();
  HAL_Delay(1000); /* give USB enumeration time to finish */
  while (1)
  {
    CDC_Transmit_FS(Text, 6);
    HAL_Delay(1000);
  }
}
I had a similar issue. I couldn't connect to the port, and it appeared as just "virtual com port". I added a while loop to wait for USBD_OK from CDC_Transmit_FS. Then it started working, even without the loop or a delay after the init function. I am not sure what the issue was.
char txBuf[] = "Hello\r\n"; /* illustrative payload */
while (CDC_Transmit_FS((uint8_t*)txBuf, strlen(txBuf)) != USBD_OK)
{
  /* retry until the CDC interface is ready */
}
You may have to install a driver to get the device recognized as a COM port; you can get it from the ST site. If it is not installed, the device is listed with a question mark or exclamation mark in Device Manager.
Note that you cannot send until the device is connected to the host! I am not sure that the CubeMX-generated CDC_Transmit_FS checks for this.
Also, instead of a delay before resending, you should check the CDC class data "TxState": 0 means the transmission is over.
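A minimal sketch of that check, assuming the usual CubeMX names (hUsbDeviceFS, and USBD_CDC_HandleTypeDef from usbd_cdc.h):
USBD_CDC_HandleTypeDef *hcdc = (USBD_CDC_HandleTypeDef*)hUsbDeviceFS.pClassData;
while (hcdc->TxState != 0)
{
  /* previous transfer still in progress; wait before sending again */
}
CDC_Transmit_FS(Text, sizeof(Text) - 1); /* Text as in the question above */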
I know it's a bit late, but I stumbled upon this post and it was extremely helpful.
Here is what I needed to do:
do the line coding (I think this is only necessary on Windows systems)
increase the heap (the stack was left at the default 0x200)
Here is what wasn't necessary for me (on an STM32F405RGT6 chip):
change APP_RX_DATA_SIZE / APP_TX_DATA_SIZE (I left them at 2048)
add a delay before calling CDC_Transmit_FS()
Also, some things to consider that have happened to me in the past:
be sure to use a USB cable with data lines (most charging cables don't have them)
double-check the traces/connections if you use a custom board
Recently I had to write tests using Google's excellent packetdrill tool (https://github.com/google/packetdrill).
To summarize, it's a tool that can test the TCP (or IP or UDP) stack of your computer: you write test cases that combine C-like commands with expected outbound and inbound packets.
But I can't figure out how portable those tests are. For instance, if I run the tests from the GitHub repository, nearly all of them fail.
Let's take this one, fr-4pkt-sack-linux.pkt:
// Test fast retransmit with 4 packets outstanding, receiver sending SACKs.
// In this variant the receiver supports SACK.
// Establish a connection.
0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0
+0 < S 0:0(0) win 32792 <mss 1460,sackOK,nop,nop,nop,wscale 10>
+0 > S. 0:0(0) ack 1 <mss 1460>
+.1 < . 1:1(0) ack 1 win 257
+0 accept(3, ..., ...) = 4
// Send 1 data segment and get an ACK, so cwnd is now 4.
+0 write(4, ..., 1000) = 1000
+0 > P. 1:1001(1000) ack 1
I get the following error:
fr-4pkt-sack-linux.pkt:19: error handling packet: live packet field ipv4_total_length: expected: 1040 (0x410) vs actual: 297 (0x129)
script packet: 0.100283 P. 1:1001(1000) ack 1
actual packet: 0.100277 P. 1:258(257) ack 1 win 29200
It seems to indicate that my computer (a 64-bit Ubuntu GNOME 16.04) sends only 257 bytes instead of 1000 for the first packet (the window scaling option is simply ignored).
If I run other tests, such as sack-shift-sacked-1-2-3-fack.pkt, they also seem to indicate that the wscale option is ignored by my computer.
So, my questions are:
Is it normal to ignore the wscale option? Is my computer behaving strangely?
If it's normal (like it's some Linux-specific TCP feature), how can we ensure that packetdrill tests that run on my computer will also run on some other computer?
Thank you in advance
The solution was very simple, but I'll keep this topic up for those who are in the same situation.
In fact, I had simply disabled TCP window scaling via sysctl.
I had used a configuration file from here: http://cnp3book.info.ucl.ac.be/2nd/html/_downloads/sysctl-cnp3.conf
I changed the variables with sysctl -w, but I wasn't aware that those changes persisted after rebooting the computer.
So, don't make the same mistake I did: be careful when using sysctl; it can break your entire computer (if you forget to reset those settings after your tests).
After resetting to the defaults, it now works perfectly. So the portability of packetdrill tests seems fine (as long as no major new TCP feature is involved).
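For anyone else who hits this, the knob in question is the window-scaling sysctl, and re-enabling it looks like this:
sudo sysctl -w net.ipv4.tcp_window_scaling=1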
I have a UDP client based off of http://cs.baylor.edu/~donahoo/practical/CSockets/code/UDPEchoClient.c
where the client sends a message and the server echoes it back. I have a configurable server where I can drop packets, and I am sending multiple messages instead of just one as in the code linked above.
How do I treat a message as dropped if it takes more than 1 second? Right now I check each message after I get it from recvfrom(), but I want the whole program to run in under ~1.5 s, so I cannot afford to wait 1 second per message (that would take forever with many messages). Is there a way to attach a timer or something to each message so that it is considered dropped if no reply arrives within 1 second? Thanks!
Use TTL for UDP packets
int ttl = 60; /* max = 255 */
setsockopt(s, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl));
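For the 1-second deadline itself, a per-socket receive timeout is another option; here is a minimal sketch (names are illustrative, socket setup and error handling elided):
#include <sys/socket.h>
#include <sys/time.h>
#include <errno.h>

struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };   /* 1-second deadline per recvfrom() */
setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
{
    /* no echo within 1 second: count this message as dropped and move on */
}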
My current project is a bare-metal web server. For this I'm using no libraries and programming directly on the chip. What I'm trying to do right now is send a single piece of HTTP data:
HTTP/1.1 200 OK\r\n
Content-Length: 47\r\n
Content-Type: text/html\r\n
Server: microserver\r\n
Connection: close\r\n
\r\n
<!DOCTYPE html><html>Hello<br>world!<hr></html>
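As a single C buffer, that response would look roughly like this (a sketch; the body is 47 bytes):
static const char http_response[] =
    "HTTP/1.1 200 OK\r\n"
    "Content-Length: 47\r\n"
    "Content-Type: text/html\r\n"
    "Server: microserver\r\n"
    "Connection: close\r\n"
    "\r\n"
    "<!DOCTYPE html><html>Hello<br>world!<hr></html>";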
My server tries to go through the following steps:
Receive SYN
Send [SYN+ACK]
Receive ACK
Receive ACK containing HTTP GET
Send [ACK,PUSH,FIN] with the HTTP data (this step changed a lot: I've tried sending ACK, PUSH and FIN separately (with the content in the PUSH), and tried [ACK+PUSH],FIN and ACK,[PUSH+FIN] as well).
Receive [ACK+FIN] <<--- Here's where it goes wrong; this one is never even sent, according to Wireshark.
Send ACK.
As said, it goes wrong at step 6 every single time, no matter what combination of ACK, PUSH and FIN I use in step 5. When looking at it in Wireshark, all the SEQ and ACK numbers are correct.
My server is able to close connections once the [FIN+ACK] finally does get sent, which sometimes happens on sockets that the browser keeps open.
Pcap file of what wireshark records: https://www.dropbox.com/s/062k9hkgkenaqij/httpdump.pcap with as filter: (tcp.dstport == 80 || tcp.srcport == 80) && (ip.dst == 169.254.100.100 || ip.src == 169.254.100.100)
I know there is a very similar 4-year-old question, Building a webserver, client doesn't acknowledge HTTP 200 OK frame, but I've tried pretty much everything that was suggested there and it didn't get me any further, so I'm out of ideas.
EDIT:
I have found the problem, after studying sections of Wireshark captures for hours on end. The mistake was that I did not include the payload data when calculating the TCP checksum... But well, found the solution.
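For anyone hitting the same thing: the TCP checksum covers an IPv4 pseudo-header plus the TCP header and the payload, with the checksum field zeroed during the computation. A minimal sketch:
#include <stdint.h>
#include <stddef.h>

/* One's-complement sum over a byte buffer, as big-endian 16-bit words. */
static uint32_t csum_add(uint32_t sum, const uint8_t *data, size_t len)
{
    while (len > 1) {
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len  -= 2;
    }
    if (len)
        sum += (uint32_t)data[0] << 8;  /* odd trailing byte, zero-padded */
    return sum;
}

/* segment = TCP header followed by the payload, with the checksum field set to 0. */
uint16_t tcp_checksum(const uint8_t src_ip[4], const uint8_t dst_ip[4],
                      const uint8_t *segment, size_t seg_len)
{
    uint32_t sum = 0;
    sum = csum_add(sum, src_ip, 4);        /* pseudo-header: source IP */
    sum = csum_add(sum, dst_ip, 4);        /* pseudo-header: destination IP */
    sum += 6;                              /* pseudo-header: protocol number (TCP) */
    sum += (uint32_t)seg_len;              /* pseudo-header: TCP length */
    sum = csum_add(sum, segment, seg_len); /* header AND payload -- skipping the payload was the bug */
    while (sum >> 16)                      /* fold carries into 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (uint16_t)~sum;                 /* one's complement of the sum */
}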