Newbie in the community here. First of all, thanks for all the help over the years I've been working on embedded development :D
I have a problem with an Atmel AT91RM9200 ARM microprocessor, connected via RMII to a Micrel KSZ8863 Ethernet PHY. The ARM board runs U-Boot 1.1.2, which loads Linux kernel v2.4.27.
I manually added the code to interface U-Boot with the KSZ.
The problem is:
Using U-Boot, when I try to download something from my TFTP server (located on the same network), the transfer sometimes hits so many timeouts that the download fails, and sometimes has just 2 or 3 timeouts.
I checked the U-Boot FAQ page, where the most likely reason given for the timeouts is a wrong speed configuration, which I have double-checked.
What could be the reason for the unreliability of my connection?
Thanks,
Loranzp.
Set up a minimal network consisting of the TFTP server, the client, and a sniffer (Wireshark in promiscuous mode). If you use a switch, it must have a mirror/repeater port to which you can connect the sniffer PC.
Next, run traffic captures and analyze when and how the timeouts occur.
Consider:
A TFTP block size that is too big, leading to mishandled IP fragmentation.
The server sending packets too quickly after a request (RRQ) or ACK.
Block-number rollover not being handled correctly (only an issue with large files).
etc.
I'm having an issue connecting a serial device to an embedded device I'm writing code for.
The device I am writing code for has two serial ports, one incoming from my laptop and one outgoing to an external device.
When I connect both terminals to my laptop and view the data, I get exactly the data I am expecting.
When I connect my laptop to the external device directly, I am getting exactly what I expect, and a response.
When I connect the laptop and the external device to the embedded device I am working on, the laptop sends data to it, the embedded device receives it and passes it on to the external device. This works as expected.
However, the external device doesn't send back the response.
If I send data to the external device from the embedded device, each new message I send causes it to send the reply to the original message.
I know the first message got through correctly because the external device whirs to life, and I know when it is sending the response by running a logic analyser on the TX/RX lines and viewing the traffic.
I considered that the embedded device might be holding the RX line and preventing the transmission, but I don't see how that's possible in the code. Also, if that were the case, it shouldn't work when I plug both lines into my laptop.
I also considered that DTR was not set high, but I checked this and it appears to be high.
Does anyone know a reason which would prevent a device from responding?
Note: when I say serial ports, I am referring to the UART on the embedded device. All devices use a DB9 connector running RS-232.
Edit: the operating system on the laptop is Windows 10. The embedded device is an ATmega324P.
Edit 2: Did some more testing. It appears that it sometimes works and sometimes doesn't.
I have added an image which shows an almost perfect signal for the response.
The blue section is a gap in the signal that shouldn't be there.
Ended up finding a solution.
The RTS line was being held at 1.2 V by the embedded device, while the PC was holding it at 5.2 V.
Pulling the RTS line up to 5 V fixed the issue.
I just implemented my first UDP server/client. The server is on localhost.
I'm sending 64 KB of data from the client to the server, which the server is supposed to send back. Then the client checks how many of the 64 KB are still intact, and they all are. Always.
What are the possible causes for this behaviour? I was expecting at least some data loss.
client code: http://pastebin.com/5HLkfcqS
server code: http://pastebin.com/YrhfJAGb
PS: I'm a newbie in network programming, so please don't be too harsh. I couldn't find an answer to my problem.
The reason you are not seeing any lost datagrams is that your network stack is simply not running into any trouble.
Your localhost connection can easily cope with the load you generate; a localhost connection can process several hundred megabytes of data per second on a decent CPU.
To see dropped datagrams you should increase the probability of interference. You have several opportunities:
increase the load on the network
busy your cpu with other tasks
use a "real" network and transfer data between real machines
run your code over a dsl line
set up a virtual machine and simulate network outages (VMware Workstation can do this)
And this might be an interesting read: What would cause UDP packets to be dropped when being sent to localhost?
Hardware:
derived from the Sequoia platform (AMCC)
using an AMCC PowerPC 440EPx and a
Marvell 88E1111 Ethernet PHY, 256 MB DDR2 RAM
Linux version 2.6.24.2
I transmit data via a UDP socket at ca. 60 MB per second in a Linux application (C language). Sometimes my PC test program notices a lost packet, because all packets are numbered (GigE Vision Stream Channel protocol). I know that UDP is unreliable, but because I have clean lab conditions and it is always the same last packet that is lost, I think there must be a systematic error somewhere in my code.
I have been trying to find the cause of the missing packet for over a week, but I can't find it.
Following issues:
using jumbo frames: packet size 8 KB
it is always the same last packet that is lost
the error is rare (it occurs after some hours and thousands of transferred images)
the error rate is higher after connecting or reconnecting the device to the NIC (after auto-negotiation)
I tried :
Used another NIC
Checked my code: the return values of functions and the error handling
Logged the outgoing packets on my device
Viewed the packets with Wireshark and compared them with the packets logged on the device
How can I solve this problem?
I know it is difficult, because there are so many possible causes of failure.
Questions:
Are there any known bugs in the Linux 2.6.24 Ethernet driver stack (especially after auto-negotiation) that were fixed in later versions?
Should I set special options on my transfer socket? (sock = socket(AF_INET, SOCK_DGRAM, 0);)
Should I recreate the socket after auto-negotiation?
Should I enable any diagnostic messages in the Linux kernel to find out what is going wrong?
Are there any other recommendations?
I have seen similar problems in an application I once developed where one side of the connection was Windows and the other side was an embedded real-time OS (not Linux), and the only thing in between was Cat 5 Ethernet cable. Indeed, I found that a certain flurry of UDP messages would almost always cause one of the messages to be lost, and it was always the same message. Very strange; after a lot of time with Wireshark and other network tools, I finally concluded that it could only be the fact that UDP is unreliable.
My advice is to switch to TCP and build a small message framer:
http://blog.chrisd.info/tcp-message-framing/
I find TCP to be very reliable, and it can also be very fast if the traffic is "stream like" (meaning: mostly unidirectional). Additionally, building a message framer on top of TCP is much easier than building TCP-like reliability on top of UDP.
It might be the camera's fault: surprisingly, even on the camera side, GigE Vision can be more complicated and less deterministic than competing technologies like Camera Link. In particular, I've seen strange problems of my own, and manufacturers have told me that many cameras have known buffering issues, in particular when running at their higher resolutions/frame rates.
Check your camera's firmware with the vendor and see if there's an update to address this.
Alternatively, perhaps you have some delay between the recvmsg call for the last packet and the previous one, for example processing the frame data before receiving the end-of-frame GVSP packet?
Additional recommendation: make sure there is no switch or other networking equipment between the system and the camera; use a direct Cat 6 cable.
Does anyone know of source code for an FTP client that I can use with a PIC microcontroller?
Preferably in C; I am using a PIC18F97J60. It is fine even if the source is for a different PIC, as I can modify it to suit my needs.
Thanks.
From this page:
Microchip offers a free licensed TCP/IP stack optimized for the PIC18, PIC24, dsPIC and PIC32 microcontroller families. As shown in figure below, the stack is divided into multiple layers, where each layer accesses services from one or more layers directly below it. Per specifications, many of the TCP/IP layers are “live”, in the sense that they not only act when a service is requested, but also when events like time-out or new packet arrival occurs. Microchip’s TCP/IP stack includes the following key features:
Optimized for all PIC18, PIC24, dsPIC and PIC32 families
Supported Protocols: ARP, IP, ICMP, UDP, TCP, DHCP, SNMP, HTTP, FTP, TFTP
EDIT: This package does not include source for an FTP client, only an FTP server. Still, it can get you most of the way there.
There is a lot of open-source code available, for example (found via Google):
http://www.thefreecountry.com/webmaster/freeftpclients.shtml
I am using the libpcap library. I have written a packet sniffer in C using pcap.h. Now I want to block packets arriving on port 23 of my computer via the eth0 device. I tried the pcap filter functions, but they are not useful for blocking.
Please explain how to code this functionality in a C program.
Libpcap is only used for packet capturing, i.e. making packets available for use in other programs. It does not perform any network setup, such as blocking or opening ports. In this sense, pcap is a purely passive monitoring tool.
I am not sure what you want to do. As far as I see it, there are two possibilities:
You actually want to block the packets, so that your computer will not process them in any way. You should use a firewall for that and just block this port. Any decent firewall should be able to do that fairly easily. But you should be aware that this also means no one will be able to telnet into your system (port 23 is telnet, not ssh). So if you administer that remote system over telnet, you will effectively lock yourself out.
You still want other programs (e.g. telnetd) to listen on port 23, but all this traffic is cluttering your application. Libpcap has a quite powerful filtering function for that. With it you can pass small filter expressions to libpcap and have it report only the packets that match. Look up filtering in the pcap documentation for more information. This will, however, not "block the traffic"; it just stops pcap from capturing it.
Using pcap, you are not able to build a firewall. This is because the packets seen inside your sniffer (built using pcap) are just copies of the packets which (with or without the sniffer) are consumed by the network stack.
In other words: using filters in pcap only means that you will not see copies of the original packets (as far as I know, pcap compiles filters and installs them in the kernel, so that the copy is not made at the kernel level); the original packet will reach the network stack anyway.
The solution to your problem can most probably be found in netfilter. You can register a hook at NF_IP_PRE_ROUTING and there decide to drop or accept the traffic in question.
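A hedged sketch of such a hook for a 2.6.x-era kernel (module init/exit boilerplate omitted; the hook function signature and header accessors changed between kernel versions, so treat this as an outline rather than a drop-in module):

```c
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>
#include <linux/tcp.h>

/* Drop incoming TCP packets destined for port 23 (telnet). */
static unsigned int drop_port23(unsigned int hooknum,
                                struct sk_buff *skb,
                                const struct net_device *in,
                                const struct net_device *out,
                                int (*okfn)(struct sk_buff *))
{
    struct iphdr *iph = ip_hdr(skb);
    struct tcphdr *tcph;

    if (iph->protocol != IPPROTO_TCP)
        return NF_ACCEPT;

    /* The transport header may not be set yet at PRE_ROUTING,
     * so locate the TCP header from the IP header length. */
    tcph = (struct tcphdr *)((unsigned char *)iph + iph->ihl * 4);
    if (ntohs(tcph->dest) == 23)
        return NF_DROP;

    return NF_ACCEPT;
}

static struct nf_hook_ops telnet_drop_ops = {
    .hook     = drop_port23,
    .pf       = PF_INET,
    .hooknum  = NF_IP_PRE_ROUTING,
    .priority = NF_IP_PRI_FIRST,
};

/* In the module init: nf_register_hook(&telnet_drop_ops);
 * in the module exit: nf_unregister_hook(&telnet_drop_ops); */
```

For most practical purposes, though, an iptables rule dropping TCP destination port 23 achieves the same result without writing kernel code.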