XBee Pro 802.15.4 modules not finding each other when they do not share the same power source?

I have wasted countless hours and I am completely clueless about a problem with two XBee S1 modules.
I have two 802.15.4 modules which can reach 1.6 km.
One is connected to a PC through USB (5 V) and a 3.3 V linear regulator; the scope shows 3.289 V with about 0.2 V of ripple.
The other is connected to a battery-powered supply, again through a linear regulator; the scope shows 3.299 V, a very stable voltage.
They are set up with the same PAN ID and channel, and I use the ATND command to make them discover each other.
There is no chance they find each other; resets etc. will not help.
I have a scope connected and the voltage stays stable at 3.3 V.
I have a USB-to-serial converter connected to both chips (3.3 V TTL level, of course) and BOTH can be reached through serial.
I can send AT commands to both, so both are powered on and 'should' work.
If I now connect the battery powered one to the same power source which is connected to my PC then suddenly they see each other and everything works as it should.
It does not depend on range; I have already made sure of that.
I am left clueless; it makes absolutely no sense.
Anyone with an idea would be very welcome.
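For reference, this is roughly how the discovery can be exercised from the PC side; a minimal sketch, assuming a POSIX host, XBee factory defaults (9600 8N1, 1 s guard times) and a device path such as /dev/ttyUSB0, none of which may match a given setup exactly:

```c
/* Minimal sketch: enter XBee command mode over a serial port and issue ATND.
 * Assumes XBee factory defaults (9600 8N1, 1 s guard time) and a device
 * path such as /dev/ttyUSB0 -- adjust for the actual setup. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

static int open_port(const char *path)
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return -1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tio.c_cc[VMIN]  = 0;    /* return whatever has arrived...       */
    tio.c_cc[VTIME] = 20;   /* ...or give up after 2 s of silence   */
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

int main(void)
{
    int fd = open_port("/dev/ttyUSB0");
    if (fd < 0) return 1;

    char buf[256];
    ssize_t n;

    sleep(1);                            /* guard time before "+++"      */
    write(fd, "+++", 3);                 /* no carriage return after +++ */
    sleep(1);                            /* guard time after "+++"       */
    n = read(fd, buf, sizeof buf - 1);   /* expect "OK\r"                */
    if (n > 0) { buf[n] = '\0'; printf("command mode: %s\n", buf); }

    write(fd, "ATND\r", 5);              /* node discovery               */
    while ((n = read(fd, buf, sizeof buf - 1)) > 0) {
        buf[n] = '\0';
        printf("%s", buf);               /* responses from remote nodes  */
    }

    write(fd, "ATCN\r", 5);              /* drop out of command mode     */
    close(fd);
    return 0;
}
```

On a working network, an ATND issued on either module should list the other one.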

(Note: your question is probably better suited for https://electronics.stackexchange.com/. I'm an EE, so I'll go ahead and try to answer anyway.)
First, make 100% sure the battery is actually good, and the voltage isn't getting loaded down when it tries to transmit. You can watch the voltage on an oscilloscope to check. (I know you looked at it on a scope, but make sure you do it during operation, not during idle.) If it drops then the battery can't supply enough current. If there's heavy RF noise then you need bigger ceramic power supply bypass capacitors. Those are my top guesses.
If the battery is good, then it's also possible that the antenna connection is bad on one. When they share a power supply, RF might be coupled through the power or ground wire, so they can still talk to each other through the wires instead of through the air. If you try powering them BOTH from the battery, and they start working, this might be what's happening.

Related

SGMII without PHY - external loopback on Xilinx Zynq UltraScale+ RFSoC board

I have a custom board with a Xilinx Zynq UltraScale+ RFSoC.
I'm using 3 PS_GTR transceivers as SGMII.
Two of them are connected to an external Marvell PHY and the third connects directly (fixed link, without a PHY).
In the manufacturing stage I would like to make sure that the direct SGMII interface is assembled correctly, so I made an external loopback between the TX and RX SGMII signals.
Now, is it possible to transmit something through this external loopback and compare it with the received data?
Is it possible to ping yourself? (A simple ping command is not working: "ping -I eth2 ")
Perhaps there is a 'patch' under the 'macb' kernel driver that someone can guide me through?
Thank you all,
Tzipi Kluska
Yes, it is possible to ping yourself. Note that Linux does, or at least used to, bypass the hardware when talking to itself and do the loopback in the IP stack. I recently saw someone, from a terminal, isolate one network interface and then the other, after which it was trivial to use stock tools like ping and iperf to test the link.
Before doing that, though: the SerDes on your part should have PRBS capabilities (for a reason), and some have internal scope-like features that let you extract an eye, or at least numbers that indicate the quality of the eye. The Marvell PHY should have this capability as well. You can use a loopback to talk to yourself with various PRBS lengths to check the quality of the link (less than one error in 10^14 bits, or whatever your desired quality is), and then repeat that when connected to the Marvell.
Before doing all of this, remember that the software is often the hard part and you need to ensure it is working first. You may wish to do loopbacks inside the FPGA, which have no analog issues, and get the software worked out. Then the SerDes on the edge of the FPGA may have loopbacks in both directions, and the Marvell may as well, so you can for example go directly from FPGA to Marvell with one as TX and one as RX and vice versa, or enable a LAN-side shallow loopback on the Marvell and talk to yourself.
Also, depending on the speeds, hand-made loopbacks might be noisy, so sometimes a PCB-based loopback (which also has to be designed) is the better choice.
Can you ping yourself? Absolutely. You can also use other low-level network interfaces, like raw sockets, to make your own packets and talk to yourself through them. Ping, a ping flood, iperf, netperf, etc. are all fine ways to exercise the interface, or get a warm fuzzy about it, during both development and manufacturing test.
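To illustrate the raw-socket option: a sketch, assuming Linux AF_PACKET sockets, root privileges, both interfaces up, and two placeholder interface names (eth1 out, eth2 back in through the external loopback):

```c
/* Sketch: push a raw Ethernet frame out one interface and receive it on the
 * other end of an external loopback. Interface names eth1/eth2 are placeholders. */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int bind_raw(const char *ifname)
{
    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (s < 0) { perror("socket"); return -1; }

    struct sockaddr_ll sll = {0};
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex  = if_nametoindex(ifname);
    if (bind(s, (struct sockaddr *)&sll, sizeof sll) < 0) {
        perror("bind");
        return -1;
    }
    return s;
}

int main(void)
{
    int tx = bind_raw("eth1");          /* placeholder interface names */
    int rx = bind_raw("eth2");
    if (tx < 0 || rx < 0) return 1;

    unsigned char frame[64] = {
        0xff,0xff,0xff,0xff,0xff,0xff,  /* dst: broadcast              */
        0x02,0x00,0x00,0x00,0x00,0x01,  /* src: locally administered   */
        0x88,0xb5                       /* EtherType: local experimental */
    };
    memset(frame + 14, 0xa5, sizeof frame - 14);   /* test payload */

    if (send(tx, frame, sizeof frame, 0) < 0) perror("send");

    unsigned char rbuf[2048];
    ssize_t n = recv(rx, rbuf, sizeof rbuf, 0);    /* blocks for next frame;
                                                      may also catch unrelated traffic */
    printf("received %zd bytes, payload %s\n", n,
           (n >= 64 && memcmp(rbuf, frame, 64) == 0) ? "matches" : "differs");

    close(tx);
    close(rx);
    return 0;
}
```

A real manufacturing test would filter on the EtherType and loop thousands of frames, but the shape is the same.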
Being an FPGA, you can of course load a test design that pushes the external interfaces and reports the bit error rate.

Output user data fast on Cortex R5

I'm trying to output some user data, a series of characters, from a Cortex R5 to a PC.
The problem is that the UART is too slow for the amount of data and I'm looking for something faster. I hoped ITM could be used, but sadly that's only available on the Cortex-M series. The data contains status info about processes which I'd like to visualise for better insight.
The UART was running at its maximum of 921600 baud, so I'm looking for something faster than that, on the order of 2-5 Mbit.
I found information about the DCC (debug communications channel) and the ETM, but I can't really figure out their speeds or how I could use them with user data instead of trace data.
I have access to tracers and debuggers (Green Hills SuperTrace and RealView ICE), so requiring those is no problem. I just can't figure out how to read the data. Perhaps I missed the obvious?
Edit: For now it looks like the easiest way is to bypass the CP2105 which limits my UART to 921600. I'll connect the RX/TX pins from the SoC to an RPi which should be able to reach much higher baud rates. Of course, I'll also need a logic level shifter since the SoC is only 2.5 V tolerant (74LVC245). If this setup works I'll answer my question. Thanks for the input!
The DCC is probably going to be slow, and maybe intrusive to use. You're limited to using JTAG to access this.
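If you want to experiment with it anyway, the target side of the DCC is only a couple of CP14 accesses; a minimal polled sketch for an ARMv7-A/R core such as the R5 (GCC inline assembly assumed; whether and how the host tool drains the channel depends entirely on the debugger):

```c
/* Sketch: polled write of words to the Debug Communications Channel on an
 * ARMv7-A/R core (Cortex-R5 included). The debugger must drain DBGDTRTX on
 * the other side, e.g. via the JTAG tool's DCC/semihosting terminal. */
#include <stdint.h>

static inline uint32_t dbgdscr_read(void)
{
    uint32_t v;
    __asm__ volatile("mrc p14, 0, %0, c0, c1, 0" : "=r"(v));   /* DBGDSCRint  */
    return v;
}

static inline void dcc_write(uint32_t data)
{
    while (dbgdscr_read() & (1u << 29))   /* DBGDSCR.TXfull: wait for drain   */
        ;
    __asm__ volatile("mcr p14, 0, %0, c0, c5, 0" :: "r"(data)); /* DBGDTRTXint */
}

/* Example: push a small status record, packed 4 bytes per DCC word. */
void dcc_write_block(const uint8_t *p, unsigned len)
{
    uint32_t w = 0;
    for (unsigned i = 0; i < len; i++) {
        w |= (uint32_t)p[i] << (8 * (i & 3));
        if ((i & 3) == 3 || i == len - 1) {
            dcc_write(w);
            w = 0;
        }
    }
}
```

The busy-wait on TXfull is what makes it intrusive: if the host stops draining, the core stalls.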
The ETM ought to be able to trace this information, and you should be able to configure the filtering to trace just the accesses to a specific memory address. It's a very long time since I looked in detail at ETMv3 data trace, so I'm not sure if you need to trace the associated instruction or not. The debug tools also tend to be more focused on tracing instructions, with data being an additional decoration rather than presented as a raw datastream, so processing the data might be non-trivial.
The ETM should provide several bits of data throughput per cycle, so as long as the data is in small bursts there should be enough bandwidth. Obviously this is package dependent, but small numbers of Gbps are achievable (with a substantial protocol cost depending on what information you're trying to push through the trace stream).
In some chips, an ETM can be shared between several processors (of the same type). ETCSCR[14:14] will be non-zero if this is the case, and then you're limited to selecting one core and tracing that (until the ETM is disabled/re-programmed).

How do I increase the speed of my USB cdc device?

I am upgrading the processor in an embedded system for work. This is all in C, with no OS. Part of that upgrade includes migrating the processor-PC communications interface from IEEE-488 to USB. I finally got the USB firmware written, and have been testing it. It was going great until I tried to push through lots of data only to discover my USB connection is slower than the old IEEE-488 connection. I have the USB device enumerating as a CDC device with a baud rate of 115200 bps, but it is clear that I am not even reaching that throughput, and I thought that number was a dummy value that is a holdover from RS232 days, but I might be wrong. I control every aspect of this from the front end on the PC to the firmware on the embedded system.
I am assuming my issue is how I write to the USB on the embedded system side. Right now my USB_Write function is run in free time, and is just a while loop that writes one char to the USB port until the write buffer is empty. Is there a more efficient way to do this?
One concern I have is that in the old system we had a board dedicated to communications. The CPU would just write data across a bus to this board and it would handle communications, which means the CPU didn't have to waste free time handling the actual communications, but could offload them to a "coprocessor" (not a CPU, but functionally the same here). Even with this concern, though, I figured I should be getting faster speeds given that full speed USB is on the order of MB/s while IEEE-488 is on the order of kB/s.
In short is this more likely a fundamental system constraint or a software optimization issue?
I thought that number was a dummy value that is a holdover from RS232 days, but I might be wrong.
You are correct, the baud number is a dummy value. If you create a CDC/RS232 adapter you would use this to configure your RS232 hardware, in this case it means nothing.
Is there a more efficient way to do this?
Absolutely! You should be writing chunks of data the same size as your USB endpoint for maximum transfer speed. Depending on the device you are using, your stream of single-byte writes may be gathered into a single packet before sending, but from my experience (and your results) this is unlikely.
Depending on your latency requirements you can add a circular buffer and only issue data from it to the USB_Write function when you have ENDPOINT_SZ bytes available. If this results in excessive latency, or your interface is not always communicating, you may want to implement Nagle's algorithm.
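To sketch that idea (not any particular vendor stack: usb_cdc_send_packet() is a placeholder for whatever call queues one IN packet, and ENDPOINT_SZ is your CDC data endpoint size, typically 64 bytes on full speed):

```c
/* Sketch: accumulate outgoing bytes and hand them to the USB stack in
 * endpoint-sized packets. usb_cdc_send_packet() is a placeholder for the
 * vendor stack's "queue one IN packet" call. */
#include <stdbool.h>
#include <stdint.h>

#define ENDPOINT_SZ  64u
#define TXBUF_SZ     1024u            /* power of two for cheap wrapping */

extern bool usb_cdc_send_packet(const uint8_t *data, uint16_t len); /* placeholder */

static uint8_t  txbuf[TXBUF_SZ];
static uint16_t head, tail;           /* head = producer, tail = consumer */

static uint16_t tx_count(void) { return (uint16_t)(head - tail) & (TXBUF_SZ - 1); }

/* Called by the application: just enqueue, never touch the hardware here. */
bool tx_putc(uint8_t c)
{
    if (tx_count() == TXBUF_SZ - 1)
        return false;                 /* full: caller decides to wait or drop */
    txbuf[head] = c;
    head = (head + 1) & (TXBUF_SZ - 1);
    return true;
}

/* Called from free time (or the endpoint-complete interrupt): send a full
 * packet when one is available, or flush a short packet when 'force' is set
 * (e.g. from a timeout, which is the Nagle-style latency bound). */
void tx_service(bool force)
{
    uint16_t avail = tx_count();
    if (avail == 0 || (avail < ENDPOINT_SZ && !force))
        return;

    uint8_t  pkt[ENDPOINT_SZ];
    uint16_t len = avail < ENDPOINT_SZ ? avail : ENDPOINT_SZ;
    for (uint16_t i = 0; i < len; i++)
        pkt[i] = txbuf[(tail + i) & (TXBUF_SZ - 1)];

    if (usb_cdc_send_packet(pkt, len))            /* only consume on success */
        tail = (tail + len) & (TXBUF_SZ - 1);
}
```

A timer or a "nothing sent for N ms" check can drive the force flag, which gives you the Nagle-style bound on latency.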
One concern I have is that in the old system we had a board dedicated to communications.
The NXP part you mentioned in the comments is without a doubt fast enough to saturate a USB full speed connection.
In short is this more likely a fundamental system constraint or a software optimization issue?
I would consider this a software design issue rather than an optimisation one, but no, it is unlikely you are fundamentally stuck.
Do take care to figure out exactly what sort of USB connection you are using, though: if you are using USB 1.1 you will be limited to 64 KB/s, and on USB 2.0 full speed you will be limited to 512 KB/s. If you require higher throughput you should migrate to using a separate bulk endpoint for the data transfer.
I would recommend reading through the USB Made Simple site to get a good overview of the various USB speeds and their capabilities.
One final issue: vendor CDC libraries are not always the best, and implementations of the CDC standard vary. You can theoretically get more data through a CDC endpoint by using larger endpoints, but I have seen this bring host-side drivers to their knees; if you go this route, create a custom driver using bulk endpoints.
Try testing your device on multiple systems, you may find you get quite different results between windows and linux. This will help to point the finger at the host end.
And finally, make sure you are doing big buffered reads on the host side, USB will stop transferring data once the host side buffers are full.

How to get WIFI parameters (bandwidth, delay) on Ubuntu using C

I am a student and I am writing a simple application in C99. The program should run on Ubuntu.
I have one problem: I don't know how to get WiFi parameters like bandwidth or delay, and I have no idea how to approach it. Is it possible using standard functions or any Linux API? (I am a Windows user.)
In general, you don't know the bandwidth or delay of a WiFi device.
Bandwidth and delay are properties of a link, not of the device.
As far as I know, WiFi drivers do not hold this kind of information.
The closest link-related information they expose is the SINR.
To measure bandwidth or delay, you have to write your own code.
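For the delay part, a round-trip measurement over a plain UDP socket is usually enough; a sketch, assuming something on the peer echoes UDP back (the address and port below are placeholders):

```c
/* Sketch: estimate one link's round-trip delay with a UDP echo.
 * Assumes the peer echoes UDP back; address and port are placeholders. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(7);                          /* placeholder echo port */
    inet_pton(AF_INET, "192.168.1.1", &peer.sin_addr);   /* placeholder peer      */

    struct timeval tmo = { .tv_sec = 1 };                /* 1 s receive timeout   */
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tmo, sizeof tmo);

    char buf[64] = "probe";
    for (int i = 0; i < 10; i++) {
        double t0 = now_ms();
        sendto(s, buf, sizeof buf, 0, (struct sockaddr *)&peer, sizeof peer);
        if (recv(s, buf, sizeof buf, 0) > 0)
            printf("rtt %.2f ms\n", now_ms() - t0);
        else
            printf("timeout\n");
        sleep(1);
    }
    close(s);
    return 0;
}
```

Throughput can be estimated the same way by timing how long a large burst of packets takes to get across.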
Maybe you should tell us more about your concrete problem. For now, I assume that you are interested in the throughput and latency of a specific wireless link, i.e. a link between two 802.11 stations. This could be a link between an access point and a client or between two ad-hoc stations.
The short answer is that there is no such API. In fact, it is not trivial even to estimate these two link parameters. They depend on the signal quality, on the data rate used by the sending station, on the interference, on the channel utilization, on the load of the computer systems at both ends, and probably a lot of other factors.
Depending on the wireless driver you are using it may be possible to obtain information about the currently used data rate and some packet loss statistics for the station you are communicating with. Have a look at net/mac80211/sta_info.h in your Linux kernel source tree. If you are using MadWifi, you may find useful information in the files below /proc/net/madwifi/ath0/ and in the output of wlanconfig ath0 list sta.
However, all you can do is to make a prediction. If the link quality changes suddenly, your prediction may be entirely wrong.

How would one go about to measure differences in clock time on two different devices?

I'm currently in an early phase of developing a mobile app that depends heavily on timestamps.
A master device is connected to several client devices over wifi, and issues various commands to these. When the client devices receive commands, they need to mark the (relative) timestamp when the command is executed.
While all this is simple enough, I haven't come up with a solution for how to deal with clock differences. For example, the master device might have its clock at 12:01:01, while client A is on 12:01:02 and client B on 12:01:03. Mostly, I can expect these devices to be set to similar times, as they sync over NTP. However, the nature of my application requires ms precision, so I would like to safeguard against discrepancies.
A short delay between issuing a command and executing the command is fine, however an incorrect timestamp of when that command was executed is not.
So far, I'm thinking of something along the lines of having the master device ping each client device to determine the transaction time, and then request the client to send its "local" time. Based on this, I can calculate the time difference between master and client. Once the time difference is known, the client can adapt its timestamps accordingly.
I am not very familiar with networking though, and I suspect that pinging a device is not a very reliable method of establishing transaction time, since a lot of factors apply and latency may change.
I assume that there are many real-world settings where such timing issues are important, and thus there should be solutions already. Does anyone know of any? Is it enough to simply divide response time by two?
Thanks!
One heads over to RFC 5905 for NTPv4 and learns from the folks who have really put their noodle to this problem and worked out how to do it.
Or you simply make sure NTP is working properly on your servers so that you don't have this problem in the first place.
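For the "divide by two" question specifically, the arithmetic NTP uses over one request/response exchange looks like the following sketch; it assumes the two path directions are roughly symmetric, which is exactly where the residual error comes from:

```c
/* Sketch of the classic NTP offset/delay arithmetic (RFC 5905).
 * t1: client transmit time, t2: server receive time,
 * t3: server transmit time, t4: client receive time.
 * Assumes the two path directions are roughly symmetric. */
#include <stdio.h>

int main(void)
{
    /* Example timestamps in milliseconds (made-up numbers). */
    double t1 = 1000.0, t2 = 2012.0, t3 = 2013.0, t4 = 1021.0;

    double offset = ((t2 - t1) + (t3 - t4)) / 2.0;  /* how far the client clock is off */
    double delay  = (t4 - t1) - (t3 - t2);          /* round-trip network delay        */

    printf("offset %.1f ms, delay %.1f ms\n", offset, delay);
    return 0;
}
```

So yes, halving the round trip is the core of it, but you also get the delay for free, which tells you how much to trust each sample; NTP then filters many such samples.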
