How to get WiFi parameters (bandwidth, delay) on Ubuntu using C

I am a student writing a simple application in C99. The program should run on Ubuntu.
I have one problem: I don't know how to get WiFi parameters such as bandwidth or delay, and I have no idea how to approach it. Is it possible to do this using standard functions or some Linux API? (I am normally a Windows user.)

In general, you cannot simply read the bandwidth or delay of a WiFi device.
Bandwidth and delay are properties of a link, not of the device itself.
As far as I know, WiFi drivers do not hold this kind of information; the closest link-related metric they expose is the SINR.
To measure bandwidth or delay, you will have to write your own code.
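For the delay part, a do-it-yourself measurement can be as simple as timing a round trip over a UDP socket. A minimal sketch, where the peer address is a placeholder and an echo service is assumed to be listening there:

```c
/* Minimal sketch of a do-it-yourself delay measurement: send a UDP datagram
 * to an echo server and time the round trip. "192.0.2.1" is a placeholder
 * address; an echo service must actually be running there. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(7);                       /* classic echo port */
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);  /* placeholder peer  */

    char probe[] = "ping", reply[sizeof(probe)];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    sendto(sock, probe, sizeof(probe), 0, (struct sockaddr *)&peer, sizeof(peer));
    if (recv(sock, reply, sizeof(reply), 0) < 0) { perror("recv"); return 1; }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double rtt_ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("round-trip time: %.3f ms\n", rtt_ms);

    close(sock);
    return 0;
}
```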

Maybe you should tell us more about your concrete problem. For now, I assume that you are interested in the throughput and latency of a specific wireless link, i.e. a link between two 802.11 stations. This could be a link between an access point and a client or between two ad-hoc stations.
The short answer is that there is no such API. In fact, it is not trivial even to estimate these two link parameters. They depend on the signal quality, on the data rate used by the sending station, on the interference, on the channel utilization, on the load of the computer systems at both ends, and probably a lot of other factors.
Depending on the wireless driver you are using it may be possible to obtain information about the currently used data rate and some packet loss statistics for the station you are communicating with. Have a look at net/mac80211/sta_info.h in your Linux kernel source tree. If you are using MadWifi, you may find useful information in the files below /proc/net/madwifi/ath0/ and in the output of wlanconfig ath0 list sta.
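Depending on the driver, the currently used data rate may also be available from user space via the legacy Wireless Extensions ioctl interface. A minimal sketch, assuming an interface named wlan0 (newer kernels expose the same information via nl80211):

```c
/* Minimal sketch: query the current bit rate of a wireless interface via
 * the legacy Wireless Extensions ioctl SIOCGIWRATE. "wlan0" is an assumed
 * interface name. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/wireless.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct iwreq req;
    memset(&req, 0, sizeof(req));
    strncpy(req.ifr_name, "wlan0", IFNAMSIZ - 1);

    if (ioctl(sock, SIOCGIWRATE, &req) < 0) {
        perror("SIOCGIWRATE");
        close(sock);
        return 1;
    }

    /* req.u.bitrate.value is reported in bits per second */
    printf("current bit rate: %d Mbit/s\n", req.u.bitrate.value / 1000000);
    close(sock);
    return 0;
}
```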
However, all you can do is to make a prediction. If the link quality changes suddenly, your prediction may be entirely wrong.

Related

Upload in a restricted country is too slow

I'm a C programmer and may write Linux programs that connect machines over the internet. After noticing that speedtest.net couldn't upload, or had very poor upload speed, I decided to test a simple TCP socket connection to see whether it's really that slow, and found that yes, it really is. I've rented a VPS outside my country. I don't know what the government is doing to the infrastructure, how packets are routed, or how they're restricted. Reproducing what I saw on speedtest.net with a simple socket connection (see the sketch below) convinced me that I don't stand a chance; when the traffic is shaped like that, there's no way around it. It also proves that the restriction is not on HTTPS or any other application-layer protocol, since even a plain TCP socket connection can't reach a reasonable speed. The upload speed is below 10 kilobytes per second! Damn!
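A minimal sketch of the kind of upload test described above (the server address and port are placeholders for the rented VPS, which needs a process listening there and draining the socket):

```c
/* Minimal sketch of an upload throughput test: open a TCP connection, push
 * a fixed amount of data, and time it. "203.0.113.1" and port 5000 are
 * placeholders for the VPS endpoint. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(5000);                      /* placeholder port */
    inet_pton(AF_INET, "203.0.113.1", &srv.sin_addr);  /* placeholder VPS  */

    if (connect(sock, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect"); return 1;
    }

    char buf[4096];
    memset(buf, 'x', sizeof(buf));
    const size_t total = 1024 * 1024;                  /* upload 1 MiB */

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (size_t sent = 0; sent < total; ) {
        ssize_t n = send(sock, buf, sizeof(buf), 0);
        if (n <= 0) { perror("send"); return 1; }
        sent += (size_t)n;
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double secs = (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("uploaded %zu bytes in %.2f s (%.1f kB/s)\n",
           total, secs, total / secs / 1000.0);
    close(sock);
    return 0;
}
```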
In contrast, after I got disappointed, I examined some barrier breakers like the CyberGhost extension for Chrome. I was surprised to see that it can overcome the barrier, increasing the upload speed to about 200 kilobytes per second! How?! They can't be using anything closer to the hardware than sockets.
Now I've come here to consult with you and see what ideas you may have, so that I can write a program, or change the one I've written, based on them.
Thank you

Output user data fast on Cortex R5

I'm trying to output some user data, a character stream, from a Cortex R5 to a PC.
The problem is that the UART is too slow for the amount of data, and I'm looking for something faster. I hoped ITM could be used, but sadly that's only available on the Cortex-M series. The data contains status info about processes which I'd like to visualise for better insight.
The UART was running at its maximum of 921600 baud, so I'm looking for something faster than that, in the range of 2-5 Mbit/s.
I found information on the DCC (debug communication channel) and the ETM, but I can't really figure out their speeds or how I could use them with user data instead of trace data.
I have access to tracers and debuggers (Green Hills SuperTrace and RealView ICE), so requiring those is no problem. I just can't figure out how to read the data. Perhaps I missed the obvious?
Edit: For now it looks like the easiest way is to bypass the CP2105, which limits my UART to 921600 baud. I'll connect the RX/TX pins from the SoC to an RPi, which should be able to handle much higher baud rates (see the sketch below). Of course, I'll also need a logic level shifter (74LVC245), since the SoC is only 2.5 V tolerant. If this setup works I'll answer my question. Thanks for the input!
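On the RPi side, a nonstandard or high baud rate can typically be requested through Linux's termios2/BOTHER interface instead of the classic termios constants. A minimal sketch, assuming a 3 Mbaud target and the RPi UART at /dev/ttyAMA0 (the achievable rates depend on the UART clock):

```c
/* Minimal sketch: request a nonstandard baud rate (here 3 Mbaud) on Linux
 * via the termios2/BOTHER interface. "/dev/ttyAMA0" is an assumed device
 * name for the RPi UART. Do not include <termios.h> alongside this. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <asm/termbits.h>   /* struct termios2, BOTHER, TCGETS2/TCSETS2 */

int main(void)
{
    int fd = open("/dev/ttyAMA0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios2 tio;
    if (ioctl(fd, TCGETS2, &tio) < 0) { perror("TCGETS2"); close(fd); return 1; }

    tio.c_cflag &= ~CBAUD;      /* clear the standard baud-rate bits          */
    tio.c_cflag |= BOTHER;      /* use the arbitrary rate in c_ispeed/c_ospeed */
    tio.c_ispeed = 3000000;
    tio.c_ospeed = 3000000;

    if (ioctl(fd, TCSETS2, &tio) < 0) { perror("TCSETS2"); close(fd); return 1; }

    /* ... read() the incoming status stream here ... */
    close(fd);
    return 0;
}
```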
The DCC is probably going to be slow, and maybe intrusive to use. You're limited to using JTAG to access this.
The ETM ought to be able to trace this information, and you should be able to configure the filtering to trace just the accesses to a specific memory address. It's been a very long time since I looked in detail at the ETMv3 data trace, so I'm not sure whether you need to trace the associated instruction or not. The debug tools also tend to be more focused on tracing instructions, with data being an additional decoration rather than a raw data stream, so processing the data might be non-trivial.
The ETM should provide several bits of data throughput per cycle, so as long as the data comes in small bursts there should be enough bandwidth. This is obviously package dependent, but small numbers of Gbit/s are achievable (with a substantial protocol cost, depending on what information you're trying to push through the trace stream).
In some chips, an ETM can be shared between several processors (of the same type). ETCSCR[14:14] will be non-zero if this is the case, and then you're limited to selecting one core and tracing that (until the ETM is disabled/re-programmed).

How do I increase the speed of my USB CDC device?

I am upgrading the processor in an embedded system for work. This is all in C, with no OS. Part of the upgrade involves migrating the processor-PC communications interface from IEEE-488 to USB. I finally got the USB firmware written and have been testing it. It was going great until I tried to push through lots of data, only to discover that my USB connection is slower than the old IEEE-488 connection. The USB device enumerates as a CDC device with a baud rate of 115200 bps, but it is clear that I am not even reaching that throughput. I thought that number was a dummy value, a holdover from the RS-232 days, but I might be wrong. I control every aspect of this, from the front end on the PC to the firmware on the embedded system.
I am assuming my issue is how I write to the USB on the embedded-system side. Right now my USB_Write function runs in free time and is just a while loop that writes one char at a time to the USB port until the write buffer is empty. Is there a more efficient way to do this?
One concern I have is that the old system had a board dedicated to communications. The CPU would just write data across a bus to this board, which handled the communications itself, so the CPU didn't have to waste free time on them; it could offload the work to a "co-processor" (not a CPU, but functionally the same here). Even so, I figured I should be getting faster speeds, given that full-speed USB is on the order of MB/s while IEEE-488 is on the order of kB/s.
In short, is this more likely a fundamental system constraint or a software optimization issue?
I thought that number was a dummy value, a holdover from the RS-232 days, but I might be wrong.
You are correct: the baud rate is a dummy value. If you were building a CDC-to-RS-232 adapter you would use it to configure the RS-232 hardware; in this case it means nothing.
Is there a more efficient way to do this?
Absolutely! For maximum transfer speed you should be writing chunks of data the same size as your USB endpoint. Depending on the device you are using, your stream of single-byte writes may be gathered into a single packet before sending, but in my experience (and judging by your results) this is unlikely.
Depending on your latency requirements, you can add a circular buffer and only pass data from it to the USB_Write function once you have ENDPOINT_SZ bytes, as sketched below. If this results in excessive latency, or your interface is not always communicating, you may want to implement Nagle's algorithm.
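A minimal sketch of that buffering scheme, assuming a 64-byte endpoint and a USB_Write() that takes a buffer and a length (both are assumptions; the real endpoint size comes from your device descriptor, and this sketch omits overflow handling):

```c
/* Minimal sketch: accumulate outgoing bytes in a ring buffer and hand them
 * to USB_Write() only in endpoint-sized chunks, instead of one byte at a
 * time. ENDPOINT_SZ, RING_SZ, and the USB_Write() signature are assumptions. */
#include <stdint.h>

#define ENDPOINT_SZ 64u
#define RING_SZ     1024u          /* power of two for cheap wrapping */

static uint8_t  ring[RING_SZ];
static uint32_t head, tail;        /* head: next write, tail: next read */

extern void USB_Write(const uint8_t *buf, uint32_t len); /* provided elsewhere */

void tx_enqueue(uint8_t byte)
{
    /* no overflow check: assumes tx_service() keeps up with producers */
    ring[head++ & (RING_SZ - 1)] = byte;
}

/* Call this from free time: sends one full packet whenever enough data
 * has accumulated. */
void tx_service(void)
{
    while (head - tail >= ENDPOINT_SZ) {
        uint8_t pkt[ENDPOINT_SZ];
        for (uint32_t i = 0; i < ENDPOINT_SZ; i++)
            pkt[i] = ring[tail++ & (RING_SZ - 1)];
        USB_Write(pkt, ENDPOINT_SZ);
    }
}
```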
One concern I have is that the old system had a board dedicated to communications.
The NXP part you mentioned in the comments is without a doubt fast enough to saturate a USB full speed connection.
In short, is this more likely a fundamental system constraint or a software optimization issue?
I would consider this a software design issue rather than an optimisation one; but no, it is unlikely that you are fundamentally stuck.
Do take care to figure out exactly what sort of USB connection you are using, though: with USB 1.1 you will be limited to 64 KB/s, and with USB 2.0 full speed to 512 KB/s. If you require higher throughput, you should migrate to a separate bulk endpoint for the data transfer.
I would recommend reading through the USB Made Simple site to get a good overview of the various USB speeds and their capabilities.
One final issue: vendor CDC libraries are not always the best, and implementations of the CDC standard vary. You can theoretically get more data through a CDC endpoint by using larger endpoints, but I have seen this bring host-side drivers to their knees; if you go this route, create a custom driver using bulk endpoints.
Try testing your device on multiple systems; you may find you get quite different results between Windows and Linux. This will help point the finger at the host end.
And finally, make sure you are doing big buffered reads on the host side; USB will stop transferring data once the host-side buffers are full.

Get system date/time via USB

Is there any way to query the system's date/time via USB without installing anything on the host computer (maybe just drivers)?
Background of the original problem
To avoid the XY problem, let me explain a bit what I'm trying to do.
To be able to calculate a TOTP token for 2FA (e.g. like the Google Authenticator app does) you need a real-time clock to get the date and time.
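For context, a minimal sketch of why the clock matters: a TOTP value (RFC 6238) is an HMAC over the current 30-second counter, so without the time there is nothing to compute. This uses OpenSSL's HMAC() with a made-up example key:

```c
/* Minimal sketch of TOTP's time dependence (RFC 6238): the token is an HMAC
 * of the current 30-second counter. The key below is the RFC test secret,
 * purely for illustration. Build with: cc totp.c -lcrypto */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <openssl/hmac.h>

int main(void)
{
    const unsigned char key[] = "12345678901234567890"; /* example secret */
    uint64_t counter = (uint64_t)time(NULL) / 30;       /* 30 s time step */

    unsigned char msg[8];                               /* big-endian counter */
    for (int i = 7; i >= 0; i--) { msg[i] = counter & 0xff; counter >>= 8; }

    unsigned char mac[20];
    unsigned int  maclen = 0;
    HMAC(EVP_sha1(), key, sizeof(key) - 1, msg, sizeof(msg), mac, &maclen);

    /* dynamic truncation per RFC 4226 */
    int off = mac[19] & 0x0f;
    uint32_t bin = ((uint32_t)(mac[off] & 0x7f) << 24) |
                   ((uint32_t)mac[off + 1] << 16) |
                   ((uint32_t)mac[off + 2] << 8)  |
                   (uint32_t)mac[off + 3];

    printf("TOTP: %06u\n", bin % 1000000u);
    return 0;
}
```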
There's this USB device called the SC4-HSM that I would like to use to calculate the tokens; however, it doesn't have a clock, and according to the designer, adding one would be too expensive (it needs a battery, etc.).
Possible solution to the original problem
This device is going to be used with a computer, which of course has an RTC. Thus I had the idea of querying the system for the date/time, which would solve the issue.
(Note: I know that a USB device can be connected to all sorts of hosts and not all hosts will have an RTC, but since this only needs to work with a computer, I thought this shouldn't be an issue)
My first thought was that there might be some USB device class that had date/time needs, so I could register the device as that type and then I would be able to query the values.
After going through the device class codes list (Internet Archive), nothing jumped out at me as needing date/time. The closest ones I could think of were:
Content Security (PDF)
Personal Healthcare
Smart Card Class (PDF)
I skimmed the device class documents in the USB Implementers Forum but there's nothing in there even remotely related to date or time.
Current problem
Since the USB specs seemed like a dead end, I thought that maybe there is a way to write a very simple USB driver that is auto-loaded when the device is plugged into a computer, so that the driver can return the date/time when the device asks for it (unless I'm misunderstanding something).
I am now looking through USB development docs like Michael Opdenacker's Linux USB drivers course, and I tried the Linux USB Project, which seems dead. I also skimmed through Driver Development for Windows NT just to get an idea, but I am still not able to figure out whether this is possible, or how hard it would be.
I'm a complete beginner at this, and maybe it is out of my skill level, but I would like to know whether I will need weird hacks and workarounds or whether there is a much more straightforward way to do this.
There seems to be little information about it, or I'm just searching in the wrong places.
Any ideas/or pointers on either solving the original problem or the current one?
Note that system time is not necessarily the general time, i.e. the 'atomic' time you would get from an NTP server.
The most obvious solution is to use autorun. This is also possible on Linux, but autorun is normally blocked there, so the user has to activate it explicitly:
https://askubuntu.com/questions/642511/how-to-autorun-files-and-scripts-in-ubuntu-when-inserting-a-usb-stick-like-autor
The Linux commands to get the time are date and hwclock; alternatively, if the computer is connected to the net, it may be possible to contact an NTP server (if the firewall does not block this).
Your autorun program then has to send the data to the SC4-HSM. I do not know which USB classes the SC4-HSM implements, but if it implements CDC ACM (virtual COM port) this is easy:
Unable to sync computer time to Arduino via USB
(something like echo "T$(($(date +%s)+60*60*$TZ_adjust))" >/dev/tty.usbmodemfa131)
It may also be possible to access the system time through the USB drivers themselves, but I do not know offhand.
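For completeness, here is the same idea as the shell one-liner in host-side C, assuming the device enumerates as /dev/ttyACM0 on Linux:

```c
/* Minimal sketch of the host side: write the current Unix time over a CDC
 * ACM serial device. "/dev/ttyACM0" is an assumed device node; the
 * "T<seconds>" framing matches the shell one-liner above. */
#include <stdio.h>
#include <time.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyACM0", O_WRONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    char msg[32];
    int len = snprintf(msg, sizeof(msg), "T%lld\n", (long long)time(NULL));
    if (write(fd, msg, (size_t)len) != len) perror("write");

    close(fd);
    return 0;
}
```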

How would one go about measuring differences in clock time on two different devices?

I'm currently in an early phase of developing a mobile app that depends heavily on timestamps.
A master device is connected to several client devices over wifi, and issues various commands to these. When the client devices receive commands, they need to mark the (relative) timestamp when the command is executed.
While all this is simple enough, I haven't come up with a solution for how to deal with clock differences. For example, the master device might have its clock at 12:01:01, while client A is on 12:01:02 and client B on 12:01:03. Mostly, I can expect these devices to be set to similar times, as they sync over NTP. However, the nature of my application requires millisecond precision, so I would like to safeguard against discrepancies.
A short delay between issuing a command and executing the command is fine, however an incorrect timestamp of when that command was executed is not.
So far, I'm thinking of something along the lines of having the master device ping each client device to determine the transaction time, and then requesting the client to send its "local" time. Based on this, I can calculate the time difference between master and client. Once the time difference is known, the client can adjust its timestamps accordingly.
I am not very familiar with networking, though, and I suspect that pinging a device is not a very reliable way of establishing transaction time, since a lot of factors apply and latency may change.
I assume there are many real-world settings where such timing issues are important, so there should be existing solutions. Does anyone know of any? Is it enough to simply divide the response time by two?
Thanks!
One heads over to RFC 5905 for NTPv4 and learns from the folks who have really put their noodles to this problem and worked out how to solve it; the core offset/delay calculation is sketched below.
Or you simply make sure NTP is working properly on your servers so that you don't have this problem in the first place.
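For reference, a minimal sketch of the offset and delay computation at the heart of RFC 5905, using four timestamps with illustrative values (not a real exchange):

```c
/* Minimal sketch of the NTP-style offset/delay calculation from RFC 5905.
 * t0: client send time, t1: server receive time, t2: server send time,
 * t3: client receive time (each on its own local clock, in seconds).
 * The values below are illustrative only. */
#include <stdio.h>

int main(void)
{
    double t0 = 100.000;  /* client transmits request */
    double t1 = 101.050;  /* server receives request  */
    double t2 = 101.055;  /* server transmits reply   */
    double t3 = 100.120;  /* client receives reply    */

    double delay  = (t3 - t0) - (t2 - t1);          /* round-trip network delay */
    double offset = ((t1 - t0) + (t2 - t3)) / 2.0;  /* client clock offset      */

    printf("round-trip delay: %.3f s, clock offset: %.3f s\n", delay, offset);
    return 0;
}
```

Note how this improves on naively dividing the response time by two: the two one-way legs are measured against both clocks, so a constant clock offset cancels out of the delay and shows up cleanly in the offset term.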
