How do I calculate network utilization for both transmit and receive

How do I calculate network utilization for both transmit and receive, either in C or in a shell script?
My system is an embedded Linux. My current method is to record the bytes received (b1), wait 1 second, then record them again (b2). Then, knowing the link speed, I calculate the percentage of the receive bandwidth used:
receive utilization = (((b2 - b1) * 8) / link_speed) * 100
Is there a better method?

Check out open source programs that do something similar.
My search turned up a little tool called vnstat.
It tries to query the /proc file system, if available, and uses getifaddrs for systems that do not have it. It then fetches the correct AF_LINK interface, reads the corresponding if_data struct, and finally reads out the transmitted and received bytes, like this:
ifinfo.rx = ifd->ifi_ibytes;
ifinfo.tx = ifd->ifi_obytes;
Also remember that sleep() might sleep longer than exactly 1 second, so you should probably use a high-resolution (wall clock) timer in your equation -- or you could delve into the if-functions and structures to see if you find anything appropriate for your task.
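For example, here is a minimal sketch of that idea, assuming a POSIX clock_gettime() and a hypothetical read_bytes_received() helper that returns the interface's received-bytes counter:
#include <stdint.h>
#include <time.h>
#include <unistd.h>

extern uint64_t read_bytes_received(const char *ifname); /* hypothetical helper */

double rx_utilization_percent(uint64_t link_speed_bps)
{
    struct timespec t1, t2;
    uint64_t b1, b2;

    clock_gettime(CLOCK_MONOTONIC, &t1);
    b1 = read_bytes_received("eth0");
    sleep(1); /* may oversleep ... */
    clock_gettime(CLOCK_MONOTONIC, &t2);
    b2 = read_bytes_received("eth0");

    /* ... so divide by the measured interval, not by exactly 1 second */
    double secs = (t2.tv_sec - t1.tv_sec) + (t2.tv_nsec - t1.tv_nsec) / 1e9;
    return ((double)(b2 - b1) * 8.0 / secs) / (double)link_speed_bps * 100.0;
}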

Thanks to 'csl' for pointing me in the direction of vnstat. Using the vnstat example, here is how I calculate network utilization.
#include <math.h>   /* for rintf() */
#include <stdint.h>
#include <unistd.h>

#define FP32 4294967295ULL
#define FP64 18446744073709551615ULL
/* handle counter wraparound for both 32-bit and 64-bit counters */
#define COUNTERCALC(a,b) ( (b) >= (a) ? (b)-(a) : ( (a) > FP32 ? FP64-(a)+(b) : FP32-(a)+(b) ) )
int sample_time = 2; /* seconds */
int link_speed = 100; /* Mbits/s */
uint64_t rx, rx1, rx2;
float rate, percent;
/*
 * Either read:
 * '/proc/net/dev'
 * or
 * '/sys/class/net/%s/statistics/rx_bytes'
 * for the bytes-received counter
 */
rx1 = read_bytes_received("eth0");
sleep(sample_time); /* wait */
rx2 = read_bytes_received("eth0");
/* convert the byte delta to MiB first, then to Mbits/s */
rx = rintf(COUNTERCALC(rx1, rx2)/(float)1048576);
rate = (rx*8)/(float)sample_time;
percent = (rate/(float)link_speed)*100;
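For completeness, here is one possible sketch of the read_bytes_received() helper assumed above, reading the sysfs counter; the 0-on-error convention is my own choice, not part of the original post:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* reads /sys/class/net/<ifname>/statistics/rx_bytes; returns 0 on error */
uint64_t read_bytes_received(const char *ifname)
{
    char path[128];
    uint64_t bytes = 0;
    FILE *fp;

    snprintf(path, sizeof(path), "/sys/class/net/%s/statistics/rx_bytes", ifname);
    fp = fopen(path, "r");
    if (fp == NULL)
        return 0;
    if (fscanf(fp, "%" SCNu64, &bytes) != 1)
        bytes = 0;
    fclose(fp);
    return bytes;
}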

Related

How to receive long data frame by esp32 rmt ringbuffer

I'm a newbie and I'm playing with an ESP32 and an IR receiver to capture the signal from an AC IR remote.
Currently, I'm referring to the example code for capturing IR signals, as follows:
static void nec_rx_init()
{
    rmt_config_t rmt_rx;
    rmt_rx.channel = RMT_RX_CHANNEL;
    rmt_rx.gpio_num = RMT_RX_GPIO_NUM;
    rmt_rx.clk_div = RMT_CLK_DIV;
    rmt_rx.mem_block_num = 1;
    rmt_rx.rmt_mode = RMT_MODE_RX;
    rmt_rx.rx_config.filter_en = true;
    rmt_rx.rx_config.filter_ticks_thresh = 100;
    rmt_rx.rx_config.idle_threshold = rmt_item32_tIMEOUT_US / 10 * (RMT_TICK_10_US);
    rmt_config(&rmt_rx);
    rmt_driver_install(rmt_rx.channel, 3000, 0);
}
//get RMT RX ringbuffer
RingbufHandle_t rb = NULL;
rmt_get_ringbuf_handle(RMT_RX_CHANNEL, &rb);
// rmt_rx_start(channel, rx_idx_rst) - set true to reset the memory index for the receiver
rmt_rx_start(RMT_RX_CHANNEL, 1);
while(rb) {
    size_t rx_size = 0;
    //try to receive data from the ringbuffer.
    //The RMT driver pushes all the data it receives to its ringbuffer.
    //We just need to parse the values and return the space to the ringbuffer.
    rmt_item32_t* item = (rmt_item32_t*) xRingbufferReceive(rb, &rx_size, 1000);
    ...
}
Although the IR signal emitted from the AC IR remote is about 100 items, I always see that rx_size is only 256 bytes (64 items). So the problem is: how can I capture the complete signal from the AC IR remote? Note that I already tried increasing the ring buffer size from 3000 to 10000.
I would appreciate any suggestions on how to deal with this problem.
I had this same issue this past week. I was receiving a serial packet that was 128 bits, but only ever got the first 64. After digging into it a bit deeper I found that the hardware buffer on the RMT interface defaults to one 64x32-bit RAM block per channel (room for 64 items). You can set the channel up to use the memory blocks normally assigned to subsequent channels if you need to receive more data at once.
For my project I used the following function to give 4 RAM blocks to channel 0, thus increasing the maximum receive size to 256 items, which is more than enough for my application. I also had to move the receive to channel 4, since channel 0 was now using the memory blocks for channels 1-3.
rmt_set_mem_block_num((rmt_channel_t) 0, 4);
Documentation for this function can be found here:
https://docs.espressif.com/projects/esp-idf/en/stable/api-reference/peripherals/rmt.html#_CPPv421rmt_set_mem_block_num13rmt_channel_t7uint8_t
It's also worth noting that the driver did throw an error in the serial monitor while the issue was occurring, which helped find the cause:
E (33323) rmt: RMT[0] ERR
E (33323) rmt: status: 0x14000100
E (33373) rmt: RMT RX BUFFER FULL
With the default amount of RAM I was getting a status code of 0x14000040, and when I increased it to 2 blocks I got a status code of 0x13000040. After increasing to 4 blocks of RAM the error message stopped appearing.
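As an untested alternative sketch: since the init code in the question already sets mem_block_num, the extra RAM blocks can also be requested in the config before rmt_config() is called:
// request 4 RAM blocks at config time instead of calling
// rmt_set_mem_block_num() afterwards; channel 0 then consumes the
// blocks of channels 1-3, so those channels cannot be used
rmt_rx.mem_block_num = 4;
rmt_config(&rmt_rx);
rmt_driver_install(rmt_rx.channel, 3000, 0);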

Determine the number of samples in audio buffer

I am writing a small program to perform real-time ambient noise removal using PortAudio. To do some of the necessary calculations (like Fourier transforms), I need to supply the sample data, but I also need to know exactly how many samples I am working with at a given time.
How can I determine the number of audio samples in a buffer?
When attempting to solve this myself, two variables seemed particularly relevant and useful, namely: the sampling rate and the frames per buffer. When I attempted to calculate the number of samples using the sampling rate, I ran into the issue of miscalculating the time between each callback invocation.
int ambienceCallback(const void * inputBuffer,
                     void * outputBuffer,
                     unsigned long framesPerBuffer,
                     const PaStreamCallbackTimeInfo * timeInfo,
                     PaStreamCallbackFlags statusFlags,
                     void * userData)
{
    const SAMPLE * in = (const SAMPLE *) inputBuffer;
    PaStreamParameters * inputParameters = (PaStreamParameters *) userData;
    PaTime time = timeInfo->inputBufferAdcTime;
    int sampleCount = (time - callbackTime) * Pa_GetDeviceInfo(inputParameters->device)->defaultSampleRate;
    callbackTime = time;
    // extraneous ...
}
where callbackTime is a variable declared in the header file, and initialized upon starting the audio input stream.
// extraneous ...
error = Pa_StartStream(stream);
callbackTime = Pa_GetStreamTime(stream);
// extraneous ...
However, the calculated time would always be zero. As a result, I could not make my idea of simply multiplying the sampling rate by the elapsed time work. The other variable, framesPerBuffer, seemed like it could be useful for calculating the sample count if I could find out how many samples are in a frame, but I flat out could not manage to do that.
Again, how can I determine how many samples are in the buffer? As a disclaimer, I am new to audio programming. I am probably mixing up some terms or concepts, causing the more experienced to scratch their heads. (I apologize!)
Get the number of samples from the callback parameters! :)
framesPerBuffer gives you the number of frames.
A frame is a set of samples that occur simultaneously. For a stereo stream, a frame is two samples.
Timestamps are not useful for your purpose; e.g. Pa_GetStreamTime() returns the stream's current time in seconds, and that resolution won't let you calculate an exact number of samples.
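For instance, a minimal sketch of the callback; NUM_CHANNELS is an assumed constant that must match the channel count the stream was opened with:
#define NUM_CHANNELS 2 /* assumed: channel count passed to Pa_OpenStream */

int ambienceCallback(const void *inputBuffer, void *outputBuffer,
                     unsigned long framesPerBuffer,
                     const PaStreamCallbackTimeInfo *timeInfo,
                     PaStreamCallbackFlags statusFlags, void *userData)
{
    const SAMPLE *in = (const SAMPLE *) inputBuffer;
    /* one frame = one sample per channel, captured simultaneously */
    unsigned long sampleCount = framesPerBuffer * NUM_CHANNELS;
    /* ... hand sampleCount samples starting at 'in' to the FFT ... */
    return paContinue;
}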

The meaning of period in ALSA

I'm using ALSA for an audio application on Linux. I found great docs explaining how to use it, although I still have some issues understanding this part of the setup:
/* Set number of periods. Periods used to be called fragments. */
if (snd_pcm_hw_params_set_periods(pcm_handle, hwparams, periods, 0) < 0) {
    fprintf(stderr, "Error setting periods.\n");
    return(-1);
}
What does it mean to set the number of periods when I'm using PLAYBACK mode?
And:
/* Set buffer size (in frames). The resulting latency is given by */
/* latency = periodsize * periods / (rate * bytes_per_frame) */
if (snd_pcm_hw_params_set_buffer_size(pcm_handle, hwparams, (periodsize * periods)>>2) < 0) {
    fprintf(stderr, "Error setting buffersize.\n");
    return(-1);
}
And the same question about the latency: how should I understand it?
I assume you've read and understood this section of the Linux Journal article. You may also find that this blog clarifies things with respect to period size selection (or fragment size, in the blog's terms) in the context of ALSA. To quote:
You shouldn't misuse the fragments logic of sound devices. It's like
this:
The latency is defined by the buffer size.
The wakeup interval is defined by the fragment size.
The buffer fill level will oscillate between 'full buffer' and 'full
buffer minus 1x fragment size minus OS scheduling latency'. Setting
smaller fragment sizes will increase the CPU load and decrease battery
time since you force the CPU to wake up more often. OTOH it increases
drop out safety, since you fill up playback buffer earlier. Choosing
the fragment size is hence something which you should do balancing out
your needs between power consumption and drop-out safety. With modern
processors and a good OS scheduler like the Linux one setting the
fragment size to anything other than half the buffer size does not
make much sense.
...
(Oh, ALSA uses the term 'period' for what I call 'fragment'
above. It's synonymous)
So essentially, you would typically set periods to 2 (as was done in the howto you referenced). Then periodsize * periods is your total buffer size in bytes. Finally, the latency is the delay induced by buffering that many samples, and can be computed by dividing the buffer size by the rate at which samples are played back (i.e. according to the formula latency = periodsize * periods / (rate * bytes_per_frame) in the code comments).
For example, the parameters from the howto:
periods = 2
periodsize = 8192 bytes
rate = 44100Hz
16-bit stereo data (4 bytes per frame)
correspond to a total buffer size of periods * periodsize = 2 * 8192 = 16384 bytes, and a latency of 16384 / (44100 * 4) ≈ 0.093 seconds.
Note also that your hardware may have some size limitations for the supported period size (see this troubleshooting guide).
When the application tries to write samples into the buffer and the buffer is already full, the process goes to sleep. It gets woken up by the hardware through an interrupt; this interrupt is raised at the end of each period.
There should be at least two periods per buffer; otherwise, the buffer is already empty when a wakeup happens, which results in an underrun.
Increasing the number of periods (i.e., reducing the period size) increases the safety margin against underruns caused by scheduling or processing delays.
The latency is just proportional to the buffer size: when you completely fill the buffer, the last sample written is played by the hardware only after all the other samples have been played.
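To make the latency formula concrete, here is a small helper sketch using the howto's variable names (periodsize in bytes, rate in Hz):
/* latency (seconds) = periodsize * periods / (rate * bytes_per_frame) */
double alsa_latency_seconds(unsigned periodsize, unsigned periods,
                            unsigned rate, unsigned bytes_per_frame)
{
    return (double)(periodsize * periods) / (double)(rate * bytes_per_frame);
}
/* e.g. alsa_latency_seconds(8192, 2, 44100, 4) = 16384 / 176400 ≈ 0.093 s */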

How to measure serial receive byte speed, e.g. bytes per second

I'm receiving data byte by byte via serial at a baud rate of 115200. How do I calculate the bytes per second I'm receiving in a C program?
There are only 3 ways to measure bytes actually received per second.
The first way is to keep track of how many bytes you receive in a fixed length of time. For example, each time you receive bytes you might do counter += number_of_bytes, and then every 5 seconds you might do rate = counter/5; counter = 0;.
The second way is to keep track of how much time passed to receive a fixed number of bytes. For example, every time you receive one byte you might do temp = now(); rate = 1/(temp - previous); previous = temp;.
The third way is to combine both of the above. For example, each time you receive bytes you might do temp = now(); rate = number_of_bytes/(temp - previous); previous = temp;.
For all of the above, you end up with individual samples and not an average. To convert the samples into an average you'd need to do something like average = sum_of_samples / number_of_samples. The best way to do this (e.g. if you want nice, smooth-looking graphs) would be to store a lot of samples, replacing the oldest sample with each new sample and recalculating the average.
For example:
double sampleData[1024];
int nextSlot = 0;
double average;

void addSample(double value) {
    double sum = 0;
    sampleData[nextSlot] = value;
    nextSlot++;
    if(nextSlot >= 1024) nextSlot = 0;
    for(int i = 0; i < 1024; i++) sum += sampleData[i];
    average = sum/1024;
}
Of course the final thing (collecting the samples using one of the 3 methods, then finding the average) would need some fiddling to get the resolution you want.
Assuming you have fairly continuous input, just count the number of bytes you receive, and after some number of characters have been received, print out the time and the number of characters over that time. You'll need a fairly good timestamp - clock() may be one reasonable source, but which clock is "best" depends on your system and on how portable you want the code to be (serial comms tend not to be very portable anyway); otherwise your error will probably be large. Each time you print, reset the count.
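As a minimal sketch of that approach, here is one way to do the counting, using the POSIX clock_gettime() as the timestamp source (clock() measures CPU time on many systems, so a wall-clock source is usually safer); the function names and the reporting threshold are illustrative:
#include <stdio.h>
#include <time.h>

#define REPORT_EVERY 1024 /* report after this many bytes */

static unsigned long byte_count = 0;
static struct timespec last_report;

/* call once before receiving */
void rate_init(void)
{
    clock_gettime(CLOCK_MONOTONIC, &last_report);
}

/* call for every byte (or chunk of n bytes) received */
void rate_update(unsigned long n)
{
    byte_count += n;
    if (byte_count >= REPORT_EVERY) {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        double secs = (now.tv_sec - last_report.tv_sec) +
                      (now.tv_nsec - last_report.tv_nsec) / 1e9;
        if (secs > 0)
            printf("%.1f bytes/sec\n", byte_count / secs);
        byte_count = 0; /* reset the count each time we report */
        last_report = now;
    }
}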
To correct some odd comments in this thread about the theoretical maximum:
Around the time that 14400 baud modems arrived in the pre-web world, the measure of baud changed from its original meaning (look it up) to match emerging digital technologies such as 64kbit ISDN. Since then, 'baud' has commonly been used to mean bits/second.
Serial data in the 8N1 format - a common shorthand notation - has one start bit, eight data bits, no parity bit, and one stop bit for every byte: ten bits on the wire per data byte.
So the theoretical maximum for 8N1 serial at 115200 baud (bits/sec) is 115200/10 = 11520 bytes/sec.
Similar (but not identical) to watching your download speeds, the rough ball-park way to work out bytes/sec from bits/sec without a calculator is to divide by 10.
Baud rate is a measurement of how many times per second a signal can change. In each of those cycles, depending on the modulation you are using, you can send one or more bits (with no modulation, the bit rate is the same as the baud rate).
Let's say you are using QPSK modulation, so you can transmit/receive 2 bits per symbol. So, if you are receiving data at a 115200 baud rate with 2 bits per symbol, you are receiving data at 115200 * 2 = 230400 bps.

C Linux Bandwidth Throttling of Application

What are some ways I can throttle back a send/sendto() call inside a loop? I am creating a port scanner for my network and I have tried two methods, but they only seem to work locally (they work when I test them on my home machine, but when I try them against another machine they don't produce appropriate throttling).
method 1
I was originally parsing /proc/net/dev, reading in the "bytes sent" attribute, and basing my sleep time off that. That worked locally (the sleep delay was adjusting to regulate the flow of bandwidth), but as soon as I tried it against another server, also with /proc/net/dev, it didn't seem to adjust properly. I ran dstat on a machine I was scanning locally and it was outputting too much data too fast.
method 2
I then tried to keep track of the total bytes I was sending, adding them to a total_sent variable which my bandwidth thread would read and compute a sleep timer from. This also worked on my local machine, but when I tried it against a server, it reported only 1-2 packets sent each time my bandwidth thread checked total_sent, making the thread reduce the sleep to 0; yet even at 0, total_sent did not increase, but instead stayed the same.
Overall, I want a way to monitor the bandwidth of the Linux computer and calculate a sleep time to pass into usleep() before or after each of my send/sendto() socket calls, to throttle back the bandwidth.
Edit: some other things I forgot to mention are that I have a speedtest function that calculates the upload speed of the machine, and that I have 2 threads. Thread 1 adjusts a global sleep timer based on bandwidth usage, and thread 2 sends packets to the ports on a remote machine to test whether they are open and to fingerprint them (right now I am just using UDP packets with sendto() to test all of this).
How can I implement bandwidth throttling for a send/sendto() call using usleep()?
Edit: Here is the code for my bandwidth monitoring thread. Don't concern yourself with the structure stuff; it's just my way of passing data to a thread.
void *bandwidthmonitor_cmd(void *param)
{
    int i = 0;
    double prevbytes = 0, elapsedbytes = 0, byteusage = 0, maxthrottle = 0;
    //recreating the param struct I passed to the thread
    command_struct bandwidth = *((command_struct *)param);
    free(param);
    //set SLEEP (global variable) to a base time in case it was edited and not reset
    SLEEP = 5000;
    //find the maximum throttle speed in kb/s (takes the global var UPLOAD_SPEED,
    //which is in kb/s, multiplies it by the bandwidth % you want to use,
    //and divides by 100 to find the maximum in kb/s)
    //ex: UPLOAD_SPEED = 60, throttle = 90, maxthrottle = 54
    maxthrottle = (UPLOAD_SPEED * bandwidth.throttle) / 100;
    printf("max throttle: %.1f\n", maxthrottle);
    while(1)
    {
        //find out how many bytes elapsed since the last polling of the thread
        elapsedbytes = TOTAL_BYTES_SEND - prevbytes;
        printf("elapsedbytes: %.1f\n", elapsedbytes);
        //set prevbytes to our current bytes so we have results next loop
        prevbytes = TOTAL_BYTES_SEND;
        //convert our bytes to kb/s
        byteusage = 8 * (elapsedbytes / 1024);
        //throttle control: only adjust SLEEP while bit 6 of i is set,
        //i.e. for blocks of 64 loop iterations at a time
        if(i & 0x40)
        {
            //adjust SLEEP with a 1.1 gain
            SLEEP += (maxthrottle - byteusage) * -1.1;
            if(SLEEP < 0){
                SLEEP = 0;
            }
            printf("sleep:%.1f\n\n", SLEEP);
        }
        //sleep the thread for a short bit then start the process over
        usleep(25000);
        //increment variable i for our iteration throttling
        i++;
    }
}
My sending thread is just a simple sendto() routine in a while(1) loop sending UDP packets for testing. sock is my socket fd, buff is a 64-byte character array filled with "A", and sin is my sockaddr_in.
while(1)
{
    TOTAL_BYTES_SEND += 64;
    sendto(sock, buff, strlen(buff), 0, (struct sockaddr *) &sin, sizeof(sin));
    usleep(SLEEP);
}
I know my socket functions work because I can see the usage in dstat on my local machine and on the remote machine. This bandwidth code works on my local system (all the variables change as they should), but on the server I tested, elapsedbytes does not change (it is always 64/128 per iteration of the thread), which drives SLEEP down to 0. That should in theory make the machine send packets faster, but even with SLEEP at 0, elapsedbytes remains 64/128. I've also wrapped the sendto() call in an if statement that checks for a -1 return value and prints the error code, but no error has appeared in the tests I've done.
It seems like this could be most directly solved by calculating the throttle sleep time in the send thread. I'm not sure I see the benefit of another thread to do this work.
Here is one way to do this:
Select a time window in which you will measure your send rate. Based on your target bandwidth this will give you a byte maximum for that amount of time. You can then check to see if you have sent that many bytes after each sendto(). If you do exceed the byte threshold then sleep until the end of the window in order to perform the throttling.
Here is some untested code showing the idea. Sorry that clock_gettime and struct timespec add some complexity. Google has some nice code snippets for doing more complete comparisons, addition, and subtraction with struct timespec.
#define MAX_BYTES_PER_SECOND (128L * 1024L)
#define TIME_WINDOW_MS 50L
#define MAX_BYTES_PER_WINDOW ((MAX_BYTES_PER_SECOND * TIME_WINDOW_MS) / 1000L)
#include <time.h>
#include <stdlib.h>
int foo(void) {
    struct timespec window_start_time;
    size_t bytes_sent_in_window = 0;
    clock_gettime(CLOCK_REALTIME, &window_start_time);
    while (1) {
        /* sock, buff, and sin are from the question's code;
         * sendto() returns ssize_t (signed), so the error check below works */
        ssize_t bytes_sent = sendto(sock, buff, strlen(buff), 0, (struct sockaddr *) &sin, sizeof(sin));
        if (bytes_sent < 0) {
            // error handling
        } else {
            bytes_sent_in_window += bytes_sent;
            if (bytes_sent_in_window >= MAX_BYTES_PER_WINDOW) {
                struct timespec now;
                struct timespec thresh;
                // Calculate the end of the window
                thresh.tv_sec = window_start_time.tv_sec;
                thresh.tv_nsec = window_start_time.tv_nsec;
                thresh.tv_nsec += TIME_WINDOW_MS * 1000000;
                if (thresh.tv_nsec >= 1000000000L) {
                    thresh.tv_sec += 1;
                    thresh.tv_nsec -= 1000000000L;
                }
                // get the current time
                clock_gettime(CLOCK_REALTIME, &now);
                // if we have not gotten to the end of the window yet
                if (now.tv_sec < thresh.tv_sec ||
                    (now.tv_sec == thresh.tv_sec && now.tv_nsec < thresh.tv_nsec)) {
                    struct timespec remaining;
                    // calculate the time remaining in the window
                    // - see google for a more complete timespec subtract algorithm
                    remaining.tv_sec = thresh.tv_sec - now.tv_sec;
                    if (thresh.tv_nsec >= now.tv_nsec) {
                        remaining.tv_nsec = thresh.tv_nsec - now.tv_nsec;
                    } else {
                        remaining.tv_nsec = 1000000000L + thresh.tv_nsec - now.tv_nsec;
                        remaining.tv_sec -= 1;
                    }
                    // Sleep to the end of the window
                    nanosleep(&remaining, NULL);
                }
                // Reset counter and timestamp for the next window
                bytes_sent_in_window = 0;
                clock_gettime(CLOCK_REALTIME, &window_start_time);
            }
        }
    }
}
If you'd like to do this at the application level, you could use a utility such as trickle to limit or shape the socket transfer rates available to the application.
For instance,
trickle -s -d 50 -w 100 firefox
would start firefox with a max download rate of 50KB/s and a peak detection window of 100KB. Changing these values may produce something suitable for your application testing.
