Is there a way to ping faster in BusyBox or Tiny Core Linux?

Solution at end of this post.
By default the interval between pings is one second, and the usual iputils version of ping has an option to reduce it with the -i switch. I need to ping faster, as I have 120 pings in a certain test that needs to be run many times.
I tried modifying the source of ping.c from the BusyBox source, but I don't know much about compiling and I get an error saying libbb.h could not be found. I couldn't find anyone else with a similar error for BusyBox.
Does anyone know of a way for me to ping faster than once per second? I am hoping to go down to 0.1 or 0.05 seconds if at all possible.
Thanks in advance
Solution
In case anyone comes looking for an answer, the solution I came up with was much simpler than recompiling. If you write a script that pings with the -c 1 flag and counts the failures yourself, you can ping much faster.
Example:
fails=0
for i in `seq 1 20`
do
    # number of packets received, e.g. the "1" in "... 1 packets received ..."
    x=`ping -c 1 192.168.1.1 | grep received | cut -d' ' -f4`
    if [ "$x" -eq 0 ]
    then
        fails=$(($fails+1))
    fi
done
echo $fails fails
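A slightly simpler variant of the same idea (a sketch, not a tested drop-in: it relies on ping returning a non-zero exit status when no reply arrives, and on the -W option being enabled in your BusyBox configuration) skips the output parsing entirely:
fails=0
for i in `seq 1 20`
do
    # -W 1 caps the wait for a reply at one second so a dead host doesn't stall the loop
    if ! ping -c 1 -W 1 192.168.1.1 > /dev/null 2>&1
    then
        fails=$(($fails+1))
    fi
done
echo $fails fails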

You are correct in that you have to modify the ping.c file. As you have determined, BusyBox ping does not support the -i switch.
What platform are you building this for? A PC, an embedded system?
Option 1:
Modify ping.c from BusyBox and recompile BusyBox. To do this, you would use 'make' in the root of the BusyBox project.
user@linux:~/busybox-1.19.2$ make
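If you are starting from a fresh tree, the usual sequence looks roughly like this (a sketch; make menuconfig needs ncurses installed, and the ping applet lives under Networking Utilities):
user@linux:~/busybox-1.19.2$ make defconfig    # start from the default configuration
user@linux:~/busybox-1.19.2$ make menuconfig   # optional: confirm the ping applet is enabled
user@linux:~/busybox-1.19.2$ make
The "libbb.h could not be found" error from the question usually means ping.c was compiled on its own; it has to be built through the BusyBox tree so the internal headers end up on the include path.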
Option 2:
It might be easier and simpler to leave BusyBox alone and get ping.c from another package such as iputils. It supports the -i switch and goes as low as 0.2 seconds. To compile ping.c:
user@linux:~/iputils-s20101006$ make ping
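Once that binary is built, it accepts sub-second intervals; a usage sketch (intervals below 0.2 seconds generally require root):
user@linux:~/iputils-s20101006$ ./ping -i 0.2 -c 120 192.168.1.1
user@linux:~/iputils-s20101006$ sudo ./ping -i 0.05 -c 120 192.168.1.1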

Related

SageMaker: MemoryError: Unable to allocate ___ for an array with shape ___ and data type float64

I am running a notebook in SageMaker and it seems like one of the arrays produced after vectorizing text is causing issues.
Reading other answers, it seems like this is an issue with memory overcommit, and one of the proposed solutions is to set the kernel to always overcommit with this:
$ echo 1 > /proc/sys/vm/overcommit_memory
Is there any documentation or do you have any suggestion on how to do the same thing in sagemaker?
Thank you very much.
Open a root shell with sudo -i and then run echo 1 > /proc/sys/vm/overcommit_memory.
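If you would rather stay inside the notebook, a minimal sketch from a cell (this assumes the notebook instance allows sudo, which stock SageMaker notebook instances do; the setting does not survive an instance restart):
!sudo sysctl vm.overcommit_memory=1
# or, equivalently:
!sudo sh -c 'echo 1 > /proc/sys/vm/overcommit_memory'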
Changing the default kernel in the SageMaker Notebook to conda_python3 resolved this issue for me.

Is there a way to stop recording screen with Byzanz?

I use a tool called byzanz to record my screen and create gif files.
This is the way I use it:
byzanz-record -d 55 --delay=2 -x 0 -y 0 -w 3940 -h 950 desktop-animation.gif
However, often I can't tell in advance how long the recording will be, so it either ends up with awkward moments at the end or ends prematurely. Is there a way to tell byzanz to stop its job, perhaps by sending it a signal with kill or something?
There seems to be an option that could achieve that:
http://manpages.ubuntu.com/manpages/zesty/man1/byzanz-record.1.html
-e, --exec=COMMAND
Instead of specifying the duration of the animation, execute the
given COMMAND and record until the command exits. This is useful
both for benchmarking and to use more complex ways to stop the
recording, like writing scripts that listen on dbus.
However, the latest byzanz in my package manager (Fedora) doesn't have --exec.
I think with that option, you could do:
byzanz-record --exec 'sleep 1000000' --delay=2 -x 0 -y 0 -w 3940 -h 950 desktop-animation.gif
and when you want to stop the recording, run: killall sleep
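A variation on the same idea that avoids killing every sleep process on the machine (still a sketch, and it still assumes a byzanz-record build that has --exec): make the wrapped command wait for a marker file, then create that file when you want the recording to end.
byzanz-record --exec 'sh -c "while [ ! -f /tmp/stop-recording ]; do sleep 1; done"' --delay=2 -x 0 -y 0 -w 3940 -h 950 desktop-animation.gif
# ...and in another terminal, when you are done:
touch /tmp/stop-recording
rm /tmp/stop-recording    # clean up so the next recording does not stop immediately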
Side note: I have opened an issue on the Red Hat Bugzilla tracker to update their byzanz-record version: https://bugzilla.redhat.com/show_bug.cgi?id=1531055

Linux - Displaying Memory Usage Live

I'm running GNU Screen (4.03.01) so I can have multiple terminals in one, and I'm looking for a good way to display live memory stats, so that as I do things like compiling and testing programs I can see how many resources I have left.
I know there is top, the performance monitor, and other similar programs, but I'm not looking for the entire active process list; I just want a snapshot of my memory stats that updates, for example, every 3-5 seconds.
I really appreciate anyone taking the time to help me with this, so thank you!
You can use a combination of watch, which repeatedly runs the specified program and displays its output, and free, which shows current memory usage:
watch free -m
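Since you want a refresh every 3-5 seconds, watch's -n flag sets the interval (the default is 2 seconds):
watch -n 3 free -m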
free --help
Usage:
free [options]
Options:
-b, --bytes show output in bytes
-k, --kilo show output in kilobytes
-m, --mega show output in megabytes
-g, --giga show output in gigabytes
--tera show output in terabytes
-h, --human show human-readable output
--si use powers of 1000 not 1024
-l, --lohi show detailed low and high memory statistics
-o, --old use old format (without -/+buffers/cache line)
-t, --total show total for RAM + swap
-s N, --seconds N repeat printing every N seconds
-c N, --count N repeat printing N times, then exit
--help display this help and exit
-V, --version output version information and exit
For more details see free(1).
watch --help
Usage:
watch [options] command
Options:
-b, --beep beep if command has a non-zero exit
-c, --color interpret ANSI color sequences
-d, --differences[=&lt;permanent&gt;]
highlight changes between updates
-e, --errexit exit if command has a non-zero exit
-g, --chgexit exit when output from command changes
-n, --interval &lt;secs&gt; seconds to wait between updates
-p, --precise attempt run command in precise intervals
-t, --no-title turn off header
-x, --exec pass command to exec instead of "sh -c"
-h, --help display this help and exit
-v, --version output version information and exit
You could use the Valgrind tool Massif. I haven't tried it, but it seems to be what you are looking for.
To use Massif, install Valgrind, then run:
valgrind --tool=massif program argument1 argument2 ...
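For completeness, a sketch of how the result is usually inspected afterwards: Massif writes a file named massif.out.&lt;pid&gt;, and the ms_print tool that ships with Valgrind turns it into a readable report (the program name and PID below are placeholders):
valgrind --tool=massif ./myprogram arg1 arg2
ms_print massif.out.12345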
Another quick solution is a small script like this:
while true; do
    free -m
    # total CPU usage, as the sum of per-process %CPU values
    # (if you just want the overall CPU percentage, this should help)
    ps -A -o pcpu | tail -n+2 | paste -sd+ | bc
    sleep 3   # refresh every 3 seconds, matching the 3-5 second requirement
done
The other thing you can do is use htop. It displays memory usage and CPU usage per core, and shows the resources used by each process. Really neat, but maybe not as detailed as the rest of the answers.

parsing the output of the 'w' command?

I'm writing a program which requires knowledge of the current load on the system, and the activity of any users (it's a load balancer).
This is a university assignment, and I am required to use the w command. I'm having a hard time parsing this command because it is very verbose. Any suggestions on what I can do would be appreciated. This is a small part of the program, and I am free to use whatever method I like.
The most condensed version of w which still has the information I require is 'w -u -s -f', which produces this:
10:13:43 up 9:57, 2 users, load average: 0.00, 0.00, 0.00
USER TTY IDLE WHAT
fsm tty7 22:44m x-session-manager
fsm pts/0 0.00s w -u -s -f
So out of that, I am interested in the first number after "load average" and the smallest idle time (so I will need to parse them all).
My background process will call w, so the fact that w itself has the lowest idle time will not matter (all I will see is the tty time).
Do you have any ideas?
Thanks
(I am allowed to use alternative unix commands, like grep, if that helps).
Are you allowed to use other Unix commands? You could use grep, sed or head/tail to get the lines you need, and cut to split them up as needed.
Check out: http://www.gnu.org/s/libc/manual/html_node/Regexp-Subexpressions.html#Regexp-Subexpressions
Use regular expressions to match [0-9]+\.[0-9]{2} on the first line. You may have to fiddle with which characters are escaped. That will give you 3 load averages.
The remaining output is column-based, so if you count off the string positions from w, you'll be able to just strncpy the interesting bits.
Another possible theory (which sounds like it goes against the assignment, but I'd keep it in mind) is to go grab the source code of w and hack it up to just tell you the information via function calls. If you're feeling really hardcore, you can learn all the library api calls and do it directly that way.
I found I can use a combination of commands like so:
w -u -s -f | grep load | cut -d " " -f 11
and
w -u -s -f | grep tty | cut -d " " -f 13
The first takes the output of w, uses grep to select only the line containing "load", and then cuts everything except the 11th chunk of data (with a space as the delimiter), which is the first load number, still carrying a trailing comma.
The second does something similar, only for the idle time of each user; if there are multiple sessions, it produces a list.
This is easy enough to parse, unless someone has an objection or a suggestion to improve it.
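If cut's fixed field numbers ever prove fragile (cut -d ' ' treats every single space as a delimiter, so runs of padding spaces shift the field count), awk splits on runs of whitespace and may be more robust; a sketch based on the sample output above:
# first load average: the last three fields of line 1 are the load averages
w -u -s -f | awk 'NR==1 { gsub(",", "", $(NF-2)); print $(NF-2) }'
# idle time of every session: third column, skipping the two header lines
w -u -s -f | awk 'NR>2 { print $3 }'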

cat/xargs/command vs. for/bash/command

Page 38 of the book Linux 101 Hacks suggests:
cat url-list.txt | xargs wget -c
I usually do:
for i in `cat url-list.txt`
do
wget -c $i
done
Is there some thing, other than length, where the xargs-technique is superior to the old good for-loop-technique in bash?
Added
The C source code of xargs seems to have only one fork. In contrast, how many forks does the bash combo have? Please elaborate on the issue.
From the Rationale section of a UNIX manpage for xargs (interestingly, this section doesn't appear in the OS X BSD version of xargs, nor in the GNU version):
The classic application of the xargs utility is in conjunction with the find utility to reduce the number of processes launched by a simplistic use of the find -exec combination. The xargs utility is also used to enforce an upper limit on memory required to launch a process. With this basis in mind, this volume of POSIX.1-2008 selected only the minimal features required.
In your follow-up, you ask how many forks the other version will have. Jim already answered this: one per iteration. How many iterations are there? It's impossible to give an exact number, but easy to answer the general question. How many lines are there in your url-list.txt file?
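A quick way to check, since the loop version forks one wget per line:
wc -l url-list.txt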
There are some other considerations. xargs requires extra care for filenames with spaces or other awkward characters, and find's -exec has a variant (+) that groups processing into batches. So not everyone prefers xargs, and perhaps it's not best for all situations.
See these links:
http://www.sunmanagers.org/pipermail/summaries/2005-March/006255.html
http://fahdshariff.blogspot.com/2009/05/find-exec-vs-xargs.html
Also consider:
xargs -I'{}' wget -c '{}' < url-list.txt
but wget provides an even better means for the same:
wget -c -i url-list.txt
With respect to the xargs-versus-loop consideration, I prefer xargs when the meaning and implementation are relatively "simple" and "clear"; otherwise, I use loops.
xargs will also allow you to have a huge list, which is not possible with the "for" version because the shell uses command lines limited in length.
xargs is designed to process multiple inputs for each process it forks. A shell script with a for loop over its inputs must fork a new process for each input. Avoiding that per-process overhead can give an xargs solution a significant performance enhancement.
Instead of GNU Parallel, I prefer using xargs' built-in parallel processing. Add -P to indicate how many forks to perform in parallel, as in:
seq 1 10 | xargs -n 1 -P 3 echo
This would use 3 forks, on 3 different cores if available. It is supported by modern GNU xargs; you will have to verify for yourself if you are using BSD or Solaris.
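Applied to the original download list, the same flags work with wget (a sketch; -P and -n here are GNU xargs options):
xargs -n 1 -P 3 wget -c < url-list.txt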
Depending on your internet connection you may want to use GNU Parallel http://www.gnu.org/software/parallel/ to run it in parallel.
cat url-list.txt | parallel wget -c
One advantage I can think of is that, if you have lots of files, it could be slightly faster since you don't have as much overhead from starting new processes.
I'm not really a bash expert though, so there could be other reasons it's better (or worse).
