Create and iterate through a list of numbers in Bash - arrays

I am trying to use cURL on about 200 select ports, and I would prefer not to have to run the command for each one individually. I am also trying to learn the basics of Bash.
What I am trying to do is create a list of numbers and then iterate through each of those numbers. Here is what I have:
Ports={1,5,7,10,12}
for port in $Ports
do
$echo "Port $port"
curl "URL:$port"
done
Is this possible to do or am I thinking too high level? Thank you!

Try this:
#!/bin/bash
Ports="1 5 7 10 12"
for port in $Ports; do
echo "Port $port"
curl "URL:$port"
done
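Since the title asks about arrays: here is a minimal sketch using an actual Bash array instead of a space-separated string (the example.com URL is a placeholder for your own):
#!/bin/bash
# Sketch: store the ports in a real Bash array and loop over each element.
ports=(1 5 7 10 12)
for port in "${ports[@]}"; do
echo "Port $port"
curl "http://example.com:$port"
done
Quoting "${ports[@]}" expands each element as its own word, which keeps working even if an entry ever contains spaces.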

Related

Two Arrays from one text file

I have been completely stumped by this one. I have a piece of code that spits out this...
192.168.0.1=12345
192.168.0.2=35345
192.168.0.3=72345
192.168.0.4=43345
That output is written to a text file and then loaded back into the program as an array.
Is there a way to split it into a 2D array, with the first dimension containing the IP addresses and the second containing the other numbers? I will need to find out which IP is linked to which number later in the code.
So far I just have this...
IFS=$'\r\n' GLOBIGNORE='*' command eval 'uparray=($(cat ./uptime.txt))'
I should probably mention that this is running on Raspbian.
If your Bash version supports associative arrays, you can do:
declare -A ip_nums
while IFS== read ip num; do
ip_nums[$num]=$ip
done <./uptime.txt
Then, to retrieve the IP from a number:
echo "${ip_nums[$num]}"
EDIT: To keep track of the biggest number in the loop:
biggest=0
while ...
...
if ((num>biggest)); then
biggest=$num
fi
done ...
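Putting that together, a sketch of the full loop (assuming the same uptime.txt format shown above):
#!/bin/bash
declare -A ip_nums
biggest=0
# Split each line on "=" into an IP and a number.
while IFS== read -r ip num; do
ip_nums[$num]=$ip
if ((num > biggest)); then
biggest=$num
fi
done < ./uptime.txt
echo "Biggest: $biggest from ${ip_nums[$biggest]}"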

How can I download an array of files from a website? wget?

Let's say I want to download example.com/pics/0000.jpg through example.com/pics/9999.jpg.
What's the best way to do that?
I tried:
wget example.com/pics/{0000..9999}.jpg
but it said "Argument list too long".
What's a good script or program I can use to do this?
I don't code much. I am thinking it will involve a shell script that uses wget to get 0000.jpg and then increments by 1 to get the next picture, until it reaches 9999.jpg.
Thanks.
Here's a Bash one-liner that does what you want:
for n in $(seq -f "%04g" 0 9999); do wget http://example.com/pics/$n.jpg; done
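If you would rather avoid seq, here is a sketch using brace expansion inside the loop; since wget is called once per URL, this does not hit the "Argument list too long" limit (zero-padded ranges need Bash 4 or later, and example.com is a placeholder):
#!/bin/bash
# One wget call per image; {0000..9999} expands to zero-padded numbers in Bash 4+.
for n in {0000..9999}; do
wget "http://example.com/pics/$n.jpg"
done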

Nmap port scanning array

I am writing an nmap Bash script, and I am wondering whether it is possible to use an array for my port list. For example:
port=[23,45,75,65]
for i in 21 do
nmap -p x,y 192.168.1.$i
done
e.g. in place of x,y I want to use the numbers 23,45.
I'm not sure if that's what you want, but you can try this:
ports="23,45,75,65"
for i in 21; do
nmap -p "$ports" 192.168.1.$i
done
You can also do:
ports="23,45,75,65"
targets="1-25"
nmap -p "$ports" "192.168.1.$targets"
Scanning an array of ports is already built into nmap. See http://nmap.org/book/man-port-specification.html for more details on the syntax, but here's an excerpt that may give you what you need:
For example, the argument -p U:53,111,137,T:21-25,80,139,8080 would scan UDP ports 53, 111, and 137, as well as the listed TCP ports.
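If you specifically want to keep the ports in a Bash array, here is a sketch that joins the array into nmap's comma-separated -p syntax (the target range 1-25 is just an assumption for illustration):
#!/bin/bash
ports=(23 45 75 65)
# Join the array elements with commas in a subshell so IFS is not changed globally.
port_list=$(IFS=,; echo "${ports[*]}")
for i in {1..25}; do
nmap -p "$port_list" "192.168.1.$i"
done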

How can I get a random file in Linux, including subdirectories?

Let's say I am in /home/myuser.
There are 90,000 files there, inside 3,000 directories.
How can I write a Bash function, or use Linux commands, to get one random file?
It could be C as well, I suppose.
You can list all your files and then pick a random line from the list:
find /home/myuser -type f | sort -R | head -n 1
However, this is not very efficient and could take a while, but it is easy to understand. You can work from here.
You can use shuf for this task; for example, set the globstar option and try:
shuf -e path/**/*.txt | head -n1
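For completeness, a sketch with globstar enabled explicitly; shuf -n 1 picks one entry directly, so head is not needed (the path and .txt pattern are placeholders):
#!/bin/bash
shopt -s globstar
# ** now matches files in subdirectories; shuf -e shuffles its arguments and -n 1 prints one.
shuf -e -n 1 path/**/*.txt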

Find the most frequent entry in a file using unix

I have a file containing around 2,000,000 entries - just one column with that many entries, all numbers. I would like to quickly find out what the most frequent number in the file is. Is there a way to do this using Unix?
I know how to do it using gnuplot, but that is slightly tedious, and I was wondering if there is a simpler way using just some Unix commands.
Like if my file is
1
1
1
2
3
4
Then I want it to read the file and give me the answer 1, because that's the most frequent.
You can do it like this:
$ sort -n file | uniq -c | sort -n | tail -n 1 | awk '{print $2}'
sort test.txt | uniq -c | sort -rn | head -n 1 should help. It prints the number of occurrences and the number that is most used, so for your example file it would be: 3 1
My first answer to that would be building a histogram. It helps if the range of possible values is small enough.
Once the histogram is built, just look for the highest amount in it.
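A minimal sketch of that histogram idea with awk, which does it in a single pass without sorting (the file name is a placeholder):
# Count occurrences of each value, then print the value with the highest count.
awk '{count[$1]++} END {for (v in count) if (count[v] > max) {max = count[v]; best = v}; print best}' file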
