I have a problem with wget or my code.
wget -N --no-check-certificate "https://www.dropbox.com/s/qjf0ka54yuwz81d/Test.zip?dl=0"
The -N option is supposed to download the file only if it is newer in terms of modification date, but it downloads the file every time I run the script. I see "Last-modified header missing -- time-stamps turned off" at the end of the output.
I don't get it: when I upload/update a file on Dropbox, the site does show a "Modified" date-time, so why doesn't wget pick it up?
Any help would be highly appreciated. Cheers.
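One way to confirm what the server actually sends is wget's --server-response (-S) flag together with --spider, which fetches just the headers without saving the file (a diagnostic sketch using the same URL):
wget -S --spider --no-check-certificate "https://www.dropbox.com/s/qjf0ka54yuwz81d/Test.zip?dl=0"
If no Last-Modified line appears in the dumped headers, -N has nothing to compare the local timestamp against, and wget falls back to downloading every time.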
I am trying to download an artifact from a Jenkins project using a DOS batch script. The reason that this is more than trivial is that my artifact is a ZIP file which includes the Jenkins build number in its name, hence I don't know the exact file name.
My current plan of attack is to use wget pointing at: /lastSuccessfulBuild/artifact/
to do some sort of recursive/mirror download.
If I do the following:
wget -r -np -l 1 -A zip --auth-no-challenge --http-user=**** --http-password=**** http://*.*.*.*:8080/job/MyProject/lastSuccessfulBuild/artifact/
(*s are chars I've changed for posting to SO)
I never get a ZIP file. If I omit the -A zip option, I do get the index.html, so I think the authorisation is working, unless it's some sort of session caching issue?
With -A zip I get as part of the response:
Removing ...+8080/job/MyProject/lastSuccessfulBuild/artifact/index.html since it should be rejected.
So I'm not sure if maybe it's removing that file and so not following its links? But doing -A zip,html doesn't work either.
I've tried several wget options, and also curl, but I am getting nowhere.
I don't know if I have the wrong wget options or whether there is something special about Jenkins authentication.
You can add /*zip*/desired_archive_name.zip to the URL of any folder under the artifacts location.
If your ZIP file is the only artifact that the job archives, you can use:
http://*.*.*.*:8080/job/MyProject/lastSuccessfulBuild/artifact/*zip*/myfile.zip
where myfile.zip is just a name you assign to the downloadable archive; it could be anything.
If the job archives multiple artifacts, you can either still fetch the ZIP of all of them and deal with the individual files on extraction, or place the artifact you want into a separate folder and apply the /*zip*/ to that folder.
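Combined with the authentication flags from the question (host and credentials redacted the same way; artifact.zip is an arbitrary name, and -O just names the local copy), that might look like:
wget --auth-no-challenge --http-user=**** --http-password=**** -O artifact.zip "http://*.*.*.*:8080/job/MyProject/lastSuccessfulBuild/artifact/*zip*/artifact.zip"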
I have a link on my website that when clicked dynamically creates a csv file and downloads the file. I need a way to do this in a batch file so that the file can be downloaded automatically (via task scheduler). I have played around with wget but I can't get the file. Thank you in advance for your help!
bitsadmin.exe /transfer "Job Name" downloadUrl destination
If you are using Windows 7, use the same command in PowerShell.
Note:
downloadUrl: the download URL of the file on the remote site.
destination: the full path of the file the download should be saved to.
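A concrete sketch (the URL and local path are placeholders; note that bitsadmin requires the destination to be an absolute path including the file name):
bitsadmin.exe /transfer "CsvDownload" "http://www.example.com/export.csv" "C:\temp\export.csv"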
I use it as follows:
# plain wget
wget "http://blah.com:8080/etc/myjar.jar"
# wget, skirting proxy settings
wget --no-proxy "http://blah.com:8080/etc/myjar.jar"
Or to download to a specific filename (perhaps to enable consistent naming in scripts):
wget -O myjar.jar --no-proxy "http://blah.com:8080/etc/myjar1.jar"
If you're having issues, make sure wget logging is turned on, and possibly enable debug output (which is added on top of the normal logging):
# additional logging
wget -o myjar1.jar.log "http://blah.com:8080/etcetcetc/myjar1.jar"
# debug output (only if wget was compiled with debug support!)
wget -o myjar1.jar.log -d "http://blah.com:8080/etc/myjar1.jar"
Additional checks you may need to make if you still have no success (the first two can themselves be scripted; see the sketch after this list):
Can you ping the target host?
Can you "see" the target file in a browser?
Is the target file actually on the server?
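A minimal sketch of those first two checks (the hostname and URL are the placeholder ones from the examples above; --spider makes wget verify the URL exists without downloading it):
ping blah.com
wget --spider "http://blah.com:8080/etc/myjar1.jar"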
I have to write a batch file to download a .exe application and I am finding it very difficult to make sense of the whole process.
All I have got done so far is;
start /d C:"\Program Files (x86)\Google\Chrome\Application"
chrome.exe http://website/directory
This brings up the page I want to go to, and the .exe file is on this page, but I don't know how to download it. I tried:
start /d C:"\Program Files (x86)\Google\Chrome\Application"
chrome.exe http://website/directory/download.exe
This was no good; it tried to load the page, while I thought it would just download the file.
If anyone can give me some insight into this, it would be great.
Do not use Chrome. Depending on the tools you can rely on, use for example wget or curl. For documentation, have a look at the projects' homepages (wget, curl); basic invocation is easy:
# note: for wget, capital -O sets the output file (lowercase -o writes a log file instead)
wget -O outfile http://example.com/url/to/file
curl -o outfile http://example.com/url/to/file
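Since the goal is a batch file, a minimal sketch (assuming wget is installed and on the PATH; the URL is the one from the question, and download.exe is just the chosen local name):
@echo off
rem fetch the installer directly instead of opening it in a browser
wget -O download.exe "http://website/directory/download.exe"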
You may need to change the http://www. prefix to ftp://ftp.
It would help if you provided the actual internet file link.
I am transferring files from a folder on one server to another and I am using wget to do so.
The problem is that wget gets terminated, and when I rerun the command it starts from the very first file. I use -nc to skip files that already exist, but wget still traverses all the files from the top and skips the existing ones one by one, which wastes a lot of time.
Is there any way to have wget start downloading directly from the first new file, instead of checking each file from the top?
I hope I have made my question clear. Pardon me if I couldn't.
This is the command that I am using:
wget -H -r --level=1 -k -p -nc http://www.example.com/images/
You could try using a reject-list to skip already downloaded files.
If all your files are in the same directory, it could be as simple as:
wget -R "$(ls -1 | tr '\n' ',')" <your own options>
I am not sure what will happen with partial downloads.
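Merged with the command from the question, and assuming it is run from the single directory that already holds the downloaded files, that might look like:
wget -R "$(ls -1 | tr '\n' ',')" -H -r --level=1 -k -p -nc http://www.example.com/images/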
I am having a bit of trouble grabbing some files that have a strange file structure. What do I mean exactly? http://downloads.cloudmade.com/americas/northern_america/united_states/district_of_columbia#downloads_breadcrumbs
Look at that example. I want to start at the root of the site and recursively grab all the files that end with *.shapefile.zip. wget appears to treat this as two separate extensions, .shapefile and .zip. Anyone have some wget goodness to help me get started on this one?
You can recursively wget specific file types with:
wget -A '*.shapefile.zip' -r <url>
Although I don't think .shapefile.zip is an extension in its own right; it's more that site's naming convention.