Temp directory for unrar

Thanks in advance for any answers.
I'm running SliTaz on a very small, tightly tailored partition. While using unrar as root to decompress a file on a mounted external HD, I may not have specified the destination path; the extraction failed, and the partition containing the filesystem is now 100% full. I haven't been able to find where unrar put its temp files, so I can't delete them manually to get my Linux working correctly again.
Thanks to all.
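One thing worth knowing: unrar normally has no hidden temp directory at all. Without a destination argument it extracts into the current working directory, so the stray files are most likely wherever the shell was when the command ran. A sketch for tracking down the space, assuming GNU du/sort/find are available (SliTaz often ships BusyBox, where `sort -h` may be missing; `du -xak | sort -rn` is a fallback):

```shell
# Sketch: find what is filling the root partition. -x / -xdev keep du and
# find on the root filesystem, so the mounted external HD is not scanned.
du -xah / 2>/dev/null | sort -rh | head -n 20

# Look for large files written recently (e.g. in the last two hours),
# which should turn up a partially extracted archive.
find / -xdev -type f -mmin -120 -size +10M 2>/dev/null
```

Once the half-extracted files show up in the listing, they can be deleted manually.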

Related

Efficiently Download Newer files via FTP (Large amount of files with WinSCP commandline)

I am currently creating a small process that will download any new or updated file from an FTP (not SFTP) location to a local drive. I have written a small batch script along with WinSCP, but it takes FAR too long. There are around 200k files and 16k folders on the FTP server. It takes just under an hour to check every file and folder, not including any time spent downloading new/updated files. From what I understand, WinSCP compares the modification dates of every single file and folder, which is why it takes so long. I'm not sure where to go from here.
I am using Windows 10.
Is there a better way to go about synchronizing these? Possibly check the FTP drive for changes and, if anything has changed, find the change and download it? Will post pics of the batch if needed.
Thank you!
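FTP itself has no change-notification mechanism, so any client has to compare directory listings; the best you can do is make each comparison cheaper. In WinSCP's scripting interface, `synchronize local` accepts a `-criteria` switch, and a time-constrained `-filemask` can skip files older than the last run entirely. A sketch of such a script (server, credentials, and paths are placeholders, and the `*>=1D` mask, meaning "modified within the last day", assumes a reasonably recent WinSCP):

```
open ftp://user:password@ftp.example.com/
synchronize local -criteria=either -filemask="*>=1D" C:\LocalCopy /remote/path
exit
```

If even that is too slow, the more robust fix is moving to a protocol with real synchronization support (e.g. SFTP plus an rsync-style tool) rather than tuning the FTP crawl.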

How to get file's size on UNC path without pushd?

I'm helping a bud fix an application that has recently been changed over to using a UNC path. Before, he could use a bat file that runs
@echo off
echo %~z1
to get a file's size.
Now the bat file won't work, because CMD does not support UNC paths as current directories. I thought about using the pushd command to temporarily create a drive letter that points to the network resource, but I'm thinking there has to be a more direct, cleaner way to do this; I'm probably just not experienced enough with CMD to know it yet.
Any suggestions or assistance would be greatly appreciated!
Thanks.
Update
To clarify: when the bat file is called (through a PHP file using the exec() function), I get nothing in response. I tried a few ways of debugging (it's been a few days, so I don't remember exactly what), but the most I could get was "Echo is off" or "The system cannot find the file specified." errors. I can copy/paste the file address into Windows Explorer and find the file fine, though.
Update II
It has been noted that the code shouldn't have a problem despite UNC paths not being supported as current directories. If this is true, then what else could be the issue? Like I said before, I can copy and paste the file paths that are given to the bat file and they open fine in Windows Explorer.
Update III
I tried timing how long the bat file took to execute, and it seems to randomly take either almost no time or a little over a minute. So I'm guessing that might be my problem area. However, when I run it via the AJAX call, its response time is about 550-650. I have no idea what would cause a bat file's execution time to vary by so much. Any ideas would be welcome!
Thanks in advance for any input!
cmd does not accept a UNC path as the active (current) directory, but the code in your file will not have any problem with one. You can invoke it as
\\server\share\folder\file.bat \\server\share\folder\file.txt
d:\folder\file.bat "\\server\share\folder with spaces\file.txt"
"\\server\share\folder with spaces\file.bat" d:\file.txt
....
and in every case your posted code will work, as long as both the batch file and the file to be processed exist
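For a more direct alternative to pushd, if PowerShell is available on the machine it handles UNC paths natively; a one-line sketch (the path is just an example):

```powershell
# Get-Item works directly on a UNC path; .Length is the file size in bytes
(Get-Item '\\server\share\folder\file.txt').Length
```

This could also be called from PHP's exec() via powershell -Command "...", bypassing cmd's current-directory limitation entirely.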

Need to write a batch file job

I need to write a batch job file to keep two folders in sync.
Inputs: source and destination folder names.
Scenario:
If the destination folder/subfolder is not found, create it and proceed.
If the files are already present in both folders, compare them and overwrite the destination file if the timestamp/size differs.
If a file is not present in the source but present in the destination, delete the destination file.
Kindly help me to write the batch job.
Thanks, Vikram
A batch file written by a self-professed inexperienced user - even one run periodically - is not a good replacement for a sync tool like rsync or Dropbox, or even the Briefcase built into Windows. Robocopy looks like a good bet.
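The three requirements in the scenario map directly onto robocopy's mirror mode. A sketch, with placeholder paths (robocopy has shipped with Windows since Vista):

```bat
rem Sketch: mirror Source to Destination. /MIR copies new and changed files
rem (compared by timestamp and size), creates missing folders, and deletes
rem destination files that no longer exist in the source - covering all
rem three requirements. /R and /W limit retries on locked files.
robocopy "C:\Source" "D:\Destination" /MIR /R:2 /W:5
```

Note that /MIR deletes anything in the destination that is absent from the source, so double-check the paths before the first run.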

read from a file while installation in nsis but do no copy it on the destination pc

I want to display the version on the NSIS installation dialog pages by reading it from a text file, but I don't want the text file to be copied to the destination PC where the executable is run; it should only be read from.
That is,
I want to include this file in the exe and read text from it to display on the NSIS dialog pages, but not copy it anywhere on the PC where the exe is run.
Is this possible? Or is there another way of doing this?
In general, you can use the $PLUGINSDIR constant. It is the de facto temporary directory on the target system, and you can put something there and use it. The following code will copy the file into that temporary directory on the target machine, but the whole directory will be deleted after the installation completes. (InitPluginsDir needs to be called somewhere beforehand.)
InitPluginsDir
File /oname=$PLUGINSDIR\blah.txt "..\myfile.txt"
But in your case, it could be better to solve it some other way. You can define a constant containing the version number and use it in the code, can't you? The !define command could be in a generated file so you can automate it...
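The compile-time approach suggested above can be sketched with NSIS's `!define /file`, which reads a symbol's value from a file when the installer is built, so the text file never needs to exist on the user's PC at all (the file name and product name are examples):

```nsis
; Read version.txt at *compile* time into ${VERSION}; nothing is shipped.
!define /file VERSION "version.txt"
Name "MyApp ${VERSION}"
Caption "MyApp ${VERSION} Setup"
```

Regenerating version.txt from your build script then updates the displayed version automatically on the next compile.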

How to "defragment" a directory on ext3?

I am running a daemon which analyses files in a directory and then deletes them. In case the daemon is not running for whatever reason, files get stacked there. Today I had 90k files in that directory. After starting the daemon again, it processed all the files.
However, the directory remains large; "ls -dh ." returns a size of 5.6M. How can I "defragment" that directory? I already figured out that renaming the directory and creating a new one with the same name and permissions solves the problem. However, as files get written there at any time, there doesn't seem to be a safe way to rename the directory and create a new one, since for a moment the target directory would not exist.
So: a) is there a way / a (shell) program that can defragment directories on an ext3 filesystem? Or b) is there a way to create a lock on a directory so that attempts to write files block until the rename/create has finished?
"Optimize directories in filesystem. This option causes e2fsck to try to optimize all directories, either by reindexing them if the filesystem supports directory indexing, or by sorting and compressing directories for smaller directories, or for filesystems using traditional linear directories." -- fsck.ext3 -D
Of course this should not be done on a mounted filesystem.
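If you want to see what -D does before pointing it at a real partition, the same check can be run against a scratch filesystem image in a regular file; a sketch assuming e2fsprogs is installed (the image path is arbitrary):

```shell
# Build a small throwaway ext3 filesystem in a regular file (no root needed)
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=16 2>/dev/null
mkfs.ext3 -F -q /tmp/scratch.img
# -f forces a check even though the fs is clean; -D optimizes directories.
# An exit status of 1 just means the filesystem was modified.
e2fsck -f -D -y /tmp/scratch.img
rm /tmp/scratch.img
```

On the real partition, the same `e2fsck -f -D` run from a rescue system with the filesystem unmounted should shrink the oversized directory.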
Not really applicable for Ext3, but maybe useful for users of other filesystems:
According to https://wiki.archlinux.org/index.php/Btrfs#Defragmentation, with Btrfs it is apparently possible to defragment the metadata of a directory: btrfs filesystem defragment / will defragment the metadata of the root folder. This uses the online defragmentation support of Btrfs.
While Ext4 does support online defragmentation (with e4defrag), this doesn't seem to apply to directory metadata (according to http://sourceforge.net/p/e2fsprogs/bugs/326/).
I haven't tried either of these solutions, though.
I'm not aware of a way to reclaim free space from within a directory.
5MB isn't very much space, so it may be easiest to just ignore it. If this problem (files stacking up in the directory) occurs on a regular basis, then that space will be reused anytime the directory fills up again.
If you desperately need the ability to shrink the directory, here's an (ugly) hack that might work:
Replace the directory with a symbolic link to an empty directory. If the problem reoccurs, you can create a new empty directory and then change the symlink to point to it. Changing the symlink should be atomic, so you won't lose any incoming files. Then you can safely empty and delete the old directory.
[Edited to add: It turns out that this does not work. As Bada points out in the comments you can't atomically change a symlink in the way I suggested. This leaves me with my original point. File systems I'm familiar with don't provide a mechanism to reclaim free space within directory blocks.]