Why should the HashiCorp Packer file provisioner use a temp directory?

I wonder what the benefits are of the indirect file upload via the /tmp directory that the Packer file provisioner docs suggest.
It would seem more natural to upload a file or directory straight to its destination without any intermediate step.

This is probably recommended because the /tmp directory on any machine will likely fit the requirements mentioned in the docs for the destination parameter:
This value must be a writable location and any parent directories must already exist.
The /tmp directory usually exists and is usually readable and writable by any process on your machine, so it's a good suggestion as a standard destination folder.
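For example, a minimal sketch of this pattern as a Packer HCL template (the source name, file names, and final path are all made up for illustration): the file provisioner uploads to /tmp, and a later shell provisioner can move the file into its final place if needed.

build {
  sources = ["source.amazon-ebs.example"]      # hypothetical builder

  provisioner "file" {
    source      = "app.conf"                   # hypothetical local file
    destination = "/tmp/app.conf"              # /tmp is almost always present and writable
  }

  provisioner "shell" {
    inline = ["sudo mv /tmp/app.conf /etc/app/app.conf"]   # hypothetical final location
  }
}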

Related

Do you need to be specific about a file location when using os, or do you still need to write (folder/file)?

Let's say I have a file called hello.txt in a folder called coding, and I want to open it in Python. I know that if I don't use os I would have to write open("coding/hello.txt"), but if I use os.open, would I still have to specify the folder, as in os.open("coding/hello.txt"), or can I just write os.open("hello.txt") because I am using os?
"File" and "operating system" can mean a lot of different things, but typically operating systems have the concept of a "current" or "working" directory. Each process has its own current directory, and if you don't specify a directory for a file it uses the current directory.
Do not rely on this. Too many things can change the current directory unexpectedly, and your program will suddenly start using a different file.
Instead always specify the full file path like open("/usr/tmp/coding/hello.txt") or whatever is appropriate for your operating system; it will probably provide environment variables or something for the user's home or temporary directories.
Note that your examples "coding/hello.txt" and "hello.txt" both use the current directory, and are different files.
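A small Python sketch of the difference (the coding folder and file are just the example from the question):

import os

print(os.getcwd())  # the directory a relative path like "coding/hello.txt" is resolved against

# Relative path: which file this opens depends on the current working directory.
with open("coding/hello.txt") as f:
    print(f.read())

# Absolute path built from a known base (here the user's home directory),
# which keeps pointing at the same file even if the current directory changes.
path = os.path.join(os.path.expanduser("~"), "coding", "hello.txt")
with open(path) as f:
    print(f.read())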

GetPrivateProfileString and AppData VirtualStore directory

I have a program which reads a value via GetPrivateProfileString from the file ".\abcd.ini" - i.e. it looks for the ini file in the current directory.
If it does not find the ini file, it has a default value set in the 3rd parameter to GetPrivateProfileString.
I have an installer which installs the program to c:\program files (x86)\abcd\client directory.
Initially, the installer also installed an abcd.ini file in the same directory with a particular profile string key/value pair. After that, I changed the installer to not install any ini file.
However, the program continued taking the value from the old ini file which I had shipped, even though that file no longer existed in that directory.
After doing a system wide search I found a copy of abcd.ini in c:\Users\myusername\AppData\Local\VirtualStore\Program Files (x86)\abcd\Client
Once I deleted this, the program worked correctly (as if there is no ini file).
From googling, it seems that the VirtualStore is used because myusername does not have full permissions for c:\program files (x86). However, the program itself doesn't write to the ini file, it only reads from it.
Is this actually how it's supposed to be? Why is the ini file copied to AppData & why does the program read from there if there is no local copy?
I am on Windows 10 64 bit.
The diagnostic is that the EXE program does not contain a manifest that declares itself compatible with UAC. Not unusual for the kind of app that still uses GetPrivateProfileString().
Is this actually how it's supposed to be?
Yes, this is the way modern versions of Windows (major version >= 6, Vista and up) deal with legacy programs that assume the user always has admin privileges. Redirecting the file access to the VirtualStore directory ensures that the missing access rights to the Program Files directory do not cause trouble.
it only reads from it
The OS does not have a time machine, so it cannot guess whether you might write to the file, or whether you did so in a previous session. It therefore has to check the VirtualStore directory first to find that .ini file.
It is also important not to assume that it was your program that put the .ini file in that directory. It could have been done by another ancient program, like a text editor. Or a previous version of your program. Or the installer you use.
Yes, because the program would otherwise crash; Windows therefore redirects it to the VirtualStore directory.
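A small Python sketch of how that lookup order plays out; the install directory is taken from the question, and the check itself is only illustrative of the virtualization rule:

import os

install_dir = r"C:\Program Files (x86)\abcd\Client"   # from the question
ini_name = "abcd.ini"

# UAC file virtualization mirrors redirected files under
# %LOCALAPPDATA%\VirtualStore, keeping the path relative to the drive root.
virtual_dir = os.path.join(os.environ["LOCALAPPDATA"], "VirtualStore",
                           os.path.splitdrive(install_dir)[1].lstrip("\\"))

candidates = [os.path.join(virtual_dir, ini_name),    # the virtualized copy shadows the real one
              os.path.join(install_dir, ini_name)]
for candidate in candidates:
    if os.path.exists(candidate):
        print("value would come from:", candidate)
        break
else:
    print("no ini file; the default passed to GetPrivateProfileString applies")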

Temp directory for unrar

Thanks in advance for any answers.
I'm running SliTaz on a very small, tightly tailored partition, and I ran unrar as root to decompress a file onto a mounted external HD. I may not have specified the destination path, so the unrar run failed, and now the partition containing the root filesystem is 100% full. I haven't been able to find where unrar puts its temp files, so I can't delete them manually to get my Linux system working correctly again.
Thanks to all.

How to determine if a file is on a FAT system (to see if it really is executable)

I am working on an OS-independent file manager, and I divide files into groups, usually based on the extension. On Linux, I check whether a file has executable permissions, and if it does, I add it to the executables group.
This works great for Windows or Linux, but if you combine them it doesn't work so well. For example, while using it on Linux and exploring a mounted Windows drive, all the files appear to be executable. I am trying to find a way to ignore those files and not add them to the executables group.
My code (on Linux) uses stat:
#ifndef WINDOWS
/* stat() the directory entry so its mode bits can be inspected */
stat(ep->d_name, &buf);
....
/* any execute bit (user, group or other) set => treat it as executable */
if (!files_list[i].is_dir && (buf.st_mode & 0111))
    files_list[i].is_exe = 1;
#endif
The first part of the answer is to find out which filesystem the file is on. To do that you need to use the st_dev field of the stat information for the file. (You can also do this by checking the file path, but then you have to check every path element for symbolic links.)
You can then cross-reference the st_dev field with the mount table in /proc/mounts using getmntent_r(). There's an example of that in a previous answer. The mnt_type field will give you the text of the filesystem type, and you'll need to compare the string with a list of Windows filesystems.
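A rough Python sketch of the same idea, reading /proc/mounts directly instead of going through getmntent_r() (the list of Windows filesystem types is illustrative, not exhaustive):

import os

def filesystem_type(path):
    # Match the file's st_dev against the device of each mount point.
    dev = os.stat(path).st_dev
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _device, mount_point, fs_type = line.split()[:3]
            try:
                if os.stat(mount_point).st_dev == dev:
                    return fs_type
            except OSError:
                continue        # mount point we cannot stat, skip it
    return None

WINDOWS_FILESYSTEMS = {"vfat", "msdos", "exfat", "ntfs", "fuseblk", "cifs"}
print(filesystem_type("/mnt/windows/report.doc") in WINDOWS_FILESYSTEMS)  # hypothetical path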
Once you've found the filesystem, the only way to identify an executable is by heuristics. As other people have suggested, you can look at the file extension for Windows executables, and look at the initial bytes of the file for Linux executables. Don't forget executable scripts with the #! prefix, and you may need to read into a Jar file to find out if it contains an executable static main() method.
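A rough sketch of such a heuristic in Python (the extension list and magic-byte checks are illustrative and deliberately incomplete):

import os

WINDOWS_EXE_EXTS = {".exe", ".com", ".bat", ".cmd"}

def looks_executable(path):
    # Extension check covers the common Windows cases.
    if os.path.splitext(path)[1].lower() in WINDOWS_EXE_EXTS:
        return True
    with open(path, "rb") as f:
        head = f.read(4)
    if head == b"\x7fELF":       # Linux ELF binary
        return True
    if head[:2] == b"#!":        # script with an interpreter line
        return True
    if head[:2] == b"MZ":        # Windows PE/DOS header
        return True
    return False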
If you are browsing Windows files then you need to apply Windows rules for whether or not a file is executable. If the file extension is .EXE, .COM, .BAT, or .CMD then it is executable. If you want a more complete list then you should check MSDN. Note that it is possible to add registry entries on a machine that makes any extension you want to be considered executable, but it is best to ignore that kind of thing when you are browsing a drive from the network.
The fact is that you are fighting an uphill battle. The reason all the files have executable permissions is that the Windows filesystem driver on Linux lets you specify that as a mount option. This masks whether or not any files are Linux executables, for instance.
However, you could look into the file header for EVERY file and see if it is a Linux ELF executable (just like the Linux file command does).
It might be helpful to start by checking all the information about mounted filesystems so that you know what you are dealing with. For instance, do you have a CIFS filesystem mounted that is actually a Linux filesystem served up by SAMBA? If you enumerate every bit of information available about the mounted filesystem plus the complete set of stat info, you can probably identify combinations that act as fingerprints of the different scenarios.
Another option I could imagine is to call the file util and depend on its output (maybe it's enough to grep for the words executable / script). This util exists / is compilable for Windows, too (basically it just checks for some magic bytes in the files).
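A sketch of that approach in Python, shelling out to file and checking its description (treating "executable" or "script" in the output as good enough is just a guess, as the answer says):

import subprocess

def file_says_executable(path):
    # `file --brief` prints only the description, e.g. "ELF 64-bit LSB executable ..."
    out = subprocess.run(["file", "--brief", path],
                         capture_output=True, text=True).stdout.lower()
    return "executable" in out or "script" in out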

How to "defragment" a directory on ext3?

I am running a daemon which analyses files in a directory and then deletes them. If the daemon is not running for whatever reason, files stack up there. Today I had 90k files in that directory. After starting the daemon again, it processed all the files.
However, the directory remains large; "ls -dh ." returns a size of 5.6M. How can I "defragment" that directory? I already figured out that renaming the directory and creating a new one with the same name and permissions solves the problem. However, as files may get written there at any time, there doesn't seem to be a safe way to rename the directory and create a new one, because for a moment the target directory does not exist.
So a) is there a way/a (shell) program which can defragment directories on an ext3 filesystem? or b) is there a way to create a lock on a directory so that trying to write files blocks until the rename/create has finished?
"Optimize directories in filesystem. This option causes e2fsck to try to optimize all directories, either by reindexing them if the filesystem supports directory indexing, or by sorting and compressing directories for smaller directories, or for filesystems using traditional linear directories." -- fsck.ext3 -D
Of course this should not be done on a mounted filesystem.
Not really applicable for Ext3, but maybe useful for users of other filesystems:
according to https://wiki.archlinux.org/index.php/Btrfs#Defragmentation, with Btrfs it is apparently possible to defragment the metadata of a directory: btrfs filesystem defragment / will defragment the metadata of the root folder. This uses the online defragmentation support of Btrfs.
while Ext4 does support online defragmentation (with e4defrag), this doesn't seem to apply to directory metadata (according to http://sourceforge.net/p/e2fsprogs/bugs/326/).
I haven't tried either of these solutions, though.
I'm not aware of a way to reclaim free space from within a directory.
5MB isn't very much space, so it may be easiest to just ignore it. If this problem (files stacking up in the directory) occurs on a regular basis, then that space will be reused anytime the directory fills up again.
If you desperately need the ability to shrink the directory, here's an (ugly) hack that might work.
Replace the directory with a symbolic link to an empty directory. If this problem reoccurs, you can create a new empty directory, and then change the symlink to point to the new directory. Changing the symlink should be atomic, so you won't lose any incoming files. Then you can safely empty and delete the old directory.
[Edited to add: It turns out that this does not work. As Bada points out in the comments you can't atomically change a symlink in the way I suggested. This leaves me with my original point. File systems I'm familiar with don't provide a mechanism to reclaim free space within directory blocks.]

Resources