In the process of symlinking my dotfiles (.vimrc, .zshrc, .bashrc, etc.) I wrote a simple Ruby script to do it for me, so I could switch between two different sets of dotfiles. However, in the process I made a dumb mistake and ended up linking my backup files as symlinks of the home folder copies, and vice versa, making them inaccessible (vi says "permission denied").
So I tried unlinking the backups, and now the files read "no such file or directory" in my home folder, yet locate .zshrc tells me it's there. I realize it would have been prudent to push them to a repo first. Any suggestions?
locate works off a cached database; you'll have to run updatedb (possibly as root) to bring that database up to date. Unfortunately, that means the files are probably gone forever.
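If you want to double-check what actually survived before giving up, refresh the database and compare it against an uncached listing (a minimal sketch; .zshrc is just the example from the question):

sudo updatedb                            # rebuild locate's cached database
locate .zshrc                            # now reflects the current state of the disk
find ~ -maxdepth 1 -name '.zshrc*' -ls   # find doesn't use a cache, so this is ground truth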
I have a Linux subsystem installed on my Windows machine. I've transferred a tar.gz file I want to access by finding the location of my subsystem on disk and dragging the file over. But when I run the command:
tar -zxvf file_name.tar.gz
I get the error:
tar (child): vmd-1.9.4a51.bin.LINUXAMD64-CUDA102-OptiX650-OSPRay185.opengl.tar.gz: Cannot open: Permission denied
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
I assume the permission denial has to do with having transferred from Windows, since I couldn't access directories I created through Windows either. So, is there something I need to change to gain access to these files?
(PS: I know there are other ways of getting tar.gz files than transferring them from Windows, but I'll need to do this for other folders too; I only included the file type in case it was relevant.)
EDIT: You shouldn't attempt to drag files over. See answer below.
For starters, this belongs on Super User, since it doesn't deal directly with a programming question. But since you've already provided an answer here that may be slightly dangerous (and even edited it into your question), I didn't want to leave this unanswered for other people to find inadvertently.
If you used the first method in that link, you are using a WSL1 instance, not WSL2. Only WSL1 made the filesystem available in that way. And it's a really, really bad idea:
There is one hard-and-fast rule when it comes to WSL on Windows:
DO NOT, under ANY circumstances, access, create, and/or modify Linux files inside of your %LOCALAPPDATA% folder using Windows apps, tools, scripts, consoles, etc.
Opening files using some Windows tools may read-lock the opened files and/or folders, preventing updates to file contents and/or metadata, essentially resulting in corrupted files/folders.
I'm guessing you probably went through the install process for WSL2, but you installed your distribution before setting wsl --set-default-version 2 or something like that.
As you can see in the Microsoft link above, there's now a safe method for transferring and editing files between Windows and WSL: the \\wsl$\ shares. These are served to Windows over a network file protocol instead of by touching the distro's virtual disk directly, which is what makes them safe. Note that a share is only reachable while its distro is running, but the files themselves live in the distro's filesystem and persist across reboots.
But even if you'd used the second method in that article (/mnt/c), you probably would have run into permissions issues. If you do, the solution should be to remount the C: drive with your uid/gid as I describe here.
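For reference, the remount might look something like this (a sketch, assuming your user is uid/gid 1000 and the default /mnt/c mount point; the Windows-side path is just an example):

sudo umount /mnt/c
sudo mount -t drvfs C: /mnt/c -o metadata,uid=1000,gid=1000
# then copy the archive in from the Windows side and extract it as yourself
cp "/mnt/c/Users/<you>/Downloads/file_name.tar.gz" ~/
tar -zxvf ~/file_name.tar.gz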
I was playing with a modified version of https://github.com/libfuse/libfuse/blob/master/example/passthrough.c and I mounted it in a folder in my home directory (/home/joao/mnt). Without realizing that I was already running it, I re-ran the FUSE program and my computer stopped working. I rebooted the system and then saw my home folder without most of the files that it had before. Is there anything I can do to restore my old state?
I'm in no way an expert, but I'm afraid you've lost everything unless you have a backup of your system. I can only give a hint. In passthrough.c it is said that:
This file system mirrors the existing file system hierarchy of the system, starting at the root file system. This is implemented by just "passing through" all requests to the corresponding user-space libc functions. Its performance is terrible.
So, if I understood it right, every change you apply to the mounted passthrough FS is passed straight through to the real files it mirrors. And since it mirrors the hierarchy starting at the root file system, mounting it inside your home directory puts the mount point inside the very tree it mirrors, a recursion that can easily break something.
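If you want to experiment with the example again, it may be safer to build it per the libfuse instructions and mount it read-only, so nothing can be modified through the mirror (a sketch; the mount point is arbitrary):

gcc -Wall passthrough.c `pkg-config fuse3 --cflags --libs` -o passthrough
mkdir -p /tmp/pt_mnt
./passthrough -o ro /tmp/pt_mnt   # -o ro makes the whole mount read-only
fusermount3 -u /tmp/pt_mnt        # unmount when done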
I'm having a tough time with ClearCase. I'm working with a dynamic view.
Somehow, I got two files that are eclipsed. I compared the folder in my version (with the eclipsed files) with every version on my branch and every version on the main branch. The original files are nowhere to be found.
I searched for the files in Windows Explorer and found them in the lost+found directory (with a 32-character extension). The directory itself appears to be invisible: I can't browse to it in either Windows Explorer or ClearCase.
I opened a DOS window and ran cleartool to remove the files (I had fun typing it all, 32-character extension included, at the DOS prompt). I could not find a way to delete them from either ClearCase Home Base or ClearCase Explorer.
I thought this would solve my problem, since there are no more files with the same names anywhere on my computer.
I deleted the eclipsed files and created them again in Qt Creator. But when I opened ClearCase Explorer again, there they were - eclipsed! I cannot figure out where the evil twins are. I tried finding the eclipsed files by using cleartool. Nothing. I've tried many approaches I've found online - none work.
I tried stopping and starting the view. I deleted the eclipsed files again, closed Qt Creator and then opened Qt Creator again and recreated them. I tried many other things suggested - none made any difference.
If I'm eclipsing existing files, where are they? I'm starting to think that the real evil one here is the parent - ClearCase!
Eclipsed doesn't mean evil twins (adding the same file multiple times does, though).
When you add a file to source control, ClearCase will, roughly as sketched below:
checkout the parent directory
access the file in order to create a temporary one (called 'afile.mkelem')
create the file in the ClearCase vob
check in the parent directory
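The same sequence done by hand with cleartool would look roughly like this (a sketch; the file name and the no-comment flags are just examples):

cleartool checkout -nc .         # check out the parent directory
cleartool mkelem -ci -nc afile   # turn the view-private file into an element and check it in
cleartool checkin -nc .          # check the parent directory back in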
I usually see repeatedly eclipsed files when ClearCase isn't able to access the content of a file, because another process prevents it from doing so.
Try adding those files after closing the Qt editor.
The OP Rob Moore mentions having solved the issue with:
I changed the view to main/LATEST, and the file showed up.
I went to the tree view of that file and noticed that I had a branch there with one version.
I compared my branch version with the main/LATEST and they were the same, so I deleted my branch and put my label on the main/LATEST version
So it is possible that, as soon as the element was added, it wasn't properly selected by the config spec (being a new version on a branch which wasn't part of the config spec), and its state reverted to "eclipsed".
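In cleartool terms, a cleanup like Rob's might look like this (a sketch; the element, branch, and label names are all hypothetical, and the label type must already exist):

cleartool rmbranch -force afile@@/main/mybranch   # remove the one-version branch
cleartool mklabel MYLABEL afile                   # label the version now selected (main/LATEST)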
I am running a daemon which analyses files in a directory and then deletes them. If the daemon is not running for whatever reason, files pile up in that directory. Today I had 90k files there. After starting the daemon again, it processed all of them.
However, the directory remains large: "ls -dh ." reports a size of 5.6M. How can I "defragment" that directory? I already figured out that renaming it and creating a new one with the same name and permissions solves the problem. However, as files get written there at any time, there doesn't seem to be a safe way to do the rename-and-recreate; for a moment, the target directory would not exist.
So: a) is there a way, or a (shell) program, to defragment directories on an ext3 filesystem? Or b) is there a way to take a lock on a directory so that attempts to write files block until the rename/create has finished?
"Optimize directories in filesystem. This option causes e2fsck to try to optimize all directories, either by reindexing them if the filesystem supports directory indexing, or by sorting and compressing directories for smaller directories, or for filesystems using traditional linear directories." -- fsck.ext3 -D
Of course this should not be done on a mounted filesystem.
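A typical run would therefore be done after unmounting, or from a rescue shell (a sketch; /dev/sdXN is a placeholder for your ext3 partition):

umount /dev/sdXN
fsck.ext3 -f -D /dev/sdXN   # -f forces the check, -D optimizes (compacts) directories
mount /dev/sdXN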
Not really applicable for Ext3, but maybe useful for users of other filesystems:
According to https://wiki.archlinux.org/index.php/Btrfs#Defragmentation, with Btrfs it is apparently possible to defragment the metadata of a directory: btrfs filesystem defragment / will defragment the metadata of the root folder. This uses the online defragmentation support of Btrfs.
While Ext4 does support online defragmentation (with e4defrag), this doesn't seem to apply to directory metadata (according to http://sourceforge.net/p/e2fsprogs/bugs/326/).
I haven't tried either of these solutions, though.
I'm not aware of a way to reclaim free space from within a directory.
5MB isn't very much space, so it may be easiest to just ignore it. If this problem (files stacking up in the directory) occurs on a regular basis, then that space will be reused anytime the directory fills up again.
If you desperately need the ability to shrink the directory, here's an (ugly) hack that might work.
Replace the directory with a symbolic link to an empty directory. If the problem reoccurs, you can create a new empty directory and then change the symlink to point to it. Changing the symlink should be atomic, so you won't lose any incoming files. Then you can safely empty and delete the old directory.
[Edited to add: It turns out that this does not work. As Bada points out in the comments you can't atomically change a symlink in the way I suggested. This leaves me with my original point. File systems I'm familiar with don't provide a mechanism to reclaim free space within directory blocks.]
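(For what it's worth, a rename over the symlink is atomic on POSIX filesystems, so a variant of the hack along those lines may still be workable; a sketch with hypothetical directory and link names:)

ln -s /data/incoming.new incoming.tmp   # build the new symlink under a temporary name
mv -T incoming.tmp incoming             # rename(2) atomically replaces the old symlink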
This is starting to vex me. I recently decided to clear out my FTP space, and stumbled across an old WordPress install I forgot I had (oh yes, very security conscious of me). Anyway, for some reason deleting the directory failed, so I investigated to see what was causing the blockage and I've narrowed it down to a file in wp-content.
Now when I try to delete this file I get two errors. I've tried in Windows Explorer (FTP) and the Web Control Panel's File Manager. Here are some error shots:
As you can see, my File Manager thinks the file is a Symbolic Link. While it scares me that my web server is host to an obviously religious artifact, I'm also heavily confused by the situation.
I've tried renaming the file.
I've refreshed the FTP view.
I've tried moving the file to another dir (which worked, no success on deletion though).
I've tried editing the file and then deletion.
And I'm at a loss. Is there a special way to delete symlinks? I'd never heard of them until now.
edit
Oho, Windows, you really are a magician of sorts. I decided to take a look at my FTP via the command prompt and guess what? The file doesn't exist. Whether FTP ignores symlinks I don't know, but I'm about to give up :P
First of all, try emailing your web host, either for SSH access or to have them remove the symlink for you.
If you get SSH access, use:
unlink index.php
Or if neither works, create a PHP file there (for instance remove.php) that contains:
<?php unlink("./index.php"); ?>
Then open that file in your browser; afterwards, remove the remove.php file.