I was playing with a modified version of https://github.com/libfuse/libfuse/blob/master/example/passthrough.c and I mounted it in a folder in my home directory (/home/joao/mnt). Without realizing that it was already running, I re-ran the FUSE program and my computer stopped working. I rebooted the system and then saw that my home folder was missing most of the files it had before. Is there anything I can do to restore my old state?
I'm by no means an expert, but I'm afraid you've lost everything unless you have a backup of your system... I can only give a hint. In passthrough.c it says:
This file system mirrors the existing file system hierarchy of the system, starting at the root file system. This is implemented by just "passing through" all requests to the corresponding user-space libc functions. Its performance is terrible.
So, if I understood it right, every change you apply to the mounted passthrough FS affects the real files underneath. And since it mirrors the whole hierarchy starting at the root (which includes your home directory), mounting it inside your home directory creates a pretty nasty recursion that can break things.
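For next time, a minimal guard against exactly this accident, assuming the asker's ~/mnt path: check whether the directory is already a mountpoint before launching the FUSE binary (the ./passthrough invocation is commented out as a placeholder).

```shell
# Only start the FUSE program if ~/mnt is not already a mountpoint.
mnt="$HOME/mnt"
mkdir -p "$mnt"
if mountpoint -q "$mnt"; then
    echo "$mnt is already mounted, refusing to mount again"
else
    echo "$mnt is free"
    # ./passthrough "$mnt"   # safe to launch the FUSE binary here
fi
```

Mounting the passthrough somewhere outside the tree it mirrors (e.g. under /tmp rather than inside your home directory) also avoids the recursion described above.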
I have a Linux subsystem installed on my Windows machine. I've transferred a tar.gz file I want to access by finding the location of my subsystem and dragging the files over. But when I run the command:
tar -zxvf file_name.tar.gz
I get the error:
tar (child): vmd-1.9.4a51.bin.LINUXAMD64-CUDA102-OptiX650-OSPRay185.opengl.tar.gz: Cannot open: Permission denied
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
I assume the "permission denied" has to do with having transferred the file from Windows, since I couldn't access directories I created through Windows either. So, is there something I need to change to gain access to these files?
(PS. I know there are other ways of getting tar.gz files besides transferring from Windows, but I'll need to do this for other folders too; I only included the file type in case it was relevant.)
EDIT: You shouldn't attempt to drag files over. See answer below.
For starters, this belongs on Super User since it doesn't deal directly with a programming question. But since you've already provided an answer here that may be slightly dangerous (and done something dangerous in your question itself), I didn't want to leave this unanswered for other people to find inadvertently.
If you used the first method in that link, you are using a WSL1 instance, not WSL2. Only WSL1 made the filesystem available in that way. And it's a really, really bad idea:
There is one hard-and-fast rule when it comes to WSL on Windows:
DO NOT, under ANY circumstances, access, create, and/or modify Linux files inside of your %LOCALAPPDATA% folder using Windows apps, tools, scripts, consoles, etc.
Opening files using some Windows tools may read-lock the opened files and/or folders, preventing updates to file contents and/or metadata, essentially resulting in corrupted files/folders.
I'm guessing you probably went through the install process for WSL2, but you installed your distribution before running wsl --set-default-version 2, or something like that.
As you can see in the Microsoft link above, there's now a safe method for transferring and editing files between Windows and WSL - the \\wsl$\ network paths. Note that these are not a separate storage area; they are a network view into the distribution's own filesystem, so they are only reachable while WSL is running (files you put there persist inside the distro itself).
But even if you'd used the second method in that article (/mnt/c), you probably would have run into permissions issues. If you do, the solution should be to remount the C: drive with your uid/gid as I describe here.
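If you do hit those /mnt/c permission problems, one documented way to apply uid/gid mount options persistently is /etc/wsl.conf inside the distribution. The values below are assumptions (uid/gid 1000 is the default first WSL user; check yours with `id`), and WSL has to be restarted (`wsl --shutdown`) for the change to take effect:

```
# /etc/wsl.conf
[automount]
options = "metadata,uid=1000,gid=1000,umask=022"
```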
We're experiencing a strange problem.
We have a file component monitoring a folder. This works perfectly if the path is either
a) myrelativepath - which is relative to the Karaf installation where the camel route is run; or
b) /tst/mypath - which reads from a folder from the root
If I set log level to DEBUG I see the logs of it polling based on my interval.
However, if I set the path to be:
/mnt/windowsshare - which is a mounted windows share.
I get nothing in the logs: I don't see the poll, and it doesn't pick up any files. Apparently the route is started, though.
Interestingly, I have another Camel route which writes a file to that location (a subfolder called inbound), and it writes files with no problem.
Any ideas?
I can get perhaps more logs tomorrow, but this is only happening in this environment where we have a windows share. And the share seems to be fine.
For testing, we have run Camel as root, and as root on the command line we have verified that the files can be read (via vi) and all is OK.
Any suggestions for things to look at?
Basically, make sure you don't have too many files caught by the antExclude pattern: polling slows down dramatically as the number of files grows, so even a fraction more files makes polling very slow.
It would take more code analysis and JVM introspection to understand why.
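As a sketch, a hypothetical file endpoint URI where the exclude pattern is kept narrow (the path, delay, and pattern here are made up for illustration; the fewer files the pattern has to be tested against on each poll, the cheaper the poll):

```
file:/mnt/windowsshare/inbound?delay=30000&antExclude=**/archive/**
```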
In the process of symlinking my dotfiles (.vimrc, .zshrc, .bashrc etc.) I wrote a simple ruby script to do this for me so I could switch between two different sets of dotfiles... however in the process I made a dumb mistake, and ended up linking my backup files as symlinks of the home folder copies, and vice versa... making them not accessible (vi says permission denied)
So I tried unlinking the backups, and now the files read 'no such file or directory' in my home folder, yet 'locate .zshrc' tells me it's there. I realize it would have been prudent to push them to a repo first. Any suggestions?
locate works off of a cached database; you'll have to run updatedb (possibly as root) to refresh it. Unfortunately, that means those files are probably gone forever.
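The distinction is easy to demonstrate: find inspects the live filesystem, while locate answers from the snapshot that updatedb last built (the file name below is arbitrary).

```shell
# find walks the real filesystem, so it reflects deletions immediately;
# locate would keep listing the file until updatedb rebuilds its database.
touch demo-dotfile
rm demo-dotfile
find . -maxdepth 1 -name demo-dotfile   # prints nothing: the file is really gone
# sudo updatedb                         # refresh locate's database (needs root)
```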
I have developed a Mac application (using the DiscRecording framework and IOKit) that creates hybrid Video-DVDs. The resulting DVD is fully compatible with the Video-DVD specification. The hybrid disc hosts HFS+, UDF and ISO filesystems. Now the problem: the Mac automatically mounts the HFS+ filesystem, but the default DVD Player on the Mac cannot play a CSS-protected movie from the HFS+ filesystem. As a workaround I wrote a script which mounts the UDF filesystem alongside the HFS+ one; it loads the UDF 2.1 kernel extension and mounts the UDF filesystem. This worked, but it's not desirable as it requires the root password.
Is it possible to build a solution which auto-detects the hybrid disc and mounts both the HFS+ and UDF filesystems, without compromising system security? Needing the root password once in a lifetime is OK, but needing it every time a disc is used is not.
Any help would be highly appreciated.
Summary of the comment thread:
diskutil mount doesn't require root permissions, so it's preferable to use that if possible
The Disk Arbitration framework can be used to prevent filesystems from being mounted, if necessary.
If you need to repeatedly perform an action as root and don't want to keep asking for the password, you can put the commands in a script, mark it as owned by root and set the setuid bit; you'll only need root permissions once for this. (Beware that most kernels ignore the setuid bit on interpreted scripts, so in practice this often means putting the bit on a small compiled wrapper that runs the script.)
For serious filesystem and disk trickery, you sometimes can't avoid dropping to the kernel level. An advantage of an installed kext is that it's the earliest possible way to respond to an inserted disk.
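For the setuid suggestion above, the permission mechanics look like this (the file name is hypothetical; the chmod itself can be demonstrated on a throwaway file with no root needed):

```shell
# Demonstrate the setuid permission bits on a throwaway file.
touch mount-hybrid.sh
chmod 4755 mount-hybrid.sh        # the leading 4 is the setuid bit
ls -l mount-hybrid.sh             # mode column shows -rwsr-xr-x
# For a real passwordless root action the file must also be root-owned:
# sudo chown root:wheel mount-hybrid.sh
```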
Hello all. I have some very important system files which I want to protect from accidental deletion, even by the root user. I could create a new partition for them and mount it read-only, but the problem is that I want my application, which handles those system files, to have write access to that partition and be able to modify them. Is that possible using the VFS? Since the VFS handles access to the files, I could have a module inserted at the VFS layer which detects a write access to that partition, checks the authorization, and then allows or rejects it.
If not, please provide suggestions on how such a system could be implemented and what I would need in that case.
If a system like this already exists, please point me to it.
I am using Linux and want to implement this in C; I think it would only be possible in C.
Edit: There are programs on Windows which can restrict access, even for the administrator, to some important folders; would that be possible on Linux?
My application is a system backup and restore program which needs to keep its backup information safe and secure. So I would like to have a secured part of a partition which could not be accidentally deleted in any way. There are methods of locking a flash drive; can some of those methods be used to lock a partition on Linux too, so that mounting is password protected? I am not writing a virus: my application will give the user the option to delete the backups, but I don't want to allow them to be deleted by any other application.
Edit: I am writing a system restore and backup program for Ubuntu; I am a computer engineering student.
Edit: Basile Starynkevitch's opinion is that I would be committing the worst sin of programming if I did anything like this, but please offer suggestions treating this as an experimental project; I could make some changes at the VFS layer so that this could work.
You could use chattr, e.g.
chattr +i yourfile
But I don't think it is a good thing to do. People using root access are expected to be careful, and those having root access can still undo the above with chattr -i.
There is no way to forbid people having root access, or people having physical access to the computer, from accessing, removing, or changing your file if they really want to (they could update & hack the kernel, for instance). Read more about the trusted computing base.
And I believe it is even unethical (and perhaps illegal, in some countries) to want to do that. I own my PC, and I don't understand why you should disallow me from changing some data on it just because I happened to install some software.
By definition of root on Linux, it can do anything... You won't be able to prohibit it from erasing or altering data. People with root access can write arbitrary bytes at arbitrary places on the disk.
And on a machine that I own (or perhaps just have physical access to), I will, thank God, always be able to remove a file (even under Windows: I could, for example, boot a Linux CD-ROM, remove the file from Linux by accessing the NTFS partition, and then reboot into Windows...).
So I think you should not spend even a minute trying to make it more difficult for root to alter your precious files. Leave them as ordinary root-owned files...
PHILOSOPHICAL RANT
The Unix philosophy has always been to trust the system administrator, that is, the root user (while protecting newbie users from mistakes). Root is able to do anything (this is why people avoid running as root, even on a personal machine). There have never been strong features to prevent root from making mistakes, because the system administrator is expected to know the system well and is trusted.
And Unix sysadmins understand this fact: it is part of their culture. (This is probably in contrast with Windows administration culture.) They know when to be careful; they don't expect software to prevent mistakes made as root.
To get root squashing (which maps the remote root user to an unprivileged one, so that root cannot use its privileges on the exported files), you can set up a local NFS export. This forum page explains how to mount an NFS share locally. The command is:
mount -t nfs nameofcomputer:/directory_on_that_machine /directory_you_should_have_already_created
NFS has root squashing enabled by default, which should solve your problem. From there, just make sure your program stores its files on the NFS mount.
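For reference, a hypothetical /etc/exports line for such a local export (the path is made up, and root_squash is the default anyway; it is spelled out here only for clarity):

```
/srv/backups  localhost(rw,sync,root_squash)
```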
Sounds to me like you're trying to write a virus.
No doubt you will disagree.
But I'm willing to bet the poor people that install your software will feel like it's a virus, because it will be behaving like one by making itself hard to remove.
Simply setting read/write permission flags should suffice for anything else.