Does anyone know of an easy way to programmatically mount a file as a "disk" (FAT32) in Windows 7?

I have some automated tests (using CUnit) which require a "disk-image" file (a raw copy of a disk) to be "mounted" in Windows and explored. I have previously used a tool/library called "FileDisk-17", but this doesn't seem to work on my Windows 7 (64-bit).
Update
I should point out that changing the image format (to, say, VHD) is not an option.
Any suggestions as to other (perhaps better supported) tools or libraries for mounting the file? The project is coded in ANSI C and compiled using MinGW.
Best regards!
Søren

Edit: Searching Bing for +filedisk 64 brings up a 64-bit build of FileDisk, the utility you referred to:
http://www.winimage.com/misc/filedisk64.htm
And FileDisk-15, signed for 64-bit, here:
http://www.acc.umu.se/~bosse/
I can't vouch for it as I have never used it and am not familiar with the author.
Alternatively:
If you have a VHD, you can mount that in Windows easily:
http://technet.microsoft.com/en-us/library/cc708295(WS.10).aspx
See also:
http://www.petri.co.il/mounting-vhd-files-with-vhdmount.htm
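On Windows 7 itself, diskpart can also attach a VHD natively; a minimal session might look like this (the image path is just an example):
diskpart
DISKPART> select vdisk file=C:\images\rawdisk.vhd
DISKPART> attach vdisk
(and detach vdisk when you are done).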
Since you have a raw dd image, not a VHD, you will need to convert it first:
http://www.bebits.com/app/4554
Or qemu-img.exe can also do this:
qemu-img.exe convert -f raw rawdisk.img -O vpc rawdisk.vhd
Alternatively, you can create an empty VHD, and use DD to copy the raw image to the VHD, by opening the VHD as a raw device.
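With the empty VHD attached as a physical drive, an invocation along these lines (using a dd build for Windows) should do it; the drive number here is purely illustrative, so double-check it in Disk Management first, as writing to the wrong device is destructive:
dd if=rawdisk.img of=\\.\PhysicalDrive2 bs=1M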

I faced this problem recently and found ImDisk to be an extremely nice solution:
- Free, with source available and a very flexible open source license
- Trivial setup (I have seen filedisk64, from the accepted answer, described as having a "technical" setup)
- Straightforward GUI and command-line access
- Worked on Windows 7 64-bit
- Seems to happily mount any kind of filesystem recognised by Windows (in my case, FAT16)
- Works with files containing:
  - raw partitions
  - entire raw disks (i.e. including the MBR and one or more partitions; which partition to mount can be selected)
  - VHD files (which, it turns out, are just raw partitions or disks with a 512-byte footer appended!)
- Can also create RAM drives, either initially empty or based on an existing disk image (very neat, I must say!)
I did encounter minor issues trying to unmount drives. I was unable to unmount a drive from the GUI right-click context menu, as the drive appeared to be "in use" by the explorer.exe process. Closing the Explorer window and using imdisk -d -m X: also didn't work; however imdisk -D -m X: (-D "forces" an unmount, whatever that means) did. This worked even if the drive was visible in an open Explorer window, without appearing to create any problems.

However, even after the drive appeared to have fully unmounted, an imdisk -l to list all available devices would still report that \Device\ImDisk0 exists, and if you remount the drive later, both that and \Device\ImDisk1 will appear in the output of imdisk -l (and so on with more unmount/remount cycles). This didn't create any problems with actually using the mounted drive when I tried a few unmount/remount cycles, though it theoretically might if you perform this many times between reboots.
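For reference, the basic command-line cycle looks roughly like this (image path and drive letter are examples):
imdisk -a -f C:\images\disk.img -m X: (attach the image file as drive X:)
imdisk -l (list ImDisk devices)
imdisk -D -m X: (force an unmount of drive X:)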
ImDisk was invaluable for transferring the contents of a 1.5 GB disk drive with one FAT16 DOS partition from an ancient 486 machine.

Related

How can I get access to files transferred from Windows to WSL-2 Ubuntu?

I have a Linux subsystem installed on my Windows machine. I've transferred a tar.gz file I want to access by finding the location of my subsystem and dragging the files over. But when I run the command:
tar -zxvf file_name.tar.gz
I get the error:
tar (child): vmd-1.9.4a51.bin.LINUXAMD64-CUDA102-OptiX650-OSPRay185.opengl.tar.gz: Cannot open: Permission denied
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
I assume the permission denial has to do with the file having been transferred from Windows, since I couldn't access directories I created through Windows either. So, is there something I need to change to gain access to these files?
(PS: I know there are other ways of getting tar.gz files than transferring them from Windows, but I'll need to do this for other folders too; I only included the file type in case it was relevant.)
EDIT: You shouldn't attempt to drag files over. See answer below.
For starters, this belongs on Super User since it doesn't deal directly with a programming question. But since you've already provided an answer here that may be slightly dangerous (and even in your question), I didn't want to leave this unanswered for other people to find inadvertently.
If you used the first method in that link, you are using a WSL1 instance, not WSL2. Only WSL1 made the filesystem available in that way. And it's a really, really bad idea:
There is one hard-and-fast rule when it comes to WSL on Windows:
DO NOT, under ANY circumstances, access, create, and/or modify Linux files inside of your %LOCALAPPDATA% folder using Windows apps, tools, scripts, consoles, etc.
Opening files using some Windows tools may read-lock the opened files and/or folders, preventing updates to file contents and/or metadata, essentially resulting in corrupted files/folders.
I'm guessing you probably went through the install process for WSL2, but you installed your distribution before setting wsl --set-default-version 2 or something like that.
As you can see in the Microsoft link above, there's now a safe method for transferring and editing files between Windows and WSL: the \\wsl$\ share. Files copied through that share are written into the Linux filesystem with the proper ownership and permissions.
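For example, in Explorer's address bar (the distribution name is whatever wsl -l reports; Ubuntu and the username here are assumptions):
\\wsl$\Ubuntu\home\<username>\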
But even if you'd used the second method in that article (/mnt/c), you probably would have run into permissions issues. If you do, the solution should be to remount the C: drive with your uid/gid as I describe here.
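A sketch of that remount, assuming the default WSL uid/gid of 1000 (the metadata option is what lets Linux permissions stick on the Windows drive):
sudo umount /mnt/c
sudo mount -t drvfs C: /mnt/c -o metadata,uid=1000,gid=1000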

rsync on FAT32 and NTFS

A little background: I have tried to use rsync to back up my wife's home directory to an external USB drive with the command
rsync -va /home/wife /run/media/wife
but kept getting error messages that mkstemp failed, and that rsync failed to set times, because of a read-only filesystem. Worse, it seems that rsync is unable to tell when files don't need syncing, and winds up copying a lot of stuff it doesn't need to, resulting in ridiculously slow backup times.
So I tried using rsync -rtvO instead, based on this guy's advice. Okay, no more warnings, but the backups still seem too slow, especially on big media files that already exist, i.e. it's still copying stuff unnecessarily.
Is my analysis correct?
Is there a workaround?
Will the problem be fixed if I use an NTFS drive for her backups?
I could of course use a Linux filesystem, but on rare occasions she would like to be able to take the drive to work and access it from the Windows machines there.
Try using --modify-window=1
In particular, when transferring to or from an MS Windows FAT
filesystem (which represents times with a 2-second resolution),
--modify-window=1 is useful (allowing times to differ by up to 1 second).
https://download.samba.org/pub/rsync/rsync.html
You could also try using --size-only
skip files that match in size
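For instance, mirroring the command below but skipping anything whose size already matches (paths taken from the question; adjust to taste):
rsync -rtv --size-only /home/wife/ /run/media/wife/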
For rsync to FAT, this is what I use and it seems to work pretty well:
rsync -rtv --modify-window=1 source/ destination/
Source: https://serverfault.com/a/144475/58568

Real remote editing without X-Forwarding, using Vim or the like

I'm currently working on a rather large web project written using C servlets (utilizing the G-WAN web server). In the past I've used a couple of IDEs for my LAMP/PHP jobs, like Eclipse.
My problem with Eclipse is that you can either mirror the project locally, which isn't possible in this case as I'm working on a Mac (the server does not run on OS X), or use the "remote" view, which re-uploads files when you save them.
In the latter case, the file is only partly written while uploading, which makes this a no-go for a running web server, and the file could become corrupted if the connection were lost during uploading. Also, uploading the whole file just to change a single character seems rather inefficient to me.
So I was thinking:
Wouldn't it be possible to have the IDE open Vim over SSH and mirror my changes there, and then just :w (save)? Or use some kind of diff files for the changes?
The first option would be preferred, as it has the added advantage of Vim's .swp files, which make it possible for others to know when someone is already editing the file.
My current solution is using ssh+vim, but then I lose all the cool features I have with Eclipse and other more advanced IDEs.
Also, regarding X-Forwarding: the reason I don't like it is speed. It feels way slower than editing locally, and takes up unneeded bandwidth, when all I want to do is basically text editing.
P.S.: I couldn't find any more appropriate tags for the question, especially no "remote" tag, but if you know any, feel free to add them. Also, if there is another similar question, feel free to point it out - I couldn't find any.
Thank you very much.
If you're concerned about having to transmit the entire file for minor changes, the only solution that comes to my mind is running (either continuously or on demand) an rsync job that mirrors the remote site to your local system (and back). The rsync protocol just transmits the delta information. According to "Are rsync operations atomic at file level?", the change is atomic.
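A hedged sketch of such a mirror job (host and paths are examples); --partial-dir keeps half-transferred files out of the live tree, and rsync's rename-into-place gives the atomicity mentioned above:
rsync -az --delete --partial-dir=.rsync-partial ./project/ user@server:/var/www/project/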
Another possibility: run everything in a virtual machine on your Mac. The server and the IDE/text editor are both on the same virtual machine so you don't have to fear network issues.
Because the source code on the virtual machine is under some kind of VCS, the classic code → test → commit process is trivial (at least theoretically).

Two way sync with rsync

I have a folder a/ and a remote folder A/.
I now run something like this in a Makefile:
get-music:
	rsync -avzru server:/media/10001/music/ /media/Incoming/music/

put-music:
	rsync -avzru /media/Incoming/music/ server:/media/10001/music/

sync-music: get-music put-music
When I run make sync-music, it first pulls all the diffs from the server to local, then does the opposite, pushing all the diffs from local to the server.
This works very well, but only as long as there are just updates or new files. If there are deletions, it doesn't do anything.
rsync has the --delete and --delete-after options to help accomplish what I want, but they don't work for a 2-way sync.
If I want files deleted locally to be deleted on the server on a sync, that works; but if, for some reason (explained below), files exist locally that no longer exist on the server because they were deleted there, I want them removed locally, not copied back to the server (which is what happens).
The thing is, I have 3 machines in play:
- desktop
- notebook
- home-server
So, sometimes the server will have files that were deleted by a notebook sync, for example, and then, when I run a sync from my desktop (where the deleted files still exist), I want these files to be deleted locally and not copied back to the server.
I guess this is only possible with a database and track of operations :P
Any simpler solutions?
Thank you.
Try Unison: http://www.cis.upenn.edu/~bcpierce/unison/
Syntax:
unison dirA/ dirB/
Unison asks what to do when files differ, but you can automate the process by using the following, which accepts the default (non-conflicting) actions:
unison -auto dirA/ dirB/
unison -batch dirA/ dirB/ asks no questions at all, and writes to output how many files were ignored (because they conflicted).
Note: I am no longer using Unison (I use NextCloud, which doesn't address the original use case). However, rsync is not designed for bidirectional sync, while Unison is. Unison may have its bugs (as any other piece of software) and its wrinkles. I am surprised that it now seems to be actively maintained (last time I looked, it appeared dead), but I'm not sure of its current state. I haven't needed a two-way file synchronizer lately, so there may be better options by now.
Since the original question also involves a desktop and a laptop, and an example involving music files (hence he's probably using a GUI), I'd also mention one of the best bidirectional, multi-platform, free and open source programs to date: FreeFileSync.
It's GUI-based, very fast and intuitive, and comes with filtering and many other options, including the ability to connect remotely, to view and interactively manage "collisions" (for example, files with similar timestamps), and to switch between bidirectional transfer, mirroring and so on.
FreeFileSync can easily sync two computers on the same network and also sync two computers on different and remote networks.
On the same network: have FreeFileSync use the local file system on one side and a shared network drive/path on the other. On Windows systems you enable file/disk sharing on one computer and access that share from the other. I use FreeFileSync this way to keep the source code on my main development PC synced with my 2 laptops.
I have also synced one of these laptops with a Linux server with Samba installed and sharing one of its directories.
Across networks: create a VPN and do the same as above; FreeFileSync will see the remote disk as if it were on the local network. Or buy a router that allows you to connect a USB disk to it and share it over the internet. I have installed a VPN on a remote Linux server and used it through the OpenVPN Windows client.
You could also try bitpocket: https://github.com/sickill/bitpocket
Try this:
get-music:
	rsync -avzru --delete-excluded server:/media/10001/music/ /media/Incoming/music/

put-music:
	rsync -avzru --delete-excluded /media/Incoming/music/ server:/media/10001/music/

sync-music: get-music put-music
I just tested this and it worked for me. I'm doing a 2-way sync between Windows 7 (using Cygwin with the rsync package installed) and a FreeNAS fileserver (FreeNAS runs on FreeBSD with the rsync package pre-installed).
You might use Osync: http://www.netpower.fr/osync , which is rsync-based with intelligent deletion propagation. It also has multiple options like resuming a halted execution, soft deletion, and time control.
You could try csync; it is the sync engine under the hood of ownCloud.
I'm surprised no one has mentioned Syncthing yet. I have been using it for years to synchronize my phone, my tablet and my two laptops. One time I also used it to send 10 GB of photos to my family ~600 km away, straight from my machine to their machine, and it was incredibly fast (despite the data getting routed through Syncthing's discovery server to work around NAT issues). I also tried OwnCloud/NextCloud at some point but Syncthing has been much more reliable and, also, much faster.
I'm now using SparkleShare: https://www.sparkleshare.org/
It works on Mac, Linux and Windows.
I'm not sure whether it works with two-way syncing, but for --delete to work, you also need to add the --recursive parameter.
Rclone is what you are looking for. Rclone ("rsync for cloud storage") is a command-line program to sync files and directories to and from different cloud storage providers, as well as local filesystems. Rclone was previously known as Swiftsync and has been available since 2013.
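Newer rclone releases (v1.58 and later) even include a dedicated bidirectional command, rclone bisync. A sketch, assuming server: is a configured rclone remote (the first run needs --resync to establish a baseline):
rclone bisync /media/Incoming/music server:/media/10001/music --resync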

Read data from damaged media

Is it possible to read damaged media (CD, HDD, DVD, ...) even if Windows Explorer bombs out?
What I mean to ask is whether there is a set of APIs or something that can access the disk at a very low level (below Explorer?) and read whatever can be retrieved, even if only partially; especially when you can still see from Explorer that the file is there, but can't do anything with it because it is damaged somehow (a scratch on a CD, etc.)?
The main problem with Windows Explorer is that it doesn't support resuming copying after a read error. Most superficially scratched CDs, for example, will fail on different areas of the disk every time you eject and reinsert them.
Therefore, with a utility that supports resuming copy operations, it is possible to read the entire contents of a damaged CD by doing "eject/reload/resume" a few times.
In fact, this is what a utility I wrote does, and I've never needed anything fancier to read scratched disks. (It simply uses ReadFile and WriteFile.)
One step lower would be opening the raw partition (i.e. the disk image) by passing a string such as "\\.\F:" (note the double-backslash-dot prefix) to CreateFile. That would allow you to read raw sectors from the drive, but reconstructing files from that data would be hard.
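A minimal sketch of that approach; the drive letter, the fixed 512-byte sector size, and the bare-bones error handling are illustrative assumptions (this compiles with MinGW):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open the F: volume for raw access; in C source the backslashes
       of \\.\F: must themselves be escaped. Raw volume access
       typically requires administrator rights. */
    HANDLE h = CreateFileA("\\\\.\\F:", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* Volume reads must be done in whole sectors; 512 bytes is assumed
       here (query IOCTL_DISK_GET_DRIVE_GEOMETRY for the real size). */
    BYTE sector[512];
    DWORD got = 0;
    if (ReadFile(h, sector, sizeof sector, &got, NULL))
        printf("Read %lu bytes from the first sector.\n", got);
    else
        fprintf(stderr, "ReadFile failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}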
In fact, the "\\.\" syntax allows you to open devices in the "\GLOBAL??" branch of the Windows Object Manager namespace as if they were files. It's not unlike calling dd with /dev/x as a parameter. There is also a "\Device" branch, but that's only accessible via DeviceIoControl() (i.e. ioctl()), meaning there's no simple ReadFile()/WriteFile() interface.
Anything lower level than that would be device-specific, I guess; like reading raw CD-ROM data (including ECC bits) the way some CD-burning programs do. You'd have to do some research on the specific media (CD, flash, DVD) and what your hardware allows you to do on them.
Note: in C source code the backslashes must themselves be escaped; the string literal to pass to CreateFile is "\\\\.\\F:".
If you want to do it, do it from the Linux side; see this open source project: http://sourceforge.net/projects/monkeycity/
Or a ready-made freeware app: http://www.theabsolute.net/sware/dskinv.html
The first step is dd_rescue. After that, you're free to try anything to reconstruct the data.
And there's GNU ddrescue
GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, cdrom, etc) to another, trying to rescue the good parts first in case of read errors.
Make sure to use the 3-arg version (manual):
ddrescue [options] infile outfile [mapfile]
That is, do use a mapfile even if it's optional, because:
If you use the mapfile feature of ddrescue, the data is rescued very efficiently, (only the needed blocks are read). Also you can interrupt the rescue at any time and resume it later at the same point. The mapfile is an essential part of ddrescue's effectiveness. Use it unless you know what you are doing.
And it's also included in Cygwin and Homebrew.
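For a scratched CD, a typical invocation (the device name is an example; -b 2048 matches the CD-ROM sector size, and -r3 retries bad sectors three times) might be:
ddrescue -b 2048 -r3 /dev/cdrom cdimage.iso cdimage.map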
I don't know what layer exists between Windows Explorer and the Win32 APIs. You can try to write a program with the Win32 File I/O stuff. If that doesn't work, then you have to write your own device driver to get any lower.
I've had some luck from the Linux side, or using BartPE (http://www.nu2.nu/pebuilder/), but just seeing the file doesn't always mean it is going to be recoverable, whether you're trying from Windows or Linux. Your best bet might be to use a trial of a recovery program.
I have had two disks start to disintegrate on me. From the pattern of unreadable sectors I think they had internal flaking of their emulsion. WinXP Explorer just threw up its hands and said the drive didn't even exist.
In both cases I used "GetDataBack for NTFS" from Runtime Software (http://www.runtime.org/). You can download a free trial which will show you what you could get back if you paid for it. When I bought it it was $49, but I see it is now $79.
This program is amazing. It's not necessarily fast as it will reread some sectors over and over, trying to get a consensus value from multiple tries, but when it's done you can get back stuff that you thought was gone forever. I had one drive that it took over 10 hours to analyze, but when it was done I got back over 97% of a 500GB drive. Definitely worth the price.
Another great tool is Beyond Compare. I have rev 2.5.3, but it is currently at 3.?? and costs $30. They have a full-functionality, 30-day trial. It does a great job of copying large quantities of files (and only those that need to be copied) and, unlike Explorer, it doesn't blow up if something fails. It's sort of like a visual rsync for Windows, if you're familiar with that program from the Samba people.
I have no connection with either of the companies mentioned, other than being a very satisfied customer.
The gold standard for recovering data from a magnetic storage device would have to be SpinRite. It's a commercial app though, so you probably wouldn't learn much from it.
If you have a Linux machine around, I can recommend dvdisaster. It was originally meant for creating error-correction files, but it also reads DVDs into an image while ignoring read errors, and you can use different drives one after another to get missing sectors filled in the image.
