how to wipe a filesystem on AIX

My company is selling some old AIX systems, and they want me to wipe all data from them. I'm not sure of the best way to do that. I'm guessing it would go something like this:
Unmount the filesystem
Look up the 'hdisk#' device in /etc/filesystems (AIX's fstab equivalent) and comment it out
Use dd to copy /dev/zero over the device
What I'm not sure about is whether the whole logical volume manager complicates things further than that. I.e., is it correct to treat the /dev/hdisk# device like a physical disk for this purpose?

You can boot from the Standalone Diagnostics CD and wipe the drives. You have options to format and certify, format without certify, and erase. You can run each option more than once if you need to.

After unmounting the file systems you should do an exportvg <your_vg>; otherwise you get complaints when rebooting the system (although, given that you are selling them, this is probably not your concern any more).
After removing the VG from the system you can wipe the disks using dd as you described.
(It is a common misconception that you have to wipe more than once. Just do it correctly and once is enough.)
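For reference, the dd invocation would be something like dd if=/dev/zero of=/dev/hdisk1 bs=1024k, run once per disk (hdisk1 is just an example name; double-check which disks you are targeting). If you would rather have progress output, the overwrite dd performs can also be expressed as a small C program; this is only a sketch of that idea, not AIX-specific code:

/* Sketch of the same overwrite dd performs: write zeros over the whole
 * disk device until the end is reached. The device name comes from the
 * command line; triple-check it, as this destroys everything on it. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/hdiskN\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }

    static char zeros[1024 * 1024];              /* 1 MiB of zero bytes */
    long long total = 0;
    ssize_t n;
    while ((n = write(fd, zeros, sizeof zeros)) > 0) {
        total += n;
        if ((total & ((1LL << 30) - 1)) == 0)    /* progress line every GiB */
            fprintf(stderr, "%lld GiB written\n", total >> 30);
    }
    /* write() returns 0 or fails (ENOSPC/ENXIO) once the end of the device is hit */

    fsync(fd);
    close(fd);
    fprintf(stderr, "done: %lld bytes written to %s\n", total, argv[1]);
    return 0;
}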

A way to make a file contents snapshot in Linux

What is the best way to create an "atomic" snapshot of file contents in Linux? The emphasis is not on performance, but on getting the contents as a whole.
I am thinking of using sendfile(2) (since 2.6.33) or splice(2), but neither gives any indication of operation atomicity. Both run entirely in kernel space, but at least sendfile(2) implies it's using mmap(2), and mmap gives no guarantee that writes to the same mmapped (as MAP_SHARED) region in other processes won't be visible even with MAP_PRIVATE (they probably will be, because they are the same pages).
Given that these functions are written with performance in mind and sendfile(2) is optimized to be used with DMA, I can only assume that they just copy memory in some background kernel thread, and it's quite possible that other operations may also affect the data being copied.
So the only possible solution I see is to place a read lease with fcntl(2) (F_SETLEASE) and copy the file as normal, but if someone opens it for writing, either try to "rush" it (very reliable, I know) and beat the timer, or just give up and try again later. Is that correct?
So the only possible [filesystem-independent] solution I see is to place a read lease with fcntl(2) (F_SETLEASE) and copy the file as normal, but if someone opens it for writing, either try to "rush" it (very reliable, I know) and beat the timer, or just give up and try again later. Is that correct?
Almost; there is also fanotify. Plus, as mentioned in a comment, there are some filesystem-specific options, and some possibilities only available in certain configurations.
The lease break timer is configurable, /proc/sys/fs/lease_break_time in seconds, and the default is 45 seconds.
"Just give up and try later" is also a bit defeatist; you do have ways to monitor when the snapshot might work. Consider placing an inotify IN_CLOSE_WRITE and IN_CLOSE_NOWRITE watch on the file, and try the snapshot whenever you receive such an event.
fanotify:
For a few years now, I've been monitoring the progress of Linux fanotify, in the hopes that it would grow enough features that it could be used for automagic file versioning. Essentially, whenever someone opens the file with write permissions, the current file would be snapshotted to temporary storage and marked with some metadata (timestamp, real human user (backtracked through sudo/su), and so on). When that descriptor is closed, another snapshot is taken, and a helper thread/process diffs the two, annotating the changes (or even pushing them to git).
It is limited to local filesystems, but with 2.6.37 and later kernels (including 3.x), the interface is sufficient for specific files, or an entire mount. In your case, the fanotify interface allows similar features to file leases, except for local filesystems only, but you can simply deny any accesses during the snapshot. (One can argue whether that is a good idea at all, especially if the file to be snapshotted is a system or configuration file; many programmers overlook error checking, because "some files just have to be always accessible, or your system is broken".)
As far as my change monitoring goes, fanotify should now have all the features needed, but only if an entire mount is monitored. I was hoping to monitor configuration files on multi-admin clusters, but those files reside on the same mount as all the system libraries and binaries do, so the monitoring causes considerable overhead. So much so that it seems more appropriate to just modify the SSH configuration, console configuration (getty etc.), sudo configuration, and possibly su, to always include a dynamic library that interposes file access syscalls and basically does the versioning on behalf of the user. This way service binaries are not affected; only user actions are monitored.
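To make the mount-wide monitoring concrete, here is a stripped-down sketch (mine, not part of the original write-up; the /etc path is only an example, there is no versioning logic, and it needs CAP_SYS_ADMIN plus a 2.6.37+ kernel). It marks a whole mount and prints every writable close, which is the event the snapshot/diff step would hook onto:

#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <sys/fanotify.h>
#include <unistd.h>

int main(void)
{
    /* FAN_CLASS_NOTIF: plain notifications, no permission decisions. */
    int fan = fanotify_init(FAN_CLASS_NOTIF, O_RDONLY);
    if (fan < 0) { perror("fanotify_init (needs CAP_SYS_ADMIN)"); return 1; }

    /* Watch the whole mount containing /etc for writable closes. */
    if (fanotify_mark(fan, FAN_MARK_ADD | FAN_MARK_MOUNT,
                      FAN_CLOSE_WRITE, AT_FDCWD, "/etc") < 0) {
        perror("fanotify_mark");
        return 1;
    }

    char buf[4096] __attribute__((aligned(8)));
    for (;;) {
        ssize_t len = read(fan, buf, sizeof buf);
        if (len <= 0) break;

        struct fanotify_event_metadata *md = (struct fanotify_event_metadata *)buf;
        while (FAN_EVENT_OK(md, len)) {
            if (md->fd >= 0) {
                char link[64], path[PATH_MAX];
                snprintf(link, sizeof link, "/proc/self/fd/%d", md->fd);
                ssize_t n = readlink(link, path, sizeof path - 1);
                if (n > 0) { path[n] = '\0'; printf("written and closed: %s\n", path); }
                close(md->fd);          /* always close the per-event fd */
            }
            md = FAN_EVENT_NEXT(md, len);
        }
    }
    return 0;
}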
This might work under some circumstances:
1. (Optional) Do something to prevent new processes from opening the file:
   a) rename the file, or
   b) restrict its permissions.
2. Find all existing readers/writers via lsof and kill -STOP them.
3. Take your snapshot.
4. kill -CONT all readers/writers.
5. (Optional) Undo step 1.

rsync on fat32 and ntfs

A little background: I have tried to use rsync to back up my wife's home directory to an external USB drive with the command
rsync -va /home/wife /run/media/wife
but kept getting error messages that mkstemp failed and that rsync failed to set times, because of a read-only filesystem. Worse, it seems that rsync is unable to tell when files don't need syncing, and winds up copying a lot of stuff it doesn't need to, resulting in ridiculously slow backup times.
So I tried using rsync -rtvO instead, based on this guy's advice. Okay, no more warnings, but the backups still seem too slow, especially on big media files that already exist -- i.e. it's still copying stuff unnecessarily.
Is my analysis correct?
Is there a workaround?
Will the problem be fixed if I use an NTFS drive for her backups?
I could of course use a Linux filesystem, but on rare occasions she would like to be able to take the drive to work and access it from the Windows machines there.
Try using --modify-window=1
In particular, when transferring to or from an MS Windows FAT filesystem (which represents times with a 2-second resolution), --modify-window=1 is useful (allowing times to differ by up to 1 second).
https://download.samba.org/pub/rsync/rsync.html
You could also try using --size-only
skip files that match in size
For rsync to FAT, this is what I use and it seems to work pretty well:
rsync -rtv --modify-window=1 source/ destination/
Source: https://serverfault.com/a/144475/58568

Prevent unauthorised write access to a part of filesystem or partition

Hello all, I have some very important system files which I want to protect from accidental deletion, even by the root user. I can create a new partition for them and mount it read-only, but the problem is that I want my application, which handles those system files, to have write access to that area and be able to modify them. Is that possible using the VFS? Since the VFS handles access to files, I could have a module inserted in the VFS layer which sees that there is a write access to that area, checks the authorization, and allows or rejects it.
If not, please give me suggestions on how such a system could be implemented and what I would need in that case.
If a system like this already exists, please point me to it as well.
I am using Linux and want to implement this in C; I think it would only be possible in C.
Edit: There are programs on Windows which can restrict access to some important folders even from the administrator; would that be possible on Linux?
My application is a system backup and restore program which needs to keep its backup information safe and secure. So I would like to have a secured part of a partition which could not be accidentally deleted in any way. There are methods of locking a flash drive; could some of those methods be used for locking a partition in Linux as well, so that mounting it is password protected? I am not writing a virus: my application will give the user the option to delete the backups, but I don't want to allow them to be deleted by any other application.
Edit: I am writing a system restore and backup program for Ubuntu; I am a computer engineering student.
Edit: Basile Starynkevitch's opinion is that I would be committing the worst sin of programming if I did anything like this, but please give me suggestions treating this as an experimental project; I could make some changes in the VFS layer so that this could work.
You could use chattr, e.g.
chattr +i yourfile
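Since you say you want to implement this from your own C program: chattr +i simply sets the immutable flag on the inode, and a program can toggle that flag itself via ioctl. A minimal sketch (it needs CAP_LINUX_IMMUTABLE, i.e. root, and a filesystem that supports these attributes, such as ext2/3/4, xfs or btrfs):

/* Set the same immutable flag that chattr +i sets, via ioctl. */
#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    int attr;
    if (ioctl(fd, FS_IOC_GETFLAGS, &attr) < 0) { perror("FS_IOC_GETFLAGS"); return 1; }
    attr |= FS_IMMUTABLE_FL;            /* clear it again with attr &= ~FS_IMMUTABLE_FL */
    if (ioctl(fd, FS_IOC_SETFLAGS, &attr) < 0) { perror("FS_IOC_SETFLAGS"); return 1; }

    close(fd);
    return 0;
}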
But I don't think it is a good thing to do. People using root access are expected to be careful, and anyone with root access can simply undo the above (chattr -i).
There is no way to forbid people having root access, or people having physical access to the computer, from accessing, removing, or changing your files if they really want to (they could update and hack the kernel, for instance). Read more about the trusted computing base.
And I believe it is even unethical (and perhaps illegal, in some countries) to want to do that. I own my PC, and I don't understand why you should be able to keep me from changing some data on it just because I happened to install some software.
By definition, root on Linux can do anything... You won't be able to prohibit root from erasing or altering data... People with root access can write arbitrary bytes at arbitrary places on the disk.
And on a machine that I own (or perhaps just have physical access to), I will, thank God, always be able to remove a file (even under Windows: I could for example boot a Linux CD-ROM, remove the file from Linux by accessing the NTFS volume, and then reboot into Windows...).
So I think you should not spend even a minute trying to make it harder for root to alter your precious files. Leave them like other root-owned files...
PHILOSOPHICAL RANT
The Unix philosophy has always been to trust the system administrator (while protecting newbie users from mistakes), that is, the root user. Root is able to do anything (this is why people avoid running as root, even on a personal machine). There have never been strong features to prevent root from making mistakes, because the system administrator is expected to know the system well and is trusted.
And Unix sysadmins understand this fact: it is part of their culture. (This is probably in contrast with Windows administration culture.) They know when to be careful; they don't expect software to prevent mistakes made as root.
In order to use root squashing (which maps root to an unprivileged user on the NFS mount, so root no longer bypasses file permissions there) you can set up a local NFS export. This forum page explains how to mount an NFS share locally. The command is:
mount -t nfs nameofcomputer:/directory_on_that_machine /directory_you_should_have_already_created
nfs has root squashing enabled by default, which should solve your problem. From there, you just make sure your program stores its files on the nfs mount.
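For completeness, the matching server-side export would be a line like the following in /etc/exports (the path is only an example; root_squash is the default but is spelled out here), followed by exportfs -ra before mounting as shown above:
/srv/backups    localhost(rw,sync,root_squash)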
Sounds to me like you're trying to write a virus.
No doubt you will disagree.
But I'm willing to bet the poor people that install your software will feel like it's a virus, because it will be behaving like one by making itself hard to remove.
Simply setting r/w flags should suffice for anything else.

Garbage data in file after sudden power loss

I am using a flash device with a FAT32 filesystem. I am continuously writing data to a file using the file system APIs from the RTOS (SMX). However, after a sudden power-off, on reboot the file contains garbage values just above the first file entry.
I ran the chkdsk utility, but it doesn't fix the problem.
Any idea how I can get rid of these garbage entries even with unclean power-offs?
If you expect sudden power loss, you'll need to disable all caching/buffering on file writes. Of course you'll also need to deal with partially-written files, but that should at least prevent trailing garbage.
I don't know the API you're using, but this might be done by mounting the drive "synchronously" (e.g., mount -o sync in Linux) or by opening individual files with specific options. Even if you disable buffering on individual file writes, however, you may still run the risk of corrupting the FAT and losing all the files.
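In POSIX terms the per-file option looks roughly like this (a sketch only; O_SYNC, fsync and the helper below are Linux-style equivalents, not SMX's API, and the file name is hypothetical). The log is opened for synchronous appends so each record reaches the medium before the write returns, and a power cut loses at most the record in flight:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Append one record synchronously; returns 0 on success, -1 on error. */
static int append_record(const char *path, const void *rec, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND | O_SYNC, 0644);
    if (fd < 0) return -1;

    ssize_t n = write(fd, rec, len);    /* O_SYNC: data is flushed before returning */
    if (n == (ssize_t)len)
        fsync(fd);                      /* belt and braces for remaining metadata */

    close(fd);
    return n == (ssize_t)len ? 0 : -1;
}

int main(void)
{
    const char *msg = "sensor=42\n";
    if (append_record("log.txt", msg, strlen(msg)) != 0)
        perror("append_record");
    return 0;
}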

Read data from damaged media

Is it possible to read damaged media (cd, hdd, dvd,...) even if windows explorer bombs out?
What I mean to ask is whether there is a set of APIs or something that can access the disk at a very low level (below Explorer?) and read whatever can be retrieved, even if it is only partial, especially if you can still see that the file is there in Explorer but can't do anything with it because it is damaged somehow (a scratch on the CD, etc.)?
The main problem with Windows Explorer is that it doesn't support resuming copying after a read error. Most superficially scratched CDs, for example, will fail on different areas of the disk every time you eject and reinsert them.
Therefore, with a utility that supports resuming copy operations, it is possible to read the entire contents of a damaged CD by doing "eject/reload/resume" a few times.
In fact, this is what a utility I wrote does, and I've never needed anything fancier to read scratched disks. (It simply uses ReadFile and WriteFile.)
One step lower would be opening the raw partition (i.e. the disk image) by passing a string such as "\\.\F:" to CreateFile. It would allow you to read raw sectors from a drive, but reconstructing files from that data would be hard.
In fact, the "\\.\" prefix allows you to open devices in the "\GLOBAL??" branch of the Windows Object Manager namespace as if they were files. It's not unlike calling dd with /dev/x as a parameter. There is also a "\Device" branch, but that's only accessible via DeviceIoControl() (i.e. ioctl()), meaning there's no simple ReadFile()/WriteFile() interface.
Anything lower level than that would be device-specific, I guess; like reading raw CD-ROM data (including ECC bits) the way some CD-burning programs do. You'd have to do some research on the specific media (CD, flash, DVD) and what your hardware allows you to do with them.
Note: in C/C++ source you need to escape the backslashes, i.e. pass the string literal "\\\\.\\F:" to CreateFile.
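To make the raw-volume route concrete, here is a rough sketch (mine, not the answerer's utility; the drive letter, sector size and output name are examples, and it must run with administrator rights). It images the volume sector by sector, zero-filling anything unreadable instead of aborting, given a sector count you supply:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

enum { SECTOR = 2048 };                 /* CD-ROM sector size; use 512 for most disks */

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s \\\\.\\F: sector_count\n", argv[0]);
        return 1;
    }
    long long sectors = strtoll(argv[2], NULL, 10);

    HANDLE h = CreateFileA(argv[1], GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    FILE *out = fopen("image.bin", "wb");
    BYTE buf[SECTOR];
    for (long long i = 0; i < sectors; i++) {
        LARGE_INTEGER pos;
        pos.QuadPart = i * (long long)SECTOR;   /* volume reads must be sector-aligned */
        SetFilePointerEx(h, pos, NULL, FILE_BEGIN);

        DWORD got = 0;
        if (ReadFile(h, buf, SECTOR, &got, NULL) && got == SECTOR) {
            fwrite(buf, 1, SECTOR, out);
        } else {
            memset(buf, 0, SECTOR);             /* unreadable: pad with zeros */
            fwrite(buf, 1, SECTOR, out);
            fprintf(stderr, "bad sector at %lld\n", i);
        }
    }

    fclose(out);
    CloseHandle(h);
    return 0;
}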
If you want to do it, do it from the Linux side - see this open-source project: http://sourceforge.net/projects/monkeycity/
Or a ready-made app, freeware too: http://www.theabsolute.net/sware/dskinv.html
The first step is dd_rescue. After that, you're free to try anything to reconstruct the data.
And there's GNU ddrescue
GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, cdrom, etc) to another, trying to rescue the good parts first in case of read errors.
Make sure to use the 3-arg version (manual):
ddrescue [options] infile outfile [mapfile]
That is, do use a mapfile even if it's optional, because:
If you use the mapfile feature of ddrescue, the data is rescued very efficiently, (only the needed blocks are read). Also you can interrupt the rescue at any time and resume it later at the same point. The mapfile is an essential part of ddrescue's effectiveness. Use it unless you know what you are doing.
And it's also included in Cygwin and Homebrew.
I don't know what layer exists between Windows Explorer and the Win32 APIs. You can try to write a program with the Win32 File I/O stuff. If that doesn't work, then you have to write your own device driver to get any lower.
I've had some luck from the Linux side, or using BartPE (http://www.nu2.nu/pebuilder/), but just seeing the file doesn't always mean it is going to be recoverable, whether you're trying from Windows or Linux. Your best bet might be to use a trial of a recovery program.
I have had two disks start to disintegrate on me. From the pattern of unreadable sectors I think they had internal flaking of their magnetic coating. WinXP Explorer just threw up its hands and said the drive didn't even exist.
In both cases I used "GetDataBack for NTFS" from Runtime Software (http://www.runtime.org/). You can download a free trial which will show you what you could get back if you paid for it. When I bought it it was $49, but I see it is now $79.
This program is amazing. It's not necessarily fast as it will reread some sectors over and over, trying to get a consensus value from multiple tries, but when it's done you can get back stuff that you thought was gone forever. I had one drive that it took over 10 hours to analyze, but when it was done I got back over 97% of a 500GB drive. Definitely worth the price.
Another great tool is Beyond Compare. I have rev 2.5.3, but it is currently at 3.?? and costs $30. They have a full-functionality, 30-day trial. It does a great job of copying large quantities of files (and only those that need to be copied) and, unlike Explorer, it doesn't blow up if something fails. It's sort of like a visual rsync for Windows, if you're familiar with that program from the Samba people.
I have no connection with either of the companies mentioned other than being a very satisfied customer.
The gold standard for recovering data from a magnetic storage device would have to be SpinRite. It's a commercial app though, so you probably wouldn't learn much from it.
If you have a Linux machine around, I can recommend dvdisaster. It was originally meant for creating error-correction files, but it also reads DVDs into an image and ignores read errors; and you can use different drives one after another to get missing sectors filled in the image.
