How to create an undeletable file in Delphi

[the following is a rephrase of my previous question, which was deemed ambiguous].
I'm digging into creating a basic licensing mechanism for a demo application. What I have in mind goes like this: the application creates an empty "license file" called, say, "0b1xa487x.ini" upon the first run, then expires 30 days after that first run and can't be run anymore as long as that specific file is present on the system.
What I'm looking for is a method to protect that specific file in a way that deters deletion. Since it will be a blank file, devoid of any content, I wouldn't mind it being corrupt, having corrupt headers, an invalid date, whatever it takes to stay undeletable.
I've seen a similar approach somewhere based on file attributes (the file had the HX attributes set); however, the attribute approach led me nowhere, as I can't find any documentation for the existence of a file attribute X.
I also know that there are other approaches, including rootkit drivers and system services launched as the SYSTEM user, but this particular one seems to fit this scenario best. Again, I stress that the file's contents may as well be inaccessible; I'm not planning to use the approach to run any kind of malware from the file, as I've been accused below :)

Corrupt suggests not conforming to some standard. There are no standards for blank files.

Thanks everybody for your suggestions. I found a way to render my file inaccessible, namely by using a fortunate combination of file permissions. The downside is that these don't work on non-NTFS partitions. The good thing is that I can always clean up after my application by simply removing these permissions programmatically and deleting everything afterwards.
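For the curious, the idea can be sketched in C by shelling out to icacls (illustrative only: the file name is the one from the question, a user with admin rights can still remove the deny entry, and the equivalent ACE could also be set programmatically via the Windows security APIs):
/* Deny the DELETE right on the license file for Everyone, then lift the
   deny entry again during cleanup. Works on NTFS only, as noted above. */
#include <stdlib.h>

void protect_license_file(void)
{
    /* (DE) is the delete permission; deny entries override allows */
    system("icacls 0b1xa487x.ini /deny Everyone:(DE)");
}

void cleanup_license_file(void)
{
    system("icacls 0b1xa487x.ini /remove:d Everyone"); /* drop deny ACEs */
    system("del 0b1xa487x.ini");                       /* now deletable */
}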

Regarding your last answer to Henk, I believe it is easier to create a service, start it automatically with the OS, and open the file with fmShareExclusive using a TFileStream.
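A minimal sketch of that exclusive open against the raw Win32 API, for illustration (fmShareExclusive corresponds to a share mode of 0 in CreateFile; the file name is taken from the question):
/* Hold the file open with no sharing so other processes can neither
   open nor delete it; the protection lasts only while the handle
   (and thus the service) is alive. */
#include <windows.h>

HANDLE lock_license_file(void)
{
    return CreateFileA(
        "0b1xa487x.ini",
        GENERIC_READ | GENERIC_WRITE,
        0,                     /* dwShareMode = 0: exclusive access */
        NULL,
        OPEN_ALWAYS,           /* create on first run, reopen afterwards */
        FILE_ATTRIBUTE_NORMAL,
        NULL);                 /* INVALID_HANDLE_VALUE on failure */
}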
But you cannot force the OS kernel, or an antivirus, to treat your file as 'undeletable'.
Best regards,
Radu

Related

Is there any way to get exactly which part of a file has been changed on Linux

I want to build file sync software. Is there any way to get exact file changes (or at least the size of the changes) with kernel facilities like inotify or others?
EDIT:
I'm interested in the following scenario with inotify:
When getting an IN_MODIFY event on a file, I want to retrieve in some way the changed lines of the file (some kind of file diff format). Are there any Linux kernel tools to achieve this?
Even if there were such a kernel feature, it would not work in practice. You see, most editors modify files by creating a copy, then renaming it over the original one. This way the user is assured of getting either the old contents or the new contents, never a mix of the two.
The only real option is to take snapshots of the file (e.g. when the file is closed after having been open for writing, or when it is replaced with a new one) and compare the snapshots to find which parts changed.
Comparing two versions of a file to see which part of it was changed is itself a difficult question, as it definitely depends on the file format. For source code, unified diffs work well, but for other types (including plain text files that are not line-oriented), it's not that simple.
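To make the snapshot idea concrete, here is a minimal sketch in C (assuming Linux's inotify API, which the next answer links; the watched path, the snapshot location and the use of diff(1) are all illustrative, and error handling is omitted):
/* Watch a file and snapshot it whenever a writer closes it; consecutive
   snapshots are then compared with diff. Editors that replace the file
   by renaming a copy over it need a watch on the directory instead
   (IN_MOVED_TO), for the reason given above. */
#include <stdlib.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = inotify_init();
    /* IN_CLOSE_WRITE fires when a file open for writing is closed;
       a safer snapshot point than IN_MODIFY, which fires mid-write */
    inotify_add_watch(fd, "/tmp/watched.txt", IN_CLOSE_WRITE);
    for (;;) {
        if (read(fd, buf, sizeof buf) <= 0)   /* blocks until an event */
            break;
        system("cp /tmp/watched.txt /tmp/snap.new && "
               "diff -u /tmp/snap.old /tmp/snap.new; "
               "mv /tmp/snap.new /tmp/snap.old");
    }
    return 0;
}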
Could you please refine your question? The inotify API on Linux does monitor such changes, along with similar ones, such as whether a file was opened, whether a file inside a directory (or the directory itself) was moved, file deletions, etc.
For more, see the inotify(7) man page: http://man7.org/linux/man-pages/man7/inotify.7.html
EDIT:
I believe I misread the question the first time around. If I did, then yes, such programs exist, and the inotify API is the primary one within the Linux kernel. See the above link for a comprehensive guide to the different functions it provides.

Trying to find information on how to build a simple file version control system

I want to build a file system for non-techies (they don't care about old versions of the file, so no merging or svn/git). The thought is that a user should be able to download a file, and in the same instant the file should be locked for other users. When the first user is done editing, the file should automatically upload to the server. When he closes the file, the lock should then be released.
Is this even possible? I'm thinking of a sort of browser plugin, but I can't find anyone who has done the same thing. (Besides Microsoft, but who wants to go down that road?)
That would be: SharePoint, Alfresco, (almost every wiki), ...
Actually, that is a basic feature of most document management systems. Even SVN has that already, and IIRC you can set it up with mod_dav_svn without a line of code (considering configuration is not code).
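For illustration, the whole lock workflow in stock SVN looks roughly like this (the file name is illustrative; svn:needs-lock makes working copies read-only until a lock is taken, and a commit releases the lock by default):
svn propset svn:needs-lock yes report.doc
svn lock -m "editing the report" report.doc
(edit the file, then:)
svn commit -m "updated report" report.doc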
Also, the interesting question, IMHO, is not TheHappyCase where the described unit of work goes well, but what about this*:
1. I check out 50 random documents you need
2. (get some popcorn and wait for your stress level to go up)
3. ?????
4. I get bored and forget about it (everything still being checked out)
*: Points (1) and (2) may change order

Free server side anti virus / security / trojan protection for file uploads?

I am allowing users to upload photos into photo albums, and also to attach files (documents for now) to mail as attachments. So I assume I need some anti-virus/security tool in place to scan the files first, in case people upload infected stuff. So, two questions:
1) Are there any free or open-source tools for this that I can use or integrate into my environment (CodeIgniter/PHP)?
2) How do I secure the upload area from the rest of the system? Say the virus scanner fails to catch a virus and it is uploaded; how do I prevent it from infecting other files? Can the upload area be permanently sandboxed somehow, with that file path used for users to access the content, so that it does not spread to other parts of the system?
There is ClamAV for a free virus scanner. Install it and you can do something like:
function virus_detected($filename)
{
    $clamscan = "/usr/local/bin/clamscan";
    // -i reports infected files only; --no-summary drops the stats block;
    // escapeshellarg() keeps a hostile file name from injecting shell code
    $result = exec($clamscan . " -i --no-summary " . escapeshellarg($filename));
    return strlen($result) ? true : false;
}
As for security, make sure the temporary files are uploaded to a directory outside of your web root. You should then verify the file type, rename the file to something other than its original file name, and append the appropriate extension (gif, jpg, bmp, png). I believe this should keep you fairly safe, aside from exploits in PHP itself.
For more information about verifying file types in php check out:
http://www.php.net/manual/en/function.finfo-file.php
I know this topic hasn't been active for three years now, but in case anyone else in the future is similarly looking for a PHP-based anti-virus solution: for those without an anti-virus daemon, program or utility installed on their host machine, and without the ability to install one, phpMussel, a PHP script that I've written based on ClamAV, fits the bill for what Rohit (the original poster) was looking for (a PHP-based anti-virus to protect their CMS against malicious file uploads) and may be a viable solution. It certainly isn't perfect and I can't guarantee that it'll catch everything, but it's certainly better than using nothing at all.
Ideally, as Matt already suggested above, making a call to the shell to have clamscan scan the file uploads is definitely the ideal solution, and if this is something that a hostmaster, webmaster or anyone in Rohit's situation is able to do, I'd second that suggestion wholly. What I've written, because it is a PHP script, has the limitations inherent to anything that relies wholly on PHP in order to function. But in instances where that suggestion (or similar ones) isn't a possibility, such as when the host machine doesn't have an anti-virus installed and shell access is disabled (common with cheaper shared hosting solutions), what I'm suggesting here could step in: something that requires only PHP to be installed (with the PCRE extension included, which is standard with PHP nowadays anyhow), and nothing more.
Also remember, as Matt has already suggested, to always upload outside of your root directory, to ensure that uploaded files can't be exploited by attackers (such as an attacker attempting to compromise your system by uploading backdoors or trojans). Viruses are not the only threat you need to worry about, and the vast majority of anti-virus solutions nowadays do not focus solely on viruses. Matt is also entirely correct in pointing out that no anti-virus solution is perfect, and for that reason anyone allowing file uploads to their website or server needs to remain vigilant: an anti-virus solution is a must-have for anyone in that situation, but no holy grail of internet security exists that covers every possible threat. Also, renaming files isn't only about ensuring that they can't execute (as might be inferred from the original poster's reply comment regarding EXEs); renaming also reduces the risk of directory traversal attacks, and of an attacker overwriting an existing file on a targeted system as a means to hide their dirty work.
Regarding the threat of malicious files slipping past an anti-virus solution and then potentially infecting the system they're uploaded to: what a hostmaster or webmaster could do in this situation is employ some quick and simple encoding process that renders the file non-executable by the system itself, but which can be easily and readily reversed by the PHP script responsible for serving that file on request, such as base64_encode(), bin2hex(), or even just rotating a few characters and adding a salt to displace the file's magic number.

Are intermediate files bad practice?

I was recently downvoted (which only bugged me a little :) ) for an answer I gave to this question. The person offered no explanation for the downvote, which started me thinking: "Why would you avoid producing intermediate files?" Especially in a language like Python, where file I/O is laughably easy.
There seemed to be consensus that it was a bad idea, but I know for a fact that intermediate files are used regularly in practice. I worked for a very well respected research firm (let's just say S.O. wouldn't exist without this firm) where it was assumed that your programs would produce files as output. We did this because if your program indeed deserved to be a standalone program then it would need debuggable output and some way of passing its output between processes that could later be examined in case we discovered an error in our output further downstream.
Is it considered bad practice (in cases like the question linked above) to use intermediate files? Why?
One problem with intermediate files happens when multi-threading.
If clients C1 and C2 are handled simultaneously by server process S (which may or may not have forked into separate processes, used threads, or whatever concurrency mechanism), you may get weird issues when both try to create the same intermediate file.
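The usual fix is to let the OS pick a unique name per client, e.g. with mkstemp() (a C sketch; the path prefix is illustrative):
/* Each client gets its own intermediate file: mkstemp() atomically
   creates and opens a uniquely named file, so two concurrent requests
   can never collide on the same path. */
#include <stdlib.h>

int open_intermediate_file(void)
{
    char path[] = "/tmp/server-XXXXXX"; /* mkstemp fills in the XXXXXX */
    return mkstemp(path);               /* open fd, or -1 on failure */
}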
I believe one of the Unix philosophies is that all programs should act as filters; however, this doesn't necessarily mean creating files on disk, and using intermediate files leads to unwieldy behaviour in my opinion. One should also treat the disk as a last resort, using it only for storing/retrieving data that should be available after powering off the computer, and maybe even take care to allow programs to run on read-only media.
Well, there are some issues when you use files; in particular, there may be many unexpected failures while accessing or creating them. The following are all issues that I have personally experienced:
1) The file location is on a remote machine and the network is down (NFS mount).
2) There is not enough free space while creating the file.
3) Partway through the process, the user presses Ctrl-C to cancel it, and the file is not deleted.
4) The file is on an NFS mount and the network is slow.
5) The folder the file was created in was a soft link, and the original target was deleted.
But still, we have to use files, because there are hardly any other options when working in bash. In C/C++, though, I think disk access should be considered a last resort. A program producing files as output is OK if that is the only way to communicate with the user, but at least for intermediate storage, the use of disk files should be minimized.
If you create temporary files properly (setting the platform-specific 'temporary' flag, meaning: do not flush the cache to disk when there is no urgent need), they are perfectly fine when the task requires them.
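On Windows, for instance, that flag looks like this (a C sketch; the POSIX analogues would be tmpfile() or O_TMPFILE):
/* FILE_ATTRIBUTE_TEMPORARY hints that the cache manager should avoid
   writing the data to disk unless memory pressure forces it, and
   FILE_FLAG_DELETE_ON_CLOSE removes the file when the last handle goes. */
#include <windows.h>

HANDLE create_scratch_file(void)
{
    return CreateFileA("scratch.tmp",
                       GENERIC_READ | GENERIC_WRITE,
                       0, NULL, CREATE_ALWAYS,
                       FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                       NULL);
}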
There are almost no things in IT that you can't use when you have a good reason to. :-)

Read data from damaged media

Is it possible to read damaged media (cd, hdd, dvd,...) even if windows explorer bombs out?
What I mean to ask is whether there is a set of APIs or something that can access the disk at a very low level (below Explorer?) and read whatever can be retrieved, even if only partially, especially if you can still see from Explorer that the file is there but can't do anything with it because it is damaged somehow (a scratch on a CD, etc.)?
The main problem with Windows Explorer is that it doesn't support resuming copying after a read error. Most superficially scratched CDs, for example, will fail on different areas of the disk every time you eject and reinsert them.
Therefore, with a utility that supports resuming copy operations, it is possible to read the entire contents of a damaged CD by doing "eject/reload/resume" a few times.
In fact, this is what a utility I wrote does, and I've never needed anything fancier to read scratched disks. (It simply uses ReadFile and WriteFile.)
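The core of such a utility is small; a sketch of the resume logic in C (plain stdio here for brevity, where the utility described above uses ReadFile/WriteFile):
/* Resume a copy by appending to the (partial) destination: seek the
   source past the bytes already copied and continue until EOF. On a
   read error, eject/reload the disc and simply run it again. */
#include <stdio.h>

int resume_copy(const char *src, const char *dst)
{
    FILE *in = fopen(src, "rb");
    FILE *out = fopen(dst, "ab");      /* append keeps the partial data */
    if (!in || !out)
        return -1;
    fseek(out, 0, SEEK_END);
    fseek(in, ftell(out), SEEK_SET);   /* skip what we already have */

    char buf[64 * 1024];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);

    int complete = feof(in);           /* 0 means a read error: retry */
    fclose(in);
    fclose(out);
    return complete ? 0 : -1;
}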
One step lower would be opening the raw partition (i.e. the disk image) by passing a string such as "\\.\F:" to CreateFile. That would allow you to read raw sectors from the drive, but reconstructing files from that data would be hard.
In fact, the "\\.\" syntax allows you to open devices in the "\GLOBAL??" branch of the Windows Object Manager namespace as if they were files. It's not unlike calling dd with /dev/x as a parameter. There is also a "\Device" branch, but that's only accessible via DeviceIoControl() (i.e. ioctl()), meaning there's no simple ReadFile()/WriteFile() interface.
Anything lower level than that would be device-specific, I guess; like reading raw CD-ROM data (including ECC bits) the way some CD-burning programs do. You'd have to do some research on the specific media (CD, flash, DVD) and what your hardware allows you to do on them.
Note: in C source you need to escape the backslashes as well, i.e. pass the string literal "\\\\.\\F:" to CreateFile.
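A sketch of that raw read in C (the drive letter is illustrative; the call needs administrator rights, and reads on a volume handle must be sector-aligned, 512 bytes being assumed here):
/* Open volume F: for raw access and read its first sector. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\F:", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;
    char sector[512];
    DWORD got = 0;
    if (ReadFile(h, sector, sizeof sector, &got, NULL))
        printf("read %lu raw bytes\n", (unsigned long)got);
    CloseHandle(h);
    return 0;
}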
If you want to do it, do it from the Linux side; see this open-source project: http://sourceforge.net/projects/monkeycity/
Or a ready-made (and freeware) app: http://www.theabsolute.net/sware/dskinv.html
The first step is dd_rescue. After that, you're free to try anything to reconstruct the data.
And there's GNU ddrescue
GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, cdrom, etc) to another, trying to rescue the good parts first in case of read errors.
Make sure to use the 3-arg version (manual):
ddrescue [options] infile outfile [mapfile]
That is, do use a mapfile even if it's optional, because:
If you use the mapfile feature of ddrescue, the data is rescued very efficiently, (only the needed blocks are read). Also you can interrupt the rescue at any time and resume it later at the same point. The mapfile is an essential part of ddrescue's effectiveness. Use it unless you know what you are doing.
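A concrete invocation might look like this (assuming the damaged disc is /dev/sr0; -b2048 matches the CD sector size and -r3 retries bad sectors three times):
ddrescue -b2048 -r3 /dev/sr0 rescued.img rescue.map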
And it's also included in Cygwin and Homebrew.
I don't know what layer exists between Windows Explorer and the Win32 APIs. You can try to write a program with the Win32 File I/O stuff. If that doesn't work, then you have to write your own device driver to get any lower.
I've had some luck from the Linux side, or using BartPE (http://www.nu2.nu/pebuilder/), but just seeing the file doesn't always mean the file is going to be recoverable, whether you're trying from Windows or Linux. Your best bet might be to use a trial of a recovery program.
I have had two disks start to disintegrate on me. From the pattern of unreadable sectors, I think they had internal flaking of their magnetic coating. Windows XP Explorer just threw up its hands and said the drive didn't even exist.
In both cases I used "GetDataBack for NTFS" from Runtime Software (http://www.runtime.org/). You can download a free trial which will show you what you could get back if you paid for it. When I bought it it was $49, but I see it is now $79.
This program is amazing. It's not necessarily fast as it will reread some sectors over and over, trying to get a consensus value from multiple tries, but when it's done you can get back stuff that you thought was gone forever. I had one drive that it took over 10 hours to analyze, but when it was done I got back over 97% of a 500GB drive. Definitely worth the price.
Another great tool is Beyond Compare. I have rev 2.5.3, but it is currently at 3.?? and costs $30. They have a full-functionality, 30-day trial. It does a great job of copying large quantities of files (and only those that need to be copied) and, unlike Explorer, it doesn't blow up if something fails. It's sort of like a visual rsync for Windows, if you're familiar with that program from the Samba people.
I have no connection with either of the companies mentioned, other than being a very satisfied customer.
The gold standard for recovering data from a magnetic storage device would have to be SpinRite. It's a commercial app though, so you probably wouldn't learn much from it.
If you have a Linux machine around, I can recommend dvdisaster. It is originally meant for creating error-correction files, but it also reads DVDs into an image and ignores read errors; and you can use different drives one after another to get missing sectors filled in the image.
