Where to store a variable when needed in the next run? - c

I'm changing a program written in C.
For these changes I need a counter (an int variable). When the program stops, I need the value of this counter in the next run of the program (even if the PC is restarted in between).
What is the best way to store this value? I was thinking about the following: storing it as a registry value, writing it to a file (not preferred, as somebody might delete this file), or using persistent variables (but I can't find much information on these).
Or, are there other ways to keep this variable?
The program has to run in a Windows environment and in a Linux environment (as it does now).

Store it in a file. If you want to protect the file from accidental deletion, have its name start with a period on Linux (.myfile) or mark it as "hidden" on Windows. If you want to protect it against more than just accidental deletion, the registry is no better than a file.
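For illustration, a minimal cross-platform sketch of that naming/hiding scheme (the file names counter.dat and .counter are just examples):

#ifdef _WIN32
#include <windows.h>
#endif

/* Pick a platform-appropriate "hidden" name for the counter file,
   and on Windows set the hidden attribute after creating the file. */
static const char *counter_path(void)
{
#ifdef _WIN32
    return "counter.dat";
#else
    return ".counter";      /* the leading dot hides it on Linux */
#endif
}

static void hide_counter_file(const char *path)
{
#ifdef _WIN32
    SetFileAttributesA(path, FILE_ATTRIBUTE_HIDDEN);
#else
    (void)path;             /* nothing to do: the dot-name suffices */
#endif
}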

The best solution, I think, would be to store it in a database. Have you got any database experience? Could you store it in MySQL or SQL Server?

C doesn't have a concept of "persistent variables", and no actual programming language that I know of has one.
A file would be the best choice; detecting its absence and protesting/failing will be trivial.
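A minimal sketch of such a file-backed counter, treating a missing or unreadable file as a first run (that recovery policy is an assumption):

#include <stdio.h>

/* Load the counter, defaulting to 0 on the very first run. */
int load_counter(const char *path)
{
    int value = 0;
    FILE *f = fopen(path, "r");
    if (f != NULL) {
        if (fscanf(f, "%d", &value) != 1)
            value = 0;     /* corrupt contents: start over */
        fclose(f);
    }
    return value;
}

/* Save the counter for the next run; returns 0 on success. */
int save_counter(const char *path, int value)
{
    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;
    fprintf(f, "%d\n", value);
    return fclose(f) == EOF ? -1 : 0;
}

This works unchanged on both Windows and Linux, which matches the portability requirement in the question.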

FileSystemWatcher handling moving file - another solution

Hi
I was trying to use FileSystemWatcher to detect whether some files or directories have been moved to another location. The problem was that I had to use the onCreated and onDeleted events to handle this, and there are several issues with that solution:
How could I detect the change if I select more than one file and press Ctrl+C, Ctrl+V, or right-click and choose Copy and then Paste in the same directory?
How could I detect it if I select more than one directory?
And lastly, what if the move is only simulated? A file could be deleted and recreated with the same name in a different place.
I know I could use timers, process-locking detection, or checking which process has the file open (if it's explorer.exe, it could be a move), but that solution is not perfect and it's very inefficient. I was thinking about how to solve this issue, and I have decided to implement it in a low-level language. Is this possible to do using C or assembler? I know that everything is possible in assembler, so is it possible to implement this in asm? I would like to create my own FileSystemWatcher in assembler or C, but where should I look for information on how to do this?
File movement within the same filesystem can be detected easily using a filesystem filter driver, as the filesystem receives the corresponding rename request from the OS. Other scenarios, such as moving to another disk or moving via a copy/delete sequence, are hardly traceable even with a filter driver, because you would need to match the file which has been created/written to with the file which is being deleted (possibly on another disk).
If you plan to write some security mechanism (like a DRM), then I should point out that the data can be altered during copying (e.g. encrypted or compressed), which makes your task even harder.
Still, you can look at filesystem filter drivers: should you decide to go on with detection of filesystem events, such a driver is a much more reliable and powerful mechanism than FileSystemWatcher.
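As a lower-level starting point short of writing a filter driver, here is a minimal C sketch using the Win32 ReadDirectoryChangesW API (roughly what FileSystemWatcher wraps); the directory C:\watched is a hypothetical example. Within one volume a move shows up as a FILE_ACTION_RENAMED_OLD_NAME / FILE_ACTION_RENAMED_NEW_NAME pair, while cross-volume moves still appear as separate create/delete events, so the matching problem described above remains:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* A directory handle needs FILE_LIST_DIRECTORY access and
       FILE_FLAG_BACKUP_SEMANTICS. */
    HANDLE dir = CreateFileA("C:\\watched", FILE_LIST_DIRECTORY,
                             FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                             NULL, OPEN_EXISTING,
                             FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (dir == INVALID_HANDLE_VALUE)
        return 1;

    BYTE buf[64 * 1024];
    DWORD bytes;
    while (ReadDirectoryChangesW(dir, buf, sizeof buf, TRUE,
                                 FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME,
                                 &bytes, NULL, NULL))
    {
        FILE_NOTIFY_INFORMATION *fni = (FILE_NOTIFY_INFORMATION *)buf;
        for (;;) {
            /* Within one volume, a move arrives as OLD_NAME then NEW_NAME. */
            if (fni->Action == FILE_ACTION_RENAMED_OLD_NAME)
                wprintf(L"moved from: %.*s\n",
                        (int)(fni->FileNameLength / sizeof(WCHAR)), fni->FileName);
            else if (fni->Action == FILE_ACTION_RENAMED_NEW_NAME)
                wprintf(L"moved to:   %.*s\n",
                        (int)(fni->FileNameLength / sizeof(WCHAR)), fni->FileName);
            if (fni->NextEntryOffset == 0)
                break;
            fni = (FILE_NOTIFY_INFORMATION *)((BYTE *)fni + fni->NextEntryOffset);
        }
    }
    CloseHandle(dir);
    return 0;
}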

How to create an undeletable file in Delphi

[the following is a rephrase of my previous question, which was deemed ambiguous].
I'm digging into creating a basic licensing mechanism for a demo application. What I have in mind goes like this: the application creates an empty "license file" called, say, "0b1xa487x.ini" upon the first run; the application then expires 30 days after it was first executed and can't be run anymore as long as that specific file is present on the system.
What I'm looking for is a method to protect that specific file in a way that deters deletion. Since it will be a blank file, devoid of any content, I wouldn't mind it being corrupt, having corrupt headers, an invalid date, whatever it takes to stay undeletable.
I've seen a similar approach somewhere based on file attributes (the file had the HX attributes set); however, the attribute approach led me nowhere, as I can't find anything documenting the existence of a file attribute X.
I also know that there are other approaches, including rootkit drivers and system services launched as the system user, but this particular one seems to fit this scenario best. Again, I stress that the file's contents may as well be inaccessible; I'm not planning to use the approach to run any kind of malware from the file, as I've been accused of below :)
Corrupt suggests not conforming to some standard. There are no standards for blank files.
Thanks everybody for your suggestions. I found a way to render my file inaccessible, namely by using a fortunate combination of file permissions. The downside is that these don't work on non-NTFS partitions. The good thing is that I can always clean up after my application by simply removing these permissions programmatically and deleting everything afterwards.
Regarding your last answer to Henk, I believe it is easier to create a service, start it automatically with the OS, and open the file with fmShareExclusive using a TFileStream.
But you cannot force the kernel of the OS, or an antivirus, to make your file 'undeletable'.
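For illustration, a minimal Win32 sketch (in C rather than Delphi) of that exclusive-open idea; opening the file with a share mode of 0 makes ordinary deletion fail with a sharing violation for as long as the handle stays open:

#include <windows.h>

/* Open the license file with dwShareMode = 0: while this handle is open,
   no other process can open, overwrite, or delete the file. */
HANDLE lock_license_file(const char *path)
{
    return CreateFileA(path,
                       GENERIC_READ,
                       0,               /* no sharing: DeleteFile by others fails */
                       NULL,
                       OPEN_ALWAYS,     /* create on first run, reuse afterwards */
                       FILE_ATTRIBUTE_HIDDEN,
                       NULL);
}

A service started with the OS would hold this handle for as long as the protection is needed; as noted above, nothing stops kernel-mode code or an antivirus from removing the file anyway.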
Best regards,
Radu

Are intermediate files bad practice?

I was recently downvoted (which only bugged me a little :) ) for an answer I gave to this question. The person offered no explanation for the downvote, which started me thinking: "Why would you avoid producing intermediate files?" Especially in a language like Python, where file I/O is laughably easy.
There seemed to be a consensus that it was a bad idea, but I know for a fact that intermediate files are used regularly in practice. I worked for a very well-respected research firm (let's just say S.O. wouldn't exist without this firm) where it was assumed that your programs would produce files as output. We did this because, if your program indeed deserved to be a standalone program, it would need debuggable output and some way of passing its output between processes that could later be examined in case we discovered an error further downstream.
Is it considered bad practice (in cases like the question linked above) to use intermediate files? Why?
One problem with intermediate files shows up with concurrency.
If clients C1 and C2 are handled simultaneously by server process S (which may or may not have forked into separate processes, used threads, or whatever concurrency mechanism), you may get weird issues when both try to create the same intermediate file.
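One common fix is to let the OS pick a unique name per client; a minimal POSIX sketch (assuming a Unix-like system):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* mkstemp replaces the XXXXXX suffix with a unique name and opens
       the file atomically, so two concurrent clients can never collide. */
    char path[] = "/tmp/myserver-XXXXXX";
    int fd = mkstemp(path);
    if (fd == -1) {
        perror("mkstemp");
        return 1;
    }
    dprintf(fd, "intermediate results for this client\n");
    close(fd);
    unlink(path);  /* remove when done (or keep it for debugging) */
    return 0;
}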
I believe one of the Unix philosophies is that all programs should act as filters; however, this doesn't necessarily mean creating files on disk, and using intermediate files leads to unwieldy behaviour in my opinion. One should also treat the disk as a last resort, using it only for storing/retrieving data that must be available after powering off the computer, and maybe even take care to allow programs to run on read-only media.
Well, there are some issues when you use files; in particular, there may be many unexpected failures while accessing or creating them. The following are all issues that I have personally experienced:
1) The file location is on a remote machine and the network is down (an NFS mount).
2) There is not enough free space while creating the file.
3) The user presses Ctrl-C to cancel the process partway through, and the file is not deleted.
4) The file is on an NFS mount and the network is slow.
5) The folder in which the file was created was a soft link and the link target was deleted.
But still we have to use files, because there are hardly any other options when working in bash. In C and C++, though, I think disk access should be considered a last resort. A program producing files as output is fine if that is the only way to communicate with the user, but at least for intermediate results the use of disk files should be minimized.
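Guarding against the failures listed above mostly means checking every return value; a minimal sketch:

#include <stdio.h>

/* Write a buffer to a file, reporting the failures listed above
   (full disk, dead NFS mount, ...) instead of silently losing data. */
int save_results(const char *path, const char *data)
{
    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;                 /* missing directory, dangling symlink, ... */
    if (fputs(data, f) == EOF || fflush(f) == EOF) {
        fclose(f);
        remove(path);              /* don't leave a truncated file behind */
        return -1;
    }
    if (fclose(f) == EOF)          /* write errors can surface at close (NFS) */
        return -1;
    return 0;
}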
If you create temporary files properly (setting the platform-specific 'temporary' flag, which means the cache is not flushed to disk unless there is an urgent need), they are perfectly fine when the task requires them.
There are almost no things in IT that you can't use while having a good reason to. :-)
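On Windows, for instance, that 'temporary' flag looks like the following sketch; FILE_FLAG_DELETE_ON_CLOSE additionally removes the file when the last handle closes:

#include <windows.h>

/* Create a scratch file that the cache manager tries to keep in memory
   and that disappears automatically when the handle is closed. */
HANDLE create_scratch_file(const char *path)
{
    return CreateFileA(path,
                       GENERIC_READ | GENERIC_WRITE,
                       0, NULL, CREATE_ALWAYS,
                       FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                       NULL);
}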

Is it a good way to use system() for database scripts from C?

I was searching for a way to connect to a database from a C program, but I gathered that ODBC connections, logon and so on need extra libraries. Also, I am using a minimal compiler, Tiny C Compiler, which is very fast. I do not want to use any of the ODBC logic that is otherwise needed to connect to and query the database.
So I am using a method which is as follows.
I use a bteq script (Teradata) which has logon, query, and logoff commands in it. (FYI, bteq is a command-line database utility; you can use it much as you would mysql.exe at the command prompt, by going to the path of the exe. You can replace bteq with mysql.exe, etc.) And I use
system("bteq <myscript.txt >out.txt");
myscript.txt looks like the following:
.logon boxname/user,password;
select date;
.logoff;
The above script logs on to the database, queries the date (you can change the query and write the script according to your database engine and your needs), and writes the output into out.txt.
Now I parse out.txt for the row and column I want, using fgetc, fscanf, or fgets.
Then I use the data for checks and send a mail using PHP on the server:
system("c:/server/php/php.exe sendmail.php");
We can do the same for many database engines, such as MySQL, from a simple C program.
Now my question is: is there any flaw in the above method?
If there is, how can I overcome it? I am asking this because I think the method is unconventional. Please give your opinions on it. I am not worried about execution time, RAM use, performance, and so on; I know the system() function is time-consuming, which is not my concern here. I have also developed specific functions to access query results (similar to accessing a flat file). Please tell me if you have any improvements to this method, and if you spot any flaws, let me know. All kinds of suggestions are welcome.
My environment: Teradata bteq on Windows with Tiny C Compiler.
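For illustration, the fgets-based parsing step might look like this minimal sketch (assuming the value wanted sits on a known line of out.txt):

#include <stdio.h>
#include <string.h>

/* Return the Nth line of out.txt into buf (a sketch of the
   fgets-based parsing described above). */
int read_line_n(const char *path, int n, char *buf, size_t bufsize)
{
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return -1;
    int line = 0;
    while (fgets(buf, (int)bufsize, f) != NULL) {
        if (++line == n) {
            buf[strcspn(buf, "\r\n")] = '\0';  /* strip the newline */
            fclose(f);
            return 0;
        }
    }
    fclose(f);
    return -1;
}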
This is a perfectly fine way to access an external database, as long as your needs are simple. If you already know about the performance and memory implications of doing this, then there's not much more to say.
The method is fine: it's great to decouple the db subsystem and the parser subsystem by implementing them in an appropriate language.
There's just this tiny little thing (but I may be mistaken, because I'm not familiar with bteq): the program will need a bteq script installed in the execution folder, and this script will contain a username and password. If those aren't encrypted in some way, there might be a security flaw.
I wouldn't recommend this if your calling code is running setuid or setgid, but in that case you could use one of the exec() functions instead. (There are a few other considerations you may wish to take into account, all detailed in man 3 system.)
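One refinement worth sketching: popen() reads the command's output through a pipe, which removes the out.txt intermediary. (popen is POSIX; Microsoft's CRT spells it _popen, and whether Tiny C Compiler's headers expose it is an assumption to verify.)

#include <stdio.h>

int main(void)
{
    /* Read bteq's output through a pipe instead of a temp file. */
    FILE *p = popen("bteq < myscript.txt", "r");
    if (p == NULL)
        return 1;
    char line[512];
    while (fgets(line, sizeof line, p) != NULL)
        fputs(line, stdout);   /* parse each line here instead of printing */
    return pclose(p) == -1 ? 1 : 0;
}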

Read data from damaged media

Is it possible to read damaged media (CD, HDD, DVD, ...) even if Windows Explorer bombs out?
What I mean to ask is whether there is a set of APIs or something that can access the disk at a very low level (below Explorer?) and read whatever can be retrieved, even if it is only partial, especially when you can still see the file in Explorer but can't do anything with it because it is damaged somehow (a scratch on a CD, etc.)?
The main problem with Windows Explorer is that it doesn't support resuming copying after a read error. Most superficially scratched CDs, for example, will fail on different areas of the disk every time you eject and reinsert them.
Therefore, with a utility that supports resuming copy operations, it is possible to read the entire contents of a damaged CD by doing "eject/reload/resume" a few times.
In fact, this is what a utility I wrote does, and I've never needed anything fancier to read scratched disks. (It simply uses ReadFile and WriteFile.)
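The core of such a resumable copy might look like this sketch (Win32, with error handling trimmed; persisting the returned offset between attempts is left out):

#include <windows.h>

/* Copy src to dst starting at a saved offset, stopping at the first
   read error; re-run after ejecting/reloading the disc to resume. */
long long copy_resumable(const char *src, const char *dst, long long offset)
{
    HANDLE in  = CreateFileA(src, GENERIC_READ, FILE_SHARE_READ, NULL,
                             OPEN_EXISTING, 0, NULL);
    HANDLE out = CreateFileA(dst, GENERIC_WRITE, 0, NULL,
                             OPEN_ALWAYS, 0, NULL);
    LARGE_INTEGER pos; pos.QuadPart = offset;
    SetFilePointerEx(in, pos, NULL, FILE_BEGIN);
    SetFilePointerEx(out, pos, NULL, FILE_BEGIN);

    BYTE buf[64 * 1024];
    DWORD got, put;
    while (ReadFile(in, buf, sizeof buf, &got, NULL) && got > 0) {
        WriteFile(out, buf, got, &put, NULL);
        offset += got;                /* new resume point */
    }
    CloseHandle(in);
    CloseHandle(out);
    return offset;                    /* where to resume next time */
}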
One step lower would be opening the raw volume by passing a string such as "\\.\F:" to CreateFile (in a C string literal the backslashes must themselves be escaped: "\\\\.\\F:"). That allows you to read raw sectors from the drive, but reconstructing files from that data would be hard.
In fact, the "\\.\" syntax allows you to open devices in the "\GLOBAL??" branch of the Windows Object Manager namespace as if they were files. It's not unlike calling dd with /dev/x as a parameter. There is also a "\Device" branch, but that's only accessible via DeviceIoControl() (i.e. ioctl()), meaning there's no simple ReadFile()/WriteFile() interface.
Anything lower level than that would be device-specific, I guess; like reading raw CD-ROM data (including the ECC bits) the way some CD-burning programs do. You'd have to do some research on the specific media (CD, flash, DVD) and what your hardware allows you to do with them.
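A sketch of that raw-volume read (the F: drive letter is just the example from above; reads on a volume handle must be sector-aligned, and 2048 bytes matches a CD-ROM sector, so use the volume's sector size for other media):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Open the F: volume itself rather than a file on it. */
    HANDLE vol = CreateFileA("\\\\.\\F:", GENERIC_READ,
                             FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                             OPEN_EXISTING, 0, NULL);
    if (vol == INVALID_HANDLE_VALUE)
        return 1;

    /* Reads on a volume handle must be sector-aligned and sector-sized. */
    BYTE sector[2048];
    DWORD got;
    if (ReadFile(vol, sector, sizeof sector, &got, NULL))
        printf("read %lu bytes of sector 0\n", (unsigned long)got);
    else
        printf("sector 0 unreadable, error %lu\n", (unsigned long)GetLastError());

    CloseHandle(vol);
    return 0;
}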
If you want to do it, do it from the Linux side; see http://sourceforge.net/projects/monkeycity/ (open source),
or a ready-made app, freeware too: http://www.theabsolute.net/sware/dskinv.html
The first step is dd_rescue. After that, you're free to try anything to reconstruct the data.
And there's GNU ddrescue
GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, cdrom, etc) to another, trying to rescue the good parts first in case of read errors.
Make sure to use the 3-arg version (manual):
ddrescue [options] infile outfile [mapfile]
That is, do use a mapfile even if it's optional, because:
If you use the mapfile feature of ddrescue, the data is rescued very efficiently, (only the needed blocks are read). Also you can interrupt the rescue at any time and resume it later at the same point. The mapfile is an essential part of ddrescue's effectiveness. Use it unless you know what you are doing.
And it's also included in Cygwin and Homebrew.
I don't know what layer exists between Windows Explorer and the Win32 APIs. You can try to write a program with the Win32 File I/O stuff. If that doesn't work, then you have to write your own device driver to get any lower.
I've had some luck from the Linux side, or using BartPE (http://www.nu2.nu/pebuilder/), but just seeing the file doesn't always mean it is going to be recoverable, whether you're trying from Windows or Linux. Your best bet might be to use a trial of a recovery program.
I have had two disks start to disintegrate on me. From the pattern of unreadable sectors I think they had internal flaking of their emulsion. WinXP Explorer just threw up its hands and said the drive didn't even exist.
In both cases I used "GetDataBack for NTFS" from Runtime Software (http://www.runtime.org/). You can download a free trial which will show you what you could get back if you paid for it. When I bought it it was $49, but I see it is now $79.
This program is amazing. It's not necessarily fast as it will reread some sectors over and over, trying to get a consensus value from multiple tries, but when it's done you can get back stuff that you thought was gone forever. I had one drive that it took over 10 hours to analyze, but when it was done I got back over 97% of a 500GB drive. Definitely worth the price.
Another great tool is Beyond Compare. I have rev 2.5.3, but it is currently at 3.?? and costs $30. They have a full-functionality, 30-day trial. It does a great job of copying large quantities of files (and only those that need to be copied) and, unlike Explorer, it doesn't blow up if something fails. It's sort of like a visual rsync for Windows, if you're familiar with that program from the Samba people.
I have no connection with either of the companies mentioned, other than being a very satisfied customer.
The gold standard for recovering data from a magnetic storage device would have to be SpinRite. It's a commercial app though, so you probably wouldn't learn much from it.
If you have a Linux machine around, I can recommend dvdisaster. It is originally meant for creating error correction files, but it also reads DVDs into an image and ignores read errors; and you can use different drives one after another to get missing sectors filled in the image.
