Is there any way to check if a handle, in my case returned by CreateFile, is valid?
The problem I face is that a valid file handle returned by CreateFile (it is not INVALID_HANDLE_VALUE) later causes WriteFile to fail, and GetLastError claims that it is because of an invalid handle.
Since it seems that you are not setting the handle value to INVALID_HANDLE_VALUE after closing it, what I would do is set a read watchpoint on the HANDLE variable, which will cause the debugger to break at each line that accesses the value of the HANDLE. You will be able to see the order in which the variable is accessed, including when the variable is read in order to pass it to CloseHandle.
See: Adding a watchpoint (breaking when a variable changes)
Your problem is most probably caused by one of two things:
You close the file handle, yet you still try to use it afterwards
The file handle is overwritten due to memory corruption
Generally it's good practice to assign INVALID_HANDLE_VALUE to every handle variable whenever it isn't supposed to contain a valid handle.
In simple words: initialize the variable to this value as soon as it is declared, and write this value back into the variable immediately after you close the file handle.
This will give you an indication of the first case: an attempt to use a file handle which is already closed (or hasn't been opened yet).
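A minimal sketch of that discipline (the path and access flags are placeholders, not taken from the question):

#include <windows.h>

void demo(void)
{
    HANDLE hFile = INVALID_HANDLE_VALUE;   /* initialize at declaration */

    hFile = CreateFileW(L"C:\\temp\\example.dat", GENERIC_WRITE, 0, NULL,
                        CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile != INVALID_HANDLE_VALUE) {
        /* ... use the handle ... */
        CloseHandle(hFile);
        hFile = INVALID_HANDLE_VALUE;      /* mark it closed immediately */
    }
}

Any later use of hFile will then fail fast with an invalid-handle error instead of silently operating on a recycled handle value.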
The other answers are all important for your particular problem.
However, if you are given a HANDLE and simply want to find out whether it is indeed an open file handle (as opposed to, e.g., a handle to a mutex or a GDI object etc.), there is the Windows API function GetFileInformationByHandle for that.
Depending on the permissions your handle grants you for the file, you can also try to read some data from it using ReadFile or perform a null write operation using WriteFile with nNumberOfBytesToWrite set to 0.
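A sketch of that probe (hFile stands for whatever handle you were given):

#include <windows.h>

/* Returns nonzero if hFile currently refers to an open file object;
   the call fails for closed handles and for non-file objects. */
BOOL IsOpenFileHandle(HANDLE hFile)
{
    BY_HANDLE_FILE_INFORMATION info;
    return GetFileInformationByHandle(hFile, &info);
}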
Open files are kept as a data structure in the kernel; I don't think there is an official way to detect whether a file handle is valid. Just use it and check whether the error code is ERROR_INVALID_HANDLE. Are you sure no other thread closed that file handle?
Checking the validity of the handle is a band-aid, at best.
You should debug the process: set a breakpoint at the point where the handle is set up (the file open), and once you hit that code and the handle has been assigned, set a second, conditional breakpoint to trigger when the handle value changes.
This should let you work out the underlying cause, rather than just checking that the handle is valid on each access, which is unreliable, costly, and unnecessary given correct logic.
Just to add to what everyone else is saying, make sure that you check the return value when you call CreateFile. IIRC, it will return INVALID_HANDLE_VALUE on failure, at which point you should call GetLastError to find out why.
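A sketch of that check (the function name and error message are illustrative, not from the question):

#include <windows.h>
#include <stdio.h>

HANDLE OpenForWrite(const wchar_t *path)
{
    HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        /* Capture the reason immediately; later API calls may overwrite it. */
        fprintf(stderr, "CreateFile failed, error %lu\n", GetLastError());
    }
    return h;
}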
I allocated an array of HANDLEs on a heap, and each handle is then associated with a thread.
Once I'm finished with the work, do I have to call CloseHandle() on each of them before calling HeapDestroy()? Or does the latter call make the former unnecessary?
Always close a handle once you've finished with it - it is good practice. The Windows kernel keeps tables that track assigned handles and who they are assigned to, so it is in your best interest to remember to close them.
Handle leaks are also a real thing: a caller requests a handle but never closes it, and such handles pile up over time.
You can also occasionally cause other problems by not closing handles (e.g. sharing violations, if you opened a handle to a file with sharing denied and kept the handle open after you no longer need it).
To be precise though, handles are not real pointers - the Windows kernel translates them through an internal, undocumented and non-exported table which stores the real pointer address of the kernel object linked to each handle.
Yes, you certainly must close the handles first! Windows does not know (or care) what data you have stored in your heap, so it cannot close the handles automatically.
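A sketch of the order of operations (the function name, the count parameter, and the assumption that the threads have already finished are mine):

#include <windows.h>

void shutdown_workers(HANDLE hHeap, HANDLE *threads, int count)
{
    /* First close every thread handle stored in the heap block... */
    for (int i = 0; i < count; i++) {
        if (threads[i] != NULL) {
            CloseHandle(threads[i]);
            threads[i] = NULL;
        }
    }
    /* ...then release the memory that held them. HeapDestroy() only
       frees memory; it knows nothing about the handles inside it. */
    HeapFree(hHeap, 0, threads);
    HeapDestroy(hHeap);
}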
When we write C programs we make calls to malloc or printf. But do we need to check every call? What guidelines do you use?
e.g.
char error_msg[BUFFER_SIZE];
if (fclose(file) == EOF) {
    snprintf(error_msg, sizeof error_msg, "Error closing %s", filename);
    perror(error_msg);  /* perror appends ": <reason>" and a newline itself */
}
The answer to your question is: "do whatever you want" - there is no written rule. But the right question is: "what do users want in case of failure?"
Let me explain: if you are a student writing a test program, for example, there is no absolute need to check for errors; it may be a waste of time.
Now, if your code may be distributed or used by other people, that's quite different: put yourself in the shoes of future users. Which message would you prefer when something goes wrong with an application:
Core was generated by `./cut --output-d=: -b1,1234567890- /dev/fd/63'.
Program terminated with signal SIGSEGV, Segmentation fault.
or
MySuperApp failed to start MySuperModule because there is not enough space on the disk.
Try to free space on disk, then relaunch the app.
If this error persists contact us at support@mysuperapp.com
As has already been addressed in the comments, you have to consider two types of error:
A fatal error is one that kills your program (app / server / site / whatever it is). It renders it unusable, either by crashing or by putting it into a state where it can't do its useful work, e.g. a failed memory allocation or exhausted disk space.
A non-fatal error is one where something goes wrong, but the program can continue to do what it's supposed to do, e.g. a file is not found, and the program keeps serving the other users who didn't request the thing that caused the error.
Source : https://www.quora.com/What-is-the-difference-between-an-error-and-a-fatal-error
Only do error checking if your program has to behave differently when an error is detected. Let me illustrate this with an example: assume your program used a temporary file and you call the unlink(2) system call to erase it at the end. Do you have to check whether the file was successfully erased? Let's analyse the problem with some common sense: if you check for errors, will you be able (inside the program) to do some alternate thing to cope with the failure? This is uncommon: if you created the file, it is rare that you will not be able to erase it, but something can happen in the time between (for example, a change in directory permissions that forbids you to write to the directory anymore). But what can you do in that case? Is there a different approach you could use to erase the temporary file? Probably not... so checking a possible error from the unlink(2) call will be almost useless.
Of course, this doesn't always apply; you have to use common sense while programming. Errors on writes to files should always be considered, as they usually come from access permissions or, more often, from full filesystems (in that case, even trying to generate a log message can be useless, as you have filled your disk - or not, that depends). And you don't always know enough about the environment to decide whether a full-filesystem error can be ignored. Suppose your program has to connect to a server: should a connect(2) failure be acted upon? Probably, most of the time; at least a message with the protocol error (or the cause of the failure) should be given to the user. Assuming everything goes OK can save you time in a prototype, but in production programs you have to cope with whatever can happen.
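A small sketch of that reasoning (the function name and message are made up):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Erasing a temporary file at exit: there is no sensible recovery
   if unlink() fails, so at most we warn instead of acting on it. */
static void remove_tempfile(const char *tmp_path)
{
    if (unlink(tmp_path) == -1)
        fprintf(stderr, "warning: could not remove %s: %s\n",
                tmp_path, strerror(errno));
}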
When you want to use the return value of a function, it is suggested to check that value before using it.
For example, a function returning a pointer can also return NULL, so it is suggested to add a NULL check before using the pointer.
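For instance, with malloc (a minimal sketch):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(64);
    if (buf == NULL) {          /* malloc returns NULL on failure */
        perror("malloc");
        return EXIT_FAILURE;
    }
    strcpy(buf, "hello");
    puts(buf);
    free(buf);
    return EXIT_SUCCESS;
}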
I know that the POSIX write function can return successfully even though it didn't write the whole buffer (if interrupted by a signal). You have to check for short writes and resume them.
But does aio_write have the same issue? I don't think it does, but it's not mentioned in the documentation, and I can't find anything that states that it doesn't happen.
Short answer
Excluding any case of error: practically yes, theoretically not necessarily.
Long answer
In my experience, the caller does not need to call aio_write() more than once to write the whole buffer.
This, however, is no guarantee that the whole buffer passed in will really be written. A final call to aio_error() gives the result of the whole asynchronous I/O operation, which could indicate an error.
Anyhow, the documentation does not explicitly exclude the case that the final call to aio_return() returns a value less than the number of bytes specified in the original call to aio_write(). That would indeed have to be interpreted as the whole buffer not having been written, in which case it would be necessary to call aio_write() again, passing in whatever the previous call indicated as left over.
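A sketch of that final check (the function name is mine; on Linux, link with -lrt):

#include <aio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>

/* Submit one asynchronous write, wait for it, and check whether
   the whole buffer was written. Returns 0 only on a full write. */
static int aio_write_checked(int fd, void *buf, size_t len)
{
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = len;

    if (aio_write(&cb) == -1)
        return -1;                        /* submission failed */

    const struct aiocb *list[1] = { &cb };
    while (aio_error(&cb) == EINPROGRESS)
        aio_suspend(list, 1, NULL);       /* wait for completion */

    int err = aio_error(&cb);
    if (err != 0) {
        errno = err;                      /* the operation failed */
        return -1;
    }

    ssize_t written = aio_return(&cb);
    if ((size_t)written < len)
        return 1;                         /* partial write (e.g. disk full) */
    return 0;
}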
The list of error codes on this page doesn't include EINTR, which is the errno value that means "please call again to do some more work". So no, you shouldn't need to call aio_write again for the same piece of data.
That doesn't mean you can rely on every write completing: you could still get a partial write because the disk is full or some such. But you don't need to check for EINTR and "try again".
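For contrast, this is the classic retry pattern that plain write() needs and that, per the above, aio_write() does not (a sketch):

#include <errno.h>
#include <unistd.h>

/* Resume after short writes and retry on EINTR. */
static ssize_t write_all(int fd, const char *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t n = write(fd, buf + done, len - done);
        if (n == -1) {
            if (errno == EINTR)
                continue;   /* interrupted before writing: retry */
            return -1;      /* real error */
        }
        done += (size_t)n;  /* short write: continue from here */
    }
    return (ssize_t)done;
}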
Is there a way to determine if the process may execute a file without having to actually execute it (e.g. by calling execv(filepath, args) only to fail and discover that errno == EACCES)?
I could stat the file and observe st_mode, but then I still don’t know how that pertains to the process.
Ideally, I’m looking for a function along the lines of
get_me_permissions(filepath, &status); // me: the process calling this function
// when decoded, status tells if the process can read, write, and/or execute
// the file given by filepath.
Thanks in advance.
Assuming your end goal is to eventually execute the program: you don't. Such a test would be useless, because its result can already be wrong before the function making the check even returns! You must be prepared to handle failure of execve due to permission errors.
As pointed out by Steve Jessop, checking whether a given file is executable can be useful in some situations, such as a file listing (ls -l) or a visual file manager. One could certainly think of more esoteric uses, such as using the permission bits of a file for inter-process communication (interesting as a method which would not require any resource allocation), but I suspect that stat ("what are the permission bits set to?") rather than access ("does the calling process have X permission?") is more likely to be the interesting question...
http://linux.die.net/man/2/access
Note the slight trickiness around suid: if the process has effective superuser privilege, that isn't taken into account, since the general idea of the function is to check whether the original invoking user should have access to the file, not whether this process could access it.
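A minimal sketch of that call (the function name is mine):

#include <unistd.h>

/* May the real user execute `path` right now? Note the caveat
   above: the answer can become stale before you actually exec. */
int can_execute(const char *path)
{
    return access(path, X_OK) == 0;
}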
My application uses lseek() to seek the desired position to write data.
The file is successfully opened using open() and my application was able to use lseek() and write() lots of times.
At a given time, for some users, and not easily reproducible, lseek() returns -1 with errno set to 9 (EBADF). The file is not closed before this, and the file handle (int) isn't reset.
After this, another file is created; open() is okay again and lseek() and write() works again.
To make it even worse, this user tried the complete sequence again and all was well.
So my question is, can the OS close the file handle for me for some reason?
What could cause this? A file indexer or file scanner of some sort?
What is the best way to solve this; is this pseudo code the best solution?
(never mind the code layout, will create functions for it)
int fd = open(...);
if (fd > -1) {
    long result = lseek(fd, ....);
    if (result == -1 && errno == 9) {
        close(fd); // make sure we try to close nicely
        fd = open(...);
        result = lseek(fd, ....);
    }
}
Anybody experience with something similar?
Summary: file seek and write works okay for a given fd and suddenly gives back errno=9 without a reason.
So my question is, can the OS close the file handle for me for some reason? What could cause this? A file indexer or file scanner of some sort?
No, this will not happen.
What is the best way to solve this; is this pseudo code the best solution? (never mind the code layout, will create functions for it)
No, the best way is to find the bug and fix it.
Anybody experience with something similar?
I've seen fds getting messed up many times, resulting in EBADF in some of the cases and blowing up spectacularly in others. It's been:
buffer overflows - overflowing something and writing a nonsense value into an 'int fd;' variable.
silly bugs in some corner case, where someone wrote if(fd = foo[i].fd) when they meant if(fd == foo[i].fd)
race conditions between threads, where one thread closes the wrong file descriptor that another thread wants to use.
If you can find a way to reproduce this problem, run your program under strace so you can see what's going on.
The OS will not close file handles randomly (I am assuming a Unix-like system). If your file handle gets closed, then there is something wrong with your code, most probably elsewhere (thanks to the C language and the Unix API, this can be literally anywhere in the code, and may be due to, e.g., a slight buffer overflow in some piece of code which really looks unrelated).
Your pseudo-code is the worst solution, since it will give you the impression of having fixed the problem, while the bug still lurks.
I suggest that you add debug prints (i.e. printf() calls) wherever you open and close a file or socket. Also, try Valgrind.
(Just yesterday I had a spooky off-by-one buffer overflow which damaged the least significant byte of a temporary slot generated by the compiler to save a CPU register; the indirect effect was that a structure in another function appeared to be shifted by a few bytes. It took me quite some time to understand what was going on, including some thorough reading of MIPS assembly code.)
I don't know what type of setup you have, but the following scenario could, I think, produce such an effect (or one similar to it). I have not tested this, so please take it with a grain of salt.
If the file/device you are opening is implemented as a server application (e.g. NFS), consider what could happen if the server application goes down / restarts / reboots. The file descriptor, though originally valid at the client end, might no longer map to a valid file handle at the server end. This can conceivably lead to a sequence of events in which the client gets EBADF.
Hope this helps.
No, the OS should not close file handles just like that, and other applications (file scanners etc.) should not be able to do it.
Do not work around the problem; find its source. If you don't know what the reason for your problem was, you will never know whether your workaround actually works.
Check your assumptions. Is errno set to 0 before the call? Is fd really valid at the point the call is being made? (I know you said it is, but did you check it?)
What is the output of puts( strerror( 9 ) ); on your platform?
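A minimal sketch of those sanity checks (the wrapper name is mine; fd and offset stand for your own values):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Clear errno before the call, then report exactly which
   descriptor failed and why. */
static off_t checked_lseek(int fd, off_t offset)
{
    errno = 0;
    off_t pos = lseek(fd, offset, SEEK_SET);
    if (pos == (off_t)-1)
        fprintf(stderr, "lseek(fd=%d, %lld) failed: %s\n",
                fd, (long long)offset, strerror(errno));
    return pos;
}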