Can create a file but not write to it

I have an application that writes to a log file and also creates other files as it runs. In some rare cases, the application runs fine (creating files, writing to them, and appending to the log file) and then suddenly it can still create and delete files, but it can no longer write to any file. The log file stops in the middle of a line, and other files that get created are 0 bytes: they can be created, but nothing can be written to them.
Rebooting the machine helps, and after the reboot we can create and write files with no issue. All affected machines are running either RHEL 6 or CentOS 6. This is not a drive space issue; there is plenty of room left on the machines that display it.
Does anyone have any clue what might cause this behavior?
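A first diagnostic step would be to capture errno at the exact moment a write fails: that distinguishes quota exhaustion (EDQUOT) and a per-process file-size limit (EFBIG) from genuine free-space problems (ENOSPC) or I/O errors (EIO). A minimal sketch, assuming the application writes through stdio; the function and variable names here are illustrative, not from the original application:

```c
/* Log one line and, on failure, record *why* the write failed.
 * errno is only meaningful immediately after the failing call. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int log_line(FILE *log, const char *line)
{
    if (fputs(line, log) == EOF || fflush(log) == EOF) {
        fprintf(stderr, "log write failed: %s (errno=%d)\n",
                strerror(errno), errno);
        return -1;
    }
    return 0;
}
```

An EDQUOT here would point at disk quotas, which can bite long before the disk itself looks full.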

Related

An unknown process is overwriting the contents of my application's files with 0x00 bytes

Summary
I have developed a Windows application that saves configuration files every time it runs. Rarely, when reading those files back in, I discover that they are filled with zeroes (0x00 bytes). I've determined that they are zeroed out by an unknown process long after my application has exited. What could this process be, and why is it zeroing out my files?
Details
Although rare, I am not the only one having this problem. For a broader perspective, see: Very strange phenomenon: output file filled or truncated with binary zeros. What could cause that?
I'm using Visual Studio 2017 C++, but files are opened with fopen(), written to with a single fwrite() call, and closed using fclose(). All calls are error-checked (other affected people report that they use CFile).
I confirm the validity of each save by immediately reading it back in and checking file contents (via hashes/checksums). I'm certain that the file is indeed written to the disk correctly.
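For illustration, a rough sketch of the save-and-verify path described above, assuming a single fwrite() as stated; it compares bytes directly instead of hashing, and all names are invented rather than taken from the actual application:

```c
/* Write buf to path, then immediately re-read and compare,
 * mirroring the save-and-verify scheme described in the question. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int save_and_verify(const char *path, const void *buf, size_t len)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    if (fwrite(buf, 1, len, f) != len) {
        fclose(f);
        return -1;
    }
    if (fclose(f) != 0)
        return -1;

    unsigned char *check = malloc(len);
    if (!check)
        return -1;
    f = fopen(path, "rb");
    int ok = f
          && fread(check, 1, len, f) == len
          && memcmp(check, buf, len) == 0;
    if (f)
        fclose(f);
    free(check);
    return ok ? 0 : -1;
}
```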
At some point before the next invocation of my application, some unknown process writes to the file and fills it with zeroes (the file size/length stays the same, but all bytes are 0x00).
How I know this: Every time I save a file, I also save in the Registry the file's 3 GetFileTime() times (Creation, Last Access, Last Write). When I detect a corruption (zero-out), I compare the current GetFileTime() times with the ones stored in the Registry. In all cases, the "Last Access" and "Last Write" times have changed.
Something has written to my files a random amount of time later (anywhere between 10 minutes and 24+ hours, although that number is obviously also skewed by how frequently the application is executed).
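For reference, a sketch of the timestamp check described above, using the documented Win32 GetFileTime() call; the Registry bookkeeping is omitted and the helper name is invented:

```c
/* Fetch the three timestamps for a file so they can be compared
 * (via CompareFileTime()) with the values stored at save time. */
#include <windows.h>

BOOL get_file_times(const wchar_t *path, FILETIME *created,
                    FILETIME *accessed, FILETIME *written)
{
    HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;
    BOOL ok = GetFileTime(h, created, accessed, written);
    CloseHandle(h);
    return ok;
}
```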
Some more information:
- I can't reproduce the problem on my development machine -- all instances are reported from client machines (application users) via remote logging and crash reports (I've been trying to solve this for months). This makes it impossible to run the application in a controlled test harness (e.g. a debugger, procmon, the security policy's "Audit object access", etc.).
- Upon detecting a zero-out, I immediately log the currently running process list (using CreateToolhelp32Snapshot() -- no services). However, there isn't any common process that stands out. If I exclude processes that also run on my development machine, the next most common process (a Windows process) is executing on only 50% of the clients. Avast antivirus (a potential culprit) exists on only 35% of clients.
- I've tried bracketing each save with "canary" files that are saved immediately before and after the actual file. Canary files contain plain text, random binary data, or actual application data (encrypted or not). They are all zeroed out regardless.
- Whenever multiple of my files get zeroed out, they ALL get zeroed out simultaneously (their "Last Access"/"Last Write" times are within 100 ms across all files).
- I've compiled the application both with Visual Studio 2017 using CL and with Visual Studio 2019 using LLVM. Files from both builds get zeroed out.
- Zero-outs happen at a rate of about 1 per day for every 1000 daily application executions (i.e., daily active users).
- Zero-outs are random and once-only: it can be months before one occurs for a given client, and after it does, it never happens again.
- This affects both Windows 7 and Windows 10, even when no antivirus is installed.

Unknown file extension with file name "store"

I was trying to free up some disk space on my external hard drive and I found a new folder with a file in it. When I opened the folder, there was a file with the name "store" and no extension. The size of the file is 5.24 GB. Does anyone know what it could be, and is it safe to delete? I have plenty of data on the external hard drive (about 1 TB) which I cannot lose.
If everything else looks and works fine then I'd just delete it. I'm guessing the file was created by a program, perhaps as temporary storage. To be more cautious you could move the file from your external hard drive to your computer, make sure everything works as expected, then either move the file back or delete it accordingly.
Initially I thought the worst-case scenario was that the file was some file system information or driver that your computer wasn't recognizing, and deleting it would cause corruption. However, since it's always connected to the same computer and all the files etc. look fine, the worst thing I can think of is that it's the result of a virus scan or some user settings. Therefore it should be safe to delete.
Side note: in the file properties details tab near the bottom there might be an "Owner" listed although it will probably be (your computer's name) / (your username).

Application calls old source functions

There is an application on a remote machine running Linux (Fedora) that writes to a log file when certain events occur. Some time ago I changed the format of the messages being written to the log file. But recently it turned out that, in some rare cases, log messages in the old format still appear. I know for sure that no part of my code can write such strings, and there is no instance of the old application running. Does anyone have any ideas why this could happen? It's not possible to check which process writes those files, because nothing like auditctl is installed there, and there is no package manager such as yum available to install it. The application is written in C.
You can use the fuser command to find out all the processes that are using that file (the -v flag gives verbose output showing the owning user and command name):
`fuser -v file.log`
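If fuser itself isn't installed either, the same information can be scraped from /proc directly. A minimal sketch in C, assuming a standard Linux /proc layout (this is a rough stand-in for fuser, not the original poster's code):

```c
/* List processes that have a given file open by walking
 * /proc/<pid>/fd and comparing each symlink's target. */
#include <ctype.h>
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s /path/to/file\n", argv[0]);
        return 1;
    }
    DIR *proc = opendir("/proc");
    if (!proc) {
        perror("opendir /proc");
        return 1;
    }
    struct dirent *pe;
    while ((pe = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)pe->d_name[0]))
            continue;                     /* not a PID directory */
        char fddir[64];
        snprintf(fddir, sizeof fddir, "/proc/%s/fd", pe->d_name);
        DIR *fds = opendir(fddir);
        if (!fds)
            continue;                     /* no permission, or gone */
        struct dirent *fe;
        while ((fe = readdir(fds)) != NULL) {
            char link[PATH_MAX], target[PATH_MAX];
            snprintf(link, sizeof link, "%s/%s", fddir, fe->d_name);
            ssize_t n = readlink(link, target, sizeof target - 1);
            if (n < 0)
                continue;                 /* ".", "..", or a race */
            target[n] = '\0';
            if (strcmp(target, argv[1]) == 0)
                printf("pid %s has %s open (fd %s)\n",
                       pe->d_name, argv[1], fe->d_name);
        }
        closedir(fds);
    }
    closedir(proc);
    return 0;
}
```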

Access Directories & Files

I am writing a program in C that writes its output to a CSV file. This works locally, and I can create and update the file with no errors.
But on the server where I need to store the file, I do not have permission to write to that file/directory. Is there a workaround for the permissions problem?
Well, you can run your program as a more authoritative user, such as root.
There is no simple way for the program itself to just ignore the operating system's security model; that would make the model quite pointless.
Note that if you're not the administrator on the server, you're likely not even allowed to run programs as root.
Finally, writing C programs that manipulate files and directories and then running them as root on a server is a fine way of shooting yourself in the foot. Be careful.
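Within those constraints, the program can at least report why the open failed (strerror(errno)) and fall back to a location the current user can write. A minimal sketch; the paths are placeholders, not taken from the question:

```c
/* Try the intended output location first; on failure, report the
 * reason and fall back to a user-writable path. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

FILE *open_output(void)
{
    const char *primary = "/srv/data/output.csv";  /* may need privileges */
    FILE *f = fopen(primary, "a");
    if (f)
        return f;
    fprintf(stderr, "fopen(%s): %s\n", primary, strerror(errno));

    const char *home = getenv("HOME");
    char path[4096];
    snprintf(path, sizeof path, "%s/output.csv", home ? home : "/tmp");
    f = fopen(path, "a");
    if (!f)
        fprintf(stderr, "fopen(%s): %s\n", path, strerror(errno));
    return f;
}
```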

Foreach Loop Container with Foreach File Enumerator option iterates all files twice

I am using the SSIS Foreach Loop Container to iterate through files with a certain pattern on a network share.
I am encountering a kind of unreproducible malfunction of the loop container:
Sometimes the loop is executed twice: after all files have been processed, it starts over with the first file.
Has anyone encountered a similar bug?
Maybe not directly in SSIS, but while accessing files on a Windows share with some other technology?
Could this error be related to some network issue?
Thanks.
I found this to be the case whilst working with Excel files and using the *.xlsx wildcard to drive the foreach.
Once I put logging in place, I noticed that when the Excel file was opened, Excel produced a temporary file prefixed with ~$. This was picked up by the foreach loop.
So I used a trick similar to http://geekswithblogs.net/Compudicted/archive/2012/01/11/the-ssis-expression-wayndashskipping-an-unwanted-file.aspx to exclude files with a ~$ in the filename.
What error messages (SSIS log / Event Viewer) do you get?
Similar to @Siva, I've not come across this, but here are some ideas you could use to try to diagnose it. You may be doing some of these already; I've just written them down for completeness from my thought processes...
- Log all files processed: write a line to a log file/table before processing each file and another after it, keeping the full path of each file. This is something we do as standard in our ETL implementations, as users often come back to us with questions about when/what has been loaded. It will let you see whether files are actually being processed twice.
- Perhaps try moving each file to a different directory after it is processed. That makes it harder for a file to be processed a second time, and the problem may disappear. (If you are processing them from a "master" area and so can't move them, consider copying the files to a "waiting" folder, then processing them and moving them to a "processed" folder.)
- @Siva's comment is interesting: look at the "traverse subfolders" check box.
- Check Event Viewer for odd network events or application events (SQL Server restarting?).
- Use Perfmon to see if anything odd is happening in terms of network load on your server (a bit of a random idea!).
- Try running the whole process with the files on a local disk instead of a network disk. If your mean time between failures is about 10 runs, you could run the load locally 20-30 times; if you don't get an error, it may be a network problem.
Nothing helped, so I implemented the following workaround: a script task in the foreach iterator that tracks all files. If a file was already loaded, a warning is fired and the file is not processed again. Anyway, it does seem to be some network-related problem...
