Get last modified file without enumerating all files

In C#, given a folder path, is there a way to get the last modified file without getting all files?
I need to quickly find folders that have been updated after a certain time, and if the file that was last modified is before that time, I want to skip the folder entirely.
I noticed that a folder's last modified time does not get updated when one of its files gets updated, so this approach doesn't work.

No; this is why Windows comes with indexing to speed up searching. The NTFS file system wasn't designed with fast searching in mind.
In any case, you can monitor file changes, which is not difficult to do. If it is possible to let your program run in the background and monitor changes, then this would work. If you need past history, you can do an initial scan only once and then build up your hierarchy from there. As long as your program is always running, it should have a current snapshot and never have to do the slow scan.
You can also use Windows Search itself to find the files. If indexing is available, that's probably as fast as you'll get.
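For the monitoring approach, here is a minimal sketch using FileSystemWatcher (the folder path is a hypothetical placeholder):

using System;
using System.IO;

class FolderMonitor
{
    static void Main()
    {
        // Hypothetical folder path; point it at the folder you care about.
        var watcher = new FileSystemWatcher(@"C:\MyFolder")
        {
            NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName,
            IncludeSubdirectories = true,
            EnableRaisingEvents = true
        };

        // Remember the most recent write we have observed.
        DateTime lastChange = DateTime.MinValue;
        watcher.Changed += (s, e) => lastChange = DateTime.Now;
        watcher.Created += (s, e) => lastChange = DateTime.Now;

        Console.WriteLine("Monitoring; press Enter to stop.");
        Console.ReadLine();
        Console.WriteLine("Last observed change: " + lastChange);
    }
}

As noted above, a process like this has to stay running to keep its snapshot current.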

Try this.
DirectoryInfo di = new DirectoryInfo(strPath);
DateTime dt = di.LastWriteTime;
Then you should use
Directory.EnumerateFiles(strPath, "*.*", SearchOption.TopDirectoryOnly);
Then loop over the collection and get a FileInfo for each file.
I don't see a way to get the modified date of a file without obtaining a FileInfo reference for it, as far as I know.
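For completeness, a sketch of that loop, reusing the strPath variable from above:

DateTime newest = DateTime.MinValue;
string newestFile = null;
foreach (string path in Directory.EnumerateFiles(strPath, "*.*", SearchOption.TopDirectoryOnly))
{
    // FileInfo exposes the per-file timestamps.
    FileInfo fi = new FileInfo(path);
    if (fi.LastWriteTime > newest)
    {
        newest = fi.LastWriteTime;
        newestFile = path;
    }
}
// newestFile now holds the most recently modified file, or null for an empty folder.

EnumerateFiles streams its results, so the full file list is never materialized up front, but each file's metadata is still read once; there is no shortcut around that.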

Related

Update file across multiple folder locations?

I need something that can copy a specified file any and everywhere on my drive (or computer) where that file already exists; i.e. update a file. I tried to search this site, in case I'm not the first, and found this:
CMD command line: copy file to multiple locations at the same time
But not quite the same.
Example:
Say I have a file called CurrentList.txt, and I have copies of it all over my hard drive.  But then I change it and I want all the copies to update.  So I want to copy the newer one over all the others.  It could 'copy if newer', but generally I know it's newer, so it could also just find every instance and copy over it.
I was originally going to use some kind of .bat file that would have to iterate over every folder seeking the file in question, but my batch file programming is limited/rusty.  Then I looked to see if xcopy could do it, but I don't think so...
For how I will use it most, I generally know where those files are going to be, so it actually might be as good or better if I could specify it to (using example), "copy CurrentList.txt, overwriting all other copies wherever found in the C:\Lists folder and all subfolders".
I would really like to be able to have it in a context menu, so I could (from a file explorer) right click on a file or selected files and choose the option to distribute it.
Thanks in advance for any ideas.
Use the "replace" command...
replace CurrentList.txt C:\Lists /s
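If you ever want the same behaviour from code instead of a batch file, here is a minimal C# sketch of the same idea (the file name and folders are the ones from the example, and hypothetical):

using System.IO;

class DistributeFile
{
    static void Main()
    {
        // Hypothetical location of the master copy.
        string source = @"C:\CurrentList.txt";

        // Overwrite every existing copy found under C:\Lists.
        foreach (string target in Directory.EnumerateFiles(@"C:\Lists", "CurrentList.txt", SearchOption.AllDirectories))
        {
            File.Copy(source, target, true);
        }
    }
}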

Shake: automatically deleting file after failed command

Using Shake, to create an mp3 (this is just a learning example), I use lame, and then id3v2 to tag it.
If the lame succeeds, but the id3v2 fails, then I'm left with the mp3 file in place; but of course it is "wrong". I was looking for an option to automatically delete target files if a producing command errors, but I can't find anything. I can do this manually by checking the exit code and using removeFiles, or by building in a temporary directory and moving as the last step; but this seems like a common-enough requirement (make does this by default), so I wonder if there's a function or simple technique that I'm just not seeing.
The reason Make does this by default is that if Make has a partial incomplete file on disk, it considers the task to have run successfully and be up to date, which breaks everything. In contrast, Shake records that a task ran successfully in a separate file (.shake.database), so it knows that your mp3 file isn't complete, and will rebuild it next time.
While Shake doesn't need you to delete the file, you might still want to do so to avoid confusing users. You can do that with actionOnException, something like:
let generateMp3 = do cmd "lame" ... ; cmd "id3v2" ...
let deleteMp3 = removeFile "foo.mp3"
actionOnException generateMp3 deleteMp3
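Note that the exception handler passed to actionOnException runs in IO, so removeFile here is the plain System.Directory function; Shake's own removeFiles (mentioned in the question) would also fit there.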

Date in NLog file name and limit the number of log files

I'd like to achieve the following behaviour with NLog for rolling files:
1. prevent renaming or moving the file when starting a new file, and
2. limit the total number or size of old log files to avoid capacity issues over time
The first requirement can be achieved e.g. by adding a timestamp like ${shortdate} to the file name. Example:
logs\trace2017-10-27.log <-- today's log file to write
logs\trace2017-10-26.log
logs\trace2017-10-25.log
logs\trace2017-10-24.log <-- keep only the last 2 files, so delete this one
According to other posts, however, it is not possible to use a date in the file name together with archive parameters like maxArchiveFiles. If I use maxArchiveFiles, I have to keep the log file name constant:
logs\trace.log <-- today's log file to write
logs\archive\trace2017-10-26.log
logs\archive\trace2017-10-25.log
logs\archive\trace2017-10-24.log <-- keep only the last 2 files, so delete this one
But in this case, on the first write of each day, it moves yesterday's trace to the archive and starts a new file.
The reason I'd like to prevent moving the trace file is that we use a Splunk log monitor that watches the files in the log folder for updates, reads the new lines, and feeds them to Splunk.
My concern is that if I have an event written at 23:59:59.567, the next event at 00:00:00.002 clears the previous content before the log monitor is able to read it in that fraction of a second.
To be honest, I haven't tested this scenario, as it would be complicated to set up and my team doesn't own Splunk - so please correct me if this cannot happen.
Note also that I know it is possible to feed Splunk directly in other ways, e.g. via a network connection, but the current Splunk setup at our company reads from log files, so it would be easier that way.
Any idea how to solve this with NLog?
When using NLog 4.4 (or older), you have to go into Halloween mode and do some trickery.
This example makes hourly log-files in the same folder, and ensure archive cleanup is performed after 840 hours (35 days):
fileName="${logDirectory}/Log.${date:format=yyyy-MM-dd-HH}.log"
archiveFileName="${logDirectory}/Log.{#}.log"
archiveDateFormat="yyyy-MM-dd-HH"
archiveNumbering="Date"
archiveEvery="Year"
maxArchiveFiles="840"
archiveFileName - Using {#} allows the archive cleanup to generate proper file wildcard.
archiveDateFormat - Must match the ${date:format=} of the fileName (so remember to update both date formats if a change is needed).
archiveNumbering=Date - Configures the archive cleanup to support parsing of filenames as dates.
archiveEvery=Year - Activates the archive cleanup, but also the archive file operation. Because the configured fileName already produces the rolling files, we don't want any additional archive operations (e.g. to avoid generating extra empty files at midnight).
maxArchiveFiles - How many archive files to keep around.
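Assembled into a file target, the configuration might look like this (a sketch; the target name is an assumption, and ${logDirectory} is the variable from the snippet above):

<target xsi:type="File" name="logfile"
        fileName="${logDirectory}/Log.${date:format=yyyy-MM-dd-HH}.log"
        archiveFileName="${logDirectory}/Log.{#}.log"
        archiveDateFormat="yyyy-MM-dd-HH"
        archiveNumbering="Date"
        archiveEvery="Year"
        maxArchiveFiles="840" />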
With NLog 4.5 (still in beta), it will be a lot easier, as one just has to specify maxArchiveFiles. See also https://github.com/NLog/NLog/pull/1993

How to get file name when file change is observed via watch_file

I am currently facing an issue which I don't know how to fix. I have the following Julia code:
while true
    print(watch_file("test"))
end
So this should get me all the file changes in the directory named "test", at least on Windows.
Now that's all well and good, and it kind of works, at least for creating a file or moving a file into that directory. This is an example of what I get:
("New Textfile.txt",Base.FileEvent(true,false,false))
But when I delete or rename that file, I don't get the filename of the file deleted or renamed.
("",Base.FileEvent(true,false,false))
Is there a different method/function I can use to get the filename, even when the file is deleted or renamed? Or even better, a way that achieves this and is cross-platform compatible? Any help appreciated.
EDIT: If you could give me an alternative that supports recursive monitoring, that would be even better.
On Linux, with Julia 0.4.5 and 0.4.3, watch_file always returns the file name. It is a very platform-dependent feature (as in Node.js: https://nodejs.org/api/fs.html#fs_caveats), and only manual polling can be a truly platform-independent solution.

How do you set the modification time of a file in a ClearCase vob?

I have several files that I rsync'd over to the vob and they all have times 40 minutes in the future.
I tried touch, and all that does is maintain the time 40 minutes into the future from when I touch.
I guess that ClearCase is in charge of setting the modification time and is overriding touch.
Is there another way? Is there a way to tell ClearCase to stop messing up the file time?
What option did you use when adding those files to source control?
As explained in this help page:
To preserve the modification time of the file being checked in, use the -ptime option.
If you omit the -ptime option, the modification time of the new version is set to the checkin time.
The mkelem man page adds:
On some UNIX and Linux platforms, it is important that the modification time be preserved for archive files (libraries) created by ar(1) (and perhaps updated with ranlib(1)).
The link editor, ld(1), generates an error message if the modification time does not match a time recorded in the archive itself. Be sure to use this option, or (more reliably) store archive files as elements of a user-defined type, created with the mkeltype -ptime command. This causes -ptime to be invoked when the element is checked in.
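For example, a checkin that preserves the file's modification time would look something like this (hypothetical file name):
cleartool checkin -ptime -nc mylib.a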
Unless you remove those files and re-create them, I don't think you can change the "Created on" time.
