Shake update of index files after changes - shake-build-system

I use the Shake build system together with a file watcher to keep the files for a web site current. I cannot work out how to ensure that the index file for a directory is rebuilt when a file in that directory changes.
The index file for a directory lists all the files in it, each with its title and a link. From the index file, an HTML page is eventually produced by the Shake build.
The index must therefore be rebuilt whenever one of the indexed files in the directory changes.
In the rule for each index file, the set of indexed files is marked as needed, but this does not force the index file to be rebuilt when a file in the directory changes. I had expected that needing the files would trigger reconstruction of the index whenever any of them changes; this seems not to be a correct understanding.
What is the most effective way to force a re-shake of the index file when a file in the directory changes? Is it sufficient to touch the index file to trigger the rebuild? Or is it better to redo the conversion of the index.md file to the next step (pandoc), so that the following processing steps are then triggered by the Shake logic? Or something else?
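For concreteness, a minimal sketch of the kind of rule in question (the paths, directory layout and title extraction are made up and stand in for the real site):

import Development.Shake
import Development.Shake.FilePath

main :: IO ()
main = shakeArgs shakeOptions $ do
    want ["_build/index.md"]

    "_build/index.md" %> \out -> do
        -- getDirectoryFiles is a tracked operation, so the rule also
        -- reruns when files are added to or removed from pages/
        pages <- getDirectoryFiles "pages" ["*.md"]
        -- need makes the index depend on the contents of every listed
        -- file, so a change to any of them should dirty the index
        need ["pages" </> p | p <- pages]
        -- the first line of each file stands in for its title here
        titles <- mapM (fmap (takeWhile (/= '\n')) . readFile' . ("pages" </>)) pages
        writeFile' out $ unlines
            [ "- [" ++ t ++ "](" ++ p ++ ")" | (p, t) <- zip pages titles ]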

Related

Date in NLog file name and limit the number of log files

I'd like to achieve the following behaviour with NLog for rolling files:
1. prevent renaming or moving the file when starting a new file, and
2. limit the total number or size of old log files to avoid capacity issues over time
The first requirement can be achieved e.g. by adding a timestamp like ${shortdate} to the file name. Example:
logs\trace2017-10-27.log <-- today's log file to write
logs\trace2017-10-26.log
logs\trace2017-10-25.log
logs\trace2017-10-24.log <-- keep only the last 2 files, so delete this one
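In NLog configuration that corresponds to a fileName using the ${shortdate} layout renderer, e.g.:
fileName="logs/trace${shortdate}.log"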
According to other posts it is, however, not possible to use a date in the file name together with archive parameters like maxArchiveFiles. If I use maxArchiveFiles, I have to keep the log file name constant:
logs\trace.log <-- today's log file to write
logs\archive\trace2017-10-26.log
logs\archive\trace2017-10-25.log
logs\archive\trace2017-10-24.log <-- keep only the last 2 files, so delete this one
But in this case, on the first write of each day, yesterday's trace file is moved to the archive and a new file is started.
The reason I'd like to prevent moving the trace file is that we use a Splunk log monitor that watches the files in the log folder for updates, reads the new lines and feeds them to Splunk.
My concern is that if an event is written at 23:59:59.567, the next event at 00:00:00.002 could clear the previous content before the log monitor is able to read it in that fraction of a second.
To be honest I haven't tested this scenario, as it would be complicated to set up (my team doesn't own Splunk, etc.), so please correct me if this cannot happen.
Note also that I know Splunk can be fed directly in other ways, e.g. via a network connection, but the current Splunk setup at our company reads from log files, so it would be easier to stay with that.
Any idea how to solve this with NLog?
When using NLog 4.4 (or older), you have to go into Halloween mode and do some trickery.
This example makes hourly log files in the same folder, and ensures archive cleanup is performed after 840 hours (35 days):
fileName="${logDirectory}/Log.${date:format=yyyy-MM-dd-HH}.log"
archiveFileName="${logDirectory}/Log.{#}.log"
archiveDateFormat="yyyy-MM-dd-HH"
archiveNumbering="Date"
archiveEvery="Year"
maxArchiveFiles="840"
archiveFileName - Using {#} allows the archive cleanup to generate a proper file wildcard.
archiveDateFormat - Must match the ${date:format=} of the fileName (so remember to change both date formats if a change is needed).
archiveNumbering=Date - Configures the archive cleanup to support parsing of file names as dates.
archiveEvery=Year - Activates the archive cleanup, but also the archive file operation. Because the configured fileName already rolls to a new file on its own, we don't want any additional archive operations (e.g. this avoids generating extra empty files at midnight).
maxArchiveFiles - How many archive files to keep around.
With NLog 4.5 (still in beta) it will be a lot easier, as one just has to specify MaxArchiveFiles. See also https://github.com/NLog/NLog/pull/1993
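For reference, a sketch of how these attributes might fit together on a File target inside nlog.config (the target name, logger rule, and logDirectory value are placeholders):

<variable name="logDirectory" value="C:/temp/logs" />
<targets>
  <target xsi:type="File" name="rollingFile"
          fileName="${logDirectory}/Log.${date:format=yyyy-MM-dd-HH}.log"
          archiveFileName="${logDirectory}/Log.{#}.log"
          archiveDateFormat="yyyy-MM-dd-HH"
          archiveNumbering="Date"
          archiveEvery="Year"
          maxArchiveFiles="840" />
</targets>
<rules>
  <logger name="*" minlevel="Trace" writeTo="rollingFile" />
</rules>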

kqueue watch size change on directory

I'm trying to watch for a size change on a directory using kqueue; is this possible? The reason is that I am watching directories, and whenever an event triggers I stat the directory and compare last modification times etc. to figure out whether contents-modified, added, removed, or renamed events happened. My goal is to get an event to trigger on the directory when a contents-modified happens on a file inside the directory. I couldn't accomplish that, so we had an idea: detect a size change on the directory, on the assumption that if a contents-modified happens on a file within it, the size of the directory will change. Is this possible?
Thanks
You don't want/need to stat() the directory. You need to read the list of files in the directory each time kqueue says the directory was modified, and compare it to the list as it was the last time you read it. Only then will you know if a new file has appeared, or if a file has been removed, or if a file has been renamed (you will also need to keep track of the inode numbers for each file in the list to detect renames).
If you want to further monitor for changes to each file, then you also need to add events for each file in the directory and update this list of events each time the event for the directory itself is signalled.
FYI: This command-line utility does what you want, and can be built to use kqueue: https://github.com/emcrisostomo/fswatch
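A minimal C sketch of that approach (register a vnode filter on the directory, then rescan its entries on each wakeup); O_EVTONLY is macOS-specific, and the rescan itself is left as a comment:

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <fcntl.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc < 2) { fprintf(stderr, "usage: %s directory\n", argv[0]); return 1; }

    int kq = kqueue();
    int dirfd = open(argv[1], O_EVTONLY);   /* use O_RDONLY on BSDs without O_EVTONLY */
    if (kq == -1 || dirfd == -1) { perror("setup"); return 1; }

    struct kevent change;
    /* NOTE_WRITE fires when entries are added, removed or renamed in the directory */
    EV_SET(&change, dirfd, EVFILT_VNODE, EV_ADD | EV_CLEAR,
           NOTE_WRITE | NOTE_EXTEND | NOTE_DELETE | NOTE_RENAME, 0, NULL);
    kevent(kq, &change, 1, NULL, 0, NULL);   /* register the event once */

    for (;;) {
        struct kevent event;
        if (kevent(kq, NULL, 0, &event, 1, NULL) > 0) {
            printf("directory changed, fflags=0x%x\n", (unsigned)event.fflags);
            /* re-read the directory listing here and diff it (names + inode
               numbers) against the previous snapshot, as described above */
        }
    }
}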

How do I reference a BibTeX file from a directory other than the current project?

What I am trying to do is create a "Master" bibtex bibliography (organized via JabRef) in a convenient directory so that I do not need to copy new references from every project I work on into my master database. The issue I am coming up against is that while I can reference another file easily enough (e.g. for STATA regression table output), even if it is not in the same directory, the bibliography does not want to cooperate.
For the purposes of this example I have created a dummy directory in
My Documents/Course/Paper.
The TeX file is under
My Documents/Course/Paper/MasterTexFile.tex
and the example TeX file referenced in the code (simply called Text) is under
My Documents/Course/Text.tex.
My ideal is to have the bibliography in a more general directory altogether, but I have placed it just one level above the working .tex file for illustrative purposes. The document code is as follows:
\documentclass[12pt,titlepage]{article}
\usepackage[round,longnamesfirst]{natbib}
\usepackage{setspace}
\doublespacing
\usepackage[margin=1in]{geometry}
\usepackage{hyperref}
\hypersetup{
pdftitle={TITLE},
pdfauthor={AUTHOR},
pdfsubject={SUBJECT},
pdfkeywords={KEYWORD} {KEYWORD} {KEYWORD},
colorlinks=true,
linkcolor=blue,
citecolor=blue,
filecolor=magenta,
urlcolor=blue
}
\begin{document}
\title{TITLE}
\date{\today}
\author{AUTHOR\\STUDENT NUMBER}
\maketitle
%Begin Document Text
\pagestyle{headings}
\section{Introduction}
\cite{Shapiro2015} %Example citation from my database
\input{../Text} %this comes from the directory ABOVE that of the current file.
%\input{../00 Master Bibliography} %This was inputted using the user interface (Texmaker). I have it here just to demonstrate that it is truly in the directory.
%References
\newpage
\bibliographystyle{plainnat}
\bibliography{../00 Master Bibliography}
%\bibliography{00_Master_Bibliography} %If the database is in the directory, everything works fine.
\end{document}
The document compiles (I run PdfLaTeX, BibTeX, then PdfLaTeX twice, sequentially), and properly includes the Text document (which just contains the word "Text"), but I get the following warnings:
Package natbib Warning: Citation `Shapiro2015' on page 1 undefined on input line 40.
Package natbib Warning: Empty `thebibliography' environment on input line 8.
Package natbib Warning: There were undefined citations.
Note: I have removed some lines of comments for brevity, so if you copy this into your editor the line numbers will be different.
These are to be expected if the database wasn't found, but I have no idea why it wouldn't be found. Does it have anything to do with natbib? Is it a feature of the natbib package that it cannot reference a database from any directory other than that of the current file? This seems unlikely.
Any help would be greatly appreciated!
Well, it turns out there was a simple solution. I am not sure whether the problem was the '00' I had before the bibliography name (I added '00' to make sure it sat at the top of the list of files in its folder), or the space between it and the word 'Bibliography', but changing the name of the file to simply 'Bibliography' worked.
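With that rename, the line in question presumably becomes just:
\bibliography{../Bibliography}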

Solr segments.gen and segments_N file restore

Unfortunately I ran a full data import with the "clean index" option checked. I was able to copy the whole index to a backup directory before the files were deleted (I killed Solr), but the segments.gen and segments_N files had already been updated, so whenever I copy the index back to its original directory, all index files are deleted on startup of Solr.
I think they are deleted because the segments files do not contain my index files' information - the segments files point to the "after clean" index files.
I tried to somehow reconstruct the segments files, but had no luck, and I also did not find a way to do it by changing the Solr code.
Is there any way to do this?
By the sound of it, I would guess it is unlikely that the segments_N and segments.gen files were the only things lost, but you can try using CheckIndex.
You can run it from a command line something like:
java -ea:org.apache.lucene... org.apache.lucene.index.CheckIndex path/to/index -fix
Or you can invoke methods of it in your own implementation, something like:
import java.io.File;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.*;

Directory directory = FSDirectory.open(new File("path/to/index"));
CheckIndex check = new CheckIndex(directory);
CheckIndex.Status status = check.checkIndex();
check.fixIndex(status);

Clearcase: how to copy/fork a file?

In ClearCase, I want to copy (fork, split) a file while preserving its history - something like svn cp old.txt new.txt. How do I do it?
It isn't possible to fork a file in ClearCase.
If you refactor your code and split a file in two, one of the halves will appear as a new file and you will lose the information about who wrote it. The annotate command will say the author of those lines is whoever split the file.
UCM or not, you cannot easily duplicate the full history of a file.
The best way to isolate a history is still to create a branch, in order to make new versions of that file without impacting the same file in the original branch.
Thinking 'svn cp' should be available in ClearCase might come from the fact that, in SVN, branches are directories, and a tool like cc2svn will actually replicate ClearCase branches using 'svn cp'.
But since, with ClearCase, branches are first-class citizens, it is best to reason in terms of branches rather than in terms of copy/fork.
From the main page of cc2svn:
There is a difference in creating the branches in ClearCase and SVN:
SVN copies all files from parent branch to the target like: svn cp branches/main branches/dev_branch
ClearCase creates the actual branch for file upon checkout operation only.
Pretty simply done:
1. Check out the parent folder.
2. Move the element you wish to duplicate to the appropriate location (not within the checked-out parent folder).
3. Undo the checkout of the parent folder.
All the files are returned to the original folder with their history, and the duplicated ones remain in the new location with their history too. Now each file can be checked out and changed individually.
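A rough sketch of how those steps might look with cleartool (element and directory names are made up, and both directories have to be checked out for the move; verify the exact flags against your site's setup):

cleartool checkout -nc srcdir destdir      # check out both directory elements
cleartool mv -nc srcdir/old.txt destdir/new.txt
cleartool checkin -nc destdir              # the element is now catalogued in destdir
cleartool uncheckout -keep srcdir          # restores srcdir's original entry as well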
