Can visNetwork save html dependencies (javascript libraries etc.) to a specific folder? - htmlwidgets

I have a script that saves many visNetwork html outputs in the same folder. Each html file has its own set of associated files (e.g. visNetwork.js, htmlwidgets.js) in a separate folder. As far as I can tell, the contents of the associated folders are the same in every instance. Given that each associated folder is about 1MB, it would make sense to save this information only once and have all of the html outputs use the same folder.
The saveWidget function, which I think the visSave function is related to, has a libdir parameter that specifies where these dependencies will be saved. But the libdir parameter doesn't seem to be supported in visNetwork. Is there some other way of specifying where the dependencies are saved?
(Note - I do NOT want to embed the dependencies into the html file, as per the selfcontained option. I just want to save them in a specific place to avoid replication.)

Ugh, all I had to do was call saveWidget directly and it works just the same as visSave, while allowing the libdir parameter.
Sorry, bit of a noob with this stuff!
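For anyone else who lands here, a minimal sketch of what worked for me (the toy network and file names are just for illustration):

library(visNetwork)
library(htmlwidgets)

# A tiny made-up network; substitute your own nodes and edges.
nodes <- data.frame(id = 1:3)
edges <- data.frame(from = c(1, 2), to = c(2, 3))
net <- visNetwork(nodes, edges)

# saveWidget() accepts libdir, so every exported widget can share one
# dependency folder instead of writing its own copy next to each file.
saveWidget(net, file = "net1.html", selfcontained = FALSE, libdir = "lib")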

Related

How to rename a file while using "move" in URL in apache camel

I have a URL like
url = "file:D:/inputFolder?move=D:/outputFolder". We are building this URL dynamically.
I want to rename the file while moving, so I made it something like this:
url = "file:D:/inputFolder?move=D:/outputFolder&fileName=abc.txt". But move and fileName do not seem to work together; the file is not renamed.
Is there any alternative way to do this? Please remember I want to do it with "move" only.
I also cannot use .setHeader(..).
Thanks,
Hi,
as far as I understand you, you're trying to move the file with one single URI.
That is not really how Camel works.
The idea of Camel is to have a "consumer" and a "producer", where the consumer loads data (e.g. your file) and the producer puts the data somewhere (e.g. saves the file into a folder).
That being said, here is what worked for me with a java route:
from("file:/home/chris/temp/camel/in")
.to("file:/home/chris/temp/camel/out/?fileName=test.txt");
The from part configures the folder where Camel looks for new files. A few notes on that (see the sketch after these notes):
The file component checks the folder every 0.5 seconds for new files. This can be changed with the delay parameter.
The noop option controls whether the original file is moved away or left in place (effectively copied). By default it is false, which means the file is moved.
In the to part you configure where the file is supposed to be moved. Here you can use the fileName parameter to rename the file.
Be careful with this, though, because setting an option directly in the URI makes it "static".
What I mean by that is that the only way of changing the parameter is to completely reconfigure the route or restart it, neither of which is something you would normally want to do.
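For example, a rough sketch of the same route with those options spelled out in the URI (same made-up folders as above, values picked purely for illustration):

from("file:/home/chris/temp/camel/in?delay=5000&noop=true")   // poll every 5 seconds; noop=true leaves the original file in place
    .to("file:/home/chris/temp/camel/out/?fileName=test.txt");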
Note 1:
Moving all the files that arrive in one folder into the same target file overwrites the previous file by default.
You could, for example, use the fileExists parameter to append each file's content instead: fileExists=Append (see the Camel file documentation for details).
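For example, a sketch of the to endpoint with that option (same made-up folder as above):

.to("file:/home/chris/temp/camel/out/?fileName=test.txt&fileExists=Append")   // append instead of overwrite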
Note 2:
There is an option in the file component to not "move" the file, but to copy, rename and delete it instead, which is sometimes necessary when you want to move it onto a different drive and a simple move does not work.
Also see the documentation for the Camel file component for details on that.
Note 3:
You can have multiple to() statements in the same route to have the file sent to multiple locations. For example:
from("file:/home/chris/temp/camel/in")
.to("file:/home/chris/temp/camel/out/?fileName=test.txt")
.to("smtp:....");
Hope I could help you and answer your question.
Greets
Chris
Two possible ways to achieve your goal.
Use both "consumer" and "producer"
Using this approach, you are free to control where and how the destination is set, and you have great freedom to control the filename with a processor or bean (sketched after the route below).
from("file:D:/inputFolder")
.to("file:D:/outputFolder?fileName=abc.txt")
Use "consumer" only
Using this approach, you control the move entirely on the consumer side. It can be used when the file is going to move within the same drive. The drawback is that the rename pattern is limited (refer to the Camel File Language).
from("file:D:/inputFolder?move=${file:parent}/../outputFolder/abc.txt")

Why are Shake dependencies explicitly `needed`?

I find that the first example of Shake usage demonstrates a pattern that seems error-prone:
contents <- readFileLines $ out -<.> "txt"
need contents
cmd "tar -cf" [out] contents
Why do we need need contents when readFileLines reads them and cmd references them? Is this so we can avoid requiring ApplicativeDo?
I think part of the confusion may be the types/semantics of contents. The file out -<.> "txt" contains a list of filenames, so contents is a list of filenames. When we need contents we are requiring the files themselves be created and depended upon, using the filenames to specify which files. When we pass contents on to cmd we are passing the filenames which tar will use to query the files.
So the key point is that readFileLines doesn't read the files in question, it only reads the filenames out of another file. We have to use need to make sure those files have been built and are up to date before we actually use them in cmd. Another way of looking at the three lines is:
Which files do we want to operate on?
Make sure those files are ready.
Use those files.
Does that make sense? There's no relationship with ApplicativeDo; its presence wouldn't help us at all.
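For reference, a sketch of the rule those three lines come from, roughly following the tar example in the Shake manual:

import Development.Shake
import Development.Shake.FilePath

main :: IO ()
main = shakeArgs shakeOptions $ do
    want ["result.tar"]
    -- result.txt lists the files to archive, one filename per line.
    "result.tar" %> \out -> do
        contents <- readFileLines $ out -<.> "txt"   -- read the filenames only
        need contents                                -- ensure those files exist / are built and are tracked
        cmd "tar -cf" [out] contents                 -- only now does tar read the files themselves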

Conditionally ignore path from Subversion?

Is it possible to globally ignore a folder IF it is a child of a folder having a specific name? For example...
Exclude:
client/vendor
... or ...
app/vendor
But never exclude a "vendor" folder if it appears anywhere else?
I'm working on an AngularJS project and the "vendor" folder is common for client-side files. However, theoretically, it is possible that "vendor" may have another meaning in future projects and, if it does, it would generally be in another path.
The docs on this are a bit misleading (to me, anyway). They say to use the svn:ignore property, but no examples anywhere show how to specify the conditional parent folder. They all appear to manually ignore a specific folder every time, via the command line.
Per the TortoiseSVN docs:
No Paths in Global Ignore List (Link here): You should not include path information in your pattern. The pattern matching is intended to be used against plain file names and folder names. If you want to ignore all CVS folders, just add CVS to the ignore list. There is no need to specify CVS */CVS as you did in earlier versions. If you want to ignore all tmp folders when they exist within a prog folder but not within a doc folder, you should use the svn:ignore property instead.
There is no reliable way to achieve this using global ignore patterns.
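A sketch of the svn:ignore approach, assuming client/ and app/ are already versioned directories in your working copy:

# Ignore an unversioned "vendor" folder only under these particular parents.
svn propset svn:ignore vendor client
svn propset svn:ignore vendor app
# Note: propset replaces any existing ignore list on those directories;
# use "svn propedit svn:ignore client" to edit an existing list instead.
svn commit -m "Ignore vendor under client/ and app/" client app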

How to achieve symbol referencing across directories in vim?

Can ctags also tag symbols from a directory higher up in the hierarchy, or is it limited to creating tags for the current directory and its sub-directories only?
Basically I'm looking for Visual Studio-like symbol cross-referencing; it is very helpful in understanding alien source code flow.
If not Vim, then which other editor should I use?
thanks
Ctags only recurses into subdirectories. But all you have to do is run ctags -R . in your project's home directory, and it will create a tags file for your whole project.
You aren't limited to specifying one tags file in Vim. This is an alternative to the other answers; you can just do something like:
set tags=tags,~/wintags,c:/path/to/moretags/etc
So you don't need to take the time regenerating a monolithic tags file when you just want to update your local tags.
Regarding the OP's comment in another answer,
yes thats correct but when i open a file say proj/dir1/def.c and press ctrl+] on a function name which is defined say in proj/dir2/abc.c, I get tag not found :(
You could also create one tags file for all of your projects at the 'proj' root:
set tags=tags;c:/path/to/proj
This will use the first file named tags that it finds as it walks up the directory hierarchy from where you are.
You can combine these two techniques to have a project-local tags file and then a "global" tags file that isn't updated as often.
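For example, a sketch of that combination (the global file's path is made up):

" Project-local tags first: a tags file next to the current file, then one
" found by walking up to the project root, then a rarely-regenerated global file.
set tags=./tags,tags;/path/to/proj,~/tags/global_tags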
Whilst it has a similar user interface for asking it to do its thing (you need to explicitly tell it to go down into directories), I find that cscope is a very nice tool which does everything that ctags does and a bit more.
ctags (well, exctags at least) can create tags for as many directory trees as you want. Simply run
exctags -R dir1 dir2 ...
Then vim knows about all the symbols you need. For example, one of the directories could be /usr/include in addition to your own source directory.
Make sure to run vim path/to/file.c from the same directory you created the tags file in.
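A sketch of that workflow (directory names are made up):

# Index your own sources plus the system headers; the tags file is written
# to the current directory.
exctags -R src/ /usr/include
# Open files from this same directory so vim finds ./tags.
vim src/main.c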

How can git be configured to ignore files?

There are some files we want git to ignore and not track, and we are having trouble figuring out how to do that.
We have a third-party C library which we unpacked and checked into Git. But when you configure && make it, it produces many new files. How do we write .gitignore so that the source files are tracked but the newly generated stuff is not? (It's not as simple as forbidding *.o.)
Edit: There are at least 12 file types, so we would prefer NOT to enumerate which types we want and which we don't.
Use ! to include all the types of files you need, something like the following example:
*
!*.c
!*.h
Explicitly specifying which files should be tracked and ignoring all others might be a solution. * says ignore everything and subsequent lines specify files and directories which should not be ignored. Wildcards are allowed.
*
!filename
!*.extension
!directory/
!/file_in_root_directory
!/directory_in_root_directory
Remember that the order matters. Putting * at the end makes all previous lines ineffective.
Take a look at man gitignore(5) and search for !. It says
Patterns have the following format:
(...)
An optional prefix ! which negates the pattern; any matching file excluded by a previous pattern will become included again. If a negated pattern matches, this will override lower precedence patterns sources.
I'm not sure why you say "it's not like forbidding *.o", but I think you mean that there aren't any good patterns you can identify that apply to the generated files but not to the source files? If it's just a few things that appear (like individual built executables that often don't have any extension on Linux), you can name them explicitly in .gitignore, so they aren't a problem.
If there really are lots and lots of files that get generated by the build process that share extensions and other patterns with the source files, then just use patterns that do include your source files. You can even put * in .gitignore if it's really that bad. This will mean that no new files show up when you type git status, or get added when you use git add ., but it doesn't harm any files that are already added to the repository; git will still tell you about changes to them fine, and pick them up when you use git add .. It just puts a bit more burden on you to explicitly start tracking files that you do care about.
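A sketch of that extra step (the file name is made up):

# With "*" in .gitignore, brand-new files must be added explicitly;
# -f forces the add even though the ignore pattern matches.
git add -f src/new_module.c
git commit -m "Track new_module.c"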
I would make sure the repo is clean (no changes, no untracked files), run configure && make, and then put the newly untracked files into the ignore file. Something like git status --porcelain | fgrep '??' | cut -c4- will pull them out automatically, but it would be worth some eyeball time to make sure the result is correct...
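A sketch of that workflow (still worth the eyeball time mentioned above):

# From a clean working tree: build, then append every newly untracked
# path to .gitignore for review.
./configure && make
git status --porcelain | fgrep '??' | cut -c4- >> .gitignore
${EDITOR:-vi} .gitignore   # inspect the additions before committing them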
