Changing Mac devices seems to confuse RStudio. I want the working directory to always be a folder on my external hard drive. Any tips?
I think you need to read a bit about .Rproj files and how to use them.
R projects are a way to define your working directory and save your workspace for whichever project you open. They let you keep different projects separate without their files getting mixed together. Another advantage is that if all your data and scripts are inside a directory linked to an .Rproj file, you can move the project around and share it easily.
Here is some information on how to use them.
Another approach would be to modify your .Rprofile so that it calls setwd('where/you/work') at the start of every R session. There is some info on how to customize your .Rprofile. Note that this option has drawbacks: your code may become non-reproducible when you hand it to someone else.
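A minimal sketch of such a .Rprofile, assuming the external drive mounts at a path like /Volumes/MyDrive (adjust for your machine):

# ~/.Rprofile -- runs at the start of every R session.
.First <- function() {
  drive <- "/Volumes/MyDrive/analysis"  # hypothetical mount point; adjust to yours
  if (dir.exists(drive)) {
    setwd(drive)
  } else {
    warning("External drive not mounted; working directory left unchanged.")
  }
}

Guarding with dir.exists() means a session still starts cleanly when the drive isn't plugged in.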
I'm trying to fuzz some tools, but I need a huge number of .zip or .jpg files for that. I've tried crawlers like WebRipper, but it's not very effective (or I'm doing it wrong). Is there a better way to get lots of different files?
OK, on the off chance that someone else needs something like this:
In the end I used WebRipper; instead of generating links to Google/Bing results with the "filetype" parameter, I just set some upload/freeware pages as the targeted rip job with maximum link depth.
WebRipper might crash sometimes and it takes quite a while, but it works well enough.
A better solution would probably be to use the Google search API (e.g. from C#), extract the clean links from the results, and download them asynchronously. Using the direct result links most likely won't work, because Google will block you after a few files ("unusual data transfer").
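For the download half, a rough C# sketch, assuming you have already extracted a list of direct file URLs by some other means (class and method names here are made up for illustration):

using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class BulkDownloader
{
    // Download every URL into targetFolder concurrently.
    static async Task DownloadAllAsync(string[] urls, string targetFolder)
    {
        using (var client = new HttpClient())
        {
            var tasks = urls.Select(async url =>
            {
                string name = Path.GetFileName(new Uri(url).LocalPath);
                byte[] bytes = await client.GetByteArrayAsync(url);
                File.WriteAllBytes(Path.Combine(targetFolder, name), bytes);
            });
            await Task.WhenAll(tasks);
        }
    }
}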
Before I begin, I would like to express my appreciation for all of the insight I've gained on Stack Overflow and everyone who contributes. I have a general question about managing large numbers of files, and I'm trying to determine my options, if any. Here goes.
Currently, I have a large number of files and I'm on Windows 7. What I've been doing is categorizing the files by copying them into folders based on what needs to be processed together. So, I have one set that contains the files by date (for long-term storage) and another that contains the copies by category (for processing and calculations). Of course, this doubles my data each time. Now I'm having to create more than one set of categories; three copies, to be exact. This is quadrupling my data.
For the processing side of things, the data ends up in Excel. Originally, all the data was brought into Excel, and all organization and filtering was performed there. This was time-consuming and not easily maintainable over the long term. Later the workload was shifted to the file system itself, which lightened the work in Excel.
The long and short of it is that this is an extremely inefficient use of disk space. What would be a better way of handling this?
Things that have come to mind:
Overlapping Folders
Is there a way to create a folder that holds only references to files, rather than copies of them? That way, two folders could reference the same file.
To my understanding, a folder is essentially a file listing the on-disk locations of the files inside it, but on Windows a file can normally only be contained in one folder.
Microsoft SQL Server
Not sure what could be done here.
Symbolic Links
I'm not an administrator, so I cannot execute the mklink command.
Also, I'm uncertain about any performance issues with this.
A Junction
Apparently junctions are only allowed for folders on Windows, not for individual files.
Search folders (*.search-ms)
Maybe I'm missing something, but to my knowledge there is no way to specify individual files to be listed.
Hashing the files
Computing a hash for every file would allow each file to be stored only once. But then I have no idea how I would manage the hashes.
XML
Maybe I could use XML files to attach metadata to the files and somehow search using it.
Database File System
I recently came across this concept in my search. Not sure how it would apply to Windows.
I have found a partial solution. First, I discovered that the laptop I'm using is actually logged in as Administrator. As an alternative to options 3 and 4, I have decided to use hard links, which are part of the NTFS file system. However, due to the large number of files, creating them one at a time with the following command from an elevated command prompt is unmanageable:
mklink /H <link\to\create> <existing\file>
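(Scripting it is possible too. As a rough sketch, a batch loop like the following would hard-link every file in one folder into another; both paths are made-up examples, and it must run from an elevated prompt.)

@echo off
rem Sketch: hard-link every file from the date-based store into a category folder.
rem Both paths are examples; adjust them to your own layout.
for %%F in ("C:\Store\2013-05\*") do (
  mklink /H "C:\Categories\ProjectA\%%~nxF" "%%F"
)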
Luckily, Hermann Schinagl has created the Link Shell Extension application for Windows Explorer, along with a very insightful write-up on how junctions, symbolic links, and hard links work. The only reason this is currently a partial solution is a separate problem with Windows Explorer, which I intend to post as a separate question. Thank you, Hermann.
I'm new here, so hello everyone!
I wrote a few things in the Processing language and now I need to switch to Processing.js. I need to write an app that first scans the sketch folder to build a list of the files it provides. What was straightforward in Processing is not in Processing.js.
I'm currently searching the web, but I've only found solutions for classic Processing. I know that JavaScript is restricted and in general can't access files on the user's side, but is there any way to list the sketch's own files?
The only way that comes to mind is to list them server-side via PHP and generate the .pde file dynamically based on the sketch folder's contents. But the catch is that I don't want to use any other language.
Thanks in advance for your help!
Processing.js running on a website can only get at information that URLs can provide, and since there are no "dir listings" on the web, it can't fetch a directory listing for a URL for you to work with. However, depending on what you really want to do, there might be a way to make it work without resorting to PHP.
Assuming your Pjs page runs at www.example.org/index.html and you want to list the content of www.example.org/sketch/, one option is to keep a file www.example.org/sketch/list.txt containing all the filenames the sketch can access, and grab it with a
String[] fileNames = loadStrings("./sketch/list.txt")
instruction.
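From there the sketch can loop over the list. A minimal sketch, assuming list.txt holds one filename per line:

String[] fileNames = loadStrings("./sketch/list.txt");
for (int i = 0; i < fileNames.length; i++) {
  // load each entry however you need, e.g. with loadImage() or loadStrings()
  println(fileNames[i]);
}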
If you can give an example of what you mean by "I need to write an app that first scans the sketch folder to prepare a list of provided files", a more specific solution is probably possible (i.e., what are the files, what does the user need them for, etc.).
I would like to know your opinion on how you would organize the files/directories in a big web application using MVC (Backbone, for example).
I would do the following ( * ). Please tell me your opinion.
( * )
js
js/models/myModel.js
js/collections/myCollection.js
js/views/myView.js
spec/model/myModel.spec.js
spec/collections/myCollection.spec.js
spec/views/myView.spec.js
This is how I've traditionally organized my files. However, I've found that with larger applications it really becomes a pain to keep everything organized, named uniquely, etc. A 'new' way that I've been going about it is organizing my files by feature rather than type. So, for example:
js/feature1/someView.js
js/feature1/someController.js
js/feature1/someTemplate.html
js/feature1/someModel.js
But, oftentimes there are global "things" that you need, like the "user" or a collection of locations that the user has built. So:
js/application/model/user.js
js/application/collection/location.js
This pattern was suggested to me because you can then work on feature sets, and package and deploy them using RequireJS with relatively little effort. It also reduces the chance of dependencies forming between feature sets, so if you want to remove a feature or update it with brand-new code, you can just replace a folder of 'stuff' rather than hunting for every file. Also, in IDEs, it just makes the files you're working on easier to find.
My two cents.
Edit: What about the spec files?
A few thoughts - you'll just have to pick the one that seems most natural to you I think.
You could follow the same 'feature folder' pattern with the spec files. The upside is that all of the specs are in one place. The downside is that now, much like what you're currently doing, you have two places for one feature's files.
You could put the specs in a 'spec' folder inside the feature folder. The upside is that you now have actual packages that can be wrapped up in a single zip file with no chance of clobbering other work. It's also easier to find directly related files when writing tests - they're all in the same parent folder. The downside is that your production code and test code now live in the same folder, (possibly) publishing your tests to the world. Granted, you'll probably end up compiling the production JavaScript down to one file at some point, so I'm not sure that's much of an issue.
My suggestion - if this is a large application and you figure a few hands will be touching the files, leave something like a 'package.json/yml/xml' file in each feature folder. In there, list the production, spec, and any data files you need for testing (you can most likely write a quick shell script to do this for you). Then write a quick script that looks through your source folder for 'package.whateverYouChose' files, gathers the test files, and builds your unit-testing page from them. So, let's say you add another package: run 'updateSpecRunner', or whatever you name the script, and it generates another SpecRunner.html file (or whatever you named the file you run the specs in). Then you can test it manually in a browser, or automate it using PhantomJS/Rhino.
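To make that concrete, here is a rough sketch of such a script (not a drop-in tool): it assumes each feature folder holds a package.json whose files are listed under hypothetical "production" and "specs" keys, and it writes a bare-bones SpecRunner.html (the Jasmine library includes are omitted for brevity).

// generateSpecRunner.js -- sketch only; walks js/ for package.json manifests
// and emits a SpecRunner.html that loads every listed source and spec file.
var fs = require('fs');
var path = require('path');

function findPackages(dir, found) {
  fs.readdirSync(dir).forEach(function (name) {
    var full = path.join(dir, name);
    if (fs.statSync(full).isDirectory()) {
      findPackages(full, found);
    } else if (name === 'package.json') {
      found.push(full);
    }
  });
  return found;
}

var tags = [];
findPackages('js', []).forEach(function (pkgFile) {
  var pkg = JSON.parse(fs.readFileSync(pkgFile, 'utf8'));
  var base = path.dirname(pkgFile);
  // 'production' and 'specs' are assumed manifest keys -- name them whatever you like.
  (pkg.production || []).concat(pkg.specs || []).forEach(function (file) {
    tags.push('<script src="' + path.join(base, file) + '"></script>');
  });
});

// Jasmine's own script/css includes would also go in the head.
fs.writeFileSync('SpecRunner.html',
  '<!DOCTYPE html>\n<html>\n<head>\n<title>Specs</title>\n' +
  tags.join('\n') + '\n</head>\n<body></body>\n</html>\n');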
Does that make sense?
You can find a good example of how to organize your application at this link:
Backbone Jasmine examples
It looks more or less like your implementation.
CakePHP stores all its logs under the /app/tmp/logs folder, and if you have multiple servers, you have to check each server's logs folder to see what is happening on each one.
Is there any solution I can use with CakePHP to centralize logging in one place, with the log files being saved and rotated on a daily basis?
Cake allows you to set a parameter in the Controller::log() function.
http://book.cakephp.org/view/159/Using-the-log-function
Basically, when you have an error:
$this->log( 'some message describing the error', 'allserverslog' );
// the second param can also be LOG_ERROR or LOG_DEBUG, two predefined constants that identify the default logging files
Some quick research shows that a clean method would be to redefine the TMP constant (by default, define('TMP', APP.'tmp'.DS)) in /app/webroot/index.php to point the whole temp directory someplace else. This is not a good solution if the folder is supposed to be shared, though, since different apps may step on each other's feet with their temp files.
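For illustration, the redefinition in /app/webroot/index.php could look something like this (the shared path is a made-up example):

// Near the top of /app/webroot/index.php, before Cake's bootstrap is included.
// '/mnt/shared/cake_tmp' is a hypothetical mount shared by all servers.
if (!defined('TMP')) {
    define('TMP', '/mnt/shared/cake_tmp' . DIRECTORY_SEPARATOR);
}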
The only apparent way to point only the log directory someplace else seems to be to edit /cake/config/paths.php.
If your goal is only to make it easy to skim through log files of different apps quickly, you could simply put a bunch of symlinks to those logs into one directory.
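For example (the app paths and the collecting directory are hypothetical):

ln -s /var/www/app1/app/tmp/logs /srv/all-logs/app1
ln -s /var/www/app2/app/tmp/logs /srv/all-logs/app2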
Or, the other way around, you can make each /app/tmp/logs folder a symlink to some shared folder. Not sure I'd recommend that though; having different apps write to the same log may get confusing, since you may not always be sure which app a message came from.