I'm new here, so hello everyone!
I wrote a few things in the Processing language, and now I need to switch to Processing.js. I need to write an app that first scans the sketch folder to build a list of the provided files. What was straightforward in Processing is not in Processing.js.
I'm currently searching the web, but I've only found solutions for classic Processing. I know that JavaScript is restricted and in general can't access files on the user's side, but is there any way to list the sketch's own files?
The only way that comes to my mind is to list them on the server side via PHP and generate the .pde file dynamically depending on the sketch folder's contents. But the catch is that I can't use any other language.
Thanks in advance for help!
Processing.js running on a website can only get information that URLs can provide it, and since there are no directory listings on the web, it can't grab directory-listing content for a URL for you to work with. However, depending on what you really want to do, there might be a way to make it work without resorting to PHP.
Assuming you have your Processing.js page running at www.example.org/index.html, and you want to list the content of www.example.org/sketch/, one option is to simply have a file www.example.org/sketch/list.txt containing all the filenames the sketch can access, and grab that with a
String[] fileNames = loadStrings("./sketch/list.txt");
instruction.
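For instance, a minimal sketch built on that idea (list.txt is a manifest you'd maintain by hand; one filename per line is assumed here):

String[] fileNames;

void setup() {
  // list.txt is an assumed hand-maintained manifest, one filename per line
  fileNames = loadStrings("./sketch/list.txt");
  println("The sketch folder provides " + fileNames.length + " files:");
  for (int i = 0; i < fileNames.length; i++) {
    println(fileNames[i]);
  }
}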
If you can give an example of what you mean by "I need to write an app that first scans the sketch folder to prepare a list of provided files", a more specific solution is probably possible (i.e., what are the files, what does the user need them for, etc.).
Changing Mac devices seems to confuse RStudio. I want the working directory to always be a folder on my external hard drive. Any tips?
I think you need to read a bit about .Rproj files and how to use them.
R projects are a way to define your working directory and to save your workspace separately for each project you open. They let you keep different projects apart without their files getting mixed together. Another advantage is that if all your data and scripts live in a directory linked to an R project, you can move it around and share it easily.
Here is some information on how to use them.
Another approach would be to modify your .Rprofile so that it runs setwd('where/you/work') at the start of every R session. There is also info available on how to customize your .Rprofile. Note that this option has drawbacks, because your code may no longer be reproducible when you give it to someone else.
I'm trying to fuzz some tools, but I need a huge number of .zip or .jpg files for that. I've tried crawlers like WebRipper, but it's not very effective (or I'm doing it wrong). Is there a better way to get lots of different files?
OK, on the off chance that someone else needs something like this:
In the end I used WebRipper, and instead of generating links to Google/Bing results with the "filetype" parameter, I just set some upload/freeware pages as the targeted rip job with the maximum link depth.
WebRipper might crash sometimes, and it takes quite a while, but it works well enough.
A better solution would probably be to use the Google API (e.g., the C# SearchAPI). Then extract the clean links from the results and download them asynchronously. Using the direct result links most likely won't work, because Google will block them after some number of files ("unusual data transfer").
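For the download step itself, a rough Node.js sketch might look like this. The URL list and the 'corpus' folder name are made up; the links would come from whatever your search-API client returns.

const https = require('https');
const fs = require('fs');
const path = require('path');

// Hypothetical list of direct file links extracted from search results.
const urls = [
  'https://example.com/samples/a.zip',
  'https://example.com/samples/b.zip',
];

fs.mkdirSync('corpus', { recursive: true });

for (const url of urls) {
  const dest = path.join('corpus', path.basename(url));
  https.get(url, (res) => {
    if (res.statusCode !== 200) { res.resume(); return; } // skip failed fetches
    res.pipe(fs.createWriteStream(dest));
  }).on('error', (err) => console.error(url, err.message));
}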
I would like to know your opinion about how you would organize the files/directories in a big web application using MVC (Backbone, for example).
I would do the following ( * ). Please tell me your opinion.
( * )
js
js/models/myModel.js
js/collections/myCollection.js
js/views/myView.js
spec/models/myModel.spec.js
spec/collections/myCollection.spec.js
spec/views/myView.spec.js
This is how I've traditionally organized my files. However, I've found that with larger applications it becomes a real pain to keep everything organized, uniquely named, etc. A 'new' way that I've been going about it is organizing my files by feature rather than by type. So, for example:
js/feature1/someView.js
js/feature1/someController.js
js/feature1/someTemplate.html
js/feature1/someModel.js
But, oftentimes there are global "things" that you need, like the "user" or a collection of locations that the user has built. So:
js/application/model/user.js
js/application/collection/location.js
This pattern was suggested to me because you can then work on feature sets, and package and deploy them using RequireJS with relatively little effort. It also reduces the chance of dependencies arising between feature sets, so if you want to remove a feature or replace it with brand-new code, you can just swap out a folder of 'stuff' rather than hunting for every file. Also, in IDEs it simply makes the files you're working on easier to find. A minimal RequireJS configuration for that layout is sketched below.
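As a rough sketch (the folder and module names are just the ones from the layout above; the 'main' entries, and that the modules are AMD-style, are assumptions):

require.config({
  // each feature folder is a self-contained package, so removing a feature
  // means deleting its folder and its entry here
  packages: [
    { name: 'feature1', location: 'js/feature1', main: 'someController' },
    { name: 'application', location: 'js/application', main: 'model/user' }
  ]
});

// loading a feature pulls in its main module, which requires the rest
require(['feature1'], function (feature1) {
  // feature1 is whatever js/feature1/someController.js defines
});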
My two cents.
Edit: What about the spec files?
A few thoughts - you'll just have to pick the one that seems most natural to you, I think.
You could follow the same 'feature folder' pattern with the spec files. The upside is that all of the specs are in one place. The downside is that, much like what you're currently doing, you now have two places for one feature's files.
You could put the specs in a 'spec' folder inside the feature folder. The upside is that you now have actual packages that can be wrapped up in a single zip file with no chance of clobbering other work. It's also easier to find directly related files when writing tests: they're all in the same parent folder. The downside is that your production code and test code now live in the same folder, publishing it (possibly) to the world. Granted, you'll probably end up compiling the production JavaScript down to one file at some point, so I'm not sure that's much of an issue.
My suggestion: if this is a large application and you figure a few hands will be touching the files, leave something like a 'package.json/yml/xml' file in each folder. In it, list the production, spec, and any data files you need for testing (you can most likely write a quick shell script to do this for you). Then write a quick script that looks through your source folder for 'package.whateverYouChose' files, gathers the test files, and builds your unit-testing page from them. So, let's say you add another package: run 'updateSpecRunner', or whatever you name the script, and it will generate another SpecRunner.html file (or whatever you named the file you run the specs from). Then you can test it manually in a browser, or automate it using PhantomJS/Rhino.
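A rough Node.js sketch of such a script (the 'specs' key in the manifest, the 'js' root folder, and the manifest filename are all assumptions):

const fs = require('fs');
const path = require('path');

// recursively collect every package.json-style manifest under a root folder
function findManifests(dir, found) {
  found = found || [];
  fs.readdirSync(dir, { withFileTypes: true }).forEach(function (entry) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) findManifests(full, found);
    else if (entry.name === 'package.json') found.push(full);
  });
  return found;
}

// gather the spec files each manifest lists under a hypothetical "specs" key
const specs = [];
findManifests('js').forEach(function (file) {
  const manifest = JSON.parse(fs.readFileSync(file, 'utf8'));
  (manifest.specs || []).forEach(function (s) {
    specs.push(path.join(path.dirname(file), s));
  });
});

// write a SpecRunner.html that pulls in every collected spec
const tags = specs.map(function (s) {
  return '  <script src="' + s + '"></script>';
}).join('\n');

fs.writeFileSync('SpecRunner.html',
  '<!DOCTYPE html>\n<html>\n<head>\n' +
  '  <!-- load Jasmine and your application code here -->\n' +
  tags + '\n</head>\n<body></body>\n</html>\n');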
Does that make sense?
You can find a good example of how to organize your application at this link:
Backbone Jasmine examples
It looks more or less like your implementation.
I'm a new user on the box.net site, and I've uploaded a LOT of .zip files that I want to use in my project.
The problem is that, normally, a share link is something like box.net/1.zip, so I could predict that the 100th file would be box.net/100.zip. But this is not the case on box.net.
I obviously can't copy every file's link manually, since what I uploaded and need is ~1000 small .zip files, and copying each file's link would take ages.
So is there a way to fix this?
We recently released a new feature where you can give your share a custom name. See the blog entry for more details on how to use it.
Right now, we have not exposed an API to set these custom links, but that will be coming soon.
I have an idea for a site that involves uploading files. What I'd like, and I'm wondering if it's possible, is for the site to automatically scan its database for similar files when a user clicks "Browse" and selects a file, before the file is uploaded. Kind of similar to the automatic "Related Questions" when you ask a question on this site.
Sure, that's possible. But you'll have to come up with your own definition of "similar", as well as an algorithm for finding it.
File Type differences
Different file types should be compared differently. For example, a text file is well suited to a diff for finding similar files, but comparing images or videos for similarity is considerably more difficult.
Difficulty of comparisons
Also, comparing against a large number of files is very expensive, since it's typically done pair-wise. Some indexing methods could improve the efficiency of the search, but I don't see an easy way to do this quickly.
Crowd Source Alternative
Another alternative would be to have the users of the site point out the similarities; that way you simply display a list of the most popular files that were voted similar. Of course, this doesn't help when uploading a new file, but it can give you insight into what users find similar.
What many sites do to compare the similarity of content is to allow users to tag items. If one item shares many of the same tags with another, they're likely similar. This is probably the easiest approach.
This also has the benefit that any content type can be compared to any other content type. So text files that have the same tags as a video can be presented as similar.
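As a sketch, tag overlap can be scored with something as simple as a Jaccard index (the tag names here are invented):

function tagSimilarity(tagsA, tagsB) {
  const a = new Set(tagsA);
  const b = new Set(tagsB);
  let shared = 0;
  a.forEach(function (t) { if (b.has(t)) shared++; });
  const union = new Set(tagsA.concat(tagsB)).size;
  return union === 0 ? 0 : shared / union; // 0 = nothing in common, 1 = identical tags
}

// e.g. a text file and a video sharing two of four distinct tags score 0.5
tagSimilarity(['tutorial', 'mvc', 'javascript'], ['tutorial', 'mvc', 'video']);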
It's possible to get the file name without uploading the file, so you can do the search based on the file name. The content would only be available after the upload.
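For example, in the browser (the /search-similar endpoint is hypothetical; your server would implement the actual lookup):

document.querySelector('input[type="file"]').addEventListener('change', function (e) {
  const name = e.target.files[0].name; // available before any upload happens
  // hypothetical endpoint that searches the database for similar file names
  fetch('/search-similar?name=' + encodeURIComponent(name));
});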