I was just wondering if anyone has any good, and most importantly quick, ways of deleting everything in their blocks/media libraries without having to go through every individual item and move it to Trash.
This is something I've been having to do before importing fresh data into my site. The only way I have found of doing it efficiently is to put everything into one folder with subfolders within it, which then allows me to delete the folder and all of its contents. But that doesn't really help with my current project, where we already have a ton of assets that I don't really want to go through and move.
I'm surprised there isn't a way to simply clear your library, or even select all and then move to trash?
Thanks,
Giuseppe
There is no easy way to do it from the GUI, as far as I know, but you do have the DeleteChildren method on IContentRepository. So creating a scheduled job or a plugin to do it for you should be pretty straightforward.
IContentRepository _contentRepository; // Get this from wherever, e.g. injected into your class
var blockRoot = new ContentReference(123); // Get this from wherever, e.g. a property on your start page
_contentRepository.DeleteChildren(blockRoot, true, AccessLevel.NoAccess);
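A rough sketch of what such a scheduled job could look like (the class and display name are mine, and ScheduledJobBase with constructor injection assumes a reasonably recent EPiServer version, so adjust to yours):

[ScheduledPlugIn(DisplayName = "Purge block folder")]
public class PurgeBlockFolderJob : ScheduledJobBase
{
    private readonly IContentRepository _contentRepository;

    public PurgeBlockFolderJob(IContentRepository contentRepository)
    {
        _contentRepository = contentRepository;
    }

    public override string Execute()
    {
        // Hypothetical root reference - point this at your own block or media folder
        var blockRoot = new ContentReference(123);
        _contentRepository.DeleteChildren(blockRoot, true, AccessLevel.NoAccess);
        return "Deleted all children of " + blockRoot;
    }
}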
Also, if you do this on your media data, remember to run the "Remove abandoned BLOBs" scheduled job afterwards (unless it's already running automatically on an interval) to free up disk space.
I have been developing some things, and you know how it is during early prototyping: types and tables change quickly... it would be nice to clean up the old data and start again in certain meshes.
For now I was using the example HTTP server so I deleted data.json; but I forgot the localStorage in the browser also needs to be cleared.
One might suppose you could put(null)
I asked on gitter and got
https://github.com/amark/gun/wiki/delete
except for deletes, lol, our excuse is "It works like your OS, when you delete something it just gets tossed in the trash/recycle bin. That's all."
better safe than sorry though
if you are trying to "delete" stuff because you messed up while developing something, follow this three-step process: 1) localStorage.clear() in every browser tab you have up, 2) crash the server and rm data.json, 3) restart everything. You should then have a clean slate. Oftentimes while I'm developing something I put localStorage.clear() at the top of my code so I only have to worry about clearing the server.
Welcome to the gun community! Thanks for asking questions.
Yes, deleting data is most simply done with gun.put(null). Take:
var gun = Gun();
var users = gun.get('users');
users.put({alice: {name: 'alice'}, bob: {name: 'bob'}});
// now let's delete bob
users.path('bob').put(null);
If, however (as you mentioned in the question), you mean "delete data" as in wanting to clear out mistakes while developing your app, you'll want to do what you mentioned: localStorage.clear() in all browsers, crash all the servers, and rm data.json.
For other developers, it might be useful to know that gun uses a type of tombstone method. You can't actually delete nodes themselves; they just get de-referenced, kinda like how your OS just moves files into a trash/recycle bin. This tombstone method is very important in a distributed environment, so that the "delete" operation gets replicated to every peer.
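To make that concrete, here's a minimal sketch (using the same old-style path()/val() API as the example above) of what reading a tombstoned node back looks like:

var gun = Gun();
var users = gun.get('users');
users.put({bob: {name: 'bob'}});
users.path('bob').put(null); // tombstone: the reference to bob is nulled out
users.path('bob').val(function(bob){
  console.log(bob); // null - bob is de-referenced, not physically erased
});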
Thanks for answering your own question, though! As always, if you get lost or need help, hop on https://gitter.im/amark/gun .
I'm looking for a way to conditionally handle messages based on the aggregation of messages. I've looked into a lot of ways to do this, but it seems that Apache Camel doesn't support it. I'll explain the scenario and then the solutions I tried.
Scenario:
I'm trying to conditionally clean a directory. I poll the directory every x days and fetch all the files (file://...). I route this into an aggregation that aggregates the files into a single size (directorySize). I then check whether this size passes a certain threshold.
Here is where the problem lies. I now want to remove certain files if this condition passes, but I don't have access to the original messages anymore because they were aggregated into a new exchange.
Solutions:
I tried to fetch the files again to process them. The problem is that you can't make a consumer fetch on demand, as far as I know. I tried using pollEnrich, but that will only fetch a single file, not all files in the directory.
I tried to filter/stop the parent route. The problem here is that filter()/choice...stop()/end() will only stop the aggregated route with the directory size, not the parent route with the file messages. I can't conditionally process these.
I tried to move the aggregated condition to another route that I would call, but this causes the same problem as the first solution.
Things I consider doing:
Rewrite the aggregation strategy to aggregate not only the size but also the files themselves into a groupedExchange. This way I can split the aggregation again after the check. I don't really like this solution because it causes a lot of boilerplate, both in code and at runtime.
Move the file size calculation to a processor instead of the aggregator. This would defeat the purpose of using Camel in the first place: I would manually be fetching the files and adding up the sizes, and that for every single file.
Use a ControlBus to dynamically start the delete route on that directory. Once again, a lot of workaround to achieve something that I feel should be possible in a simple route.
I would like to set the calculated size on every parent message, but I have no clue how this could be achieved.
Another way to stop the parent route that I haven't thought of?
I'm a bit stunned that you can't elegantly filter messages based on the aggregation of those messages. Is there something I missed in Camel that would provide an elegant solution? Or is this a case of picking the least bad solution?
Simple Schema
Message(File)
Message(File) --> AggregatedMessage(directorySize) --> delete certain Files?
Message(File)
Camel is really awesome, but sometimes it sure is difficult to see exactly which design pattern to use ;)
Firstly, you need to keep a copy of the file objects, because you don't know whether to delete them or not until you reach your threshold - there are basically (at least) two ways to do this.
Alternative 1
The first way is to use a List in an exchange property. This property will hang around no matter what you do with the exchange body. If you have a look at the source code for GroupedExchangeAggregationStrategy, it does precisely this:
list = new ArrayList<Exchange>();                    // one list per aggregation
answer.setProperty(Exchange.GROUPED_EXCHANGE, list); // kept as an exchange property
// ...
list.add(newExchange);                               // every incoming exchange is added to the list
Or you could do the same thing manually on your own exchange property. In any case, it's completely fine to use the Grouped aggregation strategy as you have done.
Alternative 2
The second way to "keep" old messages is to send a copy to a stopped SEDA queue. So you would do to("seda:xyz"), and you define the consuming route with .noAutoStartup(). Then you can send messages to it and they will queue up on an internal queue, managed by Camel. When you want to process the messages, you simply start the route via the control bus and stop it again afterwards.
Generally, messing around with starting and stopping queues should be avoided unless absolutely necessary, but that's certainly another way to do it
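If you do go that way, a rough sketch (the endpoint URIs, the route id, and the DeleteFileProcessor are made up for illustration):

// in the main route, park a copy of each file message before aggregating
from("file://inbox?noop=true")
    .wireTap("seda:parked")              // wire tap sends a copy, the original continues
    .to("direct:aggregateAndCheck");     // hypothetical: aggregate sizes, check the threshold

// the consumer route stays stopped, so messages pile up on the seda queue
from("seda:parked").routeId("cleanup")
    .noAutoStartup()
    .process(new DeleteFileProcessor()); // hypothetical processor that deletes the file

// when the threshold check passes, kick the stopped route via the control bus:
//   .to("controlbus:route?routeId=cleanup&action=start")
// and "action=stop" again once the queue is drained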
Suggested solution
I suggest you do as you have done (i.e. alternative 1); a sketch of the full route follows the list:
Aggregate via GroupedExchangeAggregationStrategy to keep the individual files in a list
Compute the total file size (use a processor, or do it along the way with a custom aggregation strategy)
Use a filter(simple("${body} < 123"))
"Unwind" your aggregation via a splitter(simple("${property.CamelGroupedExchange}"))
Delete your files one by one
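A rough sketch of that route in the Java DSL, inside a RouteBuilder's configure() and with the obvious java.util/java.nio.file imports (the endpoint URI and the 1000000-byte threshold are made up; note that newer Camel versions put the grouped list in the body rather than the property):

from("file://inbox?noop=true")
    // 1. keep every file exchange in a list, one group per directory poll
    .aggregate(constant(true), new GroupedExchangeAggregationStrategy())
        .completionFromBatchConsumer()
    // 2. compute the total size; the group itself survives in the property
    .process(exchange -> {
        List<Exchange> grouped =
            exchange.getProperty(Exchange.GROUPED_EXCHANGE, List.class);
        long total = 0;
        for (Exchange e : grouped) {
            total += e.getIn().getHeader(Exchange.FILE_LENGTH, Long.class);
        }
        exchange.getIn().setBody(total);
    })
    // 3. only continue when the threshold is passed
    .filter(simple("${body} > 1000000"))
    // 4. unwind the group; each new exchange carries one original exchange as its body
    .split(simple("${property.CamelGroupedExchange}"))
    // 5. delete each original file
    .process(exchange -> {
        Exchange original = exchange.getIn().getBody(Exchange.class);
        String path = original.getIn().getHeader(Exchange.FILE_PATH, String.class);
        Files.deleteIfExists(Paths.get(path));
    });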
Please let me know if this doesn't make sense, or if I have misunderstood your problem in any way.
I want to be able to store multiple FlowDocuments within a single package, including images, etc. within each document. However, none of the methods I've seen for saving (and loading) Xaml FlowDocuments seem capable of this.
TextRange.Save with DataFormats.Xaml strips images and other embedded content
TextRange.Save with DataFormats.XamlPackage creates a whole new package, rather than allowing me to treat the document and included images as parts within the package I'd be storing it in
XamlWriter looks like it could be good for this, but I can't figure out how to find all the embedded objects for putting in their own parts (although I certainly know how to handle them once I've found them). On the other end, I haven't a clue how to make everything load properly later on.
It's pretty annoying that there's no one-stop way of serializing a FlowDocument and its images, etc. to a PackagePart. If anyone's figured out a good way of doing this, how'd you pull it off?
UPDATE 2011-07-03 00:22: Using XamlWriter and some extra code from this question, I've been able to build a happy little OPC-compliant package which can hold multiple FlowDocuments, including their images, as PackageParts. However, going the other way (from PackagePart to FlowDocument) is failing: no matter how I try to load the document, I get XamlParseExceptions telling me that
'Initialization of 'System.Windows.Media.Imaging.BitmapImage' threw an exception.'
So, the question now becomes, how do I manhandle XamlReader.Load and/or my part's stream in order to get the related images loaded properly?
Figured it out. The solution is to manually process the Xaml document before handing it over to XamlReader. Images (and other elements stored as their own PackageParts) need to have the BitmapImage.UriSource property set to include the package Uri (for example, "./Image1.png" in /Content/Document.xaml becomes "pack://file:,,,C:,Projects,Package.pak/Content/Image1.png").
Two caveats, however:
There's an issue with PackUriHelper.Create(Uri, Uri). Instead of using
PackUriHelper.Create(packUri, part.Uri)
you have to use
new Uri(packUri.ToString() + value)
where value is part.Uri with the initial / removed. If you don't do this, instead of getting a proper Uri like above, you get one with an additional comma after the package file name, which confuses and annoys XamlReader.
You need to use FileShare.Read when opening the package, as XamlReader will try and open it itself. By default, Package.Open locks out anyone else trying to open the package, and XamlReader.Load will throw a WebException if it can't get into the package itself.
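Putting both caveats together, the loading side ends up looking roughly like this (paths and part names are made up, the usual System.IO.Packaging/System.Windows.Markup usings are assumed, and the UriSource rewriting is shown as a naive string replace for brevity):

string packagePath = @"C:\Projects\Package.pak";
Uri packUri = PackUriHelper.Create(new Uri(packagePath));

// Caveat 2: FileShare.Read, so XamlReader can open the package itself
using (Package package = Package.Open(packagePath,
    FileMode.Open, FileAccess.Read, FileShare.Read))
{
    PackagePart docPart = package.GetPart(
        new Uri("/Content/Document.xaml", UriKind.Relative));

    // Caveat 1: build the image's pack Uri by string concatenation,
    // not PackUriHelper.Create(Uri, Uri)
    string value = "Content/Image1.png"; // the part's Uri with the initial / removed
    Uri imageUri = new Uri(packUri.ToString() + value);

    string xaml;
    using (var reader = new StreamReader(docPart.GetStream()))
        xaml = reader.ReadToEnd();

    // rewrite the relative UriSource to the absolute pack Uri
    xaml = xaml.Replace("./Image1.png", imageUri.ToString());

    var document = (FlowDocument)XamlReader.Load(
        new MemoryStream(Encoding.UTF8.GetBytes(xaml)));
}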
In the last few years I've encountered "ghost files" in NetBeans, but I didn't have proof of it, so I had to live with it, and when I tried to explain the situation it was hard to believe. Now I have proof, and it's a show stopper. Any fix for it?
It goes like this: I have a Java class that I've been using for many years, sort of a tool; I add a bit to it as I gain more experience. But once in a while, after I added a new method and used it in another class, NetBeans couldn't recognize it; it seemed to me NetBeans was still looking at an old copy of the class where the newly added method didn't exist. And yet if I copied this updated class to another project, the new method worked fine and NetBeans could find it. In NB 6.7 it just acted like the class was frozen in time, and any new additions to it wouldn't be recognized. Now, trying it in NB 6.9, I could catch the "ghost"!
It happened by accident. Yesterday, after I updated the class, I tried to use the new method in another class in the same project, and the red flag went up: it couldn't find the new method. So I moused over the new method call, right-clicked on it, "Navigate" => "Go to source", and bang, the ghost showed up! If I do this in NB 6.7, it just rings a bell as if to tell me it couldn't find it, but in NB 6.9 it goes to the "source", which is not my Java class source file [Get_Time.java]; it's another, generated file. So I moused over the opened "ghost" file name in the editor; the name was "C:\Users\USER\.netbeans\6.9\var\cache\index\s117\java\14\gensrc\Get_Time.java (read-only)". The content seemed like a skeleton of my source file Get_Time.java, but definitely different, and I am pretty sure it's this "ghost file" that's been causing problems.
During the course of development I occasionally changed the system time to test different functions in the class. Could this have caused the ghost file to mess things up? If I change the current time to 2016 and modify the source file, then NB might record the file as last changed in 2016, and if I then change the time back to 2011 and add a new function, it wouldn't accept it, because it might compare the dates of the different versions of the source file and stick with the "latest" timestamp?!
I wish NB never kept ghost files: "Always Use The Actual Source File" would avoid a lot of such problems. I did try to delete that ghost file, but the next time I compiled it was generated again. I don't want to delete too much content from "C:\Users\USER\.netbeans\6.9..."; it might mess up my NB settings. Anyhow, it's now a show stopper: I can't add more changes to the class, it's frozen in time. What's the fix?
Just some suggestions, as I got stung by this problem before.
Did you build a jar and add a dependency on this jar manually?
e.g.
1) Project A is packaged into A.jar, containing a class Time.
2) Project B depends on both A.jar and project A.
3) Time.java in project A is changed.
4) Project B will not see the change, as it will always read from the A.jar built before the change happened.
Try deleting NetBeans' cache (the ~/.netbeans/6.9/var/cache/index/ directory) whenever you go back to the future and forward to the past. NetBeans is probably getting confused by the file timestamps. Since hopping around dates like that is somewhat of an edge case, I doubt it would be something the NetBeans team would give a high priority to fixing/handling.
How much time does it take to create a simple dance-performing bot in Second Life using Linden Scripting Language? I have no prior knowledge of LSL, but I know various object-oriented and event-driven programming languages.
Well, it's pretty simple to animate an avatar: you'll need a dance animation (those are pretty easy to find, or you could create your own), put it in a prim (which is the basic building object in SL), and then create a simple script which first requests permission to animate the desired avatar:
llRequestPermissions(agent_key, PERMISSION_TRIGGER_ANIMATION);
You receive response in run_time_permissions event and trigger your animation by name:
run_time_permissions(integer perm)
{
    if (perm & PERMISSION_TRIGGER_ANIMATION) // make sure the permission was actually granted
        llStartAnimation("my animation");
}
Those are the essentials; you'll probably want to request permissions when an avatar touches your object and stop the animation on a second touch... or you could request permissions for each avatar within a certain range.
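A minimal sketch of the touch-toggle version ("my animation" stands for whatever animation you dropped into the prim's inventory):

integer dancing = FALSE;

default
{
    touch_start(integer num)
    {
        if (!dancing)
            llRequestPermissions(llDetectedKey(0), PERMISSION_TRIGGER_ANIMATION);
        else
        {
            llStopAnimation("my animation");
            dancing = FALSE;
        }
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_TRIGGER_ANIMATION)
        {
            llStartAnimation("my animation");
            dancing = TRUE;
        }
    }
}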
As for the "bot" part, Second Life viewer code is open source; if you don't want to build it yourself, there are several customizable bots available. Or you could simply run an official SL viewer and leave it on; there is a way to run several instances of SL viewer at the same time. It all depends on what exactly you need it for.
The official LSL portal can be found here, though I prefer the slightly outdated LSL wiki.
Slight language mismatch: making an object perform a dance is currently known as "puppetry" in a Second Life context. The term "bot" currently means control of an avatar by an external script API.
Anyway, in one instance it took me about two hours to write, when I did one for a teddy bear a few weeks back, but that was after learning a lot from taking apart some old ones, and I never did finish making the dance; it just waggles the legs or hugs with the arms, but the script is capable of however many steps and parts you can cram into memory.
Puppetry of objects has not improved much in the past decade. It is highly restricted by movement update rates and script limitations. Movement is often delayed under server load, and the client doesn't always get updates, which produces pauses and skips in varying measure. Scripts have a maximum size of 64k, which should be plenty but in actuality runs out fast with the convolutions necessary in LSL. Moving each linked prim in an object separately used to need a script in each prim, until new functions were introduced to apply attributes by link number; still, many objects use old scripts which may never be updated. Laggy puppets make for a pitiful show, but most users would not know how to identify a good puppetry script.
The popular way to start making a puppet script is to find an older open-source puppet script online and update it to work from one script. Some arcane versions are given as 'master' and 'slave' scripts, which need to be merged by placing the slave actions as a function inside the master and changing llMessageLinked( ) to the function name. Others use an identical script for each prim. I said popular, not the simplest or easiest, and it will never be the best.
Scripting from scratch, the active flow needs to go in a timer event with nothing else in it. Use a different state for animating if a timer is needed while waiting, because animating is a heavy activity and you don't need any more ifs or buts in the way.
The most crucial task is to create a loop that builds the parameters from a list of link numbers, positions, and rotations into a parameter list before the llSetLinkPrimitiveParamsFast( ) call. Yes, that's what they called it, because it's a heavy list-based function; you may just call it SLPPF inworld, but not in the script. Because SLPPF requires the call to have certain constants for each parameter, the parameter list for each step will need to include PRIM_LINK_TARGET, linknumber, PRIM_POS, position, PRIM_ROT, rotation for each linked part in the animation step.
Here's an example of putting a list of a single puppetry step into SLPPF.
list currentstep; // strided list of [linknumber, position, rotation, ...] for this step
list parameters;
integer index;
integer total = llGetListLength( currentstep ); // total number of elements in the step list
while ( index < total ) {
    parameters += [
        PRIM_LINK_TARGET, llList2Integer( currentstep, index ),
        PRIM_POS, llList2Vector( currentstep, index+1 ),
        PRIM_ROT, llList2Rotation( currentstep, index+2 )
    ];
    index += 3;
}
llSetLinkPrimitiveParamsFast( 0, parameters );
How you create your currentstep list is another matter, but the smoothest movement over many linked parts will only be possible if the script isn't moving lists around. So if running the timer at 0.2 seconds isn't any improvement over 0.3, it's because you've told LSL to shovel too many lists. This loop with three list calls may handle about 20 links at 0.1 seconds if the weather's good.
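For the surrounding structure, a skeleton like this keeps the timer event lean (gStep, gTotalSteps, and apply_step are illustrative names, not stock LSL):

integer gStep;           // which step we're on
integer gTotalSteps = 2; // however many steps you've built

apply_step(integer n)
{
    // look up the strided [link, pos, rot, ...] list for step n,
    // run it through the loop above, and hand the result to SLPPF
}

default
{
    state_entry()
    {
        llSetTimerEvent(0.2); // drop back to 0.3 if movement starts to skip
    }

    timer()
    {
        apply_step(gStep);
        gStep = (gStep + 1) % gTotalSteps;
    }
}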
That's actually the worst of it over; just be careful of memory if cramming too many step lists in at once. Oh, and if your object just vanishes completely, it's going to be hanging around near <0,0,0>, because a 1 landed in the PRIM_LINK_TARGET hole.