Manage asynchronous sequence in Silverlight?

In my project I want to remove some rows first and then insert new rows.
But sometimes the new rows get inserted first and the original rows are removed afterwards.
To solve this problem I need to make these operations run in the proper sequence.
Please help me out.

This is a common pattern/problem with Silverlight as pretty much "everything" is asynchronous (for good reasons).
Depending on how your Adds and Removes are triggered, you could queue up tasks (e.g. a list of delegates) and have each task execute the next one off the list when it completes.
The alternative is going to sound a little complex, but the solution we came up with is to create a SequentialAsynchronousTaskManager class that operates in a similar way to the SilverlightTest class, which uses EnqueueConditional() methods to add wait conditions and EnqueueCallback() to execute code.
It basically holds a list of delegates (which can be simple lambda expressions) and either polls a delegate repeatedly until it returns true (EnqueueConditional) or just executes it once (EnqueueCallback).
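For the delegate-queue approach, here's a rough C# sketch of the idea (the class and member names are mine, not the actual SequentialAsynchronousTaskManager):

using System;
using System.Collections.Generic;

public class SequentialTaskQueue
{
    // Each task receives a "done" callback and must invoke it when its
    // asynchronous work finishes; that callback starts the next task.
    private readonly Queue<Action<Action>> tasks = new Queue<Action<Action>>();
    private bool running;

    public void Enqueue(Action<Action> task)
    {
        tasks.Enqueue(task);
        if (!running) RunNext();
    }

    private void RunNext()
    {
        if (tasks.Count == 0) { running = false; return; }
        running = true;
        Action<Action> task = tasks.Dequeue();
        task(RunNext); // the task calls RunNext() when its async work completes
    }
}

For the row scenario in the question, you would enqueue the remove first and the insert second, e.g. queue.Enqueue(done => RemoveRowsAsync(done)); queue.Enqueue(done => InsertRowsAsync(done));, where RemoveRowsAsync and InsertRowsAsync are your own methods that take a completion callback.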

Related

How do you create a deconstructor (or similar) in APEX that runs right before an object is destroyed?

My application has many aggregate fields that need to be updated when any related record is changed, added or deleted. The relationships and calculations are somewhat involved, so I created a class that handles all of the calculations for all of the related tables. There is some SOQL and DML overhead involved in the calculations, so the class handles everything in bulk.
I would like to have the updateAll() method on this class run no more than once per request on all of the records that have been added to its queue. But, there doesn't appear to be "deconstructor-like" functionality in APEX that would automatically get called right before this calculator object was destroyed.
What is the best way to implement this pattern in APEX?
Yes, there is no way to detect or predict object destruction, since it's essentially JSP in the background (shhh, they don't want you to know, it's the "no software" thing ;). It probably follows its own lifetime mechanisms, but you can't rely on that.
We actually handle our aggregation in triggers or in the reporting (depending on whether the aggregation needs to be stored). Triggers also receive batches as a List rather than row-by-row, which allows for batch aggregation and lets us satisfy the pesky governor limits. Unfortunately, if you have multi-table aggregates you'll need triggers for all of them and must rerun them for every batch.
Here's what I did. I created a Calculator class that recalculates every related aggregate/calculated field in a ~10 table/object relationship. I used triggers on each of those objects to make the calculator class run on the set of object families related to the objects that were changed. I used a static variable on the calculator class to check whether the calculator was running, so each trigger would only call the calculator if it wasn't already running. It works well enough. A bit inefficient, but it stays below governor limits and works in bulk very well. And I can grow with it...
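The static-variable guard looks roughly like this (sketched in C# with made-up names, purely for illustration; in Apex a static variable is scoped to the request, which is what makes the once-per-request behaviour work):

using System.Collections.Generic;

public class AggregateCalculator
{
    // Set while UpdateAll() is running so that triggers fired by its own
    // DML don't re-enter the calculator.
    private static bool isRunning = false;

    public static void UpdateAll(List<int> queuedRecordIds)
    {
        if (isRunning) return; // already recalculating: skip
        isRunning = true;
        try
        {
            // bulk recalculation of all aggregate fields goes here
        }
        finally
        {
            isRunning = false;
        }
    }
}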

When does the deferred execution occur?

I've got a situation in which I want to fetch data from a database and assign it to the tooltips of each row in a ListView control in WPF. (I'm using C# 4.0.) Since I've not done this sort of thing before, I've started a smaller, simpler app to get the ideas down before I attempt to use them in my main WPF app.
One of my concerns is the amount of data that could potentially come down. For that reason I thought I would use LINQ to SQL, which uses deferred execution. I thought that would help by not pulling down the data until the user passes their mouse over the relevant row. To do this, I'm going to use a separate function to assign the values to the tooltip from the database, based upon the parameters I need to pass to the relevant stored procedures. I'm doing 2 queries using LINQ to SQL, using 2 different stored procedures, and assigning the results to 2 different DataGrids.
Even though I know that LINQ to SQL uses deferred execution, I'm beginning to wonder if some of the code I'm writing may defeat my whole intent of using it. For example, in testing in my simpler app, I am choosing several different values to see how it works. One selection of values brought no data back, as there was no data for the given parameters. I thought this could potentially cause the user confusion, so I thought I would check the Count property of the list that I assign from running the DBML-associated method (related to the stored procedure). Thinking about it, it would be necessary for LINQ to run the query in order to give me a result for the Count property. Am I not correct?
If I eliminate the call to the list's Count property, I'm still wondering if I might have a problem: might LINQ still be invoked, because I'm associating the tooltip with the control via a function call?
You are correct: when you call the Count property, it iterates over the result set. I'm not clear on your last question, but the LINQ query probably gets executed at the point where you populate your DataGrids, well after the tooltip comes into play.
EDIT: However, this does not mean there is anything wrong with deferred execution or your use of it; it executes at the latest possible stage, right when you need the data. If you still want to check the count ahead of actually fetching all the data, you could have a simple LINQ to SQL function that checks for Any() rows. (Actually, Any() is probably what you want rather than Count > 0.)
You should use Any(), not Count(), but even Any() will cause the query to be executed - after all, it can't determine whether or not there are any rows in the result set without executing the query. But there's executing the query, and there's fetching the result set. Any() will fetch one row, Count() will fetch them all.
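To make the difference concrete (GetOrdersForCustomer is a hypothetical method generated from a stored procedure on a LINQ to SQL DataContext; Any() and Count() come from System.Linq):

// Any() executes the query but stops after the first row;
// Count() has to walk the entire result set to produce a number.
bool hasRows = db.GetOrdersForCustomer(customerId).Any();   // fetches at most one row
int rowCount = db.GetOrdersForCustomer(customerId).Count(); // fetches every row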
That said, I think that having a non-instantaneous operation that occurs on mouseover is just a bad idea. There was a build of Outlook, once, that displayed a helpful tooltip when you moused over the Print button. Less helpfully, it got the data for that tooltip by calling the system function that finds out what printers are available. So you'd be reaching for a menu, and the button would grab the mouse pointer and the UI would freeze for two seconds while it went out and figured out how to display a tooltip that you weren't even asking for. I still hate this program today. Don't be this guy.
A better approach would be to get your tooltip data asynchronously after populating the visible data on the screen. It's easy enough to create a BackgroundWorker that fetches the data into a DataTable, and then make the DataTable available to the view models in the RunWorkerCompleted event handler. (Do it there because that handler runs on the UI thread, so you never update UI-bound data from a background thread.) You can implement a ToolTip property in your view model that returns a default value (probably null, but maybe something like "Fetching data...") if the DataTable containing tooltip data is null, and that calculates the value if it's not. That should work admirably. You can even implement property-change notification so that the ToolTip will still get updated if the user keeps the mouse pointer over it while you're fetching the data.
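A minimal sketch of that view model (LoadToolTipTable and BuildToolTipText are hypothetical stand-ins for your own data access):

using System.ComponentModel;
using System.Data;

public class RowViewModel : INotifyPropertyChanged
{
    private DataTable toolTipData; // null until the background fetch completes

    public string ToolTip
    {
        get
        {
            if (toolTipData == null) return "Fetching data..."; // default while loading
            return BuildToolTipText(toolTipData);
        }
    }

    public void BeginLoadToolTipData()
    {
        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) => e.Result = LoadToolTipTable(); // runs on a pool thread
        worker.RunWorkerCompleted += (s, e) =>
        {
            // This handler runs on the UI thread, so it's safe to update
            // UI-bound state and raise the change notification here.
            toolTipData = (DataTable)e.Result;
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("ToolTip"));
        };
        worker.RunWorkerAsync();
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private DataTable LoadToolTipTable() { return new DataTable(); } // your query here
    private string BuildToolTipText(DataTable t) { return t.Rows.Count + " related rows"; }
}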
Alex is correct that calling Count() or Any() will enumerate the LINQ expression, causing the query to execute. I would recommend rethinking your design, as you probably don't want a query against the database executed every time the user moves his/her mouse. There is also the issue of the delay to query the database: what might be instantaneous on your dev box with a local database might have a multi-second delay on a heavily loaded server. I would recommend creating a DisplayTooltip() function that takes a lazily evaluated LINQ expression. You can then cache the results or apply other heuristics to decide whether you should actually query the database or not.
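That could look something like this (DisplayTooltip, the cache, and the rowId key are all illustrative, and it needs System.Linq; the point is that the IQueryable only executes when the heuristic decides it should):

private readonly Dictionary<int, string> tooltipCache = new Dictionary<int, string>();

void DisplayTooltip(int rowId, IQueryable<string> tooltipQuery)
{
    string text;
    if (!tooltipCache.TryGetValue(rowId, out text))
    {
        // First hover for this row: the query executes here, then is cached
        // so later hovers don't hit the database again.
        text = string.Join(Environment.NewLine, tooltipQuery);
        tooltipCache[rowId] = text;
    }
    // assign 'text' to the row's ToolTip property
}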

Time to create a bot for Second Life using Linden Scripting Language?

How much time does it take to create a simple dance-performing bot in Second Life using Linden Scripting Language? I have no prior knowledge of LSL, but I know various object-oriented and event-driven programming languages.
Well, it's pretty simple to animate an avatar: you'll need a dance animation (those are pretty easy to find, or you could create your own), put it in a prim (which is the basic building object in SL), and then create a simple script which first requests permission to animate the desired avatar:
llRequestPermissions(agent_key, PERMISSION_TRIGGER_ANIMATION);
You receive the response in the run_time_permissions event, where you check the granted permission and trigger your animation by name:
run_time_permissions(integer perm)
{
    if (perm & PERMISSION_TRIGGER_ANIMATION)
    {
        llStartAnimation("my animation");
    }
}
Those are the essentials; you'll probably want to request permissions when an avatar touches your object and stop the animation on a second touch... or you could request permissions for each avatar within a certain range.
As for the "bot" part, Second Life viewer code is open source; if you don't want to build it yourself, there are several customizable bots available. Or you could simply run an official SL viewer and leave it on; there is a way to run several instances of SL viewer at the same time. It all depends on what exactly you need it for.
The official LSL portal can be found here, though I prefer the slightly outdated LSL wiki.
Slight language mismatch: making an object perform a dance is currently known as "puppetry" in a Second Life context. The term "bot" currently means control of an avatar by an external script API.
Anyway, in one instance it took me about two hours to write, when I did one for a teddy bear a few weeks back, but that was after learning a lot from taking apart some old ones. I never did finish making the dance; it just waggles the legs or hugs with the arms, but the script is capable of however many steps and parts you can cram into memory.
Puppetry of objects has not improved much in the past decade. It is highly restricted by movement update rates and script limitations. Movement is often delayed under server load, and the client doesn't always get updates, which produces pauses and skips in varying measure. Scripts have a maximum size of 64k, which should be plenty but in actuality runs out fast with the convolutions necessary in LSL. Moving each linked prim in an object separately used to need a script in each prim, until new functions were introduced to apply attributes by link number; still, many objects use old scripts which may never be updated. Laggy puppets make for a pitiful show, but most users would not know how to identify a good puppetry script.
The popular way to start making a puppet script is to find an older open source puppet script online and update it to work from one script. Some arcane versions are given as 'master' and 'slave' scripts, which need to be merged by placing the slave actions as a function inside the master and swapping llMessageLinked( ) for the function name. Others use an identical script for each prim. I said popular, not the simplest or easiest, and it will never be the best.
Scripting from scratch, the active flow needs to go in a timer event with nothing else in it. Use a different state for animating if a timer is also needed while waiting, because animating is a heavy activity and you don't need any more ifs or buts in that event.
The most crucial task is to create a loop that builds parameters from a list of link numbers, positions, and rotations into a parameter list before calling llSetLinkPrimitiveParamsFast( ). Yes, that's what they called it, because it's a heavy list-based function; you may call it SLPPF inworld but not in the script. Because SLPPF requires certain constants for each parameter, the parameter list for each step needs to include PRIM_LINK_TARGET, link number, PRIM_POS_LOCAL, position, PRIM_ROT_LOCAL, rotation for each linked part in the animation step.
Here's an example of putting the list for a single puppetry step into SLPPF.
list currentstep; // one step's data, filled elsewhere: [linknumber, position, rotation, ...]
list parameters;
integer index;
integer total = llGetListLength( currentstep ); // total number of elements in the step list
while ( index < total )
{
    parameters += [
        PRIM_LINK_TARGET, llList2Integer( currentstep, index ),
        PRIM_POS_LOCAL, llList2Vector( currentstep, index + 1 ),
        PRIM_ROT_LOCAL, llList2Rotation( currentstep, index + 2 )
    ];
    index += 3;
}
llSetLinkPrimitiveParamsFast( 0, parameters );
How you create your currentstep list is another matter, but the smoothest movement over many linked parts will only be possible if the script isn't shuffling lists around. So if running the timer at 0.2 seconds isn't any improvement over 0.3, it's because you've told LSL to shovel too many lists. This loop with three list calls may handle about 20 links at 0.1 seconds if the weather's good.
That's actually the worst of it over; just be careful of memory if cramming too many step lists into memory at once. Oh, and if your object just vanishes completely, it's going to be hanging around near <0,0,0> because a 1 landed in the PRIM_LINK_TARGET hole.

Synchronizing Asynchronous request handlers in Silverlight environment

For our senior design project, my group is making a Silverlight application that utilizes graph theory concepts and stores the data in a database on the back end. We have a situation where we add a link between two nodes in the graph, and upon doing so we run analysis to re-categorize our clusters of nodes. The problem is that this re-categorization is quite complex and involves multiple queries and updates to the database, so if multiple instances of it run at once it quickly garbles data and breaks (by trying to re-insert already-used primary keys). Essentially it's not thread safe, and we're trying to make it safe, and that's where we're failing and need help :).
The create link function looks like this:
private Semaphore dblock = new Semaphore(1, 1);

// This function is on our service reference and gets called
// by the client code.
public int addNeed(int nodeOne, int nodeTwo)
{
    dblock.WaitOne();
    submitNewNeed(createNewNeed(nodeOne, nodeTwo));
    verifyClusters(nodeOne, nodeTwo);
    dblock.Release();
    return 0;
}

private void verifyClusters(int nodeOne, int nodeTwo)
{
    // Run analysis of nodeOne and nodeTwo in graph
}
All copies of addNeed should wait for the first one that comes in to finish before another can execute, but instead they all seem to run at once and conflict with each other in the verifyClusters method. One solution would be to force our front-end calls to be made synchronously, and in fact when we do that everything works fine, so the code logic isn't broken. But once it's launched, our application will be deployed in a business setting and used by internal IT staff (or at least that's the plan), so we'll have the same problem. We can't force all clients to submit data at different times, so we really need to get it synchronized on the back end. Thanks for any help you can give; I'd be glad to supply any additional information you might need!
I wrote a series to specifically address this situation - let me know if this works for you (sequential asynchronous workflows):
Part 2 (it has a link back to Part 1):
http://csharperimage.jeremylikness.com/2010/03/sequential-asynchronous-workflows-part.html
Jeremy
Wrap your database updates in a transaction. Escalate to a table lock if necessary.
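For example, with System.Transactions in .NET, the service method body from the question could be wrapped like this (assuming your data access enlists in ambient transactions; the default isolation level of TransactionScope is Serializable, so conflicting updates block rather than interleave):

using (var scope = new TransactionScope())
{
    // the question's own methods, now committed or rolled back as a unit
    submitNewNeed(createNewNeed(nodeOne, nodeTwo));
    verifyClusters(nodeOne, nodeTwo);
    scope.Complete(); // commit; rolled back automatically if an exception escapes first
}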

WPF and Active Objects

I have a collection of "active objects", that is, objects that need to periodically update themselves. In turn, these objects should be used to update a WPF-based GUI.
In the past I would just have each object include its own thread, but that only makes sense when working with a finite number of objects with well-defined life cycles. Now I'm using objects that only exist when needed by a form, so the life cycle is unpredictable. Also, I can have dozens of objects all making database and web service calls.
Under normal circumstances the update interval is 1 second, but it can take up to 30 seconds due to timeouts.
So, what design would you recommend?
You could use one dispatcher (scheduler) for all, or for a group of, active objects. The dispatcher can process high-priority tasks first, then the others.
You can read this article about long-running active objects, with code, to find out how to do it. In addition, I recommend looking at the Half-Sync/Half-Async pattern.
If you have questions, you're welcome to ask.
I am not an expert, but I would just have the objects fire an event indicating when they've changed. The GUI can then refresh the necessary parts of itself (easy when using data binding and INotifyPropertyChanged) whenever it receives an event.
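A minimal sketch of that (the Status property is just an example of a value the GUI binds to):

using System.ComponentModel;

public class ActiveItem : INotifyPropertyChanged
{
    private string status;

    public string Status
    {
        get { return status; }
        set
        {
            status = value;
            // Any WPF binding on Status refreshes when this event fires.
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("Status"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}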
I'd probably try to generalize out some sort of data bus, if possible, and when objects are 'active' have them add themselves to a list of objects to be updated. I'd especially be tempted to use this pattern if the objects are backed by a database, as that way you can aggregate multiple queries instead of having to issue one query per object.
If there end up being no listeners for a specific object, no big deal, the data just goes nowhere.
The core updater code can then use a single timer (or multiple, or whatever is appropriate) to determine when to get updates. Doing this as more of a dataflow, and less of a 'state update' will probably save a lot of sanity in the end.
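A rough sketch of that data-bus shape (names are illustrative; a real version would need thread safety and would push the slow database/web calls onto a background thread, since the timer shown ticks on the UI thread):

using System;
using System.Collections.Generic;
using System.Windows.Threading;

public interface IUpdatable { void Refresh(); }

public static class UpdateBus
{
    private static readonly List<IUpdatable> subscribers = new List<IUpdatable>();
    private static readonly DispatcherTimer timer = new DispatcherTimer();

    static UpdateBus()
    {
        timer.Interval = TimeSpan.FromSeconds(1);
        // One pass drives everyone, so database queries can be aggregated
        // here instead of issued once per object.
        timer.Tick += (s, e) => { foreach (IUpdatable item in subscribers) item.Refresh(); };
        timer.Start();
    }

    public static void Register(IUpdatable item) { subscribers.Add(item); }
    public static void Unregister(IUpdatable item) { subscribers.Remove(item); }
}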
