I have an instance of an operation entity. I'd like to execute it after a row has been successfully deleted from the store (i.e., after the UI has received an OK response from the server).
1st Issue: how do I determine that the row was deleted on the server, and then execute some logic?
There are no callbacks for store.remove/removeAt, and it seems (from debugging) that listening for the store's 'remove' event is not an option (it fires on local removal, before the server responds).
2nd Issue: it is possible to simply call execute() on the operation instance, but how do I make the view and the store aware of this operation? A plain operation.execute() has no effect other than sending a request to the back end.
Solution to 1st issue:
// Suspend autoSync so removeAt() does not fire its own request,
// then sync manually to get success/failure callbacks.
store.suspendAutoSync();
store.removeAt(this.rowIndex);
this.getView().getStore().sync({
    success: function () {
        // The server confirmed the delete; run the follow-up logic here.
    },
    failure: function () {
        // The delete failed on the server; handle the error here.
    }
});
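Alternatively (an assumption to verify against your Ext JS version), the store fires a 'write' event after the proxy completes a server operation, which could serve as the missing callback:

store.on('write', function (store, operation) {
    // 'write' fires after the proxy completes a server operation successfully.
    if (operation.action === 'destroy') {
        // The server confirmed the delete; execute the follow-up operation here.
    }
});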
I am developing a web app using Oracle ADF. I have a bounded task flow containing a search page with two forms, created using view object data controls.
Searching works well. My problem is that when I navigate somewhere else in the application using the menus on the left and then come back to the search page, the page is not refreshed: I get a search page that still contains the old search results. If I try to make any changes at that point, I get an error saying "Another user with this id already modified data ...". After this error the app stops working; whatever I try shows the same error.
So I need this behavior: whenever the user comes to this form, they should get a fresh form that does not contain old search results.
How do I achieve this?
Thank you.
There are 2 ways of doing it:
1) Set your task flow as ISOLATED, from the Task Flow Overview tab -> Behaviour -> Share Data Control with calling task flow -> unchecked (or isolated, if you are using JDev 12c).
This will ensure you always start FRESH when accessing the page, but it can create a performance overhead because the entire View Object cache will be recreated (requeried) on page load. Nevertheless, it is the quickest solution.
2) You may create a default Method Call Activity in your task flow from which you call a custom application module method that resets the view criteria. The method is placed on the application module's implementation class and may look like this:
public void initTaskFlow() {
    // Empty the row set so the page renders with no stale results.
    this.getViewObject1().executeEmptyRowSet();
}
This will clean the result data. If you want to reset the querying parameters as well, you can use this example:
http://www.jobinesh.com/2011/04/programmatically-resetting-and-search.html
If you have made changes to any view object, re-execute that view object so the entity state and the view state match; I think re-executing the view object will solve your issue.
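For example, a minimal sketch (the view object accessor name is assumed) on the application module implementation class:

public void refreshSearchResults() {
    // Re-run the query so the view state is rebuilt from the entity state;
    // getUsersView1() stands in for your generated VO accessor.
    getUsersView1().executeQuery();
}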
Ashish
Suppose that one needs to send the same collection of 10,000 documents down to every client for a Meteor app.
At a high level, I'm aware that the server does some bookkeeping for every client subscription - namely, it tracks the state of the subscription so that it can send the appropriate changes for the client. However, this is horribly inefficient if each client has the same large data set where each document has many fields.
It seems that there used to be a way to send a "static" publish down the wire, where the initial query was published and never changed again. This seems like a much more efficient way to do this.
Is there a correct way to do this in the current version of Meteor (0.6.5.1)?
EDIT: As a clarification, this question isn't about client-side reactivity. It's about reducing the overhead of server-side tracking of client collections.
A related question: Is there a way to tell meteor a collection is static (will never change)?
Update: It turns out that doing this in Meteor 0.7 or earlier will incur some serious performance issues. See https://stackoverflow.com/a/21835534/586086 for how we got around this.
http://docs.meteor.com/#find:
Statics.find({}, {reactive: false})
Edited to reflect comment:
Do you have some information that the reactive: false param is client-side only? You may be right; it's a reasonable, maybe likely interpretation. I don't have time to check, but I thought this might also be a server-side directive, saying not to poll the Mongo result set. Willing to learn...
You say
However, this is horribly inefficient if each client has the same large data set where each document has many fields.
Now we are possibly discussing the efficiency of the server code and its polling of the Mongo source for updates that happen outside of the server. Please make that another question, which is far above my ability to answer! I doubt that is happening once per connected client; more likely it is a sync between app server info and the Mongo server.
The client requests you issue, including sorting, should all be labelled non-reactive. That is separate from whether you can issue them with sorting instructions, or whether they can be retriggered through other reactivity, which need not include a trip to the server. Once each document reaches the client side, it is cached. You can still do whatever minimongo does, with no loss in ability. There is no client asking the server if there are updates, so you don't need to shut that off; the server pushes only when needed.
I think using a manual publish (this.added) still works to get rid of the overhead created by the server observing data for changes. The observers either need to be added manually or are created by returning a Collection.cursor.
If the data set is big you might also be concerned about the overhead of a merge box holding a copy of the data for each client. To get rid of that you could copy the collection locally and stop the subscription.
var staticData = new Meteor.Collection("staticData");

if (Meteor.isServer) {
    var dataToPublish = staticData.find().fetch(); // query mongo when the server starts

    Meteor.publish("publishOnce", function () {
        var self = this;
        dataToPublish.forEach(function (doc) {
            self.added("staticData", doc._id, doc); // sends data to the client and will not continue to observe the collection
        });
        self.ready(); // mark the subscription ready so subHandle.ready() becomes true on the client
    });
}

if (Meteor.isClient) {
    var subHandle = Meteor.subscribe("publishOnce"); // fills the client 'staticData' collection but also leaves a merge box copy of the data on the server
    var staticDataLocal = new Meteor.Collection(null); // to store the data after the subscription stops

    Deps.autorun(function () {
        if (subHandle.ready()) {
            staticData.find({}).forEach(function (doc) {
                staticDataLocal.insert(doc); // move all data to the local copy
            });
            subHandle.stop(); // removes the 'publishOnce' data from the merge box on the server but leaves the 'staticData' collection empty on the client
        }
    });
}
Update: I added comments to the code to make my approach clearer. The Meteor docs for stop() on the subscribe handle say "This will typically result in the server directing the client to remove the subscription's data from the client's cache", so maybe there is a way to stop the subscription (remove it from the merge box) that leaves the data on the client. That would be ideal and would avoid the copying overhead on the client.
Anyway, the original approach with set and flush would also have left the data in the merge box, so maybe that is alright.
As you've already pointed out yourself on Google Groups, you should use a Meteor Method for sending static data to the client.
And there is this neat package for working with Methods without async headaches.
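A minimal sketch of that approach, assuming a 'staticData' collection (the method name is hypothetical):

var staticData = new Meteor.Collection('staticData');

if (Meteor.isServer) {
    Meteor.methods({
        // Returns the documents as a plain array, so the server keeps
        // no per-client subscription state for them.
        getStaticData: function () {
            return staticData.find().fetch();
        }
    });
}

if (Meteor.isClient) {
    // Client-only collection (null name) to hold the result.
    var staticDataLocal = new Meteor.Collection(null);
    Meteor.call('getStaticData', function (error, docs) {
        if (error) return;
        docs.forEach(function (doc) {
            staticDataLocal.insert(doc);
        });
    });
}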
Also, you could script the data out to a js file, as either an array or an object, minify it, and then link to it as a distinct resource. See http://developer.yahoo.com/performance/rules.html under "Add an Expires or a Cache-Control Header". You probably don't want Meteor to bundle it for you.
This would be the least traffic, and could make subsequent loads of your site much swifter.
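For instance, a sketch of what that resource might look like (file name and contents are hypothetical; a build script would generate the real thing):

// public/static-data.js, served as-is with a far-future Expires header,
// outside the Meteor bundle.
window.STATIC_DATA = [
    { _id: '1', name: 'alpha' },
    { _id: '2', name: 'beta' }
];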
As a response to a Meteor call, return an array of documents (use fetch()): no reactivity or logging. On the client, create a dep when you do a query, or retrieve the key from the session, and it is reactive on the client.
Minimongo just does js array/object manipulation, with a syntax-interpreting DSL between you and your data.
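A sketch of that client-side dep (names hypothetical), assuming the documents arrived via a Method call:

var staticDocs = [];
var staticDep = new Deps.Dependency();

function setStaticDocs(docs) {
    staticDocs = docs;
    staticDep.changed(); // invalidates any computation that called depend()
}

function getStaticDocs() {
    staticDep.depend(); // autoruns and template helpers using this rerun on changed()
    return staticDocs;
}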
The new fast-render package makes a one-time publish to a client collection possible.
var staticData = new Meteor.Collection('staticData');

if (Meteor.isServer) {
    FastRender.onAllRoutes(function () {
        this.find(staticData, {});
    });
}
I am having the following issue: my CakePHP app is not handling caching properly. As suggested by every result on Google, I created a function in the model to manually delete the cache:
public function afterSave($created) {
    Cache::clear();  // clear the data cache engine
    clearCache();    // clear cached view files
}
Unfortunately, this is doing nothing. It doesn't delete anything, and I still have the problem.
In case I have not explained myself properly, I will give an example of what happens:
I go with my browser to a page that shows a list of the last 5 records in my database. Then I add another record. I come back to the page that shows the last 5, and the information is not updated: it uses the cache and comes back with outdated info. If I press F5, the page truly reloads and I see the real last 5 records.
And that's it; I don't know what to do. The whole app works like crap, because you do something and it never appears unless you refresh the page with F5, which users are of course unaware of, leading them to think "nothing was added" when it actually was.
Cache::clear() will only clear entries that have expired.
Try Cache::clear(false). This works if you have CakePHP 2.x.
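For example, a sketch of the model callback with that call (CakePHP 2.x; the second argument names the cache configuration):

public function afterSave($created) {
    // false = clear all entries, not only the expired ones
    Cache::clear(false, 'default');
}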
I did this to solve the problem: in the controller's beforeFilter function I check the action, and for certain actions I disable the cache.
The actions you choose won't have browser cache.
function beforeFilter() {
    if ($this->action == 'youraction') {
        // disableCache() sends no-cache headers so the browser always re-requests
        $this->disableCache();
    }
}
Using caching requires a lot of thought about where to use it and where not to. If your data updates frequently, don't use caching there.
We use caching where data rarely changes; in that situation it is a win-win.
Cache::clear($check, $config = 'default')
Destroys all cached values for a cache configuration. See the CakePHP caching documentation.
I want my legacy system to be notified whenever there is any change to an SF object (add/update/delete). So I have created an outbound message and a workflow, but in the workflow I don't see any way to fire it when an object is deleted.
Is there any way I can trigger an outbound message on record delete? I have heard that it can be done with a trigger, but I don't want to write Apex code for this.
To the best of my knowledge it cannot be done. The workflow actions are decoupled from the workflow rule (you can even reuse them), so they probably do not receive the transaction scope; by the time they execute, the record is already gone and any reference inside the action would point to non-existing data. Thus the only way I know to do it is via a trigger.
Here is a workaround. Note that it will only capture deletions made via the standard Salesforce UI.
1. Create a custom checkbox field "Is Deleted".
2. Override the Del link with a custom VF page that first updates the record's "Is Deleted" status and then deletes the record (a sketch follows below).
3. Write the workflow rule using the "Is Deleted" field.
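A rough sketch of the page's controller extension for step 2 (object and field names are hypothetical; the VF page would just invoke the doDelete action):

public class DeleteWithFlagExtension {
    private final Opportunity record;

    public DeleteWithFlagExtension(ApexPages.StandardController ctrl) {
        record = (Opportunity) ctrl.getRecord();
    }

    public PageReference doDelete() {
        record.Is_Deleted__c = true;
        update record;   // the workflow rule on Is_Deleted__c fires here
        delete record;   // then the record is actually removed
        return null;
    }
}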
Perhaps a compromise architecture would be to write an extremely small and simple after delete trigger that simply copies the deleted records in question to some new custom object. That new custom object fires your workflow rule and thus sends the outbound message you're looking for. The only issue with this would be to periodically clean up your custom object data that would grow in size as you deleted records from your other object. In other words, your "scratch" object would just need periodic cleaning - which could be done on a nightly schedule with batch Apex.
Here's a delete trigger that would do the trick using Opportunity as an example:
trigger AfterDelete on Opportunity (after delete)
{
    List<CustObj__c> co = new List<CustObj__c>();
    for (Opportunity o : Trigger.old)
    {
        CustObj__c c = new CustObj__c();
        c.Name = o.Name;
        c.Amount__c = o.Amount;
        c.CloseDate__c = o.CloseDate;
        c.Description__c = o.Description;
        // etc.
        co.add(c);
    }
    insert co;
}
It's not ideal, but at least this would save you from having to code your own trigger-based outbound messages. Those can only be done using the @future annotation, by the way, since callouts directly from triggers are forbidden. Hope that helps.
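For reference, a sketch of that trigger-plus-@future pattern (class name and endpoint are hypothetical):

public class DeleteNotifier {
    // Callouts cannot run directly in a trigger, so an after delete
    // trigger would call this asynchronous method instead, e.g.
    // DeleteNotifier.notify(new List<Id>(Trigger.oldMap.keySet()));
    @future(callout=true)
    public static void notify(List<Id> deletedIds) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://legacy.example.com/notify'); // hypothetical endpoint
        req.setMethod('POST');
        req.setBody(JSON.serialize(deletedIds));
        new Http().send(req);
    }
}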
Or write a simple email send in the trigger's delete event. You can have it working in less than an hour.
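A minimal sketch of that suggestion (the address is hypothetical):

trigger NotifyOnDelete on Opportunity (after delete) {
    Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
    mail.setToAddresses(new String[] { 'legacy-system@example.com' });
    mail.setSubject('Salesforce records deleted: ' + Trigger.old.size());
    String body = '';
    for (Opportunity o : Trigger.old) {
        body += o.Id + ' ' + o.Name + '\n';
    }
    mail.setPlainTextBody(body);
    Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
}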
I am missing something very fundamental when working with SL4 RIA entities.
I have a Domain Service with User entities. On the service context, I have a method:
EntityQuery<User> GetUsersQuery()
I perform a load like so:
context.Load(context.GetUsersQuery(), (loadOp) =>
{
    // Things done when the load is completed
}, null);
When the Completed action executes, the loadOp.Entities collection is full of the User entities, but they are not attached to the context.Users entity set. It also appears that I can't attach them manually from the callback. What important step am I missing to get these tracked in the entity set?
Just to elaborate, in the completed handler, I tried:
foreach (var user in loadOp.Entities)
    context.Users.Attach(user);
And I get an exception that says an entity with that name is already attached.
Yet, both context.Users and context.EntityContainer are empty.
Are you sure you are using the same instance of the context in all cases? What does context.EntityContainer.GetEntitySet<User>().Count say?
Does LoadOperation<User>.HasError return true? If so, what is the error?
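A quick way to check both in the load callback (a sketch; it assumes the same context instance is used everywhere):

context.Load(context.GetUsersQuery(), loadOp =>
{
    if (loadOp.HasError)
    {
        // An error here would explain the empty entity set.
        loadOp.MarkErrorAsHandled();
        return;
    }
    // If this count is non-zero, the loaded users ARE attached to this
    // context; an empty context.Users elsewhere points to a different instance.
    int attached = context.EntityContainer.GetEntitySet<User>().Count;
}, null);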