Perform a single server call in Lightning instead of multiple calls

In the helper I have methods that each call an @AuraEnabled method in the component's Apex controller.
Some of these calls happen only during the 'init' event.
From a performance point of view I should make only one call during 'init'.
What is an elegant way of achieving this?
The methods called during 'init' return, respectively, a list of strings, a decimal, and a string.

Define a custom Apex class in your controller that encapsulates all of the information you wish to source in a single call from your init event:
public class InitializationWrapper {
    @AuraEnabled
    public List<String> myStringList {get; set;}
    @AuraEnabled
    public Decimal myDecimal {get; set;}
    @AuraEnabled
    public String myString {get; set;}
}
Then, return an instance of this wrapper class from the server-side @AuraEnabled method that your init handler calls. You only need to make a single round trip.
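For illustration, a minimal sketch of what that single round trip might look like; the method name getInitData and the component attribute names are hypothetical:

// Apex: one @AuraEnabled method sources everything the init handler needs
@AuraEnabled
public static InitializationWrapper getInitData() {
    InitializationWrapper wrapper = new InitializationWrapper();
    wrapper.myStringList = new List<String>{'alpha', 'beta'};
    wrapper.myDecimal = 3.14;
    wrapper.myString = 'ready';
    return wrapper;
}

// JavaScript helper, invoked from the component's init handler
fetchInitData : function(component) {
    var action = component.get("c.getInitData");
    action.setCallback(this, function(response) {
        if (response.getState() === "SUCCESS") {
            var data = response.getReturnValue();
            component.set("v.myStringList", data.myStringList);
            component.set("v.myDecimal", data.myDecimal);
            component.set("v.myString", data.myString);
        }
    });
    $A.enqueueAction(action);
}

One callback unpacks all three values, so only a single request leaves the client.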

I'd swear you don't have to do anything, no fancy coding; Salesforce does it for you as part of the Aura framework...
It used to work like a charm for me a while ago (in fact, it was the other way around: it was definitely one Apex call, since all the methods I had were drawing on the same governor limits and I had to rework some queries).
https://developer.salesforce.com/docs/atlas.en-us.lightning.meta/lightning/controllers_server_actions_call.htm
$A.enqueueAction(action) adds the server-side controller action to the queue of actions to be executed. All actions that are enqueued will run at the end of the event loop. Rather than sending a separate request for each individual action, the framework processes the event chain and batches the actions in the queue into one request. (...) The framework batches the actions in the queue into one server request. The request payload includes all of the actions and their data serialized into JSON. The request payload limit is 4 MB.
https://developer.salesforce.com/docs/atlas.en-us.lightning.meta/lightning/controllers_server_actions_queue.htm
The framework queues up actions before sending them to the server. This mechanism is largely transparent to you when you’re writing code but it enables the framework to minimize network traffic by batching multiple actions into one request (XHR). The batching of actions is also known as boxcar’ing, similar to a train that couples boxcars together.
The framework uses a stack to keep track of the actions to send to the server. When the browser finishes processing events and JavaScript on the client, the enqueued actions on the stack are sent to the server in a batch.
If this doesn't work as described for you (can you post some code?) and the request size is under 4 MB... maybe they broke something and you've found a platform bug. Are you sure you see separate entries in the Debug Log, or separate requests in the browser's network monitor?
Maybe you need to play with background actions. I mean, this is supposed to work without extra crutches in Apex: bundling multiple calls into one, creating a complex response wrapper class, and having every callback unpick only the data it cares about is a lot of unnecessary code :/
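To illustrate the boxcar'ing those docs describe, here is a minimal sketch of an init handler enqueueing three separate actions that the framework should still batch into a single XHR (the controller method names are hypothetical):

// All three actions join the same queue during 'init'; when the event
// loop finishes, the framework sends the queue as one batched request.
doInit : function(component, event, helper) {
    ["c.getStringList", "c.getDecimal", "c.getString"].forEach(function(name) {
        var action = component.get(name);
        action.setCallback(this, function(response) {
            // each callback unpacks only the result it cares about
        });
        $A.enqueueAction(action);
    }, this);
}

If the Network tab still shows three separate requests for code like this, that would support the platform-bug theory above.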

Related

How to handle multiple entity update in the same transaction in Spring Data REST

Does anyone have an idea how to handle multiple entity updates within the same transaction in Spring Data REST? The same thing can be handled within Spring controller methods using the @Transactional annotation. If I am correct, Spring Data REST executes every execution event within a separate transaction, so multiple entity updates cannot be handled in a proper way.
I am having issues updating two entities (ABC and PQR) within the same transaction and rolling back the ABC entity when the PQR update fails.
// ABC repository
@RepositoryRestResource
public interface ABCEntityRepository extends MongoRepository<ABC, String> {
}

// PQR repository
@RepositoryRestResource
public interface PQREntityRepository extends MongoRepository<PQR, String> {
}

// ABC repository event handler
@RepositoryEventHandler
public class ABCEventHandler {

    @Autowired
    private PQREntityRepository pqrEntityRepository;

    @HandleBeforeSave
    public void handleABCBeforeSave(ABC abc) {
        log.debug("before saving ABC...");
    }

    @HandleAfterSave
    public void handleABCAfterSave(ABC abc) {
        // assumes a custom finder; the stock findById returns an Optional
        List<PQR> pqrList = pqrEntityRepository.findById(abc.getPqrId());
        if (pqrList != null && !pqrList.isEmpty()) {
            pqrList.forEach(pqr -> {
                // update PQR objects
            });
        }
        // expect to fail this transaction
        pqrEntityRepository.saveAll(pqrList);
    }
}
Since the @HandleAfterSave method is executed in a separate transaction, by the time it is called the ABC entity update has already been committed and therefore cannot be rolled back. Any suggestions on how to handle this?
Spring Data REST does not think in entities, it thinks in aggregates. Aggregate is a term coming from Domain-Driven Design that describes a group of entities for which certain business rules apply. Take an order alongside its line items, for example, and a business rule that defines a minimum order value that needs to be reached.
The responsibility to govern constraints aligns with another aspect that involves aggregates in DDD which is that strong consistency should/can only be assumed for changes on an aggregate itself. Changes to multiple (different) aggregates should be expected to be eventually consistent. If you transfer that into technology, it's advisable to apply the means of strong consistency – read: transactions – to single aggregates only.
So there is no short answer to your question. The repository structure you show here virtually turns both ABCEntity and PQREntity into aggregates (as repositories only exist for aggregate roots). That means, OOTB Spring Data REST does not support updating them in a single transactional HTTP call.
That said, Spring Data REST allows the declaration of custom resources that can take responsibility of doing that. Similarly to what is shown here, you can simply add resources on additional routes to completely implement what you imagine yourself.
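As a sketch of what such a custom resource might look like (the route, controller name, and update logic are hypothetical; a transactional variant for MongoDB additionally requires a replica set and a MongoTransactionManager bean):

import org.springframework.data.rest.webmvc.RepositoryRestController;
import org.springframework.http.ResponseEntity;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;

@RepositoryRestController // served under Spring Data REST's base path
public class AbcBatchController {

    private final ABCEntityRepository abcRepository;
    private final PQREntityRepository pqrRepository;

    public AbcBatchController(ABCEntityRepository abcRepository,
                              PQREntityRepository pqrRepository) {
        this.abcRepository = abcRepository;
        this.pqrRepository = pqrRepository;
    }

    // One HTTP call that updates both aggregates together
    @Transactional
    @PostMapping("/abcs/{id}/sync-pqrs")
    public ResponseEntity<Void> syncPqrs(@PathVariable String id) {
        ABC abc = abcRepository.findById(id).orElseThrow();
        // ... mutate abc and its related PQR documents, save via both repositories
        abcRepository.save(abc);
        return ResponseEntity.noContent().build();
    }
}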
Spring Data REST is not designed to produce a full HTTP API out of the box. It's designed to implement certain REST API patterns that are commonly found in HTTP APIs and will very likely be part of your API. It's built to save you from having to spend time thinking about the straightforward cases, so that you only have to plug in custom code for scenarios like the one you described, assuming what you plan to do here is a good idea in the first place. Very often requests like these lead to the conclusion that the aggregate design needs a bit of rework.
PS: I saw you tagged that question with spring-data-mongodb. By default, Spring Data REST does not support MongoDB transactions because it doesn't need them. MongoDB document boundaries usually align with aggregate boundaries and updates to a single document are atomic within MongoDB anyway.
I'm not sure I understood your question correctly, but I'll give it a try.
I'd suggest having a service with both repositories autowired in, and a method annotated with @Transactional that updates everything you want.
This way, if the transaction fails anywhere inside the method, it will all rollback.
If this does not answer your question, please clarify and I'll try to help.
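A minimal sketch of that suggestion (class and method names are hypothetical; as noted above, MongoDB needs a replica set and a MongoTransactionManager bean before @Transactional has any effect):

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AbcPqrService {

    private final ABCEntityRepository abcRepository;
    private final PQREntityRepository pqrRepository;

    public AbcPqrService(ABCEntityRepository abcRepository,
                         PQREntityRepository pqrRepository) {
        this.abcRepository = abcRepository;
        this.pqrRepository = pqrRepository;
    }

    // Both saves share one transaction: if saveAll throws,
    // the earlier save of the ABC entity is rolled back too.
    @Transactional
    public void updateBoth(ABC abc, List<PQR> pqrList) {
        abcRepository.save(abc);
        pqrRepository.saveAll(pqrList);
    }
}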

Handle Scenarios when exposing route as a restlet service

I have used the REST servlet binding to expose a route as a service.
I have used employeeClientBean as a POJO, wrapping the actual call to the employee REST service within it, basically playing the role of a service client.
So, based on the method name passed, I call the respective method of the employee REST service through the employeeClientBean.
I want to know how I can handle the scenarios noted in the comments in the block of code below.
I am new to Camel, but felt POJO binding is better as it does not couple us to Camel-specific APIs like Exchange and Processor, or even to any specific components.
But I am not sure how I can handle the above scenarios and return appropriate JSON responses to the user of the route service.
Can someone help me with this?
public void configure() throws Exception {
    restConfiguration().component("servlet").bindingMode(RestBindingMode.json)
        .dataFormatProperty("prettyPrint", "true")
        .contextPath("camelroute/rest").port(8080);

    rest("/employee").description("Employee Rest Service")
        .consumes("application/json").produces("application/json")

        .get("/{id}").description("Find employee by id").outType(Employee.class)
            .to("bean:employeeClientBean?method=getEmployeeDetails(${header.id})")
            // How to handle and return a response to the caller of the route
            // for the following scenarios for get/{id}:
            // 1. The passed id is not a valid one as per the system
            // 2. Failure to return details due to some issue

        .post().description("Create a new Employee").type(Employee.class)
            .to("bean:employeeClientBean?method=createEmployee");
            // How to handle and return the correct response to the caller
            // for the following scenarios:
            // 1. The employee being created already exists in the system
            // 2. Some fields of the passed employee violate the constraints on them
            // 3. Failure to create an employee due to server-side issues (e.g. DB failure)
}
I fear you are putting Camel to bad use - as per the Apache documentation, the REST module supports consumer implementations, e.g. reading from a REST endpoint, but NOT writing back to a caller.
For your use case you might want to switch frameworks. Syntactically, Ratpack goes in that direction.
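For what it's worth, Camel also has route-level error handling that can map exceptions to HTTP responses; a minimal sketch one could experiment with at the top of the same configure() method (EmployeeNotFoundException is a hypothetical exception thrown by employeeClientBean):

// Hypothetical: map a bean exception to a 404 with a JSON body.
// With RestBindingMode.json, the Map body is marshalled to JSON.
onException(EmployeeNotFoundException.class)
    .handled(true)
    .setHeader(Exchange.HTTP_RESPONSE_CODE, constant(404))
    .setBody(constant(java.util.Collections.singletonMap("error", "Employee not found")));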

What's the workaround for not being able to pass heap objects to a future method?

This seriously is one of the biggest thorns in my side. SFDC does not allow you to use complex objects or collections of objects as parameters to a future call. What is the best workaround for this?
Currently what I have done is pass in multiple parallel arrays of primitives which form a complete object based on the index. Meaning, if I need to pass a collection of users, I may pass three string arrays, say Name[], Id[], and Role[]; Name[0], Id[0], and Role[0] represent the first user, and so on. This means I have to build all these arrays and have the future method reconstruct the relevant objects on the other end as well.
Is there a better way to do this?
As to why: once an Apex "transaction" is complete, the VM is destroyed, and generally speaking, Salesforce will not serialize your object graph for resumption at a future time.
There may be a better way to get this task done. Can the future method query for the objects it needs to act on? Perhaps you can pass the List of Ids and the future method can use this in a WHERE clause. If it's a large number of objects, batch apex may be useful to avoid governor limits.
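A minimal sketch of that approach (the object and field names are only examples):

public class UserProcessor {
    @future
    public static void processUsers(List<Id> userIds) {
        // Re-query inside the async context instead of serializing whole objects;
        // @future parameters may only be primitives or collections of primitives.
        List<User> users = [SELECT Id, Name FROM User WHERE Id IN :userIds];
        for (User u : users) {
            // ... act on each record
        }
    }
}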
I would suggest creating a new custom object specifically for storing the information required by your custom Apex class. You can then insert these records into the database and query for them in the @future method before using them for the callout.
Then, once the callout has completed successfully, you can delete those records from the database to keep things nice and tidy.
My answer is essentially the same. What I do is prepare a custom queue object with all relevant Ids (User/Contact/Lead/etc.) along with my custom data, which then gets handled from the @future call. This helps with governor limits, since you can pull from the queue only what your callout and future limitations will permit you to handle in a single thread. For Facebook, for example, you can batch up 20 profile updates per callout. Each @future call allows 10 callouts, and each thread permits 10 @future calls, which equals 2,000 individual Facebook profile updates - IF you're handling your batches properly and IF you have enough Salesforce seats to permit this number of @future calls. It was 200 @future calls per user per 24 hours, last I checked.
The road gets narrow when you're performing triggered callouts, which is what I assume you're trying to do based on your need to call out from an @future method in the first place. If you're not in a trigger, then you may be able to handle your callouts as long as you do them before processing any DML. In other words, postpone any data saves in any particular thread until you're done calling out.
But since it sounds like you need to call out from a trigger, batching things up in sObjects is really the way to go. It's a bit of work, but essentially serializing your existing heap data is the road to travel here. Also consider doing this from an hourly scheduled Batch Apex call, since with the queue approach you'll be able to process all of your callouts eventually. If you run into governor limits (or rather, to avoid hitting them) in a particular thread, it will wake up an hour later and finish the work left in your queue. Launching that process looks something like this:
String jobId = System.schedule('YourScheduleName', '0 0 0-23 * * ?', new ScheduleableClass());
This will instantiate an instance of ScheduleableClass once an hour, which would pull the work from your queue object and process the maximum amount of callouts.
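A minimal sketch of such a schedulable class (the queue object Callout_Queue__c and its fields are hypothetical):

global class ScheduleableClass implements Schedulable {
    global void execute(SchedulableContext sc) {
        // Pull only as much work as one thread's callout/@future limits allow
        List<Callout_Queue__c> pending =
            [SELECT Id, Payload__c FROM Callout_Queue__c LIMIT 200];
        // ... hand the records to an @future method that performs the callouts,
        // then delete the queue records that were processed successfully
    }
}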
Good luck and sorry for the frustration.
Just wanted to give my answer on how I do this very easily in case anyone else stumbles across this question. Apex has functions to easily serialize and de-serialize objects to and from JSON encoding. Let's say I have a list of cases that I need to do something with in a future call:
String jsonCaseList = '';
List<Case> caseList = [SELECT Id /*, other fields */ FROM Case /* WHERE some conditions */];
// Populate the list

// Serialize your list
jsonCaseList = JSON.serialize(caseList);
// Pass jsonCaseList as a string parameter to your future call
futureCaseActivity(jsonCaseList);

@future
public static void futureCaseActivity(String jsonCases) {
    // Deserialize the string back into a list of cases
    List<Case> futureCaseList =
        (List<Case>) JSON.deserialize(jsonCases, List<Case>.class);
    // Do whatever you want with your cases
    for (Case c : futureCaseList) {
        // Stuff
    }
    update futureCaseList;
}
Anyway, this seems like a much better option than adding database clutter with a new custom object, and it avoids querying the database again for info you already have, which just makes me hurt inside.
Almost forgot to add the link: https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_json_json.htm

Asynchronous WCF Services in WPF - events

I am using WCF services asynchronously in a WPF application, so I have a class that wraps all the web service calls. The view models call methods on this class, which in turn call the web service.
So the view Model code looks like this:
WebServiceAgent.GetProductByID(SelectedProductID, (s, e) => { States = e.Result; });
And the WebService agent looks like:
public static void GetProductByID(int ProductID, EventHandler<GetProductListCompletedEventArgs> callback)
{
    Client.GetProductByIDCompleted += callback;
    Client.GetProductByIDAsync(ProductID);
}
Is this a good approach? I am using the MVVM Light toolkit. The view model is static, so it stays alive for the lifetime of the application. But each time the view model calls this WebServiceAgent, I think I am registering an event handler, and that handler is never unregistered.
Is this a problem? Let's say the view model is called 20-30 times. Am I introducing some kind of memory leak?
Some helpful information, based on mistakes I've made myself:
The Client object seems to be re-used all the time. When you don't unregister event handlers, they will stack up and fire when future invocations of the same operations finish, and you'll get unpredictable results.
The States = e.Result statement is executed on the event handler's thread, which is not the UI dispatcher thread. When updating lists or complex properties this will cause problems.
In general, not unregistering event handlers once they've been invoked is a bad idea, as it will indeed cause hard-to-find memory leaks.
You should probably refactor to create or re-use a clean client, and wrap the viewmodel callback inside another callback that takes care of unregistering itself, cleaning up the client, and invoking the viewmodel's callback on the main dispatcher thread.
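A minimal sketch of that refactoring, assuming the same proxy and agent as in the question (Application.Current.Dispatcher assumes WPF):

public static void GetProductByID(int productID, EventHandler<GetProductListCompletedEventArgs> callback)
{
    EventHandler<GetProductListCompletedEventArgs> handler = null;
    handler = (s, e) =>
    {
        // Unregister first so handlers never stack up across calls
        Client.GetProductByIDCompleted -= handler;
        // Marshal the viewmodel callback onto the UI dispatcher thread
        System.Windows.Application.Current.Dispatcher.BeginInvoke(callback, s, e);
    };
    Client.GetProductByIDCompleted += handler;
    Client.GetProductByIDAsync(productID);
}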
If you think all this is tedious, check out http://blogs.msdn.com/b/csharpfaq/archive/2010/10/28/async.aspx and http://msdn.microsoft.com/en-us/vstudio/async.aspx. In the next version of C# an async keyword will be introduced to make this all easier. A CTP is available already.
Event handlers are death traps, and you will leak them if you do not "unsubscribe" with "-=".
One way to avoid this is to use Rx (Reactive Extensions), which will manage your event subscriptions for you. Take a look at http://msdn.microsoft.com/en-us/data/gg577609 and specifically at creating an Observable by using Observable.FromEvent or FromAsync: http://rxwiki.wikidot.com/101samples.
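As a sketch of the Rx route, using the same hypothetical proxy as above (requires the System.Reactive.Linq namespace; Take(1) makes the subscription clean itself up after the first event):

var firstResult = Observable
    .FromEventPattern<GetProductListCompletedEventArgs>(
        h => Client.GetProductByIDCompleted += h,
        h => Client.GetProductByIDCompleted -= h)
    .Take(1); // unsubscribes automatically after one event

firstResult.Subscribe(ep => States = ep.EventArgs.Result);
Client.GetProductByIDAsync(SelectedProductID);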
This is unfortunately not a good approach.
I learned this the hard way in Silverlight.
Your WebServiceAgent is probably a long-lived object, whereas the model or view is probably short-lived.
Events hold references, and in this case the WebServiceAgent and WCF client hold a reference to the model. A long-lived object holding a reference to a short-lived object means the short-lived object will never be collected, so you have a memory leak.
As Pieter-Bias said, the async functionality will make this easier.
Have you looked at RIA Services? This is the exact problem that RIA Services was designed to solve.
Yes, the event handlers are basically going to cause a leak unless they're removed. To get the near-single-line equivalent of what you're expressing in your code, and to remove handlers, you're going to need an instance of some sort of class that represents the full lifecycle of the call and does some housekeeping.
What I've done is create a Caller<TResult> class that uses an underlying WCF client proxy following this basic pattern:
- Create a Caller instance around an existing or new client proxy (the proxy's lifecycle is outside the scope of the call to be made, so you can use a new short-lived one or an existing long-lived one).
- Use one of Caller's various CallAsync<TArg [,...]> overloads to specify the async method to call and the intended callback to call upon completion. This overload will choose the async method that also takes a state parameter; the state parameter will be the Caller instance itself.
I say intended because the real handler that will be wired up will do a bit more housekeeping. The real callback is what will be called at the end of the async call, and it will:
- check that ReferenceEquals(e.UserState, this) holds
- if not true, immediately return (the event was not the result of this particular call and should be ignored; this is very important if your proxy is long-lived)
- otherwise, immediately remove the real handler
- call your intended, actual callback with e.Result
Modify Caller's real handler as needed to execute the intended callback on the right thread (more important for WPF than Silverlight).
The above implementation should also have separate handlers for cases where e.Error is non-null or e.Cancelled is true. This gives you the advantage of not checking these cases in your intended callback. Perhaps your overloads take in optional handlers for those cases.
At any rate, you end up cleaning up handlers aggressively at the expense of some per-call wiring. It's a bit expensive per-call, but with proper optimization ends up being far less expensive than the over-the-wire WCF call anyway.
Here's an example of a call using the class (you'll note I use method groups in many cases to increase readability, though HandleAnalyzedStuff could also have been a lambda such as result => HandleAnalyzedStuff(result)). The first method group is important, because CallAsync gets the owner of that delegate (i.e. the service instance), which is needed to call the method; alternatively, the service could be passed in as a separate parameter.
Caller<AnalysisResult>.CallAsync(
    // the line below could also be longLivedAnalyzer.AnalyzeSomeThingsAsync
    new AnalyzerServiceClient().AnalyzeSomeThingsAsync,
    listOfStuff,
    HandleAnalyzedStuff,
    // optional handlers for error or cancelled would go here
    onFailure: TellUserWhatWentWrong);

Entity Framework: Context in WPF versus ASP.Net... how to handle

Currently for ASP.NET work I use a request model where a context is created per request (only when needed) and is disposed of at the end of that request. I've found this to be a good balance between not having to do the old using-per-query model and not keeping a context around forever. The problem is that in WPF, I don't know of anything that could be used like the request model. Right now the choice seems to be between keeping the same context forever (which can be a nightmare) and going back to the annoying using-per-query model, which is a huge pain. I haven't seen a good answer on this yet.
My first thought was to have an Open and Close (or whatever name) arrangement where the top-level method being called (say an event-handling method like Something_Click) would "open" the context and "close" it at the end. Since nothing in the UI project is aware of the context (all queries are contained in methods on partial classes that "extend" the generated entity classes, effectively creating a pseudo-layer between the entities and the UI), this seems like it would make the entity layer dependent on the UI layer.
Really at a loss, since I'm not hugely familiar with state programming.
Addition:
I've read up on using threads, but the problem I have with a context just sitting around is error and recovery. Say I have a form that updates user information and there's an error. The user form will now display the changes to the user object in the context, which is good, since it makes for a better user experience not to have to retype all the changes.
Now what if the user decides to go to another form? Those changes are still in the context. At this point I'm stuck with either an incorrect User object in the context, or I have to get the UI to tell the context to reset that user. I suppose that's not horrible (a Reload method on the user class?) but I don't know if that really solves the issue.
Have you thought about trying a unit of work? I had a similar issue where I essentially needed to be able to open and close a context without exposing my EF context. I think we're using different architectures (I'm using an IoC container and repository layer), so I have to cut up this code a bit to show it to you. I hope it helps.
First, when it comes to that "Something_Click" method, I'd have code that looked something like:
using (var unitOfWork = container.Resolve<IUnitOfWork>())
{
    // do a bunch of stuff to multiple repositories,
    // all of which will share the same context from the unit of work
    if (isError == false)
        unitOfWork.Commit();
}
In each of my repositories, I'd have to check to see if I was in a unit of work. If I was, I'd use the unit of work's context. If not, I'd have to instantiate my own context. So in each repository, I'd have code that went something like:
if (UnitOfWork.Current != null)
{
    return UnitOfWork.Current.ObjectContext;
}
else
{
    return container.Resolve<Entities>();
}
So what about that UnitOfWork? Not much there. I had to cut out some comments and code, so don't take this class as working completely, but... here you go:
public class UnitOfWork : IUnitOfWork
{
    private static LocalDataStoreSlot slot = Thread.AllocateNamedDataSlot("UnitOfWork");

    private Entities entities;

    public UnitOfWork(Entities entities)
    {
        this.entities = entities;
        Thread.SetData(slot, this);
    }

    public Entities ObjectContext
    {
        get
        {
            return this.entities;
        }
    }

    public static IUnitOfWork Current
    {
        get { return (UnitOfWork)Thread.GetData(slot); }
    }

    public void Commit()
    {
        this.entities.SaveChanges();
    }

    public void Dispose()
    {
        entities.Dispose();
        Thread.SetData(slot, null);
    }
}
It might take some work to factor this into your solution, but this might be an option.
