Admitted JavaScript newbie and hacker here, so I suspect this question is very ignorant and a learning experience.
I'm essentially converting an old Apex/Visualforce page to LWC. The Apex code employs Database.setSavepoint() and Database.rollback() and I have no idea how something similar would be accomplished in LWC, or even if it needs to be.
Google provides nothing, and I know it's not done through @wire and @AuraEnabled.
The main pain point of migrating VF Apex controllers ("regular", old-school ones that worked with a massive hidden HTML field for the "viewstate" and weren't using @RemoteAction to be a bit more mobile-friendly) to Aura/LWC server-side controllers is the static keyword next to the method definition. You don't have the "viewstate" of the whole controller passed to the method magically; you need to explicitly pass the params the method needs from LWC/Aura, or (re)query them. It's a bit like everything was marked transient, if you know what this keyword does.
Savepoint and rollback have to happen in one transaction anyway (1 interaction, 1 button click). The user triggered something and it either succeeds or fails. You can't save the game ;) and 3 interactions later go "gotcha, rollback". The state was written to the database, the ship has sailed. And savepoints can't be serialised (you can't grab the variable, pass it to the VF/Aura/LWC client side and later back to the server). If you're passing something back, it means the transaction is almost over and they wouldn't be needed anyway.
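For reference, a minimal sketch of the same pattern inside a single LWC server-side call (method and object names are illustrative):

@AuraEnabled
public static void saveStuff(String accountName) {
    // savepoint is created and consumed within this one transaction
    Savepoint sp = Database.setSavepoint();
    try {
        insert new Account(Name = accountName);
        // ... more DML ...
    } catch (Exception e) {
        Database.rollback(sp); // undo partial work before reporting failure
        throw new AuraHandledException(e.getMessage());
    }
}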
So there are no technical problems with it. The question is what your business logic is doing with the rollbacks. Is it some funky "validate before save"? Some way to try firing all validation rules, triggers, etc. and see what explodes?
LWC shouldn't care, and you should be able to pass the result of this operation back nicely. It's a good question whether you still need this functionality, but it's a business question, not a technical one.
Related
I am new to Salesforce apex coding. My first class that I am developing has 10 methods and is some 800 lines.
I haven't added much exception handling yet, so the size should swell further.
I am wondering what the best practice for Apex code is... should I create 10 classes with 1 method each instead of one class with 10 methods?
Any help on this would be greatly appreciated.
Thanks
Argee
What do you use for coding? Try to move away from the Developer Console. VSCode has some decent plugins like Prettier or Apex PMD that should help you with formatting and flag methods that are too complex. ~80 lines/method is so-so. I'd worry about passing long lists of parameters and deeply nested code in functions rather than just their length.
There are general guidelines (from other languages; there's nothing special about Apex!) that ideally a function should fit on one screen so the programmer can see it whole without scrolling. Read this one, maybe it'll resonate with you: https://dzone.com/articles/rule-30-%E2%80%93-when-method-class-or
I wouldn't split it into separate files just for the sake of it, unless you can clearly define some "separation of concerns". Say, 1 trigger per object, 1 trigger handler class (ideally derived from a base class). Chunkier bits go not in the handler but maybe in some "service"-style class that has public static methods and can operate whether called from a trigger, Visualforce, or a Lightning Web Component; maybe some one-off data fix would need them, maybe in the future you'd need to expose part of it as a REST service. And a separate file for unit tests. (As blasphemous as it sounds, try not to write too many comments. As you're learning you'll need comments to remind yourself what built-in methods do, but naming your functions right can help a lot. And a well-written unit test is better at demonstrating the idea behind the code, sample usage and expected errors than comments, which are easily overlooked.) A rough sketch of this layering follows.
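Something like this (each class and the trigger would live in its own file; all names are illustrative):

trigger AccountTrigger on Account (before insert) {
    AccountTriggerHandler.beforeInsert(Trigger.new);
}

public class AccountTriggerHandler {
    // thin handler; the real logic lives in the service class
    public static void beforeInsert(List<Account> newAccounts) {
        AccountService.applyDefaults(newAccounts);
    }
}

public class AccountService {
    // callable from triggers, Visualforce, LWC controllers, one-off scripts, REST...
    public static void applyDefaults(List<Account> accounts) {
        // ... actual business logic here ...
    }
}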
Exception handling is an art. Sometimes it's good to just let it throw an exception. If you have a method that creates an Account, a Contact and an Opportunity, and say the Opportunity fails on a validation rule - what should happen? Only you will know what's right. An exception means the whole thing gets rolled back (no "widow" Accounts), which sucks, but it's probably a "more stable" state for your application. If you naively try-catch it without Database.rollback() - how will you tell the user not to create duplicates with a second click? So maybe you don't need too much error handling ;)
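A hedged sketch of that scenario (field values are made up):

public static void createTrio() {
    // explicit savepoint so a partial failure leaves no "widow" Accounts
    Savepoint sp = Database.setSavepoint();
    try {
        Account a = new Account(Name = 'Acme');
        insert a;
        insert new Contact(LastName = 'Smith', AccountId = a.Id);
        insert new Opportunity(Name = 'Acme deal', AccountId = a.Id,
                               StageName = 'Prospecting', CloseDate = Date.today());
    } catch (Exception e) {
        Database.rollback(sp);
        throw e; // surface the real error instead of hiding it
    }
}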
I am writing a little "fun" Scala/Scala.js project.
On my server I have Entities which are referenced by uuids (inside Refs).
For the sake of "fun", I don't want to use flux/redux architecture but still use React on the client (with ScalaJS-React).
What I am trying to do instead is to have a simple cache, for example:
when a React UserDisplayComponent wants to display the Entity User with uuid=0003
then the render() method calls into the Cache (which is passed in as a prop)
let's assume this is the first time the UserDisplayComponent asks for this particular User (with uuid=0003) and the Cache does not have it yet
then the Cache makes an AjaxCall to fetch the User from the server
when the AjaxCall returns, the Cache triggers a re-render
BUT! Now when the component asks the Cache for the User, it gets the User Entity immediately and no AjaxCall is triggered
The way I would like to implement this is the following:
I start a render()
"stuff" inside render() asks the Cache for all sorts of Entities
the Cache returns either Loading or the Entity itself
at the end of render() the Cache sends all the AjaxRequests to the server and waits for all of them to return
once all AjaxRequests have returned (let's assume that they do - for the sake of simplicity) the Cache triggers a "re-render()", and now all entities that were requested before are provided by the Cache right away
of course it can happen that newly arrived Entities trigger the render() to fetch more Entities, for example if I load an Entity of type case class UserList(ul: List[Ref[User]]). But let's not worry about this now. (See the sketch below for the rough shape of this cache.)
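A minimal sketch of the cache I have in mind (names are illustrative; it fires each Ajax call immediately rather than batching at the end of render(), which is a simplification):

sealed trait Lookup[+A]
case object Loading extends Lookup[Nothing]
final case class Ready[A](entity: A) extends Lookup[A]

final class EntityCache[A](fetch: String => scala.concurrent.Future[A],
                           rerender: () => Unit)
                          (implicit ec: scala.concurrent.ExecutionContext) {
  private var entries = Map.empty[String, Lookup[A]]

  // Called from render(): never blocks, returns what is known right now.
  def get(uuid: String): Lookup[A] =
    entries.getOrElse(uuid, {
      entries += uuid -> Loading     // remember that a fetch is in flight
      fetch(uuid).foreach { a =>     // the Ajax call; when it completes...
        entries += uuid -> Ready(a)  // ...store the entity
        rerender()                   // ...and trigger the re-render
      }
      Loading
    })
}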
QUESTIONS:
1) Am I doing something really wrong if I am doing the state handling this way?
2) Is there an already existing solution for this?
I looked around, but everything was FLUX/REDUX etc., along those lines - which I want to AVOID for the sake of:
"fun"
curiosity
exploration
playing around
I think this simple cache will be simpler for my use case, because I want to take the "Ref"-based "domain model" over to the client in a simple way: as if the client were on the server and the network were infinitely fast with zero latency (this is what the cache would simulate).
Consider what issues you need to address to build a rich dynamic web UI, and what libraries / layers typically handle those issues for you.
1. DOM Events (clicks etc.) need to trigger changes in State
This is relatively easy. DOM nodes expose a callback-based listener API that is straightforward to adapt to any architecture.
2. Changes in State need to trigger updates to DOM nodes
This is trickier because it needs to be done efficiently and in a maintainable manner. You don't want to re-render your whole component from scratch whenever its state changes, and you don't want to write tons of jQuery-style spaghetti code to manually update the DOM, as that would be too error-prone even if efficient at runtime.
This problem is the main reason libraries like React exist; they abstract it away behind a virtual DOM. But you can also abstract it away without a virtual DOM, as my own Laminar library does.
Forgoing a library solution to this problem is only workable for simpler apps.
3. Components should be able to read / write Global State
This is the part that flux/redux solves. Specifically, these are issues #1 and #2 all over again, except applied to global state as opposed to component state.
4. Caching
Caching is hard because the cache needs to be invalidated at some point, on top of everything else above.
Flux/redux do not help with this at all. One library that does help is Relay, which works much like your proposed solution, except it is far more elaborate and sits on top of React and GraphQL. Reading its documentation will help you with your problem. You can definitely implement a small subset of Relay's functionality in plain Scala.js if you don't need the whole React/GraphQL baggage, but you should know the prior art.
5. Serialization and type safety
This is the only issue on this list that is specific to Scala.js, as opposed to JavaScript and SPAs in general.
Scala objects need to be serialized to travel over the network (into JSON, protobufs, or whatever else), but you need a system for this that does not involve error-prone manual work. There are many Scala.js libraries that address this issue, such as upickle, autowire, endpoints, sloth, etc. Key words: "Scala JSON library" or "Scala type-safe RPC", depending on what kind of solution you want.
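For instance, a minimal round-trip with upickle (the User case class here is illustrative) might look like:

import upickle.default._

case class User(uuid: String, name: String)
object User {
  // ReadWriter derived at compile time, no manual field mapping
  implicit val rw: ReadWriter[User] = macroRW
}

val json = write(User("0003", "Ada"))  // {"uuid":"0003","name":"Ada"}
val user = read[User](json)            // back to a User instance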
I hope these principles suffice as an answer. When you understand these issues, it should be obvious whether your solution will work for a given use case or not. As it is, you didn't describe how your solution addresses issues 2, 4, and 5. You can use some of the libraries I mentioned or implement your own solutions with similar ideas / algorithms.
On a minor technical note, consider implementing an async, Future-based API for your cache layer, so that it returns Future[Entity] instead of Loading | Entity.
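For example (a sketch; error handling and cache invalidation omitted):

import scala.concurrent.Future

// cache the Future itself, so concurrent requests for the same uuid
// share a single Ajax call and later calls resolve immediately
final class FutureCache[A](fetch: String => Future[A]) {
  private var inFlight = Map.empty[String, Future[A]]

  def get(uuid: String): Future[A] =
    inFlight.getOrElse(uuid, {
      val f = fetch(uuid)
      inFlight += uuid -> f
      f
    })
}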
In his book on DbContext, Rowan Miller shows how to use the DbSet.Local property to avoid 1) unnecessary roundtrips to the database and 2) passing around collections (created with e.g. ToList()) in the application (page 24). I then tried to follow this approach. However, I noticed that from one using {} block to another, the DbSet.Local property becomes empty:
ObservableCollection<Destination> destinationsList;
using (var context = new BAContext())
{
var query = from d in context.Destinations …;
query.Load();
destinationsList = context.Destinations.Local; //Nonzero here.
}
//Do stuff with destinationsList
using (var context = new BAContext())
{
//context.Destinations.Local zero here again;
//So no way of getting the in-memory data from the previous using- block here?
//Do I have to do another roundtrip to the database here to get the same data I wanted
//to cache locally???
}
Then, what is the point on page 24? How can I avoid passing around my collections if DbSet.Local is only usable inside the using block? Furthermore, how can I benefit from change tracking if these short-lived context instances don't hand any cached data over to each other under the hood? So, if contexts should be short-lived to free resources such as connections, do I have to give up caching? I.e., I can't have both (short-lived connections but a long-lived cache)? Then my only option would be to store the results returned by the query in my own variables, which is exactly what is discouraged in the motivation on page 24?
I am developing a WPF application which may also become multi-tiered in the future, involving WCF. I know Julia has an example of this in her book, but I currently don't have access to it. I found several others on the web, e.g. http://msdn.microsoft.com/en-us/magazine/cc700340.aspx (old ObjectContext, but good at explaining the inter-layer collaborations). There, a long-lived context is used (and although the disadvantages are mentioned, no solution to them is provided).
It's not only the single Destinations.Local that gets lost; as you surely know, all the other entities fetched by the query are, too.
[Edit]:
After some more reading in Julia Lerman's book, it seems to boil down to the fact that EF does not have 2nd-level caching by default; with some (considerable, I think) effort, however, one can add 3rd-party caching solutions, as described in the book and in various articles on MSDN, CodeProject, etc.
I would have appreciated it if this problem had been mentioned in the section about DbSet.Local in the DbContext book - that it is in fact a first-level cache which is destroyed when the using {} block ends (just my proposal to make it more transparent to readers). After the first reading I had the impression that DbSet.Local would always return the same reference (Singleton-style) in the second using {} block as well, despite the new DbContext instance.
But I am still unsure whether the 2nd-level cache is the way to go for my WPF application (Julia mentions the 2nd-level cache in her article in the context of distributed applications). Or is the way to go to get my aggregate root instances (DDD, Eric Evans) of my domain model into memory via one or a few queries in a using {} block, disposing the DbContext and holding only the references to the aggregate instances, thereby avoiding a long-lived context? It would be great if you could help me with this decision.
http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
http://www.codeproject.com/Articles/435142/Entity-Framework-Second-Level-Caching-with-DbConte
http://blog.3d-logic.com/2012/03/31/using-tracing-and-caching-provider-wrappers-with-codefirst/
The Local property provides a “local view of all Added, Unchanged, and Modified entities in this set”. Like all change tracking it is specific to the context you are currently using.
The DbContext is a workspace for loading data and preparing changes.
If two users were to add changes at the same time, they must not see each other's changes before saving them. Either of them may discard their prepared changes, which would suddenly lead to problems for the other user as well.
A DbContext should indeed be short-lived, but it may live longer than super-short when necessary. Also consider that you may not save resources by keeping it short-lived if you do not load and discard data but only add changes you will save. But it is not only about resources: it is also about the DB state potentially changing while the DbContext is still active and has data loaded, which may be important to keep in mind for longer-living contexts.
If you do not yet know all the related changes you want to save into the database at once, then I suggest you not use the DbContext to store your changes in memory, but rather a data structure in your code.
You can of course use entity objects for doing so without an active DbContext. This makes sense if you do not have another appropriate data class for it and do not want to create one, or if you decide that preparing the changes in them makes more sense. You can then use DbSet.Attach to attach the entities to a DbContext for saving the changes when you are ready.
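A small hedged sketch of that last step (the destination variable is illustrative, held in memory from an earlier, already-disposed context):

using (var context = new BAContext())
{
    context.Destinations.Attach(destination);                // re-attach the detached entity
    context.Entry(destination).State = EntityState.Modified; // mark it so SaveChanges issues an UPDATE
    context.SaveChanges();
}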
Weird question, but I'm not sure if it's an anti-pattern or not.
Say I have a web app that will be rendering 1000 records to an HTML table.
The typical approach I've seen is to send a query down to the database, translate the records in some way into some abstract state (be it an array, an object, etc.) and place the translated records into a collection that is then iterated over in the view.
As the number of records grows, this approach uses up more and more memory.
Why not send, along with the query, a callback that performs an operation on each of the translated rows as they are read from the database? This would mean you don't need to collect the data for further iteration in the view, so the memory footprint shrinks, and you're not iterating over the data twice.
There must be something implicitly wrong with this approach, because I rarely see it used anywhere. What's wrong with this approach?
Thanks.
Actually, this is exactly how a well-developed application should behave.
There is nothing wrong with this approach, except that not all database interfaces allow you to do this easily.
If we're talking about tabulating 10 records for yet another social network, there is no need to mess with callbacks if you can get an array of hashes or whatever with a single call that is already implemented for you.
There must be something implicitly wrong with this approach, because I rarely see it used anywhere.
I use it. Frequently. Even when I wouldn't use too much memory by repeatedly copying the data, using a callback just seems cleaner. In languages with closures, it also lets you keep relevant code together while factoring out the messy DB stuff.
This is a "limited by your tools" class of problem: most programming languages don't let you say "do something around this code". This was solved in recent years with the advent of closures. Think of a closure as a way to pass code into another method, which is then executed in that method's context. For example, in GSQL (Groovy SQL), you can write:
def l = []
// assumes sql is a groovy.sql.Sql instance connected to the database
sql.eachRow("select id from table where time > ?", [time]) { row ->
    l << row[0]
}
This will open a connection to the database, create a statement and a result set, and then run l << row[0] for each row the DB returns. Note that the closure runs inside of sql.eachRow(), but it can access local variables from the enclosing scope (l) as well as its own parameter (row).
With this kind of code, you can even generate the result of an HTTP request on the fly without keeping much of the page in RAM at any time. In my case, I'd stream a 2 MB document to the browser using only a few KB of RAM, and the browser would then chew for 83 seconds parsing it.
This is roughly what the iterator pattern allows you to do. In many cases it breaks down at the interface between your application and the database. Technologies like LINQ even have solutions that can send code to the database.
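For illustration, a hedged C# sketch of the iterator pattern over a raw data reader (the Row type and the query are made up); each row is translated lazily as the caller iterates, so the full result set is never collected:

using System.Collections.Generic;
using System.Data.Common;

public record Row(int Id, string Name);

public static class RowStream
{
    public static IEnumerable<Row> Stream(DbConnection conn)
    {
        using var cmd = conn.CreateCommand();
        cmd.CommandText = "select id, name from records";
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            yield return new Row(reader.GetInt32(0), reader.GetString(1)); // one row at a time
    }
}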
I've found it easier to use an interface resolver than a deep callback that's hooked up through several classes. MS has a much fancier version than mine called Unity. This provides a much cleaner way of accessing classes that should not be tightly coupled:
http://www.codeplex.com/unity
I have a User object that, upon successful authentication, is tucked into the session (sans security info) for easy recall and for determining whether we have an authenticated user or an anonymous session. There are several paths by which the user can alter some or all of his or her information, and I'd like to keep that session value up to date. The obvious answer is to update the value in the afterSave() callback, but that, of course, violates MVC.
Is there another way of capturing every change in one place so that I don't have to drop session writes all over the place? I can't think of anything, nor have I been able to find any other ideas. Am I the only person trying to do something like this?
Thanks.
Final Solution: I marked neilcrookes' response as the answer, frankly, because there doesn't seem to be a better way. Since this way violates my OCD senses, though, I took a slightly different path. I decided to have my User::authenticate() method return the authenticated user object to the caller so it can do whatever it wants with it. One of the things the callers "want" to do is drop that value into the session. It's redundancy, but it's very, very limited. In my mind, that felt better than accessing the session from the model (though it's certainly a damned-if-you-do, damned-if-you-don't scenario).
//in users controller
if ($this->User->save()) {
    $this->Auth->login($this->User->read());
    $this->Session->setFlash(/* .. etc */);
}
And for the record, I do not agree with neilcrookes' answer, but I will refrain from feeding the troll.
Some might disagree, but I'd screw MVC, do it in Model::afterSave() and use $_SESSION - test for the session before writing to it, in case it's not started, for example when you are saving against the model in a shell or something.
MVC is a general pattern - a guideline. You can bang your head against it trying to figure out how to achieve something that doesn't quite fit, or just do it another way and move on to something more important.
Bring on the flames.
In afterSave(), use it like this:

$this->Session->write('Auth.User.mmid', $kinde['Kindle']['id']);
You should be able to just use AppController to create the necessary callback(s) that keep your session data up to date. So, for instance, you could have your User model's afterSave() set a property called changed to true. Then in your AppController::afterFilter() you check that property and update the session data as necessary.
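A hedged sketch of that flow (CakePHP 1.x style; the property and session key names are illustrative, and it assumes the User model is available on the controller):

// In the User model:
function afterSave($created) {
    $this->changed = true; // flag for AppController to pick up
}

// In AppController:
function afterFilter() {
    parent::afterFilter();
    if (!empty($this->User->changed)) {
        $user = $this->User->read(null, $this->Auth->user('id'));
        unset($user['User']['password']); // keep security info out of the session
        $this->Session->write('Auth.User', $user['User']);
    }
}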
Alternatively, you could write a component through which to update your user info and also your session data. Then any controller that needs to change user info just needs to include that component.
There's no need to write redundant code or break MVC.