So, I've narrowed my nasty problem down to this ...
import logging
from google.appengine.ext import ndb

class TestModel(ndb.Model):
    json1 = ndb.JsonProperty(default={})

entity1 = TestModel()
entity1.json1['val1'] = 'added via entity1'

entity2 = TestModel()
entity2.json1['val2'] = 'added via entity2'
logging.warn('entity2.json1 = {}'.format(entity2.json1))
In the log, I see this:
... entity2.json1 = {'val2': 'added via entity2', 'val1': 'added via entity1'}
Surprisingly, and EXTREMELY dangerously, I see that a value set in the first instance, entity1, has leaked into the second instance, entity2.
Is it unreasonable of me to expect the second instantiation of TestModel to provide me with a "clean" instance, especially since I have default={} for the JsonProperty? Should I be doing something that I'm not? Or might this be a bug with ndb?
UPDATE: My best workaround so far: always do TestModel(json1={}). But I worry that if one of our developers forgets to do this, we could get one customer's data leaking into another's.
UPDATE: This seems to have been reported to Google already. Issue 35898756 shows that this (mis)behaviour can even happen between requests. It was opened 3 years ago and is still awaiting a fix.
For anyone interested in this problem, I've settled on a workaround that'll let me sleep better at night. It won't fit into a comment above, so I'm answering my own question (hope it's okay) ...
This does seem to be a bug (see 35898756) that's 3 years old, so it's not likely to be fixed soon. The workarounds above include always doing TestModel(json1={}), or sub-classing JsonProperty and using my custom class everywhere, never ndb's (and I would have to repeat that for all other similar properties, like PickleProperty). These work, but worry me because EVERY developer on the project has to do the right thing EVERYWHERE in the code base ALL THE TIME. Ha!
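For reference, the sub-classing route might look roughly like this. It's only a sketch: _retrieve_value, _set_value and _default are ndb internals, so verify them against your SDK version before relying on this.

import copy
from google.appengine.ext import ndb

class CopyDefaultJsonProperty(ndb.JsonProperty):
    """Hand each entity its own copy of the default instead of the shared dict."""
    def _get_value(self, entity):
        if self._retrieve_value(entity) is None:
            # No value stored on this entity yet: store a private copy of the
            # default so mutations can't leak into other instances.
            self._set_value(entity, copy.deepcopy(self._default))
        return super(CopyDefaultJsonProperty, self)._get_value(entity)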
So, here's a workaround that means "doing the right thing" is localized to just my Models (much less code to worry about).
class TestModel(ndb.Model):
    json1 = ndb.JsonProperty(default={})

    def __init__(self, **kwargs):
        kwargs.setdefault('json1', {})  # <---- ADDED THIS!
        super(TestModel, self).__init__(**kwargs)
In the constructor of my model, if the keyword arg isn't already there, I add one that sets the property to a fresh {}. This seems to prevent values leaking from one instance into another.
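If you have many models (or several mutable-default properties), the same idea can be pushed into a shared base class. Again, a sketch only: it assumes ndb's internal _properties map and the _default attribute on Property instances, so treat those as implementation details to verify.

import copy
from google.appengine.ext import ndb

class SafeDefaultsModel(ndb.Model):
    """Give every new entity its own copy of any mutable property default."""
    def __init__(self, **kwargs):
        for name, prop in self._properties.items():
            default = getattr(prop, '_default', None)
            if isinstance(default, (dict, list)) and name not in kwargs:
                kwargs[name] = copy.deepcopy(default)
        super(SafeDefaultsModel, self).__init__(**kwargs)

class TestModel(SafeDefaultsModel):
    json1 = ndb.JsonProperty(default={})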
I am new to Salesforce Apex coding. The first class I am developing has 10 methods and is some 800 lines.
I haven't added much exception handling yet, so the size should swell further.
I am wondering what the best practice for Apex code is... should I create 10 classes with 1 method each, instead of keeping 1 class with 10 methods?
Any help on this would be greatly appreciated.
Thanks
Argee
What do you use for coding? Try to move away from the Developer Console. VS Code has some decent plugins like Prettier or Apex PMD that should help you with formatting and with flagging methods that are too complex. ~80 lines/method is so-so. I'd worry about passing long lists of parameters and having deeply nested code in functions rather than about their raw length.
There are general guidelines (from other languages; there's nothing special about Apex!) that ideally a function should fit on one screen so the programmer can see it whole without scrolling. Read this one, maybe it'll resonate with you: https://dzone.com/articles/rule-30-%E2%80%93-when-method-class-or
I wouldn't split it into separate files just for the sake of it, unless you can clearly define some "separation of concerns". Say 1 trigger per object, and 1 trigger handler class (ideally derived from a base class). Chunkier bits go not in the handler but maybe in some "service" style class that has public static methods and can operate regardless of whether it's called from a trigger, Visualforce, or a Lightning Web Component; maybe some one-off data fix would need these, or maybe in future you'd need to expose part of it as a REST service. And put unit tests in a separate file. (As blasphemous as it sounds, try not to write too many comments. As you're learning you'll need comments to remind yourself what built-in methods do, but naming your functions right can help a lot. And a well-written unit test is better at demonstrating the idea behind the code, sample usage and expected errors than comments, which are often overlooked.)
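For example, the split might look roughly like this (the object, file and method names are made up for illustration):

// AccountTrigger.trigger - kept thin, all logic lives elsewhere
trigger AccountTrigger on Account (before insert, before update) {
    AccountService.applyDefaults(Trigger.new);
}

// AccountService.cls - public static methods, callable from triggers,
// Visualforce/LWC controllers, REST endpoints or one-off scripts alike
public with sharing class AccountService {
    public static void applyDefaults(List<Account> accounts) {
        for (Account acc : accounts) {
            if (acc.Rating == null) {
                acc.Rating = 'Warm'; // illustrative business rule
            }
        }
    }
}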
Exception handling is an art. Sometimes it's good to just let it throw. If you have a method that creates an Account, a Contact and an Opportunity, and say the Opportunity fails on a validation rule - what should happen? Only you will know what's right. An exception means the whole thing gets rolled back (no "widow" Accounts), which sucks, but it's probably a "more stable" state for your application. If you naively try-catch it without Database.rollback() - how will you tell the user not to create duplicates with a 2nd click? So maybe you don't need too much error handling ;)
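If you do want the catch-and-rollback behaviour described above, a minimal sketch (variable names are illustrative):

Savepoint sp = Database.setSavepoint();
try {
    insert acc;   // Account
    insert con;   // Contact
    insert opp;   // Opportunity - may fail on a validation rule
} catch (DmlException e) {
    Database.rollback(sp);  // no "widow" Accounts left behind
    throw e;                // or surface a friendly message instead
}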
I am looking at these two examples from two different repositories:
public virtual void Delete(T entity)
{
    DbEntityEntry dbEntityEntry = DbContext.Entry(entity);
    if (dbEntityEntry.State != EntityState.Deleted)
    {
        dbEntityEntry.State = EntityState.Deleted;
    }
    else
    {
        DbSet.Attach(entity);
        DbSet.Remove(entity);
    }
}

public virtual void Delete(T entity) {
    dbset.Remove(entity);
}
Can someone explain the difference? Why did the author of the first add all the additional lines?
What is the best way for me to perform a delete using EF 6 and a repository?
I can't answer your question. Here's why.
If you want to delete an object through Entity Framework two things must be true:
The object must be attached to the context.
The object must be in a Deleted state.
The aim of the first Delete method seems to be to ensure that both conditions will be met after it has run. But that's not as simple as it seems. It has to sniff out the object's current state, attach it if necessary, set its state if necessary. And, not even accounted for yet, it also has to make sure it's not attached to any other context. This, and Anthony Chu's comments, show that the first method isn't nearly complex enough yet.
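For illustration, a version that at least handles the detached case might look like this (a sketch only; it still ignores the "attached to another context" problem):

public virtual void Delete(T entity)
{
    var entry = DbContext.Entry(entity);
    if (entry.State == EntityState.Detached)
    {
        DbSet.Attach(entity);   // condition 1: attach to this context
    }
    DbSet.Remove(entity);       // condition 2: mark the entity as Deleted
}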
So keep it simple and use the second method? Well, the second method presumes that the calling code knows it's alright to just call dbset.Remove; it knows all the things the first method tries to find out. But if it's that smart already, why would it be too dumb to call DbSet.Remove directly? (Aside from the fact that the second method itself is pretty dumb, as Timothy Walters points out.)
And then, there are three ways to mark an object as Deleted (sketched in code after this list):
Remove it from a DbSet
Mark the state of the DbEntityEntry as Deleted
(Under certain conditions) Remove it from a parent's child collection
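In code, roughly (the context, entities and collection names are illustrative):

void DeleteExamples(MyContext context, Order order, Customer parent, OrderLine line)
{
    context.Orders.Remove(order);                      // 1. remove from a DbSet
    context.Entry(order).State = EntityState.Deleted;  // 2. set the entry's state
    parent.Lines.Remove(line);                         // 3. deletes only in an
                                                       //    identifying relationship;
                                                       //    otherwise EF nulls the FK
}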
Trying to confine all deletes to calling one repository method may be a great restriction to writing the best code.
That's why I can't answer your question. I don't believe in repository layers on top of DbSet anyway. So as for me, the best way to perform deletes depends on what the code of each of your use cases looks like.
I have a WPF application with a login window before displaying the MainWindow.
I use MEF to load all modules/parts. Before the MainWindow starts, I check the user's login data against the parts which I then display. The parts are Shared and NonShared.
[ImportMany]
private IEnumerable<Lazy<IComponent, IComponentMetadata>> _components;
[ImportMany("Resourcen", typeof(ResourceDictionary))]
private IEnumerable<ResourceDictionary> _importResourcen;
var catalog = new AggregateCatalog();
catalog.Catalogs.Add(new AssemblyCatalog(Assembly.GetExecutingAssembly()));
catalog.Catalogs.Add(new DirectoryCatalog(AppDomain.CurrentDomain.BaseDirectory));
_mefcontainer = new CompositionContainer(catalog);
_mefcontainer.ComposeParts(somepartwithaSharedExport, this);
This all works fine. But now I tried the "relogin".
_mefcontainer.Dispose();
_mefcontainer = null;
//here the stuff that works from above
At first I thought it worked, but it seems that the parts I created the first time still exist in memory and I have no chance to "kill" them, so I get an OutOfMemoryException when I relogin enough times.
That's why I use this approach now:
System.Diagnostics.Process.Start(Application.ResourceAssembly.Location);
App.ShutDown();
I don't feel happy with this.
Is there a way to clean up the CompositionContainer and create a new one?
You could try to call _mefcontainer.RemovePart(somepartwithaSharedExport). More details here: http://mef.codeplex.com/wikipage?title=Parts%20Lifetime
For the non-shared part you can call CompositionContainer.ReleaseExport:
_mefcontainer.ReleaseExport(nonSharedExport);
For more info, have a look at the sample code from this answer.
As far as I know, the shared parts cannot be released without disposing the container. If you go down that path, then you will also have to make sure that no references to these objects are kept, to allow the GC to collect them. The documentation reference from mrtig's answer provides a lot of useful details concerning the lifetime of parts, and you should probably study it along with weshaggard's answer to a similar question. It also explains what happens to disposable parts.
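If you do go the dispose-and-recreate route, a sketch based on the code in the question (this only helps if nothing else keeps references to the old parts):

private CompositionContainer _mefcontainer;

private void Recompose()
{
    if (_mefcontainer != null)
    {
        _mefcontainer.Dispose();   // disposes the IDisposable parts it owns
        _mefcontainer = null;      // drop our reference so the GC can collect
    }

    var catalog = new AggregateCatalog();
    catalog.Catalogs.Add(new AssemblyCatalog(Assembly.GetExecutingAssembly()));
    catalog.Catalogs.Add(new DirectoryCatalog(AppDomain.CurrentDomain.BaseDirectory));

    _mefcontainer = new CompositionContainer(catalog);
    _mefcontainer.ComposeParts(this);  // re-imports _components / _importResourcen
}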
I'm trying to save an object and verify that it is saved right after, and it doesn't seem to be working.
Here is my object
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

@Entity
public class PlayerGroup {
    @Id public String n;          // e.g. "sharks"
    public ArrayList<String> m;   // members, e.g. [39393, 23932932, 3223]
}
Here is the code for saving then trying to load right after.
playerGroup = new PlayerGroup();
playerGroup.n = reqPlayerGroup.n;
playerGroup.m = reqPlayerGroup.m;
ofy().save().entity(playerGroup).now();
response.i = playerGroup;
PlayerGroup newOne = ofy().load().type(PlayerGroup.class).id(reqPlayerGroup.n).get();
But the "newOne" object is null. Even though I just got done saving it. What am I doing wrong?
--Update--
If I try later (like minutes later) sometimes I do see the object, but not right after saving. Does this have to do with the high replication storage?
I had the same behavior some time ago and asked a question on the objectify Google Group.
Here is the answer I got:
You are seeing the eventual consistency of the High-Replication Datastore. There has been a lot of discussion of this exact subject on the Objectify list in Google Groups, including several links to the Google documentation on the subject.
Basically, any kind of query which does not include an ancestor() may return results from a stale view of the datastore.
Jeff
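In Objectify terms, the distinction in that answer looks roughly like this (a sketch only; the method names follow the version used in the question):

// Strongly consistent: a get by key reads the entity record directly.
PlayerGroup byKey = ofy().load().type(PlayerGroup.class).id("sharks").get();

// Eventually consistent: a non-ancestor query goes through global indexes,
// which may not have caught up with a recent save yet.
List<PlayerGroup> all = ofy().load().type(PlayerGroup.class).list();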
I also got another good answer about how to deal with the behavior:
For deletes, query for keys and then batch-get the entities. Make sure
your gets are set to strong consistency (though I believe this is the
default). The batch-get should return null for the deleted entities.
When adding, it gets a little trickier. Index updates can take a few seconds. AFAIK, there are three ways out of this:
1. Use precomputed results (avoiding the query entirely). If your next view is the user's recently created entities, keep a list of those keys in the user entity, and update that list when a new entity is created. That list will always be fresh, no query required. Besides avoiding stale indexes, this also speeds up your app. The more result sets you can reliably manage, the more queries you can avoid.
2. Hide the latency by "enhancing" the query results with the recently added entities. Depending on the rate at which you're adding entities, either inject only the most recent key, or combine this with the solution in 1.
3. Hide the latency by taking the user through some unaffected views before landing on your query-based view. This strategy definitely has a smell over it. You need to make sure those extra steps are relevant to the user, or you'll give a poor experience.
Butterflies, Joakim
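A sketch of option 1 with Objectify (the Member entity and its field names are made up for illustration):

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import static com.googlecode.objectify.ObjectifyService.ofy;

@Entity
public class Member {
    @Id public Long id;
    // Precomputed result set: keys of the groups this member recently created.
    public List<Key<PlayerGroup>> recentGroups = new ArrayList<Key<PlayerGroup>>();
}

// When creating a group, save the group and the updated key list together:
member.recentGroups.add(Key.create(PlayerGroup.class, playerGroup.n));
ofy().save().entities(member, playerGroup).now();

// Later, batch-get by key: strongly consistent, no index involved.
Collection<PlayerGroup> fresh = ofy().load().keys(member.recentGroups).values();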
You can read it all here:
How come If I dont use async api after I'm deleting an object i still get it in a query that is being done right after the delete or not getting it right after I add one
Another good answer to a similar question : Objectify doesn't store synchronously, even with now
Let's say I have this:
_articlesService.SaveAsync(Model, AddressOf OnSaveCompleted)
The OnSaveCompleted method does a couple of things, obviously. It's a
Protected Overridable Sub OnSaveCompleted(ByVal asyncValidationResult As AsyncValidationResult)
In my unit test, I need to run a mocked SaveAsync and have OnSaveCompleted called anyway, because the method sends out events that I need to know have been sent.
Right now, the code just walks past that method, so it's never executed.
Need help solving this because I'm stuck right now.
If I understand your context right:
you have a class under test which uses an ArticlesService
your ArticlesService (a collaborating class) is responsible for sending some events
you want to verify that your class under test is behaving correctly
you want to do that by checking for the events.
If that's the case, you may be making your class responsible for more than it needs to be. You only need to verify that the ArticlesService was asked to SaveAsync. You don't need to worry about what the ArticlesService then went off and did.
Think of it this way. You are a Class-Under-Test. You have too much work to do, so you've asked some other people to help you. You have two choices. You can either chase them up, worrying about whether they're doing it right, or you can just trust them.
Rather than micro-managing classes, you can write a separate test which gives some examples of the way the ArticlesService will work, which will check that the ArticlesService is doing its job correctly. Your CUT's responsibility is to delegate that work effectively.
If you actually need the events to be raised so that your CUT can respond, that's a separate aspect of its behaviour, and you can do it with Moq's "Raise" method, documented in "Events", here:
http://code.google.com/p/moq/wiki/QuickStart
Edit: You can also use "Callback", documented on the same link, to do stuff with the args being passed to you, including OnSaveCompleted. Not sure if it's going to help or not; it's tricky to see what you're doing without both the code and the failing test. Good luck anyway!
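For example, in C# (IArticlesService, Article and the Action(Of AsyncValidationResult) callback signature are taken from the question and assumed, not verified):

var articlesService = new Mock<IArticlesService>();
articlesService
    .Setup(s => s.SaveAsync(It.IsAny<Article>(),
                            It.IsAny<Action<AsyncValidationResult>>()))
    .Callback<Article, Action<AsyncValidationResult>>(
        (model, onCompleted) => onCompleted(new AsyncValidationResult()));

// Now exercising the class under test runs OnSaveCompleted,
// so its events fire during the test.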
Close, but not exactly like that.
We don't actually send out an event in the ArticleService.
The method SaveAsync takes an Article to be saved, and a method to be called once the saving is complete.
The problem is that the "OnSaveCompleted" method isn't being called. (This method exists in the ViewModel base class, so the service isn't sending the event, the viewmodel is.)
But we have our own implementation of WCF service proxies, so this is probably what's messing with us, since we don't use the generated code.
Think we will have to rework our service infrastructure a bit to solve this.
So it's a special case, just wanted to throw the question out just in case. :)
Thanks anyway for the answer.