JDO, unowned relationship and persisting multiple objects at once - google-app-engine

I would like to have an unowned relationship between two persistence-capable classes, with both instances created at once. The relationship has to be unowned, because my app can’t really guarantee that the two instances I want to persist will stay in the same entity group for good. My relationship is a bidirectional one-to-many:
// in first class
@Persistent
private Set<Key> instancesOfSecondClass;
// in second class
@Persistent
private Key instanceOfFirstClass;
Then, in one servlet doPost() call, I need to persist one instance of each of these classes. I actually made some nice methods to maintain both sides of the relationship. First, the Key id (I use …
@PrimaryKey
@Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
private Key id;
… as primary keys everywhere) is added to the Set in the first class, then a convenience method is called on the instance of the second class to notify it of this change and update its single Key property accordingly, comparing old values, with null checks and everything. Of course I don’t yet have any Key while an instance is fresh, so I need to persist them both first with a pm.makePersistent() call, then do the assignment and wait for pm.close() to happen later.
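Roughly, the flow inside doPost() looks like this (FirstClass/SecondClass, their accessors and constructor arguments are simplified placeholders, not my actual code):
// inside doPost(); javax.jdo.JDOHelper and javax.jdo.PersistenceManager imports assumed
PersistenceManager pm = JDOHelper.getPersistenceManagerFactory("transactions-optional").getPersistenceManager();
try {
    FirstClass first = new FirstClass("some name");      // constructor creates a new HashSet<Key>()
    SecondClass second = new SecondClass("other name");
    // persist both so the datastore assigns their Key ids
    pm.makePersistent(first);
    pm.makePersistent(second);
    // maintain both sides of the unowned relationship by hand
    first.getInstancesOfSecondClass().add(second.getId());
    second.setInstanceOfFirstClass(first.getId());
    // ...and rely on pm.close() to flush these field updates
} finally {
    pm.close();
}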
But, what really happens, is pretty confusing. After running the servlet (whose sole purpose is to return serialized Keys of these two instances for use elsewhere), I go check the datastore viewer and see:
the instance of the first class got persisted (actually I have one more issue here: a NullPointerException is thrown from org.datanucleus.store.mapped.mapping.PersistenceCapableMapping.postInsert(PersistenceCapableMapping.java:1039) at the very moment I call pm.makePersistent() on the instance holding the Set<Key>)
the instance of the second class got persisted (so far so good)
the instance of the second class has the Key reference to the first instance persisted, yay
the set of Keys on the first instance is… wait for it… empty
The local datastore viewer shows just an empty space, while the online one shows <null>, even though my constructor for the class (the one with arguments) creates a new HashSet<T> for the relation.
Using Google App Engine for Java 1.6.4; this happened in 1.6.3 too.
I spent a whole day trying to solve this: putting transactions in between, using two PersistenceManagers, changing the order of the persist calls, enabling cross-group transactions; nothing helped.
In general, I would be happy to find any working way to create and update two instances in two separate entity groups with an unowned relationship between them, regardless of the possible inconsistency (that doesn’t worry me much, given how infrequently updates are likely to happen).
In the meantime, I found two more possible solutions, but haven’t tried them yet: 1. create not two but four PersistenceManagers (one to create the first instance, a second to create the second instance, a third to update one side of the relationship, and a fourth to update the other side), or 2. detach the instances and make them persistent again after the update (sketched below). Not sure which way to go now.
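For completeness, option 2 would look roughly like this (again with placeholder names):
// option 2: detach copies, link them while detached, then re-attach
FirstClass detachedFirst = pm.detachCopy(first);
SecondClass detachedSecond = pm.detachCopy(second);
detachedFirst.getInstancesOfSecondClass().add(detachedSecond.getId());
detachedSecond.setInstanceOfFirstClass(detachedFirst.getId());
// makePersistent() on a detached instance re-attaches it and applies the changes
pm.makePersistent(detachedFirst);
pm.makePersistent(detachedSecond);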

v2 of the plugin provides "real" unowned relations, with none of the Key hackery that was needed before. You can persist both sides in the same transaction if you have the multiple-entity-groups (XG) flag set, or you can persist them non-transactionally.
Just look at the tests for ideas: http://code.google.com/p/datanucleus-appengine/source/browse/trunk/tests/com/google/appengine/datanucleus/jdo/JDOUnownedOneToManyTest.java
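For illustration, the mapping in v2 looks roughly like this (a sketch using the v2 plugin's @Unowned annotation; the linked tests are the authoritative reference):
import java.util.HashSet;
import java.util.Set;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import com.google.appengine.datanucleus.annotations.Unowned;

@PersistenceCapable
public class FirstClass {
    // ... primary key as before ...

    // the relation now holds the objects themselves, not a Set<Key>
    @Persistent
    @Unowned
    private Set<SecondClass> instancesOfSecondClass = new HashSet<SecondClass>();
}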
I have no control over what goes in their docs, just what goes in their code ;-)

Related

How to design a DataBase for well structured Java API Documentation?

I have a scenario to design a database for Java API Documentation, in which I have to present the information about every class and method in a given piece of code. For example, consider:
main()
{
    String foo = "test";
    foo.substring(1, 2);
}
Here, I have to show documentation for class String and method substring from Java docs (The classes/methods can be any valid class/method).
My Observations:
Class names may repeat across various packages, so they can't serve as unique identifiers. The same goes for method names.
A method named foo() can be:
1) A method of the class itself
2) An override of a method of some parent class
3) Simply inherited from a parent class
With this info, I have the following tables:
1)
CREATE TABLE "JAVACLASSDESCRIPTION"
( "CLASSFULLNAME" VARCHAR2(400) NOT NULL ENABLE,
"CLASSNAME" VARCHAR2(400),
"CLASSDEFINATION" CLOB,
"CLASSDECLARATION" CLOB,
"INHERITEDCLASSES" CLOB,
CONSTRAINT "JAVACLASSDESCRIPTION_PK" PRIMARY KEY ("CLASSFULLNAME") ENABLE
) ;
INHERITEDCLASSES is a multi-valued attribute. I know that's really poor design, but I have reasons. The lookup is intended to work like this:
1) First, check if the method is available in the JAVAMETHODDESCRIPTION table (either as the class's own method or as an override).
2) If not, it has to be a method inherited from some parent class, so we have to show the documentation of that parent class's method. To save multiple searches, the value INHERITEDCLASSES contains looks as follows (for some random class):
java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
i.e. its parent class is java.lang.Object, followed by the list of methods, so that it's easy to match the method name.
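To make the lookup concrete, this is roughly how I imagine the matching on the application side (assuming a Java application; the method below is just an illustration, not existing code):
// Given the INHERITEDCLASSES value of a class, find the parent class that declares
// the method; that parent's documentation is then shown. Assumes one
// "parent.Class: method, method, ..." entry per line of the CLOB.
static String findDeclaringParent(String inheritedClasses, String methodName) {
    for (String entry : inheritedClasses.split("\\r?\\n")) {
        String[] parts = entry.split(":", 2);
        if (parts.length < 2) {
            continue;
        }
        for (String method : parts[1].split(",")) {
            if (method.trim().equals(methodName)) {
                return parts[0].trim();   // e.g. "java.lang.Object"
            }
        }
    }
    return null;                          // not an inherited method
}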
2)
CREATE TABLE "JAVAMETHODDESCRIPTION"
( "CLASSFULLNAME" VARCHAR2(400) NOT NULL ENABLE,
"METHODNAME" VARCHAR2(400) NOT NULL ENABLE,
"METHODDECLARATION" VARCHAR2(400),
"METHODDEFINATION" CLOB
) ;
ALTER TABLE "JAVAMETHODDESCRIPTION" ADD CONSTRAINT "JAVAMETHODDESCRIPTION_FK" FOREIGN KEY ("CLASSFULLNAME")
REFERENCES "JAVACLASSDESCRIPTION" ("CLASSFULLNAME") ON DELETE CASCADE ENABLE;
Sample output:
I know there are a lot of design issues. How can I improve my database design?
EDIT:
About the multi-valued entry: decomposing it into another table may result in redundant entries. For example, the Object class is the superclass of all classes.
Link for documentation page
Hard to say much because a lot of things are unspecified, but here are a few things to consider:
A loaded Java class is identified by its unique full name (x.y.z.Myclass$Inner.class) and its classloader. If you don't care about loaded classes but only about the source code (javadoc), then you can probably skip the classloader.
Javadoc can exist at method, class, package and field level (each of these can be uniquely identified by its signature).
If you want to support javadoc for inherited methods, then you need to model multiple inheritance (javadoc can also be inherited from interfaces) and, in your application, traverse that tree upwards on request to display the javadoc; the other option is to do the traversal while saving the content to the DB. A rough sketch of the on-request traversal follows below.
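As an example, the on-request traversal could look roughly like this (the two maps stand in for whatever tables/queries you end up with):
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// walk superclasses and interfaces breadth-first until we find the nearest
// type that declares the method; its javadoc is the one to display
class JavadocResolver {
    Map<String, List<String>> parents;          // class -> superclass + implemented interfaces
    Map<String, Set<String>> declaredMethods;   // class -> methods it declares itself

    String resolveDeclaringType(String className, String methodName) {
        Deque<String> queue = new ArrayDeque<String>();
        Set<String> visited = new HashSet<String>();
        queue.add(className);
        while (!queue.isEmpty()) {
            String current = queue.poll();
            if (!visited.add(current)) {
                continue;                       // already checked this type
            }
            Set<String> declared = declaredMethods.get(current);
            if (declared != null && declared.contains(methodName)) {
                return current;                 // take the javadoc from here
            }
            List<String> supertypes = parents.get(current);
            if (supertypes != null) {
                queue.addAll(supertypes);
            }
        }
        return null;                            // method not found in the hierarchy
    }
}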
Another thing is versioning of the library and the JDK; I don't know if you want to support different versions.
The best way to go about something like this is to get to third normal form (3NF). You may not stay there, but at least by "seeing it" you learn a few things along the way about your system, the relationships, and such.
http://en.wikipedia.org/wiki/Third_normal_form
Once you get to that stage, start looking at what you have and ask yourself how it will behave in your specific situation: how are you accessing it, is it going to perform poorly, would it help to denormalize things a little to improve query performance, and so on.

Google App Engine: Datastore parent key confusion

I am currently learning more about the Google App Engine datastore, and I have some doubts regarding my understanding of the concept of defining a parent key. Now, here's the code for defining a parent key from the GAE documentation:
def guestbook_key(guestbook_name="default"):
    """Constructs a Datastore key for a Guestbook entity with guestbook_name."""
    return ndb.Key('Guestbook', guestbook_name)
Note: this code is included in the source code of an application which accepts entries from a user and stores it in a datastore and displays them collectively on the homepage.
Now, this is what I understand from this code(please correct me if my understanding of this concept is not what it is supposed to be):
The guestbook_key function defines a parent key, which we have named 'default', for all the posts that the user submits to the datastore. So basically, all the posts submitted by the user are stored in an entity named 'Guestbook', and we define a key for its parent (which is non-existent) named 'default'.
Please correct me wherever I went wrong with my understanding.
It really depends on how you use this key. Right now, it is just a name. If you put() it, you are putting a Guestbook type with the name "default".
However, if you are using it as a parent, then you may have code that looks like this:
post = Post(parent=guestbook_key())
post.comment = "This is a new post!"
post.put()
In this case, the new Post object will have the Guestbook object with the name "default" as a parent. Then, you could use an ancestor query to get all Posts for a given guestbook.
The reason you might choose to do this (rather than, for example, have a property on each post with the name of the guestbook) is that it guarantees strongly consistent results. This basically allows all requests to see a consistent view of a guestbook. If you didn't have strongly consistent results, you may see cases where a user writes a post but when they look at the guestbook it doesn't appear.
You are correct that you never actually need to create the Guestbook entity. When you define a parent key, you are actually creating a key where the parent key is a prefix of the child key.

Short lived DbContext in WPF application reasonable?

In his book on DbContext, Rowan Miller shows how to use the DbSet.Local property to avoid 1) unnecessary roundtrips to the database and 2) passing around collections (created with e.g. ToList()) in the application (page 24). I then tried to follow this approach. However, I noticed that from one using {} block to another, the DbSet.Local property becomes empty:
ObservableCollection<Destination> destinationsList;
using (var context = new BAContext())
{
    var query = from d in context.Destinations …;
    query.Load();
    destinationsList = context.Destinations.Local; //Nonzero here.
}
//Do stuff with destinationsList
using (var context = new BAContext())
{
    //context.Destinations.Local is zero here again;
    //So no way of getting the in-memory data from the previous using block here?
    //Do I have to do another roundtrip to the database here to get the same data I wanted
    //to cache locally???
}
Then what is the point on page 24? How can I avoid passing my collections around if DbSet.Local is only usable inside the using block? Furthermore, how can I benefit from change tracking if these short-lived context instances don't hand any cached data over to each other under the hood? So, if the contexts should be short-lived to free resources such as connections, do I have to give up caching? I.e. I can't have both at the same time (short-lived connections but a long-lived cache)? My only option would then be to store the results returned by the query in my own variables, which is exactly what is discouraged in the motivation on page 24?
I am developing a WPF application which may also become multi-tiered in the future, involving WCF. I know Julia has an example of this in her book, but I currently don’t have access to it. I found several others on the web, e.g. http://msdn.microsoft.com/en-us/magazine/cc700340.aspx (old ObjectContext, but good at explaining the inter-layer collaboration). There, a long-lived context is used (the disadvantages are mentioned, but no solution to them is provided).
It’s not only the single Destinations.Local that gets lost; as you surely know, all other entities fetched by the query are, too.
[Edit]:
After some more reading in Julia Lerman’s book, it seems to boil down to the fact that EF does not have 2nd-level caching by default; with some (considerable, I think) effort, however, one can add 3rd-party caching solutions, as is also described in the book and in various articles on MSDN, CodeProject, etc.
I would have appreciated it if the section about DbSet.Local in the DbContext book had mentioned that it is in fact a first-level cache which is destroyed when the using {} block ends (just my proposal to make it more transparent to readers). After a first reading I had the impression that DbSet.Local would always return the same reference (singleton-style), even in the second using {} block, despite the new DbContext instance.
But I am still unsure whether a 2nd-level cache is the way to go for my WPF application (Julia mentions the 2nd-level cache in her article in the context of distributed applications). Or is the way to go to load the aggregate root instances (DDD, Eric Evans) of my domain model into memory with one or a few queries inside a using {} block, dispose the DbContext, and hold only the references to the aggregate instances, thereby avoiding a long-lived context? It would be great if you could help me with this decision.
http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
http://www.codeproject.com/Articles/435142/Entity-Framework-Second-Level-Caching-with-DbConte
http://blog.3d-logic.com/2012/03/31/using-tracing-and-caching-provider-wrappers-with-codefirst/
The Local property provides a “local view of all Added, Unchanged, and Modified entities in this set”. Like all change tracking it is specific to the context you are currently using.
The DB Context is a workspace for loading data and preparing changes.
If two users were to add changes at the same time, they must not see each other's changes before those are saved. One of them might also discard their prepared changes, which would suddenly lead to problems for the other user as well.
A DB context should indeed be short-lived, but may live longer than "super short" when necessary. Also consider that you may not save resources by keeping it short-lived if you do not load and discard data but only add changes you will save. But it is not only about resources; it is also about the DB state potentially changing while the DB context is still active and has data loaded, which is important to keep in mind for longer-living contexts.
If you do not yet know all the related changes you want to save to the database at once, then I suggest you do not use the DB context to store your changes in memory, but rather a data structure in your own code.
You can of course use entity objects for this without an active DB context. This makes sense if you do not have another appropriate data class for it and do not want to create one, or if you decide that preparing the changes in the entities makes more sense. You can then use DbSet.Attach to attach the entities to a DB context and save the changes when you are ready.

Objects not saving using Objectify and GAE

I'm trying to save an object and verify that it is saved right after, and it doesn't seem to be working.
Here is my object
import java.util.ArrayList;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

@Entity
public class PlayerGroup {
    @Id public String n; //sharks
    public ArrayList<String> m; //members [39393,23932932,3223]
}
Here is the code for saving then trying to load right after.
playerGroup = new PlayerGroup();
playerGroup.n = reqPlayerGroup.n;
playerGroup.m = reqPlayerGroup.m;
ofy().save().entity(playerGroup).now();
response.i = playerGroup;
PlayerGroup newOne = ofy().load().type(PlayerGroup.class).id(reqPlayerGroup.n).get();
But the "newOne" object is null. Even though I just got done saving it. What am I doing wrong?
--Update--
If I try later (like minutes later) sometimes I do see the object, but not right after saving. Does this have to do with the high replication storage?
I had the same behavior some time ago and asked a question on the Objectify Google Group.
Here is the answer I got:
You are seeing the eventual consistency of the High-Replication Datastore. There has been a lot of discussion of this exact subject on the Objectify list in Google Groups, including several links to the Google documentation on the subject.
Basically, any kind of query which does not include an ancestor() may return results from a stale view of the datastore.
Jeff
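To illustrate the ancestor() point, a rough sketch of my own (not from the thread; it assumes the entities are given a parent key and uses the same Objectify style as in the question):
// a query without ancestor() runs against the eventually consistent indexes
// and may not yet see a PlayerGroup saved a moment ago
List<PlayerGroup> maybeStale = ofy().load().type(PlayerGroup.class).list();

// if PlayerGroup entities are modeled with a parent, an ancestor() query reads
// from that entity group and is strongly consistent
List<PlayerGroup> fresh = ofy().load()
        .type(PlayerGroup.class)
        .ancestor(parentKey)   // parentKey: hypothetical Key of the owning entity
        .list();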
I also got another good answer on how to deal with this behavior:
For deletes, query for keys and then batch-get the entities. Make sure your gets are set to strong consistency (though I believe this is the default). The batch-get should return null for the deleted entities.
When adding, it gets a little trickier. Index updates can take a few seconds. AFAIK, there are three ways out of this:
1) Use precomputed results (avoiding the query entirely). If your next view is the user's recently created entities, keep a list of those keys in the user entity, and update that list when a new entity is created. That list will always be fresh, no query required. Besides avoiding stale indexes, this also speeds up your app. The more result sets you can reliably manage, the more queries you can avoid.
2) Hide the latency by "enhancing" the query results with the recently added entities. Depending on the rate at which you're adding entities, either inject only the most recent key, or combine this with the solution in 1.
3) Hide the latency by taking the user through some unaffected views before landing on your query-based view. This strategy definitely has a smell over it. You need to make sure those extra steps are relevant to the user, or you'll give a poor experience.
Butterflies, Joakim
You can read it all here:
How come If I dont use async api after I'm deleting an object i still get it in a query that is being done right after the delete or not getting it right after I add one
Another good answer to a similar question: Objectify doesn't store synchronously, even with now

Self Tracking Entities Traffic Optimization

I'm working on a personal project using WPF with Entity Framework and Self Tracking Entities. I have a WCF web service which exposes some methods for the CRUD operations. Today I decided to do some tests to see what actually travels over this service, and even though I expected something like this, I got really disappointed. The problem is that for a simple update (or delete) operation on just one object - let's say a Category - I send the whole object graph to the server, including all of its parent categories, their items, child categories and their items, etc. In my case it was a 170 KB XML payload for a really small database (2 main categories, about 20 categories in total, and about 60 items). I can't imagine what will happen if I have a really big database.
I tried to google for some articles concerning traffic optimization with STE, but with no success, so I decided to ask here if somebody has done something similar, knows some good practices, etc.
One of the possible ways I came out with is to get the data I need per object with more service calls:
return context.Categories.ToList();//only the categories
...
return context.Items.ToList();//only the items
Instead of:
return context.Categories.Include("Items").ToList();
This way the categories and the items will be separated, and when making changes or deleting some objects, less data will be sent over the wire.
Have any of you faced a similar problem, and if so, how did you solve it?
We've encountered similar challenges. The first thing, as you already mentioned, is to keep the entities as small as possible (as dictated by the desired client functionality). And second, when sending entities back over the wire to be persisted: strip all navigation properties (nested objects) when they haven't changed. This sounds very simple but is not at all trivial. What we do is recursively dig into the entities present in the trackable collections of, say, the "topmost" entity (and their trackable collections, and theirs, and so on) and remove them when their change-tracking state is "Unchanged". But be careful with this, because in some cases you still need these entities, since they may have been removed from or added to the trackable collections of their parent entity (in which case you shouldn't remove them).
This, which we call "StripEntity", is also mentioned (though without any code sample) in Julia Lerman's Programming Entity Framework.
And although it might not be as efficient as a more purist kind of approach, the use of STEs saves a lot of code for queries against the database. We are not in need of optimal performance in a high-traffic situation, so STEs suit our needs and take away a lot of the code needed to communicate with the database. You have to decide what the "best" solution is for your situation. Good luck!
You can find an Entity Framework project item at http://selftrackingentity.codeplex.com/. With version 0.9.8, I added a method called GetObjectGraphChanges() that returns an optimized entity object graph with only objects that have changes.
Also, there are two helper methods: EstimateObjectGraphSize() and EstimateObjectGraphChangeSize(). The first method returns the estimated size of the whole entity object along with its object graph, and the latter returns the estimated size of the optimized entity object graph with only the objects that have changes. With these two helper methods, you can decide whether it makes sense to call GetObjectGraphChanges() or not.

Resources