Saving an entity that has a cyclic reference to another entity fails in Spring Data Mongo - spring-data-mongodb

I have two entities carrying both JPA annotations and Spring Data Mongo annotations, and they reference each other, like Parent and Child:
@Entity
@Document
class Parent {

    private Set<Child> children;

    @OneToMany
    public Set<Child> getChildren() {
        return children;
    }
}

@Entity
class Child {

    private Parent parent;

    @ManyToOne
    public Parent getParent() {
        return parent;
    }
}
So these two entities obviously reference each other. With JPA they are fine. With Spring Data Mongo 1.8.4, querying is also fine; there is just an INFO-level message saying that a cyclic reference has been detected.
But when I try to save data, Spring Data Mongo fails. The console prints the same exceptions over and over, and eventually they end in a StackOverflowError.
So is this an issue that needs to be fixed? When querying, Spring Data Mongo can protect itself against the cyclic references, but the save operation cannot.

The INFO-level message concerning the cycle is a hint that there might be cyclic references that cannot be handled based on type information alone. Since it depends on the actual data used, it only points out a potential problem that might occur during mapping.
Please refer to the documentation on using references for information on splitting your data into multiple collections and referencing them, or register a custom converter that knows how to deal with your types.
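For illustration, here is a minimal sketch of the "separate collections" approach, with made-up field names and the JPA mapping left out: the children live in their own collection and are referenced via @DBRef, while the child only stores the parent's id instead of a full back-reference, so there is no cycle left for the MongoDB mapping to follow.

import java.util.Set;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.DBRef;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
class Parent {

    @Id
    private String id;

    // stored as references into the child collection, not as embedded documents
    @DBRef
    private Set<Child> children;
}

@Document
class Child {

    @Id
    private String id;

    // plain id instead of a Parent object, which breaks the cycle
    private String parentId;
}

Note that @DBRef does not cascade saves, so the children have to be saved through their own repository before the parent is saved.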

Related

How to handle multiple entity updates in the same transaction in Spring Data REST

Does anyone have an idea of how to handle multiple entity updates within the same transaction in Spring Data REST? The same thing can be handled within Spring controller methods using the @Transactional annotation. If I am correct, Spring Data REST executes every execution event within a separate transaction, so multiple entity updates cannot be handled in a proper way.
I am having issues updating two entities (ABC and PQR) within the same transaction and rolling back the ABC entity when the PQR update fails.
// ABC repository
@RepositoryRestResource
public interface ABCEntityRepository extends MongoRepository<ABC, String> {
}

// PQR repository
@RepositoryRestResource
public interface PQREntityRepository extends MongoRepository<PQR, String> {
}

// ABC repository event handler
@RepositoryEventHandler
public class ABCEventHandler {

    @Autowired
    private PQREntityRepository pqrEntityRepository;

    @HandleBeforeSave
    public void handleABCBeforeSave(ABC abc) {
        log.debug("before saving ABC...");
    }

    @HandleAfterSave
    public void handleABCAfterSave(ABC abc) {
        List<PQR> pqrList = pqrEntityRepository.findAllById(List.of(abc.getPqrId()));
        if (pqrList != null && !pqrList.isEmpty()) {
            pqrList.forEach(pqr -> {
                // update PQR objects
            });
        }
        // expect to fail this transaction
        pqrEntityRepository.saveAll(pqrList);
    }
}
Since the @HandleAfterSave method is executed in a separate transaction, by the time it is called the ABC entity update has already been committed and can no longer be rolled back. Any suggestions on how to handle this?
Spring Data REST does not think in entities, it thinks in aggregates. Aggregate is a term coming from Domain-Driven Design that describes a group of entities for which certain business rules apply. Take an order alongside its line items, for example, and a business rule that defines a minimum order value that needs to be reached.
The responsibility to govern constraints aligns with another aspect that involves aggregates in DDD which is that strong consistency should/can only be assumed for changes on an aggregate itself. Changes to multiple (different) aggregates should be expected to be eventually consistent. If you transfer that into technology, it's advisable to apply the means of strong consistency – read: transactions – to single aggregates only.
So there is no short answer to your question. The repository structure you show here effectively turns both ABCEntity and PQREntity into aggregates (as repositories only exist for aggregate roots). That means that, out of the box, Spring Data REST does not support updating them in a single transactional HTTP call.
That said, Spring Data REST allows the declaration of custom resources that can take responsibility for doing that. Similarly to what is shown here, you can simply add resources on additional routes to implement exactly what you have in mind yourself.
Spring Data REST is not designed to produce a full HTTP API out of the box. It's designed to implement certain REST API patterns that are commonly found in HTTP APIs and will very likely be part of your API. It's built to save you from having to spend time on the straightforward cases, so that you only have to plug in custom code for scenarios like the one you described, assuming what you plan to do here is a good idea in the first place. Very often, requests like these lead to the conclusion that the aggregate design needs a bit of rework.
PS: I saw you tagged that question with spring-data-mongodb. By default, Spring Data REST does not support MongoDB transactions because it doesn't need them. MongoDB document boundaries usually align with aggregate boundaries and updates to a single document are atomic within MongoDB anyway.
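As a side note: if you do end up wanting @Transactional semantics against MongoDB (for example from a custom resource or a service), Spring Data MongoDB does not register a transaction manager for you, and multi-document transactions require MongoDB to run as a replica set. A minimal, assumed configuration sketch for a recent Spring Data MongoDB version:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.MongoTransactionManager;

@Configuration
class MongoTransactionConfig {

    // Registering this bean is what enables @Transactional for MongoDB repositories;
    // multi-document transactions additionally require a replica set.
    @Bean
    MongoTransactionManager transactionManager(MongoDatabaseFactory databaseFactory) {
        return new MongoTransactionManager(databaseFactory);
    }
}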
I'm not sure I understood your question correctly, but I'll give it a try.
I'd suggest having a service with both repositories autowired in, and a method annotated with @Transactional that updates everything you want.
This way, if the transaction fails anywhere inside the method, it will all rollback.
If this does not answer your question, please clarify and I'll try to help.
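A rough sketch of that idea, with made-up class and method names (for MongoDB it also assumes the transaction-manager and replica-set setup mentioned above):

import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AbcPqrUpdateService {

    private final ABCEntityRepository abcEntityRepository;
    private final PQREntityRepository pqrEntityRepository;

    public AbcPqrUpdateService(ABCEntityRepository abcEntityRepository,
                               PQREntityRepository pqrEntityRepository) {
        this.abcEntityRepository = abcEntityRepository;
        this.pqrEntityRepository = pqrEntityRepository;
    }

    // Both saves run in one transaction; if saving the PQRs throws,
    // the ABC save is rolled back as well.
    @Transactional
    public void updateAbcAndPqrs(ABC abc, List<PQR> pqrs) {
        abcEntityRepository.save(abc);
        pqrEntityRepository.saveAll(pqrs);
    }
}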

Transactional behavior of Grails Service classes with chained calls

I am parsing XML files in my application and I am struggling a bit with how to design this.
I allow uploading of "incomplete" trees of our XML schemas, meaning that as long as the tree is well formed, it can be any sub-child of the root node. Most of the nodes have child nodes that contain only some text (properties), but I have not included any of that in my small XML structure example.
<root>
    <childFoo>
        <childBar>
        </childBar>
    </childFoo>
</root>
Any one of those nodes is allowed.
Now, I have designed an XmlInputService that has methods to parse the various nodes. I just detect in the controller what kind of node it is and hand it over to the appropriate service method.
To keep my code DRY and nice, I re-use my methods at the higher levels. If I pass a document of type Root to the service, it will parse whatever fields belong directly to root and pass the child nodes (which represent children in my domain class structure) to the appropriate parsing method in the service.
Now, if a user uploads XML that contains constraint violations, e.g. an element with a non-unique name, I obviously want to roll this back.
Let's say I call parseRoot() and go downwards, calling parseChildFoo().
In there I call parseChildBar() for every Bar child. If one of the Bar children fails validation because of constraints or whatever, I want the rollback of the transaction to cascade all the way up to parseRoot().
How would I achieve this?
If you have a Grails service with a method that takes care of the parsing, you should throw an exception that extends java.lang.RuntimeException from your service so that the user can be informed that they need to modify their XML. Your controller will then catch that exception and provide the user with a meaningful error message.
The rolling back of any database modifications is done automatically by Grails/Spring whenever a RuntimeException is thrown from a service method.
The advantage of this approach over Victor's answer is that you don't have to write any code to roll the transaction back in case of failure; Grails will do it for you. IMO, using the withTransaction closure inside a service method makes no sense.
More info here
Make those validity rules validation constraints on domain objects.
When save() violates the constraints, throw an exception and catch it at the top parse level, then roll back the entire transaction.
Like:
meServiceMethod() {
    ...
    FooDomainClass.withTransaction { status ->
        try {
            parseRoot(xml)
        }
        catch (FooBarParentException e) {
            status.setRollbackOnly()
            // whatever error diagnostics
        }
    }
    ...
}
Or you can simply let the exception fly out of the service method to the controller - service methods are transactional by default.

Serializing Entities with RIA Services

I've got a Silverlight application that requires quite a bit of data to operate and it requires it all up-front. It's using RIA Services (and the Entity Framework) to get all that information. It takes 10-15 seconds to get all the data, but the data only changes about once a month.
What I'd like to do is toss that data into Isolated Storage so that the next time they load the app, I can just grab it, see if it's been updated, and if not, use the data they've already got and save a ton of time sending things over the wire.
The structure of the graph I need to store is (more-or-less) a typical tree structure. A model has components, a component has features, a feature has options. The issue that I'm coming up against is that when I ask to have this root entity (the model) serialized, it's only serializing the top-level object and ignoring all of the "child" objects.
Does anyone know of a convenient way to get it to serialize/deserialize the whole graph?
If RIA Services is the problem, then I might have a hint.
To transfer collections of objects through RIA you need to do a little tweaking of the domain model.
Let's say you have a Receipt with a list of ReceiptEntries. Then you'd do this:
public class Receipt {
    public Guid Id;
    public List<ReceiptEntry> Entries;
}

public class ReceiptEntry {
    public Guid ReceiptId;
}
You have to tell RIA how to associate these objects:
public class Receipt {
    public Guid Id;

    [Include()]
    [Composition()]
    [Association("ReceiptEntries", "Id", "ReceiptId")]
    public List<ReceiptEntry> Entries;
}
Then it will serialize the list of objects.
I might have written weird syntax since I'm used to VB.NET, or there may be some minor faults in the sample code; I just threw it up quickly. But if the problem is that RIA doesn't send the objects over the way it should, then you should investigate this scenario, if you didn't already.

Google AppEngine JDO Persistence FK Arrays

I'm hoping someone's seen this. I've found no clues on Google.
I'm using Google AppEngine with JDO to persist my objects.
I have two objects, Parent and Child. Each Parent has n Child objects.
I initially stored the Child objects in an ArrayList data member in the Parent class.
I got the exception "java.lang.UnsupportedOperationException: FK Arrays not supported" when persisting the Parent object.
I put this down to my storing more than one Child key reference, so I changed it around so that the Child objects store key references to the Parent object instead. This way, there is only one key reference per Child object instead of n key references per Parent object.
Yet the exception still gets thrown when persisting the Parent object. So I suspect I was mistaken about the probable cause of this exception.
Has anyone seen this exception or know what it means?
According to DataNucleus, a lot of things are persisted by default... and they even had a complaint on their blog about the manual on the Google App Engine site, which said that you need to explicitly mark fields as @Persistent.
I figured out what was wrong.
It wasn't complaining about my ArrayList.
My Parent class had an array data member that I hadn't put an annotation on. Arrays are persisted by default in the absence of annotations.
I added the @NotPersistent annotation and this solved my problem.
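A minimal sketch of that fix, with made-up field names: the array is excluded from persistence so JDO no longer tries to map it as a foreign-key array.

import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.NotPersistent;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

import com.google.appengine.api.datastore.Key;

@PersistenceCapable
public class Parent {

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Key key;

    // Without @NotPersistent, JDO would try to persist this array as well,
    // which is what triggered "FK Arrays not supported"
    @NotPersistent
    private String[] scratchData;
}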

How to pass a collection of Entities to .NET RIA Data Service?

Is it possible to pass a collection of objects to a RIA Data Service query? I have no issues sending an Entity, an Int or an array of primitive types, but as soon as I declare a method like this
public void GetLessonsConflicts(Lesson[] lessons)
{
}
I get a compilation error:
"Operation named 'GetLessonsConflicts' does not conform to the required signature. Parameter types must be an entity type or one of the predefined serializable types"
I am just trying to do some validation on the server side before I save the data. I've tried List, IEnumerable, etc.
Thanks
I think the problem is actually the lack of a return value. As I understand it, you can identify DomainOperations by convention or by attribute. You're not showing an attribute so RIA will be trying to match it by convention.
For example, by convention, an insert method must:
- have Insert, Add or Create as the method name prefix, e.g. InsertEmployee
- match the signature public void name(Entity e);
A query method must:
- be public
- return IEnumerable, IQueryable or T (where T is an entity).
A custom domain operation must:
- be public
- return void
- have an Entity as the first parameter.
EDIT: See Rami A's comment below. I believe this was true at the time but I'm not currently working with this technology so I'm not current enough on it to update this answer other than to note that it may be incorrect.
Or you can use attributes such as [Insert], [Delete], [Update], [Query], [Custom]. From my docs, all the attributes do is remove the requirement for the naming convention - it's not clear to me from them what the [Query] and [Custom] attributes achieve.
As well as DomainOperations, you can define ServiceOperations (using the [ServiceOperation] attribute) and InvokeOperations.
This article might help (although I think it's a bit out of date).
