I am parsing XML files in my application and I am struggling a bit with how to design this.
I allow uploading of "incomplete" trees of our XML schemas, meaning that as long as the tree is well formed, it can be any sub-child of the root node. Most of the nodes have child nodes that contain only some text (properties), but I have not included any of that in my small XML structure example.
<root>
  <childFoo>
    <childBar>
    </childBar>
  </childFoo>
</root>
Any one of those nodes is allowed.
Now, I have designed an XmlInputService that has methods to parse the various nodes, and I just detect in the controller what kind of node it is and hand it over to the corresponding service method.
So, to keep my code DRY and nice, I re-use my methods at the higher levels. If I pass a document of type Root to the service, it will parse whatever fields belong directly in root, and pass the child nodes (which represent children in my domain class structure) off to the appropriate parsing method in the service.
Now, if a user uploads XML that contains constraint violations, e.g. an element with a non-unique name, I obviously want to roll this back.
Let's say I call parseRoot() and go downwards, calling parseChildFoo(). In there I call parseChildBar() for every Bar child. If one of the Bar children fails validation because of constraints or whatever, I obviously want to cascade the rollback of the transaction all the way up to parseRoot().
How would I achieve this?
If you have a Grails service with a method that takes care of the parsing, you should throw an exception that extends java.lang.RuntimeException from your service so that the user can be informed that they need to modify their XML. Your controller will catch that exception and provide the user with a meaningful error message.
The rolling back of any database modifications will be done automatically by Grails/Spring whenever a RuntimeException is thrown from a service method.
The advantage of the approach I am describing over Victor's answer is that you don't have to write any code to make the transaction roll back in case of failure; Grails will do it for you. IMO, using the withTransaction closure inside a service method makes no sense.
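For illustration, a minimal sketch with hypothetical class and method names (written in Java-style syntax, which is also valid Groovy):

// Any unchecked exception works; a dedicated type keeps the controller's catch precise.
public class XmlConstraintViolationException extends RuntimeException {
    public XmlConstraintViolationException(String message) {
        super(message);
    }
}

public class XmlInputService {

    // Grails service methods are transactional by default, so an uncaught
    // runtime exception thrown anywhere in the call chain
    // (parseRoot -> parseChildFoo -> parseChildBar) rolls the whole transaction back.
    public void parseChildBar(Object barNode) {
        boolean nameIsUnique = false; // stand-in for your actual constraint check
        if (!nameIsUnique) {
            throw new XmlConstraintViolationException("childBar: name is not unique");
        }
    }
}

The controller then wraps its top-level parseRoot() call in a try/catch for XmlConstraintViolationException and renders the exception message back to the user.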
Make those validity rules validation constraints on the domain objects.
When save() violates the constraints, throw an exception and catch it at the top parse level, then roll back the entire transaction.
Like:
def myServiceMethod(xml) {
    // ...
    FooDomainClass.withTransaction { status ->
        try {
            parseRoot(xml)
        }
        catch (FooBarParentException e) {
            status.setRollbackOnly()
            // whatever error diagnostics
        }
    }
    // ...
}
Or you can simply let the exception fly out of the service method to the controller - service methods are transactional by default.
Related
I'm creating a design for a Twitter application to practice DDD. My domain model looks like this:
The user and tweet are marked blue to indicate that they are aggregate roots. Between the user and the tweet I want a bounded-context boundary; each will run in its respective microservice (auth and tweet).
To reference which user has created a tweet, without running into a self-referencing loop, I have created the UserInfo object. The UserInfo object is created via events when a new user is created. It stores only the information the Tweet microservice will need about the user.
When I create a tweet, I provide only the user id and the relevant fields of the tweet. With that user id I want to be able to retrieve the UserInfo object, via id reference, to use in the various child objects, such as Mentions and Poster.
The issue I run into is persistence. At first glance I thought, "Just provide the UserInfo object in the tweet constructor and it's done; all the child aggregates have access to it." But it's a bit harder for the Mention class, since a Mention will contain a dynamic username like "@anyuser". To validate whether anyuser exists as a UserInfo object, I need to query the database. However, I don't know who is mentioned before the tweet's content has been parsed, and that logic resides in the domain model itself and is called as a result of using the tweet's constructor. Without this logic, no mentions are extracted, so nothing can yet be validated.
If I cannot validate it before creating the tweet, because I need the extraction logic, and I cannot use the database repository inside the domain model layer, how can I validate the mentions properly?
Whenever an AR needs to reach out of its own boundary to gather data, there are two main solutions:
You pass in a service to the AR's method which allows it to perform the resolution. The service interface is defined in the domain, but most likely implemented in the infrastructure layer.
e.g. someAr.someMethod(args, someServiceImpl)
Note that if the data is required at construction time you may want to introduce a factory that takes a dependency on the service interface, performs the validation and returns an instance of the AR.
e.g.
tweetFactory = new TweetFactory(new SqlUserInfoLookupService(...));
tweet = tweetFactory.create(...);
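Fleshed out, a minimal Java sketch of the factory approach (all names here are illustrative assumptions, not prescribed):

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Domain-layer service interface; the SQL-backed implementation lives in infrastructure.
interface UserInfoLookupService {
    boolean exists(String username);
}

// Value object / domain service that owns the mention-extraction rule.
final class MentionParser {
    private static final Pattern MENTION = Pattern.compile("@(\\w+)");

    static List<String> extract(String text) {
        List<String> mentions = new ArrayList<>();
        Matcher matcher = MENTION.matcher(text);
        while (matcher.find()) {
            mentions.add(matcher.group(1));
        }
        return mentions;
    }
}

record Tweet(String authorId, String content, List<String> mentions) {}

// The factory validates mentions against the lookup service before the AR exists.
final class TweetFactory {
    private final UserInfoLookupService userLookup;

    TweetFactory(UserInfoLookupService userLookup) {
        this.userLookup = userLookup;
    }

    Tweet create(String authorId, String content) {
        List<String> mentions = MentionParser.extract(content);
        for (String mention : mentions) {
            if (!userLookup.exists(mention)) {
                throw new IllegalArgumentException("Unknown user mentioned: @" + mention);
            }
        }
        return new Tweet(authorId, content, mentions);
    }
}

This keeps the parsing rule and the construction invariant in the domain, while database access stays behind the UserInfoLookupService interface.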
You resolve the dependencies in the application layer first, then pass in the required data. Note that the application layer could take a dependency on a domain service in order to perform some reverse resolutions first.
e.g.
If the application layer would like to resolve the UserInfo for all mentions, but can't because it doesn't know how to parse mentions within the text, it could always rely on a domain service or value object to perform that task first, then resolve the UserInfo dependencies and provide them to the Tweet AR. Be cautious here not to leak too much logic into the application layer, though. If the orchestration logic becomes intertwined with business logic, you may want to extract such use-case processing logic into a domain service.
Finally, note that any data validated outside the boundary of an AR is always considered stale. The @xyz user could exist right now, but cease to exist (e.g. be deactivated) 1 ms after the tweet was sent.
Does anyone have an idea how to handle multiple entity updates within the same transaction in Spring Data REST? The same thing can be handled within Spring controller methods using the @Transactional annotation. If I am correct, Spring Data REST executes every execution event within a separate transaction, so multiple entity updates cannot be handled in a proper way.
I am having issues updating two entities (ABC and PQR) within the same transaction and rolling back the ABC entity when the PQR entity fails.
// ABC repository
@RepositoryRestResource
public interface ABCEntityRepository extends MongoRepository<ABC, String> {
}

// PQR repository
@RepositoryRestResource
public interface PQREntityRepository extends MongoRepository<PQR, String> {
}
// ABC repository event handler
@RepositoryEventHandler
public class ABCEventHandler {

    @Autowired
    private PQREntityRepository pqrEntityRepository;

    @HandleBeforeSave
    public void handleABCBeforeSave(ABC abc) {
        log.debug("before saving ABC...");
    }

    @HandleAfterSave
    public void handleABCAfterSave(ABC abc) {
        // findById returns an Optional, not a List
        pqrEntityRepository.findById(abc.getPqrId()).ifPresent(pqr -> {
            // update the PQR object
            // expect to fail this transaction
            pqrEntityRepository.save(pqr);
        });
    }
}
Since the @HandleAfterSave method is executed in a separate transaction, by the time it runs the ABC entity update is already committed and can therefore no longer be rolled back. Any suggestions on how to handle this?
Spring Data REST does not think in entities, it thinks in aggregates. Aggregate is a term coming from Domain-Driven Design that describes a group of entities for which certain business rules apply. Take an order alongside its line items, for example, and a business rule that defines a minimum order value that needs to be reached.
The responsibility to govern constraints aligns with another aspect of aggregates in DDD: strong consistency should/can only be assumed for changes on an aggregate itself. Changes to multiple (different) aggregates should be expected to be eventually consistent. If you transfer that into technology, it's advisable to apply the means of strong consistency – read: transactions – to single aggregates only.
So there is no short answer to your question. The repository structure you show here virtually turns both ABCEntity and PQREntity into aggregates (as repositories only exist for aggregate roots). That means, OOTB Spring Data REST does not support updating them in a single transactional HTTP call.
That said, Spring Data REST allows the declaration of custom resources that can take responsibility for doing that. Similarly to what is shown here, you can simply add resources on additional routes and implement exactly what you have in mind.
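For example, a rough sketch of such a custom resource (route, method, and field names are made up for illustration):

@RepositoryRestController
public class AbcPqrController {

    private final ABCEntityRepository abcRepository;
    private final PQREntityRepository pqrRepository;

    public AbcPqrController(ABCEntityRepository abcRepository, PQREntityRepository pqrRepository) {
        this.abcRepository = abcRepository;
        this.pqrRepository = pqrRepository;
    }

    // One HTTP call, one transaction spanning both repositories.
    @Transactional
    @PutMapping("/abcs/{id}/with-pqr")
    public ResponseEntity<Void> updateAbcAndPqr(@PathVariable String id, @RequestBody ABC payload) {
        ABC abc = abcRepository.findById(id).orElseThrow();
        // apply the changes from payload to abc and its related PQR here
        abcRepository.save(abc);
        pqrRepository.findById(abc.getPqrId()).ifPresent(pqrRepository::save);
        return ResponseEntity.noContent().build();
    }
}

Note that for @Transactional to have any effect with MongoDB, a transaction manager must be configured (see the PS below).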
Spring Data REST is not designed to produce a full HTTP API out of the box. It's designed to implement certain REST API patterns that are commonly found in HTTP APIs and will very likely be part of your API. It's built to save you from having to spend time on the straightforward cases, so that you only have to plug in custom code for scenarios like the one you described, assuming what you plan to do here is a good idea in the first place. Very often, requests like these lead to the conclusion that the aggregate design needs a bit of rework.
PS: I saw you tagged that question with spring-data-mongodb. By default, Spring Data REST does not support MongoDB transactions because it doesn't need them. MongoDB document boundaries usually align with aggregate boundaries and updates to a single document are atomic within MongoDB anyway.
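If you do end up needing a multi-document transaction with MongoDB anyway, it requires a replica set, and a transaction manager has to be registered explicitly. A minimal sketch:

@Configuration
public class MongoTxConfig {

    // Registers the transaction manager that makes @Transactional work
    // with Spring Data MongoDB; MongoDB itself must run as a replica set.
    @Bean
    public MongoTransactionManager transactionManager(MongoDatabaseFactory dbFactory) {
        return new MongoTransactionManager(dbFactory);
    }
}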
I'm not sure I understood your question correctly, but I'll give it a try.
I'd suggest having a service with both repositories autowired in, and a method annotated with @Transactional that updates everything you want.
This way, if the transaction fails anywhere inside the method, it will all roll back.
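In code, that could look something like the following (repository and entity names taken from the question; a sketch, assuming a transaction manager is configured for your store):

@Service
public class AbcPqrService {

    @Autowired
    private ABCEntityRepository abcRepository;

    @Autowired
    private PQREntityRepository pqrRepository;

    // If either save throws, both changes are rolled back together.
    @Transactional
    public void updateBoth(ABC abc, PQR pqr) {
        abcRepository.save(abc);
        pqrRepository.save(pqr);
    }
}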
If this does not answer your question, please clarify and I'll try to help.
I have used REST servlet binding to expose a route as a service.
I have used employeeClientBean as a POJO, wrapping the actual call to the employee REST service within it, basically playing the role of a service client.
So, based on the method name passed, I call the respective method of the employee REST service through the employeeClientBean.
I want to know how I can handle the scenarios described in the comments in the block of code below.
I am new to Camel, but felt POJO binding is better as it does not couple us to Camel-specific APIs like Exchange and Processor, or even to any specific components.
But I am not sure how I can handle the above scenarios and return appropriate JSON responses to the user of the route service.
Can someone help me with this?
public void configure() throws Exception {
    restConfiguration().component("servlet").bindingMode(RestBindingMode.json)
            .dataFormatProperty("prettyPrint", "true")
            .contextPath("camelroute/rest").port(8080);

    rest("/employee").description("Employee Rest Service")
            .consumes("application/json").produces("application/json")

            .get("/{id}").description("Find employee by id").outType(Employee.class)
            .to("bean:employeeClientBean?method=getEmployeeDetails(${header.id})")
            // How to handle and return a response to the user of the route service
            // for the following scenarios for get/{id}:
            // 1. The passed id is not valid as per the system
            // 2. Failure to return details due to some issue

            .post().description("Create a new Employee").type(Employee.class)
            .to("bean:employeeClientBean?method=createEmployee");
    // How to handle and return the correct response to the user of the route service
    // for the following scenarios:
    // 1. The employee being created already exists in the system
    // 2. Some fields of the passed employee do not satisfy the constraints on them
    // 3. Failure to create an employee due to a server-side issue (e.g., DB failure)
}
I fear you are putting Camel to bad use - as per the Apache documentation, the REST module supports Consumer implementations, e.g. reading from a REST endpoint, but NOT writing back to a caller.
For your use case you might want to switch frameworks. Syntactically, Ratpack goes in that direction.
I'm closely following John Papa's Pluralsight course on Angular and Breeze. I also use Entity Framework 6.
At load time, I call a Prime function that clears the cache:
function clearCache() {
    var cachedParents = manager.getEntities('Parent'); // all Parent entities in cache
    cachedParents.forEach(function (parent) { manager.detachEntity(parent); });
    zStorage.clear();
    manager.clear();
}
and then loads the info:
return EntityQuery.from('Parents')
    .where('applicationUser.email', '==', userId)
    .expand('Address, Children')
    .toType(entityParent)
    .using(self.manager)
    .execute()
    .then(querySucceeded, self._queryFailed);
that calls the controller with
[HttpGet]
public IQueryable<Parent> Parents()
{
    return _repository.Parents;
}
That returns one record...
Later, on the loading of the view, in the same repository, I request the parent entity from the local cache as follows:
var cachedParents = EntityQuery.from('Parent')
    .toType(entityParent)
    .using(self.manager)
    .executeLocally();
THIS ONE BRINGS TWO ENTITIES: the correct one with id, name, address, etc., but also an empty entity with id 0.
I've checked, and even if I call the local query right after the remote query, it still brings the correct record AND the empty record.
I also reviewed the response, and the JSON object comes correctly formatted and with only one record.
I've tried clearing the zStorage and the entity manager, and detaching the object, but nothing seems to explain or clear the empty entity.
This behavior only happens in the Parent entity type. No other type shows anything wrong.
Thanks in advance.
One way to debug this is to subscribe to the entityManager.hasChangesChanged event.
This event will be fired when the "ghost" entity is added, so you can trace the call stack by putting a breakpoint inside the event handler.
So first, ensure that after the clearCache call the entityManager is empty. (Side note: individually detaching the Parent entities via manager.detachEntity is actually redundant, since you're already calling the manager.clear method at the end.)
Then, put a breakpoint inside the hasChangesChanged event as you debug.
Hope this helps.
My bet is that you have code that adds a "nullo" (a placeholder entity with id=0) to the EntityManager. There is such code in John's sample and you might be calling it unintentionally.
To demonstrate that Breeze is NOT adding such a nullo itself,
set a breakpoint before the query
set a breakpoint at the top of querySucceeded
confirm that there are no entities in cache at all before you query for "Parents" (e.g., manager.getEntities() returns nothing).
confirm at the top of querySucceeded that the query results in exactly ONE entity in cache (e.g., manager.getEntities() returns the lone "Parent" entity).
FWIW, there is no need to detach individual entities of type "Parent" if you are going to call manager.clear(). That call detaches every entity in the manager's cache, including the "Parent" types.
I have a situation where, in a model's afterSave callback, I'm trying to access data from a distant association (it's a legacy data model with very wonky association linkage). What I'm finding is that within the callback I can execute a find call on the model, but if I exit right then, the record is never inserted into the database. The lack of a record means that I can't execute a find on the related model using data that was just inserted into the current one.
I haven't found any mention of when data is actually committed with respect to when the afterSave callback is engaged. I'm working with legacy code, but I see no indication that we're specifically engaging transactions, so I'm trying to figure out what my options might be.
Thanks.
UPDATE
The gist of the scenario is this: We're taking event registrations, but folks can be wait listed. A user can register (or be registered) for a given Date. After a registration is complete, I need to check the wait list for the existence of a record for the registering user (WaitList.user_id) on the date being registered for (WaitList.date_id). If such a record exists, it can be deleted because it's become an active registration.
The legacy schema puts me in a place where the registration isn't directly tied to a date, so I can't get the Date.id easily. Instead, it's Registration->Registrant->Ticket->Date. Unintuitive, I know, but it is what it is for now. Even better (sarcasm included), we have a view named attendees that rolls all of this info up, and from which I would be able to use the newly created Registration->id to return Attendee.date_id. But since the record doesn't exist yet, it's not available in the view.
Hopefully that provides a little more context.
What's the purpose of the find query inside your afterSave?
Update
Is it at all possible to properly associate the records? Or are we talking about way too much refactoring for it to be worth it? You could move the check to the controller if it's not possible to modify the associations between the records.
Something like this in the controller (a rough CakePHP sketch; $userId and $dateId are assumed to be resolved beforehand):
if ($this->Registration->save($this->request->data)) {
    $this->loadModel('WaitList');
    $conditions = array('WaitList.user_id' => $userId, 'WaitList.date_id' => $dateId);
    $waitListId = $this->WaitList->field('id', $conditions);
    if ($waitListId) {
        $this->WaitList->delete($waitListId);
    }
}
It's not best practice, but it will get you around your issue.