I'd like to present my Windows Forms application scenario:
I want to create an invoice, so I open a new window with the invoice details. Then I want to add a new customer (to the database) that I want to use in my invoice. After entering all the information (including the new customer info), I click Save to save my new document.
The question is:
Should I do all the work in one NH session? So saving the new customer and saving the invoice in one unit of work.
Or should saving the new customer be done separately? If so, then when I add a new customer and click Cancel in the invoice details form, the invoice creation is canceled but the customer is still in the database.
I use one unit of work for the whole conversation. Maybe I'm wrong.
Should I do all the work in one NH session? So saving the new customer and saving the invoice in one unit of work.
Yes, use one NHibernate session. Mapping the lifetime of the session to a single unit of work is generally the easiest way to go.
Don't confuse sessions with transactions, though. If you want to roll back both creations when one of them fails, that requires a transaction and has (mostly) nothing to do with the NHibernate session.
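As a minimal sketch of that, assuming a hypothetical Customer/Invoice model and an already-configured sessionFactory (none of these names come from the question):

```csharp
using NHibernate;

// sessionFactory, Customer and Invoice are illustrative assumptions.
// One session for the whole edit, one transaction for the atomic save.
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    var customer = new Customer { Name = "Acme Ltd." }; // the newly entered customer
    session.Save(customer);

    var invoice = new Invoice { Customer = customer, Total = 100m };
    session.Save(invoice);

    // Both rows are persisted together; if this throws, or if the user cancels
    // and you call tx.Rollback() instead, neither row reaches the database.
    tx.Commit();
}
```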
Logically, it would make sense to create the customer in a Unit Of Work and then create the invoice in another one. However, seeing as you seem to want the customer and invoice creation together to be atomic, it makes sense to have them created in one commit.
I don't know how NHibernate deals with the associations, though: if the customer needs to be persisted already in order to associate it with the invoice, then you have no choice but to commit the UoW after creating the customer and then create the invoice.
Conversation per business transaction is your friend:
Conversation Per Business Transaction
Implementing Conversation per Business Transaction
Using Conversation per Business Transaction
AOP Conversation per Business Transaction
The code is in uNhAddIns, and we have two examples for desktop applications. One of them is my "Chinook Media Manager" (look for the posts on Google).
There is also an implementation using PostSharp in the uNhAddIns trunk.
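The pattern itself is easy to sketch by hand. Below is a minimal illustration of the idea (not the actual uNhAddIns API): keep one session open for the whole dialog, and flush only when the business transaction ends.

```csharp
using System;
using NHibernate;

// A hand-rolled sketch of conversation per business transaction;
// names and structure are illustrative, not the uNhAddIns implementation.
public class Conversation : IDisposable
{
    private readonly ISession _session;

    public Conversation(ISessionFactory factory)
    {
        _session = factory.OpenSession();
        // Nothing is written until we say so (FlushMode.Never in older NHibernate).
        _session.FlushMode = FlushMode.Manual;
    }

    public ISession Session => _session;

    // Save button: one transaction, one flush - customer and invoice together.
    public void End()
    {
        using (var tx = _session.BeginTransaction())
        {
            _session.Flush();
            tx.Commit();
        }
    }

    // Cancel button: discard the session; the new customer never reaches the DB.
    public void Abort() => Dispose();

    public void Dispose() => _session.Dispose();
}
```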
As a developer, how should a user's deletion of their account from a website be handled?
In the case of someone guessing the user's password and deleting their account, you would want to be able to reinstate the account, so deleting it instantly and fully can't be the answer.
On the other hand, even if you are sure you want to delete an account, it would still remain in a backup for some time.
Is there a best practice for handling account deletions?
This is usually handled via so-called "soft deletes", which mark a record as deleted but don't actually delete it.
This is a common scenario in many frameworks. For example, Laravel uses a deleted_at timestamp property, so you can delete an account and then run a scheduled task to really delete records that were deleted more than a certain amount of time ago, say a week. "Undeleting" an account simply requires setting that field back to null.
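The same idea translates directly to other stacks. A minimal sketch in C#, assuming EF Core (the entity and context here are invented for the example):

```csharp
using System;
using Microsoft.EntityFrameworkCore;

// Hypothetical account entity; DeletedAt doubles as the "is deleted" flag.
public class UserAccount
{
    public int Id { get; set; }
    public string Email { get; set; } = "";
    public DateTime? DeletedAt { get; set; } // null = active, set = soft-deleted
}

public class AppDbContext : DbContext
{
    public DbSet<UserAccount> Users => Set<UserAccount>();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlite("Data Source=app.db"); // any provider works

    // Global query filter: soft-deleted accounts vanish from ordinary queries,
    // so no call site has to remember the WHERE clause.
    protected override void OnModelCreating(ModelBuilder modelBuilder)
        => modelBuilder.Entity<UserAccount>()
                       .HasQueryFilter(u => u.DeletedAt == null);
}
```

Deleting is then just setting user.DeletedAt = DateTime.UtcNow, undeleting sets it back to null, and a scheduled job can purge rows whose DeletedAt is older than your retention window.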
Soft deletes are useful for the inevitable "but I didn't mean to delete it" scenarios, but also when you perhaps have other tasks that need to be done when you delete a user account, for example removing data from 3rd party services that you have shared the data with, such as mailing list services.
While you may have a legal requirement to delete data under GDPR when requested, it's not absolute and depends on your basis for processing. For example you may be legally obliged to retain records for a certain amount of time, or you may have a contractual requirement to retain records until the account has paid all its bills.
You shouldn't rely on soft deletes as a defence against password guessing; a strong password policy and 2FA should have a much higher priority there.
I would add one more boolean field to the user account record, called for example "IsDeleted", as an indicator that the user account has been deleted. Also, adding a deleted DateTime could be useful to determine later whether you should completely purge that user account record.
The best practice is to keep it simple.
I need to build a product that will have a database on the back end to store and retrieve data.
I just started gathering the user stories from my stakeholders and I am stuck...
If I have a Project Leader who has one user story like:
"As a project leader, I want to be able to see and modify the scope of my project so that I make sure my project is up to date"
This user story would require that I had already created the database and a table holding the data.
Should I collect all the user stories and add the database component in the acceptance criteria?
Should I create user stories only for the back end and some for the front end?
I'm not sure how to separate or make them work together.
The idea behind SCRUM is that the architecture/design will emerge as you develop. With this in mind, you still need the product backlog to reflect what the product will be. So somewhere in the backlog there should be a user story like "As a user, I want an application that I can use to manage my projects". That story is at a rather large (epic) level, so it has to be broken out into smaller stories (like "... the application must have ability x").

If that is indeed the user story, then another sub-epic (still large, needs breaking out) story would be "As an application developer (notice the context change here), I need a database to store my Project application data". Then that story gets broken out for the person writing the DB scripts (assuming you are creating the application database first; some applications are code-first and the ORM generates the database schema).

The main point here is that you start large and break it down until you get a full backlog of very small stories. Then you know you have a full (groomed) backlog and you are ready to start planning your sprints.
I am developing a system with Java EE and JPA where users can make changes to entities. It must be possible to trace the changes back when needed, so all the changes, and the user who made them, have to be recorded each time an update is made. What is the best way to record the changes?
For example, there is an entity called Investigation. It has attributes like Name, Category, Price, Volume, etc. A user can search for a single investigation and change the name in one instance, and in another instance another user can change the price. All these occasions, with the change done and the user who did it, need to be traceable when required.
One method, described in this link, is to label objects as old/edited and create a new object with the updated values, but the problem is that several other objects from different entities refer to the old one.
Another method, described in this link, is to use a versioning field in a new table. That can be achieved in JPA by creating a new entity that extends the main entity.
Of these methods, which is the best practice? Is there any other, more optimized way to keep a record-editing history in Java Persistence?
EclipseLink supports history; see:
http://wiki.eclipse.org/EclipseLink/Examples/JPA/History
If you don't mind using Hibernate, Envers might be interesting for you. It performs auditing automatically, optionally appending metadata like the current user.
For each audited entity it creates a history table that holds previous versions.
I'm thinking about developing a new greenfield app using DDD/TDD/NHibernate with a new database schema reflecting the domain, where changes in the DB would need to be synchronized both ways with the old project's database. The requirement is that both projects will run in parallel, and once the new project starts adding more business value than the old one, the old project will be shut down.
One approach I have in mind is to achieve the DB synchronization via database triggers. Once you insert/update/delete in the new database, the trigger for the table would need to correctly update the old database. The same goes for changes in the old database: its triggers would need to update the new database.
Example:
The old project has one table, Quote, with columns QuoteId and QuoteVersion. The correct domain model is one Quote object with many QuoteVersion objects, so the new database would have two tables, Quote and QuoteVersion. If you change the Quote table in the new DB, the trigger would need to update either all records with that QuoteId in the old DB or just the latest version. Conversely, if you update a Quote record in the old DB, you either update the record in the new DB, or perhaps update it only if the latest version of the Quote in the old DB was changed.
So there would need to be some logic in the triggers, and those SQL statements might be non-trivial. To ensure maintainability, the triggers would need thorough tests (save data in one DB, check the data in the second DB, for different cases).
The question: do you think this trigger idea for DB synchronization is viable (I'm not sure yet how to ensure one trigger won't set off the other database's trigger)? Has anybody tried this and found out it goes to hell? Do you have a better idea for fulfilling the requirement of synchronized databases?
This is a non-trivial challenge, and I would not really want to use triggers - you've identified a number of concerns yourself, and I would add to this concerns about performance and availability, and the distinct likelihood of horrible infinite loop bugs - trigger in legacy app inserts record into greenfield app, causes trigger to fire in greenfield app to insert record in legacy app, causes trigger to fire in legacy app...
The cleanest option I've seen is based on a messaging system. Every change in the application fires a message, which is handled by a recipient at the receiving end. The recipient can validate the message, and - ideally - forward it to the "normal" code which handles that particular data item.
For example:
legacy app creates new "quote" record
legacy app sends a message with a representation of the new "quote"
message bus forwards message to greenfield app "newQuoteMessageHandler"
greenfield app "newQuoteMessageHandler" validates data
greenfield "newQuoteMessageHandler" instantiates "quote" domain entity, and populates it with data
greenfield domain entity deals with remaining persistence and associated business logic.
Your message handlers should be relatively easy to test - and you can use them to isolate each app from the crazy in the underlying data layer. It also allows you to deal with evolving data schemas in the greenfield app.
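For illustration, here is a hedged sketch of what such a handler might look like; every name in it (the message shape, the repository, the Quote aggregate) is an assumption made for the example, not code from either app:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record NewQuoteMessage(string LegacyQuoteId, int Version, decimal Amount);

public class Quote
{
    private readonly List<(int Version, decimal Amount)> _versions = new();
    public string LegacyId { get; }
    public Quote(string legacyId) => LegacyId = legacyId;

    public void AddVersion(int version, decimal amount)
    {
        // Idempotency guard: a bus may deliver the same legacy update twice.
        if (_versions.Any(v => v.Version == version)) return;
        _versions.Add((version, amount));
    }
}

public interface IQuoteRepository
{
    Quote? FindByLegacyId(string legacyId);
    void Save(Quote quote);
}

public class NewQuoteMessageHandler
{
    private readonly IQuoteRepository _quotes;
    public NewQuoteMessageHandler(IQuoteRepository quotes) => _quotes = quotes;

    public void Handle(NewQuoteMessage message)
    {
        // Validate before anything touches the domain model.
        if (string.IsNullOrWhiteSpace(message.LegacyQuoteId) || message.Version < 1)
            throw new ArgumentException("Malformed quote message.");

        // Map the legacy shape onto the greenfield aggregate (one Quote owning
        // many QuoteVersions), then hand off to the normal persistence path.
        var quote = _quotes.FindByLegacyId(message.LegacyQuoteId)
                    ?? new Quote(message.LegacyQuoteId);
        quote.AddVersion(message.Version, message.Amount);
        _quotes.Save(quote);
    }
}
```

Because the handler only depends on the message and a repository interface, you can unit-test the validation and mapping without a bus or a database in sight.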
Retro-fitting this into the legacy app could be tricky - and may well need to involve triggers to capture data updates, but the logic inside the trigger should be pretty straightforward - "send new message".
Bi-directional sync is hard! You can expect to spend a significant amount of time on getting this up and running, and maintaining it as your greenfield project evolves. If you're working on MS software, it's worth looking at http://msdn.microsoft.com/en-us/sync/bb736753.
Hi, I am developing a WPF app that will have paginated records (I am doing the pagination myself, depending on the filters and on the number of records per page the user wants shown).
I have never worked seriously with DataGrids, so what I am asking is: what is the best approach, and the better policy, for updating the table in the DB when working with a DataGrid?
Do we detect the rows that have been changed, or do we update the whole table in the DB? Which is the better way?
Because the user can change one row, and then another. Imagine the user changes 50 rows: will the app have to connect to the DB 50 times?
Unit of work is probably the most common infrastructure solution to this problem: basically, it stores the changes applied to the data and, when ready, executes them against the database in a transaction. Many ORMs, like Entity Framework or NHibernate, already do this for you, so I'd start there.
EDIT
See this example implementation, since it sounds from your comments like you'd need to write your own version. Basically, you build a list of the inserts, updates, and deletes that should happen, and execute them all in a transaction: first inserts, then updates, then deletes. But I'd recommend you look at an ORM like the ones I described above; they already have this as a feature.
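If you do roll your own, a minimal sketch might look like this (the class and method names are invented for the example; real code would also need change tracking on the grid side):

```csharp
using System;
using System.Collections.Generic;
using System.Data.Common;

// A bare-bones, hand-rolled unit of work: it only remembers what should
// happen and replays it once, in order, inside a single transaction.
public class UnitOfWork
{
    private readonly List<Action<DbConnection, DbTransaction>> _inserts = new();
    private readonly List<Action<DbConnection, DbTransaction>> _updates = new();
    private readonly List<Action<DbConnection, DbTransaction>> _deletes = new();

    public void RegisterInsert(Action<DbConnection, DbTransaction> work) => _inserts.Add(work);
    public void RegisterUpdate(Action<DbConnection, DbTransaction> work) => _updates.Add(work);
    public void RegisterDelete(Action<DbConnection, DbTransaction> work) => _deletes.Add(work);

    // One connection, one transaction: not one round trip per edited row.
    public void Commit(DbConnection connection)
    {
        connection.Open();
        using var tx = connection.BeginTransaction();
        try
        {
            foreach (var work in _inserts) work(connection, tx);
            foreach (var work in _updates) work(connection, tx);
            foreach (var work in _deletes) work(connection, tx);
            tx.Commit();
        }
        catch
        {
            tx.Rollback(); // all 50 edits succeed or none do
            throw;
        }
        finally
        {
            connection.Close();
        }
    }
}
```

So even if the user edits 50 rows, the edits just accumulate in memory; the database sees a single connection and a single transaction when Commit is called.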