Camel how to update multiple tables - apache-camel

I am in the process of developing a message router and I have reached a point where I need to update more than one database table when processing a message.
Take the classic example where you receive an order message on a JMS queue that consists of order header information (customer name, order date, etc.) plus a list of ordered items (item name, quantity, etc.). In this case the order header information goes into an ORDERS table, whereas the order items go into an ORDER_ITEMS table.
<order-header customer="John Doe" date="2015-07-21" delivery-address="John's address">
<order-item name="Camel in action" qty="2"/>
<order-item name="Linux cook book" qty="1"/>
</order-header>
The simplest idea that came to my mind would be to route the message to a bean that actually does the whole work: insert the order, retrieve the order id, and then use that id to insert each order item. I am confident this will work, but to me it does not look like a pure Camel approach.
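In plain SQL terms the bean would execute something like the following in a single transaction (a rough sketch with illustrative table/column names, using SQL Server syntax since no particular database is assumed here):
DECLARE @orderId int;
BEGIN TRANSACTION;
-- insert the parent row first
INSERT INTO ORDERS (CUSTOMER, ORDER_DATE, DELIVERY_ADDRESS)
VALUES ('John Doe', '2015-07-21', 'John''s address');
SET @orderId = SCOPE_IDENTITY();  -- assumes ORDERS.ID is an identity column
-- then the child rows, using the generated id
INSERT INTO ORDER_ITEMS (ORDER_ID, NAME, QTY) VALUES (@orderId, 'Camel in action', 2);
INSERT INTO ORDER_ITEMS (ORDER_ID, NAME, QTY) VALUES (@orderId, 'Linux cook book', 1);
COMMIT;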
Another idea that came to my mind, but one that is more complex to implement, would be to:
Get the message from the JMS endpoint
Enrich it with an order id using one of the flavors of the Content Enricher EIP
Use the Splitter EIP to split the above enriched message into one Order header message and multiple Order item messages (keeping the order id in an exchange header)
Use a Content Based Router to route the Order header to a JDBC/SQL endpoint that knows how to insert records into the ORDERS table, and an Order item to another JDBC/SQL endpoint that knows how to insert records into the ORDER_ITEMS table
Would this actually be a workable solution? My concern is that I would expect a foreign key constraint on the database side between the ORDERS and ORDER_ITEMS tables, and what is going to happen if, for whatever reason, an Order item message gets processed faster and reaches its JDBC/SQL endpoint before the Order header? Obviously that would mean trouble.
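For reference, the kind of constraint I have in mind would look roughly like this (illustrative names); with it in place, an ORDER_ITEMS row that arrives before its parent ORDERS row is simply rejected:
ALTER TABLE ORDER_ITEMS
    ADD CONSTRAINT FK_ORDER_ITEMS_ORDERS FOREIGN KEY (ORDER_ID)
    REFERENCES ORDERS (ID);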
What is your opinion in general? Would this work as a viable approach? Is there any way to address my sequencing concern above?
Please note the order scenario is just an example to better explain my use case. In reality the messages to process can be much more complex than a parent-children structure, with more than two tables being inserted into or updated.
Any ideas would be much appreciated. Thank you in advance for your inputs.

Related

How to represent many-to-many relationship in ERD

I am trying to build a restaurant system.
First I am building an ER diagram for the requirements.
I have two tables: Customer, which has only one attribute, table_number, and
Item, which holds the dishes that the customer will choose from and has several attributes (id, name, category, price).
The part of the requirements I have a problem with is:
when the customer makes an order and submits it, two things should happen: first, send the order details to the kitchen, and then save the same order in history_order.
My question is:
how can I represent the two many-to-many relationships between these tables?
I know how to represent the current order that will be read by the chef, but I do not know
how to represent the two kinds of many-to-many relationship in the best way without breaking
the principles or best practices.
I have uploaded an image of my work to show you what I am talking about.
I hope it is clear and understandable.
If there is anything unclear, please just let me know in the comments.
the ERD diagram
The order is the order. It's sent to the kitchen, and it's archived in the history. Like everything else, you want to record it once. Possibly you'll refer to it from more than one table.
Let's say you have a table order_items with key attributes order_id and item_id. Now you can look up all items for an order. You could have another table to track the progress of each order. Call it orders with attributes order_id and status, which might have values such as ordered (sent to kitchen), ready (to be served), served, perhaps cancelled, and (hopefully) paid. You could also have an attribute status_time to record when the status was last updated. Your list of orders "in the kitchen" are those orders with status ordered; the history is those with status paid.
You probably don't need an order_id. From your description, it looks like table_number and order_time uniquely identify each order. You could use that pair instead of an opaque ID.
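A minimal SQL sketch of those two tables (column names are illustrative, and as noted you could replace order_id with the (table_number, order_time) pair):
CREATE TABLE orders (
    order_id     int PRIMARY KEY,
    table_number int NOT NULL,
    status       varchar(20) NOT NULL,   -- ordered / ready / served / cancelled / paid
    status_time  datetime NOT NULL
);
CREATE TABLE order_items (
    order_id int NOT NULL REFERENCES orders(order_id),
    item_id  int NOT NULL REFERENCES item(id),
    PRIMARY KEY (order_id, item_id)
);
-- "in the kitchen" and the history are then just filters on status
SELECT * FROM orders WHERE status = 'ordered';
SELECT * FROM orders WHERE status = 'paid';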

Mail-list implementation (database design)

I am currently refactoring a web app. Right now there is a 'Contact' table that has a one-to-one correspondence with the main 'Client' table, with a bool indicating whether clients want to receive mail. The mail list is accessed about once per month, and the clients' profile page is accessed many times a day. I am wondering whether it would be 'cleaner' to make a new table with the client ids of everyone on the mail list, since querying whether a key is in that table should take about the same time as accessing the information. Should I do that, or should I leave it as it is?
Thanks,
Joyce
Leave it as is. Why complicate? Keep it as simple as possible.
An association table with (clientid, emailid) is an over-normalized form. I think it's better to keep it like this. Also, if you want to show the contact email id on any UI screen, you avoid the inner-join overhead that the new association table would introduce.
However, if in the future you come across a requirement to have multiple email ids associated with a client id, you could think about creating an association table then.
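If that requirement ever arrives, the association table might look something like this (a sketch with illustrative names, storing the address directly so the example stays self-contained):
CREATE TABLE client_email (
    client_id int NOT NULL REFERENCES client(id),
    email     varchar(255) NOT NULL,
    PRIMARY KEY (client_id, email)
);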

Validate order in Prestashop regarding database

I currently want to validate my order in PrestaShop.
In fact, for the needs of a module, I don't use the validateOrder() function from the file PaymentModule.php.
By doing it manually, the order process is OK (with status 'Payment accepted') but my orders are not validated.
Which tables in the database are filled in during the order process? I would like to check my results in the database.
Thanks
When an order is placed in PrestaShop, the data is entered into a few database tables.
1) ps_orders: has the data regarding the order, such as the cart id, customer id, address ids, the language in which the order was placed, the module used for payment, the total order amount, etc.
2) ps_order_detail: This table has all the products for a particular order.
3) ps_order_history: This table has the order statuses history. Whenever order status is changed, an entry is made here.
The above are the most commonly used tables. Please note that in particular PrestaShop versions some additional tables may be used (new tables may be added in newer versions), so you should check the database tables whose names start with order_ and so on.
You can also verify this by placing a normal order and then, using its id, checking what data is placed in which tables. Then, in your module, you can enter the data for an order into those particular tables.
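For example, after placing a test order you could inspect those tables directly (assuming the default ps_ table prefix; replace 1 with the id of your test order):
SELECT * FROM ps_orders        WHERE id_order = 1;  -- order header: customer, addresses, payment module, totals
SELECT * FROM ps_order_detail  WHERE id_order = 1;  -- one row per product in the order
SELECT * FROM ps_order_history WHERE id_order = 1;  -- one row per status change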
Hope this will help.
Thank you

Counting the number of occurrences of something in the database

For my website, I want to make something that works a bit like the tags on Stackoverflow - so some fields will have an autocompleter, and the autocompleter will display the number of times that other users have selected each suggested value. I suppose I'd have a database structure like this:
Articles
ArticleID
Content
TagId
Tags
TagId
TagName
Occurances
With the idea being that Occurances represents the number of times each TagId is referenced from the Articles table.
What is the best way to implement this? I could add/subtract from the Occurances column in each of the stored procedures that update the Articles table, but I might miss one, and anyway, there are some difficulties with this if a user removes a tag from something (it's easy to add 1 to the field for the newly added tag, but harder to work out which tag is being replaced).
There is a lot I don't understand about SQL Server. Is there a more robust way of counting occurrences like this, one that the database system will deal with itself? It would be OK if the data were cached once a day or something.
To be able to have more than one tag attached to an article, you will have to add another table that connects the article table to the tag table. It's called a 'many to many' relation.
article
article_id
content
article_tag
article_id
tag_id
tag
tag_id
tagname
Done like this, article 1 can be attached to tag 2, and the next row can be 1 and 3 and so on, so one article points to many tags. To count a certain tag, you join the article_tag and tag tables and count the rows in article_tag where tag.tagname = 'mysql', for example.
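For example (using the table names above):
SELECT COUNT(*) AS occurances
FROM article_tag
JOIN tag ON tag.tag_id = article_tag.tag_id
WHERE tag.tagname = 'mysql';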
You can create an indexed view that aggregates all the counts you need and is automatically maintained:
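-- note: WITH SCHEMABINDING and COUNT_BIG(*) (rather than COUNT(*)) are required
-- for a view to be indexable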
create view TagCounts
with schemabinding
as select TagId, count_big(*) as Occurances
from dbo.ArticleTags
group by TagId;
go
create unique clustered index cdxTagCounts on TagCounts (TagId);
go
Now the TagCounts.Occurances field is automatically maintained by SQL Server whenever you insert/delete/update the ArticleTags table. You can query it like:
select Occurances from dbo.TagCounts with (noexpand) where TagId = ...;
And you can cache the result with LinqToCache, as such a query matches the restrictions of Query Notifications.
The trade-off of using a pre-aggregated indexed view is scalability: since an update of any article updates the Occurances count for that article's tags, an exclusive lock is required to update the count, which implies that only one transaction can use a given TagId at any moment. Depending on your traffic and on other elements of your design, this restriction may or may not be acceptable.
The other alternative is a table of counts. Front ends (your ASP.NET farm) read these counts and then update an in-memory count for each operation, keeping track of the delta from the counts in the table. Periodically (e.g. every 5 minutes) the front ends merge their deltas into the table and refresh the in-memory copy. This way front ends see a stale version of the truth, but a user gets immediate feedback for his own actions: because of session stickiness his HTTP requests are processed by the same front end, so he immediately sees his own article updates affect the tag counts. Users do not, however, immediately see the updates from other users that are load-balanced to other front ends. Because a crash of a front end (or a process recycle...) will lose the deltas kept so far, the count table will drift away from the truth over time and would have to be periodically corrected against the true count in the database.
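A rough sketch of that count table and of the periodic merge a front end would run for each (tag, delta) pair it has accumulated (names and parameters are illustrative):
CREATE TABLE TagCountCache (
    TagId      int NOT NULL PRIMARY KEY,
    Occurances bigint NOT NULL
);
-- @TagId and @Delta are supplied by the front end for each accumulated delta
MERGE TagCountCache AS target
USING (SELECT @TagId AS TagId, @Delta AS Delta) AS src
    ON target.TagId = src.TagId
WHEN MATCHED THEN
    UPDATE SET Occurances = target.Occurances + src.Delta
WHEN NOT MATCHED THEN
    INSERT (TagId, Occurances) VALUES (src.TagId, src.Delta);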
If you want even more accuracy (all users see the true count immediately), then you can do something based on fast in-memory key-value stores, which would be basically the same as my first proposal but with much higher throughput/lower latency, perhaps something based on memcached + Redis. I'm not acquainted with SO's architecture, but I believe they may be doing something similar.
You could use this query to get the number of occurances by tag:
SELECT Tags.TagId, COUNT(Articles.TagId) AS Occurances
FROM Tags
LEFT JOIN Articles ON Articles.TagId = Tags.TagId
GROUP BY Tags.TagId
It could be used in a view or stored procedure, and you can set up your website's cache to requery it as often as required.
If you are using a relational database, the correct way to handle this problem is to NOT store the occurrences on the table itself, but rather dynamically query the number of occurrences on the articles table.
If you don't do it this way, you're stuck coding update queries every time you add/delete a row...generally not nice. If you query dynamically, you won't have an occurrences column in the table, but rather will get that information in your eg. presentation/model layer code.
Use:
SELECT COUNT(*) FROM ARTICLES WHERE TagId = 'xxx' ;
This query would be run from iterating code, once per tag you need a count for.

private message database design

I'm creating a simple private message system and I'm not sure which database design is better.
The first design is a table for messages, and a table for message comments:
Message
---------------
id
recipientId
senderId
title
body
created_at
MessageComment
---------------
id
messageId
senderId
body
created_at
The second design is one table for both messages and comments, with an additional field messageId so I'll be able to chain messages as comments.
Message
---------------
id
recipientId
senderId
messageId
title
body
created_at
I'd like to hear your opinion!
In this case, I'd vote for one table.
In general, whenever the data in two tables is the same or very similar and the logical concepts they represent are closely related, I'd put them in a single table. If there are lots of differences in the data or the concepts are really different, I'd make them two tables.
If you make two tables and you find yourself regularly writing queries that do a union of the two, that's an indication that they should be combined.
If you make one table but you find there are many fields that are always null for case A and other fields that are always null for case B, or if you're giving awkward double-meanings to fields, like "for type A this field is the zip code but for type B it's the product serial number", that's an indication they should be broken out.
Using a single table is the most advantageous.
It allows better message threading possibilities and it reduces duplication of effort, e.g. when you want to add a column there is only one table to change.
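A sketch of what that single-table design and a thread lookup could look like (types are illustrative, SQL Server flavour; messageId points at the first message of the thread and is NULL for that first message):
CREATE TABLE message (
    id          int PRIMARY KEY,
    recipientId int NOT NULL,
    senderId    int NOT NULL,
    messageId   int NULL REFERENCES message(id),
    title       varchar(255) NULL,      -- typically only set on the opening message
    body        varchar(max) NOT NULL,
    created_at  datetime NOT NULL
);
-- fetching a whole conversation is then a single query
SELECT * FROM message
WHERE id = @threadId OR messageId = @threadId
ORDER BY created_at;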
I would rather use the first design and add an additional field del_code to both tables. That way you'll be able to hide deleted messages and still keep them in your database.
