How to implement web applications more efficiently?

I've been developing a website using JSF & PrimeFaces. During development, I noticed that there are two bottlenecks for any web application. Correct me if I'm wrong.
The bottlenecks are:
I've used the Hibernate framework for the persistence layer. Now, if a change occurs in the database, there's no way to reflect it in scoped beans. Hibernate has a dynamic-update attribute which helps by updating only the columns that actually changed (at persist time), but I've found no similar mechanism for always getting up-to-date DAOs. Here the developer has to take responsibility for updating them with the session.refresh() method, which simply reloads the entire object tree from the database table. So, for each small database change, I think the caching benefit of the DAOs in Hibernate is lost, since every time they're evicted from the session cache. In a word, database updates don't trigger DAO updates.
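To make it concrete, this is the manual refresh I mean: a minimal sketch, assuming a plain Hibernate SessionFactory (DaoRefresher is just an illustrative name):

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;

    public class DaoRefresher {

        private final SessionFactory sessionFactory;

        public DaoRefresher(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        // refresh() re-reads the entity's state from the database, but it
        // reloads the whole object tree even if only one column changed,
        // and the developer has to remember to call it.
        public <T> T reload(T entity) {
            Session session = sessionFactory.getCurrentSession();
            session.refresh(entity);
            return entity;
        }
    }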
After updating a DAO, if I want to reflect the changes at the view level, I have to rely on PrimeFaces sockets (PrimePush), since refreshing the whole page every time isn't a good implementation and a PrimeFaces socket allows updating specific component ids. That means that for each DAO field I have to use several different PrimeFaces sockets, each with its own unique channel, and sending messages to those sockets has to be done by the developer in bean code.
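The bean side of one such push looks roughly like this (based on the PrimeFaces 5.x PrimePush API as I understand it; the /orders channel is a made-up example that would match a <p:socket channel="/orders" /> in the view):

    import org.primefaces.push.EventBus;
    import org.primefaces.push.EventBusFactory;

    public class OrderNotifier {

        // One channel per pushable thing: every DAO field I want to push
        // independently needs its own channel and its own publish call.
        public void orderChanged(String orderId) {
            EventBus eventBus = EventBusFactory.getDefault().eventBus();
            eventBus.publish("/orders", orderId);
        }
    }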
So, the question is: how can these be handled in an efficient way? Are there other technologies/frameworks which handle these issues so that the developer doesn't have to worry about them?

Ideally you should be doing something like:
Hibernate persistence layer (DAOs performing the CRUD operations)
Managed beans which access your DAOs
View (PrimeFaces) using the backing bean to update the view
You don't need PrimePush or anything like it. The view should be refreshed by actions in your views.
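A minimal sketch of that layering, assuming JSF 2.2 annotations (Order and OrderDao are hypothetical names):

    import java.io.Serializable;
    import java.util.List;
    import javax.faces.view.ViewScoped;
    import javax.inject.Inject;
    import javax.inject.Named;

    @Named
    @ViewScoped
    public class OrderBean implements Serializable {

        @Inject
        private OrderDao orderDao; // Hibernate-backed DAO doing the CRUD

        private List<Order> orders;

        // Called by an action in the view (e.g. a p:commandButton):
        // re-query instead of pushing changes from the server.
        public void load() {
            orders = orderDao.findAll();
        }

        public List<Order> getOrders() {
            return orders;
        }
    }

With ajax-enabled actions updating specific component ids, the view stays current without any push channels.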

Related

Best approach to interact with the same database table from more than one microservice

I have a situation where I need to add/update/retrieve records in the same database table from more than one microservice. I can think of the three approaches below; please help me pick the most suitable one.
1. Having a dedicated microservice, say database-data-manager, which interacts with the database to add/update/retrieve data; all the other microservices call the endpoints of database-data-manager when required.
2. Having a Maven library called database-data-manager; all the other microservices use this library for their DB interactions.
3. Having the same code (copy-paste) in all the applications to take care of the DB interactions.
Approach 1 seems expensive, as we need to host a dedicated application for basic functionality.
Approach 2 would reduce boilerplate code, but library versions are difficult to manage.
Approach 3 would cause a lot of boilerplate code and maintenance effort to keep similar code in all the microservices.
Please suggest. Thanks in advance.
A strict definition of "microservice" would include the fact that it's essentially self-contained, including any data storage it might need. So what you really have is a collection of services talking to a common database. Semantics aside...
Option 1 sounds like it's on the right track: you need something sitting between the microservices and the database. This could be a cache or a dedicated proxy service. Say you have an old legacy system which is really fragile: controlling data in and out through a more capable service acting as a proxy is a well-proven pattern.
Such a proxy might do a bulk read of the database, hold the data in memory to service high-volumes of reads, and handle updates.
Updating is non-trivial, and there are various options:
Option 1: the service's cached data becomes the pseudo-master. Updates are applied to the cached data first, then go into a queue to be applied to the underlying database.
Option 2: the service's cached data is used only for reads. Updates are applied to the database first, and only if the update succeeds is it applied to the cached data.
Option 1 is great for performance, on the assumption that the proxy service is really good at managing the data and satisfying service requests. But, depending on how you implement it, it may be vulnerable to outages, in which case you might lose any data that has made it into the cache but not yet into the pipeline that gets it into the database.
Option 2 is good for ensuring a solid master set of data, but there's the risk that consuming services read cached data that is already out of date because it is just then being updated in the database.
In terms of implementation, a queue of some sort to handle getting updates to the database is something you may want to consider, as it gives you one place to control how (and which) updates reach the database.
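To make Option 1 concrete, here is a rough write-behind sketch; CustomerRecord and DatabaseWriter are made-up stand-ins, and a production version would need durability for the queue:

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    public class DataProxy {

        // Hypothetical row type and database gateway, just for the sketch.
        public record CustomerRecord(String id, String name) {}

        public interface DatabaseWriter {
            void upsert(CustomerRecord record);
        }

        private final Map<String, CustomerRecord> cache = new ConcurrentHashMap<>();
        private final BlockingQueue<CustomerRecord> pending = new LinkedBlockingQueue<>();

        public DataProxy(DatabaseWriter writer) {
            // A single drain thread gives one place to control how
            // (and which) updates reach the database.
            Thread drainer = new Thread(() -> {
                try {
                    while (true) {
                        writer.upsert(pending.take());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            drainer.setDaemon(true);
            drainer.start();
        }

        // Reads are served from memory; no database round trip.
        public CustomerRecord read(String id) {
            return cache.get(id);
        }

        // Cache first (pseudo-master), then queue the write for the DB.
        public void update(CustomerRecord record) {
            cache.put(record.id(), record);
            pending.add(record);
        }
    }

The outage risk mentioned above is exactly whatever sits in pending when the process dies.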

Coherence Cache and WPF

A couple of very basic questions:
1) I want to try WPF with a Coherence cache. I don't know much about Coherence, but I have heard that it has an event mechanism that can tell WPF when some underlying data has changed. Using that, we should be able to update the view whenever the underlying data changes, correct?
2) For that to happen, should all the interaction with Coherence run on a separate thread so that we can read the incoming events, or will it work on the main UI thread?
Depending on what you are doing, Oracle Coherence may be overkill for this problem. Coherence is really good when you have multiple servers that need to keep data in sync, and/or when you have lots of clients that need to connect in to live data. It sounds like you might have that second use case, but it's unclear.
Coherence has several very handy abilities for clients that need to keep their data up to date. For example, a client can create a Continuous Query Cache, which means that the data result for that query gets cached in RAM on the client, and then whenever any other client (or any server) changes that data, it is automatically updated in the RAM of that client. Then if the UI needs to be updated, it is very simple, because the UI can sign up for the event when that data in RAM changes. This is used in applications like trading systems for financial services companies.
One more thing you might be asking about is when data in the database changes. There is a Coherence feature called Coherence Hot Cache, which uses event data that flows from the database (using Oracle GoldenGate technology) to update the cache servers, which in turn update the various Continuous Query caches and push out events (as described above). So basically, you can have data pushed all the way from a database change up into the GUIs that people are looking at. It's pretty cool stuff :)
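As a rough illustration of the Continuous Query Cache idea, using the Java API (from memory, so check the docs for your release; the "prices" cache name is made up, and the .NET client used from WPF is analogous):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.AbstractMapListener;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.filter.AlwaysFilter;

    public class PriceFeed {

        public static void main(String[] args) {
            NamedCache prices = CacheFactory.getCache("prices");

            // Local live view of the server-side data, kept up to date
            // automatically as other clients or servers change it.
            ContinuousQueryCache cqc =
                    new ContinuousQueryCache(prices, new AlwaysFilter());

            cqc.addMapListener(new AbstractMapListener() {
                @Override
                public void entryUpdated(MapEvent event) {
                    // Events arrive on a Coherence dispatcher thread, not the
                    // UI thread, so a GUI must marshal this onto its UI thread
                    // (the Dispatcher, in WPF) before touching controls.
                    System.out.println(event.getKey() + " -> " + event.getNewValue());
                }
            });
        }
    }

That also answers the threading question: the events do not arrive on the main UI thread.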
(For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.)
MVVM is the answer: it gives you data binding, events, and change notification.

Two different MVC sites sharing one database with Entity Framework?

I have a database-driven site developed with MVC and Entity Framework code first. The database is rather large and contains all the data that I would need for an additional web application. What are the implications of setting up a new website with database first using the same existing database? What I am really trying to ask is whether it would be a bad idea to share a database between two web applications where both are querying and doing updates to the data. Will this slow down processing on the original site or possibly lock up data, etc.? Both sites would be running on the same machine...
TIA
If sharing the same data between both applications is important, i.e. you want the data to be shared between the applications, then you have to use the same database. It'll slow down processing somewhat, but if that's the requirement, then that's what you do.
There's nothing stopping you from having two applications access the same database. Databases are built to handle multiple connections and multiple people accessing them, so there aren't many risks involved. You probably won't even notice the speed difference.
The two biggest risks I can think of:
if both applications edit the same record, the one that submits its data last wins, unless you put business logic in place to prevent that from happening
if the database schema is updated, both applications need to be updated to reflect the new schema in order to access and edit the data successfully

Data layer architecture for WPF Rich-Client?

Background
I need to build a rich-client application using .NET. The app needs to handle tree-view and table-view controls with about 100,000 entities. The GUI is built with WPF, very likely using Telerik controls. My question is about the general architecture of the data layer. I have some coarse ideas of the concepts, but would highly appreciate your comments/thoughts and hints on which technology I should dig deeper into. Here are my thoughts:
Conceptual Layers
Presentation Layer
Just the WPF controls. I'd need performant synchronization of different controls on property changes, but I don't anticipate major problems here.
Business Layer
Creating views (object selections to be displayed in the controls), CRUD operations (modifications done directly on the POCOs), and searching (global search, but also search limited to a view).
Repository
Holds POCOs in an entity map and decides whether to load from the persistence store.
Persistence-Manager
I'm thinking of using LocalDB or a simple key-value store as a (persistent) client cache. The Persistence-Manager would try to get an object from the local store and otherwise fetch the data from the server; it would also persist data back to the client cache. The data would be made available via a web service. I'm happy to give WCF Data Services a try.
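In Java-flavored pseudo-code (just for brevity; the target is .NET, and LocalStore/RemoteService are placeholders), the read path I have in mind is:

    import java.util.Optional;

    // Cache-aside sketch of the Persistence-Manager read path.
    public class PersistenceManager {

        // Hypothetical abstractions over the LocalDB/key-value store
        // and the web service, just for this sketch.
        public interface LocalStore {
            Optional<Object> find(String id);
            void put(String id, Object entity);
        }

        public interface RemoteService {
            Object fetch(String id);
        }

        private final LocalStore localStore;
        private final RemoteService remoteService;

        public PersistenceManager(LocalStore localStore, RemoteService remoteService) {
            this.localStore = localStore;
            this.remoteService = remoteService;
        }

        // Try the local client cache first, fall back to the server.
        public Object load(String id) {
            return localStore.find(id).orElseGet(() -> {
                Object fresh = remoteService.fetch(id);
                localStore.put(id, fresh); // persist to the client cache
                return fresh;
            });
        }
    }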
Persistence-Layers
There would be two parts:
- Local DB connection using an ORM like EF or OpenAccess; or a simple key-value store
- HTTP connection to consume the Web-Service
Questions
In a layering like this, what about lazy loading of referenced objects? I know EF and other ORMs take care of many of the issues I have here, too, but I don't yet see how to plug these frameworks into the above layering. Also, where should changes be tracked? Where is consistency ensured when deleting objects (e.g. deleting references to those objects as well)?
I would eager-load whole views (hierarchical structures) and run LINQ to Objects over those collections of POCOs, maybe implementing a simple inverted index if LINQ performance became an issue. But how should I best implement global searches on the server? Are there libraries ("LINQ to OData") available?
What do you think about a fully "disconnected" scenario, holding all the data a user needs in a local database? Sync would happen on start/stop and when user-triggered. I could use an ORM directly on the local DB, with good chances of saving a lot of headaches over implementing many consistency features by hand (using the above layering).
Or, in contrast, forget about the local database and batch-eager-load most of the needed data. Here I'm concerned about the performance of the web services (without having experience with OData or WCF). I've built an app using Redis and Python that loads about 200,000 business objects quite fast (< 1 min) to the client (the objects are already serialized and cached in Redis).
I'll certainly do some prototyping and benchmarking, but to get a good start, any thoughts and recommendations are highly appreciated.
Cheers,
Jan

Web services and database concurrency

I'm building a .NET client application (C#, WinForms) that uses a web service for interaction with the database. The client will be run from remote locations using a WAN or VPN, hence the idea of using a web service rather than direct database access.
The issue I'm grappling with right now is how to handle database concurrency. That is, if two people from different locations update the same data, how do I deal with it? I'm considering using timestamps on each database record and making them part of the UPDATE statements' WHERE clauses, but that means the timestamps have to move back and forth through the web service interface, which seems kind of ugly.
What is the best way to approach this?
I don't think you want your web service to talk directly to the database. You probably want your service to interact with some type of business components which in turn interact with a data access layer. Any concurrency exceptions can be passed from the DAL up to the business layer, where they can be handled so that the web service never has to see the timestamps.
But if you are passing something like a DataTable up to the client and you want to avoid timestamps, you can do concurrency checking by comparing field by field. The TableAdapter wizards generate this type of concurrency checking by default if you ask for optimistic concurrency.
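Either way, inside the DAL the check boils down to a guarded UPDATE. A minimal JDBC-flavored sketch (the customer table and row_version column are made up; the same shape applies in ADO.NET):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class CustomerUpdater {

        // Optimistic concurrency: the UPDATE only succeeds if the row still
        // carries the version we originally read. Zero affected rows means
        // someone else changed it in the meantime.
        public boolean updateName(Connection con, long id, String name,
                                  long expectedVersion) throws SQLException {
            String sql = "UPDATE customer SET name = ?, row_version = row_version + 1 "
                       + "WHERE id = ? AND row_version = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, name);
                ps.setLong(2, id);
                ps.setLong(3, expectedVersion);
                return ps.executeUpdate() == 1; // false -> concurrency conflict
            }
        }
    }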
If your collisions occur infrequently enough that they can be resolved manually, a simple solution is to add an update trigger that copies a row's pre-update values to an audit table. This way the most recent write is the "winner", but no data is ever lost to an overwrite, and an administrator can restore an earlier row state or even combine them.
This technique has its downsides, though, and is not a good fit where overwrites are frequent.
Also, this is slightly off-topic, but using web services isn't necessarily the way to go just because the clients will be remoting into the network. ASP.NET web services are XML-based and very verbose. If your client application can count on being always connected, you'd be better off not using web services.
