Transparent Database Cache

I have an application that calls a third-party product. This product makes a large number of database calls as part of its processing, although the data in the database is essentially static: it is updated perhaps once per day by a refresh mechanism.
Since the data is static but the database is under heavy load, I want to put a caching layer in between the application and the database. Because the application is a third-party product (written in C), I can't implement the cache in code myself. So...
is there a product or tool that can transparently sit in between the application and the database and act as a cache? Something that can intercept the requests, respond with cached data if it has it, or send the query down to the database if it does not.
I know that databases have their own caches, but I am looking to offload the work to the application's server rather than the database server, to reduce the load on the DB server (which carries significant licence costs).
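For context, the core of what such a proxy does is plain read-through caching keyed on the query text; the product intercepts the database wire protocol, so the C application needs no changes. A minimal C# sketch of that logic, with a hypothetical queryDatabase delegate standing in for the real pass-through:

```csharp
using System;
using System.Collections.Concurrent;

// Sketch of the read-through logic a transparent cache performs. A real
// product intercepts the database wire protocol; the queryDatabase delegate
// and the SQL-text key are hypothetical stand-ins for illustration.
class ReadThroughCache
{
    private readonly ConcurrentDictionary<string, (DateTime CachedAt, string Result)> _cache = new();
    private readonly TimeSpan _ttl = TimeSpan.FromHours(12); // data refreshes roughly once per day

    public string Execute(string sql, Func<string, string> queryDatabase)
    {
        // Respond with cached data if we have a fresh entry for this exact query text...
        if (_cache.TryGetValue(sql, out var entry) && DateTime.UtcNow - entry.CachedAt < _ttl)
            return entry.Result;

        // ...otherwise send the query down to the database and remember the result.
        var result = queryDatabase(sql);
        _cache[sql] = (DateTime.UtcNow, result);
        return result;
    }
}
```

Proxy products run this logic in a separate process on the application's server, which is exactly the offload described above.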

Related

Make microservice application resilient to db downtime

We have a microservice application which saves its data to an Oracle DB.
So far the DB is our single point of failure, which we want to improve (we are using a single Oracle DB with a cold failover instance).
Now the company is asking us to upgrade the Oracle DB, and the issue is that the upgrade requires downtime.
For that reason we were thinking about:
add a global/geo-replicated cache layer (e.g. Redis) between the microservices and the DB
for each new record that should be saved to the DB:
add the record to the cache (persisting the entries to disk in case the whole cache layer crashes)
publish an event to a queue (we have RabbitMQ); on the other side of the queue we can create a new service to consume the events and add them to the DB asynchronously
It's basically adding a write-behind cache layer.
In the above scenario we are confident that we can easily hold a week's worth of data, or more, in the cache.
If the DB is down, the new service listening to the queue simply retries adding the rows to the DB; as soon as an event is written to the DB it can be acked and the next one consumed (a sketch of this flow follows below). This way, if the DB is down or we have to do some maintenance, the main application should be unaffected: users can still "save" data and retrieve it (with the one-week limit applying whenever the DB is down).
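For what it's worth, the write path described above looks roughly like this (a minimal C# sketch using StackExchange.Redis and the RabbitMQ.Client v6 API; the "db-writes" queue name and the saveToOracle delegate are hypothetical placeholders):

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;
using StackExchange.Redis;

// Sketch of the write-behind flow: cache first, then queue, then DB.
class WriteBehindSketch
{
    public static void SaveRecord(IDatabase redis, IModel channel, string key, string json)
    {
        // 1. Write to the cache first so reads are served immediately.
        //    (Redis persistence, AOF/RDB, covers the "store on disk" requirement.)
        redis.StringSet(key, json);

        // 2. Publish a durable event; a separate consumer drains it into Oracle.
        var props = channel.CreateBasicProperties();
        props.Persistent = true;
        channel.BasicPublish(exchange: "", routingKey: "db-writes",
                             basicProperties: props,
                             body: Encoding.UTF8.GetBytes(json));
    }

    public static void StartDbWriter(IModel channel, Action<string> saveToOracle)
    {
        channel.QueueDeclare("db-writes", durable: true, exclusive: false, autoDelete: false);
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (_, ea) =>
        {
            try
            {
                saveToOracle(Encoding.UTF8.GetString(ea.Body.ToArray()));
                channel.BasicAck(ea.DeliveryTag, multiple: false); // ack only after the insert succeeds
            }
            catch
            {
                // DB down or busy: requeue the event and try again later.
                channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: true);
            }
        };
        channel.BasicConsume("db-writes", autoAck: false, consumer);
    }
}
```

The key design point is acking only after the Oracle insert succeeds, so events survive both DB downtime and consumer crashes.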
The downside is that the architecture is more complex, and we now have eventual consistency to deal with.
Is there another design pattern to better deal with database downtime without having the users feel that something is wrong?
Do you know of any existing tools we can use to automatically read an event from RabbitMQ and save it to the DB? (We already do this with Logstash to forward some RabbitMQ events to Elasticsearch.)
The next step would be a DB cluster (Cassandra, MongoDB, etc.), but for now we do not have the capacity for that.
Adding a cache to increase availability is probably an awkward solution, as you will eventually hit the same issue of keeping the cache itself available. Handling cold caches is not a simple task either.
I am not familiar with Oracle, but most databases do support replication, and you have options for synchronous/asynchronous/semi-synchronous patterns.
A quick search turned up "Oracle Data Guard", which seems to be the tool you need. The docs say it supports data replication and failover.
As for using Cassandra: I highly recommend evaluating that carefully first. Oracle gives you ACID properties and joins, which make application code much simpler, and the consistency patterns will be different. There are lots of details to think about.
My general recommendation is to look into your data layer (Oracle in this case) and follow its vendor's recommendations for achieving high availability. Oracle is a mature product, and availability is well supported.

When is SQL Server as a distributed caching mechanism worthwhile?

I have 2 web servers, and I'm running into an issue where I need to prematurely expire (remove) a cached item. Since I'm currently using IMemoryCache, a Remove(key) call only removes the cached item from one server. I don't have the ability to bring in Redis, NCache, etc., but the app is already using SQL Server. I can easily set up distributed caching with a cache table, but it seems counter-intuitive, because what I'm caching is user data that I don't want to hit the database for on every call (e.g., caching 50 items of user data for 5 minutes has cut out roughly 500 trips to the database). Is there something I'm missing that would make using SQL Server as my distributed cache backend actually beneficial?
It sounds like you are having the typical problem of cache invalidation and expiry. You can use a grid cache for distributed caching (e.g. Redis, Hazelcast), but that doesn't by itself solve the invalidation problem. You may want to consider vendors like ScaleArc or Heimdall Data: they provide the caching logic, you choose the storage of choice (in-memory, Redis, etc.), and they handle query caching and invalidation. There is a SQL Server article on it: https://www.itprotoday.com/industry-perspectives/reduce-sql-server-costs-heimdall-data-caching
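If you do stay on SQL Server as the backing store, note that .NET ships a SQL Server implementation of IDistributedCache (the Microsoft.Extensions.Caching.SqlServer package), and the economics can still work out: a single-key lookup against the cache table is far cheaper than the original query you are caching, and Remove(key) is seen by both web servers. A minimal sketch, with placeholder connection string and table name (the table itself is created with the dotnet sql-cache tool):

```csharp
using System;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.DependencyInjection;

// Sketch: SQL Server-backed IDistributedCache. The connection string,
// schema, and table name are placeholders.
var services = new ServiceCollection();
services.AddDistributedSqlServerCache(options =>
{
    options.ConnectionString = "<your-connection-string>";
    options.SchemaName = "dbo";
    options.TableName = "AppCache";
});
var cache = services.BuildServiceProvider().GetRequiredService<IDistributedCache>();

// A single-key read of this entry is much cheaper than re-running the
// query whose result it holds.
cache.SetString("user:42", "{...serialized user data...}",
    new DistributedCacheEntryOptions { AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5) });

// Unlike IMemoryCache, this removal is visible to every web server.
cache.Remove("user:42");
```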

What methods are available to implement a local cache of large DB-driven data?

My company maintains a number of large time series databases of process data. We maintain a replica of a subset at a pseudo-central location, and I access the data from my laptop. Data access over our internal WAN, even to the pseudo-central server, is fairly expensive (in time).
I would like to cache data requests locally on my laptop so that when I access it for a second time I actually pull from a local db.
There is a fairly ugly client-side DAO that I can wrap to maintain the cache, but I'm unsure how I can get the "official" client applications to talk to the cache easily. I have the freedom to write my own "client" graphing/plotting system, and I already have a custom application that does some data mining. That custom application dumps data into .csv files that are moved around manually on a very ad-hoc basis.
What is the best approach to this sort of caching/synchronization? What tools could implement the cache?
For further info: I estimate the raw data set at approximately 5-8 TB of time series data per year, with at least half of the data being very compressible. I only want to cache, say, a few hundred MB locally. When ad-hoc queries are made on the data, they tend to be very repetitive over very small chunks of the data.
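One concrete shape for this is a decorator around that ugly DAO that checks a local SQLite file before going over the WAN. A minimal C# sketch using Microsoft.Data.Sqlite, where the ITimeSeriesDao interface and the exact-range key scheme are hypothetical stand-ins for the actual client-side DAO:

```csharp
using Microsoft.Data.Sqlite;

// Hypothetical interface standing in for the existing client-side DAO.
public interface ITimeSeriesDao
{
    byte[] FetchRange(string series, long fromTicks, long toTicks);
}

// Decorator: serve repeated queries from a local SQLite file, go over
// the WAN only on a miss.
public class CachingDao : ITimeSeriesDao
{
    private readonly ITimeSeriesDao _remote;
    private readonly SqliteConnection _local;

    public CachingDao(ITimeSeriesDao remote, string cacheFile = "tscache.db")
    {
        _remote = remote;
        _local = new SqliteConnection($"Data Source={cacheFile}");
        _local.Open();
        using var ddl = _local.CreateCommand();
        ddl.CommandText = "CREATE TABLE IF NOT EXISTS chunks (key TEXT PRIMARY KEY, data BLOB)";
        ddl.ExecuteNonQuery();
    }

    public byte[] FetchRange(string series, long fromTicks, long toTicks)
    {
        // Exact-range keys work well when ad-hoc queries repeat over the same small chunks.
        var key = $"{series}:{fromTicks}:{toTicks}";
        using (var get = _local.CreateCommand())
        {
            get.CommandText = "SELECT data FROM chunks WHERE key = $k";
            get.Parameters.AddWithValue("$k", key);
            if (get.ExecuteScalar() is byte[] hit) return hit; // local hit: no WAN round trip
        }
        var data = _remote.FetchRange(series, fromTicks, toTicks); // miss: fetch once over the WAN
        using var put = _local.CreateCommand();
        put.CommandText = "INSERT OR REPLACE INTO chunks (key, data) VALUES ($k, $d)";
        put.Parameters.AddWithValue("$k", key);
        put.Parameters.AddWithValue("$d", data);
        put.ExecuteNonQuery();
        return data;
    }
}
```

Eviction (to stay within the few-hundred-MB budget) and invalidation on upstream refresh are left out of the sketch; a last-used column plus a total-size check on insert would cover the first.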

Improving speed in winform application and WCF with Caching

We provide a critical application for a customer. It's a ClickOnce WinForms application which consumes several WCF services that communicate with an Oracle database.
The services are hosted on Oracle Application Server, with two Web Cache servers in front for load balancing. The database is on another, separate machine.
The thing is, the application now has poor performance and we need to speed it up. We have tried many techniques: optimizing queries by adding indexes after analyzing explain plans, reducing service calls from the client, and profiling the client application for pitfalls.
But I would really like to set up a caching layer over the database or the WCF services. The data is critical and changes quite often, so it's necessary to get the latest data on every request.
So when data changes in the database, the cache should be expired immediately. The queries are complex, with up to 14-15 joins...
What is the right way to do this, and which tools/frameworks should I use? I have heard of memcached... is it good?
Because your code sees all updates to the data, you can have a very effective caching layer, since the cache can be updated at the same time as the database (a sketch of this follows the list of products below).
With your requirement for absolute cache coherency, you need to make sure all servers see the same cache. There are two approaches you could take:
Have a cache server, using something like the ASP.NET cache, which the application servers talk to in order to get and update the data
Use a caching product to maintain the cache
If you use a caching product, there are a number on the market: memcached, GemFire, Coherence, Windows Server AppFabric Caching, and more.
The nice thing about AppFabric Caching (formerly the project known as Velocity) is that it is free with Windows Server and very .NET-friendly (although it is newer than some of the others, and so you might say less proven).
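Here is the update-both-at-once idea from the first sentence as a minimal C# sketch against the generic IDistributedCache abstraction; the OrderService shape, the key scheme, and the Oracle save method are hypothetical placeholders:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// Sketch of write-through coherency: because all updates flow through the
// service layer, the cache entry is replaced in the same operation as the
// database write, so no request sees stale data.
public class OrderService
{
    private readonly IDistributedCache _cache;
    public OrderService(IDistributedCache cache) => _cache = cache;

    public async Task UpdateOrderAsync(int orderId, string orderJson)
    {
        await SaveOrderToOracleAsync(orderId, orderJson);            // 1. write the database
        await _cache.SetStringAsync($"order:{orderId}", orderJson);  // 2. update the cache immediately
    }

    // Placeholder for the real WCF/Oracle data access.
    private Task SaveOrderToOracleAsync(int orderId, string orderJson) => Task.CompletedTask;
}
```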
Before adding a new tool, you should make sure you're correctly using all of the Oracle caching that is already available to you.
There's the buffer cache, the PL/SQL function result cache, the client query result cache, the SQL query result cache, and materialized views; bind variables will also help cache query plans.
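Two of those are easy to try directly from .NET. A minimal sketch using Oracle's managed driver (Oracle.ManagedDataAccess.Client); the table, columns, and connection string are illustrative:

```csharp
using System;
using Oracle.ManagedDataAccess.Client;

using var conn = new OracleConnection("<your-connection-string>");
conn.Open();

using var cmd = conn.CreateCommand();
// The RESULT_CACHE hint asks Oracle to cache this query's result set on the
// server; the :id bind variable lets the parsed plan be reused across calls.
cmd.CommandText = "SELECT /*+ RESULT_CACHE */ name, status FROM orders WHERE id = :id";
cmd.Parameters.Add(new OracleParameter("id", 42));

using var reader = cmd.ExecuteReader();
while (reader.Read())
    Console.WriteLine($"{reader.GetString(0)}: {reader.GetString(1)}");
```

The server result cache is invalidated automatically when the underlying tables change, which fits the requirement that every request sees the latest data.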

Scaling out SQL Server for the web (Single Writer Multiple Readers)

Has anyone had any experience scaling out SQL Server in a multi-reader, single-writer fashion? If not, can anyone suggest a suitable alternative, which they have experience with, for a read-intensive web application?
It probably depends on two things:
How big is each single write?
Do readers need real-time data?
A write will block readers while it is writing, but if each write is small and fast then readers won't notice.
If you offload, say, end-of-day reporting, then you batch your load onto a separate server, because those readers do not require real-time data. That makes sense.
But a write on your primary server must be synced to your offload secondary server, which will block there as part of the sync process anyway, and you add overhead to manage the sync.
Most apps are 95%+ reads anyway. For example, an update or delete is a read followed by a write.
My choice would probably be, given the low write volume and the fact that it's a web app, to scale up and stuff as much RAM as I could into the DB server, with separate disk paths for the data and log files of the database.
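That said, if you do offload readers to a secondary as discussed above, the application-side change is often just connection routing: reads go to the replica, writes to the primary. A minimal C# sketch (the connection strings are placeholders, and readers must tolerate the replication lag):

```csharp
using Microsoft.Data.SqlClient;

// Sketch of single-writer/multiple-readers routing at the connection level.
public static class ConnectionRouter
{
    private const string Primary   = "Server=sql-primary;Database=App;Integrated Security=true";
    private const string Secondary = "Server=sql-readonly;Database=App;Integrated Security=true";

    // All writes go to the single primary...
    public static SqlConnection ForWrite() => new SqlConnection(Primary);

    // ...while the 95%+ of traffic that only reads lands on the secondary.
    public static SqlConnection ForRead() => new SqlConnection(Secondary);
}
```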
I don't have any experience with scaling out SQL Server for your scenario.
However, for a read-intensive application I would look at reducing the load on the database and employing a caching strategy using something like Memcached or MS Velocity.
There are two approaches that I'm aware of (the second is sketched below):
Load the entire database into the cache and manage adding and updating items in the cache.
Add items to the cache only when they are requested, and remove them when a write operation is performed.
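The second approach is plain cache-aside. A minimal C# sketch against the generic IDistributedCache abstraction; the key scheme and the database methods are hypothetical placeholders:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// Sketch of approach 2: populate the cache on demand, evict on write.
public class ProductStore
{
    private readonly IDistributedCache _cache;
    public ProductStore(IDistributedCache cache) => _cache = cache;

    public async Task<string> GetProductAsync(int id)
    {
        var key = $"product:{id}";
        var cached = await _cache.GetStringAsync(key);
        if (cached != null) return cached;           // hit: no database round trip

        var fresh = await LoadFromDatabaseAsync(id); // miss: read once, then cache
        await _cache.SetStringAsync(key, fresh);
        return fresh;
    }

    public async Task UpdateProductAsync(int id, string json)
    {
        await SaveToDatabaseAsync(id, json);
        await _cache.RemoveAsync($"product:{id}");   // evict so the next read repopulates
    }

    // Placeholders for the real data access.
    private Task<string> LoadFromDatabaseAsync(int id) => Task.FromResult("{...}");
    private Task SaveToDatabaseAsync(int id, string json) => Task.CompletedTask;
}
```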
Some kind of replication would do the trick.
http://msdn.microsoft.com/en-us/library/ms151827.aspx
You of course need to change your app code.
Some people use partitioned tables, with different row ranges stored on different servers, united with views. This would be invisible to the app. I believe the term for this practice is federation.
With good design of your database, application, and server configuration (SQL Server particulars: the location of data, log, system, and SQL binaries, and tempdb), you should be able to handle a pretty good load. Try not to complicate things if you don't have to.
