Is Redis cache effective for a simple Windows application (management software)? - WPF

I want to implement caching in a simple desktop application built with WPF on .NET Framework. Could Redis be an effective cache, considering portability? Or is there a better option available?

If you need a cache for a simple solution, I suggest you use an in-memory cache. But if you are building a complex solution, you should definitely consider a distributed cache (e.g. a distributed Redis cache), and the reason is that a distributed cache:
- Is coherent (consistent) across requests to multiple servers.
- Survives server restarts and app deployments, because the cache usually lives in a different location (e.g. Azure).
- Doesn't use local memory.
- Is scalable.
For more details about the InMemory cache, read
For more details about the Distributed cache, read

I don't suggest using a Redis cache for a WPF desktop app. Why? Because it makes your application depend on a third-party service. It seems .NET has a solution for caching in memory and persisting to a file. You can find more about it on this LINK

Memurai could be an option. It is a native Windows port of Redis.

Related

Caching with Kotlin Exposed?

I'm wondering how to add caching when using the Kotlin-Exposed library for SQL access.
For experimentation I've written a small application using both Spring Boot + Hibernate and Ktor + Exposed.
I did some load testing, and when POSTing to both versions of the application, performance is quite similar, with the Ktor + Exposed version having the edge.
However, when GETting an existing record from both versions, the difference is shocking, especially as the database gets larger - and all the time is spent in Postgres.
My conclusion is that the difference can only be due to the Hibernate second-level cache that Spring Boot configures.
Seeing the value of caching for items that are repeatedly queried across multiple transactions/sessions, I'm wondering how to configure this in the lower-level Exposed framework?
At the moment, Exposed supports caching only at the per-transaction level.
Also, there is ImmutableCachedEntityClass, which allows you to define some entities (mostly dictionary-like ones) as cached and share them across the application.
You have to manage cache invalidation manually with the expireCache() function, or refresh entities with forceUpdateEntity.
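A minimal sketch of what that can look like (the Currencies table and Currency entity are made-up examples, and the exact imports depend on your Exposed version):

```kotlin
import org.jetbrains.exposed.dao.ImmutableCachedEntityClass
import org.jetbrains.exposed.dao.IntEntity
import org.jetbrains.exposed.dao.id.EntityID
import org.jetbrains.exposed.dao.id.IntIdTable

// Hypothetical dictionary-like table whose rows rarely change.
object Currencies : IntIdTable("currencies") {
    val code = varchar("code", 3)
    val name = varchar("name", 64)
}

// Entities backed by ImmutableCachedEntityClass are read through a shared,
// application-wide cache instead of being re-selected in every transaction.
class Currency(id: EntityID<Int>) : IntEntity(id) {
    companion object : ImmutableCachedEntityClass<Int, Currency>(Currencies)

    val code by Currencies.code
    val name by Currencies.name
}

// After the reference data changes outside the entity API, invalidate by hand:
// Currency.expireCache()
```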
Proper caching in the age of distributed systems is not easy to implement. You may use any caching library (e.g. Caffeine) and invalidate the cache when you know your data has changed (perhaps with the help of Exposed StatementInterceptors).
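As a rough sketch of that approach, assuming Caffeine is on the classpath (UserRecord and loadUserFromDb are hypothetical placeholders for whatever your Exposed query returns):

```kotlin
import com.github.benmanes.caffeine.cache.Caffeine
import java.time.Duration

data class UserRecord(val id: Long, val name: String)

// Read-through cache: entries are loaded on first access and kept for 5 minutes.
val userCache = Caffeine.newBuilder()
    .maximumSize(10_000)
    .expireAfterWrite(Duration.ofMinutes(5))
    .build<Long, UserRecord>()

fun findUser(id: Long): UserRecord =
    userCache.get(id) { key -> loadUserFromDb(key) }

// Call this from wherever you perform writes (or from a statement interceptor)
// so stale entries disappear as soon as the underlying row changes.
fun onUserUpdated(id: Long) = userCache.invalidate(id)

// Placeholder for the real Exposed query inside a transaction { ... } block.
fun loadUserFromDb(id: Long): UserRecord = TODO("run the Exposed query here")
```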
If you manage to implement a good caching solution, feel free to send a PR to the project.

Database for a Java application in a cluster

I'd like to play around with Kubernetes. I'm able to start a simple app, but now I'd like to design something more complex. Nevertheless, I can't figure out how to handle database access in such an architecture.
Let's say I have 100 pod replicas of some simple chat application. They all need to access the same database (or rather the same data set) and perform CRUD operations on it. How do I design this to keep the data consistent and eliminate the risk of deadlocks?
If possible, I'd like to use SQL-like database, so I can comfortably use hibernate and other tools I'm familiar with.
Is this even possible, or do I have to use a totally different approach? What is the name of the technology or architecture I'm searching for?
1) You can use a connection pool to reduce the number of database connections and make the connection settings more aggressive/elastic (see the sketch after this list);
2) Split your microservices in such a way that access to the persistence layer is its own microservice, exposing your CRUD operations over your storage (MySQL/RDBMS/NoSQL/etc.). That way you most likely don't need hundreds of replicas of your pods.
3) Deadlocks / locking strategies - as Andrew mentioned in the comments, this is related more to your software architecture than to K8s itself. There are plenty of ways to deal with it, each with its own pros and cons.
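For point 1, here is a minimal HikariCP sketch; the JDBC URL, credentials, and pool sizes are placeholder values, not recommendations:

```kotlin
import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource

// One small pool per pod; with 100 replicas this caps the total number of
// database connections at replicas * maximumPoolSize.
fun buildDataSource(): HikariDataSource {
    val config = HikariConfig().apply {
        jdbcUrl = "jdbc:postgresql://db-service:5432/chat"  // hypothetical K8s service name
        username = "chat_app"
        password = System.getenv("DB_PASSWORD") ?: ""
        maximumPoolSize = 10    // 100 pods * 10 = at most 1000 connections
        minimumIdle = 2         // shrink aggressively when a pod is quiet
        idleTimeout = 30_000L   // ms before an idle connection is retired
    }
    return HikariDataSource(config)
}
```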

To CouchDB or not to?

Note: I have investigated CouchDB for some time and need some real-world experiences.
I have an Oracle database for a fleet tracking service; some of its current stats are:
100 GB db
Huge insertion/sec (our received messages)
Reliable replication (via Oracle streams on 4 servers)
Heavy complex queries.
Now the question: Can CouchDB be used in this case?
Note: Why I thought of CouchDB?
I have read about its ability to scale horizontally very well. That's very important in our case.
Since it's schema-free, we can handle changes more easily; we have a lot of changes across different tables and stored procedures.
Thanks
Edit I:
I need transactions too, but I can tolerate other solutions. And a little delay in replication would be no problem, as long as it is guaranteed.
You are enjoying the following features with your database:
Using it in production
The data is naturally relational (related to itself)
Huge insertion rate (no MVCC concerns)
Complex queries
Transactions
These are all reasons not to switch to CouchDB.
Of course, the story is not so simple. I think you have discovered what many people never learn: complex problems require complex solutions. We cannot simply replace our database and take the rest of the month off. Sure, CouchDB (and BigCouch) supports excellent horizontal scaling (and cross-datacenter replication too!) but the cost will be rewriting a production application. That is not right.
So, where can CouchDB benefit you?
I suggest that you begin augmenting your application with CouchDB applications. Deploy CouchDB, import your data into it, and build non-mission-critical applications. See where it fits best.
For your project, these are the key CouchDB strengths:
It is a small, simple tool—easy for you to set up on a workstation or server
It is a web server. It integrates very well with your infrastructure and security policies.
For example, if you have a flexible policy, just set it up on your LAN
If you have a strict network and firewall policy, you can set it up behind a VPN, or with your SSL certificates
With that step done, it is very easy to access. Just make HTTP or HTTPS requests. Whether you are importing data from Oracle with a custom tool or using your web browser, it's all the same.
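To make that concrete, here is a rough sketch using the JDK's built-in HttpClient; the host, database name, and document are invented for illustration and assume a CouchDB instance on its default port 5984:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()

    // Create a document: PUT /<database>/<doc id> with a JSON body
    // (updating an existing document would also require its current _rev).
    val putDoc = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:5984/fleet/truck-42"))
        .header("Content-Type", "application/json")
        .PUT(HttpRequest.BodyPublishers.ofString("""{"lat": 51.5, "lon": -0.12}"""))
        .build()
    println(client.send(putDoc, HttpResponse.BodyHandlers.ofString()).body())

    // Read it back with a plain GET; any HTTP client (curl, a browser, an import
    // tool pulling from Oracle) talks to the same endpoints.
    val getDoc = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:5984/fleet/truck-42"))
        .GET()
        .build()
    println(client.send(getDoc, HttpResponse.BodyHandlers.ofString()).body())
}
```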
Yes! CouchDB is an app server too! It has a built-in administrative app to explore data, change the config, etc. (like a built-in phpMyAdmin). But for you, the value will be building admin applications and reports as simple, traditional HTML/JavaScript/CSS applications. You can get as fancy or as simple as you like.
As your project grows and becomes valuable, you are in a great position to grow, using replication:
Either expand the core with larger CouchDB clusters
Or, replicate your data and applications into different data centers, or onto individual workstations, or mobile phones, etc. (The strategy will be more obvious when the time comes.)
CouchDB gives you a simple web server and web site. It gives you a built-in web services API to your data. It makes it easy to build web apps. Therefore, CouchDB seems ideal for extending your core application, not replacing it.
I don't agree with this answer.
I think CouchDB suits the fleet tracking use case especially well, due to its distributed nature. Moreover, the unreliable nature of the GPRS connections used for transmitting position data makes the offline-first paradigm of CouchApps a perfect partner for your application.
For uploading data from a truck, the insertion rate can benefit greatly from CouchDB replication and bulk inserts, especially if performed on SSD-based CouchDB hosting.
For downloading data to a truck, CouchDB provides filtered replication, allowing each truck to download only the data it really needs instead of the whole database.
Regarding complex queries, NoSQL databases are more flexible and can perform much faster than relational databases. It's only a matter of structuring and querying your data sensibly.

Access one database from multiple ORMs :: Caching issue

I know this is not a good idea, and the best approach would be to let the applications talk via web services. But I have a situation where a legacy application accesses a database with an ORM, and I need to access the same database from a new .NET application using Fluent NHibernate.
So the question is: what problems will this cause, and how do I solve them?
I guess the main issue is caching. I need to disable caching in one of the applications (which would be the new app).
So how can I disable caching in NHibernate?
Is there anything else that should be careful about?
Caching is not enabled by default in NHibernate.
One thing you need to consider is how to handle concurrent updates. Suggested read: http://nhibernate.info/doc/nh/en/index.html#transactions-optimistic

Simplest failover support for a standard RDBMS?

Currently I am working on failover support of an existing application.
The application uses Postgres to store data but does not use any special features (views/triggers, etc.). The database is more of a configuration store than a real data store. When the application starts, it loads the data into memory and seldom goes back to the database, only when the configuration changes. Configuring a Postgres failover solution for this simple task feels like overkill.
Is there any lightweight database which has built-in failover support and is simple to configure and use in production? Most of my data model is a single table, and there are only about 5 transactions per minute.
Berkeley DB is a very simple key/value store; it is probably perfectly adequate for your application, and it has support for hot failover.
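For illustration, a bare-bones sketch of Berkeley DB Java Edition used as a key/value configuration store; the directory name and keys are made up, and a highly-available setup would use ReplicatedEnvironment with a ReplicationConfig instead of a plain Environment:

```kotlin
import com.sleepycat.je.Database
import com.sleepycat.je.DatabaseConfig
import com.sleepycat.je.DatabaseEntry
import com.sleepycat.je.Environment
import com.sleepycat.je.EnvironmentConfig
import java.io.File

fun main() {
    // Environment = the on-disk home directory holding all databases
    // (assumes the "config-store" directory already exists).
    val envConfig = EnvironmentConfig()
    envConfig.setAllowCreate(true)
    val env = Environment(File("config-store"), envConfig)

    val dbConfig = DatabaseConfig()
    dbConfig.setAllowCreate(true)
    val db: Database = env.openDatabase(null, "app-config", dbConfig)

    // Store one configuration entry and read it back.
    db.put(null, DatabaseEntry("poll.interval".toByteArray()), DatabaseEntry("30s".toByteArray()))
    val value = DatabaseEntry()
    db.get(null, DatabaseEntry("poll.interval".toByteArray()), value, null)
    println(String(value.data))

    db.close()
    env.close()
}
```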
