Preloading the Oracle Jet Offline Persistence Toolkit cache - mobile

What is the best way to 'preload' my data model for the Oracle Jet (OJET) Offline Persistence Toolkit (OPT)?
I know certain frequently used resources and queries that will be required when offline. Also, while online, a user can select specific 'projects' they want preloaded for offline work.
Is the best approach with OPT to write JavaScript that identifies and calls all the necessary REST GET operations, for each required resource and query, so that the offline cache is prepopulated?
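Something like the sketch below is what I have in mind. It is a minimal example only: it assumes the relevant scopes are already registered with the toolkit's persistenceManager (so responses are cached as they pass through the fetch shim), and the endpoint paths and project list are placeholders.

```typescript
// Sketch only: endpoint paths and selectedProjects are placeholders, and the
// scopes are assumed to already be registered with the OPT persistenceManager,
// so each GET response made while online is captured into the offline cache.
const staticEndpoints = [
  '/api/lookups/statuses',
  '/api/lookups/categories',
];

async function preloadOfflineCache(selectedProjects: string[]): Promise<void> {
  const urls = [
    ...staticEndpoints,
    ...selectedProjects.map((id) => `/api/projects/${id}`),
    ...selectedProjects.map((id) => `/api/projects/${id}/tasks`),
  ];
  // Each GET goes through the toolkit's fetch shim while online,
  // which stores the response for later offline use.
  await Promise.all(urls.map((url) => fetch(url)));
}
```

Is that the intended usage, or is there a built-in mechanism for this?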

Related

Restricting data in PouchDB

I have an offline ready application that I am currently building in electron.
The core requirements are that all data is restricted (you have to be a user to read or write) and that, within that data, some is further restricted to a specific user (account information, messages, etc.).
I do not want to replicate any data offline that a user should not have access to (all local data can be seen using the devtools regardless of restriction), so essentially I only want to sync data to PouchDB's offline store if that user has access to it, plus the data that all users have access to.
Now I have read the following posts/guides but I am still a little confused.
https://pouchdb.com/2015/04/05/filtered-replication.html
https://www.joshmorony.com/creating-a-multiple-user-app-with-pouchdb-couchdb/
Restricting Access to local PouchDB
From my understanding, filtered replication is a poor choice performance-wise, even though it could do what I want.
Setting up a proxy would work, but then it essentially becomes a REST API and the data synchronization falls apart.
The final option, which I think is what I want, is to have a database for every user containing their private information, plus additional databases holding the information that is available to every user.
The only real question I have with this approach is how to handle data that is private but shared between two users (messages, etc.).
I am after an overarching view of how the data should be stored rather than code examples; I'm really struggling with the conceptual architecture of the application.
There are many solutions to your problem. One solution looks very promising: IBM Cloudant has started work on Cloudant Envoy, a proxy simulating the CouchDB interface instead of a simple REST API. You can read more about it on the site for Envoy over at ibm.com. A custom replicator for PouchDB is also available on Github.
There's also a blog post on Medium.com on this.
The idea is the same as the much older Couchbase Sync Gateway. Although Couchbase has common roots with CouchDB, I have not tracked if they still support replication with CouchDB.
The easiest way to start would be to create a single database per user on the server, and a common database that you just pull the shared data from. Let me know if you need more info on this solution.
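As a concrete illustration of that starting point, the local setup could look roughly like the sketch below. The database names and remote URL are placeholders; it assumes a per-user database with two-way sync plus a shared database that is only pulled.

```typescript
import PouchDB from 'pouchdb';

// Placeholders: adjust names and the remote URL to your CouchDB/Envoy setup.
const REMOTE = 'https://couch.example.org';
const userId = 'alice';

// Private per-user database: live, two-way replication.
const privateDb = new PouchDB('private');
privateDb.sync(`${REMOTE}/userdb-${userId}`, { live: true, retry: true });

// Shared database: pull-only replication, since every user may read it
// but writes are made on the server side.
const sharedDb = new PouchDB('shared');
sharedDb.replicate.from(`${REMOTE}/shared`, { live: true, retry: true });
```

One common approach for data that is private but shared between two users (such as messages) is a small dedicated database per conversation that both participants replicate, which keeps it out of everyone else's local store.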

Data layer architecture for WPF Rich-Client?

Background
I need to build a Rich-Client application using .NET. The app needs to handle TreeView and TableView controls with about 100,000 entities. The GUI is built with WPF, very likely using Telerik controls. My question is about the general architecture of the data layer. I have some coarse ideas of the concepts, but would highly appreciate your comments, thoughts, and hints on which technology I should dig deeper into. Here are my thoughts:
Conceptual Layers
Presentation Layer
Just the WPF controls. I'd need performant synchronization of different controls on property changes, but I don't anticipate major problems here.
Business Layer
Creating views (object selections to be displayed in the controls), CRUD operations (modifications made directly on the POCOs), and searching (global search, but also scoped to a view).
Repository
Holds POCOs in an entity map and decides whether to load from the persistence store.
Persistence-Manager
I'm thinking of using a LocalDB or a simple key-value store as a (persistent) client cache. The Persistence-Manager would try to get an object from the local store and otherwise fetch it from the server, persisting the result to the client cache (sketched below, after the layer list). The data would be available via a web service; I'm happy to give WCF Data Services a try.
Persistence-Layers
There would be two parts:
- Local DB connection using an ORM like EF or OpenAccess; or a simple key-value store
- HTTP connection to consume the Web-Service
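To make the Persistence-Manager's cache-or-fetch lookup concrete, here is a rough sketch. The real application is .NET, so treat this as illustrative pseudocode in TypeScript; the store and service interfaces are made up for the example.

```typescript
// Rough sketch of the Persistence-Manager lookup described above.
// The interfaces below are invented for illustration only.
type Entity = Record<string, unknown>;

interface LocalStore {
  get(id: string): Promise<Entity | undefined>;
  put(id: string, entity: Entity): Promise<void>;
}

interface RemoteService {
  fetchById(id: string): Promise<Entity>;
}

class PersistenceManager {
  constructor(private cache: LocalStore, private service: RemoteService) {}

  // Try the local client cache first; otherwise fetch from the web service
  // and write the result back into the cache.
  async load(id: string): Promise<Entity> {
    const cached = await this.cache.get(id);
    if (cached !== undefined) {
      return cached;
    }
    const fresh = await this.service.fetchById(id);
    await this.cache.put(id, fresh);
    return fresh;
  }
}
```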
Questions
In a layering like this, how would lazy loading of referenced objects work? I know EF and other ORMs take care of many of these issues, but I don't yet see how to plug those frameworks into the layering above. Also, where should changes be tracked? Where should consistency be ensured when deleting objects (e.g. deleting references to those objects as well)?
I would eager-load whole views (hierarchical structures) and use LINQ to Objects on those collections of POCOs, maybe implementing a simple inverted index if LINQ performance becomes an issue. But how should I best implement global searches on the server? Are there libraries ("LINQ to OData") available?
What do you think about a fully "disconnected" scenario: holding all the data a user needs in a local database, with sync on start/stop and on user request? I could use an ORM directly on the local DB, with a good chance of avoiding the headaches of implementing many consistency features by hand (using the layering above).
Or, in contrast, forget about the local database and batch eager-load most of the needed data. Here I'm concerned about the performance of the web services (I have no experience with OData or WCF). I've built an app using Redis and Python that loads about 200,000 business objects quite fast (< 1 min) to the client (the objects are already cached in serialized form in Redis).
I'll certainly do some prototyping and benchmarking, but to get a good start, any thoughts and recommendations are highly appreciated.
Cheers,
Jan

Pluggable database interface

I am working on a project where we are scoping out the specs for an interface to the backend systems of multiple wholesalers. Here is what we are working with:
Each wholesaler has many products, upwards of 10,000, and each wholesaler has customized prices for its products.
The list of wholesalers being accessed will keep growing, so potentially thousands of wholesalers could be accessed by the system.
Wholesalers are geographically dispersed.
The interface to this system will allow the user to select the wholesaler they wish and browse their products.
Product price updates should be reflected on the site in real time. So, if the wholesaler updates the price it should immediately be available on the site.
System should be database agnostic.
The system should be easy to setup on the wholesalers end, and be minimally intrusive in their daily activities.
Initially, I thought about creating a database for each wholesaler on our end, but with potentially thousands of wholesalers in the future, is this the best option in terms of performance and storage?
Would it be better to query the wholesalers database directly instead of storing their data locally? Can we do this and still remain database agnostic?
What would be the best technology stack for such an implementation? I need some kind of ORM tool.
Java based frameworks and technologies preferred.
Thanks.
If you want to create software that can switch databases, I would suggest using Hibernate (or NHibernate if you use .NET).
Hibernate is an ORM that is not tied to a specific database, which makes switching the DB very easy. It is proven in large applications and is well integrated with the Spring Framework (but can be used without Spring, too). Spring.NET is the equivalent if you are using .NET.
Spring is a good technology stack for building large, scalable applications (it provides an IoC container, a database access layer, transaction management, AOP support, and much more).
Wikipedia gives you a short overview:
http://en.wikipedia.org/wiki/Hibernate_(Java)
http://en.wikipedia.org/wiki/Spring_Framework
Would it be better to query the wholesalers database directly instead of storing their data locally?
This depends on the availability and latency of access to the remote data. Databases themselves offer several ways to keep multiple server instances in sync. Ask yourself what should happen if a wholesaler database goes (partly) offline. Maybe not all data needs to be duplicated.
Can we do this and still remain database agnostic?
Yes; see my point above about the ORM (N)Hibernate.
What would be best technology stack for such an implementation?
"Best" depends on your requirements. I like Spring. If you go with .Net the built-in ADO.NET Entity Framework might be fit, too.

Any recommendations for an effective way to sync data from one database to other apps' databases?

Here's my problem. I built a web app and naturally kept the data in a database that describes that app's domain. Afterwards, I built another web app for the same organization and used a separate database to describe that app's domain and store its data... and naturally a couple more projects came up, and for each app I've isolated its data in a single database. Development-wise I think this is fine, as I can maintain changes to the data structure and data within each app's own database.
Since these apps belong to the same organization, there tends to be plenty of data replicated between them, like department names, job titles, shop names, etc. Most of these tables hold the same data, but they are not exactly the same in each database and are not always used by all of the apps. Changes to this data, though, need to be applied in all the apps (sometimes in different ways), which creates a growing management hassle.
So I've been thinking of a way to get some synchronization between the data. I want easier management - update in one app (or a central app) and update all the databases as needed by each app - and also a better way to share data between apps (for example, mashing up data from different apps in a new app to allow specific analysis). Most of the data I'm referring to is used as constraints more than being core domain concepts, describing the organization rather than a particular domain.
I'm looking for opinions on some ways to get this done.
My first idea was to take common data structures, like the department names table I mentioned, and put them in a core database. Any updates to the data would be done in this database, through a dedicated web app, and I'd apply some sort of Observer or Publisher/Subscriber pattern for these changes: on a change, the core app would notify the observing apps (through their dedicated web services) that the change occurred, allowing each app to grab the new data and use it as it needs. GUIDs could be used as a reference to identify the same data throughout the apps. I could also build web services for read and search operations that don't need to live in a specific app's database but could be useful to it.
A second idea would be for each app to manage its own data, with the apps observing one another. A change in one could notify the others that share the same data structure that the change occurred. I could still use GUIDs and even build services on any of the apps. I think this would be less excessive in terms of data duplication, but it might be harder to manage, as each app would eventually be coupled to other apps, and I would somehow have to distribute responsibilities as to which app controls what information.
I'm really curious whether something along these lines of data distribution and syncing would work and even be recommended. Opinions and other ideas are more than welcome!
What you describe here is a typical case for a "Master Data Management" system. EAI vendors (Oracle, TIBCO, IBM) offer such products. They resemble your first solution, being centralised databases with synchronization processes, detecting changes in external data sources, grabbing the changes and synchronizing data out to other external databases. They also provide a user interface to change master data directly.
MDM software is expensive, but you can implement a custom solution which will be - at least initially - cheaper than purchasing one. Both of your solutions make technical sense, but there is a difference in their manageability.
The first one is better if you can dedicate a responsible person/organization to take care of it and the business owners of your services can agree on making changes via this new centralised system.
The second solution shares the responsibility between the service owners. The hard task here is to identify the owner of each type of information (business object).
I cannot advise a solution without a deeper knowledge of your systems and organizations, but I hope I could give some ideas.
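As a rough illustration of the first (centralised) approach, the notification step could look like the sketch below. The endpoint URLs, payload shape, and subscriber list are all hypothetical; a real MDM-style setup would add authentication, retries, and a way for apps to re-request missed changes.

```typescript
// Minimal sketch of the "central master-data app notifies observers" idea.
// Endpoint names, payload shape, and the subscriber list are hypothetical.

interface MasterDataChange {
  entity: string;      // e.g. "department"
  guid: string;        // shared identifier across all apps
  action: "created" | "updated" | "deleted";
  payload: Record<string, unknown>;
}

// Each observing app registers a webhook URL with the central app.
const subscribers: string[] = [
  "https://app-one.example.org/hooks/master-data",
  "https://app-two.example.org/hooks/master-data",
];

// Called by the central app whenever master data changes; each subscriber
// decides how (or whether) to map the change onto its own schema,
// using the GUID to find the matching row.
async function publishChange(change: MasterDataChange): Promise<void> {
  await Promise.allSettled(
    subscribers.map((url) =>
      fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(change),
      })
    )
  );
}
```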

simplest fail over support for standard rdbms?

Currently I am working on failover support for an existing application.
The application uses Postgres to store data but does not use any special features (views, triggers, etc.). The database is more of a configuration store than a real data store: when the application starts it loads the data into memory, and it seldom goes back to the database, only when the configuration changes. Setting up a Postgres failover solution for this simple task feels like overkill.
Is there any lightweight database with built-in failover support that is simple to configure and use in production? Most of my data model is a single table, and there are only about 5 transactions per minute.
BerkeleyDB is a very simple key/value store; it is probably perfectly adequate for your application, and it has support for hot failover.
