How often to fetch data from the server - angularjs

I'm pretty new to angularjs and I'm writing my first application.
I'd like to know if there is a specific best practice for how often I should pull data from the server when I have to deal with a big dataset. Is it better to pull one big JSON dataset in a single call to the server, or is it advisable to fetch small batches of data with multiple requests?
Let me explain. My application currently fetches all the JSON data it needs from the server when the main page loads. It's a lot of stuff (about 3 MB). After that it never fetches any other data; I can apply filters to the data and sort it, and it's all done client-side with no interaction with the server.
Now, is it worth fetching a small amount of data at the beginning and then, based on the applied filters, re-fetching data from the server?
Thanks!

It all depends on your specific requirements and usage patterns. If you are worried about load times, there are patterns similar to the ones used with jQuery.dataTables, which allow for very quick loading of data by relying on server-side filtering.
If you have good cacheability (the data is the same for all users) and no worries about long load times, go for the eager load (and use a filesystem-based cache with nginx serving the cached data).
In general, having a local copy of the whole database is only useful when you can't do the work server-side, as an RDBMS is much better at data analysis than any JavaScript implementation.
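To make the server-side-filtering option concrete, here is a minimal AngularJS sketch. The /api/records endpoint and its filter/page parameters are hypothetical; the point is that each filter change triggers a small request instead of loading the whole 3 MB dataset up front.

```javascript
// Minimal sketch of fetching small, filtered batches instead of one big payload.
// The endpoint and parameter names are assumptions, not part of any real API.
angular.module('app').factory('RecordService', ['$http', function ($http) {
  return {
    // Ask the server only for the slice of data the current view needs.
    query: function (filters, page, pageSize) {
      return $http.get('/api/records', {
        params: { filter: filters, page: page, pageSize: pageSize }
      }).then(function (response) {
        return response.data;
      });
    }
  };
}]);

angular.module('app').controller('RecordsCtrl', ['$scope', 'RecordService',
  function ($scope, RecordService) {
    $scope.records = [];
    // Re-fetch whenever the user changes a filter.
    $scope.applyFilter = function (filters) {
      RecordService.query(filters, 1, 50).then(function (data) {
        $scope.records = data;
      });
    };
  }]);
```

With this shape the server (and its database) does the filtering and sorting, and the client only ever holds the slice it is displaying.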

Related

Best approach to interact with the same database table from more than one microservice

I have a situation where I need to add/update/retrieve records in the same database table from more than one microservice. I can think of the three approaches below; please help me pick the most suitable one.
Having a dedicated microservice, say database-data-manager, which interacts with the database to add/update/retrieve data, while all the other microservices call the endpoints of database-data-manager when they need data.
Having a Maven library called database-data-manager that all the other microservices use for their db interactions.
Having the same code (copy-paste) in all the applications to take care of the db interactions.
Approach 1 seems expensive, as we need to host a dedicated application for a basic piece of functionality.
Approach 2 would reduce boilerplate code, but library versions are difficult to manage.
Approach 3 would cause a lot of boilerplate code and maintenance effort to keep similar code in all the microservices.
Please suggest. Thanks in advance.
A strict definition of "microservice" would include the fact that it's essentially self-contained, which covers any data storage it might need. So what you really have is a collection of services talking to a common database. Semantics aside...
Option 1 sounds like it's on the right track: you need something sitting between the microservices and the database. This could be a cache or a dedicated proxy service. Say you have an old legacy system which is really fragile; controlling data in and out through a more capable service acting as a proxy is a well-proven pattern.
Such a proxy might do a bulk read of the database, hold the data in memory to service high volumes of reads, and handle updates.
Updating is non-trivial, and there are various options:
The service's cached data becomes the pseudo-master: updates are applied to the cached data first, then go into a queue to be applied to the underlying database.
The service's cached data is used only for reads; updates are applied to the database first, and only if the update succeeds is it then applied to the cached data.
Option 1 is great for performance, on the assumption that the proxy service is really good at managing the data and satisfying service requests. But, depending on how you implement it, it might be vulnerable to outages, in which case you could lose any data that has made it into the cache but not into the pipeline that gets it into the database.
Option 2 is good for ensuring a solid master set of data, but there's the risk that consuming services read cached data that is already out of date because it's just being updated in the database.
In terms of implementation, a queue of some sort to handle getting updates to the database is something you may want to consider, as it gives you a place to control how updates (and which updates) reach the database.
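As a rough illustration of the two update options (not a prescription for any particular stack), here is a JavaScript sketch of a proxy service holding an in-memory copy. The `db` object and `writeQueue` are placeholders for a real database driver and a real queue or message broker.

```javascript
// Sketch only: shows where the cache and the database sit in each update flow.
const cache = new Map();   // in-memory copy held by the proxy service
const writeQueue = [];     // stand-in for a durable queue / message broker

// Option 1: cache is the pseudo-master, database is updated asynchronously.
function updateWriteBehind(id, record) {
  cache.set(id, record);                        // readers see the change immediately
  writeQueue.push({ id: id, record: record });  // flushed to the database later
}

// Option 2: database is the master, cache is refreshed only after a successful write.
async function updateWriteThrough(id, record) {
  await db.update(id, record); // placeholder for the real database call
  cache.set(id, record);       // cache only ever reflects committed data
}

function read(id) {
  return cache.get(id);        // both options serve reads from the cache
}
```

The write-behind variant is where the outage risk described above lives: anything still sitting in `writeQueue` when the service dies is lost unless the queue itself is durable.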

Any idea about keeping data locally with ionic framework

I am using the Ionic framework for my mobile app and I would like to add functionality where, when the user requests data (JSON) from the database (through a REST API), it is kept in local storage on the device. Whenever the user comes back, the application uses the data from local storage instead.
I have read that there are many options for this ($localstorage, SQLite), but I'm not sure which one is better in terms of performance and ease of coding.
The data is text only, and it would be around 2,000 rows per query.
For performance, I would suggest going with SQLite; your data will also be stored more securely in your app.
You can use localStorage for temporary data that is not very important, as localStorage data can also be deleted by activities of the device's internet browser.
With regards to performance, I suggested SQLite because it does not block the DOM or your view while a query is running, although getting data out of it takes a few milliseconds longer than localStorage. localStorage, on the other hand, completely blocks the DOM while being queried, but is marginally faster than SQLite at fetching data.
localStorage: its data is not stored permanently; data in localStorage is unreliable and can be wiped for various reasons. Besides, it has a storage limit of 5 to 10 MB! It is, however, better in terms of performance than the option below.
PouchDB: if you have installed the SQLite Cordova plugin, PouchDB will automatically use SQLite. It's a full database that overcomes both of the above limitations by storing data in a specific location in the device's storage, so you effectively get unlimited storage. Apart from that, it can handle complex queries easily, whereas localStorage is a simple key-value store. Since you are installing the Cordova plugin, it also ensures you have full cross-platform support.
The best option is to store the data in a SQLite db, because the data you want to store is quite large. The reason is simple: CRUD operations are easy to code and the performance is good. Moreover, when you plan your architecture you should think of all possible expansions, whereas local storage can only hold a limited amount of data.
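For reference, here is a small sketch of both storage options, assuming the Cordova SQLite plugin (window.sqlitePlugin) is installed; the table and key names are made up.

```javascript
// localStorage: synchronous key-value store, fine for small temporary data.
function cacheRowsInLocalStorage(rows) {
  localStorage.setItem('cachedRows', JSON.stringify(rows));
}
function readRowsFromLocalStorage() {
  var raw = localStorage.getItem('cachedRows');
  return raw ? JSON.parse(raw) : [];
}

// SQLite via the Cordova plugin: asynchronous, so queries don't block the view.
var db = window.sqlitePlugin.openDatabase({ name: 'app.db', location: 'default' });

function cacheRowsInSqlite(rows) {
  db.transaction(function (tx) {
    tx.executeSql('CREATE TABLE IF NOT EXISTS rows (id INTEGER PRIMARY KEY, body TEXT)');
    rows.forEach(function (row) {
      tx.executeSql('INSERT OR REPLACE INTO rows (id, body) VALUES (?, ?)',
        [row.id, JSON.stringify(row)]);
    });
  });
}

function readRowsFromSqlite(callback) {
  db.transaction(function (tx) {
    tx.executeSql('SELECT body FROM rows', [], function (tx, result) {
      var rows = [];
      for (var i = 0; i < result.rows.length; i++) {
        rows.push(JSON.parse(result.rows.item(i).body));
      }
      callback(rows);
    });
  });
}
```

The SQLite calls are asynchronous, which is why they don't block the view; localStorage is synchronous and simpler, but everything has to fit inside its 5-10 MB quota as a single serialized string.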

Does ATK4 have caching support?

I've gone through the documentation on ATK4, trying to find a reference point on how to handle caching - partial or full page.
There seems to be no entry on the matter, which is strange for a framework that is built for scalability. Is there a way to cache DB queries, pages, views, etc.?
Thanks for your question. (I'm the author of ATK4).
In my opinion, scalability and caching are two different topics and can be addressed separately. The framework approaches scalability by optimising queries and minimising the load of each request, and by supporting designs where multiple nodes can be used to seamlessly scale your application horizontally. There is also the option of adding a reverse proxy to cache pages before they even hit the web server.
Agile Toolkit has support for two types of caching:
View-level caching
As you'll read in the documentation on the object render tree, the framework initialises and renders objects recursively, so if you add caching support at the Page level, you can intercept rendering and retrieve the page's contents from the cache. You can also cache individual views.
Here is a controller which can be used to implement caching for you:
https://github.com/agile55/viewcache
Model level caching
Sometimes you will want to cache your model data so that, instead of retrieving it from the slow database, you can quickly fetch it from a faster source. Agile Toolkit has support for multiple model data sources, where a cache is queried first and refreshed if it doesn't contain the data. Here you can find more information or ask further questions:
http://book.agiletoolkit.org/model/core.html#using-caching
http://forum.agiletoolkit.org/t/is-setcache-implemented/62
Other Ideas
Given the object-oriented nature of ATK4, you can probably come up with new ways to cache data. If you have any interesting ideas, our community would be glad to hear about them.

Using application's internal cache while working with Cassandra

As I've been working with traditional relational databases for a long time, moving to NoSQL, especially Cassandra, is a big change. I usually design my applications so that everything in the database is loaded into the application's internal caches on startup, and if there is any update to a database table, its corresponding cache is updated as well. For example, if I have a table Student, on startup all data in that table is loaded into a StudentCache, and when I want to insert/update/delete, I call a service which updates both of them at the same time. The aim of my design is to avoid selecting directly from the database.
In Cassandra, as the idea is to build tables containing all the needed data so that joins are unnecessary, I wonder if my favourite design is still useful, or whether it is more effective to query data directly from the database (i.e. from one table) when required.
Based on your described use case, I'd say that querying data as you need it avoids storing data you don't need. Besides, what if your dataset is 5 GB? Are you still going to load the entire dataset?
Maybe consider a design where you don't load all the data on startup, but load it as needed, then store it and check this store before querying again - which is exactly what a cache does!
Cassandra is built to scale, and your design can't handle scaling; you'll reach a point where your dataset is too large. Based on that, you should think about the trade-off: lots of on-the-fly querying vs storing everything in the client. I would advise direct queries, but store data when you do carry out a query - don't discard it and then run the same query again!
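A minimal sketch of that read-through approach, with the driver calls (`fetchStudentFromCassandra`, `updateStudentInCassandra`) left as placeholders for whatever client your application actually uses:

```javascript
// Load rows only when they are first needed, then reuse them from the cache.
const studentCache = new Map();

async function getStudent(id) {
  if (studentCache.has(id)) {
    return studentCache.get(id);            // served from the cache, no query
  }
  const student = await fetchStudentFromCassandra(id); // query only on a miss
  studentCache.set(id, student);
  return student;
}

async function updateStudent(id, fields) {
  const updated = await updateStudentInCassandra(id, fields); // database first
  studentCache.set(id, updated);            // keep the cached copy in sync
  return updated;
}
```

Startup stays cheap, and memory only grows with the rows you have actually touched.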
I would suggest querying the data directly, as keeping all the data in the application ties the application's performance to the size of the input. That might be acceptable if you know the amount of data will never exceed your target machine's memory.
Should you decide that this limit needs to change (upwards!), you will be faced with a problem. This approach is fast when it comes to searching (assuming you sort the result at startup), but it pretty much kills maintainability.
Your former favourite approach is, however, still useful should you choose to accept those constraints.

Is using JSON data better than querying the database when there is no security issue for the data?

For my new project I'm planning to use JSON data stored as a text file rather than fetching data from the database. My idea is to save a JSON file on the server whenever the admin creates a new entry in the database.
As there is no security issue, will this approach make user access to the data faster, or shall I go with the usual database queries?
JSON is typically used as a way to format the data for the purpose of transporting it somewhere. Databases are typically used for storing data.
What you've described may be perfectly sensible, but you really need to say a little bit more about your project before the community can comment on your approach.
What's the pattern of access? Is it always read-only for the user, editable only by site administrator for example?
You shouldn't worry about performance early on. Worry more about ease of development, maintenance and reliability, you can always optimise afterwards.
You may want to look at http://www.mongodb.org/. MongoDB is a document-centric store that uses JSON as its storage format.
JSON in combination with jQuery is a great option for fast, smooth page updates, but ultimately it still comes down to the same database query.
Just make sure your query is efficient. Use a stored proc.
JSON is just the way the data is sent from the server (a Web controller in MVC, or code-behind in standard C#) to the client (jQuery or JavaScript).
Ultimately the database will be queried the same way.
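To illustrate the point, here is a tiny Node/Express-style sketch (the stack is chosen purely for illustration, and `db.query` is a placeholder): JSON is only the serialization format on the wire, and the database is still hit on each request.

```javascript
// JSON as transport: the handler still runs the same database query either way.
const express = require('express');
const app = express();

app.get('/api/entries', async function (req, res) {
  const entries = await db.query('SELECT * FROM entries'); // placeholder DB call
  res.json(entries); // JSON is just how the result travels to the client
});

app.listen(3000);
```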
You should stick with the classic method (database), because you'll face many problems with concurrency and with having too many files to handle.
I think you should go with the usual database query.
If you use JSON files, you'll have to keep them in sync with the DB (which means extra work) and you'll face I/O problems if your site is very busy.
