Performance AngularJS

I'm building a Logs page with a table. Which option is the better way to increase performance?
Option 1: I fetch all rows from the database and filter them in AngularJS.
Option 2: When the user types a new filter, I send an HTTP request and select from the database.
I would prefer the first option, but I think it will lag because I may have around 50,000 rows.

Selecting from the database is much faster.
Databases are designed for very fast querying of data; Angular filters are reasonably fast, but they are nowhere near as good as a database, which is built specifically to run queries quickly.
Also, if you have 50,000 rows, each with around 200 bytes of data, you'll be transferring about 10 megabytes per load. That's a lot, and browsers (especially on mobile) don't handle filtering that much data in memory well.
So use the database whenever you can, and only use Angular filters when you already have the data client-side.
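As a rough sketch of the second option (the '/api/logs' endpoint and its 'filter'/'page'/'pageSize' parameters are hypothetical, not from the question), the client can pass the filter to the server instead of filtering 50,000 rows in the browser:

```javascript
// Minimal AngularJS sketch: let the database do the filtering and only
// ship the matching rows to the browser. Endpoint and parameter names
// are assumptions for illustration.
angular.module('logsApp', [])
  .factory('LogService', function ($http) {
    return {
      query: function (filter, page, pageSize) {
        return $http.get('/api/logs', {
          params: { filter: filter, page: page, pageSize: pageSize }
        }).then(function (response) {
          return response.data; // only the matching rows come over the wire
        });
      }
    };
  })
  .controller('LogsCtrl', function ($scope, LogService) {
    $scope.rows = [];
    // Called when the user types a new filter (e.g. via ng-change on the input).
    $scope.applyFilter = function (filter) {
      LogService.query(filter, 1, 100).then(function (rows) {
        $scope.rows = rows;
      });
    };
  });
```

In practice you would also debounce the input (e.g. with ng-model-options) so that every keystroke doesn't trigger a request.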

Related

How to: Spring Boot application with a large database and fast access to a small subset of data

This is a request for a general recommendation about how to organize data storage in my case.
I'm developing a Spring Boot app in Java to collect and save measurements, and to provide access to the saved data via a REST API. I expect around 10 million measurements per hour and I need to keep history for the most recent 2-3 months, so the total number of measurements stored can reach tens of billions. The data model is not sophisticated; there will be around ten tables. No editing is planned, only cleaning of obsolete data and vacuuming. I'm planning to use Postgres as the DBMS.
Once stored, the data can be retrieved as such (using temporal or spatial filters) or used to create aggregated data products. Despite performance tuning, indexes, and optimized queries, data retrieval can take significant time, but this is for research purposes and I understand the price of having that many records. Up to this point things are clear.
On the other hand, the most recent measurements (e.g. those collected during the last ten minutes) must be accessible immediately, or at least as fast as possible. This data must be served by the REST API and shown in a front-end app as graphs updated in real time. Obviously, retrieving the last few minutes of data from a table with billions of records will take an unacceptably long time for that purpose.
What is a typical solution for such a situation?
So far I have come up with the idea of using two datasources: Postgres for history and an in-memory H2 database for keeping the recent data ready to be served. I would thus have a small DB duplicating the recent data in memory. With this approach I expect to re-use my queries and entity classes. Does this seem OK?
I found a multi-datasource solution that perfectly matches my case. The author of this article is dealing with a project "where an in-memory database was needed for the high performance and a persistent database for storage".

Creating initial CouchDB database with millions of documents

I have a Postgres database with many millions of records that I want to migrate to CouchDB. I know how I want to represent the records as documents: each document will have 9 items (4 integers, 4 text strings, and a date string).
My question is: do I really need to write something that will make millions and millions of POST requests to create my initial database from the existing data? I understand that CouchDB is generally fast, but doing this over HTTP strikes me as extremely inefficient and time-consuming, even over localhost.
HTTP is the only API that I see, so is this normally what is done when someone has to create a database with a huge number of initial documents?
Thanks
Yes, it is done via HTTP. It is not inefficient though, since you can create many documents in one request by using the _bulk_docs API.
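As a sketch of what such a migration script might look like (the database URL, batch size, and field names are placeholders, not from the question), each request inserts a whole batch of documents:

```javascript
// Node.js sketch: insert documents in batches via CouchDB's _bulk_docs API
// (POST /{db}/_bulk_docs with a {"docs": [...]} body). Assumes Node 18+ for
// the built-in fetch; authentication is omitted for brevity.
const BATCH_SIZE = 2000; // a few thousand docs per request is a placeholder choice

async function bulkInsert(docs) {
  const response = await fetch('http://localhost:5984/mydb/_bulk_docs', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ docs: docs })
  });
  if (!response.ok) {
    throw new Error('Bulk insert failed with status ' + response.status);
  }
  return response.json(); // one status entry per document
}

async function migrate(allRows) {
  for (let i = 0; i < allRows.length; i += BATCH_SIZE) {
    // Map each Postgres row to the 9-field document shape described above;
    // the property names here are hypothetical.
    const batch = allRows.slice(i, i + BATCH_SIZE).map(function (row) {
      return {
        a: row.a, b: row.b, c: row.c, d: row.d,
        s1: row.s1, s2: row.s2, s3: row.s3, s4: row.s4,
        created: row.createdDate
      };
    });
    await bulkInsert(batch);
  }
}
```

With a few thousand documents per request, millions of records become a few thousand HTTP calls rather than millions.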

How often to fetch data from the server

I'm pretty new to AngularJS and I'm writing my first application.
I'd like to know if there is a specific best practice about how often I should pull data from the server when I have to deal with a big dataset. Is it better to pull one big JSON dataset in a single call to the server, or is it advisable to fetch small chunks of data with multiple requests?
Let me explain. My application currently fetches all the JSON data it needs when the main page loads. That's a lot of stuff (about 3 MB). It then never fetches any other data; I can apply filters to the data and sort it, and it's all done client-side with no interaction with the server.
Now, would it be worth fetching only a little data at the beginning and then, based on the applied filters, re-fetching data from the server?
Thanks!
It all depends on the specific requirements and usage patterns. If you are worried about quick load times, there are patterns similar to the ones used with jQuery.dataTables, which allow for very quick loading of data by relying on server-side filtering.
If you have good cacheability (the data is the same for all users) and no worries about long load times, go for the eager load (and use a filesystem-based cache with nginx serving the cached data).
In general, having a local copy of the whole database is only useful when you can't do the work server-side, as an RDBMS is much better at data analysis than any JavaScript implementation.
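For the cacheable, eager-load case, a sketch of what that can look like on the Angular side (the '/api/dataset' endpoint is a placeholder): fetch the whole dataset once, cache the response, and keep all later filtering in the browser:

```javascript
// AngularJS sketch of the eager-load approach: one request up front,
// cached by $http, with filtering done entirely client-side afterwards.
angular.module('app', [])
  .factory('DatasetService', function ($http) {
    return {
      loadAll: function () {
        // cache: true stores the response in Angular's default $http cache,
        // so repeated calls (e.g. after route changes) don't hit the server.
        return $http.get('/api/dataset', { cache: true })
          .then(function (response) { return response.data; });
      }
    };
  })
  .controller('TableCtrl', function ($scope, $filter, DatasetService) {
    $scope.rows = [];
    $scope.visibleRows = [];
    DatasetService.loadAll().then(function (rows) {
      $scope.rows = rows;
      $scope.visibleRows = rows;
    });
    // No further requests: filtering works on the data already in memory.
    $scope.applyFilter = function (text) {
      $scope.visibleRows = $filter('filter')($scope.rows, text);
    };
  });
```

If the dataset grows well beyond a few megabytes, switching to server-side filtering and paging (the dataTables-style pattern mentioned above) is usually the better trade-off.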

Fetching a large block of data using NHibernate

I have some 3 million rows of data to show on a data grid using C#.
I'm currently using NHibernate to fetch the data from a SQL Server 2005 database.
NHibernate takes a lot of time to get the data. Is there any way to retrieve data from the database efficiently using NHibernate?
---Edit----
As the application has huge amounts of data to operate on, loading all rows is just the worst-case scenario. In normal use a user will load about 10k rows. The number of displayed rows can be reduced by paging, but as some rows depend on others I need to load all the data when initializing the app.
NHibernate gets slow even with 1,000 rows. Any suggestions to improve the performance?
Thanks.
You could use a StatelessSession to get the data.
http://ayende.com/blog/4137/nhibernate-perf-tricks
http://darioquintana.com.ar/blogging/2007/10/08/statelesssession-nhibernate-without-first-level-cache/
But first ask yourself whether you really need to display millions of rows. What value does that give to your user? Can they easily locate the data they want?
Also, the DataGrid itself will take a large amount of memory (regardless of whether you are using Windows Forms, WPF, ASP.NET...): memory to store the data itself, plus memory to store additional DataGrid column/cell metadata.
Consider showing only a subset of the data instead. You could allow the user to filter through the data and/or add paging. Filtering and paging can be translated to HQL / Criteria / LINQ / QueryOver queries and eventually to SQL queries.
My first recommendation is not to use an ORM to fetch this much data; my second point is to ask why you would want to fetch 3 million rows of data and show them in a grid at all.
No user can possibly want to scroll through 3 million lines of a table.
You could use a paged data system to request only the page being viewed at any one time, or you could filter the data down to the smaller subset the user is actually interested in.
If you have 3 million records, perhaps what is really needed is an analysis of those records rather than the raw rows.
I would take a look at some of these resources:
http://msdn.microsoft.com/en-us/magazine/cc164022.aspx
http://weblogs.asp.net/rajbk/archive/2010/05/08/asp-net-mvc-paging-sorting-filtering-using-the-mvccontrib-grid-and-pager.aspx
As an application grows in complexity and the data it thrives on becomes more complex, the generality of such OR-mapping engines can lead to various performance and scalability bottlenecks, and the application cannot scale linearly with increasing transactional requirements. Caching is one technique that can enhance the performance of an NHibernate app. In fact, NHibernate provides a basic, not-so-sophisticated in-process L1 cache out of the box. However, it doesn't provide the features that a caching solution must have to make a notable impact on application performance, so it's better to use a second-level cache; it may help you increase the performance of your NHibernate app. There are many third-party NHibernate second-level cache providers available, but I'd recommend you use NCache. Here is a good read about it:
http://www.alachisoft.com/ncache/nhibernate-l2cache-index.html

Dataset retrieving data from another dataset

I work with an application that is switching from file-based data storage to a database. It has a very large amount of code that is written specifically for the file-based system. To make the switch I am implementing functionality that behaves like the old system; the plan is then to make more optimal use of the database in new code.
One problem is that the file-based system often read single records, and read them repeatedly for reports. This has turned into a lot of queries to the database, which is slow.
The idea I have been trying to flesh out is using two datasets: one dataset to retrieve an entire table, and another dataset to query against the first, thereby decreasing the communication overhead with the database server.
I've tried to look at the DataSource property of TADODataSet but the dataset still seems to require a connection, and it asks the database directly if Connection is assigned.
The reason I would prefer to get the result in another dataset, rather than navigating the first one, is that a good amount of logic for emulating the old system is already implemented. This logic is based on having a dataset containing only the results as queried with the old interface.
The functionality only has to support reading data, not writing it back.
How can I use one dataset to supply values for another dataset to select from?
I am using Delphi 2007 and MSSQL.
You can use a ClientDataSet/DataSetProvider pair to fetch data from an existing DataSet. You can use filters on the source dataset, filters on the ClientDataSet and provider events to trim the dataset only to the interesting records.
I've used this technique successfully in a couple of migration projects, and to mitigate a similar situation where an old SQL Server 7 database was queried thousands of times to retrieve individual records, with painful performance costs. Querying it only once and then fetching individual records from the client dataset was, at the time, not only an elegant solution but a great performance boost for that particular application: the best example was an 8-hour process reduced to 15 minutes... the users loved me for that.
A ClientDataSet is just a TDataSet you can seamlessly integrate into existing code and UI.
