I have JSON data of around 50 MB that I need to send to an AngularJS front end, where Highcharts uses it to show line graphs. But fetching that much data over HTTP is really time consuming (and in some cases it times out), which hurts the user experience.
I also suggested to the client that we show aggregated data, which reduces the size of the JSON, but the client is not ready for that because it wouldn't be of much use for his customers' analysis. Here is the fiddle demo with aggregated data. Just replace the query param file=1min_raw in the following piece of code with file=merged_json to check the time taken to download the data and render the chart.
$.getJSON('http://srijan-best.s114.srijan-sites.com/merged_script.php?file=1min_raw', function (data) { /* render the Highcharts line chart from the received data */ });
So is there a way in Highcharts to show the data as soon as it starts arriving? I have gone through the docs but was not able to find anything matching my requirements.
Is there any other alternative for showing this amount of JSON data in Highcharts, or any other charting API, without the performance degradation caused by the HTTP response payload size?
And yes, the JSON data to be fetched can go up to 200 MB.
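For reference, this is roughly the kind of incremental rendering I have in mind; just a sketch, assuming the server could emit newline-delimited JSON points instead of one huge document (the format=ndjson param and the [timestamp, value] point shape are made up). It wouldn't shrink the payload, it would only let rendering start before the transfer finishes.

    // Sketch: stream newline-delimited JSON points and append them to the chart
    // as they arrive, instead of waiting for the full payload.
    var chart = Highcharts.chart('container', {
        series: [{ name: 'Sensor', data: [] }]
    });

    async function streamPoints(url) {
        const response = await fetch(url);
        const reader = response.body.getReader();
        const decoder = new TextDecoder();
        let buffer = '';

        while (true) {
            const { done, value } = await reader.read();
            if (done) break;
            buffer += decoder.decode(value, { stream: true });
            const lines = buffer.split('\n');
            buffer = lines.pop(); // keep the last, possibly incomplete, line

            lines.forEach(function (line) {
                if (!line.trim()) return;
                const point = JSON.parse(line);         // assumed shape: [timestamp, value]
                chart.series[0].addPoint(point, false); // redraw=false keeps appends cheap
            });
            chart.redraw(); // redraw once per received chunk
        }
        chart.redraw();
    }

    // hypothetical ndjson variant of the same endpoint
    streamPoints('http://srijan-best.s114.srijan-sites.com/merged_script.php?file=1min_raw&format=ndjson');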
Thanks
Related
I am really finding that this framework offers lots of features. My question is: I am trying to request around 38,916 (>30,000) records in my React application and then show them in a grid view. I am using Axios to load the large datasets coming through the API. Could someone tell me how to load a huge number of records in less time in React JS? I also want to send requests to multiple APIs in parallel. If anyone has any idea or a better approach, please share it.
Your options:
Pagination/lazy load. If the data includes images too, then lazy load those images as well (see "lazy load images with react").
Use a virtualised list if you want to render all the data at once; it only mounts the rows currently in view (see the sketch below).
If you have to fetch all products at once, you should still show them to the user with pagination or lazy loading, for example 10 items per page, and if the user wants more data change the page number.
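For the virtualised-list option, one way is a library such as react-window; a minimal sketch (the component and field names are placeholders, and records is assumed to be the rows already loaded via Axios):

    // Sketch: only the rows currently visible in the viewport are rendered,
    // so tens of thousands of records stay cheap to display.
    import React from 'react';
    import { FixedSizeList } from 'react-window';

    function RecordGrid({ records }) {
        // Row receives the index and a precomputed position style from react-window
        const Row = ({ index, style }) => (
            <div style={style}>
                {records[index].name}: {records[index].value} {/* placeholder fields */}
            </div>
        );

        return (
            <FixedSizeList
                height={600}      // viewport height in px
                width="100%"
                itemCount={records.length}
                itemSize={35}     // row height in px
            >
                {Row}
            </FixedSizeList>
        );
    }

    export default RecordGrid;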
So I am working on an IoT SaaS project in React.
The user selects a sensor and a time range and receives data visualized in charts with a resolution about 5 minutes.
My question is regarding best practices when handling fetching and saving of this data on the front-end.
I have tried always fetching, which works fine but makes the system kind of slow.
This is especially true while users are quickly switching back and forth between sensors.
I have also tried saving the data, just as JSON in the React state.
This significantly increases performance, but has a lot of other problems.
The browser starts complaining about RAM usage and can sometimes run into out-of-memory errors.
There is also a lot of data handling needed, such as saving several non-continuous date ranges for the same sensor, locating and merging date-range overlaps, etc.
So I am wondering what the best practice is here: should I always fetch, or save on the front end? Are there any frameworks I could use to help with the data handling on the front end, or do I have to do it manually?
Saving all data on the front end is an antipattern because of memory and out-of-sync issues. To make your system feel faster while still relying on backend data, you can try the following:
Optimistic response. This technique runs a simplified copy of some backend logic on the front end while the actual request is in flight, so the user sees a result before the backend data reaches the browser. Let's say you are doing a +1 operation on the backend and the user sends the number 2 to perform it. On the front end you can use something like const optimisticResponse = (userData) => userData + 1, and then overwrite the value if needed once the real data arrives from the backend (see the sketch after these points).
GraphQL allows you to reduce overhead by asking the backend only for the data you need.
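A minimal sketch of the optimistic-response idea in React (the /increment endpoint and the component are made up for illustration):

    // Sketch: show the optimistically computed value immediately,
    // then overwrite it once the real backend result arrives.
    import React, { useState } from 'react';

    const optimisticIncrement = (value) => value + 1; // simplified copy of the backend logic

    function Counter() {
        const [value, setValue] = useState(2);

        const increment = async () => {
            setValue(optimisticIncrement(value));       // instant feedback for the user
            const res = await fetch('/increment', {     // hypothetical endpoint
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify({ value: value })
            });
            const body = await res.json();
            setValue(body.result);                      // reconcile with the real backend answer
        };

        return <button onClick={increment}>{value}</button>;
    }

    export default Counter;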
I have a table with millions of records. I am using Sails.js as my server-side framework, React.js to render data in the view, and MySQL as my DBMS. So what is the best method to retrieve the data in a faster manner?
Ideally the end user should not feel like they are loading a huge amount of data, since that affects the UI as well.
Shall I bring only the first 50 records, show pagination at the bottom using pagination logic, and then fetch the rest in the background using socket.io?
Or is there any better way of handling it?
This really depends on how you expect your user to go through the data.
You will probably want an API call for getting only the first page of data (likely in such a way that you can fetch any page: api/my-data/<pagesize>/<pagenumber>).
Then it depends on what you expect your user to do. Is he going to click through every page to see all the data? In that case, it seems OK to load all the other pages as well, as you mentioned. This seems unlikely to me, however.
If you expect your user to only view a few pages, you could load the data for the next page in the background (api/my-data/<pagesize>/<currentpage+1>), and then load the next page every time the user navigates.
Then you probably still need to support jumping to a certain page number, where you will need to check if you have the data or not, and then show a loading state (or nothing) while the data is being fetched.
All this said, I don't see why you would need socket.io instead of normal requests (you really only need socket.io if the server needs to be able to make 'requests' to the client, so to speak).
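A rough sketch of the prefetch-on-navigate idea, using the hypothetical api/my-data/<pagesize>/<pagenumber> endpoint from above:

    // Sketch: fetch the current page, and prefetch the next one in the background
    // so the most likely navigation feels instant.
    const PAGE_SIZE = 50;
    const pageCache = new Map(); // pageNumber -> records

    async function fetchPage(pageNumber) {
        if (pageCache.has(pageNumber)) return pageCache.get(pageNumber);
        const res = await fetch(`/api/my-data/${PAGE_SIZE}/${pageNumber}`); // hypothetical endpoint
        const records = await res.json();
        pageCache.set(pageNumber, records);
        return records;
    }

    async function showPage(pageNumber, render) {
        render(await fetchPage(pageNumber)); // render the requested page (or a loading state first)
        fetchPage(pageNumber + 1);           // fire-and-forget prefetch of the next page
    }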
The problem:
AngularJS forces you to split the view from the model, which is good. But we're experiencing the following downside because of it: when a user loads our page, the browser makes not 1 but 2 requests:
First it loads the view template (the first request)
Then the Angular services load data from the server (the second request)
As a result, the page loads a bit more slowly.
The question:
Is it possible to deliver the view already populated with data on the first page load, and only fetch data from the server later, when something on the page must change?
I tried to find something regarding it, but didn't find anything.
You will have a lot more requests than that, since you have to load the JavaScript and CSS libraries as well.
You can store your initial data in a service or on $rootScope and just update the data when you need to. What exactly is your problem here?
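A minimal sketch of that, assuming the server embeds the initial data into the page as window.__INITIAL_DATA__ (that global and the /api/page-data endpoint are assumptions):

    // Sketch: a service hands out the server-embedded data immediately,
    // and only hits the API later when something must change.
    angular.module('app')
        .factory('pageData', function ($http) {
            var data = window.__INITIAL_DATA__ || null; // assumed server-rendered blob
            return {
                get: function () { return data; },
                refresh: function () {
                    return $http.get('/api/page-data').then(function (res) { // hypothetical endpoint
                        data = res.data;
                        return data;
                    });
                }
            };
        })
        .controller('PageCtrl', function ($scope, pageData) {
            $scope.model = pageData.get(); // rendered immediately, no second data request
            $scope.reload = function () {
                pageData.refresh().then(function (data) { $scope.model = data; });
            };
        });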
The modular approach would be to break all the different components of the page that consume data into separate data requests, so that the interface can load progressively as the different data requests complete.
Additionally, you can load initial data, but then make some predictions on what the user will do next and lazy load additional data in the background.
Finally, you could store the previously loaded data in local storage (there are lots of modules out there you can utilize) so that it's pulled instantly upon the user's next visit. You would also want to add some sort of timestamp comparison between the data in storage and on the server to see if it has been updated.
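A rough sketch of the local-storage idea with a timestamp check (the /api/data endpoint, the updated flag and the response shape are assumptions):

    // Sketch: render cached data instantly on the next visit, then refresh it
    // only if the server reports a newer version.
    function loadData(onData) {
        var cached = localStorage.getItem('appData');
        if (cached) {
            var parsed = JSON.parse(cached);
            onData(parsed.data); // instant render from cache
            // ask the server whether anything changed since we cached it
            fetch('/api/data?since=' + parsed.savedAt) // hypothetical endpoint
                .then(function (res) { return res.json(); })
                .then(function (body) {
                    if (body.updated) {
                        save(body.data);
                        onData(body.data);
                    }
                });
        } else {
            fetch('/api/data')
                .then(function (res) { return res.json(); })
                .then(function (body) {
                    save(body.data);
                    onData(body.data);
                });
        }

        function save(data) {
            localStorage.setItem('appData', JSON.stringify({ data: data, savedAt: Date.now() }));
        }
    }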
I have a map with a large data set (more than 100k features) with markers. I am using the GeoJSON format with clustering and a BBOX strategy [fetching the GeoJSON data through an HTTP request when the page loads].
But my browser (IE7/8) has problems with this amount of data; it gets stuck while processing that many features and shows the error message "Out of memory".
Is there any solution?
please help...
Thanks in advance
Drawing 100k features on the client is not such a good idea. Even "good" browsers will slow down attempting to render that much data. You have a couple of options though:
Generate images from the data on the server side and serve tiles to the client. A WMS service is the way to go in this case, and you can use GeoServer, MapServer or any other WMS-compliant map rendering engine. You can then use GetFeatureInfo requests to fetch attribute data for features. You can see an example of how it works in this OpenLayers demo.
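A minimal OpenLayers 2 sketch of that setup (the GeoServer URL and layer name are placeholders):

    // Sketch: render the heavy data as WMS tiles on the server and only query
    // attributes for clicked features via GetFeatureInfo.
    var map = new OpenLayers.Map('map');

    var wmsLayer = new OpenLayers.Layer.WMS(
        'Points',
        'http://example.com/geoserver/wms',          // placeholder WMS endpoint
        { layers: 'myworkspace:points' }
    );
    map.addLayer(wmsLayer);
    map.zoomToMaxExtent();

    // Instead of loading 100k features, ask the server about the clicked spot only
    var info = new OpenLayers.Control.WMSGetFeatureInfo({
        url: 'http://example.com/geoserver/wms',     // placeholder
        layers: [wmsLayer],
        eventListeners: {
            getfeatureinfo: function (event) {
                // event.text holds the GetFeatureInfo response (HTML by default)
                console.log(event.text);
            }
        }
    });
    map.addControl(info);
    info.activate();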
If your data is static and doesn't change much, you can create tiles using TileMill and then use them in OpenLayers as an OpenLayers.Layer.TMS layer. You can then use the UTFGrid technique to map attribute data to tiles. Here's an example of how it works.