What is the best approach to reduce the Redux state size?

I'm starting a new project: a mobile app built with React Native.
I'm thinking about using Redux to manage all of the data from our remote server API; our product has a lot of business data to display in the mobile app.
So, my question is: if the Redux store holds all of our business data, it will take up a lot of memory on a mobile device (for example, the data backing a ListView component). How can I reduce the memory usage?

Based on your description of what you're trying to do, I'm going to address the underlying concern (the overall size of your Redux store and the approach of storing everything on the client) rather than how to actually reduce the size of your data store; the only answer to that is simply "don't store so much."
That having been said, if you need some sort of gut check on whether memory/storage will be an issue, take a representative chunk of the record data served by your API, serialize it as a JSON string, and figure out how big it is. This is just a total swag (a rough back-of-the-envelope estimate) and ignores things like compression, data duplication, and the difference between storing something in AsyncStorage versus simply holding it in memory.
For example, a sample Twitter API response is roughly 8.5 KB with whitespace removed. Call it 10 KB per record for simplicity.
Now, how many records do you plan on bringing down? 10? 100? 1000? Let's say 1000 records of this type. That's 10,000 KB, or roughly 10 MB.
At the scale in this example, 10 MB may or may not be a trivial amount of memory/storage for your application, depending on the specific constraint you're concerned about.
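If you want to run that arithmetic yourself, here is a minimal sketch of the measurement (the record shape is a placeholder; substitute a real response from your API):

```ts
// Stand-in for one real record from your API; substitute actual data.
const sampleRecord = { id: 123, text: "hello", user: { name: "example" } };

// Size of one record serialized as JSON, in UTF-8 bytes.
// Buffer is Node-only; in a browser or React Native environment,
// new Blob([json]).size gives the same number.
const json = JSON.stringify(sampleRecord);
const bytesPerRecord = Buffer.byteLength(json, "utf8");

// Scale up to however many records you plan to bring down.
const expectedRecords = 1000;
const totalMB = (bytesPerRecord * expectedRecords) / (1024 * 1024);
console.log(`~${bytesPerRecord} B per record, ~${totalMB.toFixed(2)} MB total`);
```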
Apply this same process to your particular use case and see whether the amount of data you want to store will be a problem on the devices you have to support.
A more relevant thing to consider is the performance impact of churning through large quantities of data on a single thread, for things like data manipulation and joining/merging, if that will be a need (a sketch of the usual mitigation follows below).
Redux is a tiny library that doesn't actually do much for you by itself, so this consideration is a general one: it is entirely specific to your own application and cannot be answered concretely here.
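On that single-thread point: if you will be deriving data from a large store on every update, the common mitigation in the Redux ecosystem is a memoized selector, so the expensive computation only reruns when its inputs actually change. A minimal sketch using reselect; the state shape and mergeAndSort are hypothetical:

```ts
import { createSelector } from "reselect";

interface Record { id: number; score: number; }
interface AppState { records: Record[]; }

// Hypothetical expensive derivation over a whole store slice.
const mergeAndSort = (records: Record[]): Record[] =>
  [...records].sort((a, b) => b.score - a.score);

// createSelector memoizes: mergeAndSort only reruns when state.records
// is a new reference, not on every dispatch.
export const selectRankedRecords = createSelector(
  (state: AppState) => state.records,
  (records) => mergeAndSort(records)
);
```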

Related

What are the disadvantages of URLs that are long but below 2k characters?

I'm building a web app where, in certain cases, users may want to deep-link to a state consisting of a list of items to have some action carried out upon them.
We are considering whether to serialize the state on the URL or implement a solution that will persist the state on the backend and provide an ID to look it up.
So the url-scheme would either be something like:
/batch/item-1&item-2&item-3
Or:
/batch/4jk5kjdl3k
Where 4jk5kjdl3k represents an ID that allows the backend to look up the list of item IDs.
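For concreteness, a minimal sketch of the two schemes side by side (persistToBackend is a hypothetical backend call, and the 2000-character guard anticipates the browser limit discussed below):

```ts
// Scheme 1: serialize the item IDs directly into the URL.
function buildBatchUrl(itemIds: string[]): string {
  return `/batch/${itemIds.map(encodeURIComponent).join("&")}`;
}

// Hypothetical backend call that stores the list and returns a short
// lookup ID (e.g. "4jk5kjdl3k") for scheme 2.
declare function persistToBackend(itemIds: string[]): Promise<string>;

// Use the direct URL while it fits comfortably under IE's 2083-char
// limit; otherwise fall back to the lookup-ID scheme.
async function linkForBatch(itemIds: string[]): Promise<string> {
  const direct = buildBatchUrl(itemIds);
  if (direct.length <= 2000) return direct;
  const id = await persistToBackend(itemIds);
  return `/batch/${id}`;
}
```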
We do not care about SEO in this instance, as it is an editing interface that will not generally be crawlable or meaningful to search engines. From my research, IE has a limit of 2083 characters, and SEO sitemaps cap URLs at 2048 characters. We should be able to keep URL lengths below either limit: in the typical use case we would sit somewhere around 200-500 characters, though extreme cases could get close to 2000.
If we serialize the state on the URL directly, we can save a fair amount of work on the backend, but I'm worried that I'm overlooking some disadvantages to these potentially very long URLs.
I found this question on the topic, but it appears to be more concerned with the matter of exceeding 2083 characters:
What is the maximum length of a URL in different browsers?
So far the only disadvantages I see are:
Perhaps the design of the IDs we store could change and become longer in the future, bumping us over the limit (quite unlikely)
These very long URLs can be unwieldy for users to deal with when copying/pasting, and they could end up accidentally cutting them off
For context, the app is built in React, but I don't think that should be a determining factor.

Storing often-used immutable data on the front end

I am working on a portal where a large chunk of the data is the same all the time; think of it as a very advanced dictionary with a base of questions attached. Those do not change. Since it is a significant amount of content (around 20 MB total), it does not seem worth downloading it every single time (and many actions need all of it). Instead, I thought I would store it, hashed, on the front end and access it as needed; this would significantly reduce the server's load. However, I see that localStorage is limited to 5 MB.
My question is: are there any other good solutions/practices to apply in this situation?
The database is MongoDB, and it is a MERN stack app.
For significant payloads on the front end, you can try IndexedDB, a database that lives in the browser itself: https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB
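A minimal sketch of that approach, assuming a dictionary-style payload keyed by id (the database name, store name, and record shape are placeholders):

```ts
// Open (or create) a small IndexedDB database with one object store.
function openDictionaryDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("dictionary-cache", 1);
    req.onupgradeneeded = () => {
      // Runs only when the database is created or its version bumps.
      req.result.createObjectStore("entries", { keyPath: "id" });
    };
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Bulk-insert the downloaded payload once, then read entries as needed.
async function cacheEntries(entries: { id: string; body: string }[]) {
  const db = await openDictionaryDb();
  const tx = db.transaction("entries", "readwrite");
  const store = tx.objectStore("entries");
  for (const entry of entries) store.put(entry);
  await new Promise<void>((resolve, reject) => {
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

async function getEntry(id: string): Promise<unknown> {
  const db = await openDictionaryDb();
  return new Promise((resolve, reject) => {
    const req = db.transaction("entries").objectStore("entries").get(id);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}
```

Unlike localStorage, IndexedDB is asynchronous and its quota is typically large enough to comfortably hold a 20 MB payload.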

Which is the best database to use to index the internet?

If you had to make provision for 80 million records (one for each page on the internet) and store the relationships between those records (which could run to as many as 80 million squared pairs), which database would be the best for this?
I started this project thinking we would only map a portion of the internet, but unfortunately it has grown far beyond the limits of MySQL. I need a better way to keep track of this data. The frontend is PHP, but I suppose the backend can be anything, as long as it can handle that amount of data.
I won't say there is one holy database for your needs; it might be better to split your database into logical parts to handle the amount of data, and perhaps to outsource some data to the file system, since you won't need everything in your database all the time.
If you scan the interwebs, you would typically save the HTML, CSS, or any other big data you crawl into your filesystem, while keeping the connections and everything meta-related in your database. But I think you mentioned that already.
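A rough sketch of that split (shown in TypeScript/Node with better-sqlite3 for illustration, though the question's stack is PHP/MySQL; the schema and paths are assumptions):

```ts
import { createHash } from "node:crypto";
import { mkdirSync, writeFileSync } from "node:fs";
import Database from "better-sqlite3";

const db = new Database("crawl-meta.db");
db.exec(`CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, path TEXT);
         CREATE TABLE IF NOT EXISTS links (src TEXT, dst TEXT)`);
mkdirSync("pages", { recursive: true });

// Big blobs (the crawled HTML) go to the filesystem; only metadata
// and the link graph go into the database.
function storePage(url: string, html: string, outLinks: string[]): void {
  const path = `pages/${createHash("sha1").update(url).digest("hex")}.html`;
  writeFileSync(path, html);
  db.prepare("INSERT OR REPLACE INTO pages VALUES (?, ?)").run(url, path);
  const insertLink = db.prepare("INSERT INTO links VALUES (?, ?)");
  for (const dst of outLinks) insertLink.run(url, dst);
}

storePage("https://example.com/", "<html>...</html>", ["https://example.com/about"]);
```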
The best advice I can give here is to make sure your database structure fits your processes before you think about switching databases. If you really do need to switch (because MySQL won't give you more performance), there are MongoDB and/or WebScaleSQL; WebScaleSQL is reportedly used by Facebook to handle the amount of their data.
A big question is whether you can improve performance just by improving your hardware. You should check that too, AFTER you have checked your structure and processes!

Best way to store constant data on Google App Engine

I'm making a (quite) simple Python web application on GAE. The web app basically asks for user input, does a basic calculation, then spews out some questions from several modules based on the previous calculation, does more basic calculation, and spews out more information to the user.
Now, the problem is:
The data that needs to be fetched is scattered throughout the constant data (e.g., each user needs several small parts of the whole)
The whole dataset is about 100 KB; the data required per user is about 10 KB
The data is constant, but may be modified (by me)
I want to conserve CPU cycles :-)
So far, I have been hard-coding the data as Python string literals separated by if-elif-else branches in a Python module, but it is so ugly (the data is formatted as HTML and spans more than one line per item). I could store it in the database, but that may require more CPU cycles, and I don't know an easy way to store constant (non-user-modifiable) data in the datastore. Putting it in a file formatted as XML or something could require even more CPU power for parsing. So what is the best way to store constant data?
Store the data as constants in your source code, or as a data file that you open within your app and read the relevant data out of.
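A minimal sketch of the data-file variant (in TypeScript for consistency with the other examples here; the question's app is Python, but the pattern is identical, and questions.json and its shape are assumptions). Loading at module level means the file is parsed once per process rather than once per request:

```ts
import { readFileSync } from "node:fs";

interface Question {
  id: string;
  html: string;
}

// Parsed once, when the module is first imported; every request served
// by the same process afterwards reuses the in-memory map for free.
const questions: Question[] = JSON.parse(
  readFileSync("questions.json", "utf8") // assumed data file
);
const byId = new Map(questions.map((q) => [q.id, q] as const));

export function getQuestion(id: string): Question | undefined {
  return byId.get(id);
}
```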
Ah... whatever. I ended up using the database for that, with a cache in front, and I'm thinking about denormalizing it even further.

Speed: Saved Objects vs Database

I am designing a dynamic site around some form of weather data, which I need to scrape since it is always changing. I would like to know whether it is faster to use a database like SQLite or to save objects and reload them whenever needed. Both options will hold the scraped data.
I will likely use Python or Ruby; I haven't decided yet.
This depends on a huge number of factors.
If you need to query the data and search through it, a database will likely be more efficient; databases are highly optimized for this type of operation.
If, however, you are purely dumping and reloading a chunk of memory, it could be faster to save it directly to a file.
That being said, I'd look at how you're going to use the data and choose the method that makes the most sense for your application. I would not base this on speed: unless you're saving a huge amount of data, the speed of saving it will likely not be a real performance bottleneck.
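To make the two options concrete, a rough sketch of each (in TypeScript/Node since the language wasn't settled; better-sqlite3, the file names, and the record shape are assumptions):

```ts
import { readFileSync, writeFileSync } from "node:fs";
import Database from "better-sqlite3";

interface Reading { city: string; tempC: number; ts: number; }
const scraped: Reading[] = [{ city: "Oslo", tempC: 4.2, ts: Date.now() }];

// Option A: dump and reload the whole object graph as a file.
// Fast to write, but every read reloads everything into memory.
writeFileSync("readings.json", JSON.stringify(scraped));
const reloaded: Reading[] = JSON.parse(readFileSync("readings.json", "utf8"));

// Option B: a database, which can answer queries without loading it all.
const db = new Database("readings.db");
db.exec("CREATE TABLE IF NOT EXISTS readings (city TEXT, tempC REAL, ts INTEGER)");
const insert = db.prepare("INSERT INTO readings VALUES (?, ?, ?)");
for (const r of scraped) insert.run(r.city, r.tempC, r.ts);
const oslo = db.prepare("SELECT * FROM readings WHERE city = ?").all("Oslo");

console.log(reloaded.length, oslo);
```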
