My app keeps getting slower in AngularJS - angularjs

I have an application built with Ionic; it has several sections. One section, on first use, downloads a dataset of 18,000 records (using $http requests) and stores it in a local database (PouchDB). I then read those 18,000 records back and loop over them for certain operations I need.
The problem is that every time I access this section, it gradually becomes slower. In the application settings (app.js) I have the option
{cache: false}
on all routes of my app.
This is supposed to release memory, so I don't understand why it gradually becomes slower and slower.
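Roughly, each route is declared like this (the state name, template, and controller below are just placeholders, not my real ones; only the cache flag matches what I described above):

    // app.js - illustrative route; only the cache flag matches the question
    angular.module('app', ['ionic'])
      .config(function ($stateProvider) {
        $stateProvider.state('records', {           // placeholder state name
          url: '/records',
          templateUrl: 'templates/records.html',    // placeholder template
          controller: 'RecordsCtrl',                // placeholder controller
          cache: false // tell Ionic not to keep this view cached after leaving it
        });
      });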
My project is structured with controllers and data bound in HTML views.
Every time I enter this section I load the 18,000 records and perform the operations; it is necessary to walk through all 18,000 records each time. For example, I have the list of all the cities in my country: someone selects a city from a dropdown and I then iterate over all the records to do the operations I need. That is more or less what I have.
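In rough sketch form the pattern looks like this (the database name and the cityId field are placeholders; only the load-everything-then-loop shape matches what I described):

    // Illustrative only: load every record from PouchDB, then loop over all of them
    var db = new PouchDB('records'); // placeholder database name

    function processForCity(cityId) {
      return db.allDocs({ include_docs: true }).then(function (result) {
        var matches = [];
        result.rows.forEach(function (row) {
          if (row.doc.cityId === cityId) { // placeholder field name
            matches.push(row.doc);
          }
        });
        return matches; // the further operations run on these
      });
    }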
This only happens when I build for Android devices, but on my Samsung Galaxy S7 the problem does not occur and it is always fast, which makes me think it may be a RAM problem.
What can I do?

Related

Snowsight page is displayed too slowly when many queries are running

I'm using a Snowflake trial version to do a performance test.
I run 9 heavy queries (about 20 minutes each on an XS cluster) at the same time and watch the warehouse or history pane. However, the page takes far too long to display: about 30 seconds.
I think the cloud services layer (like a Hadoop head node?) doesn't have adequate resources to handle this.
Is it because I'm using the trial version? If I use the Enterprise or Business Critical editions, will it still happen?
The "cloud services", is actually unrelated to your warehouses, and 9 queries is not enough overload that. But at the same time the trial accounts might be on slightly underpowered SC layer, but I am not sure why that would be the case. The allocated credit spend is just that.
I am puzzled what you are trying to "test" about running many slow for the server sized queries at the same time?
When you say the page takes 30 seconds to load, do you mean if you do nothing the query execution status/time is only updated every ~30 seconds, or if you do a full page reload it is blank for 30 seconds?

Backbone.js Application takes up to 30% CPU performance

I am working on a Backbone.js application which is nearly done by now. My problem is that my application seems to require a lot of CPU: a regular MacBook Air goes up to 30% CPU (in the Firefox process) when visiting my website.
I can't think of any reason for this. I have like 6-7 different Views and a table with like 60 Views (each entry/row is a View object). Also I use setInterval() to fetch updates from the API every 10 seconds, but they're in total 4 HTTP requests with a content-length of ~1000, which should be totally acceptable.
According to Backbone-Eye I have 66 Models, 67 Views, 4 Collections, 1 Router. Also I took a "Javascript CPU profile" and it seems that a lot of CPU performance is used for rendering/painting, but with no information how to reduce it.
I would appreciate any tips on how to reduce the CPU load in my Backbone app.
Stagger the 4 requests you make every 10 seconds: make each one of them poll somewhere between 9.8 and 10.2 seconds instead of firing them all at exactly 10 seconds.
After you do these 4 fetches, check whether the content has changed. Only re-render the views if the content from your fetch has changed.
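A rough sketch of both ideas, assuming a generic Backbone collection; the jitter range and the 'contentChanged' event name are illustrative, not taken from the asker's code:

    // Poll with a small random jitter (~9.8-10.2 s) instead of a fixed 10 s interval
    function schedulePoll(collection) {
      var delay = 9800 + Math.random() * 400; // milliseconds
      setTimeout(function () {
        var before = JSON.stringify(collection.toJSON());
        collection.fetch({
          success: function () {
            // Only signal a re-render when the fetched content actually changed
            if (JSON.stringify(collection.toJSON()) !== before) {
              collection.trigger('contentChanged');
            }
            schedulePoll(collection);
          },
          error: function () { schedulePoll(collection); }
        });
      }, delay);
    }

The views would then listen for that custom event instead of every 'sync', and re-render only when it fires.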
Do you have view memory leaks, i.e. zombie views? Do you properly close each row view? Read "How To: Detect Backbone Memory Leaks".
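A minimal sketch of the usual row-view cleanup pattern; TableView and RowView are assumed names, and the cleanup relies on Backbone's built-in View#remove, which detaches the element and calls stopListening():

    var TableView = Backbone.View.extend({
      initialize: function () {
        this.rowViews = []; // keep references so old rows can be closed later
      },
      render: function () {
        // Close previous row views before rendering new ones to avoid zombies
        this.rowViews.forEach(function (row) { row.remove(); });
        this.rowViews = [];
        this.collection.each(function (model) {
          var row = new RowView({ model: model }); // RowView is an assumed child view
          this.rowViews.push(row);
          this.$el.append(row.render().el);
        }, this);
        return this;
      }
    });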

Strategy for caching of remote service; what should I be considering?

My web app contains data gathered from an external API of which I do not have control. I'm limited to about 20,000 API requests per hour. I have about 250,000 items in my database. Each of these items is essentially a cached version. Consider that it takes 1 request to update the cache of 1 item. Obviously, it is not possible to have a perfectly up-to-date cache under these circumstances. So, what things should I be considering when developing a strategy for caching the data. These are the things that come to mind, but I'm hoping someone has some good ideas I haven't thought of.
time since item was created (less time means more important)
number of 'likes' a particular item has (could mean higher probability of being viewed)
time since last updated
A few more details: the items are photos. Every photo belongs to an event. Events that are currently occurring are more likely to be viewed by clients (therefore they should take priority). Though I only have 250K items in the database now, that number is increasing rather rapidly (it will not be long until the 1 million mark is reached, maybe 5 months).
Would http://instagram.com/developer/realtime/ be any use? It appears that Instagram is willing to POST to your server when there's new (and maybe updated?) images for you to check out. Would that do the trick?
Otherwise, I think your problem sounds much like the problem any search engine has—have you seen Wikipedia on crawler selection criteria? You're dealing with many of the problems faced by web crawlers: what to crawl, how often to crawl it, and how to avoid making too many requests to an individual site. You might also look at open-source crawlers (on the same page) for code and algorithms you might be able to study.
Anyway, to throw out some thoughts on standards for crawling:
Update often the things that tend to have changed when you check them. So, if an item hasn't changed in the last five updates, then maybe you could assume it won't change as often and update it less frequently.
Create a score for each image, and update the ones with the highest scores. Or the lowest scores (depending on what kind of score you're using). This is a similar thought to what is used by LilyPond to typeset music. Some ways to create input for such a score:
A statistical model of the chance of an image being updated and needing to be recached.
An importance score for each image, using things like the recency of the image, or the currency of its event.
Update things that are being viewed frequently.
Update things that have many views.
Does time affect the probability that an image will be updated? You mentioned that newer images are more important, but what about the probability of changes on older ones? Slow down the frequency of checks of older images.
Allocate part of your requests to slowly updating everything, and split up the other parts to process results from several different algorithms simultaneously. So, for example, have the following (the numbers are for show/example only--I just pulled them out of a hat; a rough sketch in code follows this list):
5,000 requests per hour churning through the complete contents of the database (provided they've not been updated since the last time that crawler came through)
2,500 requests processing new images (which you mentioned are more important)
2,500 requests processing images of current events
2,500 requests processing images that are in the top 15,000 most viewed (as long as there has been a change in the last 5 checks of that image, otherwise, check it on a decreasing schedule)
2,500 requests processing images that have been viewed at least
Total: 15,000 requests per hour.
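To make the scoring and budget ideas concrete, here is a rough sketch; every field name, weight, and cutoff is invented for illustration and would need tuning against real usage data:

    // Hypothetical score: more-liked, more-viewed and longer-unchecked items rank higher,
    // while very old items slowly fall in priority
    function refreshScore(item, now) {
      var ageHours   = (now - item.createdAt) / 3600000;
      var staleHours = (now - item.lastCheckedAt) / 3600000;
      return item.likes * 2 + item.views + staleHours * 5 - ageHours;
    }

    // Spend one slice of the hourly budget on the top-scored items, leaving the rest
    // of the 20,000 requests for the slow full sweep and current-event images
    function pickItemsToRefresh(items, budget) {
      var now = Date.now();
      return items
        .slice()
        .sort(function (a, b) { return refreshScore(b, now) - refreshScore(a, now); })
        .slice(0, budget);
    }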
How many (unique) photos / events are viewed on your site per hour? Those photos that are not viewed probably don't need to be updated often. Do you see any patterns in views for old events / photos? Old events might not be as popular, so perhaps they don't have to be checked that often.
andyg0808 has good, detailed information; however, it is important to know the patterns of your data usage before applying it in practice.
At some point you will find that 20,000 API requests per hour will not be enough to update frequently viewed photos, which might lead you to different questions as well.

Database access time on Heroku with Play Framework

I am having a problem and I need your help.
I am working with Play Framework v1.2.4 in Java, and my server is hosted on Heroku.
Everything works fine and I can access my databases, but I run into trouble when I do a couple of saves to the database.
I have a method that stores data in the database many times and then returns a notification to a mobile phone. My problem is that the notification arrives before the database has finished saving the data: when the notification arrives I request the updated data from the server, and it returns the data without the last update. A few seconds later I try updating again and the data shows correctly, so I think there is an access-time problem.
The idea would be that the server only sends the notification once the database has finished saving the data.
I don't know if this is caused by using the free tier of the Heroku servers, but I want to be sure before purchasing a paid one.
In general, requests to cloud databases are always slower than the same requests working against your local machine. Even a simple query that needs just 0.0001 sec on your computer can be as slow as 0.5 sec in the cloud. The reason is simple: cloud providers use shared databases + (geo) replication, which just cannot be compared to a database accessed by only one program on the same machine.
Also keep in mind that free Heroku DB plans don't offer ANY database cache, which means that every query is fetched from the cloud directly.
As we don't know your application, it's hard to say what the bottleneck is; anyway, you almost certainly have at least 3 ways to solve your problem. They are not alternatives; you will probably need to use (or at least check) all of them.
Risk paying for some basic plan and see how things change with the paid version; maybe it will be good enough for you, maybe not.
Redesign your application to make fewer queries. For example, instead of sending 10 queries to select 10 different rows, send one query that selects all 10 records at once.
Use Play's cache API to avoid selecting the same set of data again and again. For example, if you have categories that change rarely but you need the category tree for each article, you don't need to fetch the categories from the DB every time; instead you can store a List of categories in the cache, so you only need one request to fetch the article's content (which can be cached for a short time as well...).

How can I save specific cache elements for 2 hours instead of 10 minutes in Cakephp?

I have noticed that my site in CakePHP is very, very slow. I have rewritten my entire site in CakePHP with exactly the same functionality and it's taking 400 ms to generate every page instead of 20 ms. 400 ms is far from the 50-100 ms parse times I am hoping to achieve. Site speed is very important to me; it was one of the reasons I moved away from learning more about Drupal.
When I was writing all SQL queries myself and working with simple includes, there was no need to do much optimizing. I have to start optimizing the code now, though.
All pages show, in a block, the number of users, news posts, articles and a few other things that have been posted. This takes 9 SQL queries and seems to take away some performance. That's what I want to use caching for.
At the moment my site doesn't get that many visitors and I'm mainly rebuilding it to become a better web developer, but the high parse time bums me out. I am going to remove Croogo altogether and only work with self-written code. I have already stumbled on many horribly performance-degrading parts of Croogo.
I would like to save all those 9 query results in the cache via a cronjob that runs the 9 queries and stores the results. My question is how I can keep data in the cache longer: it normally caches data for 10 minutes, but I'd like to cache this specific data for 150 minutes and run the cronjob every 2 hours. I know it can be done via core.php, but I wouldn't like to cache everything for 150 minutes, just the statistics data for the leftmost block at www.daweb.nl.
Statistics
Articles:
Members:
Javascripts: 29
News posts: 4
Nodes: 16
PHP Scripts:
Members, Articles and PHP Scripts are empty, which means nobody has accessed the pages that generate the relevant data. I could make a long block of code with a lot of if (there is cache) / else (generate cache), but that's not going to make things much prettier either. Also, I'd have no idea where to place that code. I am not looking to write a bunch of code in app_controller.php; that can't be good for the site.
If site speed is important to you (more than those automagics Cake has to offer) then you might want to look at CodeIgniter.
Anyway, here's how to set cache setting for elements: http://book.cakephp.org/view/1083/Caching-Elements
