Could somebody please give some tips on how to improve web2py performance (Apache with WSGI + MySQL)? I have an application that receives Ajax requests from the client every few seconds, queries the database, and returns the results. The server is an Ubuntu machine with 640 MB of RAM (a virtual server on Amazon EC2, no X server).
There are 4 WSGI processes in the Apache config. A freshly started apache2 instance leaves about 300 MB free, but after a hundred requests or so the system becomes slow and there are long delays. Restarting the server frees up memory (I set up cron to do it every 30 minutes, but I suspect that is bad practice).
I would be grateful for any advice! A more powerful server is not an option yet because of the budget.
Thanks in advance!
Make sure you use connection pools. It makes a big difference.
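In web2py, pooling is enabled by passing `pool_size` to the DAL constructor, e.g. `DAL('mysql://...', pool_size=10)`. To show why it helps, here is a hypothetical sketch of the mechanism with the standard library only (this is not web2py's actual implementation; it uses SQLite so it runs anywhere):

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal pool sketch: hand out idle connections instead of opening
    a new one per request (illustration only, not web2py's DAL code)."""

    def __init__(self, dsn, size=4):
        self._idle = queue.Queue(maxsize=size)
        for _ in range(size):
            # check_same_thread=False lets worker threads share connections
            self._idle.put(sqlite3.connect(dsn, check_same_thread=False))

    def acquire(self):
        return self._idle.get()   # blocks if every connection is busy

    def release(self, conn):
        self._idle.put(conn)      # return the connection for reuse

pool = ConnectionPool(":memory:", size=1)   # size=1 just for this demo
conn = pool.acquire()
conn.execute("CREATE TABLE hits (n INTEGER)")
pool.release(conn)
reused = pool.acquire()          # the same connection object comes back
print(reused is conn)            # → True
```

The saving comes from skipping the connect/authenticate handshake on every request, which is especially noticeable with MySQL over TCP.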
Also, do not use cron for this. Use a background process instead; cron can eat more memory than necessary.
Read Chapter 11, Deployment Recipes, of the web2py book! There are a lot of ways to improve web2py performance.
If you are running background scripts, make sure to commit() or rollback() your transactions. This is not needed inside the web2py environment, but it is needed when you run standalone scripts.
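As an illustration, a standalone script outside web2py has to finish its transactions explicitly. The sketch below uses plain SQLite so it is self-contained, but the same commit/rollback pattern applies to the DAL's `db.commit()` and `db.rollback()`:

```python
import sqlite3

# Hypothetical standalone maintenance script: outside the request cycle
# there is no automatic commit, so the transaction must be closed by hand.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER, done INTEGER)")
conn.execute("INSERT INTO jobs VALUES (1, 0)")

try:
    conn.execute("UPDATE jobs SET done = 1 WHERE id = 1")
    conn.commit()        # persist the change
except sqlite3.Error:
    conn.rollback()      # undo the partial transaction on failure

done = conn.execute("SELECT done FROM jobs WHERE id = 1").fetchone()[0]
print(done)  # → 1
```

Forgetting the commit in a long-running script also keeps transactions (and their locks) open on the database server, which can itself cause the slowdowns described above.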
I have a per-user DB architecture like so:
There are around 200 user DBs, and each has a continuous replication link to the master couch (all within the same CouchDB instance). The problem is that CPU usage is always close to 100%.
The DBs are idle, so no data is being written to or read from them. There are only a few KB of data per DB, so I don't think the load is an issue at this point. The master DB is less than 10 MB.
How can I go about debugging this performance issue?
You should have a look at https://github.com/redgeoff/spiegel - it's a tool for handling many CouchDB replications in a scalable way. Basically, it achieves that by listening to the _global_changes endpoint and creating replications only when needed.
In recent CouchDB versions (2.1.0+), the replicator has been improved, but I think for replicating per-user databases it still makes sense to use an external mechanism like Spiegel to manage the number of active replications.
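To illustrate the idea, a minimal manager in the spirit of Spiegel would create a `_replicator` document only when a database actually changes, instead of keeping 200 continuous replications alive. The helper below just builds such a document (the database names and CouchDB URL are placeholder assumptions); POSTing it to `/_replicator` is what actually starts the replication:

```python
import json

def replication_doc(user_db, master_db="master",
                    couch_url="http://localhost:5984"):
    """Build a _replicator document replicating one per-user database
    into the master DB (names/URL are example assumptions)."""
    return {
        "_id": "repl-%s-to-%s" % (user_db, master_db),
        "source": "%s/%s" % (couch_url, user_db),
        "target": "%s/%s" % (couch_url, master_db),
        # continuous replications persist across CouchDB restarts when
        # stored in _replicator; delete the doc to stop the replication
        "continuous": True,
    }

doc = replication_doc("userdb-0042")
print(json.dumps(doc, indent=2))
```

With this approach, idle databases cost nothing: a replication exists only while its source is changing, which directly addresses the constant 100% CPU.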
Just as a reminder, there are security flaws in CouchDB 2.1.0, and you may need to upgrade to 2.1.1; servers running the vulnerable versions have been compromised in the wild.
We have recently installed a Redis cache for our Magento-based sites.
We have 2 websites on our server, both using the same Redis server; the server runs Linux (CentOS).
The issue we are facing is that Redis is consuming quite a lot of RAM, and its usage grows every day.
We have set the following values for Redis in our Magento local.xml file:
<redis_session>
<host>xxxxxxx</host>
<port>xxxx</port>
<password></password>
<timeout>2.5</timeout>
<persistent></persistent>
<db>1</db>
<compression_threshold>2048</compression_threshold>
<compression_lib>gzip</compression_lib>
<log_level>1</log_level>
<max_concurrency>6</max_concurrency>
<break_after_frontend>5</break_after_frontend>
<break_after_adminhtml>30</break_after_adminhtml>
<first_lifetime>86400</first_lifetime>
<bot_first_lifetime>60</bot_first_lifetime>
<bot_lifetime>7200</bot_lifetime>
<disable_locking>0</disable_locking>
<min_lifetime>60</min_lifetime>
<max_lifetime>2592000</max_lifetime>
<automatic_cleaning_factor>1</automatic_cleaning_factor>
</redis_session>
It seems we do not have an expiry set for it, and there is no memory usage limit either.
I know there are a few instructions on the internet for setting an expiry, but there is nothing straightforward for use with Magento.
All help is appreciated.
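For what it's worth, a memory cap and eviction policy are normally set on the Redis side, in redis.conf (or at runtime via CONFIG SET), independently of Magento. A hedged sketch, with values that are only examples to tune for your server:

```
# redis.conf -- example values only, tune for your server
maxmemory 512mb
# Evict only keys that carry a TTL; Magento's session module sets TTLs
# on session keys (per min_lifetime/max_lifetime above), so persistent
# keys are left alone.
maxmemory-policy volatile-lru
```

Without a maxmemory setting Redis will keep growing until the OS runs out of RAM, which matches the symptom described.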
I have to say I'm not an administrator of any sort and have never needed to distribute load on a server before, but I'm now in a situation where I can see that I might have a problem.
This is the scenario and my problem:
I have IIS running on a server with MS SQL Server. A client can send a request that retrieves a data package (one request) from the MS SQL database, and that data is then sent back to the client.
This package of data can vary in length, but is generally <10 MB.
This all works fine, but I'm now facing a what-if: if I have 10,000 clients pounding on the server simultaneously, I can see my bandwidth probably getting smashed, and I imagine both IIS and MS SQL will die of exhaustion.
So my question is: I guess the bandwidth issue is only about hosting? But how can I distribute the load so IIS and MS SQL can perform without being exhausted?
I would really appreciate an explanation of how this can be achieved. It's probably standard knowledge, but for me it's a bit of a mystery; I know it can be done when I look at Dropbox and the like, it's just a big question how I can do it.
Thanks a lot.
You will need to consider some form of load balancing. Since you are using IIS, I'm assuming you are hosting on Windows Server, which provides a software-based network load balancer. See Network Load Balancing Overview.
You need to identify the performance bottlenecks and then plan to reduce them. A sledgehammer approach here might not be the best idea.
Set up performance counters and record a day or two's worth of data. See this link on how to do SQL Server performance troubleshooting.
Bandwidth might be just one of the problems. By setting up performance counters and analysing what is actually happening, you will be able to plan a better solution based on the right data.
I developed an application for a client that uses Play framework 1.x and runs on GAE. The app works great, but sometimes it is painfully slow: it takes around 30 seconds to load a simple page, yet sometimes it runs fast, with no code change whatsoever.
Is there any way to identify why it's running slowly? I tried to contact support, but I couldn't find any telephone number or email address, and there is no response on the official Google group.
How would you approach this problem? Currently my customer is very angry because of the slow loading times, but switching to another provider is a last resort at the moment.
Use GAE Appstats to profile your remote procedure calls. All of the RPCs are slow (Google Cloud Storage, Google Cloud SQL, ...), so if you can reduce the number of RPCs or use some caching data structures, your application will be much faster. Appstats also shows you which parts are slow and whether they need attention :) .
For example, I created a Google Cloud Storage cache for my application and cut execution time from 2 minutes to under 30 seconds. The RPCs are a bottleneck on GAE.
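The caching idea can be sketched in a few lines. This is a hypothetical in-process TTL cache (on App Engine you would typically use the built-in memcache service instead), but the point is the same: answer repeated reads locally rather than issuing an RPC every time:

```python
import time

class TTLCache:
    """Tiny in-process cache with per-entry expiry (illustrative sketch)."""

    def __init__(self, ttl_seconds=60):
        self._ttl = ttl_seconds
        self._data = {}

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            del self._data[key]      # stale entry: drop it and report a miss
            return None
        return value

    def put(self, key, value):
        self._data[key] = (value, time.monotonic() + self._ttl)

cache = TTLCache(ttl_seconds=30)
rpc_calls = 0

def fetch_profile(user_id):
    global rpc_calls
    cached = cache.get(user_id)
    if cached is not None:
        return cached                # cache hit: no RPC issued
    rpc_calls += 1                   # stands in for a slow RPC here
    profile = {"id": user_id}
    cache.put(user_id, profile)
    return profile

fetch_profile("u1")
fetch_profile("u1")                  # second call is served from the cache
print(rpc_calls)  # → 1
```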
Google does not usually provide contact support for many of its services. The slowness described here is probably caused by a cold start: Google App Engine front-end instances go to sleep after about 15 minutes of inactivity. You could write a cron job that pings your instances every 14 minutes to keep them warm.
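On App Engine that keep-alive ping can be declared in cron.yaml; something like the following (the /ping handler path is an assumption, so you would add a trivial handler that returns 200 at that URL):

```
cron:
- description: keep-alive ping to avoid cold starts
  url: /ping
  schedule: every 14 minutes
```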
Combining some answers and adding a few things to check:
Debug using Appstats. Look for "staircase" situations and RPC calls. Maybe something in your app is triggering RPC calls at certain points that don't happen in your logic all the time.
Tweak your instance settings. Add some permanent/resident instances and see if that makes a difference. If you are spinning up new instances, things will be slow, probably for around the time frame (30 seconds or more) you describe, and it will seem random. It's not just how many instances you have, but what combination of the sliders you are using (you can actually hurt yourself with too few or too many).
Look at your app itself. Are you doing lots of memory allocations in the JVM? Allocating and freeing memory is inherently a slow operation and can cause freezes. Are you sure your freezing is not a JVM issue? Try replicating the problem locally, tweak the JVM -Xmx and -Xms settings, and see if you find similar behavior. Also profile your application locally for memory and performance issues. You can cut down on allocations using pooling, DI containers, etc.
Are you running any sort of cron jobs/processing on your front-end servers? Try to move as much as you can to background tasks such as sending emails. The intervals may seem random, but it can be a result of things happening depending on your job settings. 9 am every day may not mean what you think depending on the cron/task options. A corollary - move things to back-end servers and pull queues.
It's tough to give you a good answer without more information. The best someone here can do is give you a starting point, which pretty much every answer here already has.
By making at least one instance resident, you get a great improvement on first use. It takes about 15 seconds to load the application into a fresh instance, which is why you experience long request times when nobody has used the application for a while.
I'm having some issues with a web server. It is not used much yet (mainly by me for now), but it will soon be a live server hosting a couple hundred or thousand WordPress sites.
The issue I'm currently having is that when the web server isn't used for a while (a few minutes), it seems to 'fall asleep'. The first request after that takes an awfully long time to process, and then it runs smoothly for a bit.
The server (a VPS) is dedicated to being a web server, so Apache (and MySQL) should be top priority.
Does anybody know what I can do to improve this?
Thanks!
The issue eventually solved itself when we started getting more visitors to the server, which kept it alive.