I am running a website built on Django, Postgres, and Apache.
Recently, due to a sudden surge in traffic, the site went down. On checking the server logs I found that the maximum connection limit had been exceeded. Looking into it further, I learned that the max_connections parameter in postgresql.conf controls how many simultaneous connections can be made to the database at any point in time.
The current value in my postgresql.conf is 100.
The event that brought the site down is not a common occurrence, but I want to be prepared for the next time it happens.
So I am seeking advice on how to monitor the number of active connections on a regular day, how much I should increase max_connections, and which other parameters in postgresql.conf (such as shared_buffers) need to be raised in step with it.
Please take a look at the relevant wiki article: http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
In general it's best not to bump max_connections up too much. Use a connection pooler like PgBouncer, or a pool inside your application server, instead.
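If you want a quick way to watch usage, here is a minimal sketch, assuming psycopg2 and a connection string adjusted to your setup, that compares the current number of backends against the configured cap:

```python
# Compare active connections against max_connections; run this from cron
# or a monitoring script to see how close a normal day gets to the limit.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")  # assumed connection string
with conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM pg_stat_activity")
    active = cur.fetchone()[0]
    cur.execute("SHOW max_connections")
    limit = int(cur.fetchone()[0])
print(f"{active} of {limit} connections in use")
conn.close()
```

On the Django side, setting CONN_MAX_AGE in the DATABASES setting enables persistent connections, and pointing that same setting at a PgBouncer port gives you real pooling, which usually helps more than raising max_connections, since every Postgres backend costs memory.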
I have hosted my PrestaShop site with Fastdomain for about four years. The site was stable and working fine until three days ago, when my email inbox filled up with spam and more than 300K requests hit the site, taking it down.
I activated the basic protection from SiteLock, provided through the Fastdomain cPanel. It worked fine for two days, but the site is down again because of another attack.
Fastdomain support tried to fix it, but said there was no immediate remedy. According to them, the problem is caused by abuse of the send-to-a-friend module, even though this is an original PrestaShop module, and they said the website would "recover" in a few hours.
Any comments or thoughts? How should I respond to such an attack?
My website is elektrojo.com and I am using the up-to-date version.
This appears to be a common problem. Not only is it taking your site down; it may also be being used to spam others, which carries the risk of getting your domain blacklisted.
That thread links to an updated version of the module with CAPTCHA support, along with a similar modification for product reviews. It appears to be for PrestaShop 1.5 and 1.6.
If you have some kind of "backend" from which you can update modules, you should do that as well.
Another suggestion is to use fail2ban to detect repeated attempts to access this feature and block them; a rough sketch of the underlying idea follows. You may not have the access needed to set that up yourself, but if not, your hosts should be able to.
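Here is a minimal Python sketch of what fail2ban would automate, assuming an Apache access log in combined format; the log path, the URL fragment, and the threshold are all assumptions to adjust for your setup:

```python
# Count requests per IP that hit the send-to-a-friend endpoint and flag
# heavy hitters -- the same idea a fail2ban filter + jail would automate.
from collections import Counter

LOG_PATH = "/var/log/apache2/access.log"   # assumed log location
TARGET = "sendtoafriend"                   # assumed URL fragment for the module
THRESHOLD = 50                             # assumed requests-before-flagging

hits = Counter()
with open(LOG_PATH) as log:
    for line in log:
        if TARGET in line:
            ip = line.split()[0]           # client IP is the first field
            hits[ip] += 1

for ip, count in sorted(hits.items(), key=lambda kv: -kv[1]):
    if count >= THRESHOLD:
        print(f"{ip} made {count} requests -- candidate for blocking")
```

In a real deployment, fail2ban would watch the log continuously and add firewall rules for offending IPs; this sketch only shows how the offenders would be identified.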
Failing that, you should remove the send-to-a-friend code (and ensure the files are actually gone from their original location) until you have found a way to harden it, since it is being abused to take your site down.
I have a batch file, FolderWatcher.bat.
I created a task for it with Task Scheduler in Windows 7. I need it to start at a specific time (once) and then keep running until the user chooses to stop it. (Start once and run for days on end.)
UAC is off and I am an administrator.
Any idea how this can be done?
http://technet.microsoft.com/en-us/library/cc722178.aspx
A task's settings are displayed on the Settings tab of the Task Properties or Create Task dialog box. The following list contains the descriptions of task settings.
Stop the task if it runs longer than: time period
This setting allows you to limit the amount of time a task is allowed to run. Use this setting to limit tasks that might take a long period of time to execute, causing an inconvenience to the user.
Just uncheck this setting and you should be good.
I have developed an app which talks to an SQL database on a server. I have just arranged for a hosting company to host the database, and while setting it up they asked how many public IPs I need.
The application will be used by 10-20 companies, each with approximately 10-20 iPads or Android tablets. There will also be a website they can log onto to view the data on the server.
How many public IPs would I require, and what factors do I need to consider when deciding?
I should add, if you haven't already worked it out, that I know nothing about servers.
The number of IP addresses doesn't matter much if you don't need them. When would you need more? To mention a few cases:
- You want an extra layer of separation between companies (each gets its own IP).
- More IP addresses mean more distinct endpoints (useful for failover).
- More IP addresses allow more simultaneous connections (so more possible users).
- You want some extra maintenance ;)
I need a server with a lot of RAM, around ~1 TB, for a GIS database that would not be written to hard disk, because the data becomes irrelevant after a few seconds. So I do not need much disk space; I want to hold all the data in memory. Writes would be 1% INSERTs and 99% UPDATEs, with a write/read ratio of about 20:1. I have to choose between renting a dedicated server and renting from Amazon, and I'm wondering how to calculate the price of Amazon services with ~100 TB/month of traffic.
I think you might reconsider your configuration: use SSD disks, a really fast CPU, and a lot of RAM, but 1 TB of RAM is way too much; most systems will not handle it. And then there is the price: it is really expensive. That kind of configuration runs $15,000+ per month at OVH, for example. So if you have that kind of problem, the best option is to call Amazon directly, ask them for the best configuration, and negotiate the price.
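To give a feel for the arithmetic, here is a back-of-the-envelope sketch; the per-GB rate is purely an assumption (egress pricing is tiered and changes over time), so check the current AWS pricing page before relying on it:

```python
# Rough estimate of monthly data-transfer cost at an assumed flat rate.
TRAFFIC_TB_PER_MONTH = 100
ASSUMED_RATE_PER_GB = 0.09   # assumed blended egress rate in USD, not a quote

traffic_gb = TRAFFIC_TB_PER_MONTH * 1024
monthly_cost = traffic_gb * ASSUMED_RATE_PER_GB
print(f"~${monthly_cost:,.0f}/month for egress alone")  # ~$9,216 at this rate
```

Compute and memory would come on top of that, which is why getting a quote directly from Amazon is sensible at this scale.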
I want to know which is better to follow: autovacuum, or vacuuming manually. Right now we vacuum manually via cron jobs, but sometimes it gets stuck vacuuming particular tables, so we are thinking about autovacuum. Does it give good performance on a production server? Please suggest. Lots of thanks in advance.
Performance tuning is a difficult subject. Autovacuum works very well in most cases. In certain cases, however, "manual" vacuuming via cron might work better, because, for instance, you know the database has nothing to do at night, while during the day a vacuum might be too disruptive.
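If you want to check whether autovacuum is keeping up, here is a minimal sketch, assuming psycopg2 and a connection string adjusted to your setup, that lists the tables with the most dead tuples and when they were last vacuumed:

```python
# List the tables accumulating the most dead tuples, with timestamps of
# the last manual vacuum and the last autovacuum run on each.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")  # assumed connection string
with conn.cursor() as cur:
    cur.execute("""
        SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
        FROM pg_stat_user_tables
        ORDER BY n_dead_tup DESC
        LIMIT 10
    """)
    for relname, dead, last_vacuum, last_autovacuum in cur.fetchall():
        print(relname, dead, last_vacuum, last_autovacuum)
conn.close()
```

Tables that show large dead-tuple counts and old last_autovacuum timestamps are the ones where a scheduled manual VACUUM may still be worthwhile.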
A good book on Postgres performance and tuning is PostgreSQL 9.0 High Performance.