pgAdmin - Too many connections for role "my username"

I am using pgAdmin to connect to my PostgreSQL database. After performing a few actions I always get the error: too many connections for role "my username".
I have been reading other questions here and found a solution that solves the problem, but only temporarily and only manually: closing and deleting the sessions in the pgAdmin Dashboard, as in the screenshot below.
I would like to know either a way of setting my connection limit so I won't get that error or, in case I am doing something wrong, a way of fixing whatever I am missing.
Thank you very much in advance.

To increase the number of allowed connections permanently, you have to configure the max_connections server parameter.
You can change this in the postgresql.conf file if Postgres is hosted locally.
Reference for max_connections
Reference for setting parameter
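For example, assuming you have superuser access and control over the server, a minimal sketch (the value 200 is arbitrary; max_connections only takes effect after a restart):
-- In postgresql.conf, edit the line:
--     max_connections = 200
-- or set it from a superuser session:
ALTER SYSTEM SET max_connections = 200;
-- then restart the server and verify:
SHOW max_connections;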

When you are logged in, run:
ALTER ROLE my_username CONNECTION LIMIT -1;
This removes the connection limit for that role entirely.
See the manual.
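To check a role's current limit and how many connections it currently holds, something like this works (a sketch; my_username is a placeholder, and -1 means unlimited):
-- Per-role connection limit:
SELECT rolname, rolconnlimit FROM pg_roles WHERE rolname = 'my_username';
-- Connections currently held by that role:
SELECT count(*) FROM pg_stat_activity WHERE usename = 'my_username';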

Related

MediaWiki installation issue - port problems

I am trying to install MediaWiki version 1.31 locally and I have run into some issues that I can't get past. Mainly, when I input the database connection information (I am trying to connect to a PostgreSQL database), it returns this error.
The thing is, the port I am trying to connect to is 5433, not 5432. Also, the names "template1" and "postgres" are not included in my input through the dialogue screen - I don't know where they came from. "test1" is the name of the database I am trying to connect to.
Any help or advice on how to get through this error would be greatly appreciated. Thank you.
That the port you specify is not used while setting up the database schema in the first place is a long-standing known bug. One workaround is to run your database on the default port until you have the wiki set up, then change it back to the port you want.
In order to create a new database, you need to connect to an existing database in the same cluster. 'template1' and 'postgres' are pre-existing databases (usually created at the time the cluster was created) that are commonly connected to in order to create a new database. These names are "well-known"; you don't need to specify them.
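For example, with psql (a sketch; 5433 is the custom port from the question and test1 the database being created):
psql -p 5433 -U postgres template1
-- then, once connected:
CREATE DATABASE test1;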

SQL Server login timeout expired when running sqlcmd

Today I got a strange error on SQL Server. Something like this:
I don't know what I should do. This happens when I try to run a query of more than 200 MB, but I guess the size doesn't matter. Can anyone guide me on how to fix this problem?
Also, for some reason, I can't export more than 100 MB of SQL Server data, so can anyone help me?
Your error messages say a couple of different things:
Login timeout expired
... Server is not found or accessible ...
Could not open connection to SQL Server
If you look over the command-line parameters for sqlcmd here, you'll see that you're passing in -s, which is the column separator, rather than -S (note the capitalization), which specifies the server.
Next, you're probably going to need an authentication strategy, whether that is -E for integrated security, or -U and -P for user id and password respectively.
Try those and see what you're getting.
You could, of course, always try using SSMS rather than sqlcmd.
Edit: Looks like integrated security is the default, so you don't need to specify -E unless you just want to.
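A minimal sketch of a corrected invocation (myserver, mydb, and script.sql are placeholder names):
REM Windows authentication (the default):
sqlcmd -S myserver -d mydb -i script.sql
REM or with SQL authentication:
sqlcmd -S myserver -d mydb -U myuser -P mypassword -i script.sql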

SQL Server service not accepting connections

I am having a strange problem that happens randomly on a server. Some mornings, our client will call in and say their website is not working with the following error message:
The underlying provider failed on Open.
The temporary fix I found for this was to manually go in and restart the SQL Server service. Once this is done it works just fine until the next random time it happens. So my question is, does anyone know what exactly is happening? If so, how can I prevent this in the future?
I have tried searching everywhere for this; the only explanation I found was that updates were being applied to the service and it wasn't restarted properly, but I couldn't find any fixes. Thanks in advance.
This error:
FCB::Open failed: could not open file (LDF file) for the file number 2. OS error: 32 (The process cannot access the file because it's being used by another process).
is quite troubling and should not be occurring, unless you just restarted your SQL Service. It would easily cause the problems that you are seeing. I would take this to GoDaddy.
If you are getting this error through your Nagios check, make sure you disable the "auto close" option on the database.
You can find the option under Database Properties -> Options; there, set Auto Close to False.
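The same setting can also be changed in T-SQL (a sketch; MyDatabase is a placeholder):
-- Turn off AUTO_CLOSE so the database stays open between connections:
ALTER DATABASE MyDatabase SET AUTO_CLOSE OFF;
-- Verify the setting:
SELECT name, is_auto_close_on FROM sys.databases WHERE name = 'MyDatabase';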

Server crashed from traffic spike, now getting database connection error

I posted a new blog on my site and promoted it on my Facebook, and the traffic spike was far bigger than anticipated. The server went down from the volume of traffic, and after it was rebooted I am now getting a database connection error.
I contacted my server host and they told me this:
"I was able to get the relevant database details from the wp-config.php file in the home directory for your site and, using those creds I am able to connect to the relevant database without a problem.
To be sure that I was able to connect AND make a query to the database I have also created a simple test script that can be viewed at http://yoursite.com/mysqltest.php
This confirms that the server is responding correctly and that the database itself is able to accept connections and queries.
This leaves us with the likelihood that the issue lies with the scripting/configuration of the wordpress installation which is not something I am going to be able to assist you with.
I suspect that the problem lies with the wp-config.php file but cannot be certain."
I can't see how the wp-config.php would have changed; I haven't touched it in over a month and it has been working fine otherwise. The website was also working fine after I posted that blog - it was only after the server was rebooted that it stopped. All the other sites on the server remain in perfect working condition. I don't see how a traffic spike could have done this. I'm lost as to what to do next. Please help! :(
Try this database connection test script https://gist.github.com/162913

Why do I get this error "[DBNETLIB][ConnectionRead (recv()).]General network error" with ASP pages

Occasionally, on a ASP (classic) site users will get this error:
[DBNETLIB][ConnectionRead (recv()).]General network error.
It seems to be random and not connected to any particular page. The SQL server is separate from the web server, and my guess is that every once in a while the "link" goes down between the two. A router/switch issue... or has someone else run into this problem before?
Using the same setup as yours (ie separate web and database server), I've seen it from time to time and it has always been a connection problem between the servers - typically when the database server is being rebooted but sometimes when there's a comms problem somewhere in the system. I've not seen it triggered by any problems with the ASP code itself, which is why you're seeing it apparently at random and not connected to a particular page.
I've seen this error many times. It can be caused by many things, including network errors too :).
But one of the reasons could be a built-in feature of MS SQL Server.
The feature detects DoS attacks - in this case, too many requests from the web server :).
But I have no idea how we fixed it :(.
In SQL Server Configuration Manager:
Disable TCP/IP, enable Shared Memory & Named Pipes.
Good luck!
Not a solution exactly, and not the same environment. However, I get this error in a VBA/Excel program, and the problem is I have a hanging transaction which has not been committed in SQL Server Management Studio (SSMS). After closing SSMS, everything works. So the lesson is that a hanging transaction can block sprocs from proceeding (obvious fact, I know!). Hope this helps someone here.
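To find such a hanging transaction without closing SSMS, something like this can help (a sketch; the open_transaction_count column requires SQL Server 2012 or later):
-- Sessions that currently hold an open transaction:
SELECT session_id, host_name, program_name, open_transaction_count
FROM sys.dm_exec_sessions
WHERE open_transaction_count > 0;
-- Or the classic check for the oldest active transaction:
DBCC OPENTRAN;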
Open a command prompt (Run as administrator) and type the following command on the client side:
netsh advfirewall set allprofiles state off
FWIW, I had this error from Excel, which would hang on an EXEC that worked fine within SSMS. I've seen queries with problems before that were also OK within SSMS, due to 'parameter sniffing' and unsuitable cached query plans. Making a minor edit to the SP cured the problem, and it worked OK afterwards in its original form. I'd be interested to hear if anyone has encountered this scenario too. Try the good old OPTION (OPTIMIZE FOR UNKNOWN) :)
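A minimal sketch of that hint in use (dbo.MyProc, dbo.Orders, and @CustomerId are hypothetical names):
ALTER PROCEDURE dbo.MyProc @CustomerId int
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    -- Build the plan for an average parameter value instead of the sniffed one:
    OPTION (OPTIMIZE FOR UNKNOWN);
END;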
