PostgreSQL "too many clients" error on Linux server

When I try to connect to a PostgreSQL 9.0 server on Linux, I get "too many clients connected already". I tried increasing max_connections from 100 to 200 and restarting the server, but it doesn't pick up the new max_connections value. What should I change on the Linux server?
Eclipse LogCat
Caused by: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:291)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:108)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:125)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22)

This is a bit of a FAQ and is discussed in Number of Database Connections on the PostgreSQL wiki.

The only way to increase max_connections and persist the value is to modify the postgresql.conf file, so first of all check whether the value actually changed (after restarting the server):
SHOW max_connections;
If the value DID NOT change, there's something wrong with your procedure (file permissions, maybe?). If the value DID change, you might try an even higher value (weird, but it might depend on your application's connection requirements, OR a connection leak).
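For reference, a minimal sketch of the change (file locations and values here are illustrative). Note that older releases such as 9.0 still use System V shared memory, so a much larger max_connections can exceed the kernel's shared memory limits, in which case the server refuses to start and logs a shared memory error:

# postgresql.conf - requires a full restart; a reload is not enough
max_connections = 200

# /etc/sysctl.conf - only needed if the restart fails with a shared
# memory error; the value is illustrative, apply it with sysctl -p
kernel.shmmax = 1073741824

Then verify from psql after the restart:

SHOW max_connections;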

Related

What is the cause of a sporadic ORA-12599: TNS:cryptographic checksum mismatch?

In our production environment there are sometimes peaks of ORA-12599 errors; the connection works otherwise.
The description is not quite clear to me.
Cause: The data received is not the same as the data sent.
Action:
Attempt the transaction again. If error persists, check (and correct)
the integrity of your physical connection.
https://www.oraexcel.com/database-oracle-11gR2-ORA-12599
Does that mean there is a problem with my network cable, or any other hardware?
Because our DBA said the error can be caused by a configuration issue on the client side.
Edit:
It started without any changes on the client or server side. And it comes in spikes: 80 errors in an hour, then no errors for 4 hours, while the load is more or less constant.
It can mean a mismatch in how the client and the server wish to communicate (in encrypted fashion). For example, trying to connect to an 11g database with 10g JDBC drivers will get this error, because the encryption implementation changed between these versions.
Check your local (i.e., client) sqlnet.ora file - you may need to opt for a different method or change the order of them.
The error can typically be ignored (it's informational), but using the latest drivers will normally resolve the issue.
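As an illustration only (the right algorithm lists depend on your client and server versions), the relevant client-side sqlnet.ora parameters look like this:

# client-side sqlnet.ora - illustrative values, not a definitive fix
SQLNET.ENCRYPTION_CLIENT = ACCEPTED
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256, AES128)
SQLNET.CRYPTO_CHECKSUM_CLIENT = ACCEPTED
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA1)

Changing ACCEPTED to REQUIRED (or reordering the type lists) changes how the two sides negotiate.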
We had sporadic ORA-12599 errors and also ORA-24300 ("bad value for mode") in a Python web app using Flask and uwsgi. It turned out to be caused by multiple worker processes writing and reading to/from the same TCP connection to the DB, giving chaotic communications on the socket.
In our setup this was caused by creating the TCP connections to the DB before uwsgi forked the workers, with the file descriptors being inherited. We fixed this by using uwsgi's lazy-apps mode.
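For context: lazy-apps makes each uWSGI worker load the application after the fork, so no database socket is inherited from the master process. A minimal uwsgi.ini sketch (the module name is hypothetical):

[uwsgi]
module = myapp:app
processes = 4
lazy-apps = true

The equivalent command-line flag is --lazy-apps.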
See also this question.

How to resolve DB connection invalidated warning in Airflow Scheduler?

I am upgrading our Airflow instance from 1.9 to 1.10.3 and whenever the scheduler runs now I get a warning that the database connection has been invalidated and it's trying to reconnect. A bunch of these errors show up in a row. The console also indicates that tasks are being scheduled but if I check the database nothing is ever being written.
The following warning shows up where it didn't before
[2019-05-21 17:29:26,017] {sqlalchemy.py:81} WARNING - DB connection invalidated. Reconnecting...
Eventually, I'll also get this error
FATAL: remaining connection slots are reserved for non-replication superuser connections
I've tried to increase the SQL Alchemy pool size setting in airflow.cfg but that had no effect
# The SqlAlchemy pool size is the maximum number of database connections in the pool.
sql_alchemy_pool_size = 10
I'm using CeleryExecutor and I'm thinking that maybe the number of workers is overloading the database connections.
I run three commands, airflow webserver, airflow scheduler, and airflow worker, so there should only be one worker and I don't see why that would overload the database.
How do I resolve the database connection errors? Is there a setting to increase the number of database connections, if so where is it? Do I need to handle the workers differently?
Update:
Even with no workers running and the webserver and scheduler started fresh, the DB connection warning starts to appear as soon as the scheduler fills up the Airflow pools.
Update 2:
I found the following issue in the Airflow Jira: https://issues.apache.org/jira/browse/AIRFLOW-4567
There is some activity with others saying they see the same issue. It is unclear whether this directly causes the crashes that some people are seeing or whether this is just an annoying cosmetic log. As of yet there is no resolution to this problem.
This has been resolved in the latest version of Airflow, 1.10.4
I believe it was fixed by AIRFLOW-4332, updating SQLAlchemy to a newer version.
Pull request
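Until you can upgrade, the pool behaviour can at least be tuned from airflow.cfg; in the 1.10 series these settings live under [core] (values here are illustrative, not recommendations):

[core]
# maximum number of pooled connections per Airflow process
sql_alchemy_pool_size = 10
# recycle pooled connections after this many seconds so stale ones get replaced
sql_alchemy_pool_recycle = 1800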

Google Data Studio MySql data source connection does not exist Error

Platform: Google Data Studio
Data Source: MySQL
The connection was working before, meaning no issues with credentials.
All of a sudden, I am getting the "connection does not exist" error in the title.
All IPs have been whitelisted from the google data studio list of ips.
The only thing that comes to mind is a limitation on how much data GDS can process.
The data source table has around 200K+ rows.
I'm not sure what the limitation is for GDS with MySQL; there's no indication anywhere.
Anyone out there who can help solve this, or maybe provide some info, would be appreciated.
Thanks
If you use a firewall, be sure to double-check the Google IP addresses. They may have added new IPs (in my case, the last one was missing).
Check them here!
After doing so, I had to change the host name of the database connection to a URL alias (www.yourserver.com, a URL pointing at your server), and then change it back to the IP to make it work.
Sounds like the connector cannot establish a new connection.
Cloud SQL Connector:
At the time of writing this, the connector seems unable to establish a new connection once the existing one has timed out, and modifying the JDBC URL to include query parameters gives you an error when authenticating.
This is probably due to the connector appending its own parameters.
(There seems to be a bug here when a connection no longer exists.)
MySQL Connector (with IP Address):
This connector allows you to add query parameters to the JDBC URL. Enable SSL and append useSSL=true to the URL,
e.g. jdbc:mysql://<ip>/<database>?useSSL=true
This worked as expected and establishes new connections when required.
Example Source Setup
Suffering from this issue too, my experience is that using the MySQL connector instead of the Cloud SQL Connector provides better stability in combination with setting wait_timeout to a value above 12 hours.
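If you administer the MySQL server yourself, the timeout can be inspected and raised from SQL (SET GLOBAL requires the SUPER privilege; the value is in seconds, so 43200 is 12 hours):

SHOW GLOBAL VARIABLES LIKE 'wait_timeout';
SET GLOBAL wait_timeout = 43200;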
This issue has been reported on the official Google Data Studio bug tracker. Please vote them up if you are also suffering from this!
🐛 130205306 MySQL connection does not exist Apr 9, 2019 04:36PM
🐛 118470083 Data source password not stored for MySQL sources. Oct 26, 2018 01:24PM

Mirroring in SQL Server 2008

I'm trying to set up mirroring between two SQL Server 2008 databases on different servers on my internal network, as a test run before doing the same thing with two live servers in different locations.
When I actually try to switch on mirroring on the target DB (with
ALTER DATABASE testdb SET PARTNER = N'TCP://myNetworkAddress:5022') I get an error telling me that the server network address cannot be reached or does not exist. A little research suggests this is a fairly unhelpful message that pops up due to a number of possible causes, some of which are not directly related to the server existing or otherwise.
So far I've checked and tried the following to solve this problem:
On the target server, I've verified in SQL Server Configuration Manager that "Protocols for SQLEXPRESS" (my local installation is labelled SQLEXPRESS for some reason, even though querying SERVERPROPERTY('Edition') reveals that it's 64-bit Enterprise) and the client protocols for SQL Native Client 10 all have TCP/IP enabled.
I've used a utility program called CurrPorts to verify that the TCP port specified by the mirroring setup (5022) is open and listening on my machine. netstat confirms that both machines are listening on this port.
I've run SELECT type_desc, port FROM sys.tcp_endpoints; and
SELECT state_desc, role FROM sys.database_mirroring_endpoints to ensure that everything is set up as it should be. The only thing that confused me was that "role" returns 1 - I'm not entirely sure what that means.
I've tried to prepare the DB correctly. I've taken backups of the database and the log file from the master DB and restored them on the target database WITH NORECOVERY. I've tried turning mirroring on both while leaving them in the restoring (NORECOVERY) state and after running an empty RESTORE ... neither seems to make much difference. Just as a test I also tried to mirror an inactive, nearly empty database that I created, but that didn't work either.
I've verified that neither server is behind a firewall (they're both on the same network, although on different machines)
I've no idea where to turn next. I've seen these two troubleshooting help pages:
http://msdn.microsoft.com/en-us/library/ms189127.aspx
http://msdn.microsoft.com/en-us/library/aa337361.aspx
And as far as I can tell I've run through all the points to no avail.
One other thing I'm unsure of is the service accounts box in the wizard. For both databases I've been putting in our high-level access account name which should have full admin permissions on the database - I assumed this was the right thing to do.
I'm not sure where to turn next to try and troubleshoot this problem. Suggestions gratefully received.
Cheers,
Matt
I think that SQL Express can only act as a witness server with this SQL feature; you might get better mileage on ServerFault, though.
Mike.
Your network settings might be OK. You get quite uninformative error messages from MS SQL - the problem might be an authorization issue and the server will still say "network address can not be reached".
By the way, how is the authentication performed? The MSSQL service (on server1) must itself run as a valid DB user (on server2, and vice versa) in order to make the mirroring work.
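(Incidentally, role = 1 in sys.database_mirroring_endpoints means PARTNER, which is exactly what both partners should report.) To illustrate the authorization point: on each partner, the other instance's service account needs a login and CONNECT permission on the mirroring endpoint. A sketch with placeholder names:

-- run on each partner, substituting the OTHER server's service account
-- and your actual endpoint name from sys.database_mirroring_endpoints
CREATE LOGIN [MYDOMAIN\sqlservice] FROM WINDOWS;
GRANT CONNECT ON ENDPOINT::Mirroring TO [MYDOMAIN\sqlservice];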

Do connection string DNS lookups get cached?

Suppose the following:
I have a database set up on database.mywebsite.com, which resolves to IP 111.111.1.1, running from a local DNS server on our network.
I have countless ASP, ASP.NET and WinForms applications that use a connection string utilising database.mywebsite.com as the server name, all running from the internal network.
Then the box running the database dies, and I switch over to a new box with an IP of 222.222.2.2.
So, I update the DNS for database.mywebsite.com to point to 222.222.2.2.
Will all the applications and computers running them have cached the old resolved IP address?
I'm assuming they will have.
Any suggestions along the lines of "don't have your IP change each time you switch box" are not too welcome as I cannot control this aspect of the situation, unfortunately. We are currently using the machine name of the box, which changes every time it dies and all apps etc. have to be updated with the new machine name. It hurts.
Even if the DNS is not cached local to the machine, it will likely be cached somewhere along the DNS chain between the machine and the name servers, at least for a short while. My understanding is this situation would usually be handled with IP takeover where you just make the new machine 111.111.1.1.
Probably a question for serverfault.
You're looking for the DNS TTL (Time To Live), I guess. In my opinion applications should cache the IP for at most the value of the TTL. I'm afraid, however, that some applications/technologies might actually cache it longer (again, in my opinion, completely wrong).
Each machine will cache the IP address.
The length of time it is cached is the TTL (Time To Live). This is a setting on your DNS server; if you set it very low, say 5 minutes, then you should be up and running fairly quickly. A bit of a hack, but it should work.
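For example, in a BIND-style zone file the TTL is the second field on the record (300 seconds here, purely illustrative):

database.mywebsite.com.  300  IN  A  222.222.2.2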
Yes, the other comments are correct in that what controls this is the DNS TTL set for the hostname database.mywebsite.com.
You'll have to decide the maximum amount of time you're willing to wait if you have a failure on your primary address (111.111.1.1) after you make the switch to the secondary address. Lower settings will give you a quicker recovery time, but will also increase the load and bandwidth on your DNS server, because clients will have to re-query it more often to refresh their cache.
You can run nslookup with the -d option from your command prompt to see the default and remaining TTL times for the DNS server you are querying.
%> nslookup -d google.com
You should assume that they are cached, for two reasons not clearly mentioned before:
1. Many "modern" versions of OS families do DNS caching.
2. Many applications do DNS caching, or have poor error/failure detection on live connections and/or when opening new connections. This would possibly include your database client.
Also, this is probably not well documented. I did some googling, and found this for MySQL:
http://dev.mysql.com/doc/refman/5.0/en/connector-net-programming-connecting-connection-string.html#connector-net-programming-connecting-errors
It does not clearly explain its behavior in this regard.
I had a similar issue with a web site that disables the application-pool recycling features and runs for weeks on end. Sometimes a clustered SQL Server box would restart and, for some reason, my SqlConnections were not reconnecting. I was getting the error:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
The server was there - and running - in fact, if I just recycled the app pool, the app would work fine - but I don't like recycling app pools!
The connections that were being held in the connection pool were somehow using old connection information, and that could have been old IP addresses. This is what seems so similar to the poster's question: it appears to be cached DNS information, because as soon as some sort of cache is cleared, the app works fine.
This is how I solved it - by forcing all of the connections in the pool to be re-created:
Try
    ' Example: SqlDependency, but this could also be any SqlConnection.Open call
    Dim result As Boolean = SqlClient.SqlDependency.Start(ConnStr)
Catch sqlex As SqlClient.SqlException
    ' Throw away every pooled connection so the next open creates a fresh one
    SqlClient.SqlConnection.ClearAllPools()
End Try
The code sample is just the boiled-down basics - it should be tweaked for your situation!
The DNS gets cached, but for any server that resolves to the wrong IP address, you can update the HOSTS file of the server and the IP should be updated immediately. This could be a solution if you have a limited number of servers accessing your database server.
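A hosts entry is just the new IP followed by the name, e.g. (illustrative):

# C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux
222.222.2.2    database.mywebsite.com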
