I have a problem in WSO2 DSS: the database connection times out after a few hours, and then I have to stop and restart the DSS to get it working again.
The DSS version is 2.5.1
The database server is SQL Server.
Can anyone help me? Thanks.
Have you configured the datasource used in your dataservice descriptor file with the "validationQuery" parameter set to "SELECT 1" and the "testOnBorrow" parameter set to "true"? (The validation query varies depending on the RDBMS type used, but for SQL Server the aforementioned query works.)
To give you a bit of context on the issue: every RDBMS has a default connection timeout; MySQL's, for example, is 8 hours. When connection pooling is used in an application, connections are kept in the pool after they are created, without being physically closed, so that they can be reused. After that timeout period, however, the connections become stale and must be validated before they are used again. This is done by specifying a validation query that is executed whenever a pooled connection is reused, and the "testOnBorrow" parameter comes in handy because, when set, it validates pooled connections as they are borrowed from the connection pool.
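For illustration, the datasource section of the dataservice descriptor would carry those two properties alongside the usual connection settings. The snippet below is only a sketch: the exact property names, driver class and URL shown here are assumptions and may differ between DSS versions, so check them against your own descriptor.

<config id="default">
   <property name="driverClassName">com.microsoft.sqlserver.jdbc.SQLServerDriver</property>
   <property name="url">jdbc:sqlserver://dbhost:1433;databaseName=mydb</property>
   <property name="username">dbuser</property>
   <property name="password">dbpassword</property>
   <!-- validate pooled connections on borrow so stale ones are dropped and recreated -->
   <property name="validationQuery">SELECT 1</property>
   <property name="testOnBorrow">true</property>
</config>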
Cheers,
Prabath
Related
What does it do? How does it work? Why am I supposed to test the database connection before "borrowing it from the pool"?
I was not able to find any related information as to why I should be using it. Just how to use it. And it baffles me.
Can anyone provide some meaningful definition and possibly resources to find out more?
"test-on-borrow" indicates that a connection from the pool has to be validated usually by a simple SQL validation query defined in "validationQuery". These two properties are commonly used in conjunction to make sure that the current connections in the pool are not stale (no longer connected to the DB actively as a result of a DB restart, or timeouts enforced by the DB, or whatever other reason that might cause stale connections). By testing the connections on borrow, the application can automatically reconnect to the DB using new connections (and dropping the invalid ones) without a manual restart of the app and thus preventing DB connection errors in the app.
You can find more information on jdbc connection pool attributes here:
https://tomcat.apache.org/tomcat-8.0-doc/jdbc-pool.html#Common_Attributes
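As an illustration, with the Tomcat JDBC pool the two attributes usually sit together on the pool definition; the resource name, driver, URL and credentials below are placeholders:

<Resource name="jdbc/MyDS" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
          url="jdbc:sqlserver://dbhost:1433;databaseName=mydb"
          username="dbuser" password="dbpassword"
          testOnBorrow="true"
          validationQuery="SELECT 1"
          validationInterval="30000"/>

validationInterval just throttles how often the validation query is actually run, so borrowing stays cheap.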
I am using SQL Server 2012 with a desktop application as the client. The application gets errors after a period of inactivity. I googled this issue and all the solutions point me to the AUTO_CLOSE option on the database, but it is already set to false.
I think something is missing in the connection string (ADO extension).
To be honest, if you have long-running connections, you can hit these errors regardless, due to firewalls/routers closing connections, etc. The correct solution is to instantiate a connection when you need it, use the connection, and release it. With connection pooling, this is not really a performance problem.
If your long-running application is "bursty", it is sometimes convenient to open the connection, do a number of commands, and then, when you go idle, release the connection and wait for the next burst of activity.
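A minimal C# sketch of that pattern (the connection string, table and column names are placeholders); with pooling enabled, Open() normally just borrows an already-established connection from the pool, so opening per unit of work stays cheap:

using System.Data.SqlClient;

void MarkOrderProcessed(int orderId)
{
    var connStr = "Data Source=dbhost;Initial Catalog=Sales;Integrated Security=True;";
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("UPDATE Orders SET Processed = 1 WHERE Id = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", orderId);
        conn.Open();
        cmd.ExecuteNonQuery();
    } // Dispose returns the connection to the pool; the physical socket stays open for reuse
}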
I have 3 servers set up for SQL mirroring and automatic failover using a witness server. This works as expected.
Now my application that connects to the database seems to have a problem when a failover occurs: I need to manually intervene and change the connection string for it to connect again.
The best solution I've found so far involves using the Failover Partner parameter of the connection string; however, it's neither intuitive nor complete: Data Source="Mirror";Failover Partner="Principal" (found here).
From the example in the blog above (scenario #3), when the first failover occurs and the principal (failover partner) is unavailable, the data source is used instead (which is the new principal). If it fails again (and I only tried within a limited period), it then comes up with an error message. This happens because the connection string is cached, so until it is refreshed it will keep producing the error (it seems the connection string refreshes about 5 minutes after it encounters an error). If after a failover I swap the data source and failover partner, I get one more silent failover.
Is there a way to achieve fully automatic failover for applications that use mirroring databases too (without ever seeing the error)?
I can see potential workarounds using custom scripts that would poll the currently active database node name and adjust the connection string accordingly, but that seems like overkill at the moment.
Read the blog post here
http://blogs.msdn.com/b/spike/archive/2010/12/15/running-a-database-mirror-setup-with-the-sqlbrowser-service-off-may-produce-unexpected-results.aspx
It explains what is happening: the failover partner is actually being read from SQL Server, not from your config. Run the query in that post to find out what is actually being used as the failover server. It will probably be a machine name that is not discoverable from where your client is running.
You can clear the connection pool in case a failover has happened. Not very nice, I know ;-)
// ClearAllPools resets (or empties) the connection pool.
// If there are connections in use at the time of the call,
// they are marked appropriately and will be discarded
// (instead of being returned to the pool) when Close is called on them.
System.Data.SqlClient.SqlConnection.ClearAllPools();
We use it when we change an underlying server via SQL Server alias, to enforce a "refresh" of the server name.
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.clearallpools.aspx
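A rough sketch of how this can be wrapped (the retry-once approach and the helper names are our own convention, not anything built into ADO.NET):

using System.Data.SqlClient;

int ExecuteWithFailoverRetry(string connStr, string sql)
{
    try
    {
        return Execute(connStr, sql);
    }
    catch (SqlException)
    {
        // Pooled connections may still point at the old principal after a failover,
        // so drop them and let the next Open() negotiate with the new principal.
        SqlConnection.ClearAllPools();
        return Execute(connStr, sql);
    }
}

int Execute(string connStr, string sql)
{
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand(sql, conn))
    {
        conn.Open();
        return cmd.ExecuteNonQuery();
    }
}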
The solution is to turn connection pooling off: Pooling="false".
Whilst this has minimal impact on small applications, I haven't tested it with applications that receive hundreds of requests per minute (or more), and I'm not sure what the implications are. Anyone care to comment?
Try this connectionString:
connectionString="Data Source=[MSSQLPrincipalServerIP,MSSQLPORT];Failover Partner=[MSSQLMirrorServerIP,MSSQLPORT];Initial Catalog=DatabaseName;Persist Security Info=True;User Id=userName; Password=userPassword.; Connection Timeout=15;"
If you are developing in .NET, you can try ObjAdoDBLib, or PigSQLSrvLib and PigSQLSrvCoreLib, and the code becomes simpler.
Example code:
Create a new object.
For ObjAdoDBLib:
Me.ConnSQLSrv = New ConnSQLSrv(Me.DBSrv, Me.MirrDBSrv, Me.CurrDB, Me.DBUser, Me.DBPwd, Me.ProviderSQLSrv)
For PigSQLSrvLib or PigSQLSrvCoreLib:
Me.ConnSQLSrv = New ConnSQLSrv(Me.DBSrv, Me.MirrDBSrv, Me.CurrDB, Me.DBUser, Me.DBPwd)
Execute this method to automatically connect to the online database after the mirror database fails over.
Me.ConnSQLSrv.OpenOrKeepActive
For more information, see the relevant links.
https://www.nuget.org/packages/ObjAdoDBLib/
https://www.nuget.org/packages/PigSQLSrvLib/
https://www.nuget.org/packages/PigSQLSrvCoreLib/
Today we had a lot more activity than normal between our Ruby on Rails application and our remote legacy SQL Server 2005 database, and we started getting the error below intermittently. What is it? How can I prevent it (besides avoiding the situation, which we're working on)?
Error Message:
ActiveRecord::StatementInvalid: DBI::DatabaseError: 08S01 (20020) [unixODBC][FreeTDS][SQL Server]
Bad token from the server: Datastream processing out of sync: SELECT * FROM [marketing] WHERE ([marketing].[contact_id] = 832085)
You need to use some sort of connection pooling.
Windows itself (including Windows Server X) will only allow a certain number of socket connections to be created in a given time frame, even if you close them; connection attempts beyond that will fail.
A connection pool keeps the same sockets open, avoiding the problem. Also, establishing new connections is slow.
This Microsoft article says:
Often caused by an abruptly terminated network connection, which causes a damaged Tabular Data Stream token to be read by the client.
Was the server network bound? I have no experience with SQL Server, but does it have a limit on the number of connections you can make?
Add the following to your script as the first statement you run:
SET NO_BROWSETABLE OFF
In an ASP.NET/SQL Server project, we create connections using ADO.NET (SQL authentication), and we see a behaviour where there are a lot of active connections in "sleeping"/"awaiting command" status.
The code does the following: get a connection from a common function, update the DB, commit the transaction, close & dispose the transaction, close the connection.
1) In SQL Server 2008, when our program runs for some time (it updates the DB every few seconds), the number of active connections increases dramatically and SQL Server starts refusing new connections (as the default limit is 100).
2) In SQL Server 2005, we see that the connections are reused and everything works fine. Our connection count does not go above 15-20.
We found an issue reported to Microsoft about 2008 and conveyed it to the client.
https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=383517 - Talks about 2008 not releasing closed connections immediately.
At the client site, we see the same issue in SQL Server 2005 too.
My question is: when the .NET program calls Close() on a connection, how long does SQL Server keep it active?
Thanks a lot for any hints
Regards
Anand
Connections are going to the pool. If they are not reused from there, and the number of connections keeps increasing, you almost certainly did not clean them up properly. Make use of using blocks for every disposable type (transactions, commands, and so on).
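For instance, shaping the update path like this (the method name, SQL and parameter values are placeholders) guarantees the connection goes back to the pool even when the update throws:

using System.Data.SqlClient;

void UpdateJobStatus(string connectionString, int jobId)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var tran = conn.BeginTransaction())
        using (var cmd = new SqlCommand("UPDATE Jobs SET Status = @s WHERE Id = @id", conn, tran))
        {
            cmd.Parameters.AddWithValue("@s", "Done");
            cmd.Parameters.AddWithValue("@id", jobId);
            cmd.ExecuteNonQuery();
            tran.Commit();
        }
    } // Dispose closes the connection and returns it to the pool, even if an exception was thrown
}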
To clear the connection pool you can call this static method:
SqlConnection.ClearAllPools();
This removes all connections that are not currently in use from the pool. Connections still in use are discarded when they are closed instead of being returned to the pool.