Setting timeout in milliseconds in Npgsql

The default connection timeout in Npgsql is specified in seconds, but I would like to set it in milliseconds, since I need it to be less than one second.
P.S. I tried setting a fractional value (e.g. 0.5) but it doesn't work.

From what I have been able to find out, you will not be able to use a smaller unit than a second with that provider, so you are left with a few options:
1) Find another provider that allows a connection timeout of less than a second.
2) Using a timer, throw an exception if the connection was not established quickly enough (a sketch follows below). You will probably need threading techniques for this to work; I found this guide (http://www.albahari.com/threading/) very helpful.
3) Use connection pooling and keep a few connections already open so you don't have to re-establish them.
Good luck.
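For what it's worth, here is a minimal C# sketch of option 2, assuming a reasonably recent Npgsql where OpenAsync is available; the connection string and the 500 ms budget are placeholders. It races the open against a delay, so the caller gives up quickly even though the underlying connection attempt may still finish in the background.

using System;
using System.Threading.Tasks;
using Npgsql;

class ConnectWithTimeout
{
    // Open a connection, but give up after the supplied (sub-second) timeout.
    public static async Task<NpgsqlConnection> OpenAsync(string connectionString, TimeSpan timeout)
    {
        var conn = new NpgsqlConnection(connectionString);
        var openTask = conn.OpenAsync();

        // Race the open against a delay; if the delay wins, abandon the connection.
        if (await Task.WhenAny(openTask, Task.Delay(timeout)) != openTask)
        {
            conn.Dispose(); // the in-flight open is abandoned; its failure (if any) goes unobserved
            throw new TimeoutException($"Could not connect within {timeout.TotalMilliseconds} ms.");
        }

        await openTask; // propagate any connection failure
        return conn;
    }

    static async Task Main()
    {
        // Connection string and timeout are examples only.
        using (var conn = await OpenAsync("Host=localhost;Username=test;Password=test;Database=test",
                                          TimeSpan.FromMilliseconds(500)))
        {
            Console.WriteLine("Connected.");
        }
    }
}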

Related

How best to close connections and avoid inactive sessions while using C3P0?

I am using c3p0 for my connection pooling. The ComboPooledDataSource I use is configured as below.
@Bean
public DataSource dataSource() throws PropertyVetoException {
    ComboPooledDataSource dataSource = new ComboPooledDataSource();
    dataSource.setDriverClass("oracle.jdbc.OracleDriver"); // setDriverClass declares PropertyVetoException
    dataSource.setJdbcUrl("test");
    dataSource.setUser("user");
    dataSource.setPassword("test");
    dataSource.setMinPoolSize(10);   // the pool-size and statement-cache setters take ints, not Strings
    dataSource.setMaxPoolSize(20);
    dataSource.setMaxStatements(100);
    return dataSource;
}
I am facing some issues with this. I get warnings that this might leak connections, and I also see the error below from time to time, as all the connections get used up.
java.sql.SQLException: Io exception: Got minus one from a read call
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:387)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:439)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:801)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
And from the DB stats, I can see almost 290 inactive connections. I have around 8 applications deployed across two servers, all connecting to the same DB.
My queries are:
How do I make sure the connections are closed, so that I don't end up with this many inactive connections?
Would configuring idle time and timeout resolve this issue?
What would happen if the server is brought down / Tomcat is shut down? Will the connections remain open?
Connections are mainly used during startup to load the cache, so is there a way of not using these connections afterwards?
What should I do about the existing inactive connections?
Given a maxPoolSize of 20 and eight deployments, you should expect to see up to 160 Connections, which may be inactive if the application has seen periods of traffic that have now subsided. You have configured nothing to encourage a fast scaling down of your pools -- set maxIdleTime and/or maxIdleTimeExcessConnections and/or maxConnectionAge.
You should probably tell Spring how to close the DataSource you've defined. Use @Bean(destroyMethod = "close") instead of @Bean alone above your dataSource() method.
You have not configured any sort of Connection testing, so even broken Connections might remain in the pool. Please see Simple Advice On Connection Testing.
If the issue were a Connection leak, clients would eventually hang indefinitely, as the pool would be out of Connections to check out, but would already have reached maxPoolSize, and so wouldn't be able to acquire more from the DBMS. Are you seeing clients hang like that?
The way you avoid Connection leaks is, post-Java7, to always acquire Connections from your DataSource via try-with-resources. That is, use...
try ( Connection conn = myDataSource.getConnection() ) {
...
}
rather than just calling getConnection() in a method that might throw an Exception or in a try block. If you are using an older version of Java, you need to use the robust resource cleanup idiom: acquire the Connection in a try block and make sure conn.close() is always called in the finally block, regardless of any other failures in that finally block. If you are not working with the DataSource directly, but letting Spring utilities work with it, hopefully those utilities are doing the right thing. But you should post whatever warning you are receiving that warns you of potential Connection leaks!
If your application has little use for Connections after it has "warmed up", and you want to minimize the resource footprint, set minPoolSize to a very low number, and use maxIdleTime and/or maxIdleTimeExcessConnections and/or maxConnectionAge as above to ensure that the pool promptly scales down when Connections are no longer in demand. Alternatively you might close() the DataSource when you are done with its work, but you are probably leaving that to Spring.

Close SQL connections cleanly when connection dropped

Just wondering if there is a way that I can close all SQL connections and commands in the cleanest possible way when a connection is lost in VB.NET (can also be in C#.NET).
What I'm trying to achieve:
Using the System.Net.NetworkInformation.NetworkAvailabilityChanged event, I'm monitoring whether the connection has been lost. If it has been lost, I'm disabling all user input - works great. Network comes back on, UI is enabled. (thumbs up)
Now, however, comes my predicament. If an SQL query is executing before the connection drops and the network is then lost, the query returns a null value as expected; however, if that happened mid-population of DataTables/fields, then I get NullReferenceExceptions.
My question is:
Is there any way to cleanly exit a sub after the connection has dropped? I've tried Application.ExitThread, but that doesn't seem to quite cut it. Do I need to put dropped-connection handlers within my objects, so that when the connection is dropped, the respective object won't return or try to assign null data?
Any help is greatly appreciated. Not asking for plain code, need explanations if at all possible. Cheers.
If you follow the "using" best practice, like:
Using cn As New SqlConnection(connectionString)
...
End Using
Then the compiler will generate code that cleans up the connection when an exception is thrown.
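Since the asker mentioned C# is also fine, here is a minimal C# sketch of the same idea combined with a catch, assuming a simple DataTable fill; the connection string, query and table name are placeholders. The point is to bail out of the routine cleanly when the connection drops mid-query instead of continuing to populate fields from missing data.

using System.Data;
using System.Data.SqlClient;

class SearchRepository
{
    // Connection string and query are placeholders.
    public DataTable TryLoadTable(string connectionString)
    {
        try
        {
            using (var cn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT * FROM SomeTable", cn))
            using (var adapter = new SqlDataAdapter(cmd))
            {
                cn.Open();
                var table = new DataTable();
                adapter.Fill(table); // typically throws SqlException if the connection drops mid-fill
                return table;
            }
        }
        catch (SqlException)
        {
            // Exit cleanly: the caller checks for null (or an empty table) and disables the UI
            // rather than dereferencing data that never arrived.
            return null;
        }
    }
}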

OpenLDAP Bind Timeout

I'm trying to understand how to properly implement a timeout for an OpenLDAP bind request to an LDAP server. From what I've found, there seem to be two ways to do this: LDAP_OPT_TIMELIMIT and LDAP_OPT_TIMEOUT. My main confusion comes from trying to figure out the difference between these.
As far as I understand it, TIMELIMIT is an LDAP standard option that sets the time limit for the request/response cycle of any LDAP search. In Windows, at least, the default is 120 seconds.
TIMEOUT, on the other hand, is OpenLDAP-specific and used purely client-side for timing out LDAP bind requests. This actually sounds closest to what I want to implement. I know from discussions that using ldap_set_option for TIMEOUT was not fully implemented until 2.4. From How can I cause ldap_simple_bind_s to timeout? I know that the workaround for earlier versions is to use an asynchronous bind, followed by an ldap_result with the timeout and an ldap_abandon_ext in the case of a timeout to drop the request. That makes sense, though looking through the source code for the synchronous bind in version 2.4, it doesn't ever seem to handle a timeout this way, which makes me wonder how important calling ldap_abandon_ext is.
Any answers or insight would be appreciated.
If anyone is still looking for an OpenLDAP bind timeout, you should use the method from Aki's answer here.
It also works in the ldapcpp library when using LDAPAsynConnection for the bind. Before binding, you just need to enable it using the getSessionHandle() method.

Intermittent connection timeouts to Solr server using SolrNet

I have a production webserver hosting a search, and another machine which hosts the Solr search server (on a subnet which is in the same room, so no network problems). All is fine >90% of the time, but I consistently get a small number of "The operation has timed out" errors.
I've increased the timeout in the SolrNet init to 30 seconds (!)
SolrNet.Startup.Init<SolrDataObject>(
    new SolrNet.Impl.SolrConnection(
        System.Configuration.ConfigurationManager.AppSettings["URL"]
    ) { Timeout = 30000 }
);
...but all that happened is that I started getting this message instead of the "Unable to connect to the remote server" error I was seeing before. It seems to have made no difference to the number of timeout errors.
I can see nothing in any log (believe me, I've looked!) and clearly my configuration is correct, because it works most of the time. Does anyone have any ideas how I can find more information on this problem?
EDIT:
I have now increased the number of HttpRequest connections from 2 to 'a large number' (I see up to 10 connections) - but this has had no discernible effect on this problem.
The firewall is set to allow ANY connections between the two machines.
We've also checked the hardware with our server host and there are no problems on the connections, according to them.
EDIT 2:
We're still seeing this issue.
We're now logging the timeouts and they're mostly just over 30s - which is the SolrNet layer's timeout; some are 20s, though - which is the Tomcat default timeout period - which suggests it's something in the actual connection between the machines.
Not sure where to go from here, though - they're on a VLAN and we're specifically using the VLAN address - response time from pings is ALWAYS <1ms.
Without more information, I can only guess a few possible reasons:
You're fetching tons of documents per query, and it times out while transferring data.
You're hitting the ServicePoint.ConnectionLimit. If so, just increase this value (see the sketch after this list). See also How can I programmatically remove the 2 connection limit in WebClient
You have some very facet-heavy requests or are misusing Solr (e.g. not using filter queries). Check the qtime in the response. See the Solr performance wiki for more details.
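As a rough illustration of the second point, this is how the per-host connection limit can be raised before initializing SolrNet; the limit of 16 is an arbitrary example, not a recommendation.

using System;
using System.Net;

static class SolrClientSetup
{
    public static void Configure(string solrUrl)
    {
        // Applies to ServicePoints created after this point (default is 2 per host).
        ServicePointManager.DefaultConnectionLimit = 16;

        // Or adjust the limit for the Solr endpoint only.
        ServicePoint sp = ServicePointManager.FindServicePoint(new Uri(solrUrl));
        sp.ConnectionLimit = 16;
    }
}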
Try setting this in .NET:
ServicePointManager.Expect100Continue = false;
or this:
ServicePointManager.SetTcpKeepAlive(true, 200000, 200000);
The latter sends keep-alive probes to the server so the connection stays open.

DirectoryEntry Timeout

I am having an issue with the DirectoryEntry object where it takes a long time trying to connect to a dead AD server before eventually failing. Is it possible to set a timeout so that if it's not able to connect within a specific time, it just moves on to try the next one?
There is no timeout option for DirectoryEntry directly.
You can use DirectorySearcher and set the ClientTimeout (even if you're only looking for one object by path). Or do your directory operation on a new thread or BackgroundWorker and control your own timeout.
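For reference, a minimal sketch of the DirectorySearcher approach; the LDAP path, filter and two-second budget are placeholders, not recommended values.

using System;
using System.DirectoryServices;

class AdLookup
{
    public SearchResult FindUser(string ldapPath, string samAccountName)
    {
        using (var entry = new DirectoryEntry(ldapPath))
        using (var searcher = new DirectorySearcher(entry))
        {
            searcher.Filter = "(sAMAccountName=" + samAccountName + ")";
            searcher.ClientTimeout = TimeSpan.FromSeconds(2); // give up quickly instead of hanging on a dead server
            return searcher.FindOne(); // null if nothing was found; wrap in try/catch to fall through to the next server
        }
    }
}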
I suggest you create your own LdapConnection to the server. This will allow you to specify a timeout and finely control which method you are using.
Also note that without going to this lower level, the .NET classes will attempt to use LDAP+SSL, then Kerberos, and finally RPC. You may be experiencing delays/timeouts during this process.
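A sketch of that lower-level route via System.DirectoryServices.Protocols, assuming that namespace is an option; the server name, credentials and one-second budget are placeholders.

using System;
using System.DirectoryServices.Protocols;
using System.Net;

class LdapProbe
{
    public bool CanBind(string server, NetworkCredential credential)
    {
        using (var connection = new LdapConnection(server))
        {
            connection.Timeout = TimeSpan.FromSeconds(1); // fail fast instead of waiting on a dead server
            connection.AuthType = AuthType.Negotiate;     // pin the auth method rather than letting it fall back
            try
            {
                connection.Bind(credential);
                return true;
            }
            catch (LdapException)
            {
                return false; // caller can move on to the next server
            }
        }
    }
}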
