For quite some time I have observed that a connection attempt to an Oracle server using the OCI library can hang forever, and I am not able to find a way to cancel the hung session. Even Ctrl+C doesn't work.
Is there an option to call the OCIServerAttach API in non-blocking mode, or an option that can be used to time out the OCIServerAttach call?
Please note that the OCI version I am using is 11.2.0.
Oracle Net can be configured to limit the connection establishment time, and the available options have been enhanced across versions. Keep in mind that Oracle 11.2 is more than ten years old!
For recent versions (check the manual for your version), you could create a sqlnet.ora file with options like SQLNET.OUTBOUND_CONNECT_TIMEOUT or TCP.CONNECT_TIMEOUT, or use CONNECT_TIMEOUT in a tnsnames.ora file. Search for 'connect_timeout' in that manual for other choices.
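For illustration, a client-side sqlnet.ora limiting both the overall connection establishment and the TCP connect step might look like this (a sketch; the values are placeholders to tune for your environment):

SQLNET.OUTBOUND_CONNECT_TIMEOUT = 10
TCP.CONNECT_TIMEOUT = 5

And the per-alias equivalent in tnsnames.ora (the alias, host and service name are made up):

MYDB =
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 10)
    (ADDRESS = (PROTOCOL = TCP)(HOST = mydbmachine.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orclpdb1))
  )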
If you use 19c client libraries (which, by the way, will connect to Oracle DB 11.2 or later), then you can even use a timeout option in an Easy Connect connection string, like "mydbmachine.example.com/orclpdb1?connect_timeout=15".
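A minimal sketch of that in Python, assuming the cx_Oracle driver on top of 19c client libraries (the credentials, host and service name are placeholders):

import cx_Oracle

# The connect attempt gives up after ~15 seconds instead of hanging forever.
conn = cx_Oracle.connect(
    user="scott",
    password="tiger",
    dsn="mydbmachine.example.com/orclpdb1?connect_timeout=15",
)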
During Oracle 21c database creation with DBCA, OS error 1053 for "OracleService{SID}" occurs (screenshot below). I tried adjusting the ServicesPipeTimeout registry value, but it seems to have no effect, as the timeout occurs instantly. What could be checked to gain more insight into this problem? I couldn't find anything regarding the error in the DBCA log.
I had to go for "set up software only" in the Oracle installer because of "INS-30014: Unable to check whether the location specified is on CFS". Some people reported that disabling the network helps with this issue, but I am working on a remote VM, so that doesn't seem to be an option for me.
System: Windows Server 2016 Standard
It turned out that a too-long DNS name (above 15 characters) was causing the issue. After making it shorter, I managed to perform the installation, and the services work correctly.
I searched the web but did not find a solution to my problem.
My environment:
Laravel 5.5
PHP 7.2
PostgreSQL 12.3
Ubuntu 18.04
My problem is that DB::disconnect() doesn't close the connection.
My project has multiple database connections to PostgreSQL and many jobs that use these connections.
Inside the job I want to disconnect from the default connection and connect to a specific one.
So if I run the job multiple times, it creates multiple connections and never closes the old ones.
I tried DB::reconnect(), DB::disconnect() and DB::purge(), but the connection stays open.
I read that a PDO connection is closed when all references to it are set to null. Is it possible that the framework keeps some reference to the PDO connection, so it is never closed?
I tried to make a simple script like:
DB::connection('some_connection')->getPdo(); // force the connection open
DB::disconnect('some_connection');
But I can still see the connection open in my database.
Any solutions?
Does anyone know if there is a way to force-abort a connection that is being established in a background thread? Or at least to define a connect timeout?
A couple of notes:
Leaving the background thread running and just forgetting about it is not an option.
I went through the source code and the available properties and could not find anything related to the question.
Please be aware that FireDAC (Data Access Library) is only a technology from Delphi that simplifies accessing data in several databases.
To define a timeout on your connection, the underlying database connection has to support it.
Here is the description of a connection timeout in, for example, an InterBase connection via FireDAC:
http://docwiki.embarcadero.com/Libraries/Berlin/en/FireDAC.Phys.IBBase.TFDIBService.ConnectTimeout
I get the ORA-28040 error while trying to connect to Oracle 12c. I tried the ojdbc6 and ojdbc7 jar files, and I found the comment below:
------------------->
Bug 14575666
In 12.1, the default value for the SQLNET.ALLOWED_LOGON_VERSION parameter has been updated to 11. This means that database clients using pre-11g JDBC thin drivers cannot authenticate to 12.1 database servers unless the SQLNET.ALLOWED_LOGON_VERSION parameter is set to the old default of 8.
This will cause a 10.2.0.5 Oracle RAC database creation using DBCA to fail with the ORA-28040: No matching authentication protocol error in 12.1 Oracle ASM and Oracle Grid Infrastructure environments.
Workaround: Set SQLNET.ALLOWED_LOGON_VERSION=8 in the oracle/network/admin/sqlnet.ora file.
<-------------------
I have one doubt about implementing the above workaround, as we have a shared database.
If I set SQLNET.ALLOWED_LOGON_VERSION=8 in the oracle/network/admin/sqlnet.ora file, will it affect other users?
Will it affect shared applications and their functionality?
Setting SQLNET.ALLOWED_LOGON_VERSION=8 in sqlnet.ora affects all connections to the server: you're allowing user authentication with older versions of the password verifier, and you can't allow it for just one user. But this isn't going to break other applications that can already connect successfully; it will simply also allow older applications (that use old drivers) to connect. The best solution is to upgrade all clients if possible, but this setting is the workaround, and it was made available for exactly this purpose.
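For example, on the database server (a sketch; the path assumes a standard install):

# $ORACLE_HOME/network/admin/sqlnet.ora
SQLNET.ALLOWED_LOGON_VERSION=8

Note that from 12.1.0.2 onward this parameter is deprecated in favour of SQLNET.ALLOWED_LOGON_VERSION_SERVER (and SQLNET.ALLOWED_LOGON_VERSION_CLIENT), so on newer servers you would set SQLNET.ALLOWED_LOGON_VERSION_SERVER=8 instead.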
I have written a job server that runs one or more jobs concurrently (or simultaneously, depending on the number of CPUs on the system). A lot of the jobs that get created connect to a SQL Server database, perform a query, fetch the results and write them to a CSV file. For these jobs I use pyodbc and Microsoft SQL Server ODBC Driver 1.0 for Linux to connect, run the query, then disconnect.
Each job runs as a separate process using the Python multiprocessing module. The job server itself is kicked off as a double-forked background process.
This all ran fine until today, when I noticed that the first SQL Server job ran fine but the second seemed to hang (i.e. it looked as though it was running forever).
On further investigation I noticed that the process for this second job had become a zombie, so I ran a manual test as follows:
[root@myserver jobserver]# python
Python 2.6.6 (r266:84292, Dec 7 2011, 20:48:22)
[GCC 4.4.6 20110731 (Red Hat 4.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyodbc
>>> conn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=MY-DATABASE-SERVER;DATABASE=MY-DATABASE;UID=MY-ID;PWD=MY-PASSWORD')
>>> c = conn.cursor()
>>> c.execute('select * from my_table')
<pyodbc.Cursor object at 0x1d373f0>
>>> r = c.fetchall()
>>> len(r)
19012
>>> c.close()
>>> conn.close()
>>> conn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};SERVER=MY-DATABASE-SERVER;DATABASE=MY-DATABASE;UID=MY-ID;PWD=MY-PASSWORD')
Segmentation fault
So, as you can see, the first connection to the database works fine, but any subsequent attempt to connect fails with a segmentation fault.
I cannot for the life of me figure out why this has started happening or what the solution is; everything worked fine before today, and no code has been changed.
Any help on this issue would be much appreciated.
I had a very similar problem and in my case the solution was to upgrade the ODBC driver on the machine I was trying to make the connection from. I'm afraid I don't know much about why that fixed the problem. I suspect something was changed or upgraded on the database server I was trying to connect to.
This answer might be too late for the OP but I wanted to share it anyway since I found this question while I was troubleshooting the problem and was a little discouraged when I didn't see any answers.
I cannot detail the specifics of the underlying mechanics behind this problem. I can, however, say that it was being caused by using the Queue class in Python's multiprocessing module. Whether I was implementing this Queue correctly remains unanswered, but it appears the queue was not properly terminating the subprocess (and the underlying database connection) after each job completed, which led to the segmentation faults.
To solve this I implemented my own queuing system: basically a list of Process objects executed in the order they were put into the list. A loop then made periodic checks on the status of those processes until all had completed, after which the next batch of jobs would be retrieved and executed.
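A minimal sketch of that scheme (the names and the concurrency limit are illustrative, not my actual job server code):

import time
from multiprocessing import Process

MAX_CONCURRENT = 4  # e.g. the number of CPUs on the system

def run_jobs(jobs):
    """Run each job callable in its own process, MAX_CONCURRENT at a time."""
    pending = [Process(target=job) for job in jobs]
    while pending:
        batch = pending[:MAX_CONCURRENT]
        pending = pending[MAX_CONCURRENT:]
        for p in batch:
            p.start()
        # Periodically check the status of the batch until all have completed.
        while any(p.is_alive() for p in batch):
            time.sleep(1)
        for p in batch:
            p.join()  # reap the child so it does not linger as a zombie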
I also encountered this problem recently. My configuration includes unixODBC-2.3.0 plus MS ODBC Driver 1.0 for Linux. After some experiments, we suspect that the problem arises from a database upgrade (to SQL Server 2008 SP1 in our case) triggering a bug in the MS ODBC driver. The problem is also reported in this thread:
http://social.technet.microsoft.com/Forums/sqlserver/en-US/23fafa84-d333-45ac-8bd0-4b76151e8bcc/sql-server-driver-for-linux-causes-segmentation-fault?forum=sqldataaccess
I also tried upgrading my driver manager to unixODBC-2.3.2, but with no luck. My final solution was to use FreeTDS 0.82.6+ with unixODBC-2.3.2. This version of the FreeTDS driver gets along badly with unixODBC-2.3.0, as the manager keeps complaining that the driver does not support certain functions. Everything goes smoothly once unixODBC is upgraded.
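For reference, a minimal unixODBC + FreeTDS configuration might look like this (the DSN name, driver path and TDS protocol version are illustrative; check your FreeTDS install and documentation for the right values):

# /etc/odbcinst.ini
[FreeTDS]
Description = FreeTDS driver for SQL Server
Driver = /usr/lib64/libtdsodbc.so

# /etc/odbc.ini
[MYDSN]
Driver = FreeTDS
Server = MY-DATABASE-SERVER
Port = 1433
TDS_Version = 7.2

You can then connect with pyodbc.connect('DSN=MYDSN;UID=MY-ID;PWD=MY-PASSWORD;DATABASE=MY-DATABASE').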