Informix server fails to start from time to time - database

I am trying to find the root cause of an issue with an Informix installation where the server is restarted every night (a legacy setup). Most of the restarts are fine, but from time to time the database does not start.
The startup log then ends with just the following lines:
...
Thu Dec 15 00:36:17 2022
00:36:17 Successfully added a bufferpool of page size 2K.
00:36:17 Event alarms enabled. ALARMPROG = '/opt/IBM/informix/12.10/etc/alarmprogram.sh'
00:36:17 Booting Language <c> from module <>
00:36:17 Loading Module <CNULL>
00:36:17 Booting Language <builtin> from module <>
00:36:17 Loading Module <BUILTINNULL>
00:36:22 Entries in the surrogates file /etc/informix/allowed.surrogates are loaded into surrogate cache.
00:36:22 Trusted host cache successfully built:/etc/hosts.equiv.
00:36:22 DR: DRAUTO is 0 (Off)
00:36:22 DR: ENCRYPT_HDR is 0 (HDR encryption Disabled)
00:36:22 Event notification facility epoll enabled.
00:36:23 IBM Informix Dynamic Server Version 12.10.FC2WE Software Serial Number AAA#B000000
00:36:23 (5) connection rejected - no calls allowed for sqlexec
00:36:23 (6) connection rejected - no calls allowed for sqlexec
When I then SSH to the server, I am able to start it manually without a problem. Do you have any idea what could cause this issue?
Thank you.
At the link below there is a log from a successful start and from a failed one.
https://pastebin.com/hQPBccX6

Related

DB2 Communication Error

We recently developed an application which runs a query in DB2 and sends a mail to the corresponding recipient. It works well on our local system and in the QA region, but in production a few queries fail (rarely, about once a week). It throws the exception below.
Exception InnerDetails:
ERROR [40003] [IBM][CLI Driver] SQL30081N A communication error has
been detected. Communication protocol being used: "TCP/IP".
Communication API being used: "SOCKETS". Location where the error was
detected: "111.111.111.111". Communication function detecting the
error: "recv". Protocol specific error code(s): "10004", "", "".
SQLSTATE=08001
Since the error occurs only in production and not very often, we are not sure whether it is a code issue or a configuration issue. Do you have any idea?
We recently discussed this issue with our IBM rep. After looking in their internal knowledge base, he suggested we add "Interrupt=0" to our connection string, based on recommendations given to other customers who had the same problem.
The default value for Interrupt was 1 before v10.5 FP2 and still is for most connections. They changed the default value to 2 for connections to z/OS (mainframe) in FP2.
We're using C#; the connection string properties for the IBM Data Server Driver for .NET can be found here. I'm sure there is a similar property for their drivers for other languages.
This page from the IBM docs goes into a bit more detail about the setting.
We haven't seen the issue since we added the property, but it was always intermittent, so I can't yet confidently say that the problem is fixed. Time will tell...
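The answers here are specific to the .NET driver. As a rough illustration of the same idea from another language, below is a minimal sketch using Python's ibm_db driver, which sits on the same CLI layer. Whether the Interrupt keyword is accepted directly in the connection string (rather than in db2cli.ini) is an assumption to verify against the IBM CLI/ODBC documentation, and the host, database, and credentials are placeholders.

import ibm_db

# Placeholder connection details; Interrupt=0 mirrors the .NET setting above.
# Assumption: the CLI layer accepts the Interrupt keyword in the connection
# string -- verify this; it may need to go into db2cli.ini instead.
conn_str = ("DATABASE=SAMPLE;HOSTNAME=db2host.example.com;PORT=50000;"
            "PROTOCOL=TCPIP;UID=dbuser;PWD=secret;Interrupt=0;")

conn = ibm_db.connect(conn_str, "", "")
stmt = ibm_db.exec_immediate(conn, "SELECT 1 FROM SYSIBM.SYSDUMMY1")
print(ibm_db.fetch_tuple(stmt))
ibm_db.close(conn)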
That particular error (SQL30081N) is just a generic message that indicates a network issue between your DB2 client and the server. In this case, you want to look at the Protocol specific error code(s). Here, it looks like you're on Windows, and that particular code (10004) isn't given in the IBM documentation.
So, if you google "windows network error codes", you'll find this page, which says:
WSAEINTR
10004
Interrupted function call.
A blocking operation was interrupted by a call to WSACancelBlockingCall.
That links to this page with more information on that specific function (emphasis mine):
The WSACancelBlockingCall function has been removed in compliance
with the Windows Sockets 2 specification, revision 2.2.0.
The function is not exported directly by WS2_32.DLL and Windows
Sockets 2 applications should not use this function. Windows Sockets
1.1 applications that call this function are still supported through the WINSOCK.DLL and WSOCK32.DLL.
Blocking hooks are generally used to keep a single-threaded GUI
application responsive during calls to blocking functions. Instead of
using blocking hooks, an application should use a separate thread
(separate from the main GUI thread) for network activity.
I'm guessing that your application may be blocking for longer in production than in your other environments, and something along the way is causing the interrupt.
Hopefully this leads you down the right path...
I spent hours on the same problem and finally fixed it. I use a Windows exe (developed with C#/.NET) to run a SELECT query against a DB2 database, and I sometimes got this error. Eventually I realized that my problem was a timeout: the error with protocol code "10004" sometimes occurs when query execution takes longer than 30 seconds, which is the default timeout value. Maybe the interrupted call described on the "Windows Socket Error Codes" page comes from the timeout mechanism. I added a line to set an acceptable timeout value and got rid of this annoying error. I hope it helps others.
Here is my code fix:
...
connDb.Open();
DB2Command cmdDb = new DB2Command(QueryText, connDb);
cmdDb.CommandTimeout = 300; // I added this line: allow up to 300 seconds instead of the default 30-second timeout.
using (DB2DataReader readerDb = cmdDb.ExecuteReader())
{
...

MongoDB available connections

I installed MongoDB on Windows, Mac and Linux.
I run MongoDB with all default arguments and enter the command db.serverStatus().connections in the mongo shell to check the available connections.
Here is my observation: Windows 7 shows 19999, Mac only 203, and Linux 818. What makes the number of available connections different, and is it possible to increase it?
For UNIX-like systems (i.e. Linux and OS X), the connection limit is governed by ulimits. MongoDB will use 80% of the available file descriptors for connections, which is why you see 203 on Mac (~80% of 256) and 819 on Linux (~80% of 1024).
The MongoDB documentation includes recommended settings for production systems. Typically you wouldn't need to change this on development environments, but you will see a warning on startup if the connection limits are considered low.
In MongoDB 2.4 and earlier, there is a hard-coded maximum of 20,000 connections per server irrespective of the ulimits. This maximum was removed as of MongoDB 2.6.
There is also a maxConns MongoDB configuration directive that can be used to limit the connections to something lower than what would be allowed by ulimits.
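To make the 80% relationship concrete, here is a minimal sketch in Python (assuming pymongo is installed, mongod is running locally on the default port, and the process you run this from shares the same NOFILE limit as mongod):

import resource  # available on UNIX-like systems (Linux / OS X)
from pymongo import MongoClient

# File-descriptor limits (soft, hard) for the current process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("NOFILE soft limit:", soft)
print("expected connection ceiling (~80%):", int(soft * 0.8))

# What mongod actually reports.
client = MongoClient("mongodb://localhost:27017")
connections = client.admin.command("serverStatus")["connections"]
print("current:", connections["current"])
print("available:", connections["available"])

On a default Linux install with a soft limit of 1024, the computed ceiling is 819, matching the figure above; raising the ulimit (or lowering maxConns) moves it accordingly.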
@fmchan Turn off SELinux and check again.
I set high NOFILE and NPROC limits in systemd and in the /etc/security/limits.conf file, but it didn't help.
Now, the only things that work for me are to:
1. setenforce 0 && systemctl restart mongod.service
2. Write an SELinux policy that allows mongod_exec_t the setrlimit and rlimitinh permissions.
Here's a similar issue: https://bugzilla.redhat.com/show_bug.cgi?id=1415045
I was facing the same issue, and ulimit -n 6000 apparently didn't help.
On macOS you can check the open-files limit using the command below; in my case it showed 256, and 80% of 256 is about 204, hence MongoDB's maximum of 204 connections.
launchctl limit maxfiles
I followed https://gist.github.com/tombigel/d503800a282fcadbee14b537735d202c to resolve my error; it explains in detail how to set the values.
Follow that guide, restart the laptop, and you should see the new setting.

net-snmp agentx re-enable

I have embedded a net-snmp AgentX subagent in my C++ application code on Ubuntu Linux. I want to disable the AgentX subagent once it is working and then re-enable it again. I am able to set up the agent, poll the MIB using snmpget from the command line, and disable the AgentX socket connection using snmp_shutdown, but I am unable to re-enable the socket connection once I have disabled it.
Appreciate any help/pointers.
I use the following code to initialise the SNMP library and the AgentX socket connection.
At startup, I initialise the AgentX subagent:
netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_ROLE, 1);
netsnmp_ds_set_int(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_AGENTX_PING_INTERVAL, 120);
netsnmp_ds_set_string(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_X_SOCKET, m_agentx_socket.c_str());
/* initialize the agent library */
init_agent("MyApp");
// initialise MIB module
init_snmp("MyApp");
Then I poll the MIB using snmpget and disable the connection using the calls below:
snmp_shutdown("MyApp");
SOCK_CLEANUP;
This works fine so far.
Then I re-enable the connection using the code below, but this does not work.
netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_ROLE, 1);
netsnmp_ds_set_int(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_AGENTX_PING_INTERVAL, 120);
netsnmp_ds_set_string(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_X_SOCKET, m_agentx_socket.c_str());
/* initialize the agent library */
init_agent("MyApp");
init_snmp("MyApp");
I think you have to re-run the binary itself after it has been shut down.
You have not clarified why you want to restart AgentX.
If you are doing this to fetch some data frequently, then I guess you could instead use an infinite loop with a sleep of a suitable interval in your code; that would be a better option.
I found the following information in the README.agentx file from net-snmp-5.7.2 (currently visible at http://www.net-snmp.org/docs/README.agentx.html):
Similarly, a subagent will not be able to re-register in place of a
defunct colleague until the master agent has received three requests
for the dead connection (and hence unregistered it).
It seems likely, therefore, that the master still has your subagent registered despite your attempt at a clean shutdown. Perhaps you could try making three or more requests while your subagent is disabled and then proceeding with your re-registration.

Apache stops processing requests (mod_wsgi?)

At some point my site, running on Apache2 with mod_wsgi, just stops processing requests. The connection to the server is maintained and the client waits for a response, but it is never returned by Apache. The server at this time is at 0% CPU, and nothing is being processed. I think Apache just puts the requests into a queue and never takes them out of it.
Performing apache2ctl graceful does not resolve the problem; only apache2ctl restart does.
My site consists of 4 WSGI instances of a Pyramid application and 2 instances of Zope 3. It normally runs fine and has no speed problems that I am aware of.
versions:
Ubuntu 10.04
apache2 2.2.14-5ubuntu8.9
libapache2-mod-wsgi 2.8-2ubuntu1
It sounds like you are using embedded mode to run the multiple applications, and you are using third-party C extensions that have problems in sub-interpreters, resulting in potential deadlock. Otherwise, your code is internally deadlocking or blocking on external services and never returning, causing exhaustion of the available processes/threads.
For a start, you should look at using daemon mode, delegating each web application to a distinct daemon process group and forcing each to run in the main interpreter.
See:
http://code.google.com/p/modwsgi/wiki/QuickConfigurationGuide#Delegation_To_Daemon_Process
http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Python_Simplified_GIL_State_API
Otherwise use debugging tips described in:
http://code.google.com/p/modwsgi/wiki/DebuggingTechniques
for getting stack traces showing what the application is doing.
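A minimal sketch of how such per-thread stack traces can be obtained from inside the application (the trigger, e.g. a monitoring thread or a debug-only URL, is an assumption left to you) looks roughly like this:

import sys
import threading
import traceback

def dump_all_thread_stacks(out=sys.stderr):
    # Map thread ids to names so the output is readable.
    names = {t.ident: t.name for t in threading.enumerate()}
    for thread_id, frame in sys._current_frames().items():
        out.write("Thread %s (%s):\n" % (names.get(thread_id, "unknown"), thread_id))
        out.write("".join(traceback.format_stack(frame)))
        out.write("\n")

Calling this while requests are hung shows whether the worker threads are blocked in your own code, in a C extension, or on an external service.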

JDBC connection hanging

One of our customers has a newish problem: the application grinds to a halt. A thread dump shows that all threads are hanging on network IO in JDBC calls.
We have never seen these 'network IO' hangs before. Typically a slow machine with DB problems has either a) one or two long-running queries or b) some type of lock/deadlock. In either of those cases the threads 'hang' on different methods. I have never seen all 30+ threads hanging on network IO.
Below I have included an excerpt from the thread dump. All HTTP threads are hanging on the same java.net.SocketInputStream.read call.
I talked to their DBA and sysadmin. According to them, 'nothing has changed' in the environment recently that would cause this problem.
db environment
MSSQL 2005 64-bit Service Pack 2
Driver: sqljdbc.jar : 1.0 809 102
Note: they are running an older JDBC driver. AFAIK they tried upgrading from the 1.0 to the 1.2 driver but had some other problem.
other environment issues
They're running both the app server and the DB server in VMware VMs. I don't know how this setup affects network performance.
Apparently this is the only application with this problem. I don't know anything else about their network architecture.
Questions
* Any insights into this problem?
* If it is the network, any next steps for analysis?
Appendix A: Excerpt from Thread dump
All HTTP connections are hanging on the same method:
"TP-Processor31" daemon prio=5 tid=0x04085b78 nid=0x970 runnable [0x0764d000..0x0764fd6c]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at com.microsoft.sqlserver.jdbc.DBComms.receive(Unknown Source)
at com.microsoft.sqlserver.jdbc.IOBuffer.sendCommand(Unknown Source)
- locked (a com.microsoft.sqlserver.jdbc.DBComms)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.sendExecute(Unknown Source)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteQuery(Unknown Source)
We've had similar issues and traced them back to a buggy JDK update (1.6.0_29).
We downloaded 1.6.0_27 (http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase6-419409.html#jdk-6u27-oth-JPR), reset the JAVA_HOME environment variable, and we were back on track.
The cause is the changes to JSSE in version 1.6u29; the change partially patches CVE-2011-3389 and CVE-2011-3560.
If you are not deploying applets or using Java Web Start, you might be able to just use the 1.6u27 jsse.jar. You will still have the vulnerability, but it will allow sqljdbc.jar and sqljdbc4.jar to work.
The other option is to migrate to Java 7; sqljdbc4.jar does work in that environment.
The patch is not a complete fix, so expect more changes in future patches.
