Close database connections after inactivity

I have a Mule application that connects to an Oracle database. The application is a SOAP API that allows executing SQL stored procedures. My connector is set up to use connection pooling and I've been monitoring the connections themselves. I have a maximum pool size of 20, and when making calls to the database I can see the connections being opened (netstat -ntl | grep PORTNUMBER).
tcp4 0 0 IP HERE OTHER IP HERE SYN_SENT
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 0 0 IP HERE OTHER IP HERE ESTABLISHED
tcp4 10 0 IP HERE OTHER IP HERE ESTABLISHED
When the calls are done, I expect the connections to be closed after a certain period of time. This does not happen. I've noticed that when the application was running on a server, connections were still open from July (that's a couple of months back).
The only way I've found so far that actually closes the connections after a couple of seconds is by enabling XA transactions and setting the Connection Timeout. However, this completely messes up the performance of the application and adds unnecessary overhead.
How would I go about adding such a timeout without using XA connections? I'd like my database connections to be closed after 20 seconds of inactivity.
Thank you
Edit:
The generic Database connector is used - Mule version 3.8.0.
We have a maximum number of connections that are allowed to the database, and we have multiple instances of this flow running. This means connections are reserved by one of the instances, which leaves the other instances unable to get new connections.
The specific issue we've had was that one instance still had 120 connections reserved, even though the last time it ran was weeks before. When the second instance requested more connections, it could only get 30, since the maximum on the database side is 150.

You should use a connection pool implementation that gives you control over the time to live of a connection. Ideally the pool should also provide validation queries to detect stale connections.
For example, the c3p0 pool has a setting called maxConnectionAge that seems to match your needs; maxIdleTime could also be of interest.
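If you can plug a custom DataSource into the connector, a c3p0 pool set up in plain Java might look roughly like the sketch below. This is only an illustration, not your Mule config: the driver class, JDBC URL, credentials and limits are placeholders.

import com.mchange.v2.c3p0.ComboPooledDataSource;

// Sketch of a c3p0 pool that retires idle and aged connections.
// Driver, URL, credentials and limits below are placeholders.
public class C3p0PoolSketch {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource pool = new ComboPooledDataSource();
        pool.setDriverClass("oracle.jdbc.OracleDriver");
        pool.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // placeholder URL
        pool.setUser("app_user");                                   // placeholder
        pool.setPassword("secret");                                 // placeholder
        pool.setMaxPoolSize(20);
        pool.setMaxIdleTime(20);        // close connections idle for more than 20 seconds
        pool.setMaxConnectionAge(300);  // retire any connection older than 5 minutes
        pool.setPreferredTestQuery("SELECT 1 FROM DUAL"); // validation query for stale connections
        pool.setIdleConnectionTestPeriod(30);             // test idle connections every 30 seconds

        try (java.sql.Connection c = pool.getConnection()) {
            // use the connection; close() returns it to the pool
        }
        pool.close(); // shut the whole pool down at application shutdown
    }
}

With maxIdleTime set to 20, idle pooled connections (and their TCP sockets) should be closed after roughly 20 seconds of inactivity, which matches the requirement in the question.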

You can try using Oracle Transparent Connection Caching if you have no luck with Mule.
A few questions to understand the case better:
Which type of connector are you using (JDBC/Database), and which version of Mule is that?
Why do you care about the connections being open afterwards? Are you observing some other symptoms you're not happy with?

Database connections via JDBC are designed to stay open with the intention of being reused. In general, most database technologies, including next-generation NoSQL databases, have expensive connection startup and shutdown costs. Database connections should be established at application startup and closed gracefully at application shutdown. You should not be closing connections after each use.
Oracle offers a connection pool called UCP. UCP offers options to control stale connections, which include setting a maximum reuse time and an inactive connection timeout, among other options.
This can be useful for returning resources to the application as well as for checking for broken connections. Regardless, connections should be reused multiple times before being closed.
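As a rough sketch only (URL and credentials are placeholders; check the UCP documentation for the exact semantics of each property in your driver version), a UCP pool with an inactive-connection timeout and a maximum reuse time could be configured like this:

import java.sql.Connection;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

// Sketch of a UCP pool that times out inactive connections and
// retires long-lived ones. Connection details are placeholders.
public class UcpPoolSketch {
    public static void main(String[] args) throws Exception {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // placeholder
        pds.setUser("app_user");                               // placeholder
        pds.setPassword("secret");                             // placeholder
        pds.setMaxPoolSize(20);
        pds.setInactiveConnectionTimeout(20);    // close connections idle for 20 seconds
        pds.setMaxConnectionReuseTime(300);      // retire connections after 5 minutes of use
        pds.setValidateConnectionOnBorrow(true); // detect broken connections before handing them out

        try (Connection c = pds.getConnection()) {
            // run statements; close() returns the connection to the pool
        }
    }
}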

Related

Post Keepalived configuration, Server with VIP unable to communicate to other servers

I have configured keepalived on two servers, 10.90.11.194 (Server1) and 10.90.11.196 (Server2). Server1 is configured as the MASTER while Server2 is the BACKUP. The VIP 10.90.11.219 successfully switches from Server1 to Server2 when keepalived is stopped on Server1.
Both servers have syslog-ng configured to receive syslogs from firewalls, proxies, etc. These servers also have the Splunk Heavy Forwarder application installed on them to forward the incoming syslogs to the Splunk indexers 10.90.11.226 (IDX1), 10.90.11.227 (IDX2) and 10.90.11.228 (IDX3).
Server1, Server2, IDX1, IDX2 and IDX3 are all in the same security group, and any-any connection is allowed between them. The VIP is also allowed inbound and outbound for this security group.
Problem: No matter which device has VIP assigned, it will not be able to connect to the indexers (IDX1, IDX2 and IDX3) at port 9997. However, this connectivity works absolutely fine from the other device without VIP.
Keepalived config on the MASTER:
vrrp_instance VI_1 {
    state MASTER
    interface ens3
    virtual_router_id 51
    priority 100
    advert_int 1
    unicast_src_ip 10.90.11.194
    unicast_peer {
        10.90.11.196
    }
    authentication {
        auth_type PASS
        auth_pass 12345
    }
    virtual_ipaddress {
        10.90.11.219/24
    }
}
On the BACKUP, the unicast IPs are reversed, the priority is 50 and the state is BACKUP.
Pinging the IDX devices from the server without the VIP works fine. The problem is only with the device holding the VIP, where I get a "Destination Host Unreachable" response from the VIP:
[root@SERVER1 ~]# ping 10.90.11.226
PING 10.90.11.226 (10.90.11.226) 56(84) bytes of data.
From 10.90.11.219 icmp_seq=1 Destination Host Unreachable
From 10.90.11.219 icmp_seq=2 Destination Host Unreachable
From 10.90.11.219 icmp_seq=3 Destination Host Unreachable
[root@SERVER1 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.90.11.193 0.0.0.0 UG 100 0 0 ens3
10.90.11.0 0.0.0.0 255.255.255.0 U 0 0 0 ens3
10.90.11.192 0.0.0.0 255.255.255.224 U 100 0 0 ens3
169.254.169.254 10.90.11.222 255.255.255.255 UGH 100 0 0 ens3
Could any of you please help fix the issue here?

Using stale statistics instead of current ones

If I am writing constantly to a database and the following LOG message is displayed, will any of the data I am writing be damaged or omitted?
LOG: using stale statistics instead of current ones because stats collector is not responding
No, this will not affect the integrity of data written to the database.
It just means that the statistics collector does not react fast enough, perhaps because of I/O overload.
You can probably get rid of the problem if you set stats_temp_directory to point to a directory in a RAM file system.
As already said in the previous answer, no, it will not lose data. But you probably still want to fix the problem.
One possible cause for this problem is that the statistics collector process is bound to an IP:port which is not responding.
In such a case, restarting postgres will fix it.
This problem happened to me when I disabled IPv6 on the server without restarting Postgres. I eventually found a detailed explanation here (search for "The statistics collector" in the page), but in short:
PostgreSQL [...] will loop through all the addresses returned [for localhost], create a UDP socket and test it until it has a socket that works.
If the socket it had selected was IPv6 and IPv6 is later disabled, the socket stops working and you get that message in the logs.
You can check which IP and UDP port the "postmaster" (or "postgres") process is bound to with
netstat -n -u -p
The output is something like this:
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 127.0.0.1:47780 127.0.0.1:47780 ESTABLISHED 2824/postmaster
or on another host where it is bound to IPv6 ("udp6"):
# netstat -n -u -p
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp6 0 0 ::1:51761 ::1:51761 ESTABLISHED 1006/postgres

How does the accept() function work?

I have a question about the accept() function in C.
When a server receives a connection, the accept() function creates a new socket to communicate with the client, and then leaves the "old socket" listening for new connections.
I understand that the server can communicate with the client through the "new socket", but how can the client communicate with the "new socket" (given that the client doesn't know about this "new socket")?
On the server side, the listening socket is associated with only a local IP and port and is in the LISTEN state.
In contrast, accepted sockets on the server (as well as connected sockets on the client) are identified by a local IP and port as well as a remote IP and port, and are in the ESTABLISHED state.
On the client side, it doesn't matter that the server uses a listening socket separate from the connected socket. By the time the client returns from connect, the server has returned from accept and the socket descriptors returned from each can communicate with each other.
Any communication in IP protocol (including TCP/IP) occurs between two endpoints. The endpoints are always host:port. In the TCP world, the two endpoints identify the connection. A socket is associated with a connection, not with an endpoint.
Thus, you can have 2 sockets returned from 2 accept() calls, describing 2 distinct connections.
Here is an example of netstat -an output on a Unix machine:
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 170.44.26.7:22 161.231.133.178:11550 ESTABLISHED
tcp 0 0 170.44.26.7:22 161.231.133.178:33938 ESTABLISHED
tcp 0 0 170.44.26.7:22 161.231.133.178:13875 ESTABLISHED
tcp 0 0 170.44.26.7:22 161.231.133.178:34968 ESTABLISHED
tcp 0 0 170.44.26.7:22 161.231.133.178:44212 ESTABLISHED
tcp 0 0 170.44.26.7:22 161.231.133.178:34967 ESTABLISHED
Here we have a listening socket, and a few connections (each backed by its own socket) resulting from accept() on that socket.
Sockets are an abstraction of the network programming API. On the wire, and for the client, there is still only a single connection; the client does not see whether the server is using a network API with listen, accept, etc., or some other API or raw sockets to establish the connection.
The explanation is that a TCP endpoint (an end point in a TCP/IP transmission) is uniquely identified by the pair IP address/port number. When a client asks for a connection, it does so using its own IP and port number, a pair which is unique. That operation binds SRCIP+SRCPORT to DSTIP+DSTPORT, and those 4 numbers (the two IPs plus the two ports) uniquely identify a connection. So the two sockets on the server really refer to two different connections/streams.
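To make the 4-tuple idea concrete, here is a small Java sketch of the same mechanism (the port number is arbitrary; the C accept() call behaves the same way underneath): every accepted socket shares the server's local port but is distinguished by the client's remote IP and port.

import java.net.ServerSocket;
import java.net.Socket;

// Sketch: the listening socket has only a local endpoint; each accepted
// socket shares that local port but is identified by a distinct remote
// IP:port, so many clients can be served on the same server port.
public class AcceptSketch {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(5000)) { // arbitrary port
            while (true) {
                Socket conn = listener.accept(); // blocks until a client connects
                System.out.println("local " + conn.getLocalSocketAddress()
                        + " <-> remote " + conn.getRemoteSocketAddress());
                conn.close(); // a real server would hand this socket to a worker
            }
        }
    }
}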

dbus-daemon listening on server port

I have written a simple server program which listens on port 4849. When I start it the first time everything works fine. If I stop and restart it, it fails:
Cannot bind!! ([Errno 98] Address already in use)
Netstat tells me this...
root@node2:/home/pi/woof# netstat -pl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 *:4849 *:* LISTEN 2426/dbus-daemon
tcp 0 0 *:ssh *:* LISTEN 2195/sshd
....
What is dbus-daemon? Do I need it? Why is it listening on the port my server was listening on?
Thanks for any help.

What can be the reasons of connection refused errors?

I'm trying to write a server program in C. Using another client, I get this error when I try to connect through port 2080, for example:
connection refused
What can be the reasons for this error?
There could be many reasons, but the most common are:
The port is not open on the destination machine.
The port is open on the destination machine, but its backlog of pending connections is full.
A firewall between the client and server is blocking access (also check local firewalls).
After checking for firewalls and that the port is open, use telnet to connect to the IP/port to test connectivity. This takes your application out of the equation.
The error means the OS of the listening socket recognized the inbound connection request but chose to intentionally reject it.
Assuming an intermediate firewall is not getting in the way, there are only two reasons (that I know of) for the OS to reject an inbound connection request. One reason has already been mentioned several times - the listening port being connected to is not open.
There is another reason that has not been mentioned yet - the listening port is actually open and actively being used, but its backlog of queued inbound connection requests has reached its maximum so there is no room available for the inbound connection request to be queued at that moment. The server code has not called accept() enough times yet to finish clearing out available slots for new queue items.
Wait a moment or so and try the connection again. Unfortunately, there is no way to differentiate between "the port is not open at all" and "the port is open but too busy right now". They both use the same generic error code.
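If you prefer to script the check rather than use telnet, a small probe along these lines (host and port are placeholders) distinguishes a refused connection from one that simply times out, e.g. because a firewall silently drops the SYN:

import java.io.IOException;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Sketch of a connectivity probe: "refused" means an RST came back,
// "timed out" means nothing answered at all (often a dropping firewall).
public class PortProbe {
    public static void main(String[] args) {
        String host = "example.com"; // placeholder
        int port = 2080;             // placeholder
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 3000); // 3-second timeout
            System.out.println("Connected: the port is open and accepting.");
        } catch (ConnectException e) {
            System.out.println("Connection refused: port closed or backlog full.");
        } catch (SocketTimeoutException e) {
            System.out.println("Timed out: likely filtered or dropped by a firewall.");
        } catch (IOException e) {
            System.out.println("Other I/O error: " + e.getMessage());
        }
    }
}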
If you try to open a TCP connection to another host and see the error "Connection refused," it means that
You sent a TCP SYN packet to the other host.
Then you received a TCP RST packet in reply.
RST is a bit in the TCP header which indicates that the connection should be reset. Usually it means that the other host has received your connection attempt and is actively refusing your TCP connection, but sometimes an intervening firewall may block your TCP SYN packet and send a TCP RST back to you.
See https://www.rfc-editor.org/rfc/rfc793 page 69:
SYN-RECEIVED STATE
If the RST bit is set
If this connection was initiated with a passive OPEN (i.e., came from the LISTEN state), then return this connection to LISTEN state and return. The user need not be informed. If this connection was initiated with an active OPEN (i.e., came from SYN-SENT state) then the connection was refused, signal the user "connection refused". In either case, all segments on the retransmission queue should be removed. And in the active OPEN case, enter the CLOSED state and delete the TCB, and return.
Connection refused means that the port you are trying to connect to is not actually open.
So either you are connecting to the wrong IP address, or to the wrong port, or the server is listening on the wrong port, or is not actually running.
A common mistake is not specifying the port number in network byte order when binding or connecting...
Check on the server side that it is listening on port 2080.
First try to confirm it on the server machine by issuing telnet to that port:
telnet localhost 2080
If it is listening, it is able to respond.
1. Check your server status.
2. Check the port status. For example, for port 3306: netstat -ntpl | grep 3306
3. Check your firewalls. For example, to allow port 3306:
vim /etc/sysconfig/iptables
# add
-A INPUT -p tcp -m state --state NEW -m tcp --dport 3306 -j ACCEPT
Although it does not seem to be the case for your situation, sometimes a connection refused error can also indicate that there is an ip address conflict on your network. You can search for possible ip conflicts by running:
arp-scan -I eth0 -l | grep <ipaddress>
and
arping <ipaddress>
This AskUbuntu question has some more information also.
I got the same problem with my work computer.
The problem is that when you enter localhost, the request goes to the proxy's address, not the local address. You should bypass the proxy for local addresses; follow these steps:
Chrome => Settings => Change proxy settings => LAN Settings => check "Bypass proxy server for local addresses".
On Ubuntu, try
sudo ufw allow <port_number>
to allow firewall access for both your server and your DB.
From the standpoint of a Checkpoint firewall, you will only see a message from the firewall if you actually choose Reject as the action, thereby exposing the presence of a firewall in front of the server to a prospective attacker. The firewall will silently drop all connections that don't match the policy. Connection refused almost always comes from the server.
In my case, it happens when the site is blocked in my country and I don't use a VPN.
For example, when I try to access vimeo.com from Indonesia, where it is blocked.
Check if your application is bound to the port you are sending the request to.
Check if the application is accepting connections from the host you are sending the request from; maybe you forgot to allow incoming connections on all interfaces (0.0.0.0) and, by default, it is only accepting connections on 127.0.0.1.
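As a quick illustration of that last point (the port numbers are arbitrary), the address you bind to decides who can reach the server:

import java.net.InetAddress;
import java.net.ServerSocket;

// Sketch: binding to 127.0.0.1 accepts only local clients, while binding
// to 0.0.0.0 accepts connections on all interfaces (subject to firewalls).
public class BindSketch {
    public static void main(String[] args) throws Exception {
        // Reachable only from the same machine:
        ServerSocket localOnly =
                new ServerSocket(2080, 50, InetAddress.getByName("127.0.0.1"));

        // Reachable from other hosts as well:
        ServerSocket allInterfaces =
                new ServerSocket(2081, 50, InetAddress.getByName("0.0.0.0"));

        localOnly.close();
        allInterfaces.close();
    }
}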
I had the same message with a totally different cause: wsock32.dll was not found. The ::socket(PF_INET, SOCK_STREAM, 0); call kept returning INVALID_SOCKET, but the reason was that the Winsock DLL was not loaded.
In the end I launched Sysinternals' Process Monitor and noticed that it searched for the DLL 'everywhere' but didn't find it.
Silent failures are great!
