Camel-Netty4 TCP not able to connect to remote server - apache-camel

I'm facing a problem while trying to connect remote server1 with remote server2 using camel-netty4.
When connecting to the remote server it throws the exception below, but it works for localhost.
leTCPNettyServerBootstrapFactory | 313 - org.apache.camel.camel-netty4 - 2.17.0.redhat-630187 | ServerBootstrap unbinding from :
NettyConsumer | 313 - org.apache.camel.camel-netty4 - 2.17.0.redhat-630187 | Netty consumer unbound from: :
BlueprintCamelContext | 234 - org.apache.camel.camel-blueprint - 2.17.0.redhat-630187 | Error occurred during starting Camel: CamelContext() due Cannot assign requested address
java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)[:1.8.0_131]
Please advise on how to resolve this issue, thank you.

I had made a mistake while configuring the TCP client and server. I have now created a consumer that listens on the local host, and a producer that sends messages to the remote server, as sketched below.
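For illustration, here is a minimal sketch of that corrected setup as a Camel Java DSL route (the logs show a Blueprint Camel context, but the Java DSL is shorter here; the hosts, ports and endpoint options below are placeholders, not taken from the actual configuration):

import org.apache.camel.builder.RouteBuilder;

public class TcpBridgeRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Consumer (server side): bind to an address that exists on this machine.
        // Binding to a remote host's address is what triggers
        // java.net.BindException: Cannot assign requested address.
        from("netty4:tcp://0.0.0.0:5150?textline=true&sync=false")
            // Producer (client side): connect out to the remote server from here.
            .to("netty4:tcp://remote-server2.example.com:6200?textline=true&sync=false");
    }
}

In short, from(...) endpoints bind locally and to(...) endpoints connect out, so only the producer URI should carry the remote host.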

Related

cannot connect to PostgreSQL DB running on EC2 instance

I have a simple PostgreSQL DB running on an EC2 instance.
ubuntu@ip-172-31-38-xx:~$ service postgresql status
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2020-06-19 14:04:12 UTC; 7h ago
Main PID: 11065 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 1152)
CGroup: /system.slice/postgresql.service
Jun 19 14:04:12 ip-172-31-38-xx systemd[1]: Starting PostgreSQL RDBMS...
Jun 19 14:04:12 ip-172-31-38-xx systemd[1]: Started PostgreSQL RDBMS.
ubuntu@ip-172-31-38-xx:~$ psql -U postgres
Password for user postgres:
psql (10.12 (Ubuntu 10.12-0ubuntu0.18.04.1))
Type "help" for help.
postgres=# SELECT *
postgres-# FROM pg_settings
postgres-# WHERE name = 'port';
name | setting | unit | category | short_desc | extra_desc | context | vartype | source | min_val | max_val | enumvals | boot_val | reset_val | sourcefile | sourceline | pending_restart
------+---------+------+------------------------------------------------------+------------------------------------------+------------+------------+---------+--------------------+---------+---------+----------+----------+-----------+-----------------------------------------+------------+-----------------
port | 5432 | | Connections and Authentication / Connection Settings | Sets the TCP port the server listens on. | | postmaster | integer | configuration file | 1 | 65535 | | 5432 | 5432 | /etc/postgresql/10/main/postgresql.conf | 63 | f
(1 row)
The only Security Group that is associated with this EC2 instance has inbound rules wide open:
5432, TCP, 0.0.0.0/0
But when I use a client to connect to this DB with the correct hostname (public IP/DNS), port number, DB name, user name and password typed in, it always says:
could not connect to server: Connection refused, is the server running on host "ec2-dns.com(172.public.ip)" and accepting TCP/IP connections on port 5432?
All right, I've figured it out from this answer
Two things I did to enable myself to connect (exactly from the link above, I'm duplicating it here for convenience):
open this file: sudo vi /etc/postgresql/10/main/pg_hba.conf
immediately below this line:
host all all 127.0.0.1/32 md5
added this line:
host all all 0.0.0.0/0 md5
open this file: sudo vi /etc/postgresql/10/main/postgresql.conf
find a line that starts with this:
#listen_addresses = 'localhost'
Uncomment the line by deleting the #, and change 'localhost' to '*'.
The line should now look like this:
listen_addresses = '*' # what IP address(es) to listen on;
then restart your service:
sudo service postgresql restart
then you should be able to connect to your DB via a SQL client.
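If you want to double-check the connection programmatically after making these changes, a minimal JDBC sketch like the following can help (the hostname, database, user and password are placeholders, and it assumes the PostgreSQL JDBC driver is on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;

public class PgConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details: use your EC2 public DNS, database, user and password.
        String url = "jdbc:postgresql://ec2-your-public-dns.compute-1.amazonaws.com:5432/postgres";
        try (Connection conn = DriverManager.getConnection(url, "postgres", "your-password")) {
            // If listen_addresses, pg_hba.conf and the security group are all correct,
            // this prints "connected: true"; otherwise it throws with the underlying cause.
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}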
Are you sure PostgreSQL is listening on the IP address and port number that you are using as the host and port parameters? Try modifying your postgresql.conf file and restarting the server.
sudo nano /etc/postgresql/{YOUR_POSTGRES_VERSION}/main/postgresql.conf
Now go on and find the connection settings and update the following values.
listen_addresses = {YOUR_IP_ADDRESS}
port = {YOUR_PORT_NUMBER}
Now save the file and restart the PostgreSQL server:
sudo systemctl restart postgresql
Check out the documentation here.

Replace zookeeper server from zookeeper ensemble (with SolrCloud)

I have a SolrCloud cluster (6.6) set up with an external ZooKeeper ensemble (3.4.8) of 5 nodes. Recently, one machine (ip1:port1) that ran 1 ZooKeeper with id=1 went down. This is what I've done to replace the zookeeper:
Start zookeeper on another machine with the same id (=1).
Change zoo.cfg on the 4 live zookeepers to match the new zookeeper server and restart.
Update the ZK_HOST variable in solr.in.sh to match the new zookeeper server.
Restart solr.
After that, my solr cluster seemed to be functioning well, but in solr.log it looked like the solr client and zookeeper servers were still trying to connect to the old zookeeper:
Solr log
2017-12-01 15:04:38.782 WARN (Timer-0-SendThread(ip1:port1)) [ ] o.a.z.ClientCnxn Client session timed out, have not heard from server in 30029ms for sessionid 0x0
2017-12-01 15:04:40.807 WARN (Timer-0-SendThread(ip1:port1)) [ ] o.a.z.ClientCnxn Client session timed out, have not heard from server in 31030ms for sessionid 0x0
Zookeeper log:
2017-12-01 13:53:57,972 [myid:] - INFO [main-SendThread(ip1:port1):ClientCnxn$SendThread#1032] - Opening socket connection to server ip1:port1. Will not attempt to authenticate using SASL (unknown error)
2017-12-01 13:54:03,972 [myid:] - WARN [main-SendThread(ip1:port1):ClientCnxn$SendThread#1162] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
2017-12-01 13:54:05,074 [myid:] - INFO [main-SendThread(ip1:port1):ClientCnxn$SendThread#1032] - Opening socket connection to server ip1:port1. Will not attempt to authenticate using SASL (unknown error)
2017-12-01 13:54:06,974 [myid:] - WARN [main-SendThread(ip1:port1):ClientCnxn$SendThread#1162] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
I've searched for how to add/remove a zookeeper server but didn't find documentation for it. My zookeeper version (3.4.7) does not support dynamic reconfiguration (which was introduced in zookeeper 3.5).
Is there a way I can manually remove/add a zookeeper server from the ensemble?
Thanks for your attention!
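One way to confirm that the replacement node is actually serving and visible to clients is to send ZooKeeper's four-letter stat command to each ensemble host. A rough sketch (the host list is a placeholder for the five ensemble members, including the new address for id=1):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class EnsembleStat {
    public static void main(String[] args) throws Exception {
        // Placeholder hosts: replace with the actual ensemble members.
        String[] hosts = {"new-zk1:2181", "zk2:2181", "zk3:2181", "zk4:2181", "zk5:2181"};
        for (String hostPort : hosts) {
            String[] parts = hostPort.split(":");
            try (Socket socket = new Socket(parts[0], Integer.parseInt(parts[1]))) {
                OutputStream out = socket.getOutputStream();
                out.write("stat".getBytes(StandardCharsets.US_ASCII)); // four-letter word
                out.flush();
                socket.shutdownOutput();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
                System.out.println("== " + hostPort);
                for (String line; (line = in.readLine()) != null; ) {
                    System.out.println(line); // includes Mode: leader/follower and connection counts
                }
            } catch (Exception e) {
                System.out.println("== " + hostPort + " unreachable: " + e);
            }
        }
    }
}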

unixODBC works but Apache will not connect

I am trying to set up Apache to connect to a Microsoft SQL Server for authentication. This is not ideal, but this legacy system has the credentials in MSSQL and that cannot change. I have unixODBC set up and working.
odbcinst.ini:
[SQL Server Native Client 11.0]
Description = Microsoft SQL Server ODBC Driver V1.0 for Linux
Driver = /opt/microsoft/sqlncli/lib64/libsqlncli-11.0.so.1790.0
Threading = 1
UsageCount = 1
odbc.ini:
[mssql]
Driver = SQL Server Native Client 11.0
Server = 192.168.250.200
Database = DBName
When I connect using isql, I am able to query the database without issue:
isql mssql username password
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
In Apache I have configured the following:
DBDriver odbc
DBDParams "datasource=mssql,user=username;pass=password"
DBDMin 1
DBDKeep 2
DBDMax 10
DBDExptime 300
When I start httpd, I get this in the error log:
[Thu Dec 10 09:10:35 2015] [dbd_odbc] SQLDriverConnect returned SQL_ERROR (-1) at dbd/apr_dbd_odbc.c:1146 [unixODBC][Microsoft][SQL Server Native Client 11.0]Login timeout expired HYT00 [unixODBC][Microsoft][SQL Server Native Client 11.0]TCP Provider: Error code 0xD 08001 [unixODBC][Microsoft][SQL Server Native Client 11.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is confi 08001
[Thu Dec 10 09:10:35.633986 2015] [dbd:error] [pid 15481] (20014)Internal error: AH00629: Can't connect to odbc: [dbd_odbc] SQLDriverConnect returned SQL_ERROR (-1) at dbd/apr_dbd_odbc.c:1146 [unixODBC][Microsoft][SQL Server Native Client 11.0]Login timeout expired HYT00 [unixODBC][Microsoft][SQL Server Native Client 11.0]TCP Provider: Error code 0xD 08001 [unixODBC][Microsoft][SQL Server Native Client 11.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is confi 08001
[Thu Dec 10 09:10:35.634054 2015] [dbd:error] [pid 15481] (20014)Internal error: AH00633: failed to initialise
[Thu Dec 10 09:10:35.634200 2015] [dbd:crit] [pid 15481] (20014)Internal error: AH00636: child init failed!
SELinux was blocking the connection from Apache.

not attempt to authenticate using SASL (unknown error)

I am trying to set up zookeeper on two ec2 instances, as described here and here.
I am trying to run zookeeper, which fails with an error:
command: bin/zkCli.sh -server localhost:2181
2015-03-15 00:22:35,644 [myid:] - INFO [main:ZooKeeper#438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher#3ff0efca
Welcome to ZooKeeper!
2015-03-15 00:22:35,671 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread#975] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2015-03-15 00:22:35,677 [myid:] - WARN [main-SendThread(localhost:2181):ClientCnxn$SendThread#1102] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[zk: localhost:2181(CONNECTING) 0] 2015-03-15 00:22:36,796 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread#975] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2015-03-15 00:22:36,797 [myid:] - WARN [main-SendThread(localhost:2181):ClientCnxn$SendThread#1102] - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
zoo.cfg is as below:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=localhost:2888:3888
server.2=<My ec2 private IPs>:2889:3889
Also, I have created the myid file on both ec2 instances: /var/lib/zookeeper/myid
I also tried to edit the /etc/hosts file, but am still facing the same issue.
Also, how can I start both of the zookeeper instances with one command?
Note: The server starts successfully if I try it with the bin/zkCli.sh start command.
Thanks in advance!
Look at the zk log zookeeper.out; if there is a connection limit error, add the following to zoo.cfg:
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=60
This is a temporary error; for me it went away after some time.
This is my zoo.conf file:
dataDir=../data
clientPort=2181
tickTime=2000
initLimit=5
This error occurred when I forgot to run %ZOOKEEPER_HOME%\bin\zkServer.cmd. By running it, the problem was resolved.
Correct this property in server.properties.
The default would be localhost; change it to match the IP and port the zookeeper server starts up on:
zookeeper.connect=0.0.0.0:2181
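If you want to rule out the shell tooling and test connectivity straight from Java once the server is running, a minimal sketch with the ZooKeeper client API could look like this (the connect string and timeout are placeholders; use whatever host:port the server is actually bound to):

import org.apache.zookeeper.ZooKeeper;

public class ZkConnectCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connect string and session timeout.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000,
                event -> System.out.println("event: " + event.getState()));
        try {
            // exists() only succeeds once a session is established, so a server that
            // refuses connections surfaces here as a ConnectionLossException.
            System.out.println("root znode stat: " + zk.exists("/", false));
        } finally {
            zk.close();
        }
    }
}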

zookeeper is not running after restart

I have 3 zookeeper nodes. Those nodes were working fine, but when I restarted them using ./zkServer.sh restart, zookeeper did not come up again.
When I checked the zookeeper status, it returned:
./zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
My zoo.cfg is:
dataDir=/var/lib/zookeeperdata/3
clientPort=2181
initLimit=50
tickTime=2000
syncLimit=10
maxClientCnxns=100000
server.1=IP1 value:2888:3888
server.2=IP2 value:2889:3889
server.3=127.0.0.1:2890:3890
This is unstable behavior: maybe after two hours, or tomorrow, if I restart the 3 zookeeper nodes they will see each other and work fine, because this has happened to me before.
zookeeper log:
2014-05-14 15:22:34,236 [myid:3] - INFO [main:NIOServerCnxnFactory#94] - binding to port 0.0.0.0/0.0.0.0:2181
2014-05-14 15:22:34,282 [myid:3] - INFO [main:QuorumPeer#913] - tickTime set to 2000
2014-05-14 15:22:34,283 [myid:3] - INFO [main:QuorumPeer#933] - minSessionTimeout set to -1
2014-05-14 15:22:34,283 [myid:3] - INFO [main:QuorumPeer#944] - maxSessionTimeout set to -1
2014-05-14 15:22:34,283 [myid:3] - INFO [main:QuorumPeer#959] - initLimit set to 50
2014-05-14 15:22:34,356 [myid:3] - INFO [main:FileSnap#83] - Reading snapshot /var/lib/zookeeperdata/3/version-2/snapshot.f100000001
2014-05-14 15:22:43,387 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#197] - Accepted socket connection from /127.0.0.1:50923
2014-05-14 15:22:43,396 [myid:3] - INFO [Thread-1:QuorumCnxManager$Listener#486] - My election bind port: 0.0.0.0/0.0.0.0:3890
2014-05-14 15:22:43,404 [myid:3] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#354] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2014-05-14 15:22:43,404 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1001] - Closed socket connection for client /127.0.0.1:50923 (no session established for client)
2014-05-14 15:22:43,427 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:QuorumPeer#670] - LOOKING
2014-05-14 15:22:43,429 [myid:3] - INFO [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:FastLeaderElection#740] - New election. My id = 3, proposed zxid=0xf100000001
2014-05-14 15:22:48,438 [myid:3] - WARN [WorkerSender[myid=3]:QuorumCnxManager#368] - Cannot open channel to 1 at election address /54.76.10.81:3888
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:354)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:327)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:393)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:365)
at java.lang.Thread.run(Thread.java:662)
2014-05-14 15:22:53,440 [myid:3] - WARN [QuorumPeer[myid=3]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager#368] - Cannot open channel to 1 at election address /54.76.10.81:3888
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:354)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:388)
I searched a lot on this but did not find anything useful, so I hope someone can help me.
Thanks
I've seen behavior like this as well. A ZK configuration that's been running fine will sometimes simply fail to restart. When this happens I've tried the following:
1) look at the logs for all of the servers...often one will list an error
2) stop all servers and restart
3) stop all servers and restart the servers one at a time
4) verify that each server's myid file exists, has correct permissions and has the right value.
I've used clusterssh to open windows to each of the servers so that the restarts can be at the very same time...and then I've tailed all of the server logs. Keep in mind that during restart the ZK cluster is doing a lot: both starting each server and electing a leader. I've had times when the cluster seemed to fail and then after a few more minutes it seems to figure it out.
There is a great tool called zktop that I've used for monitoring ZK.
I fixed it by changing the IP 127.0.0.1 to the internal IP of the Amazon node. After making this change on the three nodes and restarting, this problem did not happen again. I hope this answer can help someone with the same problem.
Make sure you have put the correct dataDir in each of your node configurations.
Also put a myid file in the dataDir, with a number between 1 and 255 for each of your nodes, as sketched below.
I think that resolves the issue.
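As a quick sanity check for the myid point, something like the following sketch can verify that the file exists, is readable and holds a value in the 1-255 range (the dataDir path is taken from the zoo.cfg in the question and should be adjusted per node):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MyidCheck {
    public static void main(String[] args) throws Exception {
        // Point this at the dataDir configured in zoo.cfg on the node being checked.
        Path myid = Paths.get("/var/lib/zookeeperdata/3", "myid");
        String raw = new String(Files.readAllBytes(myid)).trim();
        int id = Integer.parseInt(raw); // throws if the file is empty or not a number
        if (id < 1 || id > 255) {
            throw new IllegalStateException("myid must be between 1 and 255, got " + id);
        }
        System.out.println("myid looks valid: " + id
                + " (should match the server." + id + " line in zoo.cfg)");
    }
}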
