JMSWMQ0018: Failed to connect to queue manager 'RBA.QM.GZ' with connection mode 'Client'
and host name 'rba007.in.ibm.com(1416)'. Check the queue manager is started and if running in client mode,
check there is a listener running. Please see the linked exception for more information.
JMSCMQ0001: WebSphere MQ call failed with compcode '2' ('MQCC_FAILED') reason '2059'
('MQRC_Q_MGR_NOT_AVAILABLE').
I am getting these two errors; can anyone tell me what is wrong?
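For context, reason code 2059 (MQRC_Q_MGR_NOT_AVAILABLE) in client mode usually means the queue manager is not started, no listener is running on the given port, or the connection details (host, port, channel, queue manager name) do not match. A minimal standalone connection test using the IBM MQ classes for JMS is sketched below; the channel name is an assumption, since the error does not show it, so substitute the SVRCONN channel your application actually uses.

// Minimal client-mode connection check; host, port and queue manager are taken
// from the error message, the channel name is a placeholder (assumption).
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;
import javax.jms.QueueConnection;

public class MQConnectionCheck {
    public static void main(String[] args) throws Exception {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setHostName("rba007.in.ibm.com");
        cf.setPort(1416);
        cf.setQueueManager("RBA.QM.GZ");
        cf.setChannel("SYSTEM.DEF.SVRCONN"); // placeholder, use your real SVRCONN channel
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        QueueConnection conn = cf.createQueueConnection();
        conn.start();
        System.out.println("Connected to queue manager RBA.QM.GZ");
        conn.close();
    }
}

If this small test also fails with 2059, the problem is on the MQ side (queue manager stopped or no listener on 1416) rather than in the application.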
[ubldev#UBL-DEV ~]$ ./DBT
Exception in thread "main" com.temenos.tafj.common.exception.DatabaseRuntimeException: TAFJERR-1020: Error database connection Details : TAFJERR-1020: Error database connection Details : IO Error: Connection reset by peer, Authentication lapse 316987 ms.
at com.temenos.tafj.dataaccess.DataAccessExecutor.doConnection(DataAccessExecutor.java:1825)
at com.temenos.tafj.dataaccess.DataAccessExecutor.doConnection(DataAccessExecutor.java:1667)
at com.temenos.tafj.dataaccess.DataAccessExecutor.getConnection(DataAccessExecutor.java:1639)
at com.temenos.tafj.dataaccess.DataAccessExecutor.getConnection(DataAccessExecutor.java:1630)
at com.temenos.tafj.dataaccess.JDBCDataAccessConductor.getConnection(JDBCDataAccessConductor.java:5301)
at com.temenos.tafj.tools.integration.DBToolsConsole.getConnection(DBToolsConsole.java:1330)
at com.temenos.tafj.tools.integration.DBToolsConsole.init(DBToolsConsole.java:250)
at com.temenos.tafj.tools.integration.AbstractConsole.doIt(AbstractConsole.java:236)
at com.temenos.tafj.tools.integration.DBToolsConsole.main(DBToolsConsole.java:208)
I am trying to access my database on Red Hat but am unable to connect to it; the error above is what I get. Please suggest a way forward.
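The nested message ("IO Error: Connection reset by peer", with a long authentication lapse) appears to come from the JDBC driver rather than from the TAFJ layer, so a useful first step is to confirm that a plain JDBC connection to the same database works from that Red Hat host. A minimal sketch is below; the URL, user, and password are placeholders, so use exactly the values DBT is configured with.

// Plain JDBC connectivity check, independent of TAFJ.
// The connection details are placeholders (assumptions), not the real ones.
import java.sql.Connection;
import java.sql.DriverManager;

public class DbConnectionCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@dbhost:1521/ORCL"; // placeholder JDBC URL
        try (Connection c = DriverManager.getConnection(url, "dbuser", "dbpass")) {
            System.out.println("Connected: " + c.getMetaData().getDatabaseProductVersion());
        }
    }
}

If this also fails, the problem lies between the host and the database (listener, firewall, or a connection being dropped mid-handshake) rather than in the DBT tool itself.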
I have set up a Flink standalone cluster with one master and three slaves, all SUSE Linux machines. In the master dashboard at http://flink-master:8081/ I can see 3 Task Managers and 3 task slots, as I have set taskmanager.numberOfTaskSlots: 1 in flink-conf.yaml on all of the slaves.
When I run a Flink built-in program, like examples/streaming/Iteration.jar, I often get this exception:
java.io.IOException: Connecting the channel failed: Connecting to remote task manager + 'ccr202/127.0.0.2:49651' has failed. This might indicate that the remote task manager has been lost.
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory$ConnectingChannel.waitForChannel(PartitionRequestClientFactory.java:197)
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory$ConnectingChannel.access$000(PartitionRequestClientFactory.java:132)
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory.createPartitionRequestClient(PartitionRequestClientFactory.java:84)
at org.apache.flink.runtime.io.network.netty.NettyConnectionManager.createPartitionRequestClient(NettyConnectionManager.java:59)
at org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel.requestSubpartition(RemoteInputChannel.java:156)
at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.requestPartitions(SingleInputGate.java:480)
at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate.getNextBufferOrEvent(SingleInputGate.java:502)
at org.apache.flink.streaming.runtime.io.BarrierTracker.getNextNonBlocked(BarrierTracker.java:93)
at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:214)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:264)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException: Connecting to remote task manager + 'ccr202/127.0.0.2:49651' has failed. This might indicate that the remote task manager has been lost.
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory$ConnectingChannel.operationComplete(PartitionRequestClientFactory.java:220)
at org.apache.flink.runtime.io.network.netty.PartitionRequestClientFactory$ConnectingChannel.operationComplete(PartitionRequestClientFactory.java:132)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:268)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:284)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
... 1 more
Caused by: java.net.ConnectException: Connection refused: ccr202/127.0.0.2:49651
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:281)
... 6 more
It seems that the network causes the problem, but sometimes the Flink program finishes successfully. So what is the reason?
I also encounter this issue very frequently, especially when there are many TaskManagers. It happens when a TaskManager reads a remote partition through a Netty connection and the connection request times out. I tried a few configuration changes to solve it; increasing "taskmanager.network.netty.server.numThreads" solved the issue for me.
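For reference, that setting lives in flink-conf.yaml on the TaskManager hosts and is picked up after restarting them. The values below are only an illustration, not a recommendation, and the client-side counterpart can be raised the same way:

taskmanager.network.netty.server.numThreads: 8
taskmanager.network.netty.client.numThreads: 8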
I'm now trying to start up a Liferay 6.1.1 GA2 server with a SQL Server database.
I created the user and database from the Windows command line with "sqlcmd -S localhost" as follows:
1> create database liferayportal;
2> create login lrUser with password='abc123';
3> create user lrUser for login lrUser;
4> exec sp_addrolemember 'liferayportal' , 'lrUser';
the portal-ext.properties:
jdbc.default.driverClassName=net.sourceforge.jtds.jdbc.Driver
jdbc.default.url=jdbc:jtds:sqlserver://localhost/liferayportal
jdbc.default.username=lrUser
jdbc.default.password=abc123
And this is a summary of the stack trace:
09:28:36,657 WARN [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1][BasicResourcePool:1841] com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#6245a4 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (3). Last acquisition attempt exception:
java.sql.SQLException: Network error IOException: Connection refused: connect
at net.sourceforge.jtds.jdbc.ConnectionJDBC2.<init>(ConnectionJDBC2.java:410)
.
.
.
09:28:36,702 WARN [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0][BasicResourcePool:1841] com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#c8aeb3 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (3). Last acquisition attempt exception:
java.sql.SQLException: Network error IOException: Connection refused: connect
at net.sourceforge.jtds.jdbc.ConnectionJDBC2.<init>(ConnectionJDBC2.java:410)
.
.
09:28:36,717 WARN [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#7][BasicResourcePool:1841] com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#fab5b1 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (3). Last acquisition attempt exception:
java.sql.SQLException: Network error IOException: Connection refused: connect
at net.sourceforge.jtds.jdbc.ConnectionJDBC2.<init>(ConnectionJDBC2.java:410)
.
.
.
09:28:36,743 WARN [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#4][BasicResourcePool:1841] com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#12524b0 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (3). Last acquisition attempt exception:
java.sql.SQLException: Network error IOException: Connection refused: connect
at net.sourceforge.jtds.jdbc.ConnectionJDBC2.<init>(ConnectionJDBC2.java:410)
.
.
.
I would check whether the server has the port used by MSSQL (normally 1433) open for remote access. If not, you should configure MSSQL to listen on that port, allow remote access, and verify that your firewall allows access to that port.
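A quick way to verify this from the Liferay machine, before touching the portal configuration, is to open the same jTDS connection outside of Liferay. A minimal sketch is below, assuming the default port 1433 and the jTDS driver jar on the classpath; if it also fails with "Connection refused", the TCP/IP protocol is likely disabled for the SQL Server instance or the port is blocked.

// Standalone jTDS connectivity check against the same database Liferay uses.
// Assumes SQL Server listens on the default TCP port 1433.
import java.sql.Connection;
import java.sql.DriverManager;

public class SqlServerCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("net.sourceforge.jtds.jdbc.Driver");
        String url = "jdbc:jtds:sqlserver://localhost:1433/liferayportal";
        try (Connection c = DriverManager.getConnection(url, "lrUser", "abc123")) {
            System.out.println("Connected to database " + c.getCatalog());
        }
    }
}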
I've recently installed the BizTalk 2013 HL7 adapter on my development machine. During setup, it asks for the logging account, which I provided and was successfully added and the installation finished without a hitch.
However, when I try to submit a message to the Receive port configured to use the HL7 pipeline, I always receive the same errors.
First there is an "Information" event log stating:
Login failed for user 'my-BizTalk-HOST-account'. Reason: Failed to open
the explicitly specified database. [CLIENT: 1.2.3.4]
Then immediately after there is:
There was a failure executing the receive pipeline: "BTAHL72XPipelines.BTAHL72XReceivePipeline, BTAHL72XPipelines, Version=1.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Source: "BTAHL7 2.X Disassembler" Receive Port: "my-receive-port-name" URI: "0.0.0.0:11001" Reason: Unable to configure the logstore.
If I look at the Details tab in the event, the Binary Data in Bytes shows the name of my server, followed by "master".
A few points to consider:
we do not have SQL Server logging enabled in the HL7 configuration tool (just the event log)
my-BizTalk-HOST-account is not the account that is configured for HL7 logging anyway, so why is it being used?
I'm not sure why it's trying to access the master database (if that is indeed what the event log is telling me)
SQL logins/users for my-BizTalk-HOST-account are set up in the BizTalk databases with proper permissions
sending to any other receive location behaves fine; it's just those using the BTAHL72XReceivePipeline
Can anyone explain this or have a fix?
I have set up 3 servers with Amazon EC2, and each server has the following ZooKeeper config:
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
server.1=server1address:2888:3888
server.2=server3address:2888:3888
server.3=server3address:2888:3888
I start zookeeper on each server, and after I start Solr on the servers, I get errors like this in Solr:
3766 [main] INFO org.apache.solr.common.cloud.ConnectionManager – Waiting for client to connect to ZooKeeper
3790 [main-SendThread(*serverAddress*:2181)] WARN org.apache.zookeeper.ClientCnxn – Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
This apparently happened because ZooKeeper wasn't running properly. What I then figured out was that ZooKeeper was producing this error:
2013-06-09 08:00:57,953 [myid:1] - INFO [ec2amazonaddress.com/ipaddress#amazon:QuorumCnxManager$Listener#493] - Received connection request /ipaddress:60855
2013-06-09 08:00:57,963 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager#368] - Cannot open channel to 3 at election address ec2amazonaddress/ipaddress#amazon:3888
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:354)
So the problem is with ZooKeeper. What I did was start another server before the one I had previously started first, and then it worked. However, after some restarts that didn't work anymore. In other words, it seems like the order in which you start the ZK servers matters. I could see that some servers that were fired up first went into follower mode instead of leader mode right away, and maybe that's the reason. I have deleted and reinstalled my whole setup, but the problem is still there.
I have checked the ports and have killed all processes using ports 2181 and 2888/3888 before launching Zookeeper. What bothers me is that this has worked with the same setup earlier.
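For reference, a small probe along these lines (the host names are placeholders) can confirm from each instance whether the other peers' client and election ports are reachable; note that 2888 is normally bound only on the current leader, so a refusal on that port from followers is expected.

// Checks whether the ZooKeeper client (2181) and quorum/election (2888/3888)
// ports of each peer are reachable from this machine. Host names are placeholders.
import java.net.InetSocketAddress;
import java.net.Socket;

public class ZkPortProbe {
    public static void main(String[] args) {
        String[] hosts = {"server1address", "server2address", "server3address"};
        int[] ports = {2181, 2888, 3888};
        for (String host : hosts) {
            for (int port : ports) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(host, port), 3000);
                    System.out.println(host + ":" + port + " reachable");
                } catch (Exception e) {
                    System.out.println(host + ":" + port + " NOT reachable: " + e.getMessage());
                }
            }
        }
    }
}

If 3888 is refused between the EC2 instances even while ZooKeeper is running on them, the security group rules are a likely suspect.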
I hope some of you have experience with this problem. Any suggestion related to not being able to connect to the ZK servers is also welcome.