Camel SFTP can't establish connection - apache-camel

I have an SFTP server up in one Docker container, available at localhost:2222 with user user/pass.
I am trying to establish a connection from another container via a Camel 2.22.0 route like:
from("sftp://user@localhost:2222/sftp/in?password=pass")
    .log("${file:name}");
But it cannot connect because of:
Error auto creating directory:/sftp/in due Cannot connect to sftp://user#localhost:2222. This exception is ignored.
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://pms#localhost:2222
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:144)
at org.apache.camel.component.file.remote.RemoteFileConsumer.connectIfNecessary(RemoteFileConsumer.java:233)
Caused by: com.jcraft.jsch.JSchException: java.net.ConnectException: Connection refused (Connection refused)
at com.jcraft.jsch.Util.createSocket(Util.java:394)
I got this after moving from Camel 2.18.2 to Camel 2.22.0.
Is it possible to fix?

We upgraded from Camel 2.20.0 to Camel 2.22.0 during development. After the upgrade we could not reach Camel from another server: same problem, Connection refused. We downgraded back to 2.20.0 and things started working again.

I have also had this issue and resolved it by adding the camel-ftp dependency:
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-ftp</artifactId>
    <version>3.16.0</version>
</dependency>
Please check which dependency version works for you here: https://mvnrepository.com/artifact/org.apache.camel/camel-ftp
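For completeness, a minimal standalone wiring of the route from the question might look like the sketch below, assuming Camel 2.x with camel-core and camel-ftp on the classpath (the class name and the Main-based bootstrap are just for illustration):
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

// Sketch only: polls /sftp/in on the SFTP server from the question
// (localhost:2222, credentials user/pass) and logs each file name.
public class SftpPollRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("sftp://user@localhost:2222/sftp/in?password=pass")
            .log("${file:name}");
    }

    public static void main(String[] args) throws Exception {
        Main main = new Main();                  // simple standalone Camel bootstrap
        main.addRouteBuilder(new SftpPollRoute());
        main.run();                              // blocks until the JVM is stopped
    }
}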

Related

Connecting Keycloak domain mode to a remote MariaDB

I wanted to deploy Keycloak (v15.0.2) in domain mode using MariaDB as an external DB.
Imagine my DB is on 10.0.0.1. I deploy my master on 10.0.0.1 as well by modifying the "KeycloakDS" datasource and the drivers in domain.xml.
I also wanted to deploy a slave on 10.0.0.2 by modifying domain.xml and host-slave.xml as the Keycloak documentation mentions (link). I made the changes below to "KeycloakDS" in domain.xml:
<datasource jndi-name="java:jboss/datasources/KeycloakDS" pool-name="KeycloakDS">
    <connection-url>jdbc:mariadb://10.0.0.1:3306/keycloak</connection-url>
    <driver>mariadb</driver>
    <security>
        <user-name>myuser</user-name>
        <password>mypassword</password>
    </security>
</datasource>
Note that telnet from 10.0.0.2 to 10.0.0.1 on port 3306 is OK.
After the above changes, I wanted to deploy the slave on 10.0.0.2, but I keep facing the error below:
Caused by: java.sql.SQLNonTransientConnectionException:
Socket fail to connect to host:address=(host=localhost)(port=3306)(type=primary). Connection refused: connect
Also note that the scenario works properly in standalone-ha mode when making the same changes in standalone-ha.xml.
I followed this link: Installing and Configuring Keycloak - Domain Clustered Deployment
Does anyone have a suggestion for how I can solve this problem?
Try changing the datasource under both occurrences of <subsystem xmlns="urn:jboss:domain:datasources:6.0"> in the domain.xml file.
When I was reviewing the file, I found two occurrences of that subsystem.
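As a quick sanity check outside WildFly, a tiny JDBC probe run from the slave host (10.0.0.2) can confirm that the connection-url from the question really reaches 10.0.0.1 rather than localhost. This is only a sketch with a hypothetical class name, assuming the MariaDB Connector/J jar is on the classpath:
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch: open a connection with the exact URL and credentials from domain.xml.
public class KeycloakDsCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mariadb://10.0.0.1:3306/keycloak";
        try (Connection c = DriverManager.getConnection(url, "myuser", "mypassword")) {
            System.out.println("Connected to: " + c.getMetaData().getDatabaseProductVersion());
        }
    }
}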

Failed to submit JobGraph Apache Flink

I am trying to run the simple code below after building everything from Flink's GitHub master branch (for various reasons). I get the exception below, and I wonder what runs on port 9065 and how to fix this exception.
val dataStream = senv.fromElements(1, 2, 3, 4)
dataStream.countWindowAll(2).sum(0).print()
senv.execute("My streaming program")
Below is the exception:
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$18(RestClusterClient.java:306)
at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$222(RestClient.java:196)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:268)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:284)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.CompletionException: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9065
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
... 16 more
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9065
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:281)
I built it from the sources in the following way (I just followed the instructions on the Flink GitHub page):
git clone https://github.com/apache/flink.git
cd flink
mvn clean package -DskipTests
cd build-target
./bin/start-scala-shell.sh local
The underlying distributed runtime is currently being heavily worked on in master. Starting from 1.5, the default runtime will be the one known as FLIP-6, so occasionally some parts might not work. I think it would be very beneficial if you could create a JIRA ticket for this.
Just to add what runs on port 9065: in the new architecture it is the default port of the Dispatcher.
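If you want to confirm whether anything is actually listening on that port before submitting the job, a plain-Java probe such as the sketch below can help (the class name is hypothetical; it simply attempts a TCP connection to localhost:9065 and reports the outcome):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch: try to open a TCP connection to the Dispatcher's default REST port.
public class DispatcherPortCheck {
    public static void main(String[] args) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("localhost", 9065), 2000); // 2 s timeout
            System.out.println("Something is listening on port 9065");
        } catch (IOException e) {
            System.out.println("Nothing reachable on port 9065: " + e.getMessage());
        }
    }
}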
I had the same exception. My issue was that I had a port conflict when starting the cluster with a Docker image on my machine, so I had changed the REST port in the Flink config file to 8084 instead of 8081. With that change the cluster would start up properly, but I was unable to submit the job. When I killed the conflicting process and reverted the port back to 8081, I could submit jobs successfully.
I got the same error.
Use JDK 1.8 for Flink 1.7.2.

Jackrabbit MSSQL database repository

Is there any specific reason why the connection to the MSSQL server is abruptly lost? I am running the web application on the same machine, so network connectivity issues are out of the question. My application uses Jackrabbit configured to store content in MSSQL. The application is running on WildFly 9 with JAVA_HOME set to JDK 1.8, and I have verified that WildFly is picking it up as well. Also, sqljdbc4-3.0.jar is available to the WildFly modules with the proper driver configuration in standalone.xml. I am baffled as to why the JDK 7 adapter would be called to manage the connection. However, an interesting observation is that this does not occur with WildFly 10.
standalone.xml
<driver name="sqlserver" module="com.microsoft.sqlserver">
    <driver-class>com.microsoft.sqlserver.jdbc.SQLServerDriver</driver-class>
    <xa-datasource-class>com.microsoft.sqlserver.jdbc.SQLServerXADataSource</xa-datasource-class>
</driver>
exception:
ERROR 21-07 16:41:13,636 (DbUtility.java:logException:92) failed to close ResultSet
ERROR 21-07 16:41:13,637 (DbUtility.java:logException:94) Reason: IJ031040: Connection is not associated with a managed connection: org.jboss.jca.adapters.jdbc.jdk7.WrappedConnectionJDK7#386eff84
ERROR 21-07 16:41:13,639 (DbUtility.java:logException:95) State/Code: null/0

SAML - Service Provider could not handle the request

I am self-learning SAML using the PicketLink quickstarts: https://github.com/jboss-developer/jboss-picketlink-quickstarts.
I deployed picketlink-federation-saml-idp-basic-wildfly.war in WildFly 9.0.2 running on port 9080, and picketlink-federation-saml-sp-post-basic-wildfly.war in WildFly 9.0.2 running on port 8080. I also updated standalone.xml to set up the security domains for the IDP and SP.
The only change I had to make in the sample was to update the picketlink-jbas7 dependency, since the version in the sample, 2.8.0.Beta1-SNAPSHOT, cannot be resolved. The Maven dependency I am using in the IDP is:
<dependency>
    <groupId>org.picketlink.distribution</groupId>
    <artifactId>picketlink-jbas7</artifactId>
    <version>2.7.0.Final</version>
    <scope>provided</scope>
</dependency>
The issue I am facing is that when I log in to the IDP and click on the SP link, I get the following exception in the SP logs:
23:05:55,833 ERROR [org.picketlink.common] (default task-5) Service Provider could not handle the request.: java.lang.NullPointerException
at org.picketlink.identity.federation.web.handlers.saml2.SAML2IssuerTrustHandler$SPTrustHandler.handleStatusResponseType(SAML2IssuerTrustHandler.java:143)
at org.picketlink.identity.federation.web.handlers.saml2.SAML2IssuerTrustHandler.handleStatusResponseType(SAML2IssuerTrustHandler.java:70)
at org.picketlink.identity.federation.web.process.SAMLHandlerChainProcessor.callHandlerChain(SAMLHandlerChainProcessor.java:67)
at org.picketlink.identity.federation.web.process.ServiceProviderSAMLResponseProcessor.processHandlersChain(ServiceProviderSAMLResponseProcessor.java:106)
at org.picketlink.identity.federation.web.process.ServiceProviderSAMLResponseProcessor.process(ServiceProviderSAMLResponseProcessor.java:88)
at org.picketlink.identity.federation.bindings.wildfly.sp.SPFormAuthenticationMechanism.handleSAML2Response(SPFormAuthenticationMechanism.java:516)
at org.picketlink.identity.federation.bindings.wildfly.sp.SPFormAuthenticationMechanism.handleSAMLResponse(SPFormAuthenticationMechanism.java:306)
at org.picketlink.identity.federation.bindings.wildfly.sp.SPFormAuthenticationMechanism.authenticate(SPFormAuthenticationMechanism.java:268)
at io.undertow.security.impl.SecurityContextImpl$AuthAttempter.transition(SecurityContextImpl.java:339)
at io.undertow.security.impl.SecurityContextImpl$AuthAttempter.transition(SecurityContextImpl.java:356)
at io.undertow.security.impl.SecurityContextImpl$AuthAttempter.access$100(SecurityContextImpl.java:325)
at io.undertow.security.impl.SecurityContextImpl.attemptAuthentication(SecurityContextImpl.java:138)
at io.undertow.security.impl.SecurityContextImpl.authTransition(SecurityContextImpl.java:113)
at io.undertow.security.impl.SecurityContextImpl.authenticate(SecurityContextImpl.java:106)
at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:55)
at io.undertow.server.handlers.DisableCacheHandler.handleRequest(DisableCacheHandler.java:33)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51)
at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56)
at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)
at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)
at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:282)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:261)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Please let me know what I am doing wrong.
Thanks
I faced the same issue learning from the picketbox quickstarts. I am working with WildFly 10.1.0.Final.
The first thing I noticed was that, in order to get the "Basic" setup working, the following are necessary (https://github.com/jboss-developer/jboss-picketlink-quickstarts):
IDP: picketlink-federation-saml-idp-basic
SP(s): picketlink-federation-saml-sp-post-basic and picketlink-federation-saml-sp-redirect-basic
I deployed all the generated .war files in one container for simplicity.
There were two things that helped me find what was going on:
enabling TRACE logging
noticing that the PicketLink version in WildFly 10 is 2.5.5.SP2, and SAML2LoginModule was not found in that package in picketlink-wildfly8-2.5.5.SP2.jar
In particular, I had a problem with the login module, getting this error:
Class org.picketlink.identity.federation.bindings.wildfly.SAML2LoginModule not found from Module "deployment.picketlink-federation-saml-sp-post-basic-wildfly.war:main" from Service Module Loader
Login failure: javax.security.auth.login.LoginException: unable to find LoginModule class: org.picketlink.identity.federation.bindings.wildfly.SAML2LoginModule
What I did was change the login module to org.picketlink.identity.federation.bindings.jboss.auth.SAML2LoginModule, and the quickstart started working.
I gave up on PicketLink.
I used OpenSAML, and I was able to develop IdP-initiated and SP-initiated flows with no issues.
References:
https://wiki.shibboleth.net/confluence/display/OpenSAML/OSTwoUserManual#
https://github.com/rasmusson/webprofile-ref-project

CXF Weblogic - Override CXF logger to Slf4j

We are using Apache CXF 2.5.2 for web service client proxies. We use WebLogic 10.3.4. To override the CXF logger we use the following option:
-Dorg.apache.cxf.Logger=org.apache.cxf.common.logging.Slf4jLogger
For org.apache.cxf.common.logging.Slf4jLogger, we've included cxf-common-utilities-2.5.2 in our build.
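As an aside on that mechanism: the same property can also be set programmatically early in application startup, before any CXF class is loaded. A minimal sketch, assuming a servlet container and a hypothetical listener registered first in web.xml:
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Sketch only: sets the same property as the -D flag before CXF's logging initializes.
public class CxfLoggerInitListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        System.setProperty("org.apache.cxf.Logger",
                "org.apache.cxf.common.logging.Slf4jLogger");
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}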
When we try to deploy to WebLogic we get the following exception:
[ERROR] Target state: deploy failed on Server AdminServer
[ERROR] java.lang.NoClassDefFoundError: weblogic/wsee/jaxws/spi/WLSProvider
According to the documentation, your web service client should have wseeclient.jar as its runtime dependency. If the problem is still there, then include wlfullclient.jar (see Creating a wlfullclient.jar for JDK 1.6 client applications).
