Logs are not received in Hawkular APM from Zipkin Client - apache-camel

I have a client application instrumented with the Zipkin library, configured via Spring application.properties:
camel.zipkin.host-name=hawkular-apm-server.com
camel.zipkin.port=443
camel.zipkin.include-message-body-streams=true
Maven dependency:
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-zipkin-starter</artifactId>
</dependency>
The Hawkular APM server console is reachable from the local machine.
However, when the REST API exposed by the client application is invoked, the Zipkin trace is logged locally, but the spans are never collected by the Hawkular APM server.
04:31:55.632 [http-nio-0.0.0.0-8080-exec-1] INFO o.a.c.c.s.CamelHttpTransportServlet - Initialized CamelHttpTransportServlet[name=CamelServlet, contextPath=]
04:31:55.668 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - clientRequest [service=MyCamelClient, traceId=-5541987202080201726, spanId=-5541987202080201726]
04:31:55.672 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - serverRequest [service=MyCamel, traceId=-5541987202080201726, spanId=-5541987202080201726]
04:31:55.676 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - serverResponse[service=MyCamel, traceId=-5541987202080201726, spanId=-5541987202080201726]
04:31:55.677 [http-nio-0.0.0.0-8080-exec-1] DEBUG org.apache.camel.zipkin.ZipkinTracer - clientResponse[service=MyCamelClient, traceId=-5541987202080201726, spanId=-5541987202080201726]
04:31:55.758 [pool-1-thread-1] WARN o.a.t.transport.TIOStreamTransport - Error closing output stream.
java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:118)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at org.apache.thrift.transport.TIOStreamTransport.close(TIOStreamTransport.java:110)
at org.apache.thrift.transport.TSocket.close(TSocket.java:194)
at org.apache.thrift.transport.TFramedTransport.close(TFramedTransport.java:89)
at com.github.kristofa.brave.scribe.ScribeClientProvider.close(ScribeClientProvider.java:96)
at com.github.kristofa.brave.scribe.ScribeClientProvider.exception(ScribeClientProvider.java:75)
at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:123)
at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:109)
at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:95)
at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:35)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
04:31:55.763 [pool-1-thread-1] WARN c.g.k.b.scribe.SpanProcessingThread - Logging spans failed. 1 spans are lost!
org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at com.twitter.zipkin.gen.scribe$Client.recv_Log(scribe.java:74)
at com.twitter.zipkin.gen.scribe$Client.Log(scribe.java:61)
at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:127)
at com.github.kristofa.brave.scribe.SpanProcessingThread.log(SpanProcessingThread.java:109)
at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:95)
at com.github.kristofa.brave.scribe.SpanProcessingThread.call(SpanProcessingThread.java:35)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I am not sure if it's a configuration issue in the client application, since the Hawkular APM UI opens properly.
My understanding is that a Zipkin client can be integrated with Hawkular APM simply by substituting the Hawkular URL for the Zipkin server URL, but that does not seem to work here.
Any suggestions on this? Unfortunately, I could not find any examples either.
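For completeness, the stack trace above shows the spans going out over Brave's Scribe/Thrift transport (the com.github.kristofa.brave.scribe classes), which camel-zipkin falls back to when only host-name/port are configured. Below is a minimal sketch of the HTTP-collector wiring I have been experimenting with instead; it assumes brave-spancollector-http is on the classpath, and whether the starter actually picks up these beans is an open question:

import com.github.kristofa.brave.EmptySpanCollectorMetricsHandler;
import com.github.kristofa.brave.SpanCollector;
import com.github.kristofa.brave.http.HttpSpanCollector;
import org.apache.camel.zipkin.ZipkinTracer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ZipkinHttpCollectorConfig {

    // Sketch: send spans over Zipkin's HTTP API instead of Scribe/Thrift.
    // HttpSpanCollector POSTs to <baseUrl>/api/v1/spans.
    @Bean
    public SpanCollector spanCollector() {
        return HttpSpanCollector.create(
                "https://hawkular-apm-server.com",       // assumed base URL
                new EmptySpanCollectorMetricsHandler());
    }

    // Sketch: a ZipkinTracer wired to the HTTP collector. Whether the
    // camel-zipkin-starter auto-configuration uses this bean instead of
    // building its own Scribe-based collector from host-name/port is an
    // assumption; the host/port properties may need to be removed and
    // tracer.init(camelContext) called manually.
    @Bean
    public ZipkinTracer zipkinTracer(SpanCollector collector) {
        ZipkinTracer tracer = new ZipkinTracer();
        tracer.setSpanCollector(collector);
        tracer.setServiceName("MyCamel");
        return tracer;
    }
}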

Related

Zeppelin throws java.lang.NullPointerException when starting SparkInterpreter after adding Spark dependencies via Maven

I'm trying to add Spark dependencies via Maven by specifying groupId:artifactId:version in the Dependencies section of Zeppelin's Interpreter console. Once I saved it and executed a Spark paragraph, a java.lang.NullPointerException was thrown; the full log is below:
java.lang.NullPointerException
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:44)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:39)
at org.apache.zeppelin.spark.OldSparkInterpreter.createSparkContext_2(OldSparkInterpreter.java:375)
at org.apache.zeppelin.spark.OldSparkInterpreter.createSparkContext(OldSparkInterpreter.java:364)
at org.apache.zeppelin.spark.OldSparkInterpreter.getSparkContext(OldSparkInterpreter.java:172)
at org.apache.zeppelin.spark.OldSparkInterpreter.open(OldSparkInterpreter.java:740)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:61)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at org.apache.zeppelin.spark.SparkSqlInterpreter.getSparkInterpreter(SparkSqlInterpreter.java:76)
at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:92)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:633)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I then removed those Maven dependencies, but the exception didn't disappear.
According to the log, it seems Zeppelin is unable to start a Spark interpreter, so I looked at the Spark interpreter log at zeppelin-interpreter-spark-zeppelin-development-cluster-m.log, but nothing was logged when I ran a Spark paragraph. Below is the Zeppelin log found in zeppelin-zeppelin-development-cluster-m.log:
INFO [2018-08-10 13:24:53,036] ({qtp2110245805-15} VFSNotebookRepo.java[save]:196) - Saving note:2DM9MXZGM
INFO [2018-08-10 13:24:53,053] ({pool-2-thread-2} SchedulerFactory.java[jobStarted]:109) - Job 20180810-111509_373986425 started by scheduler org.apache.zeppelin.interpreter.remote.RemoteInterpreter-spark:shared_process-2DM9MXZGM
INFO [2018-08-10 13:24:53,054] ({pool-2-thread-2} Paragraph.java[jobRun]:380) - Run paragraph [paragraph_id: 20180810-111509_373986425, interpreter: sql, note_id: 2DM9MXZGM, user: anonymous]
WARN [2018-08-10 13:24:58,614] ({pool-2-thread-2} NotebookServer.java[afterStatusChange]:2303) - Job 20180810-111509_373986425 is finished, status: ERROR, exception: null, result: %text java.lang.NullPointerException
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:44)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:39)
at org.apache.zeppelin.spark.OldSparkInterpreter.createSparkContext_2(OldSparkInterpreter.java:375)
at org.apache.zeppelin.spark.OldSparkInterpreter.createSparkContext(OldSparkInterpreter.java:364)
at org.apache.zeppelin.spark.OldSparkInterpreter.getSparkContext(OldSparkInterpreter.java:172)
at org.apache.zeppelin.spark.OldSparkInterpreter.open(OldSparkInterpreter.java:740)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:61)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at org.apache.zeppelin.spark.SparkSqlInterpreter.getSparkInterpreter(SparkSqlInterpreter.java:76)
at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:92)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:633)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I found similar posts describing the same symptoms, but they are about Zeppelin being unable to connect to Hive, which I don't think is my problem. I also ran spark-shell and it worked fine.
I'm on Google Dataproc image version 1.3.1.
Thanks
I've managed to fix it. The problem was that I had added the artifact https://mvnrepository.com/artifact/spotify/spark-bigquery/0.2.2-s_2.11 as an external dependency, and it may have conflicted with Spark or Zeppelin libraries. After removing everything in zeppelin.dep.localrepo (which is /usr/lib/zeppelin/local-repo in my case) and restarting Zeppelin, everything was back to normal; a sketch of the cleanup is below.
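For reference, the cleanup amounted to roughly the following (a sketch: the local-repo path comes from my zeppelin.dep.localrepo setting, and the service name may differ on other installs):

sudo rm -rf /usr/lib/zeppelin/local-repo/*
sudo systemctl restart zeppelin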
Thanks

Camel SFTP can't establish connection

I have an SFTP server up in one Docker container, available at localhost:2222 with credentials user/pass.
I am trying to establish a connection from another container via a Camel 2.22.0 route like:
from("sftp:user#localhost:2222/sftp/in?password=pass"))
.log("${file:name}");
But it cannot connect, failing with:
Error auto creating directory: /sftp/in due Cannot connect to sftp://user@localhost:2222. This exception is ignored.
org.apache.camel.component.file.GenericFileOperationFailedException: Cannot connect to sftp://pms@localhost:2222
at org.apache.camel.component.file.remote.SftpOperations.connect(SftpOperations.java:144)
at org.apache.camel.component.file.remote.RemoteFileConsumer.connectIfNecessary(RemoteFileConsumer.java:233)
Caused by: com.jcraft.jsch.JSchException: java.net.ConnectException: Connection refused (Connection refused)
at com.jcraft.jsch.Util.createSocket(Util.java:394)
This appeared after moving from Camel 2.18.2 to Camel 2.22.0.
Is it possible to fix?
We upgraded from Camel 2.20.0 to Camel 2.22.0 during development. After upgrading, we could not reach Camel from another server: same problem, Connection Refused. We downgraded back to 2.20.0 and things started working again.
I also had this issue and resolved it by adding the camel-ftp dependency:
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-ftp</artifactId>
    <version>3.16.0</version>
</dependency>
Please check which dependency version works for you here: https://mvnrepository.com/artifact/org.apache.camel/camel-ftp
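The version should match the Camel version the project is already on; if the POM defines a shared version property, something like the following keeps them aligned (a sketch, assuming a camel.version property exists in your build):

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-ftp</artifactId>
    <version>${camel.version}</version>
</dependency>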

Failed to submit JobGraph Apache Flink

I am trying to run the simple code below after building everything from Flink's GitHub master branch (for various reasons). I get the exception below; what runs on port 9065, and how can I fix this exception?
val dataStream = senv.fromElements(1, 2, 3, 4)
dataStream.countWindowAll(2).sum(0).print()
senv.execute("My streaming program")
Below is the exception:
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$18(RestClusterClient.java:306)
at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$222(RestClient.java:196)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:268)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:284)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.CompletionException: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9065
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
... 16 more
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9065
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:281)
I built it from the sources in the following way (just following the instructions on the Flink GitHub page):
git clone https://github.com/apache/flink.git
cd flink
mvn clean package -DskipTests
cd build-target
./bin/start-scala-shell.sh local
The underlying distributed runtime is currently being heavily reworked in master. Starting from 1.5, the default runtime will be the one known as FLIP-6, so occasionally some parts might not work. I think it would be very beneficial if you could create a JIRA ticket for this.
Just to add what runs on port 9065: in the new architecture it is the default port of the Dispatcher.
I had the same exception. My issue was a port conflict when starting the cluster with a Docker image on my machine, so I had changed the REST port in the Flink config file to 8084 instead of 8081. With that change the cluster would start up properly, but I was unable to submit the job. When I killed the conflicting process and reverted the port back to 8081, I could submit jobs successfully; the setting in question is shown below.
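For reference, the relevant entry lives in conf/flink-conf.yaml (a sketch; 8081 is the default):

# conf/flink-conf.yaml
rest.port: 8081   # port of the REST endpoint that clients submit jobs to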
I got the same error. Using JDK 1.8 with Flink 1.7.2 fixed it for me.

java.io.IOException: Unable to unwrap data, invalid status

I am working on an SPA using Spring (REST backend and security) and AngularJS. Everything is going OK, but when I run it locally I see an error, and I have no idea why this is happening.
2016-10-24 10:43:29.028 DEBUG 8040 --- [io-8443-exec-10] o.apache.coyote.http11.Http11Processor : Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper#1a572ed1:org.apache.tomcat.util.net.SecureNioChannel#1b7d9385:java.nio.channels.SocketChannel[connected local=/0:0:0:0:0:0:0:1:8443 remote=/0:0:0:0:0:0:0:1:53170]], Status in: [OPEN_READ], State out: [OPEN]
2016-10-24 10:43:31.949 DEBUG 8040 --- [nio-8443-exec-7] o.apache.coyote.http11.Http11Processor : Error parsing HTTP request header
java.io.IOException: Unable to unwrap data, invalid status [CLOSED]
at org.apache.tomcat.util.net.SecureNioChannel.read(SecureNioChannel.java:590)
at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.fillReadBuffer(NioEndpoint.java:1200)
at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.read(NioEndpoint.java:1149)
at org.apache.coyote.http11.Http11InputBuffer.fill(Http11InputBuffer.java:742)
at org.apache.coyote.http11.Http11InputBuffer.parseRequestLine(Http11InputBuffer.java:404)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:667)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:802)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1410)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
If anyone can give me a starting point, I can do more research.
We had the same issue.
It turned out that on the server side an external event was forcefully closing the connection at some point during the call, which was causing this error.
I was facing the same issue while creating a wss (secured) WebSocket API using Tomcat's JSR-356, deployed on AWS EC2 and reached via a domain proxied through Cloudflare.
To fix the issue I made 2 changes:
- Changed the Cloudflare proxied mode to DNS only in the A record.
- Bought a new certificate and started using it from Tomcat instead of Cloudflare's origin CA certificate.
For detailed steps check this:
https://techxperiment.blogspot.com/2020/06/aws-ec2-tomcat-jsr-356-secure.html

SAML - Service Provider could not handle the request

I am teaching myself SAML using the PicketLink quickstarts: https://github.com/jboss-developer/jboss-picketlink-quickstarts.
I deployed picketlink-federation-saml-idp-basic-wildfly.war on WildFly 9.0.2 running on port 9080, and picketlink-federation-saml-sp-post-basic-wildfly.war on WildFly 9.0.2 running on port 8080. I also updated standalone.xml to set up the security domains for the IDP and SP.
The only change I had to make to the sample was to update the picketlink-jbas7 dependency, since the sample's version 2.8.0.Beta1-SNAPSHOT cannot be resolved. The Maven dependency I am using in the IDP is:
<dependency>
    <groupId>org.picketlink.distribution</groupId>
    <artifactId>picketlink-jbas7</artifactId>
    <version>2.7.0.Final</version>
    <scope>provided</scope>
</dependency>
The issue I am facing is that when I log in to the IDP and click on the SP link, I get the following exception in the SP logs:
23:05:55,833 ERROR [org.picketlink.common] (default task-5) Service Provider could not handle the request.: java.lang.NullPointerException
at org.picketlink.identity.federation.web.handlers.saml2.SAML2IssuerTrustHandler$SPTrustHandler.handleStatusResponseType(SAML2IssuerTrustHandler.java:143)
at org.picketlink.identity.federation.web.handlers.saml2.SAML2IssuerTrustHandler.handleStatusResponseType(SAML2IssuerTrustHandler.java:70)
at org.picketlink.identity.federation.web.process.SAMLHandlerChainProcessor.callHandlerChain(SAMLHandlerChainProcessor.java:67)
at org.picketlink.identity.federation.web.process.ServiceProviderSAMLResponseProcessor.processHandlersChain(ServiceProviderSAMLResponseProcessor.java:106)
at org.picketlink.identity.federation.web.process.ServiceProviderSAMLResponseProcessor.process(ServiceProviderSAMLResponseProcessor.java:88)
at org.picketlink.identity.federation.bindings.wildfly.sp.SPFormAuthenticationMechanism.handleSAML2Response(SPFormAuthenticationMechanism.java:516)
at org.picketlink.identity.federation.bindings.wildfly.sp.SPFormAuthenticationMechanism.handleSAMLResponse(SPFormAuthenticationMechanism.java:306)
at org.picketlink.identity.federation.bindings.wildfly.sp.SPFormAuthenticationMechanism.authenticate(SPFormAuthenticationMechanism.java:268)
at io.undertow.security.impl.SecurityContextImpl$AuthAttempter.transition(SecurityContextImpl.java:339)
at io.undertow.security.impl.SecurityContextImpl$AuthAttempter.transition(SecurityContextImpl.java:356)
at io.undertow.security.impl.SecurityContextImpl$AuthAttempter.access$100(SecurityContextImpl.java:325)
at io.undertow.security.impl.SecurityContextImpl.attemptAuthentication(SecurityContextImpl.java:138)
at io.undertow.security.impl.SecurityContextImpl.authTransition(SecurityContextImpl.java:113)
at io.undertow.security.impl.SecurityContextImpl.authenticate(SecurityContextImpl.java:106)
at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:55)
at io.undertow.server.handlers.DisableCacheHandler.handleRequest(DisableCacheHandler.java:33)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51)
at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46)
at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64)
at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56)
at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58)
at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:72)
at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50)
at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43)
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:282)
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:261)
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:80)
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:172)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:774)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Please let me know what I am doing wrong.
Thanks
I faced the same issue learning from the PicketLink quickstarts. I am working with WildFly 10.1.0.Final.
The first thing I noticed was that, to get the "basic" scenario working, the following deployments are necessary (https://github.com/jboss-developer/jboss-picketlink-quickstarts):
IDP: picketlink-federation-saml-idp-basic
SP(s): picketlink-federation-saml-sp-post-basic and picketlink-federation-saml-sp-redirect-basic
I deployed all the generated .war files in one container for simplicity.
Two things helped me find out what was going on:
- enabling TRACE logging (a sketch follows this list)
- the PicketLink version bundled in WildFly 10 is 2.5.5.SP2, and SAML2LoginModule was not found in the expected package in picketlink-wildfly8-2.5.5.SP2.jar
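For reference, enabling TRACE for PicketLink in WildFly's logging subsystem looks roughly like this (a sketch for standalone.xml; org.picketlink is the usual logger category):

<logger category="org.picketlink">
    <level name="TRACE"/>
</logger>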
In particular, I had a problem with the login module, getting this error:
Class org.picketlink.identity.federation.bindings.wildfly.SAML2LoginModule not found from Module "deployment.picketlink-federation-saml-sp-post-basic-wildfly.war:main" from Service Module Loader
Login failure: javax.security.auth.login.LoginException: unable to find LoginModule class: org.picketlink.identity.federation.bindings.wildfly.SAML2LoginModule
What I did was change the login module to org.picketlink.identity.federation.bindings.jboss.auth.SAML2LoginModule, and the quickstart started working; a sketch of the resulting security domain is below.
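For reference, the SP security domain in standalone.xml then looks roughly like this (a sketch; the domain name "sp" is hypothetical, so match whatever your deployment's jboss-web.xml references):

<security-domain name="sp" cache-type="default">
    <authentication>
        <login-module code="org.picketlink.identity.federation.bindings.jboss.auth.SAML2LoginModule" flag="required"/>
    </authentication>
</security-domain>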
I gave up on PicketLink.
I used OpenSAML instead, and I was able to develop IdP-initiated and SP-initiated flows with no issues.
References:
https://wiki.shibboleth.net/confluence/display/OpenSAML/OSTwoUserManual#
https://github.com/rasmusson/webprofile-ref-project
