According to the introduction of the Tomcat 6.0 documentation, AJP connectors do not support Advanced IO.
1) Does this mean I can't use the protocol org.apache.coyote.ajp.AjpNioProtocol (AJP NIO) to implement/use CometProcessor?
I use Apache in front of Tomcat with an AJP connector in my application.
Using org.apache.coyote.http11.Http11NioProtocol I can listen for events in my CometProcessor implementation on the HTTP port specified in the Connector (see the sketch below).
2) Is there anything I am missing when dealing with AJP and NIO, or do I have to move to Servlet 3.0 async processing?
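For reference, a minimal sketch of the kind of CometProcessor servlet referred to in point 1 (not the asker's actual code; the class name is illustrative). The imports use the Tomcat 7 package org.apache.catalina.comet; in Tomcat 6 the same interfaces live directly under org.apache.catalina.

import java.io.IOException;
import java.io.InputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

import org.apache.catalina.comet.CometEvent;
import org.apache.catalina.comet.CometProcessor;

public class CometEchoServlet extends HttpServlet implements CometProcessor {

    @Override
    public void event(CometEvent event) throws IOException, ServletException {
        switch (event.getEventType()) {
            case BEGIN:
                // The connection stays open; the response can be written to later.
                event.getHttpServletResponse().getWriter().println("connected");
                event.getHttpServletResponse().getWriter().flush();
                break;
            case READ: {
                // Data is available and can be read without blocking.
                InputStream in = event.getHttpServletRequest().getInputStream();
                byte[] buf = new byte[512];
                while (in.available() > 0) {
                    in.read(buf);
                }
                break;
            }
            case END:
            case ERROR:
                event.close();
                break;
        }
    }
}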
We are using Spring Cloud Gateway [Java DSL] as our API gateway. For the proxies we have multiple microservices [running on different ip:port combinations] as targets. We would like to know whether we can configure multiple targets for Spring Cloud Gateway proxies, similar to the Apache Camel Load Balancer EIP:
camel.apache.org/manual/latest/loadBalance-eip.html
We are looking for software load balancing within Spring Cloud Gateway [similar to Netflix Ribbon / Apache Camel] instead of another dedicated load balancer.
I was able to get a load-balanced Spring Cloud Gateway route working using spring-cloud-starter-netflix-ribbon. However, when one of the server instances is down, load balancing fails. Code snippets below.
Version :
spring-cloud-gateway : 2.1.1.BUILD-SNAPSHOT
Gateway Route
.route(r -> r
        .path("/res/security/")
        .filters(f -> f
                .preserveHostHeader()
                .rewritePath("/res/security/", "/targetContext/security/")
                .filter(new LoggingFilter()))
        .uri("lb://target-service1-endpoints"))
application.yml
ribbon:
  eureka:
    enabled: false

target-service1-endpoints:
  ribbon:
    listOfServers: 172.xx.xx.s1:80, 172.xx.xx.s2:80
    ServerListRefreshInterval: 1000
    retryableStatusCodes: 404, 500
    MaxAutoRetriesNextServer: 1

management:
  endpoint:
    health:
      enabled: true
Here is the response from the Spring Cloud team:
What you've described, indeed, happens. However, it is not gateway-specific. If you just use Ribbon in a Spring Cloud project with listOfServers, the same thing will happen. This is because, unlike with Eureka, the IPing for the non-discovery-service scenario is not instrumented (a DummyPing instance is used).
You could probably change this behaviour by providing your own IPing, IRule or ServerListFilter implementation and overriding the setup we provide in the autoconfiguration in this way.
https://github.com/spring-cloud/spring-cloud-gateway/issues/1482
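Along the lines of that suggestion, here is a minimal sketch of overriding the default DummyPing with a real HTTP ping. The class names and the /actuator/health ping path are assumptions; the Ribbon client name must match the lb:// id used in the route.

import com.netflix.loadbalancer.IPing;
import com.netflix.loadbalancer.PingUrl;
import org.springframework.cloud.netflix.ribbon.RibbonClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Attach a custom Ribbon configuration to the load-balanced service id.
@Configuration
@RibbonClient(name = "target-service1-endpoints",
              configuration = TargetService1RibbonConfig.class)
public class RibbonClientSetup {
}

// Keep this class out of the main component scan so it only applies to the
// named Ribbon client (a standard Spring Cloud Netflix caveat).
class TargetService1RibbonConfig {

    // Replace the DummyPing used when Eureka is disabled with an HTTP ping,
    // so instances that are down get dropped from the server list.
    @Bean
    public IPing ribbonPing() {
        return new PingUrl(false, "/actuator/health"); // assumed health path
    }
}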
I have two Spring apps ("client-app" and "service-app") that are already registered with Eureka (and talk via a Feign client). However, I have to talk to a Solr instance and I'm forced to hard-code its IP address in the properties file. I would much rather not do this and instead use Eureka for service discovery.
Question: Is there a way/plugin to have Solr register itself with Eureka, so that clients can then discover it (even if it's done programmatically via a start-up listener of some sort)?
I've looked at the Solr API and it doesn't seem to have a lifecycle listener (onStartUp or onShutdown hooks).
You would need a Solr plugin for this that is SolrCoreAware. That interface's inform method is called any time something interesting happens with a core. Within your implementation of inform you would register/deregister as a client.
Then you would need to add the plugin to your Solr (Cloud) instance. After that, and proper configuration of your plugin, it should work.
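As a rough sketch of that idea (not code from the answer): a SearchComponent that implements SolrCoreAware, registering in inform and deregistering through a close hook. The EurekaRegistrar helper is hypothetical and stands in for whatever Eureka client/REST calls you actually use.

import org.apache.solr.core.CloseHook;
import org.apache.solr.core.SolrCore;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;
import org.apache.solr.util.plugin.SolrCoreAware;

public class EurekaRegistrationComponent extends SearchComponent implements SolrCoreAware {

    @Override
    public void inform(SolrCore core) {
        // Called once the core is ready: register this Solr instance with Eureka.
        EurekaRegistrar.register(core.getName()); // hypothetical helper

        // Deregister when the core is shut down.
        core.addCloseHook(new CloseHook() {
            @Override
            public void preClose(SolrCore c) {
                EurekaRegistrar.deregister(c.getName()); // hypothetical helper
            }

            @Override
            public void postClose(SolrCore c) {
            }
        });
    }

    // SearchComponent boilerplate; this component does not touch queries.
    @Override
    public void prepare(ResponseBuilder rb) {
    }

    @Override
    public void process(ResponseBuilder rb) {
    }

    @Override
    public String getDescription() {
        return "Registers this core with Eureka";
    }

    // Required as an abstract method by some older Solr versions.
    public String getSource() {
        return null;
    }
}

The component would then be declared in solrconfig.xml (e.g. as a searchComponent) so that Solr loads it with the core.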
I am not able to connect to https://test.salesforce.com/services/oauth2/token from SoapUI (version 5.2.1). I have tried the Pro version and older versions (4.6.x) as well.
I can access the site from a web browser: a GET to this URL gives me a response, whereas SoapUI reports HttpHostConnectException: connection to https://test.salesforce.com/ refused.
I have checked that there is a direct connection available from my PC to this address. I have tried adding https.proxyHost and https.proxyPort settings in soapui.vmoptions and soapui.bat, but to no avail.
I have also tried playing around with the Preemptive Authentication settings in SoapUI, without success.
My organization has a firewall which has whitelisted this address. I have also confirmed that the firewall settings allow connections through non-standard clients (such as Apache HttpClient).
If I use a Java program that makes a URLConnection through the proxy, it works.
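For comparison, this is roughly the kind of plain-Java check described above (proxy host and port are placeholders, not values from the question):

import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class ProxyCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder proxy; replace with the real corporate proxy host/port.
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("proxy.example.com", 8080));

        URL url = new URL("https://test.salesforce.com/services/oauth2/token");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection(proxy);
        conn.setRequestMethod("GET");

        // Any response code (even 400/405 from the token endpoint) proves the
        // connection itself goes through the proxy.
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}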
At this point it seems to me that SoapUI is not honoring the proxy settings.
Please share if anyone has had a similar experience and how they resolved it.
Regards
Ash
After a few days trying to develop a WS client from a provided WSDL, I discovered I had been using Axis all this time, and not Axis2...
Well, what I'm doing is right-clicking the WSDL > New > Other > Web Service Client.
In the wizard window, 'Web service runtime' was set to 'Apache Axis' all this time, and I didn't notice that. Clicking on it I'm able to choose 'Apache Axis2' and 'Apache CXF 2.x', but both fail, while 'Apache Axis' "works": the client is created, but it doesn't add the username and password header to the XML request.
Here's the error I get when trying to use CXF:
Unable to add the follwing facets to project SIAPP_WS_FORNECEDOR_CFX_01: CXF 2.x Web Services.
org.eclipse.wst.common.project.facet.core.FacetedProjectFrameworkException: Failed while installing CXF 2.x Web Services 1.0.
at org.eclipse.wst.common.project.facet.core.internal.FacetedProject.callDelegate(FacetedProject.java:1507)
at org.eclipse.wst.common.project.facet.core.internal.FacetedProject.modifyInternal(FacetedProject.java:441)
at org.eclipse.wst.common.project.facet.core.internal.FacetedProject.mergeChangesInternal(FacetedProject.java:1181)
at org.eclipse.wst.common.project.facet.core.internal.FacetedProject.access$2(FacetedProject.java:1117)
at org.eclipse.wst.common.project.facet.core.internal.FacetedProject$1.run(FacetedProject.java:324)
at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:2344)
at org.eclipse.wst.common.project.facet.core.internal.FacetedProject.modify(FacetedProject.java:339)
at org.eclipse.jst.ws.internal.consumption.ui.common.FacetOperationDelegate$1.run(FacetOperationDelegate.java:62)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
Caused by: org.eclipse.core.runtime.CoreException: CXF Runtime location not set. Please set location in Preferences > Web Services > CXf 2.x Preferences
at org.eclipse.jst.ws.internal.cxf.facet.CXFFacetInstallDelegate.execute(CXFFacetInstallDelegate.java:50)
at org.eclipse.wst.common.project.facet.core.internal.FacetedProject.callDelegate(FacetedProject.java:1477)
... 8 more
For CXF, you would need to go to Preferences -> Web Services -> CXF 2.x Preferences and add a CXF runtime (point it to a CXF installation). That should allow it to find the wsdl2java tool (and such) that is needed for CXF.
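Not part of the original answer, but on the username/password header part of the question: if what's needed is a WS-Security UsernameToken and CXF ends up being the runtime, a commonly used approach is to add a WSS4JOutInterceptor to the generated client. A sketch, assuming CXF 2.x with WSS4J 1.x package names and a JAX-WS proxy called port generated by wsdl2java:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.ws.security.wss4j.WSS4JOutInterceptor;
import org.apache.ws.security.WSConstants;
import org.apache.ws.security.WSPasswordCallback;
import org.apache.ws.security.handler.WSHandlerConstants;

public class WsSecuritySetup {

    // 'port' is the JAX-WS proxy generated by wsdl2java.
    public static void addUsernameToken(Object port, String user, final String password) {
        Client client = ClientProxy.getClient(port);

        Map<String, Object> props = new HashMap<String, Object>();
        props.put(WSHandlerConstants.ACTION, WSHandlerConstants.USERNAME_TOKEN);
        props.put(WSHandlerConstants.USER, user);
        props.put(WSHandlerConstants.PASSWORD_TYPE, WSConstants.PW_TEXT);
        props.put(WSHandlerConstants.PW_CALLBACK_REF, new CallbackHandler() {
            public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
                // Supply the password for the UsernameToken header.
                ((WSPasswordCallback) callbacks[0]).setPassword(password);
            }
        });

        // The interceptor adds the wsse:Security header to outgoing requests.
        client.getOutInterceptors().add(new WSS4JOutInterceptor(props));
    }
}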
Given the "cxf-osgi" example from FuseSource's apache-servicemix-4.4.1-fuse-00-08, built with Maven 3.0.3: when deploying it to Apache Karaf 2.2.4 with CXF 2.4.3, the web service is never published and never visible to the CXF servlet (http://localhost:8181/cxf/). There are no errors in the Karaf log. How would one go about debugging such behavior?
It's worth turning up the log level(s) - you can do this permanently in the etc/org.ops4j.pax.logging.cfg or in the console with log:set TRACE org.apache.cxf - IIRC this will show some useful information.
Also check that it's actually published on localhost/127.0.0.1 - it may well be published on another interface, i.e. the IP of the local network but not localhost. Try using 0.0.0.0 as the address; that way it will bind to all available interfaces.
As you're using Maven, you can download the CXF source (easily in Eclipse) and connect a remote debugger to the Karaf instance; with some strategically placed breakpoints you should be able to get a handle on what's going on.
Try changing to Equinox instead of the default Felix. There is a bug in CXF 2.4.3 that makes it not work well with Felix. Alternatively, CXF 2.4.4 is now available, which should also fix it.
Take a look at this issue I filed this week: https://issues.apache.org/jira/browse/CXF-4058
What I found is that if my beans.xml is loaded before the cxf bundle jar, then the endpoints are registered with CXF but not with the OSGi http service. So everything looks good from the logs but the endpoints are never accessible. This is a race condition.
I did two workarounds: 1) in the short term, just move my own jars later in the boot order (I use Karaf features) so Spring and CXF are fully loaded before my beans.xml is read, and 2) abandon Spring and roll my own binding code based loosely on this approach: http://eclipsesource.com/blogs/2012/01/23/an-osgi-jax-rs-connector-part-1-publishing-rest-services/
I implemented solution #2 just yesterday and I'm already extremely happy with it. It has solved all of my classloader issues (before, I had to manually add a lot of Import-Package lines because BND doesn't see beans.xml references) and fixed my boot race condition.
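For anyone curious what workaround #2 roughly looks like: the idea behind the linked OSGi-JAX-RS-connector approach is to publish the JAX-RS annotated resource as a plain OSGi service and let separate glue code expose it over HTTP. A very loose sketch (not the poster's actual code; SecurityResource is a hypothetical @Path-annotated POJO):

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class Activator implements BundleActivator {

    private ServiceRegistration registration;

    public void start(BundleContext context) {
        // Publish the resource; the connector/glue code tracks services
        // carrying JAX-RS annotations and wires them to the HTTP service.
        registration = context.registerService(
                Object.class.getName(), new SecurityResource(), null);
    }

    public void stop(BundleContext context) {
        registration.unregister();
    }
}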