I have a Camel Restlet application. The application exposes multiple HTTP endpoints. Recently I updated all Camel components to 2.22.1 and added some other things. While everything still runs fine, the application now seems to be bound only to localhost:
TCP 127.0.0.1:8082 0.0.0.0:0 ABHÖREN 7608
(netstat on a German Windows; ABHÖREN means LISTENING).
Could this have been caused by the update? How can I set the binding address?
As documented at http://camel.apache.org/rest-dsl.html,
you should use restConfiguration like this:
restConfiguration().component("restlet").host("0.0.0.0");
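For completeness, here is a minimal sketch of that configuration inside a RouteBuilder; the port 8082 is taken from your netstat output and is otherwise an assumption:

restConfiguration()
    .component("restlet")
    .host("0.0.0.0")   // bind to all interfaces instead of only localhost
    .port(8082);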
As I am a novice with Netty, here is my problem.
The client side is built with Netty4 TCP communication, and the server module is created with Apache Camel Netty.
In the middle of this communication we have an L4 load balancer.
This is our setup (client and server picture):
client config: 10.10.10.1:8501
server config:
from (10.10.10.1:8501....
from (10.10.10.1:8502....
How can I make a client config file?
If I understood your problem correctly, you can set your two addresses in your client; just do it like this:
.loadBalance().roundRobin().to(ExchangePattern.InOut, "address1", "address2")
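As a minimal sketch of a complete client route inside a RouteBuilder, assuming the Camel netty4 component and the two server addresses from your question (the URIs and the sync option are assumptions you should adapt):

from("direct:start")
    .loadBalance().roundRobin()
        .to(ExchangePattern.InOut,
            "netty4:tcp://10.10.10.1:8501?sync=true",
            "netty4:tcp://10.10.10.1:8502?sync=true");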
But I didn't understand the part about your config file; I think you are talking about properties, right?
If you are talking about properties, you can read them in your RouteBuilder with java.util.Properties, like this:
Properties property = new Properties();
try (FileInputStream in = new FileInputStream("YourProperties.properties")) {
    property.load(in);
}
String propA = property.getProperty("propA");
Or set it in your Blueprint/Spring configuration and get it in your route.
Here you can find more explanation about it:
http://camel.apache.org/using-propertyplaceholder.html
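As a sketch of the Camel-native alternative described on that page, you can also register the properties component programmatically and reference keys with {{...}} placeholders (the file location and key name here are assumptions):

// org.apache.camel.component.properties.PropertiesComponent
PropertiesComponent pc = new PropertiesComponent();
pc.setLocation("file:YourProperties.properties");
camelContext.addComponent("properties", pc);

// a placeholder such as {{propA}} is then resolved from that file
from("direct:start").to("{{propA}}");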
I have 2 Spring apps ("client-app" and "service-app") that are already registered to Eureka (and talk via Feign Client). However, I have to talk to an instance of Solr and I'm forced to hard-code the IP address in the properties file. I would much rather not do this and use Eureka for service-discovery.
Question: Is there a way/plugin to have Solr register itself with Eureka, so that clients can then discover it (even if it's programmatically via a start-up listener of some sort)?
I've looked at the Solr API and it doesn't seem to have a lifecycle listener (onStartUp or onShutdown hooks).
You would need a Solr plugin for this, one that is SolrCoreAware. That interface's inform method is called any time something interesting happens with a core. Within your implementation of inform you would register/deregister the instance as a Eureka client.
Then you would need to add the plugin to your Solr (Cloud) instance. After that, and with proper configuration of your plugin, it should work.
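A rough sketch of what such a plugin could look like; the Eureka calls are placeholders for whichever discovery client you use, and the class still needs to be one of the plugin types Solr actually loads (for example a SearchComponent or request handler declared in solrconfig.xml):

import org.apache.solr.core.CloseHook;
import org.apache.solr.core.SolrCore;
import org.apache.solr.util.plugin.SolrCoreAware;

public class EurekaRegistrar implements SolrCoreAware {

    @Override
    public void inform(SolrCore core) {
        // called once the core is ready: announce this Solr instance
        registerWithEureka(core.getName());

        // deregister again when the core is closed
        core.addCloseHook(new CloseHook() {
            @Override
            public void preClose(SolrCore c) {
                deregisterFromEureka(c.getName());
            }

            @Override
            public void postClose(SolrCore c) {
                // nothing to do after close
            }
        });
    }

    // placeholder: call your Eureka/discovery client here
    private void registerWithEureka(String coreName) { }

    // placeholder: remove the registration again
    private void deregisterFromEureka(String coreName) { }
}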
I am using AMQ + Camel + SMPP for working with an SMSC. I use SMPP as a Camel component and use these endpoints in routes. I want to monitor the SMPP connection binding and log every SMSC bind and unbind in a separate file from activemq.log.
Kindly guide me on how to approach this scenario.
Have you tried configuring log4j? See http://camel.apache.org/how-do-i-use-log4j.html
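As a sketch, a log4j.properties fragment that sends the camel-smpp component's logging to its own file; the logger category, the DEBUG level and the file path are assumptions you may need to adjust, since the bind/unbind activity is logged by the component rather than exposed as a dedicated event:

# route camel-smpp logging to its own file and keep it out of activemq.log
log4j.logger.org.apache.camel.component.smpp=DEBUG, smppFile
log4j.additivity.org.apache.camel.component.smpp=false

log4j.appender.smppFile=org.apache.log4j.RollingFileAppender
log4j.appender.smppFile.file=data/smpp.log
log4j.appender.smppFile.maxFileSize=5MB
log4j.appender.smppFile.maxBackupIndex=5
log4j.appender.smppFile.layout=org.apache.log4j.PatternLayout
log4j.appender.smppFile.layout.ConversionPattern=%d | %-5p | %c | %m%n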
Given the "cxf-osgi" example from fuse source's apache-servicemix-4.4.1-fuse-00-08, built with maven 3.0.3, when deploying it to apache karaf 2.2.4 and CXF 2.4.3 the web service is never published and never visible to the CXF servlet (http://localhost:8181/cxf/). There are no errors in the karaf log. How would one go about debugging such behavior?
It's worth turning up the log level(s) - you can do this permanently in the etc/org.ops4j.pax.logging.cfg file or in the console with log:set TRACE org.apache.cxf - IIRC this will show some useful information.
Also check that it's actually published on localhost/127.0.0.1 - it may well be published on another interface, i.e. the IP of the local network but not localhost. Try using 0.0.0.0 as the address; that way it will bind to all available interfaces.
As you're using Maven, you can download the CXF source (easily in Eclipse) and connect a remote debugger to the Karaf instance; with some strategically placed breakpoints you should be able to get a handle on what's going on.
Try switching to Equinox instead of the default Felix. There is a bug in 2.4.3 in that it doesn't work well with Felix. Alternatively, CXF 2.4.4 is now available and should also fix it.
Take a look at this issue I filed this week: https://issues.apache.org/jira/browse/CXF-4058
What I found is that if my beans.xml is loaded before the cxf bundle jar, then the endpoints are registered with CXF but not with the OSGi http service. So everything looks good from the logs but the endpoints are never accessible. This is a race condition.
I did two workarounds: 1) in the short term, just move my own jars later in the boot order (I use Karaf features) so Spring and CXF are fully loaded before my beans.xml is read and 2) abandon Spring and roll my own binding code based loosely on this approach: http://eclipsesource.com/blogs/2012/01/23/an-osgi-jax-rs-connector-part-1-publishing-rest-services/
I just implemented solution #2 yesterday and I'm already extremely happy with it. It's solved all of my classloader issues (before I had to manually add a lot of Import-Package lines because BND doesn't see beans.xml references) and fixed my boot race condition.
I want to implement proxy support (SOCKS5 and the HTTP CONNECT method) in my application. There are two parts that need to be implemented:
Detection of the proxy details (protocol, host, port): I am using libproxy for that.
Connecting to the proxy server and telling it to relay the packets, then getting the connected socket and using it in the application.
Is there a library for part #2?
You might be able to hack libmicrohttpd into doing what you want without too much effort, at least as far as the user end. I'm not aware of anything that does what you want straight out of the box.
Now there is proxysocket (https://github.com/brechtsanders/proxysocket/) to do exactly that.
It supports SOCKS4, SOCKS5 and HTTP CONNECT.
The result is a normal connected socket so you don't have to rewrite the rest of your application.
libcurl can retrieve web pages via a proxy. You can send raw HTTP headers to it and let it talk to the proxy.