I am new to Apache Camel Reactive (RxJava 2).
I am looking for a simple "Hello world" observable subscriber example with Apache Camel Reactive.
Please help.
Many thanks.
Here is a simple example, taken from http://camel.apache.org/rx.html:
ReactiveCamel rx = new ReactiveCamel(camelContext);

// every message arriving on the ActiveMQ queue is emitted by the Observable
Observable<Message> observable = rx.toObservable("activemq:MyMessages");

observable.subscribe(msg -> {
    // do something with the message
}, err -> {
    System.out.println("failure");
});
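Since the question mentions RxJava 2 specifically: the camel-rx module above is built on RxJava 1. With the camel-reactive-streams component you can bridge Camel into RxJava 2 yourself. The following is only a minimal sketch under that assumption (the timer endpoint and class name are made up for the example), not code from the Camel docs:

import io.reactivex.Flowable;
import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.component.reactive.streams.api.CamelReactiveStreams;
import org.apache.camel.component.reactive.streams.api.CamelReactiveStreamsService;
import org.apache.camel.impl.DefaultCamelContext;
import org.reactivestreams.Publisher;

public class HelloReactiveCamel {
    public static void main(String[] args) throws Exception {
        CamelContext camelContext = new DefaultCamelContext();
        camelContext.start();

        // bridge Camel to the Reactive Streams API, then wrap it in an RxJava 2 Flowable
        CamelReactiveStreamsService rs = CamelReactiveStreams.get(camelContext);
        Publisher<Exchange> publisher = rs.from("timer:hello?period=1000");

        Flowable.fromPublisher(publisher)
                .subscribe(
                        exchange -> System.out.println("Hello world from Camel!"),
                        err -> System.out.println("failure: " + err));

        Thread.sleep(5000); // let a few timer ticks arrive
        camelContext.stop();
    }
}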
I am trying to set up a simple Camel route which reads from an SQLite table and prints the records (later they would be written to a file).
The flow I have set up is below:
bindToRegistry("sqlConsumer", new SqliteConsumer());
bindToRegistry("sqliteDatasource", dataSource());
from("sql:select * from recordsheet_record_1 where col_1 = 'A5'?dataSource=#sqliteDatasource")
.to("bean:sqlConsumer?method=consume")
.end();
And the SqliteConsumer is as below:
public class SqliteConsumer {
public void consume(Map<String, Object> data, Exchange exchange) {
System.out.println("Map: '" + data + "'");
//TODO: append to file
}
}
When I execute the route, it should only run once (print once), but it keeps on printing... Am I doing anything wrong here?
I am new to the Camel framework, so any help or guidance would be much appreciated.
Thanks.
It is a polling consumer, so it polls the source according to its configuration. You can find more info here: https://camel.apache.org/components/latest/eips/polling-consumer.html
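If the goal is for the query to run only once, one alternative (just a sketch, not from the original post; the timer endpoint name is made up) is to fire the SQL endpoint as a producer from a one-shot timer instead of polling it as a consumer:

// fires a single exchange, then stops; as a producer the sql endpoint puts
// the whole result set into the body as a List<Map<String, Object>>
from("timer:runOnce?repeatCount=1")
    .to("sql:select * from recordsheet_record_1 where col_1 = 'A5'?dataSource=#sqliteDatasource")
    .to("bean:sqlConsumer?method=consume");

With this variant the consume method would need to accept a List<Map<String, Object>> (all rows at once) rather than one Map per exchange.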
I am using Flink version 1.8.0. My application reads data from Kafka -> transforms it -> publishes to Kafka. To avoid any duplicates during a restart, I want to use a Kafka producer with exactly-once semantics; read about it here:
https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/connectors/kafka.html#kafka-011-and-newer
My Kafka version is 1.1.
return new FlinkKafkaProducer<String>(topic, new KeyedSerializationSchema<String>() {
    @Override
    public byte[] serializeKey(String element) {
        return element.getBytes();
    }

    @Override
    public byte[] serializeValue(String element) {
        return element.getBytes();
    }

    @Override
    public String getTargetTopic(String element) {
        return topic;
    }
}, prop, opt, FlinkKafkaProducer.Semantic.EXACTLY_ONCE, 1);
Checkpoint code:
CheckpointConfig checkpointConfig = env.getCheckpointConfig();
checkpointConfig.setCheckpointTimeout(15 * 1000);
checkpointConfig.enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
env.enableCheckpointing(5000);
If I add exactly-once semantics to the Kafka producer, my Flink consumer is not reading any new data.
Can anyone please share any sample code/application with exactly-once semantics?
Please find the complete code here:
https://github.com/sris2/sample_flink_exactly_once
Thanks
Can anyone please share any sample code/application with exactly-once semantics?
An exactly-once example is hidden in an end-to-end test in Flink. Since it uses some convenience functions, it may be hard to follow without checking out the whole repo.
If I add exactly-once semantics to the Kafka producer, my Flink consumer is not reading any new data.
[...]
Please find the complete code here:
https://github.com/sris2/sample_flink_exactly_once
I checked out your code and found the issue (I had to fix the whole setup/code to actually get it running). The sink cannot configure the transactions correctly. As described in the Flink Kafka connector documentation, you need to either raise transaction.max.timeout.ms on your Kafka broker to 1 hour, or lower transaction.timeout.ms in your application to 15 minutes:
prop.setProperty("transaction.timeout.ms", "900000");
The respective excerpt is:
Kafka brokers by default have transaction.max.timeout.ms set to 15 minutes. This property will not allow to set transaction timeouts for the producers larger than it’s value. FlinkKafkaProducer011 by default sets the transaction.timeout.ms property in producer config to 1 hour, thus transaction.max.timeout.ms should be increased before using the Semantic.EXACTLY_ONCE mode.
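For reference, a minimal sketch of how the producer side could be wired with the shorter timeout (the broker address, pool size and method shape here are assumptions, not code from the linked repo):

import java.util.Optional;
import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema;

public class ExactlyOnceSinkFactory {

    public static FlinkKafkaProducer<String> buildSink(String topic,
                                                       KeyedSerializationSchema<String> schema) {
        Properties prop = new Properties();
        prop.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker address
        // keep the producer's transaction timeout below the broker's
        // transaction.max.timeout.ms (15 minutes by default)
        prop.setProperty("transaction.timeout.ms", "900000");

        return new FlinkKafkaProducer<>(
                topic,
                schema,
                prop,
                Optional.empty(),                          // no custom partitioner
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE,
                5);                                        // producer pool size (the question used 1)
    }
}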
1. Can I use the bulkhead pattern with a Feign client?
2. I have some confusion about Hystrix.
For example, say I only have three Feign clients: "a", "b", and "c". Client "a" calls "b" and "c".
I know I can easily use a circuit breaker with the fallback parameter and some configuration like this:
#FeignClient(name = "b", fallback = bFallback.class)
protected interface HystrixClient {
//some methods
}
#FeignClient(name = "c", fallback = cFallback.class)
protected interface HystrixClient {
//some methods
}
Alternatively, I could use @HystrixCommand to wrap my remote call with some configuration like this:
@HystrixCommand(fallbackMethod = "getFallback")
public Object get(@PathVariable("id") long id) {
    //...
}
In addition, I can configure some parameters in @HystrixCommand or in application.yml, and I can also add a threadPoolKey in @HystrixCommand.
Q1: I have learned that Hystrix works by wrapping the remote call. I can understand how that applies to the latter approach, but the former approach looks like it wraps the callee?
I found this in the documentation:
Feign will wrap all methods with a circuit breaker
Does this mean that a Feign client essentially adds @HystrixCommand to every method of the interface?
Q2: If the Feign client "b" has three remote calls, how can I run them in separate bulkheads so that one method cannot consume all the threads? Should I combine the Feign client with @HystrixCommand? Would they conflict?
I ask because I could not find a parameter like threadPoolKey on the Feign client. Is there an automatic bulkhead?
Q3: If my Hystrix configuration is in application.yml, do the Feign client approach and the @HystrixCommand approach share the same configuration properties? Like this:
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 1000
      circuitBreaker:
        requestVolumeThreshold: 10
        ...
      ...
But what is the following timeout for?
feign:
  client:
    config:
      feignName:
        connectTimeout: 5000
        readTimeout: 5000
1. Can I use the bulkhead pattern with a Feign client?
The Javadoc of the setterFactory() method on the HystrixFeign.Builder class says:
/**
 * Allows you to override hystrix properties such as thread pools and command keys.
 */
public Builder setterFactory(SetterFactory setterFactory) {
    this.setterFactory = setterFactory;
    return this;
}
https://cloud.spring.io/spring-cloud-netflix/multi/multi_spring-cloud-feign.html says:
Spring Cloud Netflix does not provide the following beans by default for feign, but still looks up beans of these types from the application context to create the feign client:
• Logger.Level
• Retryer
• ErrorDecoder
• Request.Options
• Collection<RequestInterceptor>
• SetterFactory
So we should create a SetterFactory bean and specify the thread pool there. You can create the bean like this:
@Bean
public SetterFactory feignHystrixSetterFactory() {
    return (target, method) -> {
        String groupKey = target.name();
        String commandKey = Feign.configKey(target.type(), method);
        return HystrixCommand.Setter
                .withGroupKey(HystrixCommandGroupKey.Factory.asKey(groupKey))
                .andCommandKey(HystrixCommandKey.Factory.asKey(commandKey))
                // one thread pool per Feign interface: this is the bulkhead
                .andThreadPoolKey(HystrixThreadPoolKey.Factory.asKey(target.type().getSimpleName()));
    };
}
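With that SetterFactory each Feign interface gets its own Hystrix thread pool, keyed by the interface's simple name, which is effectively the bulkhead. The pools can then be sized in application.yml; a sketch, assuming an interface named HystrixClientB as in the question:

hystrix:
  threadpool:
    HystrixClientB:      # thread pool key = interface simple name from the SetterFactory above
      coreSize: 10       # maximum concurrent calls allowed for this client
      maxQueueSize: -1   # -1 means a SynchronousQueue, i.e. no queueing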
But what is the following timeout for?
The Feign client timeout is similar to the Ribbon timeout and configures the underlying HTTP connection, but it lets you define different timeouts for different Feign clients:
feign.client.config.bar.readTimeout // this configuration applies only to the "bar" client
feign.client.config.default.readTimeout // this configuration applies to all Feign clients
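The same settings in application.yml form (a sketch; bar is just a hypothetical client name):

feign:
  client:
    config:
      default:
        connectTimeout: 5000
        readTimeout: 5000
      bar:
        readTimeout: 2000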
How did I find that? If you debug your application and put a breakpoint on the following code of the RetryableFeignLoadBalancer class:
final Request.Options options;
if (configOverride != null) {
RibbonProperties ribbon = RibbonProperties.from(configOverride);
options = new Request.Options(ribbon.connectTimeout(this.connectTimeout),
ribbon.readTimeout(this.readTimeout));
}
else {
options = new Request.Options(this.connectTimeout, this.readTimeout);
}
you will see these values being used as the properties of the HttpURLConnection. Please have a look at the feign.Client class:
connection.setConnectTimeout(options.connectTimeoutMillis());
connection.setReadTimeout(options.readTimeoutMillis());
Solr version: 6.6.1
SolrNet API with a C#-based application
I wish to invoke or trigger the data import handler (DIH) from C# code with the help of SolrNet, but I am unable to locate any tutorial in the SolrNet API. I can easily invoke the DIH from the Solr admin UI, but my need is to invoke it from an external application.
Please suggest a code snippet for how to invoke the data import action from a C#-based application.
I don't think it's possible to do this entirely from SolrNet. A brief look suggests that currently there is only a class responsible for the DIH status page, which is useful, but it does not cover starting the import itself. I think this was dropped, since the functionality wasn't needed.
In the SolrBasicServer class you have:
public SolrDIHStatus GetDIHStatus(KeyValuePair<string, string> options) {
var response = connection.Get("/dataimport", null);
var dihstatus = XDocument.Parse(response);
return dihStatusParser.Parse(dihstatus);
}
which only fetches the DIH status. Most likely, you need to extend this class and do something similar (I'm not a C# developer, so I'm not 100% sure about the code):
connection.Post("/dataimport?command=full-import", null);
or something similar with the delta-import command, and then later fetch the status.
If updating SolrNet is not an option for you, you could still trigger it via a plain HTTP call with your preferred C# library, sending a GET or POST request to http://host:port/solr/collection-name/dataimport?command=full-import
string solrTargetDIHUrl = "http://localhost:8983/solr/dih/dataimport?command=delta-import";
try
{
    using (var solrClient = new HttpClient())
    {
        // blocks until Solr acknowledges the DIH command
        var resultObj = solrClient.GetAsync(new Uri(solrTargetDIHUrl)).Result;
        resultObj.EnsureSuccessStatusCode(); // throw if Solr rejected the request
        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine("\t\t Data Import Triggered Successfully!");
        Console.ResetColor();
    }
}
catch (Exception ex)
{
    Console.WriteLine("ERROR in DIH Trigger >>>>> " + ex.Message + " || " + ex.StackTrace);
}
I have a minimal Camel route with a CXF endpoint (in a RouteBuilder#configure method):
CxfRsComponent cxfComponent = new CxfRsComponent(context);
CxfRsEndpoint serviceEndpoint = new CxfRsEndpoint("http://localhost/rest", cxfComponent);
serviceEndpoint.addResourceClass(PersonService.class);
serviceEndpoint.setPerformInvocation(true);

from(serviceEndpoint).log("this is irrelevant");
The issue is that the methods of the resource class are called twice:
Let's say there is a "PersonService#post" method:
public Person post(Person p){
p.setId(p.getId() + "_PersonService#post");
return p;
}
It gets invoked twice: breakpoints are hit twice, and the response for the payload
{
"id" : "id_from_client"
}
is
{
"id": "id_from_client_PersonService#post_PersonService#post"
}
Is this expected behaviour? If yes, is there a setting to only execute the method once? This seems like a bug to me.
Camel version is 2.16.2 (maven: org.apache.camel:camel-cxf-transport:2.16.2)
CXF version is 3.1.4 (org.apache.cxf:cxf-rt-transports-http-jetty:3.1.4)
FWIW, I changed my configuration to add the synchronous=true option along with performInvocation=true, and the double calls went away. I'm not sure if this is how it's supposed to behave, but for now it seems to work fine this way.
<camel:from uri="cxfrs:bean:rsServer?performInvocation=true&amp;synchronous=true" />
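For anyone using the Java DSL from the question rather than XML, the equivalent change would presumably be to set the flag on the endpoint (a sketch; setSynchronous is inherited from Camel's DefaultEndpoint):

CxfRsComponent cxfComponent = new CxfRsComponent(context);
CxfRsEndpoint serviceEndpoint = new CxfRsEndpoint("http://localhost/rest", cxfComponent);
serviceEndpoint.addResourceClass(PersonService.class);
serviceEndpoint.setPerformInvocation(true);
serviceEndpoint.setSynchronous(true); // same workaround as synchronous=true in the URI

from(serviceEndpoint).log("this is irrelevant");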