Can I share local data between Camel Routes in the same Camel context?

I have a route (route1), which sends data to an HTTP endpoint. To do this, it must set an authorization header.
The header value times out every hour and must be renewed.
For this I have created another route (route2), which gets the access token from a web service at a regular interval using provided credentials (getCredentials). This works fine.
How do I make the access token available to route1?
I have tried simple local variables, static variables, and AtomicReference variables (volatile and static...).
My code (shortened for readability):
public class DataRoute extends RouteBuilder {

    volatile static AtomicReference<String> cache = new AtomicReference<>();

    @Override
    public void configure() throws Exception {
        from("timer://test?period=3500000")
            .routeId("route2")
            .setHeader("Authorization", constant(getCredentials()))
            .to("http://127.0.0.1:8099/v1/login")
            .process(exchange -> {
                cache.set(parseAuthString(exchange.getIn().getBody(String.class)));
            });

        // ... other route producing for direct:rest

        from("direct:rest")
            .routeId("route1")
            .setHeader("Authorization", constant(cache.get() == null ? "" : cache.get()))
            .to("http://localhost:8099/v1/shipment");
    }
}
The cached value is always empty...

Do not use constant to set dynamic values; it is a one-time CONSTANT only.
Instead use an inlined processor (you can use a Java 8 lambda) or a message transform / setBody with a processor.
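A minimal sketch of that fix, reusing the cache field from the question: compute the header per exchange with a lambda processor instead of constant().

from("direct:rest")
    .routeId("route1")
    .process(exchange -> {
        final String token = cache.get(); // re-evaluated for every exchange
        exchange.getIn().setHeader("Authorization", token == null ? "" : token);
    })
    .to("http://localhost:8099/v1/shipment");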

Related

How to implement a custom `snapshotState` in KafkaSource & KafkaSourceReader

We are migrating to KafkaSource from FlinkKafkaConsumer.
We have disabled auto-commit of offsets and instead commit them manually to an external store.
We override FlinkKafkaConsumer and, on an overridden instance of KafkaFetcher, store the offsets in the external store by overriding doCommitInternalOffsetsToKafka:
protected void doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition, Long> offsets,
        @Nonnull KafkaCommitCallback commitCallback) throws Exception {
    // Store offsets in S3
}
Now, in order to migrate, we tried copying/overriding KafkaSource, KafkaSourceBuilder and KafkaSourceReader, but that looks like a lot of redundant code, which somehow does not seem correct to me.
In the custom KafkaSourceReader I tried overriding snapshotState:
@Override
public List<KafkaPartitionSplit> snapshotState(long checkpointId) {
    // custom logic to store offsets in S3
    return super.snapshotState(checkpointId);
}
Is this correct, or is there another way to achieve the same?

Quarkus server-side http-cache

I tried to find out how to configure a server-side REST client (i.e. microservice A calls another microservice B using REST) to use an HTTP cache.
The background is that the binary entities transferred over the wire can be quite large. Overall performance can benefit from a cache on microservice A's side which honours the HTTP caching headers and ETags provided by microservice B.
I found a solution that seems to work, but I'm not sure whether it is a proper solution that copes with the concurrent requests that can occur on microservice A at any time.
@Inject
/* package private */ ManagedExecutor executor;

//
// Instead of using a declarative REST client we create it ourselves, because we can then supply a server-side cache: see ctor()
//
private ServiceBApi serviceClientB;

@ConfigProperty(name = "serviceB.url")
/* package private */ String serviceBUrl;

@ConfigProperty(name = "cache-entries")
/* package private */ int cacheEntries;

@ConfigProperty(name = "cache-entrysize")
/* package private */ int cacheEntrySize;

@PostConstruct
public void ctor()
{
    // Create the proxy ourselves, because we can then supply a server-side cache
    final CacheConfig cc = CacheConfig.custom()
            .setMaxCacheEntries(cacheEntries)
            .setMaxObjectSize(cacheEntrySize)
            .build();
    final CloseableHttpClient httpClient = CachingHttpClientBuilder.create()
            .setCacheConfig(cc)
            .build();
    final ResteasyClient client = new ResteasyClientBuilderImpl()
            .httpEngine(new ApacheHttpClient43Engine(httpClient))
            .executorService(executor)
            .build();
    final ResteasyWebTarget target = (ResteasyWebTarget) client.target(serviceBUrl);
    this.serviceClientB = target.proxy(ServiceBApi.class);
}

@Override
public byte[] getDoc(final String id)
{
    try (final Response response = serviceClientB.getDoc(id)) {
        [...]
        // Use normally; no need to handle conditional GETs, caching headers and other HTTP protocol details here, because the underlying implementation does that.
        [...]
    }
}
My questions are:
Is my solution OK as a server-side solution, i.e. can it handle concurrent requests?
Is there a declarative (Quarkus) way (@RegisterRestClient etc.) to achieve the same?
--
Edit
To make things clear: I want service B to be able to control the caching based on the HTTP GET request and the specific resource. Additionally, I want to avoid unnecessary transmission of the large documents service B provides.
--
Mik
Assuming that you have worked with the declarative way of using Quarkus' REST Client before, you would just inject the client into your serviceB-consuming class. The method that invokes service B should be annotated with @CacheResult. This will cache results depending on the incoming id. See also the Quarkus Cache Guide.
Please note: as Quarkus and Vert.x are all about non-blocking operations, you should use the async support of the REST Client.
@Inject
@RestClient
ServiceBApi serviceB;

...

@Override
@CacheResult(cacheName = "service-b-cache")
public Uni<byte[]> getDoc(final String id) {
    return serviceB.getDoc(id).map(...);
}

...

Non-serializable object in Apache Flink

I am using Apache Flink to perform analytics on streaming data.
I am using a dependency whose object takes more than 10 seconds to create, as it reads several files from HDFS before initialisation.
If I initialise the object in the open method I get a timeout exception, and if I do it in the constructor of a sink/flatMap, I get a serialisation exception.
Currently I am using a static block to initialise the object in another class, calling Preconditions.checkNotNull(MGenerator.mGenerator) in the main file, and then it works when used in a flatMap or sink.
Is there a way to create a non-serializable dependency's object, which might take more than 10 seconds to initialise, in Flink's flatMap or sink?
public class DependencyWrap {
    static MGenerator mGenerator;

    static {
        final String configStr = "{}";
        final Config config = new Gson().fromJson(configStr, Config.class);
        mGenerator = new MGenerator(config);
    }
}
public class MyStreaming {
    public static void main(String[] args) throws Exception {
        Preconditions.checkNotNull(MGenerator.mGenerator);

        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(parallelism);
        ...
        input.flatMap(new RichFlatMapFunction<Map<String, Object>, List<String>>() {
            @Override
            public void open(Configuration parameters) {
            }

            @Override
            public void flatMap(Map<String, Object> value, Collector<List<String>> out) throws Exception {
                out.collect(MGenerator.mGenerator.generateMyResult(value));
            }
        });
    }
}
Also, please correct me if I am wrong about the question.
Doing it in the open method is 100% the right way to do it. Is Flink giving you the timeout exception, or the object?
As a last-ditch method, you could wrap your object in a class that contains both the object and its JSON string or Config (is Config serializable?), with the object marked transient, and then override the readObject/writeObject methods to call the constructor. If the mGenerator object itself is stateless (and you'll have other problems if it's not), the serialization code should get called only once when jobs are distributed to the task managers.
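A rough sketch of that wrapper idea, reusing the MGenerator/Config names from the question (the holder class and its API are assumptions for illustration):

public class MGeneratorHolder implements Serializable {
    private final String configJson;        // small, serializable recipe
    private transient MGenerator generator; // heavy, non-serializable object

    public MGeneratorHolder(String configJson) {
        this.configJson = configJson;
        this.generator = build(configJson);
    }

    private static MGenerator build(String json) {
        return new MGenerator(new Gson().fromJson(json, Config.class));
    }

    public MGenerator get() {
        return generator;
    }

    // Rebuild the transient object after default deserialization, so it is
    // constructed on the task manager instead of being shipped over the wire.
    private void readObject(java.io.ObjectInputStream in)
            throws java.io.IOException, ClassNotFoundException {
        in.defaultReadObject();
        this.generator = build(configJson);
    }
}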
Using open is usually the right place to load external lookup sources. The timeout is a bit odd; maybe there is a configuration around it.
However, if the object is huge, using a static loader (either a static class as you did, or a singleton) has the benefit that you only need to load it once for all parallel instances of the task on the same task manager. Hence, you save memory and CPU time. This is especially true for you, as you use the same data structure in two separate tasks. Further, the static loader can be lazily initialized when it's used for the first time, to avoid the timeout in open.
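A minimal sketch of such a lazy static loader, again reusing the MGenerator/Config names from the question (the holder class itself is illustrative):

public final class LazyMGenerator {
    private static volatile MGenerator instance;

    private LazyMGenerator() {}

    // Called from open()/flatMap(); the expensive load happens once per
    // task manager JVM, no matter how many parallel subtasks use it.
    public static MGenerator get() {
        if (instance == null) {
            synchronized (LazyMGenerator.class) {
                if (instance == null) {
                    final Config config = new Gson().fromJson("{}", Config.class);
                    instance = new MGenerator(config);
                }
            }
        }
        return instance;
    }
}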
The clear downside of this approach is that the testability of your code suffers. There are some ways around that, which I could expand on if there is interest.
I don't see a benefit in using the proxy serializer pattern. It's unnecessarily complex (custom serialization in Java) and offers little benefit.

How to pass parameters to a Camel route?

Is it possible to pass parameters to a Camel route? For instance, in the following code snippet:
public class MyRoute extends RouteBuilder {
    public void configure() throws Exception {
        from("direct:start")
            .to("cxf:bean:inventoryEndpoint?dataFormat=PAYLOAD");
    }
}
The value for dataFormat is hard-coded, but what if I want to set it dynamically, passing a value from the code where the route is used? I know this is possible by adding a constructor and passing parameters through it, like this:
public class MyRoute extends RouteBuilder {
    private String type;

    public MyRoute(String type) {
        this.type = type;
    }

    public void configure() throws Exception {
        from("direct:start")
            .to("cxf:bean:inventoryEndpoint?dataFormat=" + type);
    }
}
Is there another way?
Thanks so much!
As you mentioned, you can use a constructor (or setters, or any other Java/framework instruments) if the parameters are static from a Camel point of view.
Such parameters are configurable in the application, but once the application is started they no longer change. So every message processed by the Camel route uses the same value.
In contrast, when the parameters are dynamic, i.e. they can change for every processed message, you can use Camel's dynamic endpoint toD(). These endpoint addresses can contain expressions that are computed at runtime. For example, the route
from("direct:start")
.toD("${header.foo}");
sends messages to a dynamic endpoint, taking the address from the message header named foo.
Or, to use your example:
.toD("cxf:bean:inventoryEndpoint?dataFormat=${header.dataFormat}");
This way you can set the data format for every message individually through a header.
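For illustration, a caller could then set that header per message, e.g. via a ProducerTemplate (a hedged sketch; the endpoint and header names follow the question):

ProducerTemplate template = camelContext.createProducerTemplate();
template.sendBodyAndHeader("direct:start", body, "dataFormat", "PAYLOAD");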
You can find more about dynamic endpoints on this Camel documentation page.

How to inject HttpServletRequest into a Spring AOP request (custom scenario)?

I know the standard way of writing an AOP advice around a controller method, and that you can get access to the HttpServletRequest argument if it is declared in the controller method.
But my scenario is that I have a translation service that is currently session-scoped, maintaining the user's locale for translation. I feel this makes the service stateful, and I also do not want it to be session-scoped, as I think it is really a singleton. But there are multiple places where the translation service's methods are called, so I do not want to change the signatures to add a request/locale parameter. The problem is that the callers of the translation service's methods do not have access to HttpServletRequest (they are not controller methods). Can I write an aspect around the translation service methods and somehow magically get access to HttpServletRequest, regardless of whether it is available in the caller's context or not?
@Service
public class TranslationService {
    public void translate(String key) {
        ...
    }
}

@Aspect
@Component
public class LocaleFinder {
    @Pointcut("execution(* TranslationService.translate(..))")
    private void fetchLocale() {}

    @Around("fetchLocale()") // in parameter list
    public void advice(JoinPoint joinpoint, HttpServletRequest request) { .... }
}
Now, if the caller of translate does not have an HttpServletRequest, can't I get the request in the advice? Is there a workaround?
Can I write an aspect around the translation service methods and somehow magically get access to HttpServletRequest regardless of whether it is available in the caller's context or not?
Not easily. Actually, it would require a lot of effort.
The easy way to do it is to rely on RequestContextHolder. On every request, the DispatcherServlet binds the current HttpServletRequest to a static ThreadLocal object in the RequestContextHolder. You can retrieve it when executing within the same thread with:
HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder.currentRequestAttributes()).getRequest();
You can do this in the advice() method and therefore don't need to declare a parameter.
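A minimal sketch of the aspect with that lookup, assuming the TranslationService pointcut from the question:

@Aspect
@Component
public class LocaleFinder {

    @Around("execution(* TranslationService.translate(..))")
    public Object advice(ProceedingJoinPoint pjp) throws Throwable {
        // Resolve the request that DispatcherServlet bound to the current thread
        final HttpServletRequest request =
                ((ServletRequestAttributes) RequestContextHolder.currentRequestAttributes()).getRequest();
        final Locale locale = request.getLocale(); // e.g. derive the user's locale here
        // ... expose the locale to the translation, e.g. via a ThreadLocal
        return pjp.proceed();
    }
}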
You should be able to autowire an HttpServletRequest in your aspect. Spring provides a proxy to the current thread-local request instance that way.
So just add:
@Autowired private HttpServletRequest request;
to your aspect. Better yet, use constructor injection, as sketched below.
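For example (a sketch; Spring injects a thread-local proxy, so each invocation sees the request of the current thread):

@Aspect
@Component
public class LocaleFinder {

    private final HttpServletRequest request; // proxy to the current request

    public LocaleFinder(HttpServletRequest request) {
        this.request = request;
    }

    // advice methods can now read this.request safely per thread
}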
