Handling scenarios when exposing a route as a REST servlet service - apache-camel

I have used REST servlet binding to expose a route as a service.
I have used employeeClientBean as a POJO, wrapping the actual call to the employee REST service within it, so it basically plays the role of a service client.
Based on the method name passed, I call the respective method of the employee REST service through employeeClientBean.
I want to know how I can handle the scenarios added as comments in the block of code below.
I am new to Camel, but felt that POJO binding is better because it does not couple us to Camel-specific APIs such as Exchange and Processor, or to any specific components.
However, I am not sure how I can handle these scenarios and return appropriate JSON responses to the user of the route service.
Can someone help me with this?
public void configure() throws Exception {
    restConfiguration().component("servlet").bindingMode(RestBindingMode.json)
        .dataFormatProperty("prettyPrint", "true")
        .contextPath("camelroute/rest").port(8080);

    rest("/employee").description("Employee Rest Service")
        .consumes("application/json").produces("application/json")

        .get("/{id}").description("Find employee by id").outType(Employee.class)
        .to("bean:employeeClientBean?method=getEmployeeDetails(${header.id})")
        // How to handle and return a response to the user of the route service for the following scenarios for get/{id}:
        // 1. The passed id is not a valid one as per the system
        // 2. Failure to return details due to some issue

        .post().description("Create a new Employee").type(Employee.class)
        .to("bean:employeeClientBean?method=createEmployee");
        // How to handle and return the correct response to the user of the route service for the following scenarios:
        // 1. The employee being created already exists in the system
        // 2. Some of the fields of the passed employee do not satisfy the constraints on them
        // 3. Failure to create an employee due to issues on the server side (e.g. DB failure)
}
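A minimal sketch of one common way to map such failures to HTTP status codes in the REST DSL, assuming the bean throws an EmployeeNotFoundException (an illustrative type, not part of the original code):

// Sketch only: declared inside configure(), before the rest() definitions.
onException(EmployeeNotFoundException.class)
    .handled(true)
    .setHeader(Exchange.HTTP_RESPONSE_CODE, constant(404))
    .setBody(simple("{\"error\":\"Employee with id ${header.id} not found\"}"));

onException(Exception.class)
    .handled(true)
    .setHeader(Exchange.HTTP_RESPONSE_CODE, constant(500))
    .setBody(simple("{\"error\":\"${exception.message}\"}"));

Depending on the binding mode, it can be cleaner to return a small error POJO instead of a raw JSON string, so the JSON data format renders the body consistently.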

I fear you are putting Camel to bad use - as per the Apache documentation, the REST module supports consumer implementations, e.g. reading from a REST endpoint, but NOT writing back to a caller.
For your use case you might want to switch frameworks. Syntactically, Ratpack goes in that direction.

Related

Domain driven design database validation in model layer

I'm creating a design for a Twitter application to practice DDD. My domain model looks like this:
The user and tweet are marked blue to indicate that they are aggregate roots. Between the user and the tweet I want a bounded-context boundary; each will run in its respective microservice (auth and tweet).
To reference which user has created a tweet, but not run into a self-referencing loop, I have created the UserInfo object. The UserInfo object is created via events when a new user is created. It stores only the information about the user that the Tweet microservice will need.
When I create a tweet, I only provide the user id and the relevant fields to the tweet. With that user id I want to be able to retrieve the UserInfo object, via id reference, and use it in the various child objects, such as Mentions and Poster.
The issue I run into is persistence. At first glance I thought, "Just provide the UserInfo object in the tweet constructor and it's done; all the child aggregates have access to it." But it's a bit harder for the Mention class, since a Mention will contain a dynamic username like "@anyuser". To validate whether anyuser exists as a UserInfo object I need to query the database. However, I don't know who is mentioned before the tweet's content has been parsed, and that logic resides in the domain model itself and is called as a result of using the tweet's constructor. Without this logic, no mentions are extracted, so nothing can yet be validated.
If I cannot validate it before creating the tweet, because I need the extraction logic, and I cannot use the database repository inside the domain model layer, how can I validate the mentions properly?
Whenever an AR needs to reach outside of its own boundary to gather data, there are two main solutions:
You pass in a service to the AR's method which allows it to perform the resolution. The service interface is defined in the domain, but most likely implemented in the infrastructure layer.
e.g. someAr.someMethod(args, someServiceImpl)
Note that if the data is required at construction time you may want to introduce a factory that takes a dependency on the service interface, performs the validation and returns an instance of the AR.
e.g.
tweetFactory = new TweetFactory(new SqlUserInfoLookupService(...));
tweet = tweetFactory.create(...);
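A hedged Java sketch of that factory idea; the MentionParser helper and the lookup-service names are illustrative rather than an established API, while Tweet and UserInfo are the domain types from the question:

import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Domain-layer service interface; an infrastructure implementation such as
// SqlUserInfoLookupService would talk to the database.
interface UserInfoLookupService {
    Optional<UserInfo> findByUsername(String username);
}

class TweetFactory {
    private final UserInfoLookupService userInfoLookup;

    TweetFactory(UserInfoLookupService userInfoLookup) {
        this.userInfoLookup = userInfoLookup;
    }

    Tweet create(String posterId, String content) {
        // MentionParser is an assumed domain helper that extracts "@name" tokens from the text.
        List<UserInfo> mentions = new ArrayList<>();
        for (String username : MentionParser.extract(content)) {
            userInfoLookup.findByUsername(username).ifPresent(mentions::add);
        }
        return new Tweet(posterId, content, mentions);
    }
}

The factory keeps the database lookup behind a domain-defined interface, so the Tweet itself never touches a repository.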
You resolve the dependencies in the application layer first, then pass the required data. Note that the application layer could take a dependency on a domain service in order to perform some reverse resolutions first.
e.g.
If the application layer would like to resolve the UserInfo for all mentions, but can't because it doesn't know how to parse mentions within the text, it could always rely on a domain service or value object to perform that task first, then resolve the UserInfo dependencies and provide them to the Tweet AR. Be cautious here not to leak too much logic into the application layer, though. If the orchestration logic becomes intertwined with business logic, you may want to extract such use-case processing logic into a domain service.
Finally, note that any data validated outside the boundary of an AR is always considered stale. The @xyz user could exist right now, but no longer exist (e.g. be deactivated) 1ms after the tweet was sent.

Query construction in custom Solr component plugin

I have developed a Solr component which expands the user's query and adds additional clauses to it. For this expansion we make a request to external REST APIs. The query expansion logic is mainly in the prepare() method. Everything works as expected in standalone mode. When we deploy this plugin in a SolrCloud environment, each shard calls the external REST API for query expansion.
My question is: can we make only one call to the external REST API, since the same request is sent from each shard to the external service? How can we modify our component to make only one call per search request?
In the prepare() method, right before your external API call, you can check ResponseBuilder's isDistrib flag. This boolean will be true for a request that is about to be distributed. You can then use this information to determine whether you can just execute the external request yourself or need to designate one of the SolrCloud hosts to do this job.
How to determine the SolrCloud host to use for the external API? You could...
Hardwire one of the hosts into the component and check whether localhost is the hardwired host. That would unbalance the host load, though.
Use an arbitrary rule that any component can check for itself, e.g. with hosts 1-10, a host fires the external request when the current minute equals its number.
Or roll a die in the search frontend, pass the chosen hostname to Solr via a query parameter, and have the component check this parameter (get it from ResponseBuilder.req.getParams()) against its local hostname.
You can get really creative there.
After you get an answer from the external API, you can use modifyRequest() to update all the other hosts with the results.
Please read more in the Solr Wiki.
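A rough Java sketch of the prepare() check described above; the class name and helper are illustrative, and the exact distributed-request signal varies between Solr versions (the shards.purpose parameter used here is only set on per-shard sub-requests), so treat it as an assumption rather than a verified recipe:

import java.io.IOException;
import org.apache.solr.common.params.ShardParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;

// Sketch only: expand the query on the node that received the original request,
// and skip the external call on per-shard sub-requests.
public class QueryExpansionComponent extends SearchComponent {

    @Override
    public void prepare(ResponseBuilder rb) throws IOException {
        SolrParams params = rb.req.getParams();
        // Per-shard sub-requests carry shards.purpose; the original client request does not.
        boolean isShardSubRequest = params.get(ShardParams.SHARDS_PURPOSE) != null;
        if (!isShardSubRequest) {
            expandQueryViaExternalApi(rb); // assumed helper wrapping the external REST call
        }
    }

    @Override
    public void process(ResponseBuilder rb) throws IOException {
        // nothing to do per shard in this sketch
    }

    @Override
    public String getDescription() {
        return "Query expansion via an external REST API";
    }

    private void expandQueryViaExternalApi(ResponseBuilder rb) {
        // call the external service and rewrite/augment the query parameters here
    }
}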
The approach you can use is as follows:
Rewrite your component as a requestHandler (or just a wrapper over the standard requestHandler).
Make your requestHandler aware of a special flag which will trigger / not trigger your own custom logic. What I mean looks like this (I know it is not fancy):
public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
    ...
    SolrParams params = req.getParams();
    // Only call the external API if it has not already been called for this request
    if (params.get("apiCallWasSent") == null) {
        makeApiCall(req, rsp);
        // Mark the request so forwarded/sub-requests skip the external call
        ModifiableSolrParams modifiable = new ModifiableSolrParams(params);
        modifiable.add("apiCallWasSent", "true");
        req.setParams(modifiable);
    }
    ...
    super.handleRequestBody(req, rsp);
}
In my opinion, those additional query clauses and everything related to the query itself should be handled by your own QParserPlugin, but a component can also handle those clauses.

Cakephp 3 process code after page response

One of the requirements of the project I am working on is that I log the connections made to the site. Due to the amount of processing being done to gather as much information as possible, I would like to do this processing after the page has been sent back to the user.
At the moment I am running my code in the afterFilter of my appController:
public function afterFilter(Event $event){
$log_request = new RequestsController;
$log_request->log_request();
}
I am attempting to run this in afterRender of my appController:
public function afterRender(Event $event, $viewFile){
$log_request = new RequestsController;
$log_request->log_request();
}
But I cannot seem to get the code to execute, or if it does, I do not know how to find out what error is being thrown.
If somebody can point me towards an example of this being done or a concurrent method of doing this (it needs to be logged within a second of the request) I would appreciate it.
$log_request = new RequestsController; - you don't instantiate controllers inside controllers. You should learn the MVC design pattern first when using an MVC-based framework, or you'll end up with a non-maintainable piece of horrible spaghetti code. I recommend you do the blog tutorial to get a basic understanding.
If somebody can point me towards an example of this being done or a concurrent method of doing this (it needs to be logged within a second of the request) I would appreciate it.
Read this chapter: CakePHP Logging. Taken from there:
Logging data in CakePHP is easy - the log() function is provided by the LogTrait, which is the common ancestor for many CakePHP classes. If the context is a CakePHP class (Controller, Component, View,...), you can log your data. You can also use Log::write() directly.
Add the LogTrait to the AppController, pass the request to the log() method, and configure the logging to write these requests wherever you prefer, either in afterRender() or, if you want to do it really late, in __destruct().

How to capture original endpoint URI within an expression (Recipient List EIP)

I'm attempting to use the Recipient List EIP to dynamically generate the consumer endpoint URI at runtime, based on configuration entries in a database (http://camel.apache.org/how-to-use-a-dynamic-uri-in-to.html). I've got a number of routes that I want to handle this way, so I'd like to build something that can handle multiple routes generically.
Therefore, my idea is to keep an in-memory map of these URI values, keyed on some identifying information (the original endpoint URI seems like a logical choice), which would be updated if/when the database is updated to keep the routes in sync and avoid going to the database for every exchange. Using the RouteBuilder, I set up the route with the recipient list and a Bean expression.
from(endpointUri).recipientList(bean(MyBean.class, "getUri"));
I know that I can capture various objects such as the exchange, body, headers (as long as I know the name), etc. using bean binding for the getUri method. Is it possible to somehow get the original endpoint URI value so that I can use it as a key to fetch the correct consumer endpoint?
The Exchange interface has a getFromEndpoint() method which returns an Endpoint. The Endpoint interface has a getEndpointUri() method which returns a String. Perhaps that's what you need? If that's not sufficient, you could set header value(s) at some point and then retrieve them later in your route.
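For illustration, a minimal sketch of such a bean, assuming Camel 2.x where Exchange.getFromEndpoint() is available (the logic that refreshes the map from the database is left out):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.camel.Exchange;

// Sketch only: key the in-memory URI map on the consuming endpoint's URI.
public class MyBean {

    // original endpoint URI -> current target URI; refreshed whenever the database changes (not shown)
    private final Map<String, String> targetUris = new ConcurrentHashMap<>();

    public String getUri(Exchange exchange) {
        String fromUri = exchange.getFromEndpoint().getEndpointUri();
        return targetUris.get(fromUri); // value consumed by recipientList()
    }
}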

Design question on dynamic Apache camel routes/context

We have ActiveMQ, onto which the events that happen in the system are published. The project involves users adding entities to their watch-list, and whenever there are events on those entities I would like an email to be sent out to the interested participants.
The use case roughly translates to someone expressing an interest in a product information page in the catalog, and an email being sent whenever any activity happens on that product (the price goes down, there is a positive review, etc.). I had modelled this interaction as a Camel route.
So, for example, if the user says "email me whenever this product's rating equals 5", then the following route would be added to the Camel context:
from("activemq:topic:events.product.save").filter().xpath("/object[<object id>]/rating").isEqualTo("5").to("email:<user's email>")
Similarly, if the user wants to be notified whenever there is a new comment on a product, another route would be created, and so on. This could potentially end up creating thousands of routes as each user starts adding their watches of interest.
Some questions that I have are:
Is this an acceptable way of creating dynamic routes? One option I am considering is to use recipient lists, but I haven't been able to come up with a solution that makes it elegant to route messages to the bean that would return the recipient list. For example, for the case explained above, would the bean have a bunch of if-else statements to decide which recipient list to return?
The CamelContext has a method to load routes from an XML file but no method to persist the existing routes. What would be the simplest (and most efficient) way to persist these dynamically created routes? This thread in the camel-users list sums up my request.
Given the dynamic nature of your subscription requirements, you should use a database to store the information rather than trying to create dynamic routes. This is a much more scalable/appropriate use of technology...
Then you only need a single static route or a POJO consumer (see below) that can process the product update messages using a simple POJO bean (bean binding can help, etc.). The POJO bean would then be responsible for querying the database to find all "interested" users and sending an email using camel-mail.
public class NotificationBean {

    @Consume(uri = "activemq:topic:events.product.save")
    public void onUpdate(@XPath("/object/id") String id,
                         @XPath("/object/rating") String rating) {
        //query database for subscriptions for this product ID/rating, etc.
        //for each interested subscriber
        //send email (camel-mail, etc)
    }

    public void addSubscription(String productID, Integer rating, String email) {
        //create/update subscription entry in database, etc...
    }
}
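For completeness, the single static route alternative mentioned above could be sketched as follows, assuming the same NotificationBean (bean binding resolves the @XPath parameters):

from("activemq:topic:events.product.save")
    .bean(NotificationBean.class, "onUpdate");

In that variant the @Consume annotation would be dropped, since the route itself does the consuming.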
