It seems like both serve the same purpose. Is there any difference that makes one useful in certain situations and not the other?
In practice, they are very similar, but a Processor is more limited than a Bean. I generally use a Processor for simple use cases that just interact with the Exchange. Also, inline processors are a great way to interact without having to create a separate class.
Beans provide more flexibility and also support a true POJO approach. This allows you to more easily integrate with existing APIs (just need to convert the inputs/outputs to match, etc).
Beans also provide great features/flexibility with regards to Camel routing/EIP integration, including...
a rich set of bindings that lets you quickly bind data from the Exchange to the parameters of a bean method, etc.
POJO consuming/producing lets you interact with endpoints in a reusable manner
used as expressions/predicates (for POJO EIP implementation...filters, etc)
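For illustration, here is a minimal sketch contrasting the two styles; the route URIs and the UpperCaseService class are invented for the example:

    import org.apache.camel.Exchange;
    import org.apache.camel.Processor;
    import org.apache.camel.builder.RouteBuilder;

    public class ProcessorVsBeanRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Inline Processor: full access to the Exchange, but Camel-specific.
            from("direct:withProcessor").process(new Processor() {
                public void process(Exchange exchange) {
                    String body = exchange.getIn().getBody(String.class);
                    exchange.getIn().setBody(body.toUpperCase());
                }
            });

            // Bean: Camel binds the message body to the method parameter.
            from("direct:withBean").bean(UpperCaseService.class, "toUpper");
        }

        // A plain POJO that knows nothing about Camel.
        public static class UpperCaseService {
            public String toUpper(String body) {
                return body.toUpperCase();
            }
        }
    }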
Boils down to a matter of preference, I'd say. I generally opt for the POJO approach and so I started using beans to do my processing, but over time I've slowly moved to using Processors.
I was feeling pain in the following cases:
Bean methods with more than one parameter
Trying to get data out of the exchange params / the message headers
I know that Camel 2.8 takes out some of the pain of these cases by allowing annotations in your bean which guide Camel on how to call your bean's methods. I didn't want to go this route -- felt wrong to put Camel annotations into a bean that shouldn't care that it's being called by Camel.
In the end we created an annotation-free, client-agnostic bean and a very thin Processor that pulls everything it needs from Camel and passes it to that bean.
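As a rough sketch of that split (the class names and the header key are hypothetical), the Processor stays paper-thin and the bean stays Camel-free:

    import org.apache.camel.Exchange;
    import org.apache.camel.Processor;

    // Annotation-free, client-agnostic bean: plain inputs, plain output.
    class BookingValidator {
        public boolean isValid(String booking, String source) {
            return booking != null && booking.length() > 0;
        }
    }

    // Thin Processor that pulls what the bean needs out of the Exchange.
    public class BookingValidationProcessor implements Processor {
        private final BookingValidator validator = new BookingValidator();

        public void process(Exchange exchange) {
            String body = exchange.getIn().getBody(String.class);
            String source = exchange.getIn().getHeader("source", String.class);
            exchange.getIn().setHeader("valid", validator.isValid(body, source));
        }
    }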
Just my 2 cents - the bean route really isn't a bad one - it'll do the job just as well (esp in 2.8)
EDIT
Many improvements have been made to Camel's use of POJOs to process messages since this was written - this answer may no longer be applicable.
I am slowly getting familiar with Camel; however, I am struggling to understand the level of granularity at which it should be considered. Should Camel be used only for passing messages from one application to another, or is it also appropriate for passing messages between components and/or layers within a single application?
For example, I have a requirement to expose a web service that accepts bookings, validates them, and writes them to a queue. Would you recommend using Camel in this scenario, or does it really depend on the level of flexibility I want my solution to allow?
Put another way, if I were required to save the bookings to a database, I would never have considered Camel and would instead just have built it as a traditional app that calls a DAL to save the booking. Of course I could use camel-ibatis to insert the data, but in this context using Camel seems like overkill.
Thank you for any pointers on this.
As you obviously suspect, it's somewhat of a grey line. The more flexibility you need, the more benefit you'll get from using Camel.
Just this past week, I built a prototype of an app that needed to accept an HTTP post, put the data on a queue, and then pull messages from the queue and use them to update a Mongo database.
Initially I used Camel, and it worked well. Then the requirement for the HTTP POST was removed (it became just consuming messages from a queue and updating the database), and the database update became more complex than was easily supported via a simple string-based Camel Mongo endpoint spec, so I wound up doing away with Camel and rewrote it with just a JMS connector and the Mongo API.
So, as usual, it depends. I would say that if you're just moving data between two endpoints, and there are no content-based decisions or routing, then you probably won't benefit from using Camel. Once you actually want to use one or more of the Enterprise Integration Patterns, then Camel will be a benefit.
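To make that concrete, here is a minimal sketch of a Content-Based Router, the kind of EIP that tips the balance toward Camel (the queue names and header key are invented):

    import org.apache.camel.builder.RouteBuilder;

    public class BookingRouter extends RouteBuilder {
        @Override
        public void configure() {
            // Content-Based Router: the destination depends on the message.
            from("jms:queue:bookings")
                .choice()
                    .when(header("bookingType").isEqualTo("priority"))
                        .to("jms:queue:priorityBookings")
                    .otherwise()
                        .to("jms:queue:standardBookings");
        }
    }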
Because Camel has lots of components, it looks like it can do everything. But sometimes it is more straightforward to use the third-party library directly, especially if you don't have much business logic that could leverage the Enterprise Integration Patterns.
The real benefit of Camel is that you can focus on how to route your messages to meet your business logic needs without caring about the implementation details of the components.
As suggested, there are no hard rules... here is my take:
use Camel to simplify technology challenges
complex message routing algorithms (EIPs)
technology integrations (components)
use Camel for these types of requirements
highly event based processes (EIPs)
exposing multiple interfaces to biz logic (http, file, jms, etc)
complex runtime management needs (lifecycle, policies)
that said...
don't use for just one simple use case
don't add unnecessary complexity
have a quorum of use cases/reasons/justification to use it
Along these lines, I presented the following at ApacheCon, focused on why/how to use Camel:
http://www.consulting-notes.com/2014/06/apachecon-2014-presentation-apache.html
I'm working on my first ActiveMQ deploy (actually the broker is Apollo). I'd like to use pooled connections as eventually we will have lots of producers and consumers, perhaps in the same VM, sending and receiving lots of messages.
But reading http://activemq.apache.org/how-do-i-use-jms-efficiently.html, it really is not clear to me what the best path to efficiency is:
1) camel?
2) spring?
3) PooledConnectionFactory? Is this class even compatible with Tomcat? It sure likes to throw exceptions.
JCA does not look like an option since Tomcat 6.x is not supported.
Tomcat does not really care about, or even know anything about, JMS. So the standard approach would do, as you stated.
As far as performance is concerned, Camel and Spring (which the Camel JMS component is based upon) do not really add anything; rather the other way around. These frameworks instead add a lot of convenience when it comes to writing complex JMS applications. The PooledConnectionFactory class (or even the generic Spring class CachingConnectionFactory) enables reuse of connections and sessions, and hence boosts performance when using Spring-based JMS frameworks such as Camel.
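A minimal sketch of wiring a pooled factory into the Camel JMS component in plain Java (the broker URL and pool size are example values):

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.pool.PooledConnectionFactory;
    import org.apache.camel.component.jms.JmsComponent;
    import org.apache.camel.impl.DefaultCamelContext;

    public class JmsSetup {
        public static void main(String[] args) throws Exception {
            // Plain factory pointing at the broker.
            ActiveMQConnectionFactory amq =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

            // Pool connections and sessions so they are reused
            // instead of created per message.
            PooledConnectionFactory pooled = new PooledConnectionFactory();
            pooled.setConnectionFactory(amq);
            pooled.setMaxConnections(8);

            DefaultCamelContext context = new DefaultCamelContext();
            context.addComponent("jms", JmsComponent.jmsComponent(pooled));
            // ... add routes, then context.start() ...
        }
    }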
We have a few different applications that store their data, and we need one common service that provides access to that data.
By applications I mean, for example, Atlassian Jira, Confluence, SVN, Git, LDAP, a few internal MySQL databases, etc. Some of them offer a SOAP API or a REST API or various command-line clients; for some you have to access the database directly to get the data.
What we want is a common REST API interface to access all possible data sources. Of course, we have to solve authentication and authorization, caching, and many more tasks.
It seems that something like an ESB (Enterprise Service Bus) with EIP (Enterprise Integration Patterns) is the answer to our needs.
For a start, we are playing with and actually digging into Apache Camel - it's not a full EIP stack, it's "just" an integration framework. But I guess it's good enough for us right now.
My question is: what do you think of this solution? Are we on the right track?
Thanks!
Camel has a lot of connectors, so that would be a great start.
If you are afraid it is too thin, then take a look at Apache ServiceMix, which provides a deployment (OSGi) container for Camel routes (and other things). Camel comes bundled within the standard ServiceMix release out of the box.
The hard task is probably designing a generic API that is good enough to cover your use cases.
A Git repo and a database are very different; can one API really be that generic? Do you only want to access "text" data, or something more?
I like the approach with Camel nonetheless, since it's rather generic and flexible, which is exactly what you will need in these kinds of scenarios.
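As a rough illustration of that flexibility, a single Camel route could act as the common REST entry point and fan out to the different back ends; the host names, paths, and routing conditions below are all invented:

    import org.apache.camel.builder.RouteBuilder;

    public class FacadeRoute extends RouteBuilder {
        @Override
        public void configure() {
            // One HTTP entry point; dispatch on the request path.
            from("jetty:http://0.0.0.0:8080/api?matchOnUriPrefix=true")
                .choice()
                    .when(header("CamelHttpPath").startsWith("/jira"))
                        .to("http://jira.internal/rest/api/2/search?bridgeEndpoint=true")
                    .when(header("CamelHttpPath").startsWith("/ldap"))
                        .to("direct:ldapLookup")
                    .otherwise()
                        .setHeader("CamelHttpResponseCode", constant(404));
        }
    }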
Let's say I have a REST API which can be accessed via e.g. mypage.com/v1/users/1234
And I am using Java EE and HttpServlets for this REST API.
Is it a good idea to send all v1 requests to a single servlet and then pass them to my own structure, to be more independent and maybe later switch more easily from HttpServlets to something else? Or is it better to create and register one servlet per resource type, so one for mypage.com/v1/users and one for mypage.com/v1/cars, and so on?
Is it much slower or less efficient to use only one servlet, or just inconvenient?
Maintenance is going to become very difficult very quickly. Take a look at Jersey or RESTEasy. The learning curve is small, and JAX-RS takes care of a whole lot of the boilerplate code you would have to write in vanilla servlets.
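For a sense of how little code that leaves, here is a minimal JAX-RS resource (the class and its stub body are made up for the example):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // One resource class per resource type; the JAX-RS runtime
    // (Jersey, RESTEasy) handles the URL routing a servlet would.
    @Path("/v1/users")
    public class UserResource {
        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public String getUser(@PathParam("id") long id) {
            // Real look-up logic would go here; stub for illustration.
            return "{\"id\": " + id + "}";
        }
    }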
I would like to develop a web-app requiring data persistence using GWT and GAE. As I understand it, my only (or at least by far the most convenient) option for data persistence is GAE's Datastore, using JDO or JPA annotated objects. I would also like to be able to send my objects back and forth client-server using GWT Remote Procedure Calls (RPC), therefore my objects must be able to "detach". However, GWT RPC serialization cannot handle detached JDO/JPA objects and it doesn't appear as though it will in the near future.
My question: what is the simplest and most direct solution to this? Being able to share the same objects client/server with server-side persistence would be extremely convenient.
EDIT
I should clarify that I still wish to use GWT RPC with GAE's Datastore. I am just looking for the best solution that would allow all these technologies to work together.
Try using http://gilead.sourceforge.net/
I've recently found Objectify, which is designed to be a replacement for JDO. I don't have much experience with it yet, but it's simpler to use than JDO, seems more lightweight, and claims to get around the need for DTOs with GWT, though I haven't tried that particular feature yet.
Ray Cromwell has a temporary hack up. I've tried it, and it works.
It forces you to use Transient instead of Detachable entities, because GWT can't serialize a hidden Object[] used by DataNucleus. This means that the objects you send to the client can't be inserted back into the datastore; you must retrieve the actual datastore object and copy all the persistent fields back into it. Ray's method uses reflection to iterate over the methods, retrieve the getBean() and setBean() methods, and apply the entity's setBean() with your transient GWT object's getBean().
You should strive to use JDO; the JPA support isn't much more than a wrapper for now. To use this hack, you must have both getter and setter methods for every persistent field, using proper getBean and setBean syntax for every "bean" field. Well, almost proper, as it assumes all getters will start with "get", when the default for boolean fields is "is".
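In spirit, the copy-back step looks something like this rough reflection sketch (not Ray's actual code; getters without a matching setter are simply skipped):

    import java.lang.reflect.Method;

    public class PersistentFieldCopier {
        // For every getX() on the transient GWT object, invoke the
        // matching setX() on the entity fetched from the datastore.
        public static void copyFields(Object fromGwt, Object toEntity)
                throws Exception {
            for (Method getter : fromGwt.getClass().getMethods()) {
                if (getter.getName().startsWith("get")
                        && getter.getParameterTypes().length == 0
                        && !getter.getName().equals("getClass")) {
                    String setterName = "set" + getter.getName().substring(3);
                    try {
                        Method setter = toEntity.getClass()
                                .getMethod(setterName, getter.getReturnType());
                        setter.invoke(toEntity, getter.invoke(fromGwt));
                    } catch (NoSuchMethodException e) {
                        // No matching setter; not a bean field, skip it.
                    }
                }
            }
        }
    }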
I've fixed this issue and posted a comment on Ray's blog, but it's awaiting approval and I'm not sure if he'll post it. Basically, I implemented a @GetterPrefix(prefix=MethodPrefix.IS) annotation in the org.datanucleus package to augment his work.
In case it doesn't get posted, and this is an issue, email x_AT_aiyx_DOT_info Re: @GetterPrefix for JDO and I'll send you the fix.
A while ago I wrote a post, Using an ORM or plain SQL?
This came up last year in a GWT application I was writing. Lots of translation from EclipseLink to presentation objects in the service implementation. If we were using ibatis it would've been far simpler to create the appropriate objects with ibatis and then pass them all the way up and down the stack. Some purists might argue this is Bad™. Maybe so (in theory) but I tell you what: it would've led to simpler code, a simpler stack and more productivity.
which basically matches your observation.
But of course that isn't an option with Google App Engine so you're pretty much stuck having a translation layer between client-side objects and your JPA entities.
JPA entities are quite rigid, so they're not really appropriate for sending back and forth to the client anyway. Typically you want little bits from several entities when doing this (thus ending up with some sort of presentation-layer value object). That is your path forward.
Try this. It is a module for serializing GAE core types and sending them to the GWT client.
You can consider using JSON. GWT has the necessary API to parse and generate JSON strings on the client side. There are plenty of JSON APIs for the server side. I tried google-gson, which is fine. It converts your JSON string to a POJO model and vice versa. Hope this helps you arrive at a decent solution for your requirement.
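A quick sketch of that round trip with google-gson (the Booking class is invented for the example):

    import com.google.gson.Gson;

    public class JsonRoundTrip {
        // Plain POJO whose shape is shared by client and server.
        static class Booking {
            String customer;
            int seats;
        }

        public static void main(String[] args) {
            Gson gson = new Gson();

            // Server side: POJO -> JSON string for the wire.
            Booking booking = new Booking();
            booking.customer = "alice";
            booking.seats = 2;
            String json = gson.toJson(booking);

            // And back again: JSON string -> POJO.
            Booking parsed = gson.fromJson(json, Booking.class);
            System.out.println(json + " -> " + parsed.customer);
        }
    }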
Currently, I use the DTO (Data Transfer Object) pattern. Not necessarily as clean, and plenty more boilerplate, but GAE still requires a fair amount of boilerplate at the moment. ;)
I have a Domain Object mapped (usually) one-to-one with a DTO. When a client needs Domain info, a DAO (Data Access Object) coughs up a DTO representation of the Domain Object and sends that across the wire. When a DTO comes back, I hand the DAO the DTO, which then updates all the appropriate Domain Objects.
Not as clean as being able to pass Domain Objects directly across the wire, obviously, but the limitations of GAE's JDO implementation and GWT's serialization process mean this is the cleanest way for me to handle it currently.
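In outline, with hypothetical names, the pattern looks like this: the Domain Object stays on the server, a flat DTO crosses the wire, and the DAO translates between the two:

    public class UserDao {

        // JDO-managed domain object (persistence annotations omitted).
        public static class User {
            Long id;
            String name;
        }

        // GWT-serializable twin with no persistence baggage.
        public static class UserDto implements java.io.Serializable {
            public Long id;
            public String name;
        }

        public UserDto toDto(User user) {
            UserDto dto = new UserDto();
            dto.id = user.id;
            dto.name = user.name;
            return dto;
        }

        public void applyDto(UserDto dto, User user) {
            // Copy the editable fields back onto the attached domain object.
            user.name = dto.name;
        }
    }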
I believe Google's official answer for this is GWT 2.1 RequestFactory.
Given that you are using GWT and GAE, I'd suggest you stick to the official Google framework... I have a similar GWT/GAE-based app and that's what I am doing.
By the way, setting up RequestFactory is a bit of a pain in the ass. The current Eclipse plug-in doesn't include all the jars, but I was able to find the help I needed on Stack Overflow.
I've been using Objectify as well, and I really like it. You still have to do some dancing around with pre/postLoad methods to translate e.g. Text to String and back.
Since GWT ultimately compiles to JavaScript, detached persistence would need one of the few services available. The best known are HTML5 storage and Gears (both use SQLite!). Of course, neither is widely deployed, so you'd have to convince your users to either use a modern browser or install a little-known plugin. Be sure to degrade to a usable subset if the user doesn't comply.
What about directly using the Datastore API to load/store POJO domain objects?
It should be comparable to the DTO approach, meaning e.g. that you have to manually handle all fields (unless you use tricks like reflection-based automation), while it should give you more flexibility and full access to all Datastore features.
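A minimal sketch of that manual mapping with the low-level Datastore API (the Booking kind and its properties are invented):

    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;
    import com.google.appengine.api.datastore.EntityNotFoundException;
    import com.google.appengine.api.datastore.Key;

    public class BookingStore {
        private final DatastoreService datastore =
                DatastoreServiceFactory.getDatastoreService();

        // Each field is mapped by hand to an Entity property.
        public Key save(String customer, long seats) {
            Entity entity = new Entity("Booking");
            entity.setProperty("customer", customer);
            entity.setProperty("seats", seats);
            return datastore.put(entity);
        }

        public String loadCustomer(Key key) throws EntityNotFoundException {
            Entity entity = datastore.get(key);
            return (String) entity.getProperty("customer");
        }
    }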