Apache Camel vs Apache NiFi

I have been using Apache Camel for quite a long time and have found it to be a fantastic solution for all kinds of system-integration-related business needs. But a couple of years back I came across Apache NiFi. After some googling I found that although NiFi can work as an ETL tool, it is actually meant for stream processing.
In my opinion, "which is better" is a bad question to ask, as the answer depends on many things. But it would be nice if somebody could describe the basic comparison between the two, and also answer the obvious question: when to use what.
That would help me decide which is the better option for my current requirement, or whether I should use both of them together.

The biggest and most obvious distinction is that NiFi takes a no-code approach - 99% of NiFi users will never see a line of code. It is a web-based GUI with a drag-and-drop interface for building pipelines.
NiFi can perform ETL, and can be used in batch use cases, but it is geared towards data streams. It is not just about moving data from A to B; it can do complex (and performant) transformations, enrichments and normalisations. It comes out of the box with support for many specific sources and endpoints (e.g. Kafka, Elastic, HDFS, S3, Postgres, Mongo, etc.) as well as generic sources and endpoints (e.g. TCP, HTTP, IMAP, etc.).
NiFi is not just about messages - it works natively with a wide array of different formats, and can also be used for binary data and large files (e.g. moving multi-GB video files).
NiFi is deployed as a standalone application - it's not a framework, API or library that you integrate into something else. It is a fully self-contained, realised application that is fully featured out of the box with no additional development, though it can be extended with custom development if required.
NiFi is natively clustered - it expects (though isn't required) to be deployed on multiple hosts that work together as a cluster for performance, availability and redundancy.
So, the two tools are used quite differently - hopefully that helps highlight some of the key differences.

It's true that there is some functional overlap between NiFi and Camel, but they were designed very differently:
Apache NiFi is a data processing and integration platform that is mostly used centrally. It has a low-code approach and prefers configuration.
Apache Camel is an integration framework which is mostly used in distributed solutions. Solutions are coded in Java. Example solutions are adapters, flows, APIs, connectors, cloud functions and so on.
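To make that contrast concrete, here is a minimal sketch of what a Camel solution looks like in the Java DSL (the endpoint URIs and directory names are just illustrative placeholders):

```java
import org.apache.camel.builder.RouteBuilder;

// A minimal Camel route: consume files from one directory,
// log each one, and write it to another directory.
public class FileMoveRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:data/inbox")              // source endpoint
            .log("Processing ${file:name}")  // simple-language expression for the file name
            .to("file:data/outbox");         // target endpoint
    }
}
```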
They can be used very well together. Especially when using a message broker like Apache ActiveMQ or Apache Kafka.
An example: a Java application is enhanced with Camel so that it can send messages to Kafka. In NiFi, the first step is consuming those messages from Kafka. Then, in the NiFi flow, the message is changed in various steps. In the middle, the message is put on another Kafka topic. A Camel function (Camel K) in the cloud performs various operations on the message and, when finished, puts it on a Kafka topic. The message then goes through a NiFi flow which at the end calls an API created with Camel.
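As a hedged sketch of the first step of that example (the Java application publishing to Kafka with Camel), a route like the following could be used; the topic name, broker address and the direct: endpoint are assumptions for illustration:

```java
import org.apache.camel.builder.RouteBuilder;

// Publishes messages handed over by application code to a Kafka topic,
// from which a NiFi flow would then consume them.
public class ToKafkaRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:orders")                               // invoked from application code
            .marshal().json()                               // serialize the payload as JSON (camel-jackson)
            .to("kafka:orders-in?brokers=localhost:9092");  // topic consumed by NiFi
    }
}
```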
In a blog post I describe in detail the various ways to combine Camel and NiFi:
https://raymondmeester.medium.com/using-camel-and-nifi-in-one-solution-c7668fafe451

Related

What is mediation engine?

What is the mediation engine referred to in Camel's documentation (link below)?
https://camel.apache.org/manual/latest/faq/what-is-camel.html
A use-case example too would be greatly appreciated.
The mediation engine referenced in this context originates from the topic of Enterprise Application Integration and is closely related to what the GoF Mediator pattern does - that is, encapsulate communication between entities. In the case of EAI, a mediator/mediation engine sits between multiple disparate systems and acts as a broker between them, instead of letting the systems communicate directly.
The mediation approach in EAI offers capabilities like:
Reduced coupling between systems. For instance, you do not have to learn and implement a legacy mainframe protocol in modern systems just because you want to get some data from mainframes. A mediation engine like Apache Camel could talk REST over HTTPS at one end and some archaic mainframe protocol at the other.
Ease of migration: once the mainframes are replaced with something else, you could just change the mediation layer accordingly, instead of modifying the multiple impacted systems that used to talk to the mainframes.
Access to a single resource/service via multiple channels: let's say you have an old system that currently does SOAP over HTTP, but you would like to offer REST with JSON payloads to some of your new customers. Instead of building completely new systems up front for this purpose, you could throw in Apache Camel as a mediator: it would accept JSON payloads at one end and SOAP at the other. Whoever wants to talk JSON can go through Camel, and whoever wants to do SOAP may continue with a direct connection to the legacy system. Someday, if some hypothetical FooBar protocol becomes popular, and if Apache Camel provides a FooBar component, the users who demand FooBar support could be routed through Camel to the system that still speaks SOAP.
All of this is discussed in detail on the Enterprise Integration Patterns site and in the book of the same name. Apache Camel implements truckloads of the patterns described in the EIP book. I hope this answer helps you understand the mediation role Apache Camel can play in enterprise IT ecosystems.
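To illustrate the JSON/SOAP mediation described above, here is a hedged sketch of such a Camel route; the endpoint addresses are placeholders, and JsonToSoapMapper is a hypothetical mapping bean, not a real Camel class:

```java
import org.apache.camel.builder.RouteBuilder;

// Accept JSON over HTTP on one side, call the legacy SOAP service on the other.
public class MediationRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jetty:http://0.0.0.0:8080/customers")    // modern clients POST JSON here
            .unmarshal().json()                        // parse the JSON payload (camel-jackson)
            .bean(JsonToSoapMapper.class)              // hypothetical bean mapping to the legacy request type
            .to("cxf:bean:legacyCustomerService");     // existing SOAP endpoint, configured elsewhere
    }
}
```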
From Camel in Action:
The core feature of Camel is its routing and mediation engine. A routing engine selectively moves a message around, based on the route's configuration. In Camel's case, routes are configured with a combination of enterprise integration patterns and a domain-specific language.
The link https://camel.apache.org/manual/latest/faq/what-is-camel.html lists projects that can act as components in a Camel route, to and from which messages can be sent and consumed (e.g. https://camel.apache.org/components/latest/activemq-component.html, https://camel.apache.org/components/latest/cxf-component.html).
Apache Camel is a kind of ESB middleware. Mediation with respect to Camel means the following:
Data format transformation: if application A speaks JSON and application B understands CSV, you can use Apache Camel to transform JSON to CSV (see the sketch after this list).
Protocol transformation: if application A only knows how to call web services but application B prefers reading data from a message queue, you can use Apache Camel to receive the data by exposing a web service and then push it to a queue for application B to consume.
Content transformation - filtering or enriching data: during this transformation process, you can also transform the data by filtering or enriching fields based on what application B needs. This way no change is required in A, as it sends what it has, and no change is required in B, as it gets what it needs.
Connectors: many ESBs now have built-in connectors to connect directly to ERP or SaaS-based applications - for example, a Kafka connector: https://camel.apache.org/blog/Camel-Kafka-connector-intro/
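Here is a minimal sketch of the JSON-to-CSV case from the first bullet, using the camel-jackson and camel-csv data formats; the file paths are illustrative, and depending on the shape of the JSON an intermediate mapping step may be needed:

```java
import org.apache.camel.builder.RouteBuilder;

// Pick up JSON files from application A, convert them to CSV for application B.
public class JsonToCsvRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:data/from-app-a")     // JSON files produced by application A
            .unmarshal().json()          // JSON -> Map/List structure (camel-jackson)
            .marshal().csv()             // Map/List -> CSV lines (camel-csv)
            .to("file:data/for-app-b");  // CSV files for application B to consume
    }
}
```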

Do Apache Camel and Apache Airflow overlap and how do they compare?

We are currently using Apache Camel for ETL; that is, we take daily/weekly/monthly exports from various databases, perform the needed actions, and then publish the results somewhere for other databases to ingest.
Recently I saw a talk on Apache Airflow, and it seems to me that it can do the work Camel is doing, only more easily. By easier I mean it looks like it would be more self-documenting and therefore easier to maintain. Am I correct? And why are there no comparisons between the two, like there are between Camel and Mule?
Apache Camel and Apache Airflow were written for different purposes: the former as an Enterprise Integration Framework, the latter as a platform to programmatically author, schedule and monitor workflows. This is why they are not generally compared side by side.
Apache Camel can be used for ETL: think of ETL as a process integrating the operational DB and the data warehouse, and think of each step in the ETL data-processing pipeline as a message.
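As a hedged sketch of that idea (every name below - the cron expression, datasource, query, mapper bean and output directory - is an assumption for illustration), a scheduled extract-transform-publish route might look like:

```java
import org.apache.camel.builder.RouteBuilder;

// Nightly ETL as messages: extract rows, transform them, publish the result.
public class NightlyExportRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("quartz:nightly?cron=0+0+2+*+*+?")         // trigger at 02:00 every night
            .setBody(constant("SELECT * FROM orders"))  // the extract query
            .to("jdbc:operationalDb")                   // run it against the registered datasource
            .bean(OrderRowMapper.class)                 // hypothetical transform step
            .to("file:exports/orders");                 // publish for the warehouse to ingest
    }
}
```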
Would it be easier to perform the task we are doing now if we changed to Airflow? Well, generally, how well suited a framework is to a specific company's needs depends on how things are set up on site. In our case we have chosen Java, and we want our processes to run on Windows machines and on Linux. The comparison then becomes:
Camel's main advantages are that we are already using it, it's Java, and there is even Spring Boot auto-configuration.
Its main disadvantage is that it is hard to maintain: understanding what exactly happens when, and why, is hard. This is not directly caused by the features Camel has as an Enterprise Integration Framework, but because it is not tailored to simplify workflows.
Airflow is specifically written with scheduling interdependent jobs in mind; it even has a GUI to simplify this task.
For us it would require additional installations, and it may not work with our Java-written jobs out of the box (I know that it is possible to call Java from Python, but that just adds more complexity).
For my needs I'm going to explore other options, and maybe just leave things the way they are.
It depends on the type of problem(s) you are looking to solve. Apache Camel is an enterprise integration framework that implements well-known, accepted Enterprise Integration Patterns to provide specific solutions to well-known types of problems.
Apache Airflow does not implement these integration patterns and therefore would be less useful in solving these specific types of problems.
From my experience with Camel, it is often misused as a generic platform to solve non-enterprise-integration problems, which leads to dealing with the unnecessary overhead and constraints of the framework.
Using your ETL problem as an example, I would think that Apache Camel would be unnecessary unless you were doing some form of Message Routing or Message Transformation of the data that would warrant/benefit from an integration solution such as Camel. The solutions that Apache Camel offers for these well-known integration problems are the real benefit of using Apache Camel over another tool, or over doing it by hand (see the sketch below).
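For instance (a hedged sketch; the file endpoints and header logic are illustrative), the Content-Based Router is the kind of EIP that Camel gives you out of the box and that would be tedious to rebuild by hand:

```java
import org.apache.camel.builder.RouteBuilder;

// Route each incoming export file to a different target based on its name.
public class ExportRouter extends RouteBuilder {
    @Override
    public void configure() {
        from("file:exports/incoming")
            .choice()                                            // Content-Based Router EIP
                .when(header("CamelFileName").endsWith(".csv"))
                    .to("file:exports/csv")
                .when(header("CamelFileName").endsWith(".xml"))
                    .to("file:exports/xml")
                .otherwise()
                    .to("file:exports/unknown");
    }
}
```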
TL;DR: to answer your question, Apache Camel is an Enterprise Integration Framework for solving specific types of integration problems, and Apache Airflow is not. That is likely why there is no comparison between the two - they are apples and oranges, in a sense.
While you may be able to do some of the same things in both, Apache Camel will also offer complex integration solutions out of the box that Airflow won't.

How to design/develop an integration layer or bus for different external services/apps

We are currently looking into replacing one of our apps with possibly an ESB or some similar tool and was looking for some insights into how best to approach this.
We currently have a standalone service that consumes/interacts with different external services and data sources, some delivered through SOAP web services and others through a plain DB connection. This service is exposed through SOAP, and we have other apps that consume it but are very tightly coupled to it. Now we also have other apps that need to consume some of the external services, and we would like to replace all of this with an ESB or some sort of SOA platform.
What would be the best way to replace this 'external' services integration layer with an ESB? We were thinking of having a 'global' contract/API in which all of the services we consume are exposed as one single contract, with all the possible operations and data structures exposed under one single namespace. Would this be the best way of approaching it? And if so, are there any tools that could help us automate this process, or do we basically have to handcraft this contract/API? This would also mean that for any changes to the underlying services/APIs we would have to update this new API as well.
If not, then the other option I see is to basically use the ESB as a 'proxy' layer in which all of our sources are exposed as they are, so we would end up with several different contracts/API endpoints - but I don't really see the value in this.
Also, given the above, what would be the best tool for the job? Is a full-blown ESB overkill, or are we much better off rolling our own using something like Apache Camel or Spring Integration?
A few more details:
We are currently integrating over 5 different external services, with more to come in the future.
Only a couple of apps consume our current app at the moment, but several other apps/systems will need to consume some of these external services in the future.
We are currently using a single method of communication (SOAP) between these services, but some apps might use pub/sub messaging in the future, although SOAP will remain the main protocol.
I am new to ESB integration so I apologize in advance if I'm misunderstanding a lot of these technologies and the problems they are meant to solve.
Any help/tips/pointers will be greatly appreciated.
Thanks.
You need to put some design thought into what you want to achieve over time.
There are multiple benefits and potential pitfalls with an ESB introduction.
Here are some typical benefits/use cases:
When your applications are hard to change or have very different release cycles, it's convenient to have an ESB in the middle that can adapt to the changes quickly. This is very much the case when your organization buys a lot of COTS products and cloud services that might come with an update the next day that breaks the current API.
When you need to adapt data from one master data system to several other systems that might not support the same interfaces: e.g. the CRM system might want data imported via web services as soon as it's available, the ERP wants data through db/staging tables, and the production system wants data every weekend in a flat file delivered via FTP. To keep the master data system clean and easy to maintain, just implement one single integration service in the master data system, and adapt that interface to the various other applications within the ESB platform instead (see the sketch after this list).
Aggregation or splitting of data from various sources to protect your sensitive systems can be a use case. Say you have an old system that can only take small updates of information at a time, and it's not worth upgrading it - then an integration solution that can do aggregation, splitting or throttling can be a good fit.
Other benefits and use cases include the ability to track and wire-tap every message passing between systems - which can even be used together with business intelligence tools to gather KPIs.
A conceptual ESB can also introduce a canonical message format that is used by all services that need to communicate. If a lot of applications share the same data with several other applications (not only point to point), the benefits of a canonical message format can outweigh the cost (which is/can be high). An ESB server can be useful for dealing with canonical data, as it is usually very good at mapping from one format to another.
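As a hedged sketch of the master-data fan-out from the second bullet above (all endpoint URIs are assumptions, and ErpStagingMapper is a hypothetical mapping bean), a Camel version could look like this:

```java
import org.apache.camel.builder.RouteBuilder;

// One clean feed from the master data system, adapted to three consumers.
public class MasterDataFanOut extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:masterdata.updates")   // the single integration service's output
            .multicast()                       // copy each update to every adapter route
            .to("direct:toCrm", "direct:toErp", "direct:toProduction");

        from("direct:toCrm")
            .to("cxf:bean:crmImportService");  // CRM wants an immediate web-service call

        from("direct:toErp")
            .bean(ErpStagingMapper.class)      // hypothetical mapping to staging-table rows
            .to("jdbc:erpStagingDb");          // ERP reads from db/staging tables

        from("direct:toProduction")            // in practice this branch would be batched weekly
            .marshal().csv()
            .to("ftp://prod@ftp.example.com/inbox");  // flat file delivered via FTP
    }
}
```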
However, introducing an ESB without a plan for the benefits you are trying to achieve is not a good thing, since it introduces overhead: you need another server to keep alive, perhaps another team to understand all the data flows, and particular knowledge of your integration product. Finally, you need some governance around it so that your ESB initiative does not drift away from the goals/benefits you have foreseen.
You should choose a technology that you are comfortable with - or think you can become comfortable with. Apache Camel is indeed very powerful and my favorite integration engine - but it's not an ESB, as it does not come with a runtime that you can use to deploy/manage/monitor your integration services. You can use it together with most Java EE application servers or, even better, Apache ServiceMix (= Karaf + Camel + ActiveMQ + CXF), which is built for this task.
The same goes for Spring Integration - you need to run it somewhere: app servers or whatnot.
There is a large set of different products, both open source and commercial, that do these things.

How to integrate various application and provide common interface to access their data?

We have a few different applications that store their data, and we need one common service that provides access to that data.
By applications I mean, for example, Atlassian Jira, Confluence, SVN, Git, LDAP, a few internal MySQL databases, etc. Some of them offer a SOAP API, a REST API or various command-line clients; for some you have to access the database directly to get the data.
What we want is a common REST API interface to access all possible data sources. Of course, we also have to solve authentication and authorization, caching and many more tasks.
It seems that something like an ESB (Enterprise Service Bus) and EIP (Enterprise Integration Patterns) is the answer to our needs.
For a start, we are playing with and actually digging into Apache Camel - it's not a full EIP stack, it's "just" an integration framework. But I guess it's good enough for us right now.
My question is: what do you think about this solution? Are we on the right track?
Thanks!
Camel has a lot of connectors, so that would be a great start.
If you are afraid it is too thin, then take a look at Apache ServiceMix, which provides a deployment (OSGi) container for Camel routes (and other things). Camel comes bundled within the standard ServiceMix release out of the box.
The hard task is probably to design a generic API good enough to cover your use cases.
A Git repo and a database are very different - can this be made truly generic? Do you only want to access "text" data, or something more?
I like the approach with Camel nonetheless, since it's rather generic and flexible - which is what you will need in these kinds of scenarios.
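As a hedged sketch of what such a common interface could look like with Camel's REST DSL (all paths, ports and backend endpoints are assumptions, and GitLogService is a hypothetical bean wrapping a command-line client):

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.rest.RestBindingMode;

// One REST facade, with each resource delegating to a source-specific route.
public class CommonApiRoute extends RouteBuilder {
    @Override
    public void configure() {
        restConfiguration().component("jetty").port(8080).bindingMode(RestBindingMode.json);

        rest("/api")
            .get("/issues/{key}").to("direct:jira")
            .get("/commits/{repo}").to("direct:git");

        from("direct:jira")   // proxy Jira's own REST API, passing the issue key through
            .toD("https://jira.example.com/rest/api/2/issue/${header.key}");

        from("direct:git")    // hypothetical bean wrapping a git command-line client
            .bean(GitLogService.class);
    }
}
```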

Apache SOA vs. Mule

I'm looking for a high-level technical gap analysis of the Apache ESB/SOA stack (ServiceMix, Camel, ActiveMQ, CXF) vs. comparable Mule technologies.
As well, I'm trying to better understand how these frameworks are viewed among developers in terms of learning curve, stability, scalability and overall ability to meet client requirements.
It's not really an answer, but too long to be added as a comment.
Gartner does such comparisons (example), so does Forrester (example1; example2), but their papers are:
expensive to obtain
focusing more on the market share and the hype, less on the technical capability to deliver a solution
mainly about commercial products - maybe because market share for open source is difficult to measure (no licenses sold)
I personally have experience with Oracle Fusion (bad), Tibco (better) and Vitria (outdated), but I'm not up to the challenge to do a detailed comparison...
Camel uses a Java domain-specific language, in addition to Spring XML, for configuring the routing rules and providing Enterprise Integration Patterns.
Camel's API is smaller & cleaner (IMHO) and is closely aligned with the APIs of JBI, CXF and JMS; it is based around message exchanges (with in and optional out messages), which maps more closely to REST, WS, WSDL & JBI than the UMO model Mule is based on.
Camel allows the underlying transport details to be easily exposed (e.g. the JmsExchange, JbiExchange, HttpExchange objects expose all the underlying transport information & behaviour if it's required).
Camel supports an implicit type converter in the core API to make it simpler to connect components that require different types of payload & headers (see the sketch after this list).
Camel uses the Apache 2 License rather than Mule's more restrictive commercial license
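Regarding the implicit type converter point, a small sketch (the processor itself is illustrative): Camel converts the payload on request via its type-converter registry, so components exchanging different body types can still be wired together.

```java
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

// Whatever form the body arrived in (bytes, stream, file, ...),
// asking for a String triggers Camel's implicit type conversion.
public class TrimBodyProcessor implements Processor {
    @Override
    public void process(Exchange exchange) {
        String body = exchange.getIn().getBody(String.class);  // implicit conversion
        exchange.getIn().setBody(body.trim());
    }
}
```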
Mulesoft Anypoint is a ready-to-use, full-stack integration platform. The Apache components functionally provide similar capabilities but generally take more time to implement and support. Both allow dropping down to the Spring/Java level, so there are no true technical gaps in either. The choice will depend on the business goals, available budget, and the scope and number of the integration projects. Mule offers better time to market and is easier to operate, but ain't particularly cheap. The Apache stack is free, but developers' time (generally) is not.
Camel is an EAI framework and doesn't have its own runtime; Mule, on the other hand, is a full ESB product with its own runtime. Mule has lots of connectors to integrate with other systems and stands on its own as a lightweight ESB. Developers have full liberty to write their own connectors or invoke existing Java libraries to avoid rework.