What are the advantages and disadvantages of using Python or Java when developing Apache Flink Stateful Functions?
Is there any performance difference? Which one is more efficient for the same operation?
Can we develop the application completely in Python?
What are the features that one supports and the other does not?
StateFun supports embedded functions and remote functions.
Embedded functions are bundled and deployed within the JVM processes that run Flink. Therefore they must be implemented in a JVM language (like Java), and they are the most performant option. The downside is that any change to the function code requires a restart of the Flink cluster.
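To make that concrete, here is a minimal sketch of an embedded function written against the Java SDK (this assumes the StateFun 2.x embedded API; the function type and greeting logic are just for illustration):

```java
import org.apache.flink.statefun.sdk.Context;
import org.apache.flink.statefun.sdk.FunctionType;
import org.apache.flink.statefun.sdk.StatefulFunction;
import org.apache.flink.statefun.sdk.annotations.Persisted;
import org.apache.flink.statefun.sdk.state.PersistedValue;

public class GreeterFunction implements StatefulFunction {
  // the logical address that ingresses and other functions use to reach us
  public static final FunctionType TYPE = new FunctionType("example", "greeter");

  // per-address state that Flink checkpoints for us
  @Persisted
  private final PersistedValue<Integer> seenCount =
      PersistedValue.of("seen-count", Integer.class);

  @Override
  public void invoke(Context context, Object input) {
    int seen = seenCount.getOrDefault(0) + 1;
    seenCount.set(seen);
    // reply to whoever sent us the message
    context.reply("Hello, " + input + "! You are visitor #" + seen);
  }
}
```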
Remote functions are functions that execute in a separate process and are invoked by the Flink cluster for every incoming message addressed to them. They are therefore expected to be less performant than embedded functions, but they provide great flexibility in:
Choosing an implementation language
Fast scaling up and down
Fast restart in case of a failure.
Rolling upgrades
Can we develop the application completely in Python?
It is possible to develop an application completely in Python; see the Python greeter example.
What are the features that one supports and the other does not?
The following features are currently supported only in the Java SDK:
Richer routing logic from an ingress to a function: any routing logic that you can describe in code.
A few more state types, such as a table and a buffer.
Exposing existing Flink sources and sinks as ingresses and egresses.
I'd appreciate some advice around the use of Stateful Functions.
We are currently using Flink: we consume from a number of Kafka streams, aggregate, run a computation, and then output to a new stream.
The problem is that the computation element is provided by a different team whose language of choice is Python. We would like to give them the ability to develop and update their component independently of the streaming elements.
Initially, we just ported their code to Java.
Stateful Functions seems to offer an alternative here: we would keep some of our functionality as is and host the model as a stateful function in Python. I'm wondering, however, whether there is any advantage to this over just hosting the computation module in its own pipeline and using an AsyncFunction in Flink to interact with it.
If we were to move to Stateful Functions, I can't help feeling that we would be adding complexity without using its power, but I may be missing some important considerations around speed and resilience.
I want to begin by noting that Stateful Functions has a DataStream interop module. This means you can use StateFun to handle the Python functions of your pipeline without rewriting the entire Flink job.
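Roughly, the interop looks like the following sketch (based on the statefun-flink-datastream module; the function type, service URI, batching values, and egress id here are illustrative assumptions, not prescribed):

```java
import java.net.URI;
import java.time.Duration;

import org.apache.flink.statefun.flink.datastream.RequestReplyFunctionBuilder;
import org.apache.flink.statefun.flink.datastream.RoutableMessage;
import org.apache.flink.statefun.flink.datastream.RoutableMessageBuilder;
import org.apache.flink.statefun.flink.datastream.StatefulFunctionDataStreamBuilder;
import org.apache.flink.statefun.flink.datastream.StatefulFunctionEgressStreams;
import org.apache.flink.statefun.sdk.FunctionType;
import org.apache.flink.statefun.sdk.io.EgressIdentifier;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class InteropSketch {
  // illustrative names: the other team's remote Python function, and a typed egress
  static final FunctionType PYTHON_COMPUTE = new FunctionType("example", "compute");
  static final EgressIdentifier<String> RESULTS =
      new EgressIdentifier<>("example", "results", String.class);

  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // wrap an existing DataStream as a StateFun ingress by addressing each element
    DataStream<RoutableMessage> ingress =
        env.fromElements("a", "b", "c")
            .map(v -> RoutableMessageBuilder.builder()
                .withTargetAddress(PYTHON_COMPUTE, v)
                .withMessageBody(v)
                .build());

    StatefulFunctionEgressStreams out =
        StatefulFunctionDataStreamBuilder.builder("example")
            .withDataStreamAsIngress(ingress)
            // the Python function runs as a separate service, reached over HTTP
            .withRequestReplyRemoteFunction(
                RequestReplyFunctionBuilder.requestReplyFunctionBuilder(
                        PYTHON_COMPUTE, URI.create("http://python-service:8000/statefun"))
                    .withMaxRequestDuration(Duration.ofSeconds(15))
                    .withMaxNumBatchRequests(500))
            .withEgressId(RESULTS)
            .build(env);

    // the other side of StateFun is again a plain DataStream
    DataStream<String> results = out.getDataStreamForEgressId(RESULTS);
    results.print();
    env.execute();
  }
}
```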
That said, what advantages does Stateful Functions bring over using AsyncIO and doing it yourself?
Automated handling of connections, batching, back-pressure, and retries. Even if you are using a single Python function and no state, Stateful Functions has been heavily optimized to be as fast and efficient as possible, with continual improvements from the community that you get to leverage for free. StateFun has more sophisticated back-pressure and retry mechanisms in place than AsyncIO, which you would otherwise need to redevelop on your own.
Higher-level APIs. StateFun's Python SDK (and the others) provides well-defined, typed APIs that are easy to develop against. The other team you are working with will only need a few lines of glue code to integrate with StateFun, while the project handles the transport protocols for you.
State! As the name of the project implies, stateful functions are, well, stateful. Python functions can maintain state, and you get Flink's exactly-once guarantees out of the box.
I have been using Apache Camel for quite a long time and have found it to be a fantastic solution for all kinds of system-integration needs. But a couple of years back I came across Apache NiFi. After some googling, I found that though NiFi can work as an ETL tool, it is actually meant for stream processing.
In my opinion, "which is better" is a bad question to ask, as the answer depends on many things. But it would be nice if somebody could describe the basic comparison between the two, and also answer the obvious question: when to use which?
It would help me decide, given my current requirements, which is the better option in my context, or whether I should use both of them together.
The biggest and most obvious distinction is that NiFi is a no-code approach: 99% of NiFi users will never see a line of code. It is a web-based GUI with a drag-and-drop interface for building pipelines.
NiFi can perform ETL, and can be used in batch use cases, but it is geared towards data streams. It is not just about moving data from A to B, it can do complex (and performant) transformations, enrichments and normalisations. It comes out of the box with support for many specific sources and endpoints (e.g. Kafka, Elastic, HDFS, S3, Postgres, Mongo, etc.) as well as generic sources and endpoints (e.g. TCP, HTTP, IMAP, etc.).
NiFi is not just about messages - it can work natively with a wide array of different formats, but can also be used for binary data and large files (e.g. moving multi-GB video files).
NiFi is deployed as a standalone application - it's not a framework, API, or library that you integrate into something else. It is a fully self-contained, realised application that is fully featured out of the box with no additional development, though it can be extended with custom development if required.
NiFi is natively clustered - it expects (though doesn't require) to be deployed on multiple hosts that work together as a cluster for performance, availability, and redundancy.
So the two tools are used quite differently - hopefully that helps highlight some of the key differences.
It's true that there is some functional overlap between NiFi and Camel, but they were designed very differently:
Apache NiFi is a data processing and integration platform that is mostly used centrally. It has a low-code approach and prefers configuration.
Apache Camel is an integration framework that is mostly used in distributed solutions. Solutions are coded in Java. Example solutions are adapters, flows, APIs, connectors, cloud functions, and so on.
They can be used very well together, especially when using a message broker like Apache ActiveMQ or Apache Kafka.
An example: a Java application is enhanced with Camel so that it can send messages to Kafka. In NiFi, the first step is consuming those messages from Kafka. Then, in the NiFi flow, the message is changed in various steps. In the middle, the message is put on another Kafka topic. A Camel function (Camel K) in the cloud performs various operations on the message and, when it's finished, puts the message on a Kafka topic. The message then goes through a NiFi flow which, at the end, calls an API created with Camel.
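To make the Camel side of such a chain concrete, a minimal hypothetical route might look like this (the endpoint names and broker address are assumptions):

```java
import org.apache.camel.builder.RouteBuilder;

public class OrdersToKafkaRoute extends RouteBuilder {
    @Override
    public void configure() {
        // pick up events produced inside the application...
        from("direct:orders")
            // ...serialize them as JSON (requires a JSON dataformat, e.g. camel-jackson)...
            .marshal().json()
            // ...and publish them to the Kafka topic that the NiFi flow consumes
            .to("kafka:orders?brokers=localhost:9092");
    }
}
```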
In a blog post I describe in detail the various ways to combine Camel and NiFi:
https://raymondmeester.medium.com/using-camel-and-nifi-in-one-solution-c7668fafe451
We are currently using Apache Camel for ETL; that is, we take daily/weekly/monthly exports from various databases, perform the needed actions, and then publish the results somewhere for other databases to ingest.
Recently I saw a talk on Apache Airflow, and it seems to me that it can do the work Camel is doing, only more easily. By easier I mean it looks like it would be more self-documenting and therefore easier to maintain. Am I correct? And why are there no comparisons between the two, like there are between Camel and Mule?
Apache Camel and Apache Airflow were written for different purposes: the former as an Enterprise Integration Framework, the latter as a platform to programmatically author, schedule, and monitor workflows. This is why they are not generally compared side by side.
Apache Camel can be used for ETL: think of ETL as a process integrating the operational DB and the data warehouse, and think of each step in the ETL data-processing process as a message.
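As a hedged sketch of that idea, with each exported row flowing through the route as a message (the file path, per-row processing, and target endpoint are hypothetical):

```java
import org.apache.camel.builder.RouteBuilder;

public class DailyExportRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Extract: pick up the daily CSV export files
        from("file:exports/daily?include=.*\\.csv")
            // parse rows into lists of fields (requires camel-csv)
            .unmarshal().csv()
            .split(body())
                // Transform: each CSV row is now an individual message
                .process(exchange -> {
                    // hypothetical per-row cleanup/enrichment goes here
                })
                .convertBodyTo(String.class)
                // Load: publish the result for the downstream database to ingest
                .to("kafka:warehouse-ingest?brokers=localhost:9092")
            .end();
    }
}
```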
Would it be easier to perform the task we are doing now if we changed to Airflow? Well, generally, how well suited a framework is to a specific company's needs depends on how things are set up on site. In our case we have chosen Java, and we want our processes to run on Windows machines and on Linux. The comparison then becomes:
Camel's main advantages are that we are already using it, it's Java, and there is even a Spring Boot auto-configuration.
The main disadvantage is that it is hard to maintain: understanding what exactly happens when, and why, is hard. This is not directly caused by the features Camel has as an Enterprise Integration Framework, but because it is not tailored to simplifying workflows.
Airflow is specifically written with scheduling interdependent jobs in mind, it even has a GUI to simplify this task.
For us it would require additional installations, and it may not work with our Java-written jobs out of the box (I know that it is possible to call Java from Python, but this just adds more complexity).
For my needs, I'm going to explore other options and maybe just leave things the way they are.
It depends on the type of problem(s) you are looking to solve. Apache Camel is an enterprise integration framework that implements well-known, accepted Enterprise Integration Patterns to provide specific solutions to types of well known problems.
Apache Airflow does not implement these integration patterns and therefore would be less useful in solving these specific types of problems.
From my experience with Camel, it is often misused as a generic platform to solve non-enterprise-integration problems, which leads to dealing with the unnecessary overhead and constraints of the framework.
Using your ETL problem as an example, I would think that Apache Camel would be unnecessary unless you were doing some form of Message Routing or Message Transformation of the data that would warrant/benefit from using an integration solution such as Camel. The solutions that Apache Camel offers for these well-known integration problems are the real benefit to using Apache Camel over another tool or doing it by hand.
TL;DR: To answer your question, Apache Camel is an Enterprise Integration Framework for solving specific types of integration problems, and Apache Airflow is not. That is likely why there is no comparison between the two - they are apples and oranges, in a sense.
While you may be able to do some of the same things in both, Apache Camel will also have complex integration solutions out of the box that Airflow won't.
I am in search of a tutorial that shows how to set up basic machine learning with Apache Flink in Java. The currently available material is in Scala.
Flink's ML library does not support Java because its pipelining mechanism (being able to flexibly chain multiple Estimators and Transformers) heavily depends on Scala's implicit value resolution. Theoretically, it is possible to put the operations together manually, but this is quite tedious and not recommended.
More specifically, what use cases does Hazelcast Jet solve that Flink does not solve (equally well), and vice versa?
NOTE: I belong to Hazelcast Jet's core engineering team.
I'd say the main advantage of Hazelcast Jet isn't in offering a brand-new computing model, but in bringing the same level of convenience that Hazelcast is known for to the realm of DAG-based distributed computing.
If you currently have a Java application running in a cluster, adding Jet will be a snap: add the Maven dependency and write one line of code to start a Jet instance on the local member. The instances will self-discover to form their own cluster, and you can now submit your job to it.
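For example (assuming the Jet 4.x API):

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;

public class StartJet {
    public static void main(String[] args) {
        // starts a Jet node inside this JVM; instances on the same network
        // discover each other and form a cluster
        JetInstance jet = Jet.newJetInstance();
    }
}
```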
If you want a dedicated distributed computing cluster, you can download the distribution ZIP to the cluster machines. Jet has native support for the most popular cloud environments, allowing the nodes you start to self-discover. You can then connect to the cluster using a Jet client.
Needless to say, Jet makes it very convenient to use a Hazelcast IMap or IList as a data source. A Jet cluster can host Hazelcast structures directly; you then benefit from data locality and get the data with no network traffic. On the other hand, the choice of data source is completely unconstrained, and there is a public API dedicated to implementing fast, arbitrarily partitioned, custom data sources.
Jet solves the concerns of infinite streams processing like aggregating over time-based windows, dealing with reordered events and resilience to changes in the cluster topology (e.g., failure of individual Jet nodes) while maintaining the Exactly-Once processing guarantee.
Jet's main programming paradigm is the Pipeline API, which is quite similar to the java.util.stream API but adapted to the specifics of distributed computing (lambda serialization and other concerns).
The Pipeline API builds upon a lower-level DAG-based model that is also exposed as a public API.
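As a sketch of the Pipeline API, here is a classic word count (again assuming the Jet 4.x API; the IList/IMap names are illustrative):

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.Traversers;
import com.hazelcast.jet.aggregate.AggregateOperations;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

public class WordCount {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // read lines from a distributed IList, count words, write counts to an IMap
        p.readFrom(Sources.<String>list("lines"))
         .flatMap(line -> Traversers.traverseArray(line.toLowerCase().split("\\W+")))
         .filter(word -> !word.isEmpty())
         .groupingKey(word -> word)
         .aggregate(AggregateOperations.counting())
         .writeTo(Sinks.map("counts"));

        JetInstance jet = Jet.newJetInstance();
        try {
            jet.newJob(p).join();
        } finally {
            Jet.shutdownAll();
        }
    }
}
```

Note that the lambdas are serialized and shipped to the cluster, which is the main way the API diverges from java.util.stream.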
In my opinion, Flink seems to offer some very useful streaming features which are not yet offered by Hazelcast Jet:
Different flexible window operators, which can also handle out-of-order and late events.
Fault tolerance on the cluster and delivery guarantees.
Besides this, Flink also seems to be more stable and better known at the moment.
For example, you can use it as a runtime for Apache Beam and then migrate easily between Google Cloud Dataflow in the cloud and your own deployment.
So I would currently use Flink.