How to run one source per task manager (or per node)? - apache-flink

I have implemented a source which opens a fixed UDP port and listens on it. I want to run exactly one source per task manager (in my case I run one task manager per node), because otherwise a java.net.BindException: Address already in use will be thrown.
I noticed this problem when testing the HA of Apache Flink. When I shut down one task manager, Apache Flink started trying to run two sources with the same port on one node.
So, how do I run exactly one source per task manager (or per cluster node)?

It is currently not possible to dynamically enforce that exactly one task of a kind runs on each TaskManager. You can avoid multiple source tasks being scheduled to the same machine by setting the number of task slots per TaskManager to 1. However, if you then lose a machine and don't have a spare TaskManager, you won't have enough slots to restart the job.
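For reference, that setting lives in flink-conf.yaml:

    taskmanager.numberOfTaskSlots: 1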
Alternatively, you could write your sources such that they are more resilient. For example, you could simply stop a source if it cannot bind to the specified port. Assuming no other program binds to that port, you then know that another source task is already consuming data from it.
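A minimal sketch of that idea, using the (legacy) RichParallelSourceFunction API; the class name, port number and record type are illustrative, not taken from the question:

    import java.net.BindException;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.SocketException;
    import java.nio.charset.StandardCharsets;

    import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
    import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext;

    // Sketch: a UDP source that goes idle instead of failing the job when another
    // source task on the same host already owns the port.
    public class UdpSource extends RichParallelSourceFunction<String> {

        private static final int PORT = 9999;            // illustrative port
        private volatile boolean running = true;
        private transient DatagramSocket socket;

        @Override
        public void run(SourceContext<String> ctx) throws Exception {
            try {
                socket = new DatagramSocket(PORT);
            } catch (BindException e) {
                // Port already bound on this host: another source task is reading it,
                // so this subtask simply does nothing rather than throwing.
                return;
            }
            byte[] buf = new byte[65535];
            while (running) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                try {
                    socket.receive(packet);
                } catch (SocketException e) {
                    if (!running) {
                        break;                            // socket closed by cancel()
                    }
                    throw e;
                }
                String line = new String(packet.getData(), 0, packet.getLength(),
                        StandardCharsets.UTF_8);
                synchronized (ctx.getCheckpointLock()) {
                    ctx.collect(line);
                }
            }
        }

        @Override
        public void cancel() {
            running = false;
            if (socket != null) {
                socket.close();                           // unblocks receive()
            }
        }
    }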

Related

Camel file reading: race condition with 2 active servers

In our ESB project, we have a lot of routes reading files with the file2 or ftp protocol for further processing. It is important to note that the files we read locally (file2 protocol) live on network shares mounted via different protocols (NFS, SMB).
Now we are facing issues with race conditions: both servers read the file and process it. We have reduced the likelihood of that by using the preMove option, but from time to time the duplicate reading still occurs when both servers poll in the same millisecond. According to the documentation, an idempotentRepository together with readLock=idempotent could help, for example with Hazelcast.
However, I'm wondering whether this is a suitable solution for my issue, as I don't really know if it will work in all cases. Both servers read the file within milliseconds of each other, so the information that one server has already processed the file needs to be available in the Hazelcast grid at the point in time when the second server tries to read it. Is that possible? What happens if there are minimal latencies (e.g. network related)?
In addition to that, the setting readLock=idempotent is only available for file2 but not for ftp. How to solve that issue there?
Again: the issue is not preventing duplicate files in general, it is solely about preventing the race condition.
AFAIK the idempotent repository should, in your case, prevent both consumers from reading the same file.
The latency between detection of the file and the entry in Hazelcast is not relevant, because the file consumers do not record what they have read after the fact. Instead, they both ask the repository for an exclusive read lock up front. The first one wins; the second one is denied, so it continues to the next file.
If you want to minimize the potential of conflicts between the consumers you can turn on shuffle=true to randomize the ordering of files to consume.
For the problem of the missing readLock=idempotent on the ftp consumer: you could perhaps build a separate transfer route with only one consumer that downloads the files. Then your file-consumer route can process them idempotently.
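A sketch of what that could look like in the Java DSL (directory, repository name and route id are placeholders; "#fileRepo" is assumed to be a HazelcastIdempotentRepository from camel-hazelcast registered in the Camel registry, e.g. new HazelcastIdempotentRepository(hazelcastInstance, "inbox-read-lock")):

    import org.apache.camel.builder.RouteBuilder;

    public class SharedInboxRoute extends RouteBuilder {

        @Override
        public void configure() {
            from("file:/mnt/shared/inbox"
                    + "?readLock=idempotent"             // take the read lock through the shared repository
                    + "&idempotentRepository=#fileRepo"  // Hazelcast-backed, visible to both servers
                    + "&shuffle=true"                    // lower the chance both servers race for the same file
                    + "&preMove=.inprogress")
                .routeId("shared-inbox")
                .log("Processing ${file:name}")
                .to("direct:process");
        }
    }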

How to update the internal state of an nginx module at runtime?

Let's suppose I wish to write an nginx module that blocks clients by IP.
In order to do so, during the initialization stage I read a file with the IP addresses
that I have to block (a blacklist) and store it in the module's context.
Now I wish to update the blacklist without restarting nginx.
One possible solution is to add a handler on a specific location,
e.g. if the URI "/block/1.2.3.4" is requested, my handler adds the IP address 1.2.3.4 to the blacklist.
However, nginx runs several workers as separate processes, so only one particular worker will be updated.
What is a common pattern for coping with such problems?
But nginx does not require a restart (nor any downtime) in order to change the configuration!
See:
http://nginx.org/en/docs/control.html#reconfiguration
In order for nginx to re-read the configuration file, a HUP signal should be sent to the master process. The master process first checks the syntax validity, then tries to apply new configuration, that is, to open log files and new listen sockets. If this fails, it rolls back changes and continues to work with old configuration. If this succeeds, it starts new worker processes, and sends messages to old worker processes requesting them to shut down gracefully. Old worker processes close listen sockets and continue to service old clients. After all clients are serviced, old worker processes are shut down.
As an administrator, it would be my expectation that all modules would, in fact, be controlled in this way, too.
(Of course, if you require a lot of changes to the configuration very often, a different solution might be more appropriate.)
You give an explicit example of blocking access by IP. Are you sure you require a new module in order to accomplish the task? It would seem that a combination of the following standard directives might suffice already:
http://nginx.org/r/deny && http://nginx.org/r/allow
http://nginx.org/r/geo
http://nginx.org/r/map
http://nginx.org/r/if && http://nginx.org/r/return
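For instance, a minimal sketch (file path illustrative) that keeps the blacklist in a separate file, pulls it in with include, and is activated with a plain reload (nginx -s reload, i.e. a HUP to the master process):

    # /etc/nginx/blacklist.conf -- one entry per blocked client
    deny 1.2.3.4;
    deny 10.20.30.0/24;

    # in the relevant server/location block of nginx.conf
    include /etc/nginx/blacklist.conf;
    allow all;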
If you're able to move the blacklist outside of the module's context, perhaps to a system file, a KV store, or shared memory (SHM), each worker process could consult a single central blacklist. I believe shmat() and futex will do the job and the overhead will be negligible.

Polling a database versus triggering a program from the database?

I have a process wherein a program running in an application server must access a table in an Oracle database server whenever at least one row exists in this table. Each row of data relates to a client requesting some number crunching performed by the program. The program can only perform this number crunching serially (that is, for one client at a time rather than multiple clients in parallel).
Thus, the program needs to be informed of when data is available in the database for it to process. I could either
have the program poll the database, or
have the database trigger the program.
QUESTION 1: Is there any conventional wisdom why one approach might be better than the other?
QUESTION 2: I wonder if programs have any issues "running" for months at a time (would any processes on the server stop or disrupt the program from running? If so, I don't know how I'd learn there was a problem except from angry customers). Does anyone have experience running programs on a server for a long time without issues? Or, if the server does crash, is there a way to auto-start a program (i.e. a C language executable) after the server reboots, so that a human isn't required to start it manually?
Any advice appreciated.
UPDATE 1: Client is waiting for results, but a couple seconds additional delay (from polling) isn't a deal breaker.
I would like to give a more generic answer...
There is no right answer that applies every time. Sometimes you need a trigger, and sometimes it is better to poll.
But… 9 times out of 10, polling is much more efficient, safer and faster than triggering.
It's really simple. A trigger needs to instantiate a single program, of whatever nature, for every shot. That is just not efficient most of the time. Some people will argue that this is required when response time is a factor, but even then, half the time polling is better because:
1) Resources: With triggers, and say 100 messages, you will need resources for 100 threads, with 1 thread processing a packet of 100 messages you need resources for 1 program.
2) Monitoring: A thread processing packets can report time consumed constantly on a defined packet size, clearly indicating how it is performing and when and how is performance being affected. Try that with a billion triggers jumping around…
3) Speed: Instantiating threads and allocating their resources is very expensive. And don't get me started if you are opening a transaction for each trigger. A simple program processing a, say, 100-message packet will always be much faster than initiating 100 triggers…
4) Reaction time: With polling you cannot react to events immediately. So, the only exception to using polling is when a user is waiting for the message to be processed. But even then you need to be very careful, because if you have lots of clients doing the same thing at the same time, triggering might respond LATER than if you were doing fast polling.
My 2 cents. This has been learned the hard way…
1) have the program poll the database, since you don't want your database to be able to start host programs (because you'd have to make sure that only "your" program can be started this way).
The classic (and most convenient IMO) way for doing this in Oracle would be through the DBMS_ALERT package.
The first program would signal an alert with a certain name, passing an optional message. A second program which has registered for the alert would wait and receive it immediately after the first program commits. A rollback of the first program would cancel the alert.
Of course you can have many sessions signaling and waiting for alerts. However, an alert is a serialization device, so if one program has signaled an alert, other programs signaling the same alert name will be blocked until the first one commits or rolls back.
Table DBMS_ALERT_INFO contains all the sessions which have registered for an alert. You can use this to check if the alert-processing is alive.
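To make the handshake concrete, here is a rough sketch of the waiting side in JDBC (the alert name, connection details and processing helper are made up; a C program would issue the same calls through OCI or Pro*C). The signalling side would call DBMS_ALERT.SIGNAL('new_work', ...) after inserting a row and then COMMIT:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    public class AlertWaiter {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app", "secret")) {

                // register interest in the alert once per session
                try (CallableStatement reg = con.prepareCall("{ call DBMS_ALERT.REGISTER(?) }")) {
                    reg.setString(1, "new_work");
                    reg.execute();
                }

                while (true) {
                    try (CallableStatement wait =
                             con.prepareCall("{ call DBMS_ALERT.WAITONE(?, ?, ?, ?) }")) {
                        wait.setString(1, "new_work");
                        wait.registerOutParameter(2, Types.VARCHAR);  // message from SIGNAL
                        wait.registerOutParameter(3, Types.INTEGER);  // 0 = alert received, 1 = timeout
                        wait.setInt(4, 60);                           // wait up to 60 seconds
                        wait.execute();
                        if (wait.getInt(3) == 0) {
                            processPendingRows(con);                  // drain the table serially
                        }
                    }
                }
            }
        }

        private static void processPendingRows(Connection con) {
            // read the queued rows and run the number crunching one client at a time (omitted)
        }
    }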
2) autostarting or background execution depends on your host platform and OS. In Windows you can use SRVANY.EXE to run any executable as a service.
I recommend using a C program to poll the database and a utility such as monit to restart the C program if there are any problems. Your C program can touch a file once in a while to indicate that it is still functioning properly, and monit can monitor the file. Monit can also check the process directly and make sure it isn't using too much memory.
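A sketch of the corresponding monit control-file entries (process name, paths and thresholds are all illustrative):

    check process cruncher with pidfile /var/run/cruncher.pid
        start program = "/usr/local/bin/cruncher"
        stop program  = "/usr/bin/pkill -x cruncher"
        if totalmem > 256 MB for 3 cycles then restart

    check file cruncher-heartbeat with path /var/run/cruncher.alive
        if timestamp > 5 minutes then alert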
For more information you could see my answer to this other question:
When a new row in database is added, an external command line program must be invoked
Alternatively, if people aren't sitting around waiting for the computation to finish, you could use a cron job to run the C program on a regular basis (e.g. every minute). Then monit would be less needed because your C program will start and stop all the time.
You might want to look into Oracle's "Change Notification":
http://docs.oracle.com/cd/E11882_01/appdev.112/e25518/adfns_cqn.htm
I don't know how well this integrates with a "regular" C program though.
It's also available through .Net and Java/JDBC
http://docs.oracle.com/cd/E11882_01/win.112/e23174/featChange.htm
http://docs.oracle.com/cd/E11882_01/java.112/e16548/dbchgnf.htm
There are simple job managers like Gearman that you can use to send a job message from the database to a worker. Gearman has, among others, a MySQL user-defined function interface, so it is probably easy to build one for Oracle as well.

What is the role of 'cluster' software in relation to MPI?

I'm a little confused regarding how a cluster implementation ("Beowulf cluster") relates to a communication protocol such as MPI. What software components are needed to set up a "cluster" using something like OpenMPI?
A cluster, as you know, is a bunch of computers networked together. When you have such a configuration, you normally install and use the following:
MPI, for communication between processes
NFS, to have a network disk visible and shared to all nodes
NTP, to synchronize the time of the nodes so that you can compare log events and timestamps
bootp, to boot the nodes from a remote node, so that each node restarts fresh with a guaranteed good and uniform setup.
a set of cluster utilities to make your life easier, such as a distributed ssh to execute the same command on all nodes at the same time.
a task scheduler, or queue manager, such as Condor, LSF or others, that allows you to prioritize job submissions and eventually measure them for limiting/pricing.
a watchdog, to reboot a node automatically if it gets stuck.
software control for the UPS (to shut down automatically in case of a prolonged loss of power)
And much more. All this stuff is completely additional to MPI. MPI is just a communication channel between processes. MPI alone does not "make the cluster".
MPI, as you noted, will only provide communication between processes. If there won't be several people using the cluster, you really need nothing more (apart from some script to launch your program on all the nodes, as sketched below).
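With Open MPI that launch step can be as small as a hostfile plus one mpirun invocation (hostnames and slot counts are illustrative):

    # hostfile
    node01 slots=4
    node02 slots=4

    # start 8 ranks spread across the two nodes
    mpirun --hostfile hostfile -np 8 ./my_mpi_program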
But in reality we sadly seldom have a cluster to ourselves. That's when you need a scheduler. The scheduler typically handles job submissions and resource allocation, possibly also taking care of prioritization, user management and other things to make your life easier.
Take a look at Oracle Grid Engine (nee Sun Grid Engine or CODINE).

Server Architecture for Embedded Device

I am working on a server application for an embedded ARM platform. The ARM board is connected to various digital IOs, ADCs, etc that the system will consistently poll. It is currently running a Linux kernel with the hardware interfaces developed as drivers. The idea is to have a client application which can connect to the embedded device and receive the sensory data as it is updated and issue commands to the device (shutdown sensor 1, restart sensor 2, etc). Assume the access to the sensory devices is done through typical ioctl.
Now my question relates to the design/architecture of this server application running on the embedded device. At first I was thinking to use something like libevent or libev, lightweight C event handling libraries. The application would prioritize the sensor polling event (and then send the information to the client after the polling is done) and process client commands as they are received (over a typical TCP socket). The server would typically have a single connection but may have up to a dozen or so, but not something like thousands of connections. Is this the best approach to designing something like this? Of the two event handling libraries I listed, is one better for embedded applications or are there any other alternatives?
The other approach under consideration is a multi-threaded application in which the sensor polling is done in a prioritized/blocking thread which reads the sensory data, and each client connection is handled in a separate thread. The sensory data is updated into some sort of buffer/data structure, and the connection threads handle sending the data out to the client and processing client commands (I suppose you would still need an event loop of sorts in these threads to monitor for incoming commands). Are there any libraries or typical packages which facilitate designing an application like this, or is this something you have to start from scratch?
How would you design what I am trying to accomplish?
I would use a Unix domain socket and write the library myself; I can't see any advantage to using libevent since the application is tied to Linux, and libevent is aimed at hundreds of connections. You can do all of what you are trying to do with a single thread in your daemon. KISS.
You don't need a dedicated master thread for priority queues; you just need to write your threads so that they always process high-priority events before anything else.
In terms of libraries, you will possibly benefit from Google's protocol buffers (for serialization and for representing your protocol); however, it only has first-class support for C++, and the over-the-wire (serialization) format does a bit of simple bit shifting on numeric data. I doubt it will add any serious overhead. An alternative is ASN.1 (asn1c).
My suggestion would be a modified form of your second proposal. I would create a server that has two threads: one thread polling the sensors, and another for ALL of your client connections. I have used the boost::asio library on embedded devices (MIPS) with great results.
A single thread that handles all socket connections asynchronously can usually handle the load easily (of course, it depends on how many clients you have). It would then serve the data it has from a shared buffer. To reduce the amount and complexity of mutexes, I would create two buffers, one 'active' and another 'inactive', plus a flag indicating which one is currently active. The polling thread would read data and put it in the inactive buffer. When it finished and had created a 'consistent' state, it would flip the flag and swap the active and inactive buffers. This can be done atomically and therefore should not require anything more complex than that.
This would all be very simple to set up since you would pretty much have only two threads that know nothing about the other.
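To illustrate the buffer hand-off (shown in Java purely because it is compact; the names are made up, and in the boost::asio version the published pointer would be a std::atomic), the flag-plus-two-buffers idea collapses into publishing an immutable snapshot with a single atomic swap:

    import java.util.concurrent.atomic.AtomicReference;

    public class SensorSnapshotStore {

        // Immutable snapshot of all sensor readings from one polling pass.
        public static final class Snapshot {
            public final double[] values;
            public final long timestampMillis;
            public Snapshot(double[] values, long timestampMillis) {
                this.values = values.clone();
                this.timestampMillis = timestampMillis;
            }
        }

        private final AtomicReference<Snapshot> active =
                new AtomicReference<>(new Snapshot(new double[0], 0L));

        // Polling thread: assemble the new readings off to the side, then
        // publish them with one atomic swap once they are consistent.
        public void publish(double[] freshValues) {
            active.set(new Snapshot(freshValues, System.currentTimeMillis()));
        }

        // Connection thread(s): always see a complete, consistent snapshot.
        public Snapshot current() {
            return active.get();
        }
    }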
