Flink 1.5-SNAPSHOT web interface doesn't work

I recently came across a bug in Flink, reported as FLINK-8685 (https://issues.apache.org/jira/browse/FLINK-8685), and found out that it had already been reported and that a pull request had been created (https://github.com/apache/flink/pull/5174).
I then cloned 1.5-SNAPSHOT, applied the patch, and built Flink. The build succeeds whether or not the patch is applied, but when I run Flink (using start-cluster.sh) the web dashboard doesn't work, and the command
tail log/flink-*-jobmanager-*.log returns "tail: cannot open 'log/flink-*-jobmanager-*.log' for reading: No such file or directory".
I tested a batch program and, surprisingly, it printed its results to the terminal, but streaming programs and everything else still don't work.
Any suggestions on this issue?
Thank you.

If the Flink dashboard does not start, change the port in the conf file and restart; Flink's default port may already be occupied by another process on Windows.
Also raise the Flink log level to DEBUG to see what is going wrong.
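A sketch of both changes, assuming the standard conf/ directory of a Flink distribution (in 1.5 the web/REST port key is rest.port; older releases use web.port):

# conf/flink-conf.yaml: move the web dashboard / REST endpoint off the default 8081
rest.port: 8082

# conf/log4j.properties: raise the root log level
log4j.rootLogger=DEBUG, file

After editing, run stop-cluster.sh and start-cluster.sh again so the new settings are picked up.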

Related

Why might Apache Flink write files on a Windows box, but not write files on a Linux container using simple FileSink and SimpleStringEncoder?

I'm working with the examples provided in 'flink-training' in the GitHub repository here. Specifically, I'm working on the 'ride-cleansing' example.
I've replaced the PrintSinkFunction with a simple FileSink configured as follows:
FileSink<String> fileSink =
        FileSink.forRowFormat(new Path(args[0]),
                        new SimpleStringEncoder<String>("UTF-8"))
                .withRollingPolicy(DefaultRollingPolicy.builder()
                        .withRolloverInterval(Duration.ofMinutes(1))
                        .withInactivityInterval(Duration.ofSeconds(30))
                        .withMaxPartSize(512 * 512 * 512)
                        .build())
                .build();
When I run this example on my local machine in IntelliJ, the expected directory is created and files are written reflecting the data streamed to the sink.
However, when I run the same example on a Linux box (on Google Colab), the directory is created but no files are written, no matter how long I leave it running (I've tried 10+ minutes).
On the Linux container, I run the example using the Gradle setup and the following command:
./gradlew :ride-cleansing:runJavaSolution --args="/content/datastream"
On the Windows box, I'm just executing the RideCleansingSolution 'main' with a simple 'Application' run configuration.
What might be different about my setup on the two systems that would decide whether data is written?
it might not work, but if you set up MonoDevelop on whatever *nix you're using and write it all in C# via Xamarin in VS.NET23 it MIGHT work seamlessly across all platforms and arches... but I'm just spitballing here so \_o_/
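One difference worth ruling out (this is an assumption on my part, not something stated in the question): in STREAMING execution the FileSink only finalizes part files on successful checkpoints, and with checkpointing disabled the row-format output stays in hidden, dot-prefixed in-progress files, so the directory can look empty even though data is flowing. A minimal sketch of enabling checkpointing, assuming the usual StreamExecutionEnvironment setup of the flink-training pipelines (the class name is just for the sketch):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointedRideCleansing {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Part files are only committed on successful checkpoints; without this they
        // remain dot-prefixed in-progress files that a plain 'ls' will not show.
        env.enableCheckpointing(10_000L); // checkpoint every 10 seconds
        // ... build the ride-cleansing pipeline and attach the FileSink shown above ...
        env.execute("ride-cleansing with FileSink");
    }
}

It is also worth listing the output directory with ls -a on the Linux box to check for in-progress files before concluding that nothing was written.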

React App slowly exhausts file descriptor limit on Ubuntu server and crashes in a few days

I have a React app which I run using the serve command from a Jenkins pipeline. One thing to note is that I configured my Jenkins job to run indefinitely so I can keep the logs.
This setup runs on an Ubuntu server, and the React app stops once the per-process file descriptor hard limit of 4096 is exhausted.
After a lot of investigation, I found that index.html is being opened periodically and never closed, so the file descriptor count keeps increasing over time. While actually using the app I see no increase in the count of open index.html handles, so I don't think the leak is tied to direct app usage.
Server-Sent Events are also implemented for notifications; I didn't have this issue before, so I'm not sure whether it is related to SSE.
Lastly, this started happening after a recent release in which multiple dependencies were updated and I switched the package manager from npm to Yarn.
Please note:
I am not looking for answers that suggest increasing the file descriptor limit on Linux; I want to find and resolve the actual issue.
This project is built using create-react-app.
Node version is v14.21.1
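A quick way to confirm the index.html diagnosis over time is to watch the descriptor count for the serve process. This is only a sketch; <serve-pid> is a placeholder for whatever PID the serve process actually has:

# count open descriptors pointing at index.html for the serve process
lsof -p <serve-pid> | grep -c index.html

# or watch the total descriptor count grow toward the 4096 hard limit
watch -n 60 'ls /proc/<serve-pid>/fd | wc -l'

If the count climbs while the app is idle, that would support the observation that the leak is not tied to direct app usage.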

'clarinet integrate' quickly fails and nothing is logged to console?

Following https://docs.hiro.so/smart-contracts/devnet, I can't get the command clarinet integrate to work. I have installed Docker on my Mac and am running version 0.28.0 of clarinet. I'm running the command within 'my-react-app/clarinet', where all the Clarity-related files live (contracts, settings, tests, and Clarinet.toml).
My guess is it could be an issue with Docker?
The issue was that I downloaded my Devnet.toml file from a repo that was configured incorrectly. The configuration I needed was:
[network]
name = "devnet"
I increased the CPU and Memory in Docker as well.
There is an issue when the command attempts to spin up the stacks explorer, but I was informed that there are several existing issues with the stacks explorer from clarinet integrate at the moment.
Depending on how the last devnet was terminated, you could still have some containers running. This issue should be fixed in the next release; in the meantime, you'd need to terminate these stale containers manually.
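For the manual cleanup, a generic Docker sequence like the following works; the container names/IDs are whatever docker ps reports for your devnet, not something specific to Clarinet:

docker ps                    # list the devnet containers still running
docker stop <container-id>   # stop each stale container
docker rm <container-id>     # remove it so the next 'clarinet integrate' starts clean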
Apart from Ludo's suggestions, I'd also look into your Docker resources. The default CPU/memory allocation should allow you to get started with Clarinet, but just in case, you could raise it to see if that makes a difference.
Alternatively, to tease things out, you could reuse one of the samples (e.g. hirosystems/stacks-billboard) instead of running your project. See if the sample comes up as expected; if it does, there could be something missing in your project.

Node.js not able to concurrently access a file from two apps

I have two Node.js apps. One writes logs to a file using rotating-file-stream.
The second app needs to consume these logs, processing them by reading the log file as it changes. Basically I am trying to set up a crude disk-based queue. The issue I am seeing is that the consumer app cannot see the new log entries until the producer app closes its stream.
In the consumer app I tried fs.watch, chokidar and tail (I don't even get a change event). However, running 'tail -f' in a terminal picks up the changes right away.
From what I found, it seems to be a platform-related issue, which in my case is macOS. Using fs.watchFile instead of fs.watch fixed the issue. Thankfully the libraries I am using (chokidar and tail) provide API flags to do just that.
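A minimal sketch of the polling approach, with a hypothetical log path: fs.watchFile polls the file's stats instead of relying on macOS FSEvents, and chokidar's equivalent switch is the usePolling flag.

const fs = require('fs');
const chokidar = require('chokidar');

const LOG = '/path/to/producer.log'; // placeholder path

// Plain Node: fs.watchFile stat()s the file on an interval
fs.watchFile(LOG, { interval: 1000 }, (curr, prev) => {
  if (curr.size > prev.size) {
    // read only the newly appended bytes
    fs.createReadStream(LOG, { start: prev.size, end: curr.size - 1 })
      .on('data', chunk => process.stdout.write(chunk));
  }
});

// chokidar: force stat polling instead of native watchers
chokidar.watch(LOG, { usePolling: true }).on('change', p => console.log('changed:', p));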

Camel sftp doesn't poll on Unix more than 2 levels deep

Camel SFTP is unable to poll more than two levels deep when the Java code runs on Linux, but it works fine on Windows.
For example, polling files from
sftp://user#domain:22/folder1/folder2?...
works on both Unix and Windows. But, when I use something like
sftp://user#domain:22/folder1/folder2/folder3?...,
the route still starts, as the log shows, yet on Unix it does not pick up the files in folder3:
Route: route22 started and consuming from:sftp://user#domain:22/folder1/folder2/folder3?...
The SFTP connection is to the same Unix machine, and the same paths are used in both cases.
I have tried with stepwise set to both true and false, as well as with recursive.
Could anyone shed some light on this please?
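For reference, a minimal Java DSL sketch of a route of that shape; the host, credentials and option values here are placeholders, not taken from the original endpoint:

import org.apache.camel.builder.RouteBuilder;

public class DeepSftpPollRoute extends RouteBuilder {
    @Override
    public void configure() {
        // stepwise controls whether Camel changes directory one level at a time or
        // uses the full path directly; recursive=true also descends below folder3.
        from("sftp://user@domain:22/folder1/folder2/folder3"
                + "?password=secret&recursive=true&stepwise=false&delay=60000")
            .to("file:data/inbound");
    }
}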
The problem was caused by a Quartz trigger (attached to the route) that became corrupted. That happened because of a Camel bug that makes Camel unable to reconcile triggers when running in cluster mode if they fail for database reasons.
