I'm trying to perform an action (a change in the DB) after a Flink job finishes, from within the same Flink application, with no luck.
I found that there is a JobStatusListener that the ExecutionGraph notifies about state changes, but I cannot find a way to get hold of the ExecutionGraph to register my listener.
I've even tried completely replacing ExecutionGraph in my project (yes, a bad approach, but...), but since it belongs to the runtime library, my copy is never called in distributed mode, only in a local run.
In short, my Flink application looks like this:
DataSource.output(RichOutputFormat.class)
ExecutionEnvironment.getExecutionEnvironment().execute()
Can anybody please help?
I am using PostgreSQL with Sequelize in a JavaScript project. It's about managing project schedules, and I want to update my data when a specific condition is met. For example:
1. There is one Project with a 'due_date' and a 'current_status'.
2. If I set 'due_date', then 'current_status' becomes 'ongoing'.
3. When the current date passes 'due_date', 'current_status' should become 'delayed'.
I want step 3 to happen automatically, but I have no idea where to handle it. I've been looking at hooks, but I cannot find any hook that satisfies this. Is there any solution for this?
You will need to schedule a job to actively look for rows that need updating.
The job can be fired by some external scheduler (e.g. cron) or by the pg_cron extension if you're able to install it.
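With pg_cron the whole thing can live in the database. A minimal sketch, assuming a table named "Projects" with the column names from the question (adjust names to your actual schema):

```sql
-- Every day at midnight, mark ongoing projects whose due_date has passed as delayed.
-- 'mark-delayed-projects', the table name, and the column names are assumptions.
SELECT cron.schedule(
  'mark-delayed-projects',
  '0 0 * * *',
  $$UPDATE "Projects"
       SET current_status = 'delayed'
     WHERE current_status = 'ongoing'
       AND due_date < now()$$
);
```

An external cron job running the same UPDATE via psql (or a small Sequelize script) works just as well if you can't install the extension.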
We have a project in Octopus that has been configured to release to an environment on a schedule.
In the process definition we use a Slack step template to send the team a notification when a release takes place. We would like to avoid sending this Slack message if the release was triggered by the schedule rather than initiated by a user.
I was hoping there would be a system variable we could check before running the Slack step, but I can't seem to find anything documented as such, and Google didn't turn anything up.
TIA
If you are using Octopus 2019.5.0 or later, there are two variables that will be populated if the deployment was created by a trigger.
Octopus.Deployment.Trigger.Id
Octopus.Deployment.Trigger.Name
You can see the details at https://github.com/OctopusDeploy/Issues/issues/5462
For your Slack step, you can use this run condition to skip it if the trigger ID is populated.
#{unless Octopus.Deployment.Trigger.Id}True#{/unless}
I hope that helps!
I've been trying to restart my Apache Flink job from a previous checkpoint without much luck. I've uploaded the code to GitHub; here's the main class:
https://github.com/edu05/wordcount/blob/restart/src/main/java/edu/streaming/AppWithKafka.java
It's a simple word count program, only I'd like the program to continue with the counts it had already calculated after a restart.
I've read the docs and tried a few things, but there must be something stupid I'm missing. Could someone please help?
Also: the end goal is to produce the output of the word count program into a compacted Kafka topic. How would I go about loading the state of the app by first consuming the compacted topic, which in this case serves as both the output and the checkpointing mechanism of the program?
Many thanks
Flink's checkpoints are for automatic restarts after failures. If you want to do a manual restart, then either use a savepoint, or an externalized checkpoint.
If you've already tried this and are still having trouble, please provide more details about what you tried.
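For reference, a manual stop-and-restart via a savepoint looks roughly like this with the Flink CLI (the job ID, savepoint directory, and jar path below are placeholders, not values from the question):

```shell
# Trigger a savepoint for the running job, written to the given directory
flink savepoint <jobId> hdfs:///savepoints/

# ...or cancel the job and take a savepoint in one step
flink cancel -s hdfs:///savepoints/ <jobId>

# Resume the job from the savepoint it produced
flink run -s hdfs:///savepoints/savepoint-<jobId>-xxxx target/wordcount.jar
```

Externalized checkpoints work the same way on restart: pass the retained checkpoint's path to `flink run -s`.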
Is there a mechanism in Flink to send alerts/notifications when a job has failed?
I was thinking that if a restart strategy is applied, the job would be aware that it is being restarted, and client code could send a notification to some sink, but I couldn't find any relevant job context info.
I'm not aware of a super-easy way to do this. A couple of ideas:
(1) The JobManager is aware of failed jobs. You could poll /joboverview/completed, for example, looking for newly failed jobs. /jobs/<jobid>/exceptions can be used to get more info (docs).
(2) The CheckpointedFunction interface has an initializeState() method that is passed a context object that responds to an isRestored() method (docs). This is more-or-less the relevant job context you were looking for.
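To illustrate idea (2), here is a minimal sketch of a function that checks isRestored() during initialization; it needs the Flink streaming API on the classpath, and notifySomeSink() is a hypothetical placeholder you would implement yourself:

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;

public class RestartAwareMap extends RichMapFunction<String, String>
        implements CheckpointedFunction {

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        if (context.isRestored()) {
            // State was restored, so this task is coming back from a failure/restart.
            notifySomeSink("job restarted after failure");
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        // No operator state to snapshot in this sketch.
    }

    @Override
    public String map(String value) {
        return value;  // pass-through; the function only observes restarts
    }

    private void notifySomeSink(String message) {
        // Placeholder, not a Flink API: send the alert wherever you need it.
        System.err.println(message);
    }
}
```

Note that isRestored() fires on every restore, so each restart of a task running this function would produce a notification.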
I have several HTTP callouts in a Schedulable class, set to run every hour or so. After I deployed the app on AppExchange and had a Salesforce user install it to test, it seems the jobs are not executing.
I can see the jobs being scheduled to run accordingly, yet the database never seems to change. Is there any reason this could be happening, or is there a good chance the flaw lies in my code?
I was thinking it could be permissions, however I am not sure (it's the first app I am deploying).
Check whether your end user's organisation has added your endpoint to "Remote Site Settings" in Setup. By endpoint I mean the address being called (or just its domain).
If the class is scheduled properly (which I believe requires a manual action, not something that magically happens after installation... unless you've used a post-install script?), you could also examine Setup -> Apex Jobs and check whether there are any errors. If I'm right, there will be an error about a callout not being allowed due to Remote Site Settings. If not - there's still a chance you'll see something that will make you think. For example, a batch job that executed successfully but had 0 iterations -> problem?
Last but not least - you can always try the debug logs :) Enable them in Setup (or open the Developer Console), fire the scheduled class's execute() manually and observe the results. How to fire it manually? Something like this pasted into "Execute Anonymous":
// Instantiate the schedulable class and invoke its execute() directly.
MySchedulableClass sched = new MySchedulableClass();
sched.execute(null);
Or - since you know what's inside the scheduled class - simply experiment.
Please note that if the updates you're performing somehow violate, for example, validation rules in your client's org - then yes, the database will remain unchanged. But in that case you should still be able to see the failures in Setup -> Apex Jobs.