Apache Camel SWF does not schedule second Activity in the workflow - apache-camel

I have created a sample Apache Camel SWF application based on Spring Boot. It works as expected for the first activity, but even though it seems to schedule the second activity, that activity never gets invoked.
Reference to sample application - https://github.com/kpkurian/camel-aws-swf-shopping-cart-wf
The log shows the line below:
2018-01-18 00:23:32.705 DEBUG 9456 --- [erMasterQueue 1] o.apache.camel.processor.SendProcessor : >>>> aws-swf://activity?activityList=&amazonSWClient=%23swfClient&domainName=name&eventName=<2nd activity name>&version=5.0
The workflow execution history shows that the 2nd activity is never scheduled; I can see a WorkflowExecutionCompleted event at the end.
(Screenshot: 2nd activity not scheduled)
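For reference, a minimal sketch of what such a route might look like with the camel-aws-swf component; the activity names and endpoint options here are illustrative assumptions, not taken from the linked repository:

import org.apache.camel.builder.RouteBuilder;

public class ShoppingCartWorkflowRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Workflow (decider) route that orchestrates the activities.
        from("aws-swf://workflow?amazonSWClient=#swfClient&domainName=name&eventName=shoppingCart&version=5.0")
            // First activity: scheduled and invoked as expected.
            .to("aws-swf://activity?amazonSWClient=#swfClient&domainName=name&eventName=addItem&version=5.0")
            // Second activity: the SendProcessor DEBUG line above corresponds to a send
            // like this, yet the activity never shows up as scheduled in the SWF history.
            .to("aws-swf://activity?amazonSWClient=#swfClient&domainName=name&eventName=checkout&version=5.0");
    }
}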

Related

How can I see the status of the messages in a batch from Azure Portal?

I am new to Azure. I am testing the functionality called "Send, receive, and batch process messages in Azure Logic Apps". This is the link to the documentation:
Batches in Logic Apps
I was able to do everything in that tutorial and it works. I created a batch called "Test" (this is the batch name). My question is: is there a monitor in the Azure portal where I can see which messages were created in that batch by the "batch sender" and therefore see the current status of these messages?
In other words, I would like to see which messages were already processed by the "batch receiver" and which ones still remain to be processed. I would like to know if I can monitor this batch that I created.
I would like to see which message was already processed by the "batch receiver"
As of now there is no built-in way to monitor the batch process by itself, but here is one workaround: you can add a Compose action and include all the message properties you need.
Here is my logic app workflow
So each time this logic app is triggered by the batch process, we can check all the properties that we have set for that particular message.
Here is the sample output:
You can check the messages processed by the "batch receiver" in the run history of the logic app; a sketch of such a Compose action follows.
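As a rough illustration of that workaround, the Compose action in the Logic App code view could capture the message properties like this; the items array and its messageId/partitionName/content fields are assumptions about what the batch-release trigger body exposes, not taken from the original workflow:

"For_each_released_message": {
  "type": "Foreach",
  "foreach": "@triggerBody()?['items']",
  "runAfter": {},
  "actions": {
    "Compose_message_details": {
      "type": "Compose",
      "inputs": {
        "messageId": "@items('For_each_released_message')?['messageId']",
        "partitionName": "@items('For_each_released_message')?['partitionName']",
        "content": "@items('For_each_released_message')?['content']"
      }
    }
  }
}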
and therefore see the current status of these messages?
You can monitor this from Azure Monitor logs, which show the number of runs that are running, succeeded, or failed.
and which ones still remain to be processed.
This depends on where you are sending the messages from.
You can follow this document: Monitor logic apps by using Azure Monitor logs.

Google App Engine Cron not triggering endpoint at specific times

We have multiple App Engine Cron entries triggering our App Engine application, but recently we detected a decrease in the number of processed events handled by one of the endpoints of our application. Looking at the App Engine Cron logs for this specific Cron entry in Stackdriver, we found out that, during the days we investigated (March 11-15), there are missing entries. Most of the missing triggers coincide across the days (12:15, 14:15, 16:15, 18:15, 20:15, 22:15, 00:15).
The screenshot below displays one specific day, and the red lines indicate the missing entries:
There are no requests with HTTP status code different than 200.
This is the configuration of the specific Cron entry (replaced some words with XXX due to business restrictions):
- description: 'Hourly job for XXX'
  url: /schedule/bigquery/XXX
  schedule: every 1 hours from 00:15 to 23:15
  timezone: UTC
  target: XXX
  retry_parameters:
    min_backoff_seconds: 2.5
    max_doublings: 5
Could someone on the GCP side take a look? The task name is 53751dd6a70fb9af38f49993b122b79f.
It seems like if the request takes longer than an hour, then the next one gets skipped (i.e. cron doesn't launch the next iteration if the current iteration is still running).
Maybe do the actual work in a separate task, so that the only thing the cron handler does is launch that separate task; a sketch of this follows.
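A minimal sketch of that idea, assuming the App Engine standard environment and its Java Task Queue API; the servlet and worker paths are illustrative:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

// Cron handler: enqueues the real work and returns immediately,
// so a long-running job can never block the next cron tick.
public class ScheduleBigQueryServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        Queue queue = QueueFactory.getDefaultQueue();
        // /worker/bigquery is an illustrative path to the servlet that does the heavy lifting.
        queue.add(TaskOptions.Builder.withUrl("/worker/bigquery").method(TaskOptions.Method.POST));
        resp.setStatus(HttpServletResponse.SC_OK); // cron always sees a fast 200
    }
}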

hystrix stream not responding

I am using spring-cloud-starter-hystrix:1.2.3.RELEASE in a Spring Boot application. I have one HystrixCommand, which I can execute successfully.
After that I called
localhost:8080/hystrix.stream
However, this request loads forever and doesn't respond. I cannot find anything about this on Google.
This happens if no command has been executed yet and therefore there are no metrics to publish in the stream.
The 'workaround' is to execute a Hystrix command.
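For example, a minimal command like the following would do; the class and group names are illustrative:

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Executing this once gives Hystrix some metrics to publish,
// after which /hystrix.stream starts emitting data.
public class PingCommand extends HystrixCommand<String> {
    public PingCommand() {
        super(HystrixCommandGroupKey.Factory.asKey("ExampleGroup"));
    }

    @Override
    protected String run() {
        return "pong";
    }
}

// Somewhere in the application, e.g. at startup or behind a test endpoint:
// new PingCommand().execute();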
This happens in Hystrix 1.5.8 and earlier. The behavior was changed in Hystrix 1.5.9, which was released yesterday: it now publishes a ping message if there are no metrics to publish.
This change was made to fix a bug where the stream would not detect closed connections when there were no metrics to publish. See Hystrix bug 1430 for more information.
Make sure you have the @EnableHystrixDashboard annotation added to the dashboard application. Then go to http://{dashboard-application:port}/hystrix. On this page you will be asked to enter the URL of the Hystrix application which is annotated with @EnableCircuitBreaker and whose stream you want to monitor.
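A minimal dashboard application might look like this, assuming spring-cloud-starter-hystrix-dashboard is on the classpath (the class name is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.hystrix.dashboard.EnableHystrixDashboard;

@SpringBootApplication
@EnableHystrixDashboard // serves the dashboard UI at /hystrix
public class DashboardApplication {
    public static void main(String[] args) {
        SpringApplication.run(DashboardApplication.class, args);
    }
}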

Get status CompletedWithWarnings when insert InsightExternalData

My CSV file has 3941495 records. I have checked my file, and it has exactly 3941495 records after being compressed, but the result on the server is 3941489 records. I got the status "CompletedWithWarnings", and this is the status message:
The job completed successfully, but some lines failed. Please download the error log from the data monitor and check the failed lines.
Does anyone know how to fix it? Or how to download the error log?
If you use Salesforce Wave Analytics you can monitor an external data upload in the UI. According to the documentation (see page 114):
When you upload an external data file, Wave Analytics kicks off a job that uploads the data into the specified dataset. You can use the data monitor to monitor and troubleshoot the upload job. The Jobs view of the data monitor shows the status, start time, and duration of each dataflow job and external data upload job. It shows jobs for the last 7 days and keeps the logs for 30 days.
In Wave Analytics, click the gear button and then click Data Monitor to open the data monitor. The Jobs view appears by default. The Jobs view displays dataflow and upload jobs. You can hover over a job to view its entire name.
To see the latest status of a job, click the Refresh Jobs button.
To view the run-time details for a job, expand the job node. The run-time details display under the job. In the run-time details section, scroll to the right to view information about the rows that were processed.
To troubleshoot a job that has failed rows, view the error message. Also, click the download button in the run-time details section to download the error log.
Note: Only the user who uploaded the external data file can see the download button. The error log contains a list of failed rows.
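If you would rather check the upload programmatically than through the data monitor UI, a sketch like the following might work, assuming the standard Salesforce REST query endpoint and the Status and StatusMessage fields on the InsightsExternalData header record; the instance URL, API version, session token, and record Id are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class CheckUploadStatus {
    public static void main(String[] args) throws Exception {
        String instanceUrl = "https://yourInstance.salesforce.com"; // placeholder
        String sessionToken = "YOUR_SESSION_ID";                    // placeholder
        String soql = "SELECT Status, StatusMessage FROM InsightsExternalData"
                + " WHERE Id = 'YOUR_JOB_ID'";
        URL url = new URL(instanceUrl + "/services/data/v39.0/query?q="
                + URLEncoder.encode(soql, "UTF-8"));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Bearer " + sessionToken);
        // Print the raw JSON response, which includes Status and StatusMessage.
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}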

How do I run a cron job on Google App Engine immediately?

I have configured Google App Engine to record exceptions with ereporter.
The cron job is configured to run every 59 minutes. The cron.yaml is as follows:
cron:
- description: Daily exception report
  url: /_ereporter?sender=xxx.xxx@gmail.com # The sender must be an app admin.
  schedule: every 59 minutes
How do I run this immediately?
What I am trying to do here is simulate a 500 HTTP error and see the stack trace delivered immediately via the cron job.
Just go to the URL from your browser.
You can't using cron. Cron is a scheduling system; at best you could get it to run every minute.
Alternatively, you could wrap your entire handler in a try/except block and try to catch everything (you can do this for some DeadlineExceededErrors, for instance), then fire off a task which invokes the ereporter handler, and then re-raise the exception.
However, in many cases Google infrastructure can be the cause of the Error 500 and you won't be able to catch the error. To be honest, you are only likely to get an email sent for a subset of all possible Error 500s. The most reliable way would probably be to have a process continuously monitor the logs and email from there.
Mind you, email isn't considered reliable or fast, so a 1-minute cron cycle is probably fast enough.
I came across this thread as I was trying to do this as well. A (hacky) solution I found was to add a curl command at the end of my cloudbuild.yaml file that triggers the endpoint immediately, per the thread linked below; a sketch follows the link. Hope this helps!
Make a curl request in Cloud Build CI/CD pipeline
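A rough sketch of such a step in cloudbuild.yaml; the builder image and target URL are illustrative assumptions:

steps:
# ... existing build and deploy steps ...
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'curl'
  # Hit the cron handler once, right after deploy (the URL is a placeholder).
  args: ['-sf', 'https://YOUR-PROJECT.appspot.com/_ereporter?sender=admin@example.com']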
