Workday Apache Camel processor error while calling endpoint - apache-camel

We were using the Apache Camel Workday processor and it was working fine until July 16th. Workday made a change at their end to allow TLS version 3; since then, the Workday Camel processor is unable to connect directly to Workday and gives the following error:
caught generic exception in route: stackTrace : java.lang.IllegalStateException: Got the invalid http status value 'HTTP/1.1 401 Unauthorized' as the result of the RAAS 'URL'
at org.apache.camel.component.workday.producer.WorkdayDefaultProducer.process(WorkdayDefaultProducer.java:71)
When I debugged the code, I took the same token that the Workday default producer Java class uses and called the endpoint with Postman, which works fine. Just a note that this code was working fine until the 16th of July.
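Since the question ties the failure to a TLS change on Workday's side, a reasonable first check is which TLS protocol versions the client JVM supports and enables by default; a minimal sketch using only the JDK (no Camel involved):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class TlsCheck {
    public static void main(String[] args) throws Exception {
        // Inspect which TLS versions this JVM supports vs. enables by default.
        // If the server now requires a newer protocol than the JVM enables,
        // the handshake (and thus authentication) can fail.
        SSLContext ctx = SSLContext.getDefault();
        SSLParameters supported = ctx.getSupportedSSLParameters();
        SSLParameters enabled = ctx.getDefaultSSLParameters();
        System.out.println("Supported: " + String.join(", ", supported.getProtocols()));
        System.out.println("Enabled:   " + String.join(", ", enabled.getProtocols()));
    }
}
```

If the required version is missing from the enabled list, upgrading the JDK or adjusting `jdk.tls.client.protocols` would be the usual next step.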

Related

Google App Engine: Intermittent Issue: Process terminated because the request deadline was exceeded. (Error code 123)

Problem you have encountered: I have deployed a Spring Boot application (backend) on Google App Engine. For the past few days, I have been getting the below-mentioned error intermittently:
Error: Process terminated because the request deadline was exceeded. (Error code 123)
The application uses a Cloud SQL (MySQL) database. I have set up the below-mentioned auto-scaling properties as well:
<sessions-enabled>true</sessions-enabled>
<warmup-requests-enabled>true</warmup-requests-enabled>
<instance-class>F2</instance-class>
<automatic-scaling>
  <target-cpu-utilization>0.65</target-cpu-utilization>
  <min-instances>10</min-instances>
  <max-instances>20</max-instances>
  <min-idle-instances>5</min-idle-instances>
  <max-idle-instances>6</max-idle-instances>
  <min-pending-latency>30ms</min-pending-latency>
  <max-pending-latency>500ms</max-pending-latency>
  <max-concurrent-requests>10</max-concurrent-requests>
</automatic-scaling>
<inbound-services>
  <service>warmup</service>
</inbound-services>
What you expected to happen: The application hosted on GAE shouldn't fail with this intermittent error
Steps to reproduce: No steps available; it's intermittent.
Other information (workarounds you have tried, documentation consulted, etc.): I have tried multiple different combinations of the autoscaling configuration, etc.
I just had the exact same issue in a GAE Standard Java 8 application. After a lot of trial and error, I found the issue was related to Cloud SQL; one symptom was that autoscaled instances restarted (probably crashed) frequently, as if something was stuck. The "Error: Process terminated because the request deadline was exceeded. (Error code 123)" also produced no further logs.
The solution (in our case) was related to a Cloud SQL query we used frequently in our application, which was the only thing we had changed right before the error started appearing all the time:
case when column=1 then 1 else -1 end
column could be NULL in some (rare) cases. That was no problem with the normal SQL client we tested with, but for some reason Cloud SQL over JDBC has a problem with it, causing these instance issues. Changing it to
case when coalesce(column,0)=1 then 1 else -1 end
so that there would never be a comparison against NULL in the CASE statement solved the problem.
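The NULL-guarding idea behind the COALESCE fix can be illustrated in plain Java (hypothetical helper methods, not part of the original application): in SQL's three-valued logic, `column = 1` evaluates to UNKNOWN when `column` is NULL, whereas `coalesce(column, 0) = 1` always compares real values.

```java
public class CoalesceFix {
    // Analogue of `case when column=1 then 1 else -1 end`.
    // When column is NULL, the SQL comparison is UNKNOWN and falls to ELSE;
    // the answer reports that Cloud SQL over JDBC mishandled this case.
    static int caseUnguarded(Integer column) {
        return (column != null && column == 1) ? 1 : -1;
    }

    // Analogue of `case when coalesce(column,0)=1 then 1 else -1 end`:
    // the comparison is always against a concrete value.
    static int caseGuarded(Integer column) {
        int c = (column != null) ? column : 0;  // COALESCE(column, 0)
        return (c == 1) ? 1 : -1;
    }

    public static void main(String[] args) {
        System.out.println(caseGuarded(null)); // -1
        System.out.println(caseGuarded(1));    // 1
    }
}
```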

$http occasionally fails with ERR_EMPTY_RESPONSE

We're working on a client-server application, where the client is an Angular + AngularJS hybrid running on Chrome. The server is a Spring MVC application running on Tomcat.
The communication between the client and server is HTTPS.
Our client sends different REST requests to the server using the AngularJS $http service. Some of them repeat on predefined intervals.
Over the past week we started noticing that every now and then some of the repeating requests fail with ERR_EMPTY_RESPONSE (status is "failed"). The same request can fail and then, 20 seconds later (when the interval is reached), succeed.
It seems like this could potentially happen with every REST request but we mainly notice it on the repeating ones.
The failing requests do not seem to reach the server at all: they don't appear in the localhost_access_log.txt file, and there is a gap between the repeating requests that did reach the server, indicating the failing requests never arrived.
The application has been working for quite some time now and without any issues regarding $http requests until last week (around March 25th 2019).
The code that sends these REST requests is not new and has not been changed in years.
It also doesn't look related to the latest Chrome update, as the issue reproduces with Chrome 63, and with the newest Chrome version (73).
We would appreciate your help with this issue.
Thanks.

Office 365 Management API activity/feed/subscriptions APIs returning InternalServerError

I've been researching activity audits for the last couple of days using an ASP.NET MVC project. I had been using contentType=Audit.Exchange and contentType=Azure.ActiveDirectory successfully since the last half of yesterday and this morning, up until about two hours ago.
I made no changes to my authorization/authentication code, and the tokens look good. There were also no changes to the calls themselves. I added some JSON handling for the response to list subscriptions, and when I ran the app to test that code, I suddenly started getting an InternalServerError response to start subscription, list subscriptions, and stop subscription. The error is returned after a long timeout (in fact, I had to increase the default timeout value).
So as of about two hours ago, all the APIs are returning InternalServerError after a long timeout. This is happening on the following APIs:
/activity/feed/subscriptions/start
/activity/feed/subscriptions/list
/activity/feed/subscriptions/stop
The body of the response message is empty, so it does not include any error info as described in https://msdn.microsoft.com/office-365/office-365-management-activity-api-reference.
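For reference, a minimal sketch of building the list-subscriptions call with an explicit, generous timeout (the question describes having to raise the default). The tenant ID and token are placeholders, and `buildRequest` is a hypothetical helper:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class ListSubscriptions {
    // Placeholders -- substitute a real tenant ID and OAuth bearer token.
    static final String TENANT = "your-tenant-id";
    static final String TOKEN = "your-access-token";

    public static HttpRequest buildRequest() {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://manage.office.com/api/v1.0/" + TENANT
                        + "/activity/feed/subscriptions/list"))
                .header("Authorization", "Bearer " + TOKEN)
                .timeout(Duration.ofSeconds(120))  // generous per-request timeout
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildRequest();
        System.out.println(req.uri());
        // To actually send it:
        // java.net.http.HttpClient.newHttpClient()
        //     .send(req, java.net.http.HttpResponse.BodyHandlers.ofString());
    }
}
```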
It seems crazy that this could be a service outage, so I must be missing something really elementary?
Hmmm. With no further changes to the code, I am now getting HTTP 200 responses. If that was a service outage, it was a heck of a long outage for 99.9% uptime.

LinkShare Merchandiser API web service throws error 7187145 (FSW server connection)

I started an app using the LinkShare Merchandiser Query API, and queries against it were working just fine. I was able to run queries via IE, my app, or Yahoo Pipes (the approach I settled on: feed my app from the service via Yahoo Pipes).
After a week or two of using this service while developing my app, it started to randomly return errors on about 1 in 5 queries, always after what seemed to be a timeout (the response took 5+ seconds). Now that I'm resuming the project after a 2-week hiatus, the query does not work at all. Not even once.
The error is always the same, the same as when it randomly failed before. It seemed (and still seems) to me that it is an internal problem on their side, but I can't believe it has been (and is) broken 100% of the time for the past 48 hours.
Any query fails. A sample:
http://productsearch.linksynergy.com/productsearch?token=**token**&keyword="DVD+Player"&cat="Electronics"&MaxResults=20
And its response:
<?xml version="1.0" encoding="UTF-8"?>
<result>
  <Errors>
    <ErrorID>7187145</ErrorID>
    <ErrorText>Internal error 18171650 occurred: FSW server connection.</ErrorText>
  </Errors>
</result>
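If the service is going to fail intermittently, one defensive option is to detect this `<Errors>` payload programmatically and back off or retry. A minimal sketch using only the JDK's built-in DOM parser, with the sample response above embedded as a string (`errorId` is a hypothetical helper):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ErrorParse {
    // Sample error response from the question.
    static final String RESPONSE =
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
        + "<result><Errors><ErrorID>7187145</ErrorID>"
        + "<ErrorText>Internal error 18171650 occurred: FSW server connection.</ErrorText>"
        + "</Errors></result>";

    // Extract the ErrorID element's text, or fail if the XML has none.
    public static String errorId(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return doc.getElementsByTagName("ErrorID").item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ErrorID: " + errorId(RESPONSE)); // ErrorID: 7187145
    }
}
```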
There is no documentation, I cannot find any mention of this on the web, and I have received no response from them as of yet. I'm not sure where to go from here.
Ideas? Experiences? Should I dump this approach and start anew with a different provider? Any input will be appreciated.

Client requested session xx that was terminated due to FORWARDING_TO_NODE_FAILED

After a few hours of working with the new Selenium 2.20, I get this error:
WARNING: Client requested session 1331671421031 that was terminated
due to FORWARDING_TO_NODE_FAILED
14.3.2012 5:46:55 org.openqa.grid.internal.ActiveTestSessions getExistingSession
What could be wrong? I never got this message with version 2.19 of the Selenium server.
There are also no other messages; it's just this message all the time in the console.
Per the Selenium wiki:
FORWARDING_TO_NODE_FAILED - The hub was unable to forward to the node. Out of memory errors/node stability issues or network problems
Are you sure you didn't have a network-related issue?
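If the root cause is transient node instability or network trouble, one common mitigation is to recreate the session with a few retries. A generic, self-contained sketch of that pattern (Selenium classes omitted to keep it runnable; in a real grid setup the supplier would be something like `() -> new RemoteWebDriver(hubUrl, capabilities)`):

```java
import java.util.function.Supplier;

public class RetryNewSession {
    // Retry a session-creating action a few times before giving up.
    public static <T> T withRetries(Supplier<T> create, int attempts) {
        if (attempts < 1) throw new IllegalArgumentException("attempts must be >= 1");
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return create.get();
            } catch (RuntimeException e) {   // e.g. session creation failed
                last = e;
                try { Thread.sleep(10L * (i + 1)); }  // short linear backoff for illustration
                catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        // Simulate a hub that fails twice, then succeeds.
        int[] calls = {0};
        String session = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("FORWARDING_TO_NODE_FAILED");
            return "session-ok";
        }, 5);
        System.out.println(session + " after " + calls[0] + " attempts");
    }
}
```

This only papers over the symptom; if sessions keep dying, the node's memory and the hub-to-node network still need investigating, as the wiki entry suggests.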
