Is there a way in Hystrix to signal an error to the circuit breaker without using an exception?

I'm using Hystrix in a scenario where some errors are communicated as normal results and not as exceptions.
An example would be a HystrixObservableCommand that returns an Observable<HttpResponse>:
If the observable emits a response with a status code lower than 500, the response should be considered successful.
If the observable emits a response with a status code greater than or equal to 500, or if it emits a Throwable, the response should be considered an error and the circuit breaker's error stats should be incremented.
I really don't want to throw an exception just to have Hystrix consider the response as an error.
Is there a way to achieve this?
Thanks,
Pedro

Related

OSB12c - Rest proxy service throwing Translation error in case of invalid JSON input

We have configured a REST proxy service that accepts JSON input. If the input is not well-formed JSON, OSB throws a Translation error with an HTTP 500 status code. Is it possible to send a customized error message in this scenario?
You need to create a global error handler for your pipeline and set the desired error message there using a Replace action, followed by a "Reply" action.
Keep in mind that if you try to "read" the original request body in the global error handler and the original request was malformed, the error will be thrown up to the system error handler and you will get the system error message again.
Here's a sample OSB 12.2.1.1 project you can use to try this: https://github.com/jvsingh/SOATestingWithCitrus/tree/develop/OSB/Samples/ServiceBusApplication1
The accompanying SoapUI project contains two requests. The malformed request should return the customized error response set in the error handler.
(I have only set the response here. You would also need to set the proper content type and decide whether you want to treat this as "success" or "failure", etc., in the Reply action.)

AppEngine Error Handling

All,
Is there a way to error out/exit execution out of a handler? For instance, if the incoming request doesn't contain the correct headers, we want to send a 400 and exit/close the connection. However, whenever we use self.error(400) or self.response.set_status(400), any other code after it executes anyway. So, for example:
class MyPastaHandler(webapp2.RequestHandler):
    def get(self):
        if not self.request.headers.get('My-Custom-Header'):
            self.error(400)
        ...
        [more code]
        self.response.out.write('{"success": "true"}')
When I submit a request w/o the said custom header, I get back a 400, but I also get the success json in the body of the response, which tells me that self.error(400) doesn't stop execution and neither does self.response.set_status(400).
So, the question is, is it possible to literally error out of a handler?
As it turns out, there is a simple way to exit after a 400 (or any custom error). As described by TheFluff in the AppEngine IRC channel, a simple, old-fashioned empty return after the self.error(400) will do the trick.
In webapp2, abort() is a shortcut to raise an HTTP exception: http://webapp-improved.appspot.com/guide/exceptions.html#abort
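Combining the two suggestions, a minimal sketch (the handler and header names are illustrative, not part of webapp2 itself):

import webapp2

class MyPastaHandler(webapp2.RequestHandler):
    def get(self):
        if not self.request.headers.get('My-Custom-Header'):
            self.error(400)
            return  # the plain empty return is what stops execution here
        # alternative: self.abort(400) raises an HTTP exception,
        # so no explicit return is needed
        self.response.out.write('{"success": "true"}')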

What happens when an async put results in a contention exception after the request has ended, on App Engine with NDB?

Using ndb, let's say I put_async'd 40 elements with @ndb.toplevel, wrote an output to the user and ended the request. If one of those put_asyncs resulted in a contention exception, would the response be 500 or 200? Or, if it is a task, would the task get re-executed?
One solution is calling get_result() on all those 40 requests before the request ends and catching those exceptions if they occur, but I'm not sure whether that will affect performance.
As far as I understand, using @ndb.toplevel causes the handler to wait for all async operations to finish before exiting.
From the docs:
As a convenience, you can decorate the request handler with @ndb.toplevel. This tells the handler not to exit until its asynchronous requests have finished. This in turn lets you send off the request and not worry about the result. https://developers.google.com/appengine/docs/python/ndb/async#intro
So by adding @ndb.toplevel, the response doesn't actually get returned until the async methods have finished executing. Using @ndb.toplevel removes the need to call get_result on all the async calls that were fired off (for convenience). So based on this, the request would still return 500 if the async queries failed, because all the async queries needed to complete before returning. Updated below.
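For illustration, the explicit alternative the question mentions (calling get_result() before the request ends) might look like the sketch below; put_entities_or_raise and entities are hypothetical names:

from google.appengine.ext import ndb

def put_entities_or_raise(entities):
    # fire off all the puts concurrently
    futures = [entity.put_async() for entity in entities]
    # get_result() blocks and re-raises any exception (e.g. datastore
    # contention) here, before the response is sent, so the handler can
    # turn a failure into a 500 deliberately instead of relying on
    # @ndb.toplevel's behavior
    for future in futures:
        future.get_result()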
If using a task (I assume you mean the task queue), the task queue will retry the request if it fails.
So your handler could be something like:
def get(self):
    deferred.defer(execute_stuff_in_background, param, param1)
    template.render(...)
and execute_stuff_in_background would do all the expensive puts once the handler had returned. If there was a contention issue in the task, your original handler would still return 200.
If you suspect there is going to be a contention issue, perhaps consider sharding or using a fork-join queue implementation to handle the writes (see implementation here: http://www.youtube.com/watch?v=zSDC_TU7rtc#t=41m35)
Edit: Short answer
The request will fail (return 500) if the async requests fail, because @ndb.toplevel waits for all results to finish before exiting.
Update: having looked at @alexis's answer below, I re-ran my original test (where I turned off datastore writes and called put_async in a handler decorated with @ndb.toplevel); the response raises a 500 intermittently (I assume this depends on execution time). Based on this and @alexis's answer below, don't expect the result to be 500 if an async task throws an exception and the calling function is decorated with @ndb.toplevel.
That's odd, I use toplevel and expect the opposite behavior. And that's what I observe. Did something change since the first answer to this question?
As the doc says:
This in turn lets you send off the request and not worry about the result.
You can try the following unittest (using testbed):
@ndb.tasklet
def raiseSomething():
    yield ndb.Key('foo', 'bar').get_async()
    raise Exception()

@ndb.toplevel
def callRaiseSomething():
    future = raiseSomething()  # the future is never waited on
    return "hello"

response = callRaiseSomething()
self.assertEqual(response, "hello")
This test passes. NDB logs a warning: "suspended generator raiseSomething(tests.py:90) raised Exception()", but it does not re-raise the exception.
ndb.toplevel only waits for the RPCs, but does nothing with the actual result.
If your decorated function is itself a tasklet, it will call get_result() on it first. At this point exceptions will be raised. Then it will wait for remaining 'orphaned' RPCs, and will only log something if an exception is raised.
So my response is: the request will succeed (return 200)

Camel Apache: can I use a retryWhile to re-send a request?

I would like to achieve the following kind of orchestration with CAMEL:
1. Client sends an HTTP POST request to CAMEL
2. CAMEL sends an HTTP POST request to the external endpoint (server)
3. The external server replies with a 200 OK
4. CAMEL sends an HTTP GET request to the external endpoint (server)
5. The external server replies
After step 5, I want to check the reply: if the reply is a 200 OK and state = INPROGRESS (this state can be retrieved from the received XML body), I want to re-transmit the HTTP GET to the external endpoint until the state is different from INPROGRESS.
I was thinking of using the retryWhile statement, but I am not sure how to build the routine within the route.
E.g., for checking whether the reply is a 200 OK and state = INPROGRESS, I can easily introduce a Predicate, so the retryWhile already looks like:
.retryWhile(Is200OKandINPROGRESS)
but where should I place it in the route so that the HTTP GET will be re-transmitted ?
E.g. (only taking steps 4 and 5 into account):
from("...")
// here format the message to be sent out
.to("external_server")
// what code should I write here ??
// something like:
// .onException(alwaysDo.class)
// .retryWhile(Is200OKandINPROGRESS)
// .delay(2000)
// .end ()
// or maybe it should not be here ??
I am also a bit confused about what the "alwaysDo.class" should look like.
Or ... should I use something completely different to solve this orchestration ?
(I just want to re-transmit as long as I get a 200 OK with INPROGRESS state ...)
Thanks in advance for your help.
On CAMEL Nabble, someone replied to my question. Check out:
http://camel.465427.n5.nabble.com/Camel-Apache-can-I-use-a-retryWhile-to-re-send-a-request-td5498382.html
By using a loop statement, I could re-transmit the HTTP GET request until I received a state different from INPROGRESS. The check on the state needs to be put inside the loop statement using a choice statement. So something like:
.loop(60)
    .choice()
        .when(not(Is200OKandINPROGRESS)).stop() // if the state is not INPROGRESS, stop the loop
    .end() // choice
    .log("Received an INPROGRESS reply on QueryTransaction ... retrying in 5 seconds")
    .delay(5000)
    .to("httpendpoint")
.end() // loop
I never experimented with what you are trying to do, but it does not seem right.
In the code you are showing, the retry will only occur when an alwaysDo exception is thrown.
The alwaysDo.class you are referring to should be the name of the Java exception class you are expecting to handle. See http://camel.apache.org/exception-clause.html for more details.
The idea should be to make the call and inspect the response content, then do a content-based routing (CBR) decision on the state attribute: either call the GET again or terminate/continue the route.
You should probably write a message to the Apache Camel mailing list (or via Nabble). Committers are watching it and are very responsive.

How to get details of DownloadError?

I do the following:
try:
    result = urlfetch.fetch(url=some_url,
    ...
except DownloadError:
    self.response.out.write('DownloadError')
    logging.error('DownloadError')
except Error:
    self.response.out.write('Error')
    logging.error('Error')
Is there any way to get some more detailed description on what happened?
You should use logging.exception to attach the exception and its traceback to the ERROR logging message:
try:
    result = urlfetch.fetch(url=some_url,
    ...
except DownloadError as exception:
    self.response.out.write('Oops, DownloadError: %s' % exception)
    logging.exception('DownloadError')
except Error:
    self.response.out.write('Oops, Error')
    logging.exception('Error')
In short, no. A download error is usually a timeout in our experience: something on the back end taking too long to respond (first byte). If it is already receiving data, it looks like GAE will wait and throw a deadline exception instead after your 10 seconds are up.
Does it ever succeed? Your choices for how to deal with download exceptions will vary depending on the back end.
If you're taking the simplistic route and just retrying, beware of quotas and rate limiters: chances are your requests are indeed reaching the other system and just aren't coming back in time. It is very easy to blow past rate limiters this way.
J
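If you do retry, here is a minimal sketch of a bounded retry loop with an explicit deadline (the function name and limits are illustrative; mind the quota warning above):

import time
from google.appengine.api import urlfetch

def fetch_with_retries(url, attempts=3, deadline=10):
    # retry a bounded number of times, backing off between tries
    for attempt in range(attempts):
        try:
            # an explicit deadline surfaces the error sooner instead of
            # letting the fetch hang until the handler's own deadline
            return urlfetch.fetch(url=url, deadline=deadline)
        except urlfetch.DownloadError:
            if attempt == attempts - 1:
                raise  # out of attempts; let the caller handle it
            time.sleep(2 ** attempt)  # simple exponential backoff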
