Is Hystrix fallback implementation mandatory?

I have a method calling a third party resource to retrieve a Dog object. If the call fails, I have no alternative path to return the Dog object.
So, what is the preferred approach here?
1. Implement a Hystrix fallback that returns a null Dog object.
2. Let Hystrix throw an exception when the call fails, catch the exception, and return a null Dog object.
Option 1 or 2?
Is a Hystrix fallback a mandatory implementation requirement if you don't really have a fallback approach?
I think not, but what's your opinion? What do the Hystrix guidelines suggest?

From what you are saying, both options result in the same outcome: a null Dog being returned. I would say that handling it with a fallback is better in such a case because it's cleaner. The fallback is defined as part of the command, and as such is consistent across all uses of that command. That would not be true if you handled it as an exception: you would need to remember to implement the exception handling at every place the call is made, and in the same way, and it's easy to make a mistake by forgetting one.
Is it mandatory to have a fallback?
It's not mandatory. If no fallback is provided, an exception will be raised when the command fails.
In your case, you do have a fallback strategy; it is just a very simple one. It is actually covered in the Hystrix wiki as Fail Silent, and you can read more about it at https://github.com/Netflix/Hystrix/wiki/How-To-Use#Common-Patterns-FailSilent
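A fail-silent command for your case could look roughly like this (Dog, DogClient, and the group key name are made-up placeholders, not from your code):

    import com.netflix.hystrix.HystrixCommand;
    import com.netflix.hystrix.HystrixCommandGroupKey;

    // Dog and DogClient are hypothetical stand-ins for your types.
    class Dog { String name; }

    interface DogClient {
        Dog fetchDog(String id); // the third-party call that may fail
    }

    public class GetDogCommand extends HystrixCommand<Dog> {
        private final DogClient client;
        private final String dogId;

        public GetDogCommand(DogClient client, String dogId) {
            super(HystrixCommandGroupKey.Factory.asKey("DogService"));
            this.client = client;
            this.dogId = dogId;
        }

        @Override
        protected Dog run() {
            return client.fetchDog(dogId); // executed by Hystrix, may throw
        }

        @Override
        protected Dog getFallback() {
            return null; // fail silent: every caller gets the same null Dog
        }
    }

The fallback lives in one place, so every call site behaves the same way.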
It is worth thinking about which operations need a fallback, or rather, what the fallback should be. A fallback usually makes sense for a read operation; for a write operation it may not be a good idea, as described here: https://github.com/Netflix/Hystrix/wiki/How-To-Use#Fallback.
Defining a fallback strategy is part of the pattern. Imagine a project with a few developers where Hystrix is used: it helps to know exactly where to look for the fallback strategy. That is a real benefit.
One thing to keep in mind is that exceptions are more expensive to process, so using them as a fallback strategy is a waste of resources. This won't be an issue in a small project with a small number of requests, but it could cause problems in a system that processes a lot of such calls.
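For contrast, here is what option 2 looks like with the same hypothetical command: without a fallback, execute() throws a HystrixRuntimeException, and this handling has to be repeated (and remembered) at every call site:

    import com.netflix.hystrix.exception.HystrixRuntimeException;

    Dog getDogOrNull(DogClient client, String dogId) {
        try {
            return new GetDogCommand(client, dogId).execute();
        } catch (HystrixRuntimeException e) {
            return null; // easy to forget, easy to do differently elsewhere
        }
    }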

Related

APEX best practice

I am new to Salesforce apex coding. My first class that I am developing has 10 methods and is some 800 lines.
I haven't added much exception handling yet, so the size will swell further.
I am wondering what the best practice for Apex code is... should I create 10 classes with 1 method each instead of keeping 1 class with 10 methods?
Any help on this would be greatly appreciated.
Thanks
Argee
What do you use for coding? Try to move away from the Developer Console. VSCode has some decent plugins like Prettier or Apex PMD that should help you with formatting and with flagging methods that are too complex. ~80 lines/method is so-so. I'd worry more about passing long lists of parameters and having deeply nested code in functions than about their raw length.
There are general guidelines (from other languages; there's nothing special about Apex!) that ideally a function should fit on one screen so the programmer can see it whole without scrolling. Read this one, maybe it'll resonate with you: https://dzone.com/articles/rule-30-%E2%80%93-when-method-class-or
I wouldn't split it into separate files just for the sake of it, unless you can clearly define some "separation of concerns". Say, 1 trigger per object and 1 trigger handler class (ideally derived from a base class). Chunkier bits go not in the handler but maybe in some "service"-style class that has public static methods and can operate whether called from a trigger, Visualforce, a Lightning web component, or some one-off data fix; maybe in the future you'd need to expose part of it as a REST service. And a separate file for unit tests. As blasphemous as it sounds, try not to write too many comments: while you're learning you'll need comments to remind yourself what built-in methods do, but naming your functions right can help a lot, and a well-written unit test is better at demonstrating the idea behind the code, sample usage, and expected errors than comments, which are often overlooked.
Exception handling is an art. Sometimes it's good to just let it throw an exception. If you have a method that creates an Account, a Contact, and an Opportunity, and say the Opportunity fails on a validation rule - what should happen? Only you will know what's right. An exception means the whole thing gets rolled back (no "widow" Accounts), which sucks but is probably a "more stable" state for your application. If you naively try-catch it without Database.rollback(), how will you tell the user not to create duplicates with a second click? So maybe you don't need too much error handling ;)

Handling poison messages in Apache Flink

I am trying to figure out the best practices to deal with poison messages / unhandled exceptions with Apache Flink. We have a Job doing real time event processing of location data from IoT devices. There are two potential scenarios where this can arise:
Data is bad in some way - e.g. invalid value
Data triggers a bug due to some edge case we have not anticipated.
Currently, all my data processing stops because of just one message.
I've seen two suggestions:
Catch the exceptions - this requires me to wrap every piece of logic with something that catches every runtime exception
Use side outputs as a kind of DLQ - from what I can tell, this seems to be a variation on #1, where I still have to catch all the exceptions and send them to the side output.
Is there really no way to do this other than wrapping every piece of logic with exception handling? Is there no generic way to catch exceptions and have processing continue?
I think the idea is not to catch all kinds of exceptions and send them elsewhere, but rather to have well-tested and functioning code and use dead letters only for invalid inputs.
So a typical pipeline would be
source => validate => ... => sink
              \=> dead letter queue
Once a record passes your validate operator, you want all errors to bubble up, as any error in the downstream operators may result in corrupted aggregates and data that, once written, cannot easily be reverted.
The validate step would work with either of the two approaches that you outlined. Typically, side outputs have better semantics, but you may end up with more code.
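A minimal sketch of such a validate operator with a side output; RawEvent, LocationEvent, and the parse step are assumptions, not your actual types:

    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.Collector;
    import org.apache.flink.util.OutputTag;

    // Hypothetical stand-ins for your event types.
    class RawEvent { String payload; }

    class LocationEvent {
        static LocationEvent parse(RawEvent raw) {
            // stub: imagine this throws IllegalArgumentException on invalid values
            return new LocationEvent();
        }
    }

    public class Validate extends ProcessFunction<RawEvent, LocationEvent> {
        // Anonymous subclass so Flink can capture the element type.
        public static final OutputTag<RawEvent> DEAD_LETTERS =
                new OutputTag<RawEvent>("dead-letters") {};

        @Override
        public void processElement(RawEvent raw, Context ctx, Collector<LocationEvent> out) {
            try {
                out.collect(LocationEvent.parse(raw)); // throws on invalid values
            } catch (Exception e) {
                ctx.output(DEAD_LETTERS, raw); // poison message goes to the DLQ
            }
        }
    }

Downstream you'd pick the dead letters up with getSideOutput(Validate.DEAD_LETTERS) on the resulting stream and write them to wherever your DLQ lives.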
Now, you may have a service with high SLAs and actually want it to produce output even if it is corrupted, just to produce data. Or you have a simple transformation pipeline, where you'd miss some events but keep the majority (and downstream can deal with incomplete data). Then you are right that you need to wrap the code of all operators in try-catch. However, you'd typically still only do it for the fragile operators and not for all of them. Trivial operators should be tested and then trusted to work. Further, you'd usually only catch specific kinds of exceptions, to limit the scope to the kinds of expected exceptions that can happen.
You might wonder why Flink doesn't have this incorporated as a default pattern. There are two reasons, as far as I can see:
If Flink silently ignores any kind of exception and sends an extra message to a secondary sink, how can Flink ensure that the throwing operator is in a sane state afterwards? How can it avoid any kind of leaks that may happen because cleanup code is not executed?
It's more common in Java to let developers explicitly reason about exceptions and exception handling. It's also not straightforward to see what the requirements are: do you want to have the input only? Do you also want to store the exception? What about the operator state that may have influenced the outcome? Should Flink still fail when too many errors have been received in a given time window? It quickly becomes a huge feature for something that should not happen at all in an ideal world where high-quality data is ingested and properly processed.
So while it looks easy for your case, because you know exactly which kinds of information you want to store, it's not easy to have a solution for all purposes, especially since the extra code that a user has to write is tiny compared to a generic solution.
What you could do is extract most of the complicated logic into a single ProcessFunction and use side outputs as you have outlined. Since it's a central piece, you'd only need to write the side-output handling once. If it's needed multiple times, you could extract a helper function where you pass your actual code as a RunnableWithException lambda that hides all the side-output logic. Make sure you use plenty of finally blocks to ensure a sane state.
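Such a helper could look roughly like this; FailSafeProcessFunction and tryProcess are invented names, and RunnableWithException here is Flink's org.apache.flink.util.function.RunnableWithException (any throwing functional interface would do):

    import org.apache.flink.streaming.api.functions.ProcessFunction;
    import org.apache.flink.util.OutputTag;
    import org.apache.flink.util.function.RunnableWithException;

    // Invented base class that writes the side-output plumbing exactly once.
    public abstract class FailSafeProcessFunction<I, O> extends ProcessFunction<I, O> {
        private final OutputTag<I> deadLetters;

        protected FailSafeProcessFunction(OutputTag<I> deadLetters) {
            this.deadLetters = deadLetters;
        }

        // Subclasses pass their actual logic as a lambda; on any exception the
        // offending input record is sent to the dead letter side output instead.
        protected void tryProcess(I input, Context ctx, RunnableWithException body) {
            try {
                body.run();
            } catch (Exception e) {
                ctx.output(deadLetters, input);
            }
        }
    }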
I'd also add quite a few integration test cases and use mutation testing to harden your pipeline more quickly. If you keep your test data inline, the mutants may also simulate exactly your unexpected data issues, so that your validate operator becomes more complete.

Why is Auth._currentUser not recommended in angular_devise?

Per angular_devise docs, https://github.com/cloudspace/angular_devise, using Auth._currentUser is frowned upon.
Auth._currentUser
Auth._currentUser will be either null or the currentUser's object
representation. It is not recommended to directly access
Auth._currentUser, but instead use Auth.currentUser().
Why is this a bad idea?
I'm currently using it in an application because it allows me to be more concise and add functionality with fewer lines of code.
Underscored properties and methods are considered private. This means that this part of the API can be changed without notice, and using it can lead to unforeseen consequences.
Since getting the current user is an asynchronous operation, the promise-based Auth.currentUser() should be used instead.
Last but not least, calls to Auth.currentUser() can be spied on in specs, which improves testability.

How to model the state "needs discussion" for a bug/feature in Bugzilla 3.0.x

In our group, we have to model the state "needs discussion" for Bugzilla.
Therefore, a custom "RESOLVED - to be discussed" status was introduced. The appropriate group of people searches for issues that have this sort of "resolution" status and discusses them offline.
In my opinion, this is not the proper way, as the bug/feature clearly is not resolved if there is still a need for discussion. This is also reflected in the standard life cycle of a bug. It is somewhat misleading, as "needs discussion" items show up in your list of resolved bugs.
One way I can think of would be to create a sort of "virtual user" representing the group that has to be involved in the discussion. This has the advantage that one can search for the bugs easily. One could also set up a mailing list to notify the users.
I wonder how one can appropriately model this needs discussion state of a bug in Bugzilla 3.0.x. (And: what is the Mozilla-way solution?)
As with any software system there are a multitude of ways to address your need.
Before you start with the mechanism, it would be good to think about requirements.
Do you want bugs that need discussion to still be counted as "open", or do you consider them "resolved"? Do you even collect those types of metrics?
The requirements I derive from your question are:
Don't want to see them in normal searches
Do want to be able to see them when looking explicitly
Need to be able to finalize the discussion, and "bring the bug back" to normal
Would like to notify people that there is a discussion to be held
Would like the bugs to not look like they are resolved
If those really are the requirements, and you don't care that "for discussion" bugs show up as resolved in metrics etc., then I think what you have is probably good enough, except for point 5.
Some other alternatives
Create a "Discussion" product (or component).
Create a custom lifecycle (I wouldn't recommend that though).
Assign to the "Discuss-me" group/user
Use a "super bug", and mark the bugs a blocking the "Discussion super bug"
Create a "discuss this issue" bug, and mark the bug as blocked by the discussion (this reflects reality the closest, but that doesn't make it the best option).
But, as I say, think about what you're trying to achieve first, and then choose the mechanism based on that. There are different trade-offs around the amount of bug-fiddling you have to do to make them all work :-).

Should an API assume the proper calling sequence, or should it verify the assumptions?

I have 2 api calls to implement, let's call them "ShouldDoSomething" and "DoSomething". The first is a test to see if a particular action needs to be taken and the other actually takes that action. Naturally, taking the action is only necessary or valid if the test returns true. There are cases where the test is needed without actually taking the action, so both are necessary.
Should the action call internally run the test and become a no-op if it's not needed, or should the implementation assume that it will only ever be called when the test has already returned true? It seems simpler and safer to validate the assumptions, but then the 'action' call has three possible return states instead of two (success, failure, and not needed). Another approach would be to make the test an assertion and abort if the action was called unnecessarily. (This is in C, so exceptions aren't really a good idea.)
But it comes down to a choice: simpler API calls or fewer API calls?
Pick a policy according to what you think is cleanest for the intended use cases. Document it clearly and carefully. The most important thing with API decisions like this is documentation, not the specifics of what convention is chosen.
If you decide to use the first option, do you actually need to add a "not needed" return state, or can it simply indicate "success" if the operation is not needed? Don't add complexity unless it's necessary.
I think it would be best to go with:
"Another approach would be to make the test an assertion and abort if the action was called unnecessarily."
So for example, if "DoSomething" is called and "ShouldDoSomething" says that you shouldn't take that particular action, then "DoSomething" should abort without doing anything. Do you really need to expose "ShouldDoSomething" to the client, if it's only testing whether "DoSomething" should be called?
I think it also comes down to what will be changed. Could it be fatal if called inappropriately? If so, I think you should put assertions in. If it would be harmless, then it doesn't really matter.
I would say safer is the way to go.
Hope this helps.
Stephen Canon is correct about consistency and documentation.
Personally, I tend to err on the side of protecting people from themselves, unless it carries a serious performance penalty. In most situations I would lean toward checking ShouldDoSomething internally from DoSomething and returning an error if DoSomething wasn't required. That indicates a logic error to the user and is a lot nicer than letting them blow up, or leaving something in an invalid state that might be hard to debug down the line.
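A sketch of that option, with invented names (shown in Java for consistency with the examples above; in C the same shape falls out of an enum return code, which also fits the no-exceptions constraint). The third state surfaces the caller's logic error without blowing up:

    enum DoResult { SUCCESS, FAILURE, NOT_NEEDED }

    final class Widget {
        boolean shouldDoSomething() {
            // the test, still exposed for callers that only want to ask
            return false; // placeholder
        }

        DoResult doSomething() {
            if (!shouldDoSomething()) {
                return DoResult.NOT_NEEDED; // validated internally: no-op, reported
            }
            // ... take the action ...
            return DoResult.SUCCESS;
        }
    }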
The simplest API (from a user perspective) would be to have a single function, DoSomethingIfNeeded(), and take care of the conditions yourself. Why bother the user?
The least friendly user interface is the one that says: "Now call one of these functions! No, not that one!".
