I am a beginner to Spring AOP and I am going through the Spring AOP documentation to understand the concepts, but I failed to understand 'target object'.
The documentation says the target object is the "object being advised by one or more aspects. Also referred to as the advised object".
What is the meaning of 'being advised by one or more aspects' here? Can anyone explain to me what a target object is in layman's terms, as I am still a beginner?
For a simple explanation of some basic AOP terms please refer to my other answer. Please read that one first before continuing to read here.
So the target object is the (Java or Spring) component to which you want to add new behaviour, usually a cross-cutting concern, i.e. some behaviour that is to be applied to many classes in your code base.
An aspect is a class in which you implement that cross-cutting concern and also determine where and how to apply it. The where is defined by a pointcut, some kind of search expression finding the relevant parts of your code base to apply the behaviour to. The how is implemented in an aspect method called an advice.
So when we say that an aspect advises an object, it means that it adds (cross-cutting) behaviour to it without changing the class itself.
In Spring AOP this is mostly method interception, i.e. doing something before or after a method executes.
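For example, here is a minimal sketch of a Spring aspect whose advice runs before the methods of a target bean; the OrderService class, package name and log message are made up for illustration:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {
    // The pointcut (the "where"): all public methods of the hypothetical OrderService bean,
    // which is the target object being advised.
    // The advice (the "how"): print the signature before each matched method executes.
    @Before("execution(public * com.example.OrderService.*(..))")
    public void logCall(JoinPoint joinPoint) {
        System.out.println("About to call: " + joinPoint.getSignature());
    }
}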
In the more powerful AspectJ you can also intercept changes of member variables and constructor execution. Furthermore you can change the class structure itself by adding new members or methods or making the target class implement an interface etc.
Is it possible to define multiple targets like below:
@Before(value = "com.test.createUpdateDeletePointCut() && (target(com.testlab.A) || target(com.testlab.B))")
I am working with Flink 1.15.2. Should I use Row or GenericRowData (which inherits from RowData) for my own data type? I mostly use the streaming API.
Thanks.
Sig.
In general the DataStream API is very flexible when it comes to record types. POJO types might be the most convenient ones. Basically any Java class can be used but you need to check which TypeInformation is extracted via reflection. Sometimes it is necessary to manually overwrite it.
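For example, a minimal sketch of a Flink-friendly POJO (the ClickEvent class and its fields are made up for illustration):

import org.apache.flink.api.common.typeinfo.TypeInformation;

// A Flink-friendly POJO: public no-argument constructor and public fields,
// so reflection can extract a proper PojoTypeInfo instead of falling back to a generic type.
public class ClickEvent {
    public String userId;
    public long timestamp;

    public ClickEvent() {}

    public static void main(String[] args) {
        // Print what TypeInformation reflection extracts for this class.
        System.out.println(TypeInformation.of(ClickEvent.class));
    }
}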
For Row you will always have to provide the types manually as reflection cannot do much based on class signatures.
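A minimal sketch of what that can look like in the DataStream API (the toRows helper and the field names are made up for illustration):

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.types.Row;

public class RowTypeExample {
    // Declares the row type explicitly via returns(), because reflection only sees
    // "Row" in the map function's signature and cannot recover the field types.
    static DataStream<Row> toRows(DataStream<Tuple2<String, Long>> input) {
        return input
                .map(t -> Row.of(t.f0, t.f1))
                .returns(Types.ROW_NAMED(
                        new String[] {"name", "count"},
                        Types.STRING, Types.LONG));
    }
}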
GenericRowData should be avoided; it is rather an internal class with many caveats (strings must be StringData and array handling is not straightforward). Also, GenericRowData becomes BinaryRowData after deserialization. TL;DR: this type is meant for the SQL engine.
The docs are actually helpful here; I was confused too.
The section at the top titled "All Known Implementing Classes" lists all the implementations. RowData and GenericRowData are described as internal data structures. If you can use a POJO, then great. But if you need something that implements RowData, take a look at BinaryRowData, BoxedWrapperRowData, ColumnarRowData, NestedRowData, or any of the implementations there that aren't listed as internal.
I'm personally using NestedRowData to map a DataStream[Row] into a DataStream[RowData], and I'm not at all sure that's a good idea :) Especially since I can't seem to add a string attribute.
I am new to Salesforce Apex coding. My first class that I am developing has 10 methods and is some 800 lines.
I haven't added much exception handling, so the size should swell further.
I am wondering what the best practice for Apex code is... should I create 10 classes with 1 method each, instead of 1 class with 10 methods?
Any help on this would be greatly appreciated.
Thanks
Argee
What do you use for coding? Try to move away from the Developer Console. VS Code has some decent plugins like Prettier or Apex PMD that should help you with formatting and warn you when methods get too complex. ~80 lines/method is so-so. I'd worry more about passing long lists of parameters and having deeply nested code in functions than about their sheer length.
There are general guidelines (from other languages; there's nothing special about Apex!) that ideally a function should fit on one screen so the programmer can see it whole without scrolling. Read this one, maybe it'll resonate with you: https://dzone.com/articles/rule-30-%E2%80%93-when-method-class-or
I wouldn't split it into separate files just for the sake of it, unless you can clearly define some "separation of concerns". Say 1 trigger per object and 1 trigger handler class (ideally derived from a base class). Put the chunkier bits not in the handler but in some "service"-style class with public static methods that can operate whether called from a trigger, Visualforce, a Lightning web component, or some one-off data fix; maybe in the future you'd need to expose part of it as a REST service too. And keep a separate file for unit tests. (As blasphemous as it sounds, try not to write too many comments. As you're learning you'll need comments to remind yourself what built-in methods do, but naming your functions right can help a lot. And a well-written unit test is better at demonstrating the idea behind the code, sample usage and expected errors than comments, which are often overlooked.)
Exception handling is an art. Sometimes it's good to just let it throw an exception. If you have a method that creates an Account, Contact and Opportunity and, say, the Opportunity fails on a validation rule - what should happen? Only you will know what's good. An exception will mean the whole thing gets rolled back (no "widow" Accounts), which sucks but it's probably a "more stable" state for your application. If you naively try-catch it without Database.rollback() - how will you tell the user not to create duplicates with a 2nd click? So maybe you don't need too much error handling ;)
The full sentence, taken from the EJB 3.2 specification:
When interacting with a reference to the no-interface view, the client must not make any assumptions regarding the internal implementation of the reference, such as any instance-specific state that may be present in the reference.
I'm actually trying to understand what that actually means, and I was wondering if someone could kindly provide some examples.
EDIT:
The above sentence is taken from Section 3.4.4, Session Bean’s No-Interface View; maybe this info is helpful.
When generating a no-interface view proxy, the EJB container must create a subclass of the EJB class and override all public methods to provide proxying behavior (like security, transactions).
You can get a reference to the bean with (e.g. for passing it to another EJB):
NoInterfaceBean bean = ejbContext.getBusinessObject(NoInterfaceBean.class);
This returns a reference with a class type that is the same as the bean class itself (normally, if the EJB has a business interface, it would return the interface class), but it is not a reference to an instance of NoInterfaceBean (it is a reference to that proxy class with the same name). Think of it as a reference to a pimped version of your bean, about which you
must not make any assumptions regarding the internal implementation
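As a minimal sketch (the bean and its methods are made up for illustration), such a no-interface bean and the reference obtained from the container could look like this:

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;

// A no-interface view bean: no business interface, clients use the class type directly.
@Stateless
public class NoInterfaceBean {

    @Resource
    private SessionContext ejbContext;

    public String greet(String name) {
        return "Hello, " + name;
    }

    public NoInterfaceBean selfReference() {
        // Type-compatible with NoInterfaceBean, but actually a container-generated
        // proxy subclass - not this very instance, and not something whose internal
        // implementation you should make assumptions about.
        return ejbContext.getBusinessObject(NoInterfaceBean.class);
    }
}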
It's basically the same with "normal" EJBs. You know that there is some magic around your bean instance, but since you get the interface as the class type, it's already clear that every class implementing the interface can have a different internal implementation.
So the specification emphasizes this difference at that point. Even if it looks like a reference to your concrete class, it is not one, as they say in the next paragraph of the specification (JSR-000345, Enterprise JavaBeans 3.2 Final Release):
Although the reference object is type-compatible with the corresponding bean class type, there is no prescribed relationship between the internal implementation of the reference and the implementation of the bean instance.
I am reading The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas. When I was reading about a term called orthogonality I thought I was getting it right; I seemed to understand it very well. However, at the end of the chapter a few questions were asked to measure the level of understanding of the subject. While I was trying to answer those questions for myself I realized that I hadn't understood it perfectly. So to clarify my understanding I am asking those questions here.
C++ supports multiple inheritance, and Java allows a class to
implement multiple interfaces. What impact does using these facilities
have on orthogonality? Is there a difference in impact between using multiple
inheritance and multiple interfaces?
There are actually three questions bundled up here: (1) What is the impact of supporting multiple inheritance on orthogonality? (2) What is the impact of implementing multiple interfaces on orthogonality? (3) What is the difference between the two sorts of impact?
Firstly, let us get to grips with orthogonality. In The Art of Unix Programming, Eric Raymond explains that "In a purely orthogonal design, operations do not have side effects; each action (whether it's an API call, a macro invocation, or a language operation) changes just one thing without affecting others. There is one and only one way to change each property of whatever system you are controlling."
So, now look at question (1). C++ supports multiple inheritance, so a class in C++ could inherit from two classes that have the same operation but with two different effects. This has the potential to be non-orthogonal, but C++ requires you to state explicitly which parent class has the feature to be invoked. This will limit the operation to only one effect, so orthogonality is maintained. See Multiple inheritance.
And question (2). Java does not allow multiple inheritance. A class can only derive from one base class. Interfaces are used to encode similarities which the classes of various types share, but do not necessarily constitute a class relationship. Java classes can implement multiple interfaces but there is only one class doing the implementation, so there should only be one effect when a method is invoked. Even if a class implements two interfaces which both have a method with the same name and signature, it will implement both methods simultaneously, so there should only be one effect. See Java interface.
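A minimal Java sketch of that situation (the interface and class names are made up for illustration):

interface Printable {
    void render();
}

interface Drawable {
    void render();
}

// One class, one implementation: the single render() method satisfies both
// interfaces, so invoking it has only one effect.
class Shape implements Printable, Drawable {
    @Override
    public void render() {
        System.out.println("rendering shape");
    }
}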
And finally question (3). The difference is that C++ and Java maintain orthogonality by different mechanisms: C++ by demanding that the parent is explicitly specified, so there will be no ambiguity in the effect; and Java by implementing similar methods simultaneously, so there is only one effect.
Irrespective of the number of interfaces/classes you extend, there will be only one implementation inside that class. Let's say your class is X.
Now orthogonality says - one change should affect only one module.
If you change your implementation of one interface in class X, will it affect other modules/classes using your class X? The answer is no, because the other modules/classes are coding to the interface, not the implementation.
Hence orthogonality is maintained.
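As a minimal sketch of that idea (all names are made up for illustration):

interface PaymentProcessor {
    void pay(int amountCents);
}

// Class X implements the interface; its internals can change freely.
class X implements PaymentProcessor {
    @Override
    public void pay(int amountCents) {
        System.out.println("paying " + amountCents + " cents");
    }
}

// The client codes to the interface, not to X, so changing X's
// implementation does not ripple into Checkout.
class Checkout {
    private final PaymentProcessor processor;

    Checkout(PaymentProcessor processor) {
        this.processor = processor;
    }

    void complete() {
        processor.pay(500);
    }
}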
What are the pros and cons of having multiple inheritance?
And why don't we have multiple inheritance in C#?
UPDATE
OK, so it is currently avoided because of the issue with clashes: resolving which parent method is being called, etc. Surely this is a problem for the programmer to resolve. Or maybe this could be resolved similarly to SQL, where, when there is a conflict, more information is required, i.e. ID might need to become Sales.ID to resolve a conflict in the query.
Here is a good discussion on the pitfalls of multiple inheritance:
Why should I avoid multiple inheritance in C++?
Here is a discussion from the C# team on why they decided not to allow multiple inheritance:
http://blogs.msdn.com/csharpfaq/archive/2004/03/07/85562.aspx
http://dotnetjunkies.com/WebLog/unknownreference/archive/2003/09/04/1401.aspx
It's just another tool in the toolbox. Sometimes, it is exactly the right tool. If it is, having to find a workaround because the language actually prohibits it is a pain and leads to good opportunities to screw it up.
Pros and cons can only be found for a concrete case. I guess it's quite rare for it to actually fit a problem, but who are the language designers to decide how I am to tackle a specific problem?
I will give a pro here based on a C++ report-writer I've been converting to REALbasic (which has interfaces but only single-inheritance).
Multiple inheritance makes it easier to compose classes from small mixin base classes that implement functionality and have properties to remember state. When done right, you can get a lot of reuse of small code without having to copy-and-paste similar code to implement interfaces.
Fortunately, REALbasic has "extends" methods, which are like the extension methods added in C# 3.0. These help a bit with the problem, especially as they can be applied to arrays. I still ended up with some class hierarchies being deeper as a result of folding in what were previously multiply-inherited classes.
The main con is that if two base classes have a method with the same name, the new subclass doesn't know which one to call.
In C# you can do a form of multiple inheritance by including instances of each parent object within the child.
class MyClass
{
    private Class1 class1;
    private Class2 class2;

    public MyClass()
    {
        class1 = new Class1();
        class2 = new Class2();
    }

    // Then, expose whatever functionality you need to from there.
}
When you inherit from something, you are asserting that your class is of that (base) type in every way, except that you may implement something slightly differently or add something to it. It's actually extremely rare that your class is two things at once. Usually it just has behaviour common to two or more things, and a better way to describe that is generally to have your class implement multiple interfaces (or possibly use encapsulation, depending on your circumstances).
It's one of those help-me-to-not-shoot-myself-in-the-foot quirks, much like in Java.
Although it is nice to extend fields and methods from multiple sources (imagine a Modern Mobile Phone, which inherits from MP3 Players, Cameras, Sat-Navs, and the humble Old School Mobile Phone), clashes cannot be resolved by the compiler alone.