The full sentence, taken from the EJB 3.2 specification:
When interacting with a reference to the no-interface view, the client
must not make any assumptions regarding the internal implementation of
the reference, such as any instance-specific state that may be
present in the reference
I'm actually trying to understand what that actually means, and I was wondering if someone could kindly provide some examples.
EDIT:
The above sentence is taken from Section 3.4.4, Session Bean's No-Interface View; maybe this info is helpful.
When generating a no-interface view proxy, the EJB container must create a subclass of the EJB class and override all public methods to provide proxying behavior (like security, transactions).
You can get a reference to the bean with (eg. for passing it to another ejb):
NoInterfaceBean bean = ejbContext.getBusinessObject(NoInterfaceBean.class);
This returns a reference whose class type is the same as the bean class itself (normally, if the EJB has a business interface, it would return the interface class), but it is not a reference to an instance of NoInterfaceBean itself (rather to an instance of that generated proxy class with the same name). Think of it as a reference to a pimped version of your bean, about which you
must not make any assumptions regarding the internal implementation
It's basically the same with "normal" EJBs. You know that there is some magic around your bean instance, but since you get the interface as the class type, it's already clear that every class implementing the interface can have a different internal implementation.
So the specification emphasizes this difference at that point. Even if it looks like a reference to your concrete class, it is not one (as they say in the next paragraph of the specification, JSR-000345 Enterprise JavaBeans 3.2 Final Release):
Although the reference object is type-compatible with the
corresponding bean class type, there is no prescribed relationship
between the internal implementation of the reference and the
implementation of the bean instance.
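To make this concrete, here is a minimal sketch (the bean and method names are illustrative, not from the spec): even though the reference obtained below is type-compatible with NoInterfaceBean, it points to a container-generated subclass, so you cannot draw any conclusions about the fields or state of the actual bean instance from it.

    import javax.annotation.Resource;
    import javax.ejb.SessionContext;
    import javax.ejb.Stateless;

    @Stateless
    public class NoInterfaceBean {

        @Resource
        private SessionContext ejbContext;

        public void doWork() {
            // Type-compatible with the bean class, but actually a container-generated proxy.
            NoInterfaceBean self = ejbContext.getBusinessObject(NoInterfaceBean.class);

            // Calls through the reference get the container's proxying behaviour
            // (transactions, security, ...).
            self.publicMethod();

            // Inspecting fields of 'self' or assuming it shares state with 'this'
            // would be exactly the kind of assumption the spec forbids.
        }

        public void publicMethod() {
            // business logic
        }
    }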
Related
I am a beginner to Spring AOP and I am going through the Spring AOP documentation to understand the concepts, but I failed to understand 'target object'.
the documentation says target object is the "object being advised by one or more aspects. Also referred to as the advised object".
What is the meaning of "being advised by one or more aspects" here? Can anyone explain to me what a target object is in layman's terms, as I am still a beginner?
For a simple explanation of some basic AOP terms please refer to my other answer. Please read that one first before continuing to read here.
So the target object is the (Java or Spring) component to which you want to add new behaviour, usually a cross-cutting concern, i.e. some behaviour that is to be applied to many classes in your code base.
An aspect is a class in which you implement that cross-cutting concern and also determine where and how to apply it. The where is defined by a pointcut, some kind of search expression finding the relevant parts of your code base to apply the behaviour to. The how is implemented in an aspect method called an advice.
So when we say that an aspect advises an object, it means that it adds (cross-cutting) behaviour to it without changing the class itself.
In Spring AOP this is mostly method interception, i.e. doing something before or after a method executes.
In the more powerful AspectJ you can also intercept changes of member variables and constructor execution. Furthermore you can change the class structure itself by adding new members or methods or making the target class implement an interface etc.
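To tie the terms together, here is a minimal Spring AOP sketch (the package and class names are made up for illustration): LoggingAspect is the aspect, serviceMethods() is the pointcut, logCall() is the advice, and any bean matched by the pointcut, say an OrderService, is the target object being advised.

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.aspectj.lang.annotation.Pointcut;
    import org.springframework.stereotype.Component;

    @Aspect
    @Component
    public class LoggingAspect {

        // The "where": every public method in the (hypothetical) service package.
        @Pointcut("execution(public * com.example.service..*.*(..))")
        public void serviceMethods() {}

        // The "how": advice that runs before each matched method,
        // without the target class knowing anything about it.
        @Before("serviceMethods()")
        public void logCall(JoinPoint joinPoint) {
            System.out.println("About to call " + joinPoint.getSignature());
        }
    }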
Is it possible to define multiple targets like below:
@Before(value = "com.test.createUpdateDeletePointCut() && (target(com.testlab.A) || target(com.testlab.B))")
Does anybody know why the BlockData class doesn't directly implement IContent?
I know that while BlockData is being retrieved from the database, the proxy created by Castle implements IContent.
If Stack Overflow isn't a suitable place for this kind of question, please move it.
Johan Björnfot at EPiServer explains some of the details in this post.
Excerpt:
"In previous versions of CMS was pages (PageData) the only content type that the content repository (traditionally DataFactory) handled. In CMS7 this has changed so now content repository (IContentRepository) handles IContent instances. This means that the requirement for a .NET type to be possible to save/load from content repository is that it implements the interface EPiServer.Core.IContent.
There are some implementations of IContent built into CMS like PageData and ContentFolder (used to group shared block instances) and it is also possible to register custom IContent implementations.If you look at BlockData though you will notice that it doesn’t implement IContent, how is then shared block instances handled?
The answer is that during runtime when a shared block instance is created (e.g. through a call to IContentRepository.GetDefault where T is a type inheriting from BlockData) the CMS will create a new .NET type inheriting T using a technic called mixin where the new generated subclass will implement some extra interfaces (including IContent)."
BlockData does implement IContent, as it is intended to work both when added to another content item such as a PageData instance (a.k.a. Local Block) and as a standalone instance (a.k.a. Shared Block). In the latter case the interface is added using a mix-in through Castle Windsor so that it can be referenced.
The decision for this construct was based on wanting to be able to use the same rendering templates regardless of whether a block is local or shared. Therefore the choice stood between having a large number of empty properties on local blocks or the current solution using mixins. Both options were tested and mixins were selected as the preferred solution, even though it's not a perfect one.
BlockData "does implement IContent", just do:
var myContent = (IContent)myBlock;
But, if you're by any chance handling a Block which itself is a property (not a ContentReference), that cast will throw an exception.
This will be true for 100% of all cases (... using Math.Round).
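If you are not sure whether the block you are holding is a shared instance, a defensive cast avoids that exception. A minimal sketch (the variable names are illustrative):

    // Shared block instances are proxied at runtime to implement IContent;
    // blocks used as local properties are not, so 'as' yields null instead of throwing.
    var content = myBlock as IContent;
    if (content != null)
    {
        var link = content.ContentLink; // only meaningful for shared blocks
    }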
I've read the entire Swift book, and watched all the WWDC videos (all of which I heartily recommend). One thing I'm worried about is data encapsulation.
Consider the following (entirely contrived) example:
class Stack<T>
{
    var items : T[] = []

    func push( newItem: T ) {
        items.insert( newItem, atIndex: 0 )
    }

    func pop() -> T? {
        if items.count == 0 {
            return nil
        }
        return items.removeAtIndex( 0 )
    }
}
This class implements a stack, and implements it using an Array. Problem is, items (like all properties in Swift) is public, so nothing is preventing anyone from directly accessing (or even mutating) it separate from the public API. As a curmudgeonly old C++ guy, this makes me very grumpy.
I see people bemoaning the lack of access modifiers, and while I agree they would directly address the issue (and I hear rumors that they might be implemented Soon (TM) ), I wonder what some strategies for data hiding would be in their absence.
Have I missed something, or is this simply an omission in the language?
It's simply missing at the moment. Greg Parker has explicitly stated (in this dev forums thread) that visibility modifiers are coming.
Given that there aren't headers, the standard Objective-C tricks won't work, and I can't think of another trick to limit visibility that doesn't involve lots of bending over backwards. Since the language feature has been promised I'm not sure it's worth any big investment.
On the bright side since this feature is in flux, now is a great time to file a radar and influence how it turns out.
Updated answer for future reference.
From Apple's documentation:
Access Levels
Swift provides three different access levels for
entities within your code. These access levels are relative to the
source file in which an entity is defined, and also relative to the
module that source file belongs to.
Public access enables entities to
be used within any source file from their defining module, and also in
a source file from another module that imports the defining module.
You typically use public access when specifying the public interface
to a framework.
Internal access enables entities to be used within any
source file from their defining module, but not in any source file
outside of that module. You typically use internal access when
defining an app’s or a framework’s internal structure.
Private access
restricts the use of an entity to its own defining source file. Use
private access to hide the implementation details of a specific piece
of functionality. Public access is the highest (least restrictive)
access level and private access is the lowest (or most restrictive)
access level.
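With those access levels in place, the Stack from the question can hide its backing array. A minimal sketch in current Swift syntax (the method signatures are updated from the beta-era code above):

    class Stack<T> {
        // No longer part of the public API: only this type's own source file
        // can touch the backing storage.
        private var items: [T] = []

        func push(_ newItem: T) {
            items.insert(newItem, at: 0)
        }

        func pop() -> T? {
            if items.isEmpty {
                return nil
            }
            return items.remove(at: 0)
        }
    }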
As a matter of fact, I was delighted that Swift finally adopted static typing, conforming to the theory for code with optimal OO properties. Still, the loss of headers breaks the very meaning of object-oriented programming, namely encapsulation. A way out would be, as in Eiffel, to automatically extract the headers; but without specifying which are the public interfaces and which the private ones, it would be worthless. I am really critical of this move of Apple's.
I got this problem,
"The deserializer has no knowlege of any type that maps to this contract"
After googling, I reached this post
The deserializer has no knowlege of any type that maps to this contract
where the answer says the base class has to declare "KnownTypes", like
[DataContract, KnownType(typeof(Subclass)) ...],
If I have to declare this in my parent class, [DataContract, KnownType(typeof(Subclass))], doesn't it break the OO design principle that a parent class shouldn't have to know about its subclasses?
What is the right way of doing this?
The serializer is designed in a way that, if it serializes an object, it should be able to read it back. If you attempt to serialize an object with a declared type of 'Base' but an actual type of 'Derived' (see example below), and you want to be able to read back an instance of 'Derived' from the serialized data, you need to somehow annotate the XML to say that the instance is not of the type as which it was declared.
[DataContract]
public class MyType
{
    [DataMember]
    public object obj = new Derived();
}
The serialized version of the type would look something like the XML below:
<MyType>
  <obj actualType="Derived">
    <!-- fields of the derived type -->
  </obj>
</MyType>
When the type is being deserialized, the serializer will look at the "actualType" (not the actual attribute name) attribute, and it will have to find that type, instantiate it, and set its properties. It's a potential security issue to let the serializer (which in Silverlight lives in a trusted assembly and has more "rights" than normal user code) create arbitrary types, so that's one reason for limiting the types which can be deserialized. And based on the design of the serializer (if we can serialize it, we should be able to deserialize it), serialization fails for that reason as well.
Another problem is that the serialized data is often used to communicate between different services, on different computers, and possibly written in different languages. It's possible (and often the case) that you have a class in a namespace on the client which has a data contract similar to a class on the server side, but they have different names and/or reside in different namespaces. So simply adding the CLR type name in the "actualType" attribute won't work in this scenario either (the [KnownType] attribute helps the serializer map the data contract name/namespace to the actual CLR type). Also, if you're talking to a service in a different language/platform (e.g. Java), CLR type names don't even make sense.
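For completeness, here is a minimal sketch of the [KnownType] annotation the linked answer refers to (the class names Base and Derived are illustrative): the attribute tells the serializer which concrete data contracts may appear where the base contract is declared.

    using System.Runtime.Serialization;

    [DataContract]
    [KnownType(typeof(Derived))] // lets the deserializer map the "Derived" contract back to a CLR type
    public class Base
    {
        [DataMember]
        public string Name { get; set; }
    }

    [DataContract]
    public class Derived : Base
    {
        [DataMember]
        public int Extra { get; set; }
    }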
Another more detailed explanation is given at the post http://www.dotnetconsult.co.uk/weblog2/PermaLink,guid,a3775eb1-b441-43ad-b9f1-e4aaba404235.aspx - it talks about [ServiceKnownType] instead of [KnownType], but the principles are the same.
Finally, about your question: does it break that OO principle? Yes, that principle is broken; that's the price to pay for being able to have loose coupling between the clients and services in your distributed (service-oriented) application.
Yes, it breaks the principles of OO design. This is because SOA is about sharing contracts (the C in the ABC of services) and not types, whereas OO is about type hierarchies. Think of it like this: the client for a service may not even be written in an OO language, but SOA principles can still be applied. How the mapping is done on the server side is an implementation issue.
When using FxCop 1.36 for a WPF application with a single window that has yet to be modified, I get the InterfaceMethodsShouldBeCallableByChildTypes error with the following details:
Target : #System.Windows.Markup.IComponentConnector.Connect(System.Int32,System.Object) (IntrospectionTargetMember)
Resolution : "Make 'MainWindow' sealed (a breaking change if
this class has previously shipped), implement the method
non-explicitly, or implement a new method that exposes
the functionality of 'IComponentConnector.Connect(int,
object)' and is visible to derived classes."
Help : http://msdn2.microsoft.com/library/ms182153(VS.90).aspx (String)
Category : Microsoft.Design (String)
CheckId : CA1033 (String)
RuleFile : Design Rules (String)
Info : "Explicit method implementations are defined with private
accessibility. Classes that derive from classes with
explicit method implementations and choose to re-declare
them on the class will not be able to call into the
base class implementation unless the base class has
provided an alternate method with appropriate accessibility.
When overriding a base class method that has been hidden
by explicit interface implementation, in order to call
into the base class implementation, a derived class
must cast the base pointer to the relevant interface.
When calling through this reference, however, the
derived class implementation will actually be invoked,
resulting in recursion and an eventual stack overflow."
Created : 08/12/2008 22:26:37 (DateTime)
LastSeen : 08/12/2008 22:41:05 (DateTime)
Status : Active (MessageStatus)
Fix Category : NonBreaking (FixCategories)
}
Should this simply be ignored?
Ignore it; this is standard code that is in every WPF application, and you don't see people complaining about not being able to call IComponentConnector.Connect from derived classes - so it's probably safe.
In general I think you should treat FxCop output as suggestions that have to be considered carefully; I've received a lot of bad advice from FxCop in the past.
It depends on what you expect an inheritor to do.
If you are not expecting this class to be inherited then it should be sealed and the problem goes away.
If you are expecting it to be inherited, then you are taking away the ability of the inheriting class to override the interface methods and still call them (i.e. base.MethodName()). If that is your intent, then you can ignore the warning.
However, that is not expected behaviour for inheritable classes, so you should expose the interface publicly (i.e. an implicit interface implementation instead of an explicit one).
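To illustrate that suggestion, here is a minimal sketch of the pattern the rule asks for (the type and method names are made up, not taken from the WPF-generated code): the explicit implementation simply delegates to a protected member that derived classes can call and override.

    public interface IConnector
    {
        void Connect(int id, object target);
    }

    public class ConnectedWindow : IConnector
    {
        // Explicit implementation: private, so derived classes cannot call it directly.
        void IConnector.Connect(int id, object target)
        {
            ConnectCore(id, target);
        }

        // Exposes the same functionality with accessibility that derived classes can use,
        // which is what CA1033 is really asking for.
        protected virtual void ConnectCore(int id, object target)
        {
            // actual wiring logic goes here
        }
    }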