Stateless Session Beans Object Identity - ejb-3.1

The EJB 3.2 specification, Section 3.4.7.2 Stateless Session Beans, shows the following code to demonstrate equality:
@EJB Cart cart1;
@EJB Cart cart2;
...
if (cart1.equals(cart1)) { // this test must return true
    ...
}

Well, it is not obvious: we are in fact talking about references to proxies managed by the container.
However, the example is mostly used together with the stateful bean case, where the same equals comparison returns false. The container in that case returns a new reference for each injection, and you end up with two distinct beans.
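For contrast, a minimal sketch of the stateful case discussed alongside this one: two separately injected references to a stateful bean refer to distinct session objects, so the comparison fails.
@EJB Cart cart1; // stateful bean: each injection point gets its own session
@EJB Cart cart2;
...
if (cart1.equals(cart2)) { // for a stateful bean this test must return false
    ...
}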
UPDATE
Hmm... I made a mistake: I had read it as cart1.equals(cart2). In that case, I have to say that it is almost obvious, as you have noticed.
However, precisely because it is not a simple reference obtained with new, but something the app server is managing for you, it is good to know that it serves you the same object identity. This is especially true, and fundamental, in the case of a stateful bean.
Right now I cannot think of a useful purpose for a stateless bean having the same object identity, compared to a stateful one, but I am sure there are some examples.

Related

How to handle multiple entity update in the same transaction in Spring Data REST

Does anyone have an idea how to handle multiple entity updates within the same transaction in Spring Data REST? The same thing can be handled within Spring controller methods using the @Transactional annotation. If I am correct, Spring Data REST executes every event within a separate transaction, so multiple entity updates cannot be handled properly.
I am having issues updating two entities (ABC and PQR) within the same transaction and rolling back the ABC entity when the PQR update fails.
// ABC repository
@RepositoryRestResource
public interface ABCEntityRepository extends MongoRepository<ABC, String> {
}

// PQR repository
@RepositoryRestResource
public interface PQREntityRepository extends MongoRepository<PQR, String> {
}

// ABC event handler
@RepositoryEventHandler
public class ABCEventHandler {

    @Autowired
    private PQREntityRepository pqrEntityRepository;

    @HandleBeforeSave
    public void handleABCBeforeSave(ABC abc) {
        log.debug("before saving ABC...");
    }

    @HandleAfterSave
    public void handleABCAfterSave(ABC abc) {
        // findById returns an Optional, so wrap the (at most one) match in a list
        List<PQR> pqrList = pqrEntityRepository.findById(abc.getPqrId())
                .map(Collections::singletonList)
                .orElse(Collections.emptyList());
        if (!pqrList.isEmpty()) {
            pqrList.forEach(pqr -> {
                // update PQR objects
            });
        }
        // expect to fail this transaction
        pqrEntityRepository.saveAll(pqrList);
    }
}
Since the @HandleAfterSave method is executed in a separate transaction, by the time it is called the ABC entity update has already completed and cannot be rolled back. Any suggestions on how to handle this?
Spring Data REST does not think in entities, it thinks in aggregates. Aggregate is a term coming from Domain-Driven Design that describes a group of entities for which certain business rules apply. Take an order alongside its line items, for example, and a business rule that defines a minimum order value that needs to be reached.
The responsibility to govern constraints aligns with another aspect that involves aggregates in DDD which is that strong consistency should/can only be assumed for changes on an aggregate itself. Changes to multiple (different) aggregates should be expected to be eventually consistent. If you transfer that into technology, it's advisable to apply the means of strong consistency – read: transactions – to single aggregates only.
So there is no short answer to your question. The repository structure you show here virtually turns both ABCEntity and PQREntity into aggregates (as repositories only exist for aggregate roots). That means that, out of the box, Spring Data REST does not support updating them in a single transactional HTTP call.
That said, Spring Data REST allows the declaration of custom resources that can take responsibility for doing that. Similarly to what is shown here, you can simply add resources on additional routes to implement exactly what you have in mind yourself.
Spring Data REST is not designed to produce a full HTTP API out of the box. It's designed to implement certain REST API patterns that are commonly found in HTTP APIs and will very likely be part of your API. It's built to save you from having to spend time on the straightforward cases, so that you only have to plug in custom code for scenarios like the one you described, assuming what you plan to do here is a good idea in the first place. Very often requests like these result in the conclusion that the aggregate design needs a bit of rework.
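For illustration, a minimal sketch of such a custom resource; the controller class, the /abcs/{id}/pqrs route, and the payload shape are hypothetical names of my own, not anything prescribed by Spring Data REST:
@RepositoryRestController
public class AbcPqrController {

    private final ABCEntityRepository abcRepository;
    private final PQREntityRepository pqrRepository;

    public AbcPqrController(ABCEntityRepository abcRepository, PQREntityRepository pqrRepository) {
        this.abcRepository = abcRepository;
        this.pqrRepository = pqrRepository;
    }

    // One HTTP call that updates both aggregates together. Whether the two saves
    // are actually atomic depends on the underlying store (see the PS below).
    @PutMapping("/abcs/{id}/pqrs")
    public ResponseEntity<?> updateAbcWithPqrs(@PathVariable String id, @RequestBody ABC abc) {
        ABC savedAbc = abcRepository.save(abc);
        // ... load and update the related PQR documents via pqrRepository here ...
        return ResponseEntity.ok(savedAbc);
    }
}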
PS: I saw you tagged that question with spring-data-mongodb. By default, Spring Data REST does not support MongoDB transactions because it doesn't need them. MongoDB document boundaries usually align with aggregate boundaries and updates to a single document are atomic within MongoDB anyway.
I'm not sure I understood your question correctly, but I'll give it a try.
I'd suggest having a service with both repositories autowired in, and a method annotated with @Transactional that updates everything you want.
This way, if the transaction fails anywhere inside the method, it will all rollback.
If this does not answer your question, please clarify and I'll try to help.
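As a rough sketch of that suggestion, assuming the repositories from the question and a hypothetical AbcPqrService; note that with MongoDB the rollback only works if a MongoTransactionManager is configured and the server supports multi-document transactions (i.e. runs as a replica set):
@Service
public class AbcPqrService {

    @Autowired
    private ABCEntityRepository abcRepository;

    @Autowired
    private PQREntityRepository pqrRepository;

    // If anything in this method throws, the whole unit of work is rolled back,
    // provided a transaction manager is configured for MongoDB.
    @Transactional
    public void updateAbcAndPqrs(ABC abc, List<PQR> pqrs) {
        abcRepository.save(abc);
        pqrRepository.saveAll(pqrs); // a failure here also rolls back the ABC save
    }
}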

Should elements ever be made available outside of a page object?

This is a question I cannot find a definitive source on, and I am hoping to get some answers based on users' previous experience, mainly with explanations as to why a certain approach did NOT work out.
I am using WebDriver for automation via Protractor and am having a debate on whether or not page elements should ever be made available outside of the page object itself. After researching, it appears that there are a few different approaches people take, and I cannot fully grasp the long-term implications of each.
I've seen the following different implementations of the page object model:
Locators are declared in the page object and exported
This is my least favorite approach, as it means elements are actually being identified in the test. This seems like a bad standard to set, as it could encourage automators to use new locators, not from the page object, directly in the tests.
Also, any locators that require dynamic information cannot be set directly when the PO is initialized and require further editing.
pageobject.js
export default class HomePage {
constructor() {
this.passwordField = '#password';
this.usernameField = '#user';
}
}
test.js
const homePage = new HomePage();
$(homePage.usernameField).sendKeys('admin');
$(homePage.passwordField).sendKeys('password');
Elements declared in page object and exported, locators not
pageobject.js
export default class HomePage {
constructor() {
this.passwordField = $('#password');
this.usernameField = $('#user');
}
}
test.js
const homePage = new HomePage();
homePage.usernameField.sendKeys('admin');
homePage.passwordField.sendKeys('password');
Elements declared in Page Object and only used directly within the page object, only methods exported
This is the approach I have used in the past, and we ended up with many, many functions. For instance, we had setUsername(), getCurrentUsername(), getUsernameAttribute(), verifyUsernameExists(), and the same for the password element and many other elements. Our page object became huge, so I no longer feel this is the best approach.
One of the advantages, however, is that our tests look very clean and are very readable.
pageobject.js
export default class HomePage {
    constructor() {
        this.passwordField = $('#password');
        this.usernameField = $('#user');
    }
    setUsername(name) {
        this.usernameField.sendKeys(name);
    }
    setPassword(password) {
        this.passwordField.sendKeys(password);
    }
}
test.js
const homePage = new HomePage();
homePage.setUsername('admin');
homePage.setPassword('123');
I'm very interested to get some feedback on this, so hopefully you can take the time to read it.
I prefer and believe that the last approach is the best.
Setting aside the fact that we are talking about automation, any good/great software has the following traits:
It is composed of individual modules/pieces/components.
Each individual module/piece/component is cohesive in terms of the data/information specific to it (selectors and WebDriver API calls, in the case of automation) and the API/methods it exposes to interact with that data.
The last approach provides just that with the added benefit of test cleanliness that you pointed out.
However, most of the time, for whatever reason, we tend to ignore modularity and make the existing POs bloated. I have seen this and have been part of it. So, in a way, POs getting bloated is not because of the approach but because automators/testers/developers are not conscious about keeping POs modular, composed, and simple. This is true whether we are talking about POs or the application's functional code.
Bloated POs:
Specific to the problem of bloated POs, check whether you can separate the common elements out of the POs. For example, the header, footer, left nav, right nav, etc. are common across pages. They can be separated out, and the POs can be composed of those individual pieces/sections.
Even within the main content, separate common content (if it is shared across two or more pages, if not all) into its own component API, where that is logical and it can be reused across pages.
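For illustration, a minimal sketch of that composition, reusing the question's $() locator style; the Header component and its locators are made-up examples:
// header.component.js (a shared section used by several POs)
export default class Header {
    constructor() {
        this.logo = $('#logo');
        this.searchBox = $('#search');
    }
    searchFor(term) {
        return this.searchBox.sendKeys(term);
    }
}

// homepage.po.js: the page object is composed of shared sections
import Header from './header.component';

export default class HomePage {
    constructor() {
        this.header = new Header(); // common section, also reused by other POs
        this.usernameField = $('#user');
    }
}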
It is normal for automation specs to perform extensive regression checks, for example that an element is a password field, that the length of a text box is such and such, etc. In such cases it doesn't make sense to add methods for every single spec use case (or expect). The good thing is that here, too, commonality takes center stage. Basically, provide API that is used across specs, not API that is literally used in a single spec.
Take, for example, the requirement that a password field should be masked. It is unlikely that you would want to test that in multiple spec files. Here, we can either add a method for it in LoginPO, like isPasswordMasked(), or we can make the password field accessible from LoginPO and let the spec do the actual check for the password-type field. By doing this, we are still letting LoginPO be in control of the password field information that matters to the other API (login(), logout(), etc.), i.e. only the PO knows how and where to get that password element, with the added advantage of pushing spec testing into the spec file.
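A minimal sketch of the first option, assuming the LoginPO mentioned above and the '#password' locator from the question:
export default class LoginPO {
    constructor() {
        this.passwordField = $('#password');
    }
    // Knowledge of the element stays inside the PO; the spec only asserts the boolean result.
    isPasswordMasked() {
        return this.passwordField.getAttribute('type').then(type => type === 'password');
    }
}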
POs that expect/assert
At any point, it's not a good idea to make any testing (or expect) part of the PO API. Reason(s):
The PO and its API is reusable across suites and should be easy for anyone to look at it and understand. Their primary responsibility is to provide common API.
They should be as lean as possible (to prevent bloating up)
More importantly, browser automation is slower inherently. If we add test logic to the POs and its API methods, we will only make it slower.
FWIW, I have not come across any web page that mandates a bloated API.
Exposing Elements in POs:
I believe it depends on the use case. Maybe an element that is used in only one spec is a base case for exposing it. That said, in general, the idea is that the specs should be readable for the tester/developer AND for those who look at them later. Whether that is achieved with a meaningful element variable name or a method name is largely a preference at best. On the other hand, if an element is meant to have a somewhat involved interaction (for example, a hover-to-open menu link), it is definitely a candidate for exposing through API alone.
Hope that adds some clarification!
The last way is the correct way to implement page objects. The idea behind the page object is that it hides the internals of the page and provides a clean API for a script to call to perform actions on the page. Locators and elements should not be exposed. Anything you need to do to the page should be exposed by a public method.
One way you can avoid a getter and setter for each field on the page is to consolidate methods. Think about actions that the user would take on the page. Rather than having .setUsername(), .setPassword(), and .clickLoginButton() methods, you should just have a login() method that takes the username and password as parameters and does all the work to log in for you.
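As a small sketch of that consolidation, reusing the HomePage object from the question (the '#login' button locator is an assumption):
export default class HomePage {
    constructor() {
        this.usernameField = $('#user');
        this.passwordField = $('#password');
        this.loginButton = $('#login'); // assumed locator
    }
    // One public action instead of three setters; the test just calls homePage.login('admin', '123').
    login(username, password) {
        this.usernameField.sendKeys(username);
        this.passwordField.sendKeys(password);
        return this.loginButton.click();
    }
}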
References
Martin Fowler is generally considered the inventor of the "page object" concept but others coined the name "page object". See his description of the page object.
Selenium's documentation on page objects.

React Simple Global Entity Cache instead of Flux/React/etc

I am writing a little "fun" Scala/Scala.js project.
On my server I have Entities which are referenced by uuid-s
(inside Ref-s).
For the sake of "fun", I don't want to use flux/redux architecture but still use React on the client (with ScalaJS-React).
What I am trying to do instead is to have a simple cache, for example:
when a React UserDisplayComponent wants to display the Entity User with uuid=0003
then the render() method calls into the Cache (which is passed in as a prop)
let's assume that this is the first time that the UserDisplayComponent asks for this particular User (with uuid=0003) and the Cache does not have it yet
then the Cache makes an AjaxCall to fetch the User from the server
when the AjaxCall returns, the Cache triggers a re-render
BUT! Now, when the component asks the Cache for the User, it gets the User Entity immediately and no AjaxCall is triggered
The way I would like to implement this is the following:
I start a render()
"stuff" inside render() asks the Cache for all sorts of Entities
Cache returns either Loading or the Entity itself.
at the end of render() the Cache sends all the AjaxRequest-s to the server and waits for all of them to return
once all AjaxRequests have returned (let's assume that they do, for the sake of simplicity), the Cache triggers a "re-render()", and now all entities that have been requested before are provided by the Cache right away.
Of course it can happen that the newly arrived Entity-s cause render() to fetch further Entity-s, for example if I load an Entity of type case class UserList(ul: List[Ref[User]]). But let's not worry about this now.
QUESTIONS:
1) Am I doing something really wrong if I am doing the state handling this way?
2) Is there an already existing solution for this?
I looked around, but everything was Flux/Redux etc., along those lines, which I want to AVOID for the sake of:
"fun"
curiosity
exploration
playing around
I think this simple cache will be simpler for my use-case because I want to take the "REF" based "domain model" over to the client in a simple way: as if the client was on the server and the network would be infinitely fast and zero latency (this is what the cache would simulate).
Consider what issues you need to address to build a rich dynamic web UI, and what libraries / layers typically handle those issues for you.
1. DOM Events (clicks etc.) need to trigger changes in State
This is relatively easy. DOM nodes expose callback-based listener API that is straightforward to adapt to any architecture.
2. Changes in State need to trigger updates to DOM nodes
This is trickier because it needs to be done efficiently and in a maintainable manner. You don't want to re-render your whole component from scratch whenever its state changes, and you don't want to write tons of jquery-style spaghetti code to manually update the DOM as that would be too error prone even if efficient at runtime.
This problem is mainly why libraries like React exist, they abstract this away behind virtual DOM. But you can also abstract this away without virtual DOM, like my own Laminar library does.
Forgoing a library solution to this problem is only workable for simpler apps.
3. Components should be able to read / write Global State
This is the part that flux / redux solve. Specifically, these are issues #1 and #2 all over again, except as applied to global state as opposed to component state.
4. Caching
Caching is hard because cache needs to be invalidated at some point, on top of everything else above.
Flux / redux do not help with this at all. One of the libraries that does help is Relay, which works much like your proposed solution, except way more elaborate, and on top of React and GraphQL. Reading its documentation will help you with your problem. You can definitely implement a small subset of relay's functionality in plain Scala.js if you don't need the whole React / GraphQL baggage, but you need to know the prior art.
5. Serialization and type safety
This is the only issue on this list that relates to Scala.js as opposed to Javascript and SPAs in general.
Scala objects need to be serialized to travel over the network. Into JSON, protobufs, or whatever else, but you need a system for this that will not involve error-prone manual work. There are many Scala.js libraries that address this issue such as upickle, Autowire, endpoints, sloth, etc. Key words: "Scala JSON library", or "Scala type-safe RPC", depending on what kind of solution you want.
I hope these principles suffice as an answer. When you understand these issues, it should be obvious whether your solution will work for a given use case or not. As it is, you didn't describe how your solution addresses issues 2, 4, and 5. You can use some of the libraries I mentioned or implement your own solutions with similar ideas / algorithms.
On a minor technical note, consider implementing an async, Future-based API for your cache layer, so that it returns Future[Entity] instead of Loading | Entity.
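For illustration, a minimal sketch of such a cache in plain Scala, assuming the Ref type from the question and a hypothetical fetchFromServer function; keeping one Future per Ref is what replaces the explicit Loading state and deduplicates in-flight requests:
import scala.collection.mutable
import scala.concurrent.{ExecutionContext, Future}

// Assumed minimal shape for the question's Ref type.
final case class Ref[A](uuid: String)

class EntityCache[A](fetchFromServer: Ref[A] => Future[A])(implicit ec: ExecutionContext) {

  // Completed futures are cache hits, pending ones are in-flight requests,
  // so a second get() for the same Ref never triggers a second Ajax call.
  private val entries = mutable.Map.empty[Ref[A], Future[A]]

  def get(ref: Ref[A]): Future[A] =
    entries.getOrElseUpdate(ref, fetchFromServer(ref))

  // A component can hook its re-render here: redraw once the entity has arrived.
  def onArrival(ref: Ref[A])(rerender: A => Unit): Unit =
    get(ref).foreach(rerender)
}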

Deprecation of TableRegistry::get()

I'd like to ask what your thoughts are on the deprecation of the TableRegistry::get() static call in CakePHP 3.6.
In my opinion it was not a good idea.
First of all, using LocatorAwareTrait is wrong on many levels. Most importantly, using traits in such a way can break the Single Responsibility and Separation of Concerns principles. In addition, some developers don't want to use traits at all because they think traits break object-oriented design; they prefer delegation.
I prefer to use delegation as well, in combination with a flyweight/singleton approach. I know that the delegation is encapsulated by the LocatorAwareTrait, but the problem is that it exposes the getTableLocator()/setTableLocator() methods, which can be used incorrectly.
In other words, if I have the following facade:
class Fruits {
    use \Cake\ORM\Locator\LocatorAwareTrait;

    public function getApples() { ... }
    public function getOranges() { ... }
    ...
}

$fruits = new Fruits();
I don't want to be able to call $fruits->getTableLocator()->get('table') outside of the scope of Fruits.
The other thing you need to consider when you make such changes is how the framework is adopted in real applications. Doing TableRegistry::getTableLocator()->get('table') every time I need to access the model is not the best thing if I have multiple modules in my application that move beyond a simple layered architecture.
Having a flyweight/singleton class like TableRegistry with a static get() to access the desired model just makes development more straightforward and life easier.
Ideally, I would just like to call TR::get('table'), although that breaks Cake's coding standards. (I've created that wrapper for myself anyway, to make my app bulletproof against any similar changes.)
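A minimal sketch of such a wrapper (the class name TR is taken from the shorthand above), assuming CakePHP 3.6+ where TableRegistry::getTableLocator() is available:
use Cake\ORM\Table;
use Cake\ORM\TableRegistry;

/**
 * Thin static facade over the table locator, so callers can write TR::get('Articles')
 * without depending on the deprecated TableRegistry::get() or pulling in LocatorAwareTrait.
 */
class TR
{
    public static function get(string $alias): Table
    {
        return TableRegistry::getTableLocator()->get($alias);
    }
}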
What are your thoughts?

Ninject ActivationBlock as Unit of Work

I have a WPF application with MVVM. Assuming object composition from the ViewModel down looks as follows:
MainViewModel
    OrderManager
        OrderRepository
            EFContext
        AnotherRepository
            EFContext
    UserManager
        UserRepository
            EFContext
My original approach was to inject dependencies (from the ViewModelLocator) into my View Model using .InCallScope() on the EFContext and .InTransientScope() for everything else. This resulted in being able to perform a "business transaction" across multiple business layer objects (Managers) that, underneath, eventually shared the same Entity Framework context. I would simply Commit() said context at the end for a Unit of Work type scenario.
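A rough sketch of those original bindings, assuming a standard Ninject module and the type names from the composition tree above (InCallScope() comes from the Ninject.Extensions.NamedScope package):
public class AppModule : NinjectModule
{
    public override void Load()
    {
        Bind<EFContext>().ToSelf().InCallScope();         // one context shared per resolution chain
        Bind<OrderManager>().ToSelf().InTransientScope();
        Bind<UserManager>().ToSelf().InTransientScope();
        Bind<OrderRepository>().ToSelf().InTransientScope();
        Bind<UserRepository>().ToSelf().InTransientScope();
    }
}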
This worked as intended until I realized that I don't want long-living Entity Framework contexts at the View Model level, because of the data integrity issues across multiple operations described HERE. I want to do something similar to my web projects, where I use .InRequestScope() for my Entity Framework context. In my desktop application I will define a unit of work which will serve as a business transaction, if you will; typically it will wrap everything within a button click or similar event/command. It seems that using Ninject's ActivationBlock can do this for me.
internal static class Global
{
    public static ActivationBlock GetNinjectUoW()
    {
        // assume that NinjectSingleton is a static reference to the kernel
        // configured with the necessary modules/bindings
        return new ActivationBlock(NinjectSingleton.Instance.Kernel);
    }
}
In my code I intend to use it as such:
// Inside a method that is raised by a WPF Button Command...
using (ActivationBlock uow = Global.GetNinjectUoW())
{
    OrderManager orderManager = uow.Get<OrderManager>();
    UserManager userManager = uow.Get<UserManager>();

    Order order = orderManager.GetById(1);
    userManager.AddOrder(order);
    // ...
    userManager.SaveChanges();
}
Questions:
To me this seems to replicate the way I do business on the web, is there anything inherently wrong with this approach that I've missed?
Am I understanding correctly that all .Get<> calls using the activation block will produce "singletons" local to that block? What I mean is no matter how many times I ask for an OrderManager, it'll always give me the same one within the block. If OrderManager and UserManager compose the same repository underneath (say SpecialRepository), both will point to the same instance of the repository, and obviously all repositories underneath share the same instance of the Entity Framework context.
Both questions can be answered with yes:
Yes - this is service location, which you shouldn't do
Yes, you understand it correctly
A proper unit-of-work scope, implemented in Ninject.Extensions.UnitOfWork, solves this problem.
Setup:
_kernel.Bind<IService>().To<Service>().InUnitOfWorkScope();
Usage:
using (UnitOfWorkScope.Create())
{
    // resolves, async/await, manual TPL ops, etc.
}

Resources