Is using dependency injection in cucumber-jvm considered bad practice? - selenium-webdriver

I am still very new to Cucumber-JVM/Selenium, so I apologize if the question sounds dumb.
Background: I'm an intern at a big company, working on automated testing for a product. There is already an existing set of automated test steps, and our job is to extend the project and add our own. The problem is that almost all of the step definition classes contain nearly identical hook methods. I asked a question before about how to avoid running hook methods, and someone helpfully suggested using tags on them. That was before I noticed that almost all the hook methods in the existing project are nearly the same. This made me think the setup is not very efficient: since hooks are global, every time I run a feature file it executes all of these duplicate hooks. After a few more days of coding and research, I found dependency injection using PicoContainer and thought it might be a good way to fix the issue, but I have read some articles claiming that dependency injection is considered a bad practice.
My question: given what I stated above, is using dependency injection with PicoContainer in cucumber-jvm considered bad practice here? If it is, is there a better solution?
(Optional background) I don't think this is important, but I'll include it anyway: these are the hook methods that appear, nearly identically, in about 95% of the step definition classes:
// fields on the step definition class
private Scenario scenario;
private FluentWait<WebDriver> fWait;

@Before
public void keepScenario(Scenario scenario) {
    this.scenario = scenario;
    fWait = new FluentWait<WebDriver>(BrowserDriver.getCurrentDriver());
    fWait.withTimeout(Duration.ofSeconds(10));
    fWait.ignoring(WebDriverException.class);
}

@After
public void screenshotOnFailure() {
    // getStatus() returns the string "failed" in older Cucumber versions
    if (scenario.getStatus().equals("failed")) {
        BrowserDriver.getScreenshot(scenario);
    }
}

Dependency injection solves the problem of sharing state between multiple step definition files in a scenario (see the sketch below). Injecting steps into other steps might be considered a bad practice, but DI as a whole isn't. None of this appears to be your direct problem, though.
Your problem appears to be that you have multiple hooks that do the same thing. You can either remove the duplicate hooks, keeping a single copy in one class, or be very strict about which features and glue you select (check the CucumberOptions on your runner or your command-line arguments). If the glue is narrowed down to a single class, Cucumber will only use the steps and hooks in that class.
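For reference, a minimal sketch of how constructor injection with cucumber-picocontainer shares state between classes. The SharedContext class and the step class names are hypothetical, it reuses your existing BrowserDriver helper, and it assumes the io.cucumber.java annotation package (older projects use cucumber.api.java) with the io.cucumber:cucumber-picocontainer artifact on the classpath; each class goes in its own file:

import java.time.Duration;
import io.cucumber.java.Before;
import io.cucumber.java.Scenario;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverException;
import org.openqa.selenium.support.ui.FluentWait;

// Plain state holder: PicoContainer creates one instance per scenario
// and hands that same instance to every class that asks for it.
public class SharedContext {
    public Scenario scenario;
    public FluentWait<WebDriver> fWait;
}

// Exactly one class owns the common hooks...
public class CommonHooks {
    private final SharedContext context;

    public CommonHooks(SharedContext context) { // injected by PicoContainer
        this.context = context;
    }

    @Before
    public void keepScenario(Scenario scenario) {
        context.scenario = scenario;
        context.fWait = new FluentWait<WebDriver>(BrowserDriver.getCurrentDriver())
                .withTimeout(Duration.ofSeconds(10))
                .ignoring(WebDriverException.class);
    }
}

// ...and every other step class receives the same SharedContext,
// so the duplicate hooks can simply be deleted.
public class SearchSteps {
    private final SharedContext context;

    public SearchSteps(SharedContext context) {
        this.context = context; // same instance as in CommonHooks
    }
}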

Related

Server-side rendering of CSS modules

I want to import styles with CSS Modules and make them work with server-side rendering. I tried the following methods, but each one has its own caveat. What is the best possible way to require('./style.scss') without side effects?
Using css-modules-require-hook:
Advantage: Easy to configure. You just need to call the hook at the beginning of the server code; you don't need to modify components.
Caveat: It modifies the require.extensions global object, which is deprecated.
Using isomorphic-style-loader:
Advantage: No more hooking into require.extensions.
Caveat: You have to wrap components with HOCs that use the React context, which is an experimental API and likely to break in future releases of React.
Using webpack-isomorphic-tools:
Advantage: No dependency on require.extensions or context (AFAIK).
Caveat: You have to wrap the server inside a webpack-isomorphic-tools instance. And can we please get rid of webpack-assets.json?
Bundling the server with Webpack:
Advantage: No more hooks or injections.
Caveat: In development it is very cumbersome to re-bundle everything whenever the code changes, and a large bundled .js file makes debugging harder. I'm not sure, but you may also need to pass the bundled .js to the test runner.
Disclaimer:
The advantages and caveats above are just my two cents; I actually love all these libraries and plugins and the approaches they took to solve the problem, and I really appreciate their efforts.
I am not a native English speaker, so please correct me if I misrepresent myself.
In the end, I decided to hook require.extensions in development. It is probably not the best way, since it prints warning messages to the console (checksum mismatches, for example), but in development mode I can ignore them.
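For illustration, a minimal sketch of that hook call, assuming css-modules-require-hook's documented extensions/preprocessCss options and a hypothetical Sass setup:

// server entry point - run this before requiring any component that imports styles
const hook = require('css-modules-require-hook');
const sass = require('node-sass');

hook({
  extensions: ['.scss'],
  // compile Sass to CSS so the hook can extract the class-name map
  preprocessCss: (data, filename) =>
    sass.renderSync({ data: data, file: filename }).css,
});

// From here on, require('./style.scss') inside components returns the
// class-name map on the server instead of throwing.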

Is it sane to use React `context` to access model mutators in a Flux-less app?

I'm starting a new React app and, seeing all the news in the ecosystem, I want to go slow and actually consider my choices, starting with just React/Webpack/Babel, and introducing more.
The first of these choices is whether to use Flux or not (more precisely, Redux, which looks great and seems to have won the flux wars). Here is where I am:
I understand Redux's benefits, summarized on SO by Dan Abramov. They look great, but I'd rather introduce things one step at a time.
In plain React, parent→child communication is done with props, and child→parent communication happens with callbacks. See Doc / Communicate Between Components, or SO / Child to parent communication in React (JSX) without flux, or this Codecademy Redux tutorial, which starts out by saying "no need for Redux if you're fine with plain React and all your data in a root component".
Which looks just fine for my purpose...
... however, the sad part is that these callbacks have to be passed down the component chain, which quickly becomes tedious as the levels of nesting grow.
To solve this without introducing new dependencies, I found two articles (1: Andrew Farmer, 2: Hao Chuan) encouraging usage of the recently-introduced context feature of React.
→ Using context would let me expose my model-mutating callbacks to my child components. To me it doesn't sound like a horrible misuse: I wouldn't be passing model data, just references to functions to bind to event handlers.
Does it sound sane?
Any other plain-React suggestion for convenient child→parent communication?
Thanks.
Answering my own question after watching Dan Abramov's Getting Started with Redux series, which I warmly recommend.
Yes, it looks like it is sane: Redux faced the very same problem and solved it with context (at least initially; the implementation may have changed since). The solution is implemented and packaged (among other things) in the react-redux bindings, in the <Provider> component and the connect() function.
Initially, at the start of step 24 - Passing the Store Down Explicitly via Props, we have a Todo app with a Redux store available as a top-level variable. This sucks (for 1. testability/mockability, and 2. server rendering, which needs "a different store instance for every request because different requests have different data"), so the store is demoted from top-level variable to a root component prop.
As in my case, having to pass the store as a prop to each component is annoying, so in 25 - Passing the Store Down Implicitly via Context, Dan demonstrates using context to pass the Redux store to arbitrarily nested components.
It is followed by 26 - Passing the Store Down with <Provider> from react-redux, explaining how this was encapsulated in the react-redux bindings.
And 27 - Generating Containers with connect() from React Redux further encapsulates generation of a container component from a presentational component.
Personally, I find the question quite simple to answer if you think about the way dependency injection in Angular works. In Angular you have your DOM, and then you have all those services, providers, factories, constants and whatnot, which are independent of the DOM structure and can be imported simply by mentioning their names when creating modules or controllers.
I liken the use of this.context to DI. The difference with respect to Angular is that instead of stating the dependencies using function parameter names, you use childContextTypes, and instead of getting the dependencies as function arguments, you get them through this.context (see the sketch below).
In this sense, asking whether passing model mutators through this.context is sane boils down to asking whether it makes sense in Angular to register your model for dependency injection. I've never seen a problem with the latter, so I deduce that the former is also quite OK.
I'm not saying that it suits the spirit of the library, though. Dependency injection, and in particular managing the dependencies between injected components, is not as explicit, so one may argue that it's not the React way. I leave that philosophical discussion to others.
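A minimal sketch of that mechanism, assuming React's legacy (pre-16.3) childContextTypes/contextTypes API and a hypothetical addTodo mutator:

import React from 'react';
import PropTypes from 'prop-types';

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = { todos: [] };
    this.addTodo = this.addTodo.bind(this);
  }

  // Expose the mutator to every descendant, no prop-threading required
  getChildContext() {
    return { addTodo: this.addTodo };
  }

  addTodo(text) {
    this.setState(function (state) {
      return { todos: state.todos.concat(text) };
    });
  }

  render() {
    return <AddButton />; // arbitrarily deeply nested in practice
  }
}
App.childContextTypes = { addTodo: PropTypes.func };

class AddButton extends React.Component {
  render() {
    // Read the mutator from context instead of from props
    return <button onClick={() => this.context.addTodo('buy milk')}>Add</button>;
  }
}
AddButton.contextTypes = { addTodo: PropTypes.func };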

AngularJS: Best practices

I don't think this question is opinion-based, since it asks about best practices, but at the same time I don't think there are industry standards for AngularJS best practices, so it's hard to avoid opinions in some cases. In other cases, one way of programming is clearly better than another. I am not sure which category this question falls into.
I don't think there are any official AngularJS best practices. There are guidelines, and some people have released their own best practices, but I haven't stumbled upon an industry standard.
I have a question about how I'm organizing my modules and the code itself. It feels like I have a lot of small files and too many folders, even though I have broken down the app into common components and core features. It's still a relatively small application, but it's likely to grow.
The biggest question I have is how I should organize my submodules. Currently each submodule is broken down into various files, separating controllers, directives, and services. I was wondering if I should just define them all in one file, and then, if the submodule gets too large, break it down again.
For instance, right now I have a module app.buttonToggle, with various files:
buttonToggle.app.js
angular.module('app.buttonToggle', []);
buttonToggleCtrl.js
angular.module('app.buttonToggle').controller('ButtonToggleCtrl', ButtonToggleCtrl);

function ButtonToggleCtrl() { /* ... */ }
buttonToggle.js
angular.module('app.buttonToggle').directive('buttonToggle', buttonToggle);

function buttonToggle() { /* ... */ }
Instead would it be better to combine them into one file?
buttonToggle.app.js
angular.module('app.buttonToggle', [])
.controller('ButtonToggleCtrl', function() { ... })
.directive('buttonToggle', function() { ... });
What would be the benefit of doing the latter, if any? I like how it uses less code, but is it harder to test?
I recommend using John Papa's style guide; it is what I use to help make these decisions, keep up with the best practices, and potentially contribute to them. Style guides like this have become very popular, and this one addresses exactly your question.
In this case, the community agrees: have one component per file, and use a task runner like gulp or grunt to package everything for shipping.
https://github.com/johnpapa/angular-styleguide
The main benefit of this form
angular.module('app.buttonToggle').controller('ButtonToggleCtrl', ButtonToggleCtrl);
is that (thanks to JS function hoisting) the controller constructor can be annotated in a readable manner, e.g.
ButtonToggleCtrl.$inject = ['...'];
function ButtonToggleCtrl(...) {
This isn't an issue when automatic annotation is used.
It is also a bit more convenient to debug named functions rather than anonymous ones, though Angular provides meaningful feedback on errors in either case.
What would be the benefit of doing the latter, if any? I like how it uses less code, but is it harder to test?
It isn't. All of the module's components can be tested separately regardless of how they were defined, as long as their definitions don't contain syntax errors.
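As a sketch of that point, a Jasmine/angular-mocks spec like the following loads the module and instantiates the controller the same way whether the components were registered in one file or several (the empty $scope local is just a placeholder):

describe('ButtonToggleCtrl', function () {
  // load the module under test, however its files are organized
  beforeEach(module('app.buttonToggle'));

  it('can be instantiated on its own', inject(function ($controller) {
    var ctrl = $controller('ButtonToggleCtrl', { $scope: {} });
    expect(ctrl).toBeDefined();
  }));
});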

Pitfalls of IIFEs with AngularJS

I have an application made with AngularJS; the whole application is composed of IIFEs (Immediately Invoked Function Expressions). Every module, directive and controller is itself an IIFE, and there are around a hundred of them.
I want to know what the performance pitfalls are when an app has so many IIFEs.
Is it OK to use IIFEs with AngularJS?
How good is using libraries like Browserify and RequireJS with AngularJS for managing dependencies?
Can you please throw some light on this?
The question you need to ask first is whether you have internal parts within the IIFE that you don't want to expose to the global scope.
The whole point of creating a closure in this manner is to mimic encapsulation and avoid polluting the global scope.
The performance pitfall is not so easy to measure; I think the cost of creating an IIFE is negligible (don't forget you're just creating a function). The one cost I can think of is that when you reference a variable from an inner function, you have to travel up the scope chain to fetch it from the closure.
I would recommend you look at patterns like the module pattern, the revealing module pattern, and so on. I personally use the revealing module pattern, sketched below.
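For reference, a minimal sketch of the revealing module pattern; the counter module is a hypothetical example:

// An IIFE that keeps its helpers private and returns only the public API.
var counter = (function () {
  var count = 0; // private state, invisible to the global scope

  function increment() { count += 1; }
  function current() { return count; }

  // reveal only the members we want to expose
  return { increment: increment, current: current };
})();

counter.increment();
console.log(counter.current()); // 1
// 'count' itself is unreachable here: the closure keeps it private.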
Browserify and RequireJS are two libraries that implement two different specs: CommonJS and AMD, respectively. Both specifications try to provide something ES3 and ES5 lack: a way of defining modules and then loading them where they are needed.
If you want to define modules and load them synchronously, in a similar manner to how Node.js works, you can use Browserify. In addition, Browserify allows you to use the same modules on both the client side and the server side (as long as you're using Node.js).
On the other hand, if you want to load your modules asynchronously, you can go with AMD and RequireJS, but you won't be able to reuse them on the back end. The sketch below contrasts the two styles.
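A minimal sketch of the two module styles, using a hypothetical math module split across two files in each case:

// --- CommonJS (Browserify): synchronous require, resolved at bundle time ---
// math.js
module.exports = {
  add: function (a, b) { return a + b; }
};
// app.js
var math = require('./math');
console.log(math.add(1, 2)); // 3

// --- AMD (RequireJS): asynchronous define/require ---
// math.js
define(function () {
  return {
    add: function (a, b) { return a + b; }
  };
});
// app.js
require(['./math'], function (math) {
  console.log(math.add(1, 2)); // 3
});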
Finally, bear in mind that nothing you've mentioned is directly connected to AngularJS; these are general JavaScript practices for overcoming shortcomings of the language itself, and they can be used whether you're working with Angular or not.
I would recommend that you use either Browserify or RequireJS; it will benefit you in the long run. Just imagine having 100 JS files: you would need to put them into your HTML manually, in the correct order based on the dependency graph, and you could easily run into issues such as one file being inserted before another that it depends on.
As for the performance overhead of IIFEs, you shouldn't have any serious issues.
As another answer said, IIFEs are a good general practice, independent of AngularJS.
If you're concerned about the performance of IIFEs, go ahead and check out these performance tests on JSPerf (or write your own):
http://jsperf.com/iife-antecedents-in-js
http://jsperf.com/immediate-anonymous-function-invocation
While some ways of creating IIFEs are clearly much slower than others, I wouldn't worry about their performance unless you're creating them in a large loop.

In GAE entities, how can I execute code automatically every time an entity object is created or deleted?

If the solution is to override __init__() and __del__() from db.Model / db.Expando / db.PolyModel, do I need to call the superclass functions?
It would be great to see some example code in the answers or in links. Thanks a lot.
I've written a couple of blog posts on this subject: one on high-level pre- and post-put hooks, and one on the low-level hook support. One of these is probably what you're looking for.
I think the best way to accomplish what you want is to use the datastore API hooks.
That approach lets you avoid messing directly with the classes and superclasses, which could get complex, messy and buggy very quickly.
If this is the Java implementation, you can implement javax.jdo.listener.StoreCallback on your domain model and then define the method jdoPreStore(). This method will automatically be called each time the object is persisted to the datastore, as in the sketch below.
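A minimal sketch of that callback on a hypothetical Account entity that stamps itself with a last-modified date on every save:

import java.util.Date;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;
import javax.jdo.listener.StoreCallback;

@PersistenceCapable
public class Account implements StoreCallback {

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Long id;

    @Persistent
    private Date updatedAt;

    // JDO invokes this just before the instance is written to the datastore
    public void jdoPreStore() {
        this.updatedAt = new Date();
    }
}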
