Application Variable in WPF While Maintaining Testability

I am working on a WPF application, using the MVVM Pattern.
Each ViewModel will need access to a security object that essentially provides information about the rights the user has. Because this object only needs to be populated once at startup, and because populating it is (at least potentially) expensive, I want to keep it in state for the lifetime of the application.
I can make it a static variable in App, which would make it available to the whole application (at least that's my understanding). This would make my ViewModel implementations very difficult to test, since the App.SecurityObject call would be inline in each ViewModel. I would have to make sure App was available for each test and mock the App.SecurityObject call (I'm not even sure this would work, actually).
We are using StructureMap, so I could create a SecurityObjectProvider, configure it with a Singleton lifecycle in the container, and simply make it part of every ViewModel constructor. The downside would be that (as I said) the provider would have to be part of every ViewModel constructor.
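For illustration, the registration I have in mind would be something like this (a sketch in StructureMap 3.x syntax; ISecurityObjectProvider is a name I'm making up here):

using StructureMap;

var container = new Container(x =>
{
    // One instance for the lifetime of the container/application.
    x.For<ISecurityObjectProvider>().Singleton().Use<SecurityObjectProvider>();
});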
There are other, hacky workarounds I can think of, but they would involve creating methods (perhaps in the ViewModel base class) that would allow injecting the security object after instantiation, for testing purposes only. I usually try to avoid this kind of "for testing only" code.
It seems like this would be a common problem, but I can't find any SO questions that are completely on point.

Security concerns are often best addressed by Thread.CurrentPrincipal. If it's at all possible to fit your security concerns into that model (calling Principal.IsInRole and so on), that is by far the most desirable solution.
It's pretty easy to unit test because you just need to set Thread.CurrentPrincipal before invoking the SUT and then make sure you revert it to its original value in the Fixture Teardown phase.
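As a minimal sketch (NUnit syntax; the view model and role names are placeholders, not from your code), it could look like this:

using System.Security.Principal;
using System.Threading;
using NUnit.Framework;

[TestFixture]
public class ViewModelSecurityTests
{
    private IPrincipal originalPrincipal;

    [SetUp]
    public void SetUp()
    {
        // Remember the real principal and swap in a fake one.
        originalPrincipal = Thread.CurrentPrincipal;
        Thread.CurrentPrincipal = new GenericPrincipal(
            new GenericIdentity("testuser"), new[] { "Administrators" });
    }

    [TearDown]
    public void TearDown()
    {
        // Revert so the change doesn't leak into other tests.
        Thread.CurrentPrincipal = originalPrincipal;
    }

    [Test]
    public void AdminCommandIsEnabledForAdministrators()
    {
        var sut = new SomeViewModel(); // hypothetical SUT
        Assert.IsTrue(sut.CanExecuteAdminCommand);
    }
}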
If Thread.CurrentPrincipal doesn't suit your needs, I would suggest either an injected dependency or a Decorator that handles security. Security is often a Cross-Cutting Concern, so it is often preferable to handle it as declaratively as possible. In other words, if you can model it with Guards and Assertions, you don't need to call it actively, and you can use a more AOP-like approach (such as a Decorator).
If that's not possible either, you can model it as an Ambient Context. This may look a bit like the Service Locator anti-pattern, but the difference is that it's strongly typed and has a Local Default that ensures it never throws a NullReferenceException or the like, because it protects its invariants.
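A bare-bones sketch of such an Ambient Context (all names here are illustrative):

using System;

public abstract class SecurityContext
{
    // The Local Default guarantees Current is never null.
    private static SecurityContext current = new DefaultSecurityContext();

    public static SecurityContext Current
    {
        get { return current; }
        set
        {
            if (value == null)
                throw new ArgumentNullException("value"); // protect the invariant
            current = value;
        }
    }

    public abstract bool HasRight(string right);
}

internal class DefaultSecurityContext : SecurityContext
{
    // Deny by default when no context has been configured.
    public override bool HasRight(string right) { return false; }
}

View models would call SecurityContext.Current.HasRight(...), and tests would simply assign a stub to SecurityContext.Current (restoring it afterwards, as with Thread.CurrentPrincipal).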

The service locator pattern might help you out. You still implement the functionality as a service, but you have your VMs go through a static class to obtain the service, rather than having it injected:
var securityService = ServiceLocator.Resolve<ISecurityService>();
When running unit tests, you can configure your service locator to return mocks/stubs.
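For example (a sketch; this ServiceLocator is a hypothetical hand-rolled static wrapper, e.g. delegating to a StructureMap container, not a specific library):

using System;

public static class ServiceLocator
{
    private static Func<Type, object> resolver;

    // Called once at startup with the real container, and again
    // in test setup with a delegate that returns stubs.
    public static void SetResolver(Func<Type, object> r)
    {
        resolver = r;
    }

    public static T Resolve<T>()
    {
        return (T)resolver(typeof(T));
    }
}

// At application startup:
//   ServiceLocator.SetResolver(t => container.GetInstance(t));
// In a unit test:
//   ServiceLocator.SetResolver(t => new FakeSecurityService());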

I think I would use some kind of Service Locator to get the object, and in tests I would mock it out.


Is it correct to access scope inside service or factory in angularjs

Disclaimer: I know there are questions that suggest not accessing scope inside a service or factory, but here I am asking about the impact in terms of coding guidelines and whether it is advisable; if not, I need proper justification.
We have an old AngularJS project. After some refactoring, one of my colleagues moved common implementation from a directive into a service. To access the directive's scope from the service, he started doing the following:
angular.element('<test-dir></test-dir>').scope();
I felt this is not the proper way to write a service/factory, that we are making things complicated, and I suggested removing that code.
To justify this, I argued:
1. This makes unit testing complicated; we end up trying to test the service the way we used to test the directive.
2. We are making this service tightly coupled to the directive.
3. A service is not meant to access the scope.
But I don't think I have been able to convince him, as I don't have strong points to justify it. Can someone please tell me whether my understanding is correct and give proper justification to convince him? Thanks!
No, services/factories are supposed to work with data and should contain the logic to process the data provided to them. Preferably, they should not refer to DOM objects or scope variables.
I personally believe that passing $scope to a service is a bad idea, because it creates a kind of circular reference: the controller depends on the service, and the service depends on the scope of the controller.
On top of being confusing in terms of relationships, things like this end up getting in the way of the garbage collector.
A Service is just a function for the business layer of the application. It acts as a constructor function and is invoked once at runtime with new, much like in plain JavaScript (Angular just calls a new instance under the hood for us). Use a service when you want to create things that act as public APIs.
The service class should work on the data provided by the controller, making it reusable anywhere else if needed. Keeping the scope away from it also keeps the code lean and clean, and thus more maintainable.
I prefer to put a domain object in the controller scope and pass that to the service. This way the service works irrespective of it being used inside a controller or maybe inside another service in the future.

EasyMock with @TestSubject enhanced with CGLIB

Is there a way to make EasyMock's @TestSubject annotation work when the test subject object is enhanced with CGLIB?
Scenario: the @TestSubject object is a Spring bean which was enhanced with CGLIB in order to apply some aspect (assuming that for some reason Spring couldn't use a JDK-based proxy). In this case, simply using @TestSubject and EasyMockSupport.injectMocks(this) does not really work. EasyMock injects the mock, but during execution the mock is not actually used, due to how the internals of a CGLIB-enhanced class work. In the end the original reference the object had is used, not the mock.
The only approach I know is to create a setter in the test subject and inject the mock manually by calling the setter. However, sometimes I do not have the access/permission/time to change the subject code to include the setter.
cglib classes are always final, which prevents the creation of another proxy. This is therefore not possible to do. Instead, you would need to detect that a class is already a cglib proxy and enhance its base class instead.

Unit testing a .NET Web API/SQL project with existing methods?

I have inherited a Web API 2 project written in C#/.NET that uses ADO.NET to access an SQL Server database.
The data access layer of the project contains many methods which look similar to this:
public class DataAccessLayer
{
    private SqlConnection _DBConn;

    public DataAccessLayer()
    {
        _DBConn = new SqlConnection(ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString);
    }

    public string getAllProductsAsJSON()
    {
        DataTable dt = new DataTable();
        using (SqlConnection con = _DBConn)
        {
            using (SqlCommand cmd = new SqlCommand("SELECT productId, productName FROM product ORDER BY addedOn DESC", con))
            {
                cmd.CommandType = CommandType.Text;
                // add parameters to the command here, if required.
                con.Open();
                SqlDataAdapter da = new SqlDataAdapter(cmd);
                da.Fill(dt);
                return JsonConvert.SerializeObject(dt);
            }
        }
    }

    // ... more methods here, but all basically following the above style of
    // opening a new connection each time a method is called.
}
Now, I want to write some unit tests for this project. I have studied the idea of using SQL transactions to allow for insertion of mock data into the database, testing against the mock data, and then rolling back the transaction in order to allow for testing against a "live" (development) database, so you can have access to the SQL Server functionality without mocking it out completely (e.g. you can make sure your views/functions are returning valid data AND that the API is properly processing the data all at once). Some of the methods in the data access layer add data to the database, so the idea is that I would want to start a transaction, call a set of DAL methods to insert mock data, call other methods to test the results with assertions, and then roll back the entire test so that no mock data gets committed.
The problem I am having is that, as you can see, this class has been designed to create a new database connection every single time a query is made. If I try to think like the original developer, I can see how this makes at least some sense: these classes are used by a web API, so a persistent database connection would be impractical, especially if a web API call involves transactions, because you then need a separate connection per request to maintain separation.
However, because this is happening I don't think I can use the transaction idea to write tests as I described, because uncommitted data would not be accessible across database connections. So if I wrote a test which calls DAL methods (and also business-logic layer methods which in turn call DAL methods), each method will open its own connection to the database, and thus I have no way to wrap all of the method calls in a transaction to begin with.
I could rewrite each method to accept a SqlConnection as one of its parameters, but if I do this, I not only have to refactor over 60 methods, but I also have to rework every single place such methods are called in the Web API controllers. I would then have to move the burden of creating and managing DB connections to the Web API (and away from the DAL, which is where it philosophically should be).
Short of literally rewriting/refactoring 60+ methods and the entire Web API, is there a different approach I can take to writing unit tests for this project?
EDIT: My new idea is to simply remove all calls to con.Open(). Then, in the constructor, not just create the connection but also open it. Finally, I'll add beginTransaction, commitTransaction and rollbackTransaction methods that operate directly on the connection object. The core API never needs to call these methods, but the unit tests can. This means the unit test code can simply create an instance, which will create a connection that persists for the entire lifetime of the class. Then it can call beginTransaction, do whatever tests it wants, and finally call rollbackTransaction. Having a commitTransaction is good for completeness, and exposing this functionality to the business-logic layer has potential uses as well.
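Roughly like this (a sketch of the idea, not the actual project code):

using System;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using Newtonsoft.Json;

public class DataAccessLayer : IDisposable
{
    private readonly SqlConnection _DBConn;
    private SqlTransaction _transaction;

    public DataAccessLayer()
    {
        _DBConn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString);
        _DBConn.Open(); // one connection for the lifetime of the instance
    }

    public void beginTransaction() { _transaction = _DBConn.BeginTransaction(); }
    public void commitTransaction() { _transaction.Commit(); _transaction = null; }
    public void rollbackTransaction() { _transaction.Rollback(); _transaction = null; }

    public string getAllProductsAsJSON()
    {
        DataTable dt = new DataTable();
        using (SqlCommand cmd = new SqlCommand(
            "SELECT productId, productName FROM product ORDER BY addedOn DESC", _DBConn))
        {
            // ADO.NET requires every command on this connection to be
            // enlisted in the pending transaction, if there is one.
            cmd.Transaction = _transaction;
            new SqlDataAdapter(cmd).Fill(dt);
            return JsonConvert.SerializeObject(dt);
        }
    }

    public void Dispose() { _DBConn.Dispose(); }
}

One caveat I would have to handle: while a transaction is open, every SqlCommand created on that connection must have its Transaction property set, or ADO.NET throws.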
There are multiple possible answers to this question, depending on what exactly you are trying to accomplish:
1. Are you primarily interested in unit testing your application logic (e.g., controller methods), rather than the data access layer itself?
2. Are you looking to unit test the logic inside your data access layer?
3. Or are you trying to test everything together (i.e., integration or end-to-end testing)?
I am assuming you are interested in the first scenario, testing your application logic. In that case, I would advise against connecting to the database at all (even a development database) in your unit tests. Generally, unit tests should not be interacting with any outside system (e.g., database, filesystem, or network).
I know you mentioned you were interested in testing multiple parts of the functionality all at once:
I have studied the idea of using SQL transactions [...] so you can have access to the SQL Server functionality without mocking it out completely (e.g. you can make sure your views/functions are returning valid data AND that the API is properly processing the data all at once).
However, that rather goes against the philosophy of unit testing. The whole point of a unit test is to test a single unit in isolation. Typically, this unit ("System Under Test", or SUT, in more technical terms) is a single method inside some class (for instance, an action method in one of your controllers). Anything other than the SUT should be stubbed or mocked out.
To accomplish this, broadly speaking, you will need to refactor your code to use dependency injection, and also use a mocking framework in your tests:
Dependency Injection: If you are not using a dependency injection framework already, chances are your controller classes are instantiating your DataAccessLayer class directly. This approach will not work for unit tests - instead, you will want to refactor the controller class to accept its dependencies via the constructor, and then use a dependency injection framework to inject the real DataAccessLayer in your application code, and inject a mock/stub implementation in your tests. Some popular dependency injection frameworks include Autofac, Ninject, and Microsoft Unity. Depending on which framework you choose, this may also require that you refactor DataAccessLayer a bit so it implements an interface (e.g., IDataAccessLayer). A sketch of this refactoring follows after this list.
Mocking Framework: In your tests, rather than using the real DataAccessLayer class directly, you will instead create a mock, and set up expectations on that mock. Some popular mocking frameworks for .NET include Moq, RhinoMocks, and NSubstitute.
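To illustrate the dependency injection point, a minimal sketch (the interface and controller names are illustrative, based on the snippet in the question):

using System.Web.Http;

public interface IDataAccessLayer
{
    string getAllProductsAsJSON();
}

public class ProductsController : ApiController
{
    private readonly IDataAccessLayer _dal;

    // The dependency comes in through the constructor, so tests
    // can supply a mock/stub instead of the real DataAccessLayer.
    public ProductsController(IDataAccessLayer dal)
    {
        _dal = dal;
    }

    public IHttpActionResult Get()
    {
        return Ok(_dal.getAllProductsAsJSON());
    }
}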
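And the corresponding test with Moq might then look like this (again a sketch, using NUnit):

using System.Web.Http.Results;
using Moq;
using NUnit.Framework;

[TestFixture]
public class ProductsControllerTests
{
    [Test]
    public void Get_ReturnsProductsJson()
    {
        // Stub the DAL; no database is touched.
        var dalMock = new Mock<IDataAccessLayer>();
        dalMock.Setup(d => d.getAllProductsAsJSON()).Returns("[{\"productId\":1}]");

        var controller = new ProductsController(dalMock.Object);
        var result = controller.Get() as OkNegotiatedContentResult<string>;

        Assert.IsNotNull(result);
        Assert.AreEqual("[{\"productId\":1}]", result.Content);
        dalMock.Verify(d => d.getAllProductsAsJSON(), Times.Once);
    }
}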
Granted, if the code was not initially written with unit testing in mind (i.e., no dependency injection), this may involve a fair amount of refactoring. This is where alltej's suggestion comes in with creating a wrapper for interacting with the legacy (i.e., untested) code.
I strongly recommend you read the book The Art of Unit Testing: With Examples in C# (by Roy Osherove). That will help you understand the ideology behind unit testing a bit better.
If you are actually interested in testing multiple parts of your functionality at once, then what you are describing (as others have pointed out) is integration, or end-to-end testing. The setup for this would be entirely different (and often more challenging), but even then, the recommended approach would be to connect to a separate database (specifically for integration testing, separate even from your development database), rather than rolling back transactions.
When working with a legacy system, what I would do is create a wrapper for these DLLs/projects to isolate communication with the legacy code and to protect the integrity of your new subsystem/domain or bounded context. This isolation layer is known as an anticorruption layer in DDD terminology. This layer contains interfaces written in terms of your new bounded context. The interface adapts and interacts with your API layer or with other services in the domain. You can then write unit/mock tests against these interfaces. You can also create integration tests from your anticorruption layer, which will eventually call the database via the legacy DLLs.
Actually, from what I see in the code, the DAL creates only one connection in the constructor and then it keeps using it to fire commands, one command per method in the DAL. It will only create new connections if you create another instance of the DAL class.
Now, what you are describing is multiple kinds of testing (integration, end to end), and I am not convinced that the transaction idea, while original, is actually doable.
When writing integration tests, I prefer to actually create all the data required by the test and then simply remove it at the end, that way nothing is left behind and you know for sure if your system works or not.
So imagine you're testing retrieving account data for a user, I would create a user, activate them, attach an account and then test against that real data.
The UI does not need to go all the way through, unless you really want to do end to end tests. If you don't, then you can just mock the data for each scenario you want to test and see how the UI behaves under each scenario.
What I would suggest is that you test your API separately: test each endpoint and make sure it works as expected, with integration tests covering all the scenarios needed.
If you have time, then write some end to end tests, possibly using a tool like Selenium or whatever else you fancy.
I would also extract an interface from that DAL in preparation of mocking the entire layer when needed. That should give you a good start in what you want to do.

@Singleton vs @ApplicationScoped

For a project I need to have a unique ID generator. So I thought about a Singleton with synchronized methods.
Since a Singleton following the traditional Singleton pattern (private static instance) is shared across sessions, I'm wondering if the @Singleton annotation works the same way?
The documentation says: Identifies a type that the injector only instantiates once.
Does it mean that a @Singleton will be independent per user session (which would be bad for an ID generator)? Should I prefer an old-school Singleton with Class.getInstance() over injection of a @Singleton bean?
Or should I use neither and provide the service within an @ApplicationScoped bean?
It must be guaranteed that only ONE thread, independent of the user session, can access the method to generate the next ID. (It's not solvable with auto-increment database IDs.)
Edit: I'm talking about JSF 2.2, CDI and javax.inject.* :)
All those kinds of singletons (static, @javax.inject.Singleton, @javax.ejb.Singleton and @javax.enterprise.context.ApplicationScoped) are created once per JVM.
An object that is created once per user session must be annotated with @javax.enterprise.context.SessionScoped, so no, singletons will not be instantiated per user session.
Notice that there are two @Singleton annotations, one in javax.inject and the other in the javax.ejb package. I'm referring to them by their fully-qualified names to avoid confusion.
The differences between all those singletons are subtle and I'm not sure I know all the implications, but a few come to mind:
@javax.ejb.Singleton is managed by the EJB container and so it can handle transactions (@javax.ejb.TransactionAttribute), read/write locking and time-outs (@javax.ejb.Lock, @javax.ejb.AccessTimeout), application startup (@javax.ejb.Startup, @javax.ejb.DependsOn) and so on.
@javax.enterprise.context.ApplicationScoped is managed by the CDI container, so you won't have the transaction and locking features that EJB has (unless you use a post-1.0 CDI that has added transactions), but you still have lots of nice things such as @javax.enterprise.inject.Produces, @javax.annotation.PostConstruct, @javax.inject.Named, @javax.enterprise.inject.Disposes (but many of these features are available to EJBs too).
@javax.inject.Singleton is similar to @ApplicationScoped, except that there is no proxy object (clients will have a reference to the object directly). There will be less indirection to reach the real object, but this might cause some issues related to serialization (see this: http://docs.jboss.org/weld/reference/latest-2.2/en-US/html_single/#_the_singleton_pseudo_scope)
A plain static field is simple and just works, but it's controlled by the class loader so in order to understand how/when they are instantiated and garbage collected (if ever), you will need to understand how class loaders work and how your application server manages its class loaders (when restarting, redeploying, etc.). See this question for more details.
javax.inject.Singleton - When used on your bean, you have to implement writeReplace() and readResolve() to avoid serialization issues. Use it judiciously based on what your bean actually holds.
javax.enterprise.context.ApplicationScoped - Allows the container to proxy the bean and take care of the serialization process automatically. This is recommended, to avoid unexpected issues.
For more information, refer to page 45.

White labeling CakePHP: What's the best way to provide customization hooks/callbacks to implementers?

I'm developing a CakePHP application that we will provide as a white label for people to implement for their own companies, and they'll need to have certain customization capabilities for themselves.
For starters, they'll be able to do anything they want with the views, and they can add their own Controllers/Models if they need to add completely new stuff. However, I'd rather advise against touching my controllers and models, to make version upgrading easier.
Essentially, the customization capabilities I'm planning to give them are going to be quite basic. I just need to call "something" when certain things happen, so they can do things like update external systems, e-mail themselves/the clients, and so on.
I'm wondering what's the best way to do this?
My plan is to have a "file" (with one class) for each controller of mine, to keep things reasonably organized. This file will have a bunch of empty methods that my code will call, and they'll be able to add code inside those methods to do whatever they need to do.
The specific question is, should this class full of empty methods be a Component? A Controller? Just a regular plain PHP class?
I'll need to call methods in this class from my Controllers, so I'm guessing making it a Controller is out of the question (unless maybe it's a controller that inherits from mine? or mine inherits from theirs, probably).
Also, I'd need the implementer of these methods to have access to my Models and Components, although I'm ok with making them use App::Import, I don't need to have the magic $this->ModelName members set.
Also, does this file I create (either a Component or a Controller) have to live in the app folder next to my other controllers/components? Or can I throw it somewhere separate, like the vendors folder?
Have you done something like this before?
Any tips/advice/pitfalls to avoid will be more than welcome.
I know this is kind of subjective, I'm looking to hear from your experience mostly, if you've done this before.
Thanks!
Two ideas that spring to mind:
create abstract templates (controllers, models, whatever necessary) that your clients can extend
write your controllers/components/models as a plugin within their own namespace
Ultimately you seem to want to provide an "enhanced" Cake framework, that your clients still have to write their own Cake code in (I don't know how this goes together with your idea of "basic customization capabilities" though). As such, you should write your code in as much an "optional" manner as possible (namespaced plugins, components, AppModel enhancements, extra libs) and provide documentation on how to use these to help your clients speed up their work.
I would set up a common set of events and use something like the linked event system to handle this.
That lets the clients manage the event handler classes (read the readme to see what I mean) and subscribe to and broadcast events application-wide.
Also - if you want to have your users not muck about with your core functionality, I recommend you package your main app as a plugin.
http://github.com/m3nt0r/eventful-cakephp
