How can I get the innermost Autofac scope, or a named scope, in a WPF application?
I need this because I use a MarkupExtension to resolve view models, so in my MarkupExtension I need the current (innermost) lifetime scope.
Thanks
I found a way...
The container has an event, ChildLifetimeScopeBeginning(object sender, LifetimeScopeBeginningEventArgs args), that I can use for my MarkupExtensions like this:
Container.ChildLifetimeScopeBeginning += (sender, args) =>
{
    Debug.Write($"Begin new LifetimeScope: {args.LifetimeScope.Tag ?? "Unnamed"}");

    // Set the current lifetime scope for the MarkupExtension
    ContainerTypeResolverExtension.Container = args.LifetimeScope;
};
So now, whenever a new scope is created, I can swap the MarkupExtension's inner container for the new scope.
If there is a better way I'm all for suggestions.
There is no native "request lifetime" sort of integration for WPF apps. If you are creating nested lifetime scopes, it's up to you to track them - both for creation and disposal. Maybe you have some sort of local variable you set, maybe you have some other mechanism.
However, simply adding an event handler on the container won't do it.
Every lifetime scope has its own event for child lifetime scopes beginning.
Let's say you have this:
var builder = new ContainerBuilder();
var container = builder.Build();
var scope1 = container.BeginLifetimeScope();
var scope2 = scope1.BeginLifetimeScope();
var scope3 = scope1.BeginLifetimeScope();
Attaching an event handler to the container will not see scopes that are begun from other scopes rather than from the container itself: it will see scope1, but it won't see scope2 or scope3, because those were created from scope1.
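For completeness, tracking every scope would mean attaching a handler to each newly created scope as well, recursively. A rough sketch of that idea (TrackScope is a name made up here, and this inherits all the problems described below):

// Hypothetical helper: subscribe to a scope and, recursively, to every
// scope it creates. IContainer implements ILifetimeScope, so this can be
// called with the container itself.
void TrackScope(ILifetimeScope scope)
{
    scope.ChildLifetimeScopeBeginning += (sender, args) =>
    {
        ContainerTypeResolverExtension.Container = args.LifetimeScope;
        TrackScope(args.LifetimeScope);
    };
}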
You also didn't explain what you're doing with the "most nested lifetime scope," so it isn't clear which of these is "most nested" - scope2 and scope3 seem equally nested, so which is the right one?
Finally, there are threading issues to consider. If you're spawning lifetime scopes in async methods, or doing anything where different threads create different scopes, simply tracking the last created scope will bring you into a world of trouble: you'll end up trying to resolve things from a scope that lives on a thread that no longer exists.
In the really, really limited case where you do no multithreading/async and never have nested lifetime scopes, maybe handling the event is OK. I would not recommend future readers of the question follow this track.
A better way is somewhat impossible to provide because there's no context given. Do you have a reference to the scope? Can you pass it into the method rather than trying to magically locate it? There are lots of ways to solve the problem, but there's not enough information here to give a good answer. Consider doing some more searching on this, and if that still doesn't yield anything, ask a new question with a lot more information - ideally a repro - and more explanation of what you've tried.
Writing my first Backbone app - I came across a predicament wherein I am unable to choose the best way to move forward.
Scenario: the user clicks an edit button, and a new view is loaded. The approach is as below.
renderEditView: function() {
    if (my.namespace.view) {
        my.namespace.view.render();
    } else {
        my.namespace.view = new editView({model: my.namespace.model});
        my.namespace.view.render(); // render the newly created view too
    }
}
Basically, I am assigning my view to a namespaced variable and reusing it as required. I didn't face any problems as such.
But some advocate recreating the view using new editView({model: xxx}) whenever the edit button is clicked. I would like to know which one is the better practice, and why.
P.S.: I am aware of the 'event ghosting' problem in Backbone apps, and the excellent solution provided by Derick Bailey. But I would still love to know the pros and cons of the two approaches.
This is indeed a matter of opinion, because either way will work, provided you (as you mention) take care of cleaning up the previous view whenever you instantiate a new one for re-rendering. The important thing is to avoid lingering event bindings left behind by every instance you replace with a new one.
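For illustration, a minimal sketch of the "recreate every time" strategy with cleanup, reusing the namespaced variable from the question (in recent Backbone versions, remove() also calls stopListening()):

renderEditView: function() {
    if (my.namespace.view) {
        // Tear down the old instance first so its event bindings don't linger
        my.namespace.view.remove();
    }
    my.namespace.view = new editView({model: my.namespace.model});
    my.namespace.view.render();
}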
Personally I have used both strategies and never had problems with them so far.
When re-using a view, I bind the view as a property to the controller object that renders the view, pretty much the same way you do it.
Theoretically, I don't see a reason to re-instantiate a view if it was already created before. It isn't that you really require a new instance, it's just that you want to re-render it.
Sidenote
For re-rendering views, Backbone Marionette offers regions, which are convenience objects that allow you to do things like:
var myView = new MyView();
var region = new Marionette.Region({el: "#container"});
region.show(myView);
In case you would decide to instantiate a new view every time, these regions take care that previously rendered views are properly cleaned up.
I'd like to know if using
angular.extend($scope, MyService);
breaks the OOP encapsulation principle.
Does it smell like MyService.call($scope)?
Could I face variable and function conflicts?
Is this a good or bad practice?
Typically, in my experience, services are injected into the controller and then called from that scope. I wouldn't say that using extend to copy over functions and properties is necessarily bad, but it may undermine some of the purpose of IoC (inversion of control), which is what injection in Angular is based on.
Does it break OOP...?
My understanding is that what you get from this call is additional functions and service calls applied directly to your scope. This doesn't break OOP in the sense that the scope is an object, and objects carry functions. Provided those functions and properties make sense on the scope, it seems like a fine thing to do from that perspective.
Does it smell like MyService.call($scope)?
As I said in the first paragraph - I don't see why you wouldn't just call the service and either share data or pass references to objects into the service. Another pattern that is common in Angular is to use a promise to process returned data in your scope. That looks like:
MyService.callFunction(parameters).then(function (data) {
    // Process data here. Since this runs in $scope you can also use $scope normally.
    // $scope.$apply() is in progress and will complete when the function returns.
});
All the service does then is provide the data to the scope. The point is that I think there are better patterns than "extend".
Can you face conflicts?
In the call angular.extend(a, b), data, properties, and functions are copied from b onto a. Anything that already exists on a will be overwritten by the value from b. So technically the short answer is "yes", you can face conflicts.
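To make the conflict concrete, a small illustration with hypothetical names:

// The controller defines its own load() on the scope:
$scope.load = function () { /* controller version */ };

// If MyService also exposes a load function, extend silently replaces it:
angular.extend($scope, MyService);
$scope.load(); // now runs MyService's load, not the controller's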
The bottom line
So at the end of the day this isn't a bad pattern but there are probably more common patterns I would try to use first.
I am currently using Backbone to implement my app. As part of memory management, I trigger a teardown of all the views when I am switching views:
teardown: ->
    for viewName, view of @subViews
        view.teardown()
    for object, events of @objectEvents
        @_handleObjectEvents("off", object, events)
    @off()
    @remove()
    @undelegateEvents()
    @
Is this approach sufficient to ensure that most memory issues are resolved? The challenge I see is that I need to track all the subviews of each view and call teardown on every main view and subview as part of the cleanup.
I did some searching and found that Backbone also has the two methods listenTo and stopListening, which control the binding of events to models at the view level:
view.listenTo(model, 'change', view.render);
view.stopListening(model);
My question is: is there an overlap between my teardown implementation and using stopListening? Can I rely solely on stopListening for memory management?
The short answer is yes, there is an overlap.
The more complicated answer is that the listenTo/stopListening methods introduced in Backbone 0.9.9 use the on/off methods internally, but with a useful addition – they store the current event listeners in an internal object called _listeners.
The benefit of this object is that you always have the full list of your listeners – you can iterate over it and remove specific entries from it (remember that a listener is just a function, and a function is just an object).
So, you can call it this way:
this.stopListening(emitting_object, "reset add") // Removes listeners for "reset" and "add" on emitting_object
this.stopListening(emitting_object) // Removes all listeners on emitting_object
this.stopListening() // Iterates over the _listeners object and removes all listeners (probably the most common case)
So, using this method, you can convert your teardown method to something like this:
this.teardown = function() {
    this.stopListening();
    ...
};
I'd recommend using the listenTo method. The nice thing about it is that when you call remove on your view, it automatically unbinds (calls stopListening on) whatever it's listening to. According to Derick Bailey, it also unbinds the handlers declared under the events property.
What I will do, since I am in the process of upgrading my app from 0.9.2 to 0.9.9 (and it actually still works so far), is switch all of my on/off calls over to listenTo and stopListening. I also, mostly, have close methods on my views. I will, however, still call undelegateEvents, just in case - it doesn't hurt to know that you're definitely getting rid of the event listening.
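A small sketch of what that buys you in 0.9.9+, assuming a view bound to a model as in the question:

var MyView = Backbone.View.extend({
    initialize: function () {
        // Registered in the view's internal _listeners object
        this.listenTo(this.model, "change", this.render);
    }
});

var view = new MyView({model: model});
view.remove(); // removes the el AND calls this.stopListening() for you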
I'm learning the concepts of composite applications.
I created a Prism application using the Unity container.
One of my regions is configured as a ContentControl - in this region I want to show just a single view.
I'm using view injection in the following way:
object lastView;
// View injection
IRegion region = regionManager.Regions["MainRegion"];
var ordersView = container.Resolve<OrdersView>();
lastView = ordersView;
region.Add(ordersView, "OrdersView");
region.Activate(ordersView);
The views in this region are switched frequently.
Before switching views I call region.Remove(lastView), and then add the next view like in the code above.
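Put together, the switching routine looks roughly like this (a sketch; UsersView stands in for whatever the next view happens to be):

// Remove the previous view, then inject and activate the next one.
IRegion region = regionManager.Regions["MainRegion"];
if (lastView != null)
{
    region.Remove(lastView);
}
var usersView = container.Resolve<UsersView>();
region.Add(usersView, "UsersView");
region.Activate(usersView);
lastView = usersView;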
I'm not sure that this is a good implementation, so I have a few questions:
When I use the region.Remove method, is the removed view disposed?
Because if not, after a long run I will have serious memory leaks.
What is the best way to implement a single view in a region while avoiding memory leaks?
Thanks
By memory leaks I guess you're talking about whether the garbage collector is ever going to collect that view - e.g. whether the container still references it after you remove it.
The decision on whether to keep a reference to the object after it's resolved is based on the type of the lifetime manager you used when you registered that object.
To answer your question shortly: the default lifetime manager used with RegisterType is the TransientLifetimeManager, with which Unity creates a new instance of the requested type for each call to the Resolve or ResolveAll method.
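In code, that default corresponds to a plain registration like this (sketch, using the view type from the question):

// No lifetime manager given, so TransientLifetimeManager applies:
// every Resolve<OrdersView>() returns a brand-new instance.
container.RegisterType<OrdersView>();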
What you're probably looking for is the ExternallyControlledLifetimeManager:
This lifetime manager allows you to register type mappings and existing objects with the container so that it maintains only a weak reference to the objects it creates when you call the Resolve or ResolveAll method or when the dependency mechanism injects instances into other classes based on attributes or constructor parameters within that class. This allows other code to maintain the object in memory or dispose it and enables you to maintain control of the lifetime of existing objects or allow some other mechanism to control the lifetime.
If you want to control the lifetime of your views yourself, consider using RegisterType with this lifetime manager.
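A registration along those lines might look like this (again a sketch with the view type from the question):

// The container keeps only a weak reference; once the region and your own
// code drop the view, the garbage collector is free to collect it.
container.RegisterType<OrdersView>(new ExternallyControlledLifetimeManager());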
Also, according to this article, the only lifetime managers that call Dispose on resolved instances are the ContainerControlledLifetimeManager (which creates singleton instances) and the HierarchicalLifetimeManager - and in those cases, Dispose is only called when the lifetime manager itself is disposed.
I have a WPF application with MVVM. Assuming object composition from the ViewModel down looks as follows:
MainViewModel
    OrderManager
        OrderRepository
            EFContext
        AnotherRepository
            EFContext
    UserManager
        UserRepository
            EFContext
My original approach was to inject dependencies (from the ViewModelLocator) into my View Model using .InCallScope() on the EFContext and .InTransientScope() for everything else. This results in being able to perform a "business transaction" across multiple business layer objects (Managers) that eventually underneath shared the same Entity Framework Context. I would simply Commit() said context at the end for a Unit of Work type scenario.
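The bindings described here would look roughly like this (a sketch; InCallScope comes from Ninject.Extensions.NamedScope, and the type names follow the composition tree above):

var kernel = new StandardKernel();

// One EFContext per top-level Resolve call, shared by everything
// created within that call:
kernel.Bind<EFContext>().ToSelf().InCallScope();

// Everything else is transient:
kernel.Bind<OrderManager>().ToSelf().InTransientScope();
kernel.Bind<UserManager>().ToSelf().InTransientScope();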
This worked as intended until I realized that I don't want long-lived Entity Framework contexts at the view model level - see the data integrity issues across multiple operations described HERE. I want to do something similar to my web projects, where I use .InRequestScope() for my Entity Framework context. In my desktop application I will define a unit of work which will serve as a business transaction, if you will; typically it will wrap everything within a button click or similar event/command. It seems that Ninject's ActivationBlock can do this for me.
internal static class Global
{
    public static ActivationBlock GetNinjectUoW()
    {
        // Assume that NinjectSingleton is a static reference to the kernel,
        // configured with the necessary modules/bindings.
        return new ActivationBlock(NinjectSingleton.Instance.Kernel);
    }
}
In my code I intend to use it as such:
// Inside a method that is raised by a WPF button command...
using (ActivationBlock uow = Global.GetNinjectUoW())
{
    OrderManager orderManager = uow.Get<OrderManager>();
    UserManager userManager = uow.Get<UserManager>();

    Order order = orderManager.GetById(1);
    userManager.AddOrder(order);
    ....
    userManager.SaveChanges();
}
Questions:
To me this seems to replicate the way I do business on the web. Is there anything inherently wrong with this approach that I've missed?
Am I understanding correctly that all .Get<> calls using the activation block will produce "singletons" local to that block? What I mean is no matter how many times I ask for an OrderManager, it'll always give me the same one within the block. If OrderManager and UserManager compose the same repository underneath (say SpecialRepository), both will point to the same instance of the repository, and obviously all repositories underneath share the same instance of the Entity Framework context.
Both questions can be answered with yes:
Yes - this is service location, which you shouldn't do.
Yes - you understand it correctly.
A proper unit-of-work scope, implemented in Ninject.Extensions.UnitOfWork, solves this problem.
Setup:
_kernel.Bind<IService>().To<Service>().InUnitOfWorkScope();
Usage:
using (UnitOfWorkScope.Create())
{
    // resolves, async/await, manual TPL ops, etc.
}