Is it a bad practice to have a static field?

I use a static field in this situation because I think it is time consuming to recreate the object at each request.
private static AnalysedCompanies db = new AnalysedCompanies();

public class AnalysedCompanies : DbContext
{
    ...
}
I use Entity Framework code first.
Then I have methods for saving and loading data from the database through the db object.
Is the static db object going to cause a bottleneck? Is this the right thing to do?

In an ASP.NET web application, statics are shared by all users, so yes, that's pretty bad: it means that user A can potentially see or modify data that user B sees, which leads to all sorts of headaches.
Static fields are fine for static data, that is, data that a) is shared by everyone and b) isn't modified by users (since changes are global to all other users). I do use statics for things like system configuration or objects that can be safely shared.
I think the main problem is this: "I think it is time consuming" - don't guess, measure. There are many profilers available for .NET. If you have performance issues, measure to see whether this really is the problem and then act.
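Even if measurement shows that context creation matters, the usual alternative is still not a static field but a short-lived context per request or per operation. A minimal sketch of that pattern, assuming the AnalysedCompanies context from the question (Company and the service class are made-up names for illustration):
public class CompanyService
{
    public Company GetCompany(int id)
    {
        // Create the context per operation: construction is cheap, and disposing
        // it at the end of the using block returns the connection to the pool.
        using (var db = new AnalysedCompanies())
        {
            return db.Set<Company>().Find(id);
        }
    }
}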


Domain Driven Design (DDD) and database generated reports

I'm still investigating DDD, but I'm curious to know about one potential pitfall.
According to DDD an aggregate-root shouldn't know about persistence, but doesn't that mean the entire aggregate-root ends up being instantiated in memory?
How could the aggregate-root, for instance, ask the database to group and sum a lot of data if it's not supposed to know about persistence?
According to DDD an aggregate-root shouldn't know about persistence, but doesn't that mean the entire aggregate-root ends up being instantiated in memory?
Oh no, it's worse than that; the entire aggregate (the root and all of its subordinate entities) gets instantiated in memory. Essentially by definition, you need all of that state loaded in order to validate any change.
How could the aggregate-root, for instance, ask the database to group and sum a lot of data if it's not supposed to know about persistence?
You don't need the aggregate-root to do that.
The primary role of the domain model is to ensure the integrity of the book of record by ensuring that all writes respect your business invariant. A read, like a database report, isn't going to change the book of record, so you don't need to load the domain model.
If the domain model itself needs the report, it typically defines a service provider interface that specifies the report that it needs, and your persistence component is responsible for figuring out how to implement that interface.
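For instance, a hedged sketch of what such a service provider interface might look like (all names, the table, and the query here are illustrative, not from the question):
using System;
using System.Data.SqlClient;

// The domain defines only the interface for the report it needs...
public interface ISalesSummaryReport
{
    decimal TotalForPeriod(DateTime from, DateTime to);
}

// ...and the persistence component implements it; the grouping and summing
// stays in the database, never in the aggregate.
public class SqlSalesSummaryReport : ISalesSummaryReport
{
    private readonly string _connectionString;

    public SqlSalesSummaryReport(string connectionString)
    {
        _connectionString = connectionString;
    }

    public decimal TotalForPeriod(DateTime from, DateTime to)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT COALESCE(SUM(Amount), 0) FROM Sales WHERE SoldAt BETWEEN @from AND @to",
            connection))
        {
            command.Parameters.AddWithValue("@from", from);
            command.Parameters.AddWithValue("@to", to);
            connection.Open();
            return (decimal)command.ExecuteScalar();
        }
    }
}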
According to DDD an aggregate-root shouldn't know about persistence, but doesn't that mean the entire aggregate-root ends up being instantiated in memory?
Aggregate roots are consistency boundaries, so yes you would typically load the whole aggregate into memory in order to enforce invariants. If this sounds like a problem it is probably a hint that your aggregate is too big and possibly in need of refactoring.
How could the aggregate-root, for instance, ask the database to group and sum a lot of data if it's not supposed to know about persistence?
The aggregate wouldn't ask the database to group and sum data - typically you would load the aggregate in an application service / command handler. For example:
public class SomeUseCaseHandler : IHandle<SomeCommand>
{
    private readonly ISomeRepository _someRepository;

    public SomeUseCaseHandler(ISomeRepository someRepository)
    {
        _someRepository = someRepository;
    }

    public void When(SomeCommand command)
    {
        var someAggregate = _someRepository.Load(command.AggregateId);
        someAggregate.DoSomething();
        _someRepository.Save(someAggregate);
    }
}
So your aggregate remains ignorant of how it is persisted. However, your implementation of ISomeRepository is not ignorant, so it can do whatever is necessary to fully load the aggregate. You could therefore have your persistence implementation group/sum when loading the aggregate, but more often you would probably query a read model:
public class SomeUseCaseHandler : IHandle<SomeCommand>
{
    private readonly ISomeRepository _someRepository;
    private readonly ISomeReadModel _someReadModel;

    public SomeUseCaseHandler(ISomeRepository someRepository, ISomeReadModel someReadModel)
    {
        _someRepository = someRepository;
        _someReadModel = someReadModel;
    }

    public void When(SomeCommand command)
    {
        var someAggregate = _someRepository.Load(command.AggregateId);
        someAggregate.DoSomethingThatRequiresTheReadModel(_someReadModel);
        _someRepository.Save(someAggregate);
    }
}
You haven't actually said what your use case is though. :)
[Update]
Just noticed that the title refers to database-generated reports - these will not go through your domain model at all; they would be a completely separate read model. CQRS applies here.

Static methods in GOSU and Thread-safety

I have the below function in a .gs class, which gets called when accessing specific Claim information -
public static function testVisibility(claim : Claim) : boolean {
  if (claim.State == ClaimState.TC_OPEN) {
    return true;
  }
  else {
    return false;
  }
}
My question -
a) If two users are accessing their respective Claims information, this function should get called twice - the first time it should receive the Claim instance of the first user, the second time the Claim instance of the second user. If the access is simultaneous, will two copies of the same function be invoked? That shouldn't be the case, since a static function has only one copy. So, if there is only one copy, how is thread safety ensured? Will the function be called one after another?
b) Like Java, does Gosu also use the heap to run static functions?
It seems you are a little confused about the definition here. Thread safety is a mechanism for protecting the integrity of data shared between threads. Your example function is therefore thread-safe, whether it is static or not.
a) For the reason mentioned above, there is no thread-safety problem here, because you are working with two different sets of data.
b) Given that Gosu is built to run on the JVM and produces .class files, I believe that for the most part (if not 100%, aside from the syntax) it behaves like Java.
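A minimal C# analogue of that point (the same reasoning applies to Gosu on the JVM); Claim and ClaimState here are stand-in types for illustration, not the actual Guidewire ones:
public enum ClaimState { Open, Closed }

public class Claim
{
    public ClaimState State { get; set; }
}

public static class ClaimVisibility
{
    // There is only one copy of this method, but every call gets its own stack
    // frame, so the claim parameter and any locals are private to the calling
    // thread. Since no shared or static state is touched, concurrent calls
    // cannot interfere with each other.
    public static bool TestVisibility(Claim claim)
    {
        return claim.State == ClaimState.Open;
    }
}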
This is a classic confusion when getting comfortable with any programming language.
Consider 100 people accessing a web application at exactly the same moment; the worry is that the static variable/function would return/share the same content for all 100 people.
In practice that sharing doesn't happen for per-request data, because a separate thread is created for each server connection and the whole request is handled on that thread (so-called one-thread-per-connection).
So the static function runs on 100 different threads, but each call's parameters and local variables belong to its own thread and can't be accessed from the other threads (directly). This is how web applications work.
If we do need to share variables/classes among threads, we make them singletons.
E.g., for database connections, we don't need to create a connection every time if an established connection already exists; in that case the connection class is a singleton.
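A minimal thread-safe singleton sketch in C# for that kind of deliberately shared object (ConnectionPool is just an illustrative name, not an actual API):
using System;

public sealed class ConnectionPool
{
    // Lazy<T> guarantees the instance is created exactly once, even under
    // concurrent access from multiple threads.
    private static readonly Lazy<ConnectionPool> _instance =
        new Lazy<ConnectionPool>(() => new ConnectionPool());

    public static ConnectionPool Instance
    {
        get { return _instance.Value; }
    }

    private ConnectionPool()
    {
        // set up the shared resource here
    }
}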
Hope this makes sense. :)
-Aravind

Short lived DbContext in WPF application reasonable?

In his book on DbContext, Rowan Miller shows how to use the DbSet.Local property to avoid 1) unnecessary round trips to the database and 2) passing around collections (created with e.g. ToList()) in the application (page 24). I then tried to follow this approach. However, I noticed that from one using {} block to another, the DbSet.Local property becomes empty:
ObservableCollection<Destination> destinationsList;

using (var context = new BAContext())
{
    var query = from d in context.Destinations …;
    query.Load();
    destinationsList = context.Destinations.Local; // Nonzero here.
}
// Do stuff with destinationsList

using (var context = new BAContext())
{
    // context.Destinations.Local is zero here again;
    // so no way of getting the in-memory data from the previous using block here?
    // Do I have to do another roundtrip to the database here to get the same data I wanted
    // to cache locally???
}
Then, what is the point on page 24? How can I avoid passing around my collections if DbSet.Local is only usable inside the using block? Furthermore, how can I benefit from the change tracking if these short-lived context instances don't hand over any cached data to each other under the hood? So, if the contexts should be short-lived to free resources such as connections, do I have to give up caching? That is, I can't have both at the same time (short-lived connections but a long-lived cache)? Then my only option would be to store the results returned by the query in my own variables, which is exactly what is discouraged in the motivation on page 24?
I am developing a WPF application which maybe will also become multi-tiered in the future, involving WCF. I know Julia has an example of this in her book, but I currently don’t have access to it. I found several others on the web, e.g. http://msdn.microsoft.com/en-us/magazine/cc700340.aspx (old ObjectContext, but good in explaining the inter-layer-collaborations). There, a long-lived context is used (although the disadvantages are mentioned, but no solution to these provided).
It's not only that the single Destinations.Local gets lost; as you surely know, all other entities fetched by the query are, too.
[Edit]:
After some more reading in Julia Lerman's book, it seems to boil down to the fact that EF does not have 2nd-level caching by default; with some (considerable, I think) effort, however, one can add 3rd-party caching solutions, as described in the book and in various articles on MSDN, CodeProject, etc.
I would have appreciated it if this problem had been mentioned in the section about DbSet.Local in the DbContext book - that it is in fact a first-level cache which is destroyed when the using {} block ends (just my proposal to make it more transparent to readers). After the first reading I had the impression that DbSet.Local would always return the same reference (singleton-style), also in the second using {} block, despite the new DbContext instance.
But I am still unsure whether the 2nd-level cache is the way to go for my WPF application (Julia mentions the 2nd-level cache in her article in the context of distributed applications). Or is the way to go to load the aggregate root instances (DDD, Eric Evans) of my domain model into memory with one or a few queries inside a using {} block, dispose the DbContext, and only hold the references to the aggregate instances, thereby avoiding a long-lived context? It would be great if you could help me with this decision.
http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
http://www.codeproject.com/Articles/435142/Entity-Framework-Second-Level-Caching-with-DbConte
http://blog.3d-logic.com/2012/03/31/using-tracing-and-caching-provider-wrappers-with-codefirst/
The Local property provides a “local view of all Added, Unchanged, and Modified entities in this set”. Like all change tracking it is specific to the context you are currently using.
The DB Context is a workspace for loading data and preparing changes.
If two users were to add changes at the same time, they must not see each other's changes before they are saved. One of them may discard their prepared changes, which would suddenly lead to problems for the other user as well.
A DB context should indeed be short-lived, but it may live longer than "super short" when necessary. Also consider that you may not save resources by keeping it short-lived if you do not load and discard data but only add changes you will save. It is not only about resources, though, but also about the DB state potentially changing while the DB context is still active and has data loaded, which is important to keep in mind for longer-living contexts.
If you do not yet know all the related changes you want to save to the database at once, then I suggest you do not use the DB context to store your changes in memory but keep them in a data structure in your own code.
You can of course use entity objects for this without an active DB context. This makes sense if you do not have another appropriate data class for it and do not want to create one, or if you decide that preparing the changes in the entities makes more sense. You can then use DbSet.Attach to attach the entities to a DB context and save the changes when you are ready.
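A minimal sketch of that last point, assuming the BAContext and Destination types from the question (the Name property is made up for illustration):
Destination destination;

using (var context = new BAContext())
{
    destination = context.Destinations.First(); // becomes detached once the context is disposed
}

destination.Name = "Changed while no context is alive"; // prepare the change in memory

using (var context = new BAContext())
{
    context.Destinations.Attach(destination);                // re-attach the detached entity
    context.Entry(destination).State = EntityState.Modified; // make SaveChanges issue an UPDATE
    context.SaveChanges();
}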

Ninject ActivationBlock as Unit of Work

I have a WPF application with MVVM. Assuming object composition from the ViewModel down looks as follows:
MainViewModel
  OrderManager
    OrderRepository
      EFContext
    AnotherRepository
      EFContext
  UserManager
    UserRepository
      EFContext
My original approach was to inject dependencies (from the ViewModelLocator) into my View Model using .InCallScope() on the EFContext and .InTransientScope() for everything else. This results in being able to perform a "business transaction" across multiple business layer objects (Managers) that eventually underneath shared the same Entity Framework Context. I would simply Commit() said context at the end for a Unit of Work type scenario.
This worked as intended until I realized that I don't want long-living Entity Framework contexts at the View Model level (data integrity issues across multiple operations, described HERE). I want to do something similar to my web projects, where I use .InRequestScope() for my Entity Framework context. In my desktop application I will define a unit of work which will serve as a business transaction, if you will; typically it will wrap everything within a button click or a similar event/command. It seems that Ninject's ActivationBlock can do this for me.
internal static class Global
{
    public static ActivationBlock GetNinjectUoW()
    {
        // Assume that NinjectSingleton is a static reference to the kernel
        // configured with the necessary modules/bindings.
        return new ActivationBlock(NinjectSingleton.Instance.Kernel);
    }
}
In my code I intend to use it as such:
// Inside a method that is raised by a WPF Button Command...
using (ActivationBlock uow = Global.GetNinjectUoW())
{
    OrderManager orderManager = uow.Get<OrderManager>();
    UserManager userManager = uow.Get<UserManager>();

    Order order = orderManager.GetById(1);
    userManager.AddOrder(order);
    ....
    userManager.SaveChanges();
}
Questions:
To me this seems to replicate the way I do business on the web, is there anything inherently wrong with this approach that I've missed?
Am I understanding correctly that all .Get<> calls using the activation block will produce "singletons" local to that block? What I mean is no matter how many times I ask for an OrderManager, it'll always give me the same one within the block. If OrderManager and UserManager compose the same repository underneath (say SpecialRepository), both will point to the same instance of the repository, and obviously all repositories underneath share the same instance of the Entity Framework context.
Both questions can be answered with yes:
Yes - this is service location, which you shouldn't do.
Yes, you understand it correctly.
A proper unit-of-work scope, implemented in Ninject.Extensions.UnitOfWork, solves this problem.
Setup:
_kernel.Bind<IService>().To<Service>().InUnitOfWorkScope();
Usage:
using (UnitOfWorkScope.Create())
{
    // resolves, async/await, manual TPL ops, etc.
}

Can I get the instances of alive objects of a certain type in C#?

This is a C# 3.0 question. Can I use reflection or the memory management classes provided by the .NET Framework to count the total live instances of a certain type in memory?
I can do the same thing using a memory profiler, but that requires extra time to dump the memory and involves third-party software. What I want is only to monitor a certain type, and I want a lightweight method which can easily go into unit tests. The purpose of counting the live instances is to ensure I don't have any unexpected live instances causing a "memory leak".
Thanks.
To do it entirely within the application you could do an instance-counter, but it would need to be explicitly coded and managed inside each class--there's no silver bullet that I'm aware of to let you query the framework from within the executing code to see how many instances are alive.
What you're asking for is really the domain of a profiler. You can purchase one or build your own, but it requires your application to run as a child process of the profiler. Rolling your own isn't an easy undertaking, by the way.
If you want to consider the instance counter it would have to be something like:
public class MyClass : IDisposable
{
    public MyClass()
    {
        // Note: the increment/decrement themselves are not synchronized (see the caveats below).
        ++_ClassInstances;
    }

    public void Dispose()
    {
        --_ClassInstances;
    }

    private static readonly object _ClassInstancesLock = new object();
    private static int _ClassInstances;

    private static int ClassInstances
    {
        get
        {
            lock (_ClassInstancesLock)
            {
                return _ClassInstances;
            }
        }
    }
}
This is just a really rough sample: no compilation tests, 0% guarantee of thread-safety (critical for this type of approach), and it leaves the door wide open for Dispose to be called and the instance counter to decrement while the object is never properly GC'd. To diagnose that bundle of joy you'll need, you guessed it, a professional profiler--or at least WinDbg.
Edit: I just noticed the very last line of your question and need to say that my above approach, as shoddy and failure-prone as it is, is almost guaranteed to deceive you about the true number of instances if you're experiencing a leak. The best tool, IMO, for attacking these problems is ANTS Memory Profiler. Version 5 is a double-edged sword in that they broke the Performance and Memory profilers into two separate SKUs (they used to be bundled together), but Memory Profiler 5.0 is absolutely lightning fast. Profiling these problems used to be slow as molasses, but they've gotten around that somehow.
Unless this is for a personal project with zero intent of redistribution, you should invest the few hundred dollars needed for ANTS--but by all means use its trial period first. It's a great tool for exactly this kind of analysis.
The only way I see to do this without any form of instrumentation is to use the CLR Profiling API to track object lifetimes. I'm not aware of any APIs available to managed code to do the same thing, and, as far as I know, the CLR doesn't keep a list of live objects anywhere (so even with the profiling API you have to build the data structures for that yourself).
VB.NET has a feature that lets you track objects in the debugger, but it actually emits additional code specifically for that (which basically registers all created objects in an internal list of weak references). You could do the same, e.g. using PostSharp to post-process your assemblies.
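If you can touch the class under test, a lightweight variant of that weak-reference idea is easy to roll by hand; a minimal sketch (InstanceTracker is an illustrative helper, not a framework API):
using System;
using System.Collections.Generic;

public static class InstanceTracker<T> where T : class
{
    private static readonly List<WeakReference> _instances = new List<WeakReference>();

    // Call this from the constructor of the type you want to watch.
    public static void Register(T instance)
    {
        lock (_instances)
        {
            _instances.Add(new WeakReference(instance));
        }
    }

    // Forces a collection and counts the instances that are still reachable;
    // useful in a unit test to assert that nothing is being kept alive.
    public static int CountAlive()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        lock (_instances)
        {
            _instances.RemoveAll(w => !w.IsAlive);
            return _instances.Count;
        }
    }
}
A test would then create and drop instances and assert that InstanceTracker<MyClass>.CountAlive() returns 0 afterwards; like the counter above, this only sees instances that were explicitly registered.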
