Selecting data from multiple tables in MVC - sql-server

I am in the middle of creating my first substantial .NET MVC application, and I have come across a problem I am not quite sure how to approach properly.
My application has quite a large database. For a number of features I need to select data from up to 5 tables and send it back to the view, and I am not sure how to go about that, since a view takes either a Model or a View Model.
I understand the concept of View Models quite well, but is creating one every time I need to send data from multiple tables the only solution to this? And if so, could anyone tell me the best practices when doing it?
Thanks in advance for any help

Yep, you'll have to have a View Model per view. I work on an application with about 600 views; we tried recycling view models and it ended in tears. Now there is a model for each view (mostly).
To send data from multiple tables you'll need to run joins on your tables and select into a view model.
Here I presume you use Entity Framework:
public class ComplexViewModel
{
    public String Name { get; set; }
    public String Category { get; set; }
    public String Level { get; set; }
}

using (var db = new MyDbContext())
{
    var result = from name in db.Names
                 join category in db.Categories on name.CategoryId equals category.CategoryId
                 join level in db.Levels on category.LevelId equals level.LevelId
                 select new ComplexViewModel()
                 {
                     Name = name.Name,
                     Category = category.CategoryName,
                     Level = level.LevelName,
                 };
    return result.ToList(); // materialise before the context is disposed
}
More examples of joins can be found in the Entity Framework documentation and are recommended to review.

Related

best event sourcing db strategy

I want to set up a small event sourcing lib.
I read a few tutorials online; everything is understood so far.
The only problem is that these different tutorials use two different database strategies, without any comment on why they chose the one they use.
So, I want to ask for your opinion.
And importantly: why do you prefer the solution you chose?
Solution 1 is the db structure where you create one table for each event.
Solution 2 is the db structure where you create only one generic table and save the events as a serialized string in one column.
In both cases I'm not sure how they handle event changes; maybe they create a whole new event.
Kind regards
I built my own event sourcing lib and I opted for option 2, and here's why:
You query the event stream by aggregate id, not event type.
Reproducing the events in order would be a pain if they were all in different tables.
It would make upgrading events a bit of a pain.
There is an argument for storing events per aggregate, but that depends on the requirements of the project.
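To illustrate the first two points, here is a minimal sketch (the StoredEvent shape and the in-memory store are hypothetical, standing in for your single events table): with one generic table, loading an aggregate's stream in order is a single filtered, ordered query.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical shape of the single generic events table (option 2).
public class StoredEvent
{
    public Guid AggregateId { get; set; }  // stream key
    public int Version { get; set; }       // position within the stream
    public string EventType { get; set; }  // e.g. "UserCreated"
    public string Payload { get; set; }    // serialized event (JSON, XML, ...)
}

public class InMemoryEventStore
{
    private readonly List<StoredEvent> _table = new List<StoredEvent>();

    public void Append(StoredEvent e) => _table.Add(e);

    // One query gives the whole stream in order; with a table per event
    // type you would have to union and re-sort across tables instead.
    public IEnumerable<StoredEvent> LoadStream(Guid aggregateId) =>
        _table.Where(e => e.AggregateId == aggregateId)
              .OrderBy(e => e.Version);
}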
I do have some posts about how event streams are used that you may find helpful:
6 Code Smells With Your CQRS Events and How to Avoid Them
Aggregate Root – How to Build One for CQRS and Event Sourcing
How to Upgrade CQRS Events Without Busting Your Event Stream
Solution 2 is the db structure where you create only one generic table and save the events as a serialized string in one column.
This is by far the best approach, as replaying events is simpler. Now, my two cents on event sourcing: it is a great pattern, but you should be careful because not everything is as simple as it seems. In a system I was working on we saved the stream of events per aggregate, but we still had a set of normalized tables, because we just could not accept that in order to get the latest state of an object we would have to run through all the events (snapshots help, but are not a perfect solution). So yes, event sourcing is a fine pattern: it gives you complete versioning of your entities and a full audit log, and it should be used just for that, not as a replacement for a set of normalized tables. But this is just my two cents.
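To make the snapshot remark concrete, here is a rough sketch (all names are hypothetical, and the event is reduced to a single numeric delta for brevity): you rehydrate from the latest snapshot and fold in only the events recorded after it, rather than replaying the whole stream.

using System;
using System.Collections.Generic;

// Hypothetical snapshot of an aggregate's state at some version.
public class AccountSnapshot
{
    public Guid Id { get; set; }
    public int Version { get; set; }
    public decimal Balance { get; set; }
}

public static class Rehydrator
{
    // Fold only the events appended after the snapshot was taken.
    public static AccountSnapshot Rehydrate(
        AccountSnapshot latestSnapshot,
        IEnumerable<(int Version, decimal AmountDelta)> eventsAfterSnapshot)
    {
        var state = latestSnapshot;
        foreach (var e in eventsAfterSnapshot)
        {
            state.Balance += e.AmountDelta; // "apply" one event
            state.Version = e.Version;
        }
        return state;
    }
}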
I think the best solution is to go with #2. And you can even save your current state together with the related event at the same time, if you use a transactional db like MySQL.
I really don't like or recommend solution #1.
If your concern with #1 is event versioning/upgrading, then declare a new class for each change. Don't be too lazy, or obsessed with reuse. Let the subscribers know about changes; give them the event version.
If your concern with #1 is querying/interpreting events, then you can easily push your events to a NoSQL db or event store later, at any time (from the original db).
Also, the pattern I use for my event sourcing lib is something like this:
public interface IEventModel { }

public interface IUserCreated : IEventModel
{
}

public class UserCreatedV1 : IUserCreated
{
    public string Email { get; set; }
    public string Password { get; set; }
}

public class UserCreatedV2 : IUserCreated
{
    // Full name added to user creation. Wrt issue: OA-143
    public string Email { get; set; }
    public string Password { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class EventRecord<T> where T : IEventModel
{
    public string SessionId { get; set; }     // Can be set in emitter.
    public string RequestId { get; set; }     // Can be set in emitter.
    public DateTime CreatedDate { get; set; } // Can be set in emitter.
    public string EventName { get; set; }     // Extracted from class or interface name.
    public string EventVersion { get; set; }  // Extracted from class name.
    public T EventModel { get; set; }         // Can be set in emitter.
}
So: make event versioning and upgrading explicit, both in the domain and in the codebase. Implement handling of new events in subscribers before deploying the origin of the new events. And, if it isn't required, don't allow external subscribers to consume domain events directly; put an integration layer or something like that in between.
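For example, a minimal "upcaster" for the classes above could make the upgrade explicit in one place, so subscribers only ever handle the latest version (the defaults chosen here are hypothetical):

public static class UserCreatedUpcaster
{
    // Convert the old event shape to the new one; subscribers then only
    // need to understand UserCreatedV2.
    public static UserCreatedV2 Upcast(UserCreatedV1 old) =>
        new UserCreatedV2
        {
            Email = old.Email,
            Password = old.Password,
            FirstName = string.Empty, // not present in V1; pick an explicit default
            LastName = string.Empty
        };
}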
I hope my thoughts are useful for you.
I read about an event-sourcing approach that consists of:
having two tables: aggregate and event;
based on your use case, either:
a. create a record in the aggregate table, generating an ID, version = 0 and an event type, and create an event in the event table;
b. retrieve events from the aggregate table by ID or event type, apply your business cases, then update the aggregate table (version and event type) and create an event in the event table.
Although this approach updates some fields in the aggregate table, it leaves the event table append-only and improves performance, as you always have the latest version of an aggregate in the aggregate table. A sketch of the write path is shown below.
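Here is a minimal sketch of that write path (row shapes and names are hypothetical; in a real implementation both writes would run in one transaction):

using System;

// Aggregate table row: holds the latest version of each aggregate.
public class AggregateRow
{
    public Guid Id { get; set; }
    public int Version { get; set; }
    public string LastEventType { get; set; }
}

// Event table row: append-only.
public class EventRow
{
    public Guid AggregateId { get; set; }
    public int Version { get; set; }
    public string EventType { get; set; }
    public string Payload { get; set; }
}

public static class TwoTableStore
{
    // Bump the aggregate's version and produce the matching event row.
    // Checking the expected version on UPDATE gives optimistic concurrency.
    public static EventRow AppendEvent(
        AggregateRow aggregate, string eventType, string payload)
    {
        var next = aggregate.Version + 1;
        var eventRow = new EventRow
        {
            AggregateId = aggregate.Id,
            Version = next,
            EventType = eventType,
            Payload = payload
        };
        aggregate.Version = next;           // UPDATE aggregate SET version = ...
        aggregate.LastEventType = eventType;
        return eventRow;                    // INSERT INTO event ...
    }
}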
I would go with #2, and if you really want an efficient way of searching by event type, I would just add an index on that column.
There are two strategies for accessing the data about a subject in this case: 1) current state and 2) event sequencing.
With current state, we process the events but keep only the last state of the subject.
With event sequencing, we keep the events and rebuild the current state by processing the events every time we need the state.
Event sequencing is more reliable, as we can trace everything that happened leading to the current state, but it's definitely not efficient. It's common sense to also keep intermediate states (snapshots), not only the last one, to avoid reprocessing all the events all the time. That way we get both reliability and performance.
Cryptocurrencies use event sequencing with local snapshots; "local" because blockchains are distributed and the data is replicated.

Use Views in Entity Framework

I am using Entity Framework on a project, but I am finding the large queries, especially those which use LEFT JOINs, very tedious to write and hard to debug.
Is it common, or accepted practice, to make use of views in the database, and then use those views within Entity Framework? Or is this a bad practice?
The question is not very clear, but there is no absolute right or wrong in software; it all depends on your case.
There is native support for views in EF Core, but there is no native support for views in EF 6, at least not in the current latest version, 6.3. There is, however, a workaround. In Database First, you would create your view via SQL as normal, and when you reverse engineer your database, EF will treat your view as a normal model and allow you to consume it as you would a normal table. In Code First it's a bit more tedious: you would create a POCO class that maps to the columns of your view. Note that you need to include an Id in this POCO class. For example:
public class ViewPOCO
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public Guid Id { get; set; }
    public string ViewColumn1 { get; set; }
    // ... etc.
}
You would then add this POCO class to your DbContext:
public class MyDbContext : DbContext
{
    public virtual DbSet<ViewPOCO> MyView { get; set; }
}
Now apply the usual migration command through the Package Manager Console:
Add-Migration <MigrationName> <ConnectionString and provider name>
In the generated migration's Up and Down methods you will notice that EF treats your model as a table. Clear all of that out and write your own SQL to create/alter the view in Up and to drop the view in Down, using the Sql function:
public override void Up()
{
    Sql("CREATE OR ALTER VIEW <ViewName> AS SELECT NEWID() AS Id, ...");
}

public override void Down()
{
    Sql("DROP VIEW <ViewName>");
}
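Once the migration has run, the view can be queried like any other DbSet (a small usage sketch based on the ViewPOCO and MyDbContext above):

using (var context = new MyDbContext())
{
    // The view is effectively read-only: query it as usual, but don't
    // expect SaveChanges to write through it.
    var rows = context.MyView
                      .Where(v => v.ViewColumn1 != null)
                      .ToList();
}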
First create your view, then update your .edmx file, then use it like this:
using (ManishTempEntities obj = new ManishTempEntities())
{
    var a = obj.View_1.ToList();
}

DateCreated or Modified Column - Entity Framework or using triggers on SQL Server

After reading the question linked below, I got a sense of how to set DateCreated and DateModified columns in Entity Framework and use them in my application. In the old SQL way, though, the trigger approach is more popular because it is more secure from a DBA's point of view.
So, any advice on which way is best practice? Should the columns be set in Entity Framework for the sake of application integrity? Or should I use triggers, as that makes more sense from a data security point of view? Or is there a way to write triggers from Entity Framework? Thanks.
EF CodeFirst: Rails-style created and modified columns
By the way, even though it doesn't matter much, I am building this app with ASP.NET MVC in C#.
Opinion: triggers are like hidden behaviour; unless you go looking for them, you usually won't realise they are there. I also like to keep the DB as 'dumb' as possible when using EF, since I'm using EF precisely so my team won't need to maintain SQL code.
For my solution (a mix of ASP.NET WebForms and MVC in C#, with the business logic in another project that also contains the DataContext):
I recently had a similar issue, and although my situation was more complex (Database First, so it required a custom TT file), the solution is mostly the same.
I created an interface:
public interface ITrackableEntity
{
    DateTime CreatedDateTime { get; set; }
    int CreatedUserID { get; set; }
    DateTime ModifiedDateTime { get; set; }
    int ModifiedUserID { get; set; }
}
Then I just implemented that interface on any entities I needed to (because my solution was Database First, I updated the TT file to check whether the table had those four columns, and if so added the interface to the output).
UPDATE: here are my changes to the TT file, where I updated the EntityClassOpening() method:
public string EntityClassOpening(EntityType entity)
{
    var trackableEntityPropNames = new string[] { "CreatedUserID", "CreatedDateTime", "ModifiedUserID", "ModifiedDateTime" };
    var propNames = entity.Properties.Select(p => p.Name);
    var isTrackable = trackableEntityPropNames.All(s => propNames.Contains(s));

    var inherits = new List<string>();
    if (!String.IsNullOrEmpty(_typeMapper.GetTypeName(entity.BaseType)))
    {
        inherits.Add(_typeMapper.GetTypeName(entity.BaseType));
    }
    if (isTrackable)
    {
        inherits.Add("ITrackableEntity");
    }

    return string.Format(
        CultureInfo.InvariantCulture,
        "{0} {1}partial class {2}{3}",
        Accessibility.ForType(entity),
        _code.SpaceAfter(_code.AbstractOption(entity)),
        _code.Escape(entity),
        _code.StringBefore(" : ", String.Join(", ", inherits)));
}
The only thing left was to add the following to my partial DataContext class:
public override int SaveChanges()
{
    // fix trackable entities
    var trackables = ChangeTracker.Entries<ITrackableEntity>();
    if (trackables != null)
    {
        // added
        foreach (var item in trackables.Where(t => t.State == EntityState.Added))
        {
            item.Entity.CreatedDateTime = System.DateTime.Now;
            item.Entity.CreatedUserID = _userID;
            item.Entity.ModifiedDateTime = System.DateTime.Now;
            item.Entity.ModifiedUserID = _userID;
        }
        // modified
        foreach (var item in trackables.Where(t => t.State == EntityState.Modified))
        {
            item.Entity.ModifiedDateTime = System.DateTime.Now;
            item.Entity.ModifiedUserID = _userID;
        }
    }
    return base.SaveChanges();
}
Note that I saved the current user ID in a private field on the DataContext class each time I created it.
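For completeness, a sketch of what that might look like (the constructor shape and class name are hypothetical; the answer only states that the user ID is kept in a private field):

public partial class MyDataContext : DbContext
{
    private readonly int _userID;

    // Capture the current user's ID when the context is created, so
    // SaveChanges can stamp it onto trackable entities.
    public MyDataContext(int userID)
    {
        _userID = userID;
    }
}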
As for DateCreated, I would just add a default constraint on that column, set to SYSDATETIME(), which takes effect when inserting a new row into the table.
For DateModified, personally, I would probably use triggers on those tables.
In my opinion, the trigger approach:
makes it easier: I don't have to worry about remembering to set DateModified every time I save an entity;
makes it "safer", in that it will also set DateModified if someone finds a way around my application and modifies data in the database directly (using e.g. Access or Excel).
Entity Framework 6 has interceptors which can be used to set created and modified. I wrote an article how to do it: http://marisks.net/2016/02/27/entity-framework-soft-delete-and-automatic-created-modified-dates/
I agree with marc_s - much safer to have the trigger(s) in the database. In my company's databases, I require every table to have Date_Modified and Date_Created columns, and I even have a utility function to automatically create the necessary triggers.
When using this with Entity Framework, I found I needed to use the [DatabaseGenerated] annotation on my POCO classes:
[Column(TypeName = "datetime2")]
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public DateTime? Date_Modified { get; set; }

[Column(TypeName = "datetime2")]
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public DateTime? Date_Created { get; set; }
I was attempting to use stored procedure mapping on an entity, and EF was creating @Date_Modified and @Date_Created parameters on my insert/update sprocs, giving the error:
Procedure or function has too many arguments specified.
Most of the examples show using [NotMapped], which will allow select/insert to work, but then those fields will not show up when that entity is loaded!
Alternatively, you can just make sure any sprocs contain the @Date_Modified and @Date_Created parameters, but that goes against the design of using triggers in the first place.

How to design domain with entity referencing entity on another sql server with NHibernate persistence

I need to design a domain that has two simple entities:
public class User
{
    public virtual int Id { get; protected set; }
    public virtual string Email { get; protected set; }
    public virtual Country Country { get; protected set; }
    ...
}

public class Country
{
    public virtual int Id { get; protected set; }
    public virtual string Name { get; protected set; }
    ...
}
It's all nice and clear in the domain world, but the problem is that User and Country are persisted in two different databases on two different servers (though both are MSSQL 2005 servers).
So, how should I correctly implement persistence of entities across different SQL servers in NHibernate?
Using IDs instead of object references? Yeah, that's simple, but it hits hard on the whole domain idea, making the domain objects more like DTOs. And it would require IUserRepository to get its hands on ICountryRepository to load a User entity.
Linked servers? Hmm... somehow I don't like them (distributed transactions and no XML columns). What should I be aware of when using them, and more importantly, how should I configure NHibernate to work effectively with linked servers?
Maybe some other solution?
I've heard of people using the schema property in a class mapping to contain the linked server name (like otherserver.dbo), but I don't know anyone who hasn't run into one problem or another when doing that.
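For reference, a minimal mapping-by-code sketch of that trick (the linked server and database names are placeholders, and the caveats above still apply):

using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

// Hypothetical: point the mapping's schema at a linked server so that
// queries are issued against OTHERSERVER.OtherDb.dbo.Country.
public class CountryMap : ClassMapping<Country>
{
    public CountryMap()
    {
        Schema("OTHERSERVER.OtherDb.dbo");
        Table("Country");
        Id(x => x.Id, m => m.Generator(Generators.Identity));
        Property(x => x.Name);
    }
}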
There are a few DDD bootstrapping frameworks that allow you to transparently map entities to different databases (resulting in multiple ISessionFactories, which they will manage for you). NCommon is one I would recommend. This assumes, however, that Country only exists in one database and User only exists in another.
As for transactions... well, if you use a TransactionScope and configure the DTC, that might work. NCommon uses a UnitOfWork API that also wraps TransactionScope.
You would have to change User so that Country is just an ID. Here's why: you'd end up with two session factories, one with a mapping for Country and the other with a mapping for User. If you don't make that change, NHibernate will complain that there is no mapping for Country when you save User (since they are stored in two different DBs).
You could instead instruct NHibernate to ignore the Country property and keep Country, so your domain doesn't change. However, the next time you load User from the database, Country will be null. A sketch of the ID-based change is below.
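Something like this (a hedged sketch; the property name is mine):

// User stores only the foreign key; the repository/service layer resolves
// the Country entity from the other database when it is actually needed.
public class User
{
    public virtual int Id { get; protected set; }
    public virtual string Email { get; protected set; }
    public virtual int CountryId { get; protected set; } // was: Country Country
}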
You could use NHibernate.Shards from NHContrib.

Playframework Siena Filtering and Ordering

This is my first question on any of these websites, so pardon my unprofessionalism.
I use Play Framework with the Siena module (on GAE) and I came across the following problem:
Given 3 entities:
public class Meeting extends Model {
    @Id
    public Long id;
    public String place;
    @Owned
    Many<MeetingUser> users;
    ...
}

public class User extends Model {
    @Id
    public Long id;
    public String firstName;
    public String lastName;
    @Owned
    Many<MeetingUser> meetings;
    ...
}

public class MeetingUser extends Model {
    @Id
    public Long id;
    public Meeting meeting;
    public User user;
    ...

    public User getUser() {
        return Model.all(User.class).filter("id", user).get();
    }

    public Meeting getMeeting() {
        return Model.all(Meeting.class).filter("id", meeting).get();
    }
}
For instance, I am listing a meeting and all of its users:
public static void meetingInfo(Long meetingId) {
    Meeting meeting = Model.all(Meeting.class).filter("id", meetingId).get();
    List<MeetingUser> meetingusers = meeting.users.asList();
    List<User> users = new ArrayList<User>();
    for (MeetingUser mu : meetingusers) {
        users.add(mu.getUser());
    }
    render(users);
}
This works (is there any better way here?). However, when it comes to filtering (especially dynamic filtering on many fields), I cannot use the Query's filter method on MeetingUser, because I need to filter on a field of a MeetingUser's field (firstName). The same problem arises for ordering. I need a solution to both problems.
I hope my problem is clear and I appreciate any kind of help here.
Remember that you are on GAE, which is a NoSQL DB, so you can't do join requests as in an RDBMS.
Yet this is not really the problem you have, so this was just to be sure you are aware of it ;)
So if you want to find the users with a given first name in a given meeting, can you try the following:
List<MeetingUser> meetingusers = meeting.users.asQuery().filter("firstname", "XXX").fetch();
(you can also order)
Nevertheless, knowing that you can't join, remember that you can't write a query searching for the meetings whose users have the firstname XXX, as it would require a join, which doesn't exist on GAE. In that case you need to change your model following the NoSQL philosophy, but that is another subject.
regards
Let's try to give a way to do what you want...
Your relation is a many-to-many, which is always the worst case :)
You want to filter Meeting by a User's firstname. That requires a join request, which is not possible on GAE. In this case, you must change your model by denormalizing it (sometimes using redundancy too) and manage the join by yourself; you effectively do the job of the RDBMS yourself. It seems overkill, but in fact it's quite easy. The only drawback is that you must perform several requests to the DB. NoSQL means no schema (and no join), so there are a few drawbacks, but it allows you to scale and to manage huge data loads... it depends on your needs :)
The choice you made to create MeetingUser as a "join" table, which is a kind of denormalization, is good in GAE because it allows you to manage the join yourself.
Solution:
// fetch users by firstname
List<User> users = Model.all(User.class).filter("firstName", "John").fetch();

// fetch meetingusers associated to these users (verify that the "IN" operator
// works, because I haven't used it for a long time and don't remember whether
// it accepts this syntax)
List<MeetingUser> meetingusers = Model.all(MeetingUser.class).filter("user IN", users).fetch();

// now you must fetch the whole meetings, because in MeetingUser only the
// Meeting ID is stored (the other fields are null or 0)
List<Meeting> meetings = new ArrayList<Meeting>();
for (MeetingUser mu : meetingusers) {
    meetings.add(mu.meeting);
}

// use the batch feature to fetch all the objects
Meeting.batch(Meeting.class).get(meetings);

// you have your meetings
Hope this helps!
