I am trying to build a WPF application using the MVVM pattern. It would be my first one.
In my database I have 2 tables: a reports table and a columns table. Basically I just want to store the skeleton of reports by storing the name and some minor info (header row...) and save all the columns in the other table.
I am wondering what would be the best approach when creating my model:
Should I create 2 models (Report and Column), one for each table, and give Report an ObservableCollection of Columns?
Or only 1 model, with Column as a plain POCO and a regular list of Columns?
If I go with the 2 models approach, should I implement 2 ViewModels, or can I group everything in one ViewModel, since I will work with only one report in the view (like the edit report view)?
Hope I was able to clearly explain my situation.
Just do each separately (i.e. one View/ViewModel/Model per table). You can refactor common items later (and/or as you're building).
ViewModels in MVVM usually have a one-to-one relationship with Views, unlike ASP.NET MVC. To decide how many Views/ViewModels you need, start by thinking about the interface. A ViewModel is a model of the UI, so if you have one screen in your app, start with one ViewModel class; you can split it up later if it gets too big.

Models are a little different; it depends on how you are going to interact with them. I'm not sure what you plan to do with them. I once stored a report definition in a database, and it may turn out that you don't really need two tables at all, or even a relational database: you could just save a blob of serialized XML. Either way, after deserializing it back into objects you will have at least two model classes, Column and Report. The model is the lowest level of abstraction; without those two classes you can't distinguish the two entities.
I am looking for advice on the best way to go about modeling my database
Let's say I have three entities: meetings, habits, and tasks. Each has its own unique schema; however, I would like all three to have several things in common.
They should all contain calendar information, such as a start_date, end_date, recurrence_pattern, etc.
There are a few ways I could go about this:
1. Add these fields to each of the entities.
2. Create an Event entity and have a foreign key field on each of the other entities, pointing to the related Event.
3. Create an Event entity and have 3 foreign key fields on the Event (one for each of the other entities). At any given time only 1 of those fields would have a value and the other 2 would be null.
4. Create an Event entity with 2 fields, related_type and related_id. The related_type value for any given row would be one of "meetings", "habits", or "tasks", and the related_id would be the actual id of that entity type.
I will have separate API endpoints that access meetings, habits, and tasks.
I will need to return the event data along with them.
I will also have an endpoint to return all events.
I will need to return the related entity data along with each event.
Option 4 seems to be the most flexible, but it means giving up real foreign keys.
I'm not sure if that is a problem or hinders performance.
I say it's flexible because if I add a new entity, let's call it "games", the event schema will already be able to handle it.
When creating a new game, I would create a new event and set the related_type to "games".
I'm thinking the events endpoint can join on the related_type and would also require little to no updating.
Additionally, this seems better than option 3 in the case that I add many new entities that have event data.
With option 3, a new column would have to be added to the event for each of those entities.
Options 1 and 2 could work fine; however, I could not just query for all events, I would have to query each of the other entities separately.
Are there any best practices around this scenario? Any other approaches?
In the end performance is more important than flexibility. I would rather update code than sacrifice performance.
I am using Django, and maybe someone has some tips around this; however, I am really looking for best practices around the database itself and not the API implementation.
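Just to make option 4 concrete: since I'm on Django anyway, it maps almost directly onto Django's contenttypes framework, which stores the type as a foreign key to a ContentType row instead of a raw string. This is only a rough sketch; the model and field names below are illustrative, not my real schema, and it assumes an ordinary Django project with django.contrib.contenttypes installed:

    from django.contrib.contenttypes.fields import GenericForeignKey, GenericRelation
    from django.contrib.contenttypes.models import ContentType
    from django.db import models

    class Event(models.Model):
        start_date = models.DateTimeField()
        end_date = models.DateTimeField(null=True, blank=True)
        recurrence_pattern = models.CharField(max_length=100, blank=True)

        # Option 4's related_type / related_id pair, exposed as one virtual field.
        related_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
        related_id = models.PositiveIntegerField()
        related_object = GenericForeignKey("related_type", "related_id")

    class Meeting(models.Model):
        title = models.CharField(max_length=200)
        # Reverse accessor so meeting.events.all() works; Habit and Task
        # would declare the same relation.
        events = GenericRelation(
            Event,
            content_type_field="related_type",
            object_id_field="related_id",
        )

The trade-off is the one already mentioned: the event schema stays stable when new entity types are added, but related_id is not a real foreign key, so the database can't enforce referential integrity on it.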
I would keep it simple and choose option 1. Splitting up data in more tables than necessary for proper normalization won't be a benefit.
Perhaps you will like the idea of using PostgreSQL table inheritance. You could have an empty table event, and have your three tables inherit from that table. That way, they automatically have all the columns from event, but they are still three independent tables.
What is the best practice (and why do you consider it the best) for using the same dimension in multiple data models (either multidimensional or tabular)?
When the same dimension needs to appear in multiple models ("region", "trade net", "product", whatever), should I use a single database view as the data source for the corresponding dimension in each data model, or create multiple database views based on the same dimension table and use a "personal" database view as the source for the dimension in each particular data model?
Unless you are worried about tracking usage by SSAS model, I can't think of a good reason to duplicate the view for each tabular model. If it truly is a shared dimension, it would be easier to have one view. If changes are made to the view, they would then be reflected in all tabular models.
I agree on using views in general to feed a tabular model. They provide a level of abstraction so you can limit columns, add calculations, and control what goes into the tabular model with no surprises from new columns being added.
I'm a newbie to designing class diagrams.
As my application works as a REST API, I would like to use the DTO and DAO design patterns. For the user registration module, the DB contains 3 tables: user signon, profile, and address.
Do I need to create 3 DTOs and corresponding DAOs to insert/update user signon, profile and address?
If so, what happens if in the future only one table is kept instead of three and the other two tables are dropped?
Whatever design pattern you follow, data modelling is entirely up to you. Your design pattern should be based on your data modelling and your needs. It's not that your data model will depend on the design pattern; it depends on your needs.
You can create whatever DTO objects you like. However, both your database design and your DTO design are driven by the concepts in your system (user/company/address, etc.); this is often called the domain.
You'll often find that the two are very similar; after all, they both represent the same domain!
As to whether you need different DTOs for different calls, that really depends on you. Do you need a different class to represent an insert versus an update call? What's the difference? Often the update has an id (whereas the insert hasn't had one assigned yet). So why not have two classes, where the update inherits from the insert but adds the id property?
As for delete DTOs, you can do these as either an update or just an id. After all, why bother populating an entire object you're about to delete? Personally I'd just use
DeleteUser(int id);
Much easier!
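To show the shape of that, here's a quick sketch using Python dataclasses, purely for brevity; the class and field names are made up, and the same idea carries over to whatever language your API is actually written in:

    from dataclasses import dataclass

    @dataclass
    class InsertUserDto:
        signon: str
        profile: str
        address: str

    @dataclass
    class UpdateUserDto(InsertUserDto):
        # The only difference from the insert DTO: the record already has an id.
        id: int = 0

    def delete_user(user_id: int) -> None:
        # Delete needs nothing but the id, as in DeleteUser(int id) above.
        ...

One nice side effect: if the three tables later collapse into one, the DTOs don't have to change at all; only the DAO mapping behind them does.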
I'm trying to join two tables that are in two different databases. These databases don't necessarily have to be the same. So I'm trying to see if I can make a model from one table and another model from the other table, and then join the two models. Or if you know another way to join two models from different databases, that'd be great too.
It is not possible for a Model to use two Tables/Databases
A model is generally an access point to the database, and more specifically, to a certain table in the database. By default, each model uses the table whose name is the plural of its own, i.e. a 'User' model uses the 'users' table.
Source
The second answer says it is possible, but I don't think it will work the way you want it to.
However, it is possible for a Model to use a different Database.
You can then create a relationship to link them.
This can be achieved easily with useDbConfig (just tested).
Another way is to use a DB view.
We are working on a mapping application that uses the Google Maps API to display points on a map. All points are currently fetched from a MySQL database (holding some 5M+ records). Currently all entities are stored in separate tables with attributes representing individual properties.
This presents the following problems:
1. Every time there's a new property we have to make changes in the database, the application code, and the front-end. This is all fine, but some properties have to be added for all entities, and that's when it becomes a nightmare to go through 50+ different tables and add new properties.
2. There's no way to find all entities which share a given property, e.g. no way to find all schools, colleges, or universities that have a geography dept (without querying schools, unis, and colleges separately).
3. Removing a property is equally painful.
4. There are no standards for defining properties in individual tables. The same property can exist with a different name or data type in another table.
5. There is no way to link or group points based on their properties (somewhat related to point 2).
We are thinking of redesigning the whole database, but without a DBA's help and lacking professional DB design experience we are really struggling.
Another problem we're facing with the new design is that there are a lot of shared attributes/properties between entities.
For example:
An entity called "university" has 100+ attributes. Other entities (e.g. hospitals, banks, etc.) share quite a few attributes with universities, for example ATM machines, parking, cafeteria, and so on.
We don't really want to keep properties in a separate table [and then link them back to entities with foreign keys] as it will require us to add/remove them manually. Also, generalizing properties will result in groups containing 50+ attributes, and not all records (i.e. entities) require those properties.
So keeping that in mind, here's what we are thinking for the new design:
Have a separate table for each entity containing some basic info, e.g. id, name, etc.
Have 2 tables, attribute_type and attribute, to store property information.
Link each entity (or table, if you like) to attribute using a many-to-many relation.
Store addresses in a different table called addresses and link it to entities via foreign keys.
We think this will allow us to be more flexible when adding, removing or querying on attributes.
This design, however, will result in an increased number of joins when fetching data; e.g. to display all "attributes" for a given university we might need a query with 20+ joins to fetch all related attributes in a single row.
We desperately need some opinions on possible flaws in this design approach.
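For reference, here is a minimal sketch of what we have in mind (table and column names are illustrative, not our real schema); it uses Python's built-in sqlite3 only to keep the example self-contained, the real database is MySQL:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE university (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );

    CREATE TABLE attribute_type (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL               -- e.g. 'boolean', 'number', 'text'
    );

    CREATE TABLE attribute (
        id                INTEGER PRIMARY KEY,
        name              TEXT NOT NULL, -- e.g. 'atm_machines', 'parking'
        attribute_type_id INTEGER NOT NULL REFERENCES attribute_type(id)
    );

    -- Many-to-many link between an entity table and attribute,
    -- carrying the value for that particular entity.
    CREATE TABLE university_attribute (
        university_id INTEGER NOT NULL REFERENCES university(id),
        attribute_id  INTEGER NOT NULL REFERENCES attribute(id),
        value         TEXT,
        PRIMARY KEY (university_id, attribute_id)
    );
    """)

    # All attributes of one university come back as rows.
    rows = conn.execute("""
        SELECT a.name, ua.value
        FROM university_attribute ua
        JOIN attribute a ON a.id = ua.attribute_id
        WHERE ua.university_id = ?
    """, (1,)).fetchall()

(The sample query returns the attributes as rows; the 20+ joins we're worried about only appear if we pivot them into a single wide row.)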
Thanks for your time.
In trying to generalize your question without more specific examples, it's hard to truly critique your approach. If you'd like some more in-depth analysis, try whipping up an ER diagram.
If your data model is changing so much that you're constantly adding/removing properties and many of these properties overlap, you might be better off using EAV (entity-attribute-value).
Otherwise, if you want to maintain a relational approach but are finding a lot of overlap with properties, you can analyze the entities and look for abstractions that link to them.
Ex) My Db has Puppies, Kittens, and Walruses all with a hasFur and furColor attribute. Remove those attributes from the 3 tables and create a FurryAnimal table that links to each of those 3.
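A tiny sketch of that abstraction, reading "links to" as each animal table referencing a row that holds the shared attributes (again sqlite3 just to keep it runnable; names come from the example above):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE furry_animal (
        id        INTEGER PRIMARY KEY,
        has_fur   INTEGER NOT NULL,    -- boolean
        fur_color TEXT
    );

    CREATE TABLE puppy (
        id              INTEGER PRIMARY KEY,
        name            TEXT NOT NULL,
        furry_animal_id INTEGER REFERENCES furry_animal(id)
    );
    -- kitten and walrus would follow the same pattern.
    """)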
Of course, the simplest answer is to not touch the data model. Instead, create views on the underlying tables that you can use to address (5), (4), and (2).
(1) need not be an issue. There should be one place where your objects are defined, and everything else should be generated/derived from that. Just refactor your code until this is the case.
(2) is solved by having a metamodel, where you describe which properties live where. This is probably needed for (1) too.
You might want to avoid the problem entirely by programming this in Smalltalk with Seaside on a GemStone object-oriented database. Then you can just have objects with collections and you don't need so many joins.