We are working on a POC with the following architecture (MVVM):
WPF (client) + WCF + Model (DataAccess) + ADO.NET Entity Framework 4.0 (with SQL Server 2008 R2 as the DB)
All are separate projects.
In the DataAccess layer we have created different entity models (edmx) based on functionality: the tables under a particular flow are grouped into their own entity model. We use self-tracking entities to communicate back and forth with the WPF client through the WCF service. With a single model everything works fine, but once we created multiple models a few issues appeared. The models share a few duplicate tables/entities. The two problems are:
1) When we access entities from different models, multiple "ObjectChangeTracker" types are generated.
E.g.
CompanyModel (edmx) - Company (entity) - ObjectChangeTracker, ObjectState
ProductModel (edmx) - Customer (entity) - ObjectChangeTracker1, ObjectState1
OrderModel (edmx) - Order (entity) - ObjectChangeTracker2, ObjectState2
Is there any way to avoid this?
2) There are a few tables which are shared across the models; e.g. Company (entity) is used in all the models above. At compile time this does not throw any error, but at run time it fails with: "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type 'Company'". To resolve this, we renamed the entities with a prefix to make them unique. Is there any other way to resolve this without changing the entity names within the same assembly?
Thanks in advance; I'd appreciate it if anyone has an approach for these issues.
Thanks,
Kiran
1) Do you always enable the ChangeTracker when you fetch entities from your data access layer?
I guess you can't avoid what is generated unless you use the POCO template. It takes some more work in places, but you end up with lighter objects; you then have to manage the entity state yourself. I think it's fine to stay with self-tracking entities, but since you are going through WCF you should change the collection type to FixupCollection, which, as I remember, works better with a WCF service. HINT: Don't forget to disable lazy loading, or the serialized entities will drag along all the child records you might not want.
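For example, turning lazy loading off before returning entities from the service; this is only a sketch, assuming a generated ObjectContext named CompanyEntities with a Companies entity set (both names are placeholders for your model):
using System.Collections.Generic;
using System.Linq;

public List<Company> GetCompanies()
{
    // CompanyEntities / Companies are placeholder names for your generated model.
    using (var context = new CompanyEntities())
    {
        // Keeps the serializer from dragging child records across the wire.
        context.ContextOptions.LazyLoadingEnabled = false;
        return context.Companies.ToList();
    }
}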
2) Try separating the models into different assemblies; it is better practice and I think it will get you past these problems. I have worked this way and it's fine.
Hope I helped.
I just hit an interesting wall. We are in the process of developing a reasonably sized application that has a ton of historical data. The plan is to use materialized views with nightly/weekly refreshes, as applicable, to reduce the impact on the server of large data-set reports.
I know that in Cake 2, I could just define a new view by creating the appropriate model and then running with it. I could even bake the model as a starting point.
However, with Cake 4, I am getting an error when I try to use it. Specifically, the error that gets raised is:
"Cannot describe v_12month_summary. It has 0 columns."
This happens whether I bake or manually create the entity and table model files.
Is there a way to use materialized, or even regular, views with CakePHP 4? Essentially, I would like to be able to do:
$yearSummary = $this->fetchTable('V12monthSummary')->find()->where(['month' => 6, 'year' => 2021]);
I have an Umbraco 7 website with MVC.
I want to perform some custom action on the database.
As I understand it, I should be using DbContext to connect.
I have referenced System.Data.Entity to get to the DbContext class. However, when I try to use DbContext I get an error saying:
The type or namespace name 'DbContext' could not be found (are you missing
a using directive or an assembly reference?)
In my models namespace:
public class umbracoDbDSN : DbContext
{
//some code
}
Can you let me know what I am missing?
Thanks
You are mixing things up. Umbraco uses PetaPoco as its ORM, not Entity Framework. You don't need to reference System.Data.Entity, nor do you need DbContext.
However, if you have existing data layer logic that you need to incorporate from a legacy system, you might need to continue with your code above; in that case, look for Entity Framework tutorials on the internet to continue your journey. (Note that DbContext ships in EntityFramework.dll from the EntityFramework NuGet package, not in the System.Data.Entity assembly you referenced, which is why the type cannot be found.)
If you are not dragging legacy stuff along, then the question is: do you want to run queries against custom tables, or do you want to query the Umbraco tables for some reason?
Let's start with the last one, querying the Umbraco tables:
If you want to connect to the Umbraco SQL tables, I start wondering why. There is a ContentCache, which is blazingly fast, and it lets you query everything you need from the content section very quickly. There are APIs for relationships, media, users, members and everything else you need. So the question remains: why would you ever connect to the Umbraco tables directly?
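For example, reading a node through the cache instead of SQL; just a sketch using the v7 UmbracoHelper API, where the node id and property alias are made up:
using Umbraco.Web;

// The id 1234 and the "title" property alias are hypothetical.
var helper = new UmbracoHelper(UmbracoContext.Current);
var node = helper.TypedContent(1234);
var title = node.GetPropertyValue<string>("title");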
However, if you want to store data in custom tables, I would read this article by Warren: http://creativewebspecialist.co.uk/2013/07/16/umbraco-petapoco-to-store-blog-comments/
The idea is simple: you reuse the existing code base to extend Umbraco's behaviour without storing stuff in the content section.
Below is a simple example of reusing the database connection while querying a properly created custom table:
var db = ApplicationContext.Current.DatabaseContext.Database;
// Fetch a collection of contacts from the db.
var listOfContacts = db.Fetch<Contact>(new Sql().Select("*").From("myContactsTable"));
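For completeness, a sketch of the Contact POCO the Fetch call above assumes; the table and column names are made up, and the attributes are the PetaPoco annotations bundled with Umbraco (check the exact namespace in your version):
using Umbraco.Core.Persistence;

// Hypothetical POCO mapped to "myContactsTable"; adjust names to your schema.
[TableName("myContactsTable")]
[PrimaryKey("Id")]
public class Contact
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}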
Some days I love my DBAs, and then there is today...
In a Grails app, we use the database-migration plugin (based on Liquibase) to handle migrations etc.
All works lovely.
I have been informed that there is a set of DB administrative metadata that we must support on every table. This information is of zero use to the app.
Now, I can easily update my models to accommodate this, but that answer is ugly.
The problem is that at each migration, Liquibase (via the database-migration plugin) now complains that the schema and the model are out of sync.
Is there any way to tell Liquibase (or GORM) that columns x, y, z are to be ignored?
What I am trying to avoid is changesets like this:
changeSet(author: "cwright (generated)", id: "1333733941347-5") {
    dropColumn(columnName: "BUILD_MONTH", tableName: "ASSIGNMENT")
}
Which tries to bring the schema back in line with the model. Being able to annotate those columns as not applying to the model would be a good thing.
Sadly, you're probably better off defining your own mapping block and taking control of the data mapper (which is essentially what Hibernate is) yourself at this point. If you need to control how the database-migration plugin handles migrations, you might want to look at the source or raise an issue on its JIRA. Naively, mapping your columns explicitly in the domain model should let you bypass the unnecessary columns from the DB.
For past projects (the last few have been web apps using ASP.NET MVC) we created a service that caches our reference tables (as required), used primarily for dropdown lists.
Now I'm working on a desktop application, an upgrade from VB6/Sybase to VB.NET/SQL Server.
I'm trying out WPF.
I started down the same path, building up my DAL with one entity for each reference table.
I'm at the stage now where I want to set up the business layer (some reference tables can be edited),
and I'm not sure if I should follow the same process, which is to use a ReferenceTableService to "manage" the reference tables (it interacts with the DAL and the controller).
This will be an application that sits on a share and is run by multiple users.
What's the best way to deal with the reference tables? Caching them doesn't seem to be an option. Should I simply load them as each user opens a new form in the application? Perhaps using a "ReferenceTableService"?
In this case, the reference table service is a thin layer in the application, not a process running elsewhere.
I haven't done much WPF (it will be interesting to see what the WPF gurus think), but I think your existing approach is sound and I don't see why you should deviate from it.
Loading up on app start sounds reasonable; you just have to think about the expected lifetime of a user session vs the expected frequency of changes to the reference data.
Caching: if the data comes from a central service, you could always introduce caching there.
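As a sketch of that thin in-app service (every type name here is hypothetical; IReferenceTableDal stands in for whatever your DAL exposes): load each table once per session and invalidate after an edit.
using System.Collections.Concurrent;
using System.Collections.Generic;

// Hypothetical DAL seam: loads all rows of one reference table.
public interface IReferenceTableDal
{
    IList<ReferenceItem> LoadAll(string tableName);
}

// Hypothetical lookup row.
public class ReferenceItem
{
    public int Id { get; set; }
    public string Description { get; set; }
}

public class ReferenceTableService
{
    private readonly IReferenceTableDal _dal;
    private readonly ConcurrentDictionary<string, IList<ReferenceItem>> _cache =
        new ConcurrentDictionary<string, IList<ReferenceItem>>();

    public ReferenceTableService(IReferenceTableDal dal)
    {
        _dal = dal;
    }

    // First call per table hits the DAL; later calls are served from memory.
    public IList<ReferenceItem> Get(string tableName)
    {
        return _cache.GetOrAdd(tableName, name => _dal.LoadAll(name));
    }

    // Call after an edit so the next Get reloads fresh data.
    public void Invalidate(string tableName)
    {
        IList<ReferenceItem> removed;
        _cache.TryRemove(tableName, out removed);
    }
}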
I need to build an offline database application on WP7.
The app is simple: it is about taking orders from our clients and then transferring them to the main server (MS SQL).
I have spent days reading about the existing technologies, but I am still confused. Which is right for this project?
Sync Framework.
Looks good, but as I understand it, it provides single tables with no references between them; all the references I would have to build on the client side. Sad.
Entity Framework on the server side.
But I have no clue what I can use on the client side. Is there a way to serialize an entity object to isolated storage, then restore it and continue working with it? Maybe I can use Sync Framework for this, but the scheme becomes strange then, kind of one-way.
Working with WCF & XML. This is the simplest for me: a lot of code and conversion, but in this case I understand the data flow. On the other hand, I already have an app built on pure SQL queries; I want to be more advanced.
Using external databases (siaqodb, for example).
Which one? siaqodb supports a "sync provider", but it doesn't support references between objects, so I would have to build them myself. Any gain? I don't know.
Is there another way to build such apps? Please point it out.
If this has to be done offline, then I would generally use something like:
storing the minimal amount of required data within isolated storage, using a WP7-specific database like Sterling
using either a new REST or a new RIA/WCF service, with objects/functions you define, in order to provide the required data synchronisation
I think this is your option 3?
I've never really liked automatic data synchronisation. I just find it easier to code the sync and handle the error cases myself; this is especially true if your WP7 client app uses quite a small footprint of data relative to the larger main server DB.
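For the storage half, a sketch of hand-rolled persistence to isolated storage; the Order DTO and file name are made up, while the IsolatedStorageFile and DataContractSerializer calls are the stock Silverlight/WP7 APIs:
using System.Collections.Generic;
using System.IO;
using System.IO.IsolatedStorage;
using System.Runtime.Serialization;

// Hypothetical DTO shared with the WCF service.
public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
}

public static class OfflineStore
{
    private const string FileName = "pendingOrders.xml";

    // Persist pending orders so they survive app restarts while offline.
    public static void Save(List<Order> orders)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.CreateFile(FileName))
        {
            new DataContractSerializer(typeof(List<Order>)).WriteObject(stream, orders);
        }
    }

    // Reload them at startup; push to the server when a connection appears.
    public static List<Order> Load()
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            if (!store.FileExists(FileName))
                return new List<Order>();
            using (var stream = store.OpenFile(FileName, FileMode.Open))
            {
                return (List<Order>)new DataContractSerializer(typeof(List<Order>)).ReadObject(stream);
            }
        }
    }
}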