When I started coding an iPhone app I made the mistake of not adding a prefix to my class names, and as a result I now have a conflict (a Core Data class called Category). The project built fine until the recent Xcode update, and only now do I realize the mistake.
Is it possible to rename Core Data classes and still have a working app after the update?
If I add a new version of the data model and rename the class, the app updates, but the old table seems to be deleted and a new (empty) one created, so all the relationships are subsequently broken. I would like to preserve the data while making the change.
In Java EE you can override the table name if I remember correctly, so I could keep using the old class name for the table. Is there any such possibility with Core Data?
Thanks in advance! I have to find a way to update the database without breaking all the apps already out in the field.
Actually, it is documented quite well by Apple:
If you rename an entity or property, you can set the renaming identifier in the destination model to the name of the corresponding property or entity in the source model. You typically set the renaming identifier using the Xcode Data Modeling tool (for either an NSEntityDescription or an NSPropertyDescription object). In Xcode, the renaming identifier is in the User Info pane of the Detail Pane, below the version hash modifier (see The Browser View in Xcode Tools for Core Data). (Mac OS X Developer Library)
But in my case it actually seems to work to just change the class name and leave the rest of the model alone.
I am trying to build up knowledge and use cases around SSDT database development and deployments, and I am struggling with some deployment issues.
Specifically with nested views: for some unknown reason, when I attempt to deploy / publish the files in the project to a local or live database, the references in the views get mixed up.
In this project I have the following views (example):
View1
View2
View3
View1 references View2, and View3 references View1.
Building the project works fine. However, when I try to publish the database, either by generating a dacpac from a snapshot and publishing it, or by letting Visual Studio generate an update script after (or without) comparing schemas, I end up with an update script that tries to create the views in what seems to be the order in which they are stored in the project.
In this case View1 -> View2 -> View3. This means the publish fails with reference errors: a view cannot be created if the view it references does not exist yet (see the sketch below).
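A minimal sketch of the dependency chain (the view bodies and the table name are made up for illustration). The only order that works is View2, then View1, then View3, yet the generated script starts with View1:

-- View1 depends on View2, and View3 depends on View1,
-- so creation must go View2 -> View1 -> View3.
CREATE VIEW [dbo].[View2] AS SELECT Id, Name FROM [dbo].[SomeTable];
GO
CREATE VIEW [dbo].[View1] AS SELECT Id, Name FROM [dbo].[View2];
GO
CREATE VIEW [dbo].[View3] AS SELECT Id, Name FROM [dbo].[View1];
GO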
I have tried several options, adding the dacpac as a reference to the project in various ways (Same Database; Same Server, Different Database with and without a database parameter), but in many cases I end up with a SQL71561 / SQL71508 error, which was another PITA to solve.
Online I can't find any good sources that explain how to work around this issue or how this is supposed to work.
Hopefully I can get some help here. If you need extra input from my side or want me to try something, let me know.
The issue has been resolved by new insights. While building demo code to share with the SO community, I accidentally found the solution, because I needed to strip out sensitive parts of the data model. Let me elaborate on what the issue was.
There turned out to be two possible solutions:
Configuration of Database Project / Solution
The way references work
I'll share some insights on both matters.
Configuration of Database Project / Solution
The Visual Studio solution contained a single project in which all views were placed. The actual tables and other database items were kept in a separate solution / project.
Solution1
Project1
View1
View2
View3
Solution2
Project1
Tables
Security
Schemas
Etc...
The views themselves used three-part identifiers ([Database].[Schema].[Table/View]), both for items inside the project (the views) and for items outside the project (the tables etc.), roughly as sketched below.
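For illustration only (the database, object, and column names are made up), a view in the views-only project looked along these lines:

-- Three-part identifier even though View2 sits in the same database.
CREATE VIEW [dbo].[View1] AS
SELECT v.Id, v.Name
FROM [MyDatabase].[dbo].[View2] AS v;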
Using that one separate project with just the views led to missing references: it could not find the other views, nor the tables (more on that under references below).
One solution to this issue was to make sure both the views and the referenced tables are in the same solution / project. Even with three-part identifiers, Visual Studio effectively ignores the database part because all items exist in the same project / solution, and it detects the dependencies correctly this way.
The way references work
The other way to solve it was to use references the right way in Visual Studio, which is the second possible solution.
Consider the earlier example, where the views were in a different solution than the other elements and references went missing. Adding the dacpac as a database reference with the setting Same Database, however, led to conflicting references and SQL71508: element already exists in the model. That makes sense, because the view already exists in the referenced dacpac and we are trying to create a new view with the same name that effectively references itself in the dacpac: the three-part identifier is resolved as a reference into the dacpac.
When using the dacpac setting Same Server, Different Database, the mixed-up references are resolved, because the three-part identifiers are treated as external references: Visual Studio thinks you are creating a local copy of a view that looks at the external dacpac. In other words, it will not detect the nested view, because it thinks you are referencing a separate database that is not inside the project.
Building the project this way does not produce errors and deployment works. However, since Visual Studio thinks you are referencing an external data source (in the form of a dacpac), it does not see the references to the other local views.
The solution to this (at least, this worked for us) is to use two-part identifiers in our views whenever we need a local reference to another view. That way Visual Studio looks at the other files inside the project instead of at the referenced dacpac.
Since it now detects the references to the other local views, it builds correctly, picks up the dependencies between the views inside the local project, and generates a correct creation order for all views (sketched below).
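A minimal sketch of the change, again with made-up names: references to views inside the project become two-part, while objects that live in the referenced dacpac keep the external-style reference.

-- Local reference: two-part identifier, resolved against the files in this project.
CREATE VIEW [dbo].[View1] AS
SELECT Id, Name FROM [dbo].[View2];
GO
-- Reference to a table from the other project's dacpac stays external-style
-- (using whatever database name or variable the dacpac reference defines).
CREATE VIEW [dbo].[View2] AS
SELECT Id, Name FROM [OtherDatabase].[dbo].[SomeTable];
GO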
I guess you could also assign a different variable name to the referenced dacpac, keep three-part identifiers all the way, and change the references that point into the external dacpac to use the newly assigned variable name. We have not tested this (but I will when I get back home tonight); a rough sketch of the idea follows.
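An untested sketch of that variant, assuming the dacpac reference defines a SQLCMD variable (here called $(CoreDb); the variable and object names are made up) that gets substituted with the real database name at deploy time:

-- Object living in the external dacpac, addressed through its SQLCMD variable.
CREATE VIEW [dbo].[View2] AS
SELECT Id, Name FROM [$(CoreDb)].[dbo].[SomeTable];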
All in all, this was a good learning experience in how database references work inside database projects when you use partial projects or have split the database across several projects / solutions. Now to understand this Pandora's box and turn it into a future-proof solution :)
I'm trying to make a custom column (for a custom list) where users can upload files without overwriting the previous ones; this way they can keep past versions of the files, and newly uploaded ones are appended. I can see that "append only" comment columns and file upload columns already exist.
I'm working with SharePoint Designer 2007 (2010 doesn't work with the site), referencing some code I found online (http://pastebin.com/raw.php?i=0qN89meu) and trying to work through the SharePoint documentation on MSDN. I can open the site in Designer, but I don't know where to go from there (the site is already running on a web server; I'm not opening it locally).
I'm just not clear on how to start; I thought there'd be a simple "right-click -> new column" feature, but I can't find it. If someone could point me in the right direction on where to start creating columns on the site, that would be great. Thanks!
An untested idea:
Create a document library with a lookup column to the custom list.
Create an event receiver (ItemAdded and ItemUpdated) that will take the attached files and move them to the document library (with the correct lookup value). --> Code this with Visual Studio.
Grant only read permissions on this document library.
Adapt the view to display the related documents in the dispform of the custom list.
Advantages:
this seems to answer your need
you gain all the usability of a document library (nothing prevents you from granting edit rights to other users, forcing check-out, etc.)
Disadvantages:
you have to play with lookups, which can be tricky sometimes, especially if you play with features
you split one business entity into two entities, and you will have to deal with cascading deletes (if you need them).
I'm having some trouble with a Django database (PostgreSQL backend). There was a model (a profile for users) in the project with some boilerplate stuff in it. It sat in our project for a while, unused. I finally got around to needing it, so I adjusted the models and created some initial migrations with South. On my dev box I dropped the entire DB, ran syncdb, and migrated. This worked fine.
When I pushed this out to production, I manually removed the old tables in PostgreSQL, then ran syncdb and migrated. However, in my admin interface a DatabaseError is raised because something is still looking for a field from the old model. I've even dropped the entire PostgreSQL database and run syncdb / migrate again, and this still happens. The offending field is called gender (not one that I created). The migration works, and the database structure reflects my models, but for some reason the admin interface wants to find this (non-existent) gender field. This is the error:
DatabaseError: column user_profiles.gender does not exist LINE 1: ... "user_profiles"."id", "user_profiles"."user_id", "user_prof...
I understand this seems to be quite site specific, but perhaps I could get some pointers on how to debug this?
Thanks
If I understand your problem, your current code should not contain a reference to "gender" any more, since the old code was removed. Try to find a source file which still contains it:
find your-dir -name '*.py'|xargs grep gender
Or maybe there is a .pyc file whose .py file was removed, but Python still loads the stale .pyc file...
If this does not help, please post the ascii traceback.
I'd like to know your approach/experiences when it's time to initially populate the Grails DB that will hold your app data. Assuming you have CSVs with the data, is it "safer" to create a script (with whatever tool fits you) that:
1.-Generates the BootStrap code that creates the domain class instances, runs it in a test or dev environment, and then uses the native DB commands to export the data to prod?
2.-Creates the DB insert script directly, assuming GORM's version = 0 and manually incrementing the soon-to-be auto-generated IDs?
My fear is that the second approach may lead to inconsistencies, since Hibernate is responsible for generating the IDs, and there may be something else I'm missing.
Thanks in advance.
Take a look at this link. This allows you to run groovy scripts in the normal grails context giving you access to all grails features including GORM. I'm currently importing data from a legacy database and have found that writing a Groovy script using the Groovy SQL interface to pull out the data then putting that data in domain objects appears to be the easiest thing to do. Once you have the data imported you just use the commands specific to your database system to move that data to the production database.
Update:
Apparently the entry referenced from the blog post I linked to no longer exists. I was able to get this working using the code at the following link, which is also referenced in the comments.
http://pastie.org/180868
Finally, it seems the simplest solution is to take into account that GORM, as of the current release (1.2), uses a single sequence for all auto-generated ids. Keeping that in mind when creating whatever scripts you need (in the language of your preference) should suffice; a rough SQL sketch is below. I understand it's planned for the 1.3 release that every table gets its own sequence.
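A minimal sketch of such an insert script, assuming a PostgreSQL backend and Hibernate's default shared sequence name hibernate_sequence (the table and column names are made up):

-- Draw each id from the shared sequence instead of hard-coding it,
-- and start the optimistic-locking version column at 0 as GORM expects.
INSERT INTO category (id, version, name)
VALUES (nextval('hibernate_sequence'), 0, 'Books');

INSERT INTO product (id, version, name, category_id)
VALUES (nextval('hibernate_sequence'), 0, 'Example book',
        (SELECT id FROM category WHERE name = 'Books'));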
We're investigating using RIA Services (July '09 Preview) to expose parts of an existing EF model. We've added a Domain Service class to our web application, specified the EF model to use, and selected a few of the entities we wish to make available via the domain service (some with editing enabled, most without).
Everything builds and works great, but if we want to add an additional entity to the domain service, how do we do that? Is it a case of deleting the current class and re-adding it, and will this hole be plugged when RIA Services hits RTM?
I agree, it's annoying to type all that in manually every time the DB changes. What I end up doing is creating a new temporary domain service class (and metadata), cut-and-pasting the code into the existing domain service, and then removing the temporary service from the project.
Another option might be (I didn't try it) to make the generated file a partial class, put all the new queries into a separate file, and every time the DB schema changes just blow away the generated file and recreate it using the wizard. Just a thought.
You can just add the code for the new entities: add the right methods (a query method, and, depending on which operations you need, insert, update, delete, and custom ones).
You shouldn't have to delete your current class, which presumably contains a bunch of interesting app logic, just because you want to add an entity.
My solution to this problem was to create a code snippet that does most of the work.
I only have to type efdsmethods, tab twice, and replace the EntitySet name, EntityType name, and entity variable for the methods to use and then I'm done. It makes adding the 4 standard methods very easy.
I've submitted my snippet as a patch (#10154) to the Silverlight Contrib project on CodePlex, but it hasn't been accepted yet. Until then you can download the snippet from here.
Hope this helps you.