I am having what seems to be a simple problem, but I can't figure out how to get VS2010 to update my .xsd file when I change the schema in my SQL Server database.
I have found two recommendations thus far.
The first is to right-click the .xsd file and select 'Run Custom Tool'. This does not update the XSD as far as I can tell, since my added table doesn't appear and a table with a new column isn't updated.
The second option is to delete the .xsd file and then re-add the data source. This method obviously updates the tables, but any queries that I added for the UI are lost.
Because this is a fairly large data model, I expect that there will be changes to the schema (we are still early in the process). At this point it appears that my only recourse will be to move my UI-specific queries into the database as functions or stored procedures...
Is there a best practice for this (or something I'm terribly missing with this toolset)?
My team is looking into db migration tools (e.g., Flyway, Liquibase), so I'm thinking about how to incorporate changes I make to the db contents using my groovy+grails service method. I'm not referring to changes to columns and/or tables (i.e., domain classes); I'm referring to inserts/updates of rows which represent configuration values for the associated webapp.
My service method is written to be used somewhat interactively. That is, when I'm adding or updating rows in various tables (i.e., newInstance or save), it helps me navigate various db constraints and make sure all the foreign keys and my own business logic are set correctly. I run it repeatedly (rolling back each time afterwards using setRollbackOnly()) until I've found something I'm happy with. The method is written in groovy, and I don't want to rewrite it in sql.
Is there a way to get groovy/grails to emit the sql it would execute instead of executing the sql? That is, give me something I could copy/paste into a Flyway migration or Liquibase changeset?
I looked into logging, but I'd have to somehow process that output to substitute the values in and get the proper column names, and even then I'd need a distinction between lines that actually change the db (maybe I could just extract the inserts and updates). I also looked into the grails database migration scripts, but they appear to either look at domain classes (which isn't where my changes are happening) or at the entire database (which would sweep up a lot of user data too).
Thanks!
Our firm does not have a dedicated DBA employed but does have select developers performing DBA functions. We update our database often during a development cycle and have a release script with the various updates. We keep our db schema and objects in Visual Studio in a Database Project.
However, we often encounter two stumbling blocks that cause time-intensive manual intervention:
Developers cannot always sync from the Database Project to their local database, because if we have added a NOT NULL field to an existing table that contains data, the Deploy process from VS to the db isn't smart enough to automagically insert "test" data just to get the field into the table (unless this is a setting someplace?). We would of course follow this up, if possible, with a script to populate the field with real data, but we can't because the deployment fails.
Sometimes a developer will restore a backup from some random past date. There is no way of knowing exactly which db updates were applied to this database, so they don't know which scripts to start applying. What we do in this case is check each script, chronologically, to see if the changes from that script have been applied to the database. If so, move on to the next script. Repeat.
One method we have discussed is potentially creating a "Database Update Level" table in the database with 1 field, 1 row. It would maintain the level that the database has been updated through. For example, when the first script is run, update the level to 2. In each db script, we would wrap the statements in a check such as
IF (SELECT Database_Update_Level FROM dbo.Database_Update_Level) < 2
BEGIN
    -- do some things here
    UPDATE dbo.Database_Update_Level SET Database_Update_Level = 2;
END
The db scripts can then be run on any database because each statement will only execute if the database is below the corresponding level.
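For completeness, a minimal sketch of the one-row level table this scheme assumes (the table and column names mirror the snippet above and are otherwise hypothetical):

    CREATE TABLE dbo.Database_Update_Level (
        Database_Update_Level int NOT NULL
    );
    -- Seed with the starting level so the first script's check has a row to read.
    INSERT INTO dbo.Database_Update_Level (Database_Update_Level) VALUES (1);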
This feels like we're missing something because this must be a common problem that every development shop that allows developers to develop locally encounters.
Any insights would be greatly appreciated.
Thanks.
About the restore problem, I don't see many solutions; you might try to prevent full restores and run scripts to populate the tables instead. As for versioning structures, do you use SSDT (SQL Server Data Tools) in VS? You can generate DACPACs and diff scripts.
But what you're saying is that you also alter structures directly in the database? Is there no way to avoid that? If not, you could for example use DDL triggers (http://www.mssqltips.com/sqlservertip/2085/sql-server-ddl-triggers-to-track-all-database-changes/) to at least get notified that something changed.
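For illustration, a minimal sketch of such a DDL trigger, assuming a hypothetical dbo.DDLChangeLog table to hold the captured changes (all names are illustrative, not from the linked tip):

    CREATE TABLE dbo.DDLChangeLog (
        EventDate  datetime      NOT NULL DEFAULT GETDATE(),
        EventType  nvarchar(100) NULL,
        ObjectName nvarchar(256) NULL,
        SqlText    nvarchar(max) NULL
    );
    GO
    CREATE TRIGGER trg_TrackSchemaChanges
    ON DATABASE
    FOR DDL_DATABASE_LEVEL_EVENTS
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @e xml = EVENTDATA();
        -- Record what changed and the exact statement that ran.
        INSERT INTO dbo.DDLChangeLog (EventType, ObjectName, SqlText)
        VALUES (
            @e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
            @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
        );
    END;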
One easy way to solve the NOT NULL problem is to establish default constraints (could just be an empty string, max number value for the data type, max date value, etc.). When the publish occurs the new column will be populated with the default value.
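For example, a sketch of adding a NOT NULL column with a default constraint so the publish succeeds even on tables that already contain rows (table and column names are hypothetical):

    ALTER TABLE dbo.Customer
    ADD Region varchar(50) NOT NULL
        CONSTRAINT DF_Customer_Region DEFAULT ('');
    -- Existing rows get '' at deploy time; a follow-up script
    -- can then populate the column with real data.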
For the second issue I'd use post-deploy scripts in your SSDT project to keep the data in sync, using 'NOT EXISTS' checks to make incremental changes. That way, you can simply publish the database and allow the data updates to occur one after another.
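A sketch of that post-deploy pattern, with a hypothetical config table and values (post-deploy scripts run on every publish, so each change must be safe to re-run):

    IF NOT EXISTS (SELECT 1 FROM dbo.AppConfig WHERE ConfigKey = 'MaxRetries')
        INSERT INTO dbo.AppConfig (ConfigKey, ConfigValue)
        VALUES ('MaxRetries', '5');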
I am new to Oracle SQL Developer (about 1 month of use), having always used Toad. I have 2 almost identical schemas set up - one to test older code, one to develop a modified version. I have 2 different connections set up - one to each schema, with separate user names for each one.
But when I delete a table or column from the schema in one connection, it is also deleted or changed in the other.
This happens if I right-click on the table or field in the Connection explorer panel, or if I open a SQL script saved to disk. If I open a SQL script, I even see a pop-up that asks me which connection to use, but if I select one, it still makes changes to both. Even if I only have one of the two connections open, the script will still change the design in both of them.
The only way I can be sure to make changes to just one of the two is to right-click on the connection name in the Explorer panel, and open a new SQL Worksheet. The worksheet is then named for the connection and just makes changes to it.
This is not the behavior I was expecting, and I'm facing many hours of work to get the definitions of the 2 schemas back to where I need them to be. I am wondering if there is some key concept or distinction I am missing, or if there is some way the database(s) are set up that is enabling this to happen.
In case you never found the answer to your question, this is my understanding:
The database may have several schemas. A schema is not a separate database; it is a grouping of objects in that database. If you change something while in one schema, you are really changing it in the database, not just the schema. I hope this helps.
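To make that concrete, a small sketch with hypothetical schema and table names: an unqualified name resolves against whichever schema the current connection belongs to, while a qualified name targets a specific schema no matter which connection runs the script:

    -- Run as USER_B: affects only USER_B's copy of the table.
    DROP TABLE demo_tab;

    -- Run as USER_B, but schema-qualified: affects USER_A's copy instead,
    -- provided USER_B has the privileges to do so.
    DROP TABLE user_a.demo_tab;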
Are you just trying to test things in one schema? It sounds like you may want to have a Database and a TEST Database. You could test whatever you wanted in the TEST database and never have it change the real database.
When I first created my app, I made a database using Microsoft SQL Server Management Studio and connected my app to it.
I then created another DB with the same tables and everything, but with different names, and pointed my app at the second one because I want to make some changes. When I try to edit my DataSet with the wizard, I get this tables page:
As you can see, my app couldn't find the right tables, and when I try to select the LastWork table as in the pic, it makes the table name in the DataSet LastWork1.
How can I fix this problem and let it find the right tables?
I've seen this problem when using copies of databases as well, after pointing to a different connection in the settings area of the project properties. The XSD evidently hard-codes each DbObjectName with the name of the database and schema in use at design time. One approach to fixing it is to open the wizard for the appropriate dataset, uncheck the red-X objects with the missing references, close the wizard, then re-open it and re-select the objects that are needed. This is not ideal in a large XSD if many FindBy queries, custom columns, etc. have been added, so an alternative is to do a find and replace on the database name within the XSD itself.
Interestingly, my experience has been that an application runs fine when the connection string points to a differently named but otherwise identical database.
We are in the process of a multi-year project where we're building a new system and a new database to eventually replace the old system and database. The users are using the new and old systems as we're changing them.
The problem we keep running into is when an object in one system is dependent on an object in the other system. We've been using views, but have run into a limitation with one of the technologies (Entity Framework) and are considering other options.
The other option we're looking at right now is replication. My boss isn't excited about the extra maintenance that would cause. So, what other options are there for getting dependent data into the database that needs it?
Update:
The technologies we're using are SQL Server 2008 and Entity Framework. Both databases are within the same SQL Server instance, so linked servers shouldn't be necessary.
The limitation we're facing with Entity Framework is we can't seem to create the relationships between the table-based-entities and the view-based-entities. No relationship can exist in the database between a view and a table, as far as I know, so the edmx diagram can't infer it. And I cannot seem to create the relationship manually without getting errors. It thinks all columns in the view are keys.
If I leave it that way I get an error like this for each column in the view:
Association End key property [...] is not mapped.
If I try to change the "Entity Key" property to false on the columns that are not the key I get this error:
All the key properties of the EntitySet [...] must be mapped to all the key properties [...] of table viewName.
According to this forum post it sounds like a limitation of the Entity Framework.
Update #2
I should also mention the main limitation of the Entity Framework is that it only supports one database at a time. So we need the old data to appear to be in the new database for the Entity Framework to see it. We only need read access of the old system data in the new system.
You can use linked server queries to leave the data where it is, but connect to it from the other db.
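As a rough sketch, assuming a hypothetical linked server named OldServer hosting an OldDb database (names are illustrative):

    -- One-time setup: register the remote server.
    EXEC sp_addlinkedserver @server = N'OldServer', @srvproduct = N'SQL Server';

    -- Then query the remote table with a four-part name from the other db.
    SELECT c.CustomerId, c.Name
    FROM OldServer.OldDb.dbo.Customers AS c;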
Depending on how up-to-date the data in each db needs to be & if one data source can remain read-only you can:

- Use the Database Copy Wizard to create an SSIS package that you can run periodically as a SQL Agent Task
- Use snapshot replication
- Create a custom BCP in/out process to get the data to the other db
- Use transactional replication, which can be near-realtime.
If data needs to be read-write in both databases then you can use:

- Transactional replication with updatable subscriptions
- Merge replication
As you go down the list, the amount of work involved in maintaining the solution increases. Using linked server queries will work best if it's the right fit for what you're trying to achieve.
EDIT: If they're on the same server then, as suggested by another user, you should be able to access the table with servername.databasename.schema.tablename. Looks like it's an Entity Framework issue & not a db issue.
I don't know about EntityToSql but I know in LinqToSql you can connect to multiple databases/servers in one .dbml if you prefix the tables with:
ServerName.DatabaseName.SchemaName.TableName
MyServer.MyOldDatabase.dbo.Customers
I have been able to click on a table in the .dbml, copy and paste it into the .dbml of the alternate project, prefix the name, and set up the relationships, and it works... Like I said, this was in LinqToSql, though I have not tried it with EntityToSql. I would give it a shot before you go through all the work of replication and such.
If Linq-to-Entities cannot cross DBs, then replication or something that emulates it is the only thing that will work.
For performance purposes you probably want either merge replication or transactional replication with queued (not immediate) updating.
Thanks for the responses. We're going to try adding triggers to the old database tables to insert/update/delete records in the new tables of the new database. This way we can continue to use Entity Framework and also do any data transformations we need.
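A minimal sketch of what one of those sync triggers might look like, assuming a hypothetical dbo.Customer table in the old database mirrored to the new one (all names illustrative):

    CREATE TRIGGER trg_Customer_Sync
    ON dbo.Customer
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Remove rows that were deleted or are about to be replaced.
        DELETE n
        FROM NewDb.dbo.Customer AS n
        JOIN deleted AS d ON d.CustomerId = n.CustomerId;

        -- Re-insert the current version of inserted/updated rows;
        -- any data transformations would go in this SELECT.
        -- (If CustomerId is an identity column in the new table,
        -- SET IDENTITY_INSERT would also be needed here.)
        INSERT INTO NewDb.dbo.Customer (CustomerId, Name)
        SELECT i.CustomerId, i.Name
        FROM inserted AS i;
    END;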
Once the UI functions move over to the new system for a particular feature, we'll remove the table from the old database and add a view to the old database with the same name that points to the new database table for backwards compatibility.
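A sketch of that backwards-compatibility step, using the same hypothetical table:

    -- In the old database, after the data has moved:
    DROP TABLE dbo.Customer;
    GO
    CREATE VIEW dbo.Customer
    AS
    SELECT CustomerId, Name
    FROM NewDb.dbo.Customer;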
One thing that I realized needs to happen before we can do this is that we have to search all our code and SQL for @@IDENTITY and replace it with SCOPE_IDENTITY(), so the triggers don't mess up the IDs in the old system.
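The difference matters because @@IDENTITY returns the last identity value generated in any scope, including a sync trigger's insert into the new database, while SCOPE_IDENTITY() is limited to the statement's own scope. A quick illustration with the hypothetical tables above:

    INSERT INTO dbo.Customer (Name) VALUES (N'Test');
    -- The sync trigger fires and inserts into NewDb.dbo.Customer.

    SELECT @@IDENTITY;        -- may return the identity generated inside the trigger
    SELECT SCOPE_IDENTITY();  -- returns the identity from the statement above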