In Visual Studio Database Projects I've seen table constraints being added in 2 different ways:
As part of the same script file used to create the table, after the CREATE TABLE statement;
In a separate file, kept in a "Tables\Constraints" folder, one constraint per file.
Are there good reasons to do one or the other?
Visual Studio does number 2 when importing a database from SQL Server, so I would guess that's the best way, but I can't see why. From a developer point of view number 1 seems better, as it keeps the table definition and constraints "closer" to each other.
I can only think of reasons to keep them together (#1) for exactly the reasons you mentioned: it keeps the table definition and constraints closer to each other.
Visual Studio used to keep constraints in separate files but stopped that practice in the latest "SQL Server Database Project" template introduced by SQL Server Data Tools (installed in VS 2012 out of the box, and requires a separate download for VS 2010).
Strong vote for #1. Separating the constraints from the table scripts makes it a real pain to add non-nullable columns later. When you deploy, the generated ALTER TABLE script will not be able to add a NOT NULL column to a table with existing data, because it doesn't have a DEFAULT constraint. If the constraint is already in the CREATE TABLE script, the ALTER TABLE will use it and everything will just work.
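As a rough illustration of why #1 helps (all table, column, and constraint names below are made up), keeping the DEFAULT inline means the deployment engine can reuse it later when it has to add that column to a table that already contains rows:

-- Option #1: the constraints live in the same script as the table
CREATE TABLE dbo.Orders
(
    OrderId int     NOT NULL CONSTRAINT PK_Orders PRIMARY KEY,
    Status  tinyint NOT NULL CONSTRAINT DF_Orders_Status DEFAULT (0)
);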
Dear all, I am currently researching how to handle changing the collation of a database.
Somebody made the unusual decision to create an accent-sensitive database for global use... but I am on my way to handling this!
The reason for changing the collation is that the database contains data collected from different countries, and some cultures have their own letters.
Out of respect for our customers, our organization would like an accent-insensitive database, so that users can query the server with local characters without limitations.
As far as I have found out, one option may be to drop the constraints and so on, change the collation, and then bring everything back. In that case I am worried whether this would affect the already existing data (columns).
Alternatively, I have found an article on changing the collation on SQL Server 2005 and 2008. However, it does not cover SQL Server 2012.
I am also taking the complexity of this example into consideration.
I realize this is not an easy situation, but I am hoping to get a few pieces of advice on the best and safest way to handle this.
Thank you for your concerns and assistance.
UPDATE: let me add some detail about our architecture. The complete system contains 4 databases and more than 1,000 tables in total, so I expect that not all of the possible approaches will perform well.
I too had to deal with a similar issue, though for a different reason: ancient databases with an old SQL collation, installed ages ago on a SQL Server 6.5 server that had been in-place upgraded through every version from SQL 7 to SQL 2005 and now had to be updated to SQL 2012.
Why all these in-place upgrades? Because the actual collation was the server collation, and it was so old that it is not available during the install process of a recent version (2000+) of SQL Server...
I decided to drop all that old rubbish, so I had to find a way to move to a new installation with a Windows collation.
I had to rule out a data migration (create a new database and import the data) because of the lack of documentation and the huge number of customizations, triggers, hidden rules, and so on.
The solution I used (the order matters):
Disable automatic statistics generation
Script the creation of all foreign keys and then drop them (see the sketch after this list)
Script unique and primary key indexes and then drop them
Script all remaining indexes and then drop them
Script custom statistics and then drop them
Script CHECK and DEFAULT constraints and then drop them
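For the foreign key step, a metadata query can generate the DROP statements; this is only a sketch, and the matching ADD CONSTRAINT scripts still have to be saved first (for example via SSMS scripting):

-- Emit one DROP CONSTRAINT statement per foreign key in the database
SELECT 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id))
     + '.' + QUOTENAME(OBJECT_NAME(parent_object_id))
     + ' DROP CONSTRAINT ' + QUOTENAME(name) + ';'
FROM sys.foreign_keys;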
Now you can run the ALTER commands needed to change the collation of the columns, and change the collation of the database itself.
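A minimal sketch of those ALTER commands, with made-up names (dbo.Customers, LastName, MyDb) and a target collation chosen only as an example; the column ALTER has to be repeated for every character column carrying the old collation, restating the full data type and nullability:

ALTER TABLE dbo.Customers
    ALTER COLUMN LastName nvarchar(100) COLLATE Latin1_General_CI_AI NOT NULL;

ALTER DATABASE MyDb COLLATE Latin1_General_CI_AI;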
When done, repeat the above in reverse order to rebuild all the needed objects.
If the database is as old as mine, you may run into something funny, like an existing foreign key that references fields with different data types.
Changing the collation of all existing columns is a real pain. I suggest a side-by-side migration rather than altering each column individually. Create a new database with the desired collation containing only empty tables. Copy the data from the old db to the new one using INSERT...SELECT (or the ETL tool of your choice), and then create the constraints, indexes, and other database objects.
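A sketch of the copy step, assuming a table dbo.Customers with an identity key that exists in both OldDb and NewDb (all names are placeholders):

SET IDENTITY_INSERT NewDb.dbo.Customers ON;

INSERT INTO NewDb.dbo.Customers (CustomerId, Name, City)
SELECT CustomerId, Name, City
FROM OldDb.dbo.Customers;

SET IDENTITY_INSERT NewDb.dbo.Customers OFF;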
Consider upvoting the Make it easy to change collation on a database SQL Server feature request.
There are a number of complicated solutions on the internet for in-place collation changes, but the simplest (and safest) way we have found is to script out the database, alter the script to create a new db with the collation set at the start, and then import the data into the new database.
We achieve this using MS SQL Server 2012 Management Studio in the following way:
Script out all database objects with Tasks -> Generate Scripts -> Script entire Database and all Database objects
Alter the script with the following 2 changes and then run it to create a new database:
a) Change DB name to MY-NEW-DB
b) Under the CREATE DATABASE statement add: ALTER DATABASE [MY-NEW-DB] COLLATE Latin1_General_CS_AS (see the sketch after these steps)
If desired, use a tool like RG SQL Compare to compare the old and new databases and verify that all indexes, constraints, types, etc. are the same and that only the collation on the relevant columns has changed.
Run Tasks -> Import Data, ensuring 'Enable Identity Insert' is checked. All data should transfer to the new case-sensitive database correctly.
Run DBCC CHECKDB if you wish to check consistency
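Here is the sketch referenced in the script changes above, using the placeholder name MY-NEW-DB; the rest of the generated object-creation script follows these lines unchanged:

CREATE DATABASE [MY-NEW-DB];
GO
ALTER DATABASE [MY-NEW-DB] COLLATE Latin1_General_CS_AS;
GO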
Is there an IDE for SQL Server that includes refactoring?
For example, if I have a composite primary key on a table and I change it, SQL Server Management Studio will drop all foreign keys referencing this primary key (it warns first). Is there a tool that generates the DROP statements for the foreign keys and recreates them?
I would look into the SQLDeveloper product from Red Gate. They offer some refactoring features in their SQL Prompt product. Also take a look at the SQL Compare tool. Both are worth every penny.
I would recommend looking at the Database project type in VS2010, if you haven't already. It has a lot of features that make DB refactoring easier than working in SQL Server Management Studio.
For example, it does a lot of build-time validation to make sure your database objects don't reference objects that no longer exist: if you rename a column, it will give you build errors for FKs that reference the old column name. It also has a very handy "compare" feature that compares the DB project scripts with a database, generates a diff report, and generates the scripts to move selected changes between the two (either DB project to SQL Server, or vice versa).
I'm not sure it will automatically handle your composite key example -- in other words, when you rename a column it won't fix up all references to that column throughout the project. However, since all of the database objects are kept in scripts within the project, things like column renames are just a search & replace operation. Also, if you make a mistake you will get build errors when it validates the database structure. So it at least makes it easy to find the places that you need to change.
There are probably more powerful tools out there (I have heard good things about redgate) but the VS2010 support for the Database project type is fairly decent.
How your objects handle foreign key references is determined when the table/constraint is created. ON DELETE CASCADE is only one option; you can also have it SET NULL or SET DEFAULT.
Unless I am misunderstanding your question, it is not the environment but the constraint definition that dictates this.
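A small sketch with made-up tables (it assumes a parent table dbo.Orders with primary key OrderId) showing that the referential action is part of the constraint definition, not an environment setting:

CREATE TABLE dbo.OrderLines
(
    OrderLineId int NOT NULL PRIMARY KEY,
    OrderId     int NULL,
    CONSTRAINT FK_OrderLines_Orders
        FOREIGN KEY (OrderId) REFERENCES dbo.Orders (OrderId)
        ON DELETE SET NULL   -- could instead be CASCADE, SET DEFAULT, or NO ACTION
);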
Are there any automation tools to ease creating tables and adding standard insert/select/update stored procs, rather than hand-creating them for a large number of tables?
If I have 100 tables to create (and later ALTER) and their associated stored procs in SQL Server 2008, what is the most convenient way to do it?
ADDED:
Are there tools to auto-generate nice class skeletons (with data fields) tied to the corresponding tables?
I am using C# .NET 4.0 in Visual studio 2010 and Microsoft SQL Server 2008.
We are starting off a new project from scratch, so it would be helpful to get tools for quick bootstrap from Design on paper to initial code.
Any other related suggestions are appreciated!
Use SSMS Database Diagrams to design your 100 tables, because you only have to type the column name and can point and click for data type, primary key, nullability, foreign key, etc. Create a diagram for each functional grouping. When the design is done you can write a script which will write the SQL for the stored procs that do the insert/update for the tables.
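A very rough sketch of such a generator, assuming single-column integer primary keys (the usp_ naming is just an example; the same pattern can be extended for INSERT/UPDATE):

-- Emit a bare-bones SELECT-by-key procedure definition for every user table
SELECT 'CREATE PROCEDURE dbo.usp_' + t.name + '_Select @Id int AS ' +
       'SELECT * FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) +
       ' WHERE ' + QUOTENAME(c.name) + ' = @Id;'
FROM sys.tables t
JOIN sys.schemas s        ON s.schema_id = t.schema_id
JOIN sys.indexes i        ON i.object_id = t.object_id AND i.is_primary_key = 1
JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c        ON c.object_id = ic.object_id AND c.column_id = ic.column_id;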
As you create the script that writes those scripts, you could post your work here.
Why do I get a message that the table needs to be dropped and re-created when I add/move columns?
I believe this happens after adding foreign key constraints.
What can I do to add new columns without dropping table?
If you're more interested in simply getting SSMS to stop nagging, you can uncheck the "Prevent saving changes that require table re-creation" setting in Options->Designers->Table And Database Designers. The table(s) will still be dropped and re-created, but at least SSMS won't pester you quite as much about it.
(This assumes you're working in a dev/test environment, or in a production environment where a brief lapse in the existence of the table won't screw anything up.)
Because that's how SQL Server Management Studio does it (sometimes)!
Use T-SQL's ALTER TABLE instead:

-- dbo.MyTable is a placeholder; the DEFAULT is needed if the table already contains rows
ALTER TABLE dbo.MyTable
ADD myCol int NOT NULL CONSTRAINT DF_MyTable_myCol DEFAULT (0)
SQL Server (and any other RDBMS, really) doesn't have any notion of "column order" - e.g. if you move columns around, the only way to achieve that new table structure is by issuing a new CREATE TABLE statement. You cannot order your columns any other way - nor should you, really, since in relational theory the order of the columns in a tuple is irrelevant.
So the only thing SQL Server Management Studio can do (and has done all along) is:
rename the old table
create the new table in your new layout you wish to have
copy the data over from the old table
drop the old table
The only way to get around this is:
not reordering any columns - only add new columns at the end of your table
use ALTER TABLE SQL statements instead of the interactive table designer for your work
When you edit a table definition in the designer, you are saying "here's what I want the table to look like, now work out what SQL statements to issue to make my wishes come true". This works fine for simple changes, but the software can't read your mind, and sometimes it will try to do things in a more complicated way for safety.
When this happens, I suggest that, instead of just clicking OK, click the "Script" button at the top of the dialog, and let it generate the SQL statements into a query window. You can then edit and simplify the generated code before executing it.
There are bugs in SSMS 2008 R2 (and older) that are useful to know:
when table data is changed, its rendering in already opened tabs (windows) is not auto-refreshed by SSMS - one has to press Ctrl+R to refresh. Options to force a refresh do not appear anywhere in the SSMS GUI - buttons, menus, or right-click context menus
when a (table or database) schema is modified, e.g. by adding/deleting/renaming a column in a table, SSMS does not reflect these changes in already opened tabs (windows) even with Ctrl+R - one has to close and reopen the tabs (windows)
I reported these a few years ago through Microsoft Connect feedback, but the bugs were closed because the behavior is "by design"
Update:
This is strange and irritating to see in a desktop product that has been developed for two decades, when auto-refreshing is done by most web applications in any browser.
I'm using a MS SQL Server db and use plenty of views (for use with an O/R mapper). A little annoyance is that I'd like to
use schema binding
update with scripts (to deploy on servers and put in a source control system)
but I run into the issue that whenever I want to, e.g., add a column to a table, I first have to drop all views that reference that table, update the table, and then recreate the views, even if the views wouldn't otherwise need to change. This makes my update scripts a lot longer, and when looking at the diffs in the source control system, it is harder to see what the actual relevant change was.
Is there a better way to handle this?
I still need to be able to use simple, source-controllable SQL updates. A code generator like the one included in SQL Server Management Studio would be helpful, but I had issues with it in that it tends to create code that does not specify the names for some indexes or (default) constraints. I want identical dbs when I run my scripts on different systems, including the names of all constraints etc., so that I don't have to jump through hoops when updating those constraints later.
So perhaps a smarter SQL code generator would be a solution?
My workflow now is:
type the alter table statement in query editor
check if I get an error statement like "cannot ALTER 'XXX' because it is being referenced by object 'YYY'."
use SQL Server Management Studio to script the CREATE code for the referenced object
insert a DROP statement before the ALTER statement and the CREATE statement after it
check whether the DROP statement causes an error, and repeat
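The pattern those steps end up producing looks roughly like this (all table, column, and view names are placeholders):

DROP VIEW dbo.vCustomer;
GO
ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;
GO
CREATE VIEW dbo.vCustomer
WITH SCHEMABINDING
AS
    SELECT CustomerId, FirstName, MiddleName, LastName
    FROM dbo.Customer;
GO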
This annoys me, but perhaps I simply have to live with it if I want to continue using schema binding and scripted updates...
You can at least eliminate the "check if I get an error" step by querying a few dynamic management functions and system views to find your dependencies. This article gives a decent explanation of how to do that. Beyond that, I think you're right: you can't have your cake and eat it too with schema binding.
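For instance, a query along these lines (dbo.MyTable is a placeholder) lists the schema-bound objects that would block an ALTER on the table:

SELECT OBJECT_SCHEMA_NAME(referencing_id) AS referencing_schema,
       OBJECT_NAME(referencing_id)        AS referencing_object
FROM sys.sql_expression_dependencies
WHERE referenced_id = OBJECT_ID('dbo.MyTable')
  AND is_schema_bound_reference = 1;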
Also keep in mind that dropping/creating views will cause you to lose any permissions that were granted on those objects, so those permissions should be included in your scripts as well.
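For example, a recreated view needs its grants re-applied explicitly (the view and role names here are placeholders):

GRANT SELECT ON dbo.vCustomer TO ReportingRole;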