What are the possible side effects of using the option "Prevent saving changes that require table re-creation"? Specifically, of just adding a new field to a table.
You are referring to SSMS. By default, you cannot save schema changes that involve a table recreation. Whenever I install SSMS, I immediately turn this option off.
Some schema changes require a temp table to be created, the data from the original table copied to it, a new table created with the new schema, and then the data from the temp table copied to the new table. The temp table is then dropped. When this option is selected, any schema change that requires this process is not permitted in SSMS.
IMO, there is no downside to turning this off, as long as you are aware that some schema changes require this process and that, on a table with a large number of rows, the operation could take a long time.
Just adding a new column to a table is fine, provided you can accept that the new column will "appear" at "the end" of the table.
It's when people want to position the new column in a particular place in the list of columns that problems occur, because there is no actual SQL command to do this; so SSMS has to fake it by creating a new table, copying the data across, deleting the old table, and renaming the new one. All of these steps take time, during which it's unsafe for anyone to be accessing the table.
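To make that concrete, here is a simplified sketch of the kind of script SSMS generates behind the scenes; the table and column names are made up, and the real script also deals with constraints, indexes and IDENTITY_INSERT:

-- new table with the columns in the desired order (placeholder definitions)
CREATE TABLE dbo.Tmp_MyTable (Col1 INT NOT NULL, NewCol INT NULL, Col2 VARCHAR(50) NULL);

-- copy the existing data across
INSERT INTO dbo.Tmp_MyTable (Col1, Col2)
SELECT Col1, Col2 FROM dbo.MyTable;

-- drop the original and put the new table in its place
DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.Tmp_MyTable', 'MyTable';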
Damien is right, just adding a column causes no side effects.
This can easily be done using T-SQL, as this is one of the actions that can be performed without re-creating the table. The other such actions are (see the sketch after this list):
Modifying the NULL setting of an existing column
Reseeding an IDENTITY column (DBCC CHECKIDENT with RESEED)
Changing the data type of an existing column
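A quick sketch of what those in-place changes look like in T-SQL; the table and column names are just placeholders:

-- add a new column (it will appear at the end of the column list)
ALTER TABLE dbo.MyTable ADD NewColumn INT NULL;

-- change the NULL setting of an existing column
ALTER TABLE dbo.MyTable ALTER COLUMN ExistingColumn INT NOT NULL;

-- change the data type of an existing column
ALTER TABLE dbo.MyTable ALTER COLUMN ExistingColumn BIGINT NOT NULL;

-- reseed an IDENTITY column
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 1000);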
I am testing different strategies for an incoming breaking change. The problem is that each experiment carries some cost in Azure.
The data is huge, and can have some inconsistencies due to many years of fixes and transactions from before I even joined the company.
I need to change a column in a table with millions of records and dozens of indexes. This will cause significant downtime.
ALTER TABLE X ALTER COLUMN A1 decimal(15, 4) --The original column is int
One of the initial ideas (now I know this is not possible) was to have a secondary replica, make the changes there, and, when the changes finished, swap the primary with the secondary... zero or almost zero downtime. I am referring to a "live", redundant replica, not just a "copy".
EDIT:
Throwing new ideas:
A variation on what has been mentioned in one of the answers: create a table replica (not the whole DB, just the table), apply an INSERT INTO... SELECT and swap the tables at the end of the process. Or... do the swap early to minimize downtime, trading it for a delay while the remaining records are added from the source afterwards.
I have tried this, but it takes AGES to complete. Also, some NULL and FK violations make the process fail after it has been running for several hours.
"Resuming" could be an option, but it makes the process slower with each execution. Without some kind of "resume", each failure has to be repeated from scratch.
An acceptable improvement could be to IGNORE the errors (but create logs, of course) and apply fixes after the migration. But AFAIK, neither Azure SQL nor SQL Server offers an "ignore" option.
Drop all indexes, constraints and dependencies on the column that needs to be modified, modify the column, and apply all the indexes, constraints and dependencies again (a rough sketch of this follows the list of ideas).
Also tried this one. Some of the indexes take AGES to complete. But for now it seems to be the best bet.
There is a possible variation: apply ROW COMPRESSION before the data type change, but I think it will not help with the real cost: the index re-creation.
Create a new column with the target datatype, copy the data from the source column, drop the old column and rename the new one.
This strategy also requires dropping and regenerating the indexes, so it will not offer much gain (if any) compared with #2.
A friend thought of a variation on this, which is to build the needed indexes on the column copy ONLINE. In the meantime, a trigger propagates all changes on the source column to the column copy.
For any of the mentioned strategies, some gain can be obtained by increasing the processing power. But we plan to increase the power whichever approach we take, so this is common to all the solutions.
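For reference, a minimal sketch of idea #2 under the assumption of a single dependent index (the index name is made up; on Azure SQL the index build can usually be done ONLINE):

-- drop the index that covers the column to be changed
DROP INDEX IX_X_A1 ON dbo.X;

-- change the data type (int -> decimal, as in the question);
-- add NULL / NOT NULL here to preserve the column's current nullability
ALTER TABLE dbo.X ALTER COLUMN A1 decimal(15, 4);

-- recreate the index; ONLINE = ON keeps the table usable during the build
CREATE INDEX IX_X_A1 ON dbo.X (A1) WITH (ONLINE = ON);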
When you need to update A LOT of rows as a one-time event, maybe it's more effective to use the following migration technique (a rough T-SQL sketch follows the list):
create a new target table
use INSERT INTO SELECT to fill the new table with correct / updated values
rename the old and new table
create indexes for the new table
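A sketch of that sequence, using a hypothetical two-column version of the table from the question (sp_rename handles the swap):

-- 1. new target table with the corrected column type
CREATE TABLE dbo.X_new (Id INT NOT NULL PRIMARY KEY, A1 decimal(15, 4) NOT NULL);

-- 2. fill it with the converted values
INSERT INTO dbo.X_new (Id, A1)
SELECT Id, CAST(A1 AS decimal(15, 4)) FROM dbo.X;

-- 3. swap the names (a short outage while nothing writes to the table)
EXEC sp_rename 'dbo.X', 'X_old';
EXEC sp_rename 'dbo.X_new', 'X';

-- 4. indexes on the new table
CREATE INDEX IX_X_A1 ON dbo.X (A1);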
After many tests and backups, we finally used the following approach (a condensed T-SQL sketch follows the steps):
1. Create a new column [columnName_NEW] with the desired format change. Allow NULLs.
2. Create a trigger for INSERTs to update the new column with the value of the column being replaced.
3. Copy the old column's value to the new column in batches.
This operation is very time consuming. We ran a batch every day in a maintenance window (2 hours a day over 4 days). Our batches filled the values taking the oldest rows first; we counted on the trigger filling the new ones.
4. Once #3 is complete, don't allow NULLs on the new column anymore, but set a default value so the INSERT trigger doesn't crash.
5. Create all the needed indexes and views on the new column. This is very time consuming but can be done ONLINE.
6. Allow NULLs on the old column.
7. Remove the insert trigger - the downtime starts now!
8. Rename the old column to [columnName_OLD] and the new one to [columnName]. This requires only a few seconds of downtime!
--> You can consider it finally done!
9. After some safe time, you can back up the result and remove [columnName_OLD] with all of its dependencies.
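A condensed sketch of the key steps, with placeholder names (Id stands in for the table's primary key, and the real table of course has more columns):

-- step 1: add the new column, nullable for now
ALTER TABLE dbo.X ADD columnName_NEW decimal(15, 4) NULL;
GO

-- step 2: keep newly inserted rows in sync
CREATE TRIGGER trg_X_FillNewColumn ON dbo.X AFTER INSERT AS
BEGIN
    UPDATE t
    SET t.columnName_NEW = t.columnName
    FROM dbo.X AS t
    JOIN inserted AS i ON i.Id = t.Id;
END;
GO

-- step 3: backfill in batches during the maintenance windows
UPDATE TOP (100000) dbo.X
SET columnName_NEW = columnName
WHERE columnName_NEW IS NULL;

-- step 8: the rename swap (the few seconds of downtime)
EXEC sp_rename 'dbo.X.columnName', 'columnName_OLD', 'COLUMN';
EXEC sp_rename 'dbo.X.columnName_NEW', 'columnName', 'COLUMN';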
I accepted the other answer because I think it will also be useful in most situations. This one has more steps, but it has very little downtime and is reversible at every step but the last.
We have a star schema designed in Wherescape. The task is to add new columns to the Fact table.
The fact table has around 30 GB in it. Is it possible to add columns without deleting the fact table? Or what technique should be used to retain the current data in the fact table and, at the same time, have the new columns available? I keep getting a timeout error if I just try to add columns in Management Studio.
I think the guy before me actually just modified it in Wherescape (not too sure). In any case, if I have to do it manually in Management Studio, that works for me too.
thanks
Gemmo
Can't really do this without deleting the table. It's too big and no matter what you do, it will time out. Back up the table, delete it and create the table with the new structure. You'll just have to put the data in again. No shortcuts. For smaller tables, you can easily add a column no problem.
The best way to do this is to add the column to the metadata, then right-click on your table/object and click "Validate against the database".
This allows you to alter the table instead of having to take the long route of moving the data into a temp table, re-creating the table, and moving the data back.
So I've noticed over the past few weeks that changing tables in SQL Server is very difficult, such as specifying a new primary key column or changing a column definition (i.e. changing its data type). Half of the time I have to drop the table and start over. Even re-ordering columns in a table results in about half an hour of work (even though it doesn't really matter to the database engine what order they are in, I like to have things in logical order).
Are there any simpler ways to make changes like this without the headache of dropping or recreating tables? Most of the time I get an error saying that the table has to be dropped and recreated, but apparently SQL Server can't do this. I don't want to turn off the "Prevent table changes from requiring a table to be dropped" option because of the possible problems it can cause later.
Sometimes, for example, I can cheat and generate a CREATE TABLE script, change the definition of the table in the script, drop the actual table, and create it again with the script, but sometimes this doesn't work. And re-ordering columns is a pain and a problem, as is changing the data type of a column that's already in the table, setting a primary key, or changing a field from "NULL" to "NOT NULL" using the checkbox.
Any ideas on how to better manage the tables and make changes? It frustrates me that Microsoft did not follow the SQL standard on some of its ALTER TABLE commands, among other things. In MySQL this would be a lot easier, but unfortunately we are using SQL Server.
I added a new column to an existing table in the SQL Server Management Studio table designer. Type INT, not null. Didn't set a default value.
I generated a change script and ran it; it errored out with a warning that the new column does not allow NULLs and no default value was being set. It said "0 rows affected".
Data was still there, and for some reason my new column was visible in the "columns" folder on the database tree on the left of SSMS even though it said "0 rows affected" and failed to make the database change.
Because the new column was visible in the list, I thought I would go ahead and update all rows and add a value in.
UPDATE MyTable SET NewColumn = 0
Boom.. table wiped clean. Every row deleted.
This is a big problem because it was on a production database that wasn't being backed up unbeknownst to me. But.. recoverable with some manual entry, so not the end of the world.
Anyone know what could have happened here.. and maybe what was going on internally that could have caused my update statement to wipe out every row in the table?
An UPDATE statement can't delete rows unless there is a trigger that performs the delete afterward, and you say the table has no triggers.
So it had to be the scenario I laid out for you in my comment: The rows did not get loaded properly to the new table, and the old table was dropped.
Note that it is even possible that it looked right to you at one point, with the rows loaded, if the transaction was never committed and was then automatically rolled back later, for example when your session was terminated. The transaction could have been rolled back for other reasons, too.
Also, I may have gotten the order incorrect: it may create the new table under a new name, load the rows, drop the old table, and rename the new one. In this case, you may have been querying the wrong table to find out if the data had been loaded. I can't remember off the top of my head right now which way the table designer structures its scripts--there's more than one way to skin this cat.
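For what it's worth, running the change as plain T-SQL avoids the designer's drop-and-recreate script entirely; a sketch using the table name from the question (the constraint name is made up):

-- adds the column and populates existing rows with 0 in a single statement
ALTER TABLE MyTable
    ADD NewColumn INT NOT NULL
    CONSTRAINT DF_MyTable_NewColumn DEFAULT (0);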
I have a situation where I need to change the order of the columns / add new columns to an existing table in SQL Server 2008. It is not letting me do this without a drop and re-create. But that is a production system and the table has data in it. I could take a backup of the data, drop the existing table, change the order / add new columns, re-create it, and insert the backed-up data into the new table.
Is there any better way to do this without dropping and re-creating? I think SQL Server 2005 allowed this kind of change to an existing table structure without dropping and re-creating.
Thanks
You can't really change the column order in a SQL Server 2008 table - it's also largely irrelevant (at least it should be, in the relational model).
With the visual designer in SQL Server Management Studio, as soon as you make too big a change, the only reliable way to do this for SSMS is to re-create the table in the new format, copy the data over, and then drop the old table. There's really nothing you can do about this to change it.
What you can do at all times is add new columns to a table or drop existing columns from a table using SQL DDL statements:
ALTER TABLE dbo.YourTable
ADD NewColumn INT NOT NULL ........
ALTER TABLE dbo.YourTable
DROP COLUMN OldColumn
That'll work, but you won't be able to influence the column order. But again: for your normal operations, column order in a table is totally irrelevant - it's at best a cosmetic issue on your printouts or diagrams..... so why are you so fixated on a specific column order??
There is a way to do it by updating a SQL Server system table:
1) Connect to SQL Server in DAC mode
2) Run queries that will update the column order:
update syscolumns
set colorder = 3
where name='column2'
But this way is not recommended, because you can destroy something in the DB.
One possibility would be to not bother with reordering the columns in the table and simply modify it by adding the columns. Then, create a view which has the columns in the order you want -- assuming that the order is truly important. The view can be easily changed to reflect any ordering that you want. Since I can't imagine that the order would be important for programmatic applications, the view should suffice for those manual queries where it might be important.
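A minimal sketch of that idea, with made-up table and column names; the view simply projects the columns in whatever order you prefer:

-- the base table: the new column physically sits at the end
ALTER TABLE dbo.Customers ADD MiddleName NVARCHAR(50) NULL;
GO

-- the view exposes the columns in the "logical" order
CREATE VIEW dbo.v_Customers
AS
SELECT CustomerId, FirstName, MiddleName, LastName
FROM dbo.Customers;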
As the other posters have said, there is no way without re-writing the table (but SSMS will generate scripts which do that for you).
If you are still in design/development, I certainly advise making the column order logical - nothing is worse than having a newly added column become part of a multi-column primary key and having it nowhere near the other columns! But you'll have to re-create the table.
One time I used a 3rd party system which always sorted their columns in alphabetical order. This was great for finding columns in their system, but whenever they revved their software, our procedures and views became invalid. This was in an older version of SQL Server, though. I think since 2000, I haven't seen much problem with incorrect column order. When Access used to link to SQL tables, I believe it locked in the column definitions at time of table linking, which obviously has problems with almost any table definition changes.
I think the simplest way would be to re-create the table the way you want it under a different name, then copy the data over from the existing table, drop it, and rename the new table.
Would it perhaps be possible to script the table with all its data?
Do an edit on the script file in something like Notepad++,
thus recreating the table with the new columns but the same data.
Just a suggestion, but it might take a while to accomplish this.
Unless you write yourself a small C# application that can work with the file and apply rules to it.
If only Notepad++ supported a find-and-move operation.