I'm making changes to a SQL Server 2008 database to change an existing column's type from a UUID (uniqueidentifier) to a varchar(128). The column, which we'll call reference, was originally used to store the ID of a reference project that provided more information about the data in the row, but the business requirements have changed and we're now going to allow free-form text instead. There were no foreign keys set up against the column, so no relationships will break.
My biggest concern at the moment is one of backwards compatibility with existing stored procedures. Unfortunately, the database is very big and very old, and contains hundreds of stored procs littered throughout the design. I'm very nervous about making a change which could potentially break existing stored procs.
I know this is a loaded question, but in general, can UUID columns be converted into varchar columns without deleterious effects? It seems like any existing stored proc which inserted into or queried on the UUID column would still work, given that UUIDs can be represented as strings.
I tried the steps below and didn't see any issues, so I think you can go ahead and change the column's data type:
Created a table with the column type uniqueidentifier.
Created a stored procedure that inserts a value into that table via the NEWID() function.
Executed the stored procedure, and the data was inserted without any issue.
Changed the column type to varchar and executed the stored procedure again.
The procedure ran fine without any issue, and the data was inserted.
So, the answer is Yes, you can change the column data type.
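Here's a minimal repro sketch of that test (all object names here are hypothetical). It works because uniqueidentifier converts implicitly to varchar, so the unchanged procedure keeps working after the ALTER:

CREATE TABLE dbo.RefTest (reference UNIQUEIDENTIFIER);
GO
CREATE PROCEDURE dbo.InsertRefTest AS
    INSERT INTO dbo.RefTest (reference) VALUES (NEWID());
GO
EXEC dbo.InsertRefTest;  -- inserts a UUID

ALTER TABLE dbo.RefTest ALTER COLUMN reference VARCHAR(128);
EXEC dbo.InsertRefTest;  -- still works: NEWID() is implicitly cast to varchar
GO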
I am altering the column datatype for a table with around 100 million records using the below query:
ALTER TABLE dbo.TARGETTABLE
ALTER COLUMN XXX_DATE DATE
The column values are in the right date format as I inserted original date from a valid data source.
However, the query has been running for a long time, and even when I attempt to cancel it, the cancellation seems to take forever.
Can anyone explain what is happening behind the scenes in SQL Server when an ALTER TABLE statement is executed, and why it requires so many resources?
There are a lot of variables that will make these ALTER statements make multiple passes through your table and make heavy use of tempdb, and depending on the efficiency of tempdb it could be very slow. Examples include whether or not the column you are changing is in an index (especially the clustered index, since non-clustered indexes carry the clustering key).
Instead of altering the table, I'll give you one simple example you can try.
Suppose your table name is tblTarget1:
Create another table (tblTarget2) with the same structure.
Change the data type of the column in tblTarget2.
Copy the data from tblTarget1 to tblTarget2 using an INSERT INTO ... SELECT query.
Drop the original table (tblTarget1).
Rename tblTarget2 to tblTarget1.
The main reason is that changing the data type in place involves a lot of data transfer and data page realignment.
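A hedged sketch of those steps, assuming tblTarget1 has just an ID column plus the XXX_DATE column from the question:

CREATE TABLE dbo.tblTarget2 (
    ID INT NOT NULL,
    XXX_DATE DATE NULL  -- the new, narrower type
);
INSERT INTO dbo.tblTarget2 (ID, XXX_DATE)
SELECT ID, CAST(XXX_DATE AS DATE)
FROM dbo.tblTarget1;
DROP TABLE dbo.tblTarget1;
EXEC sp_rename 'dbo.tblTarget2', 'tblTarget1';

With TABLOCK under the simple or bulk-logged recovery model, the INSERT ... SELECT can be minimally logged, which is largely why this can beat an in-place ALTER on a table this size.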
For more information you can follow this link.
Another approach to do this is the following:
Add a new column to the table: [_date] date
Using batched updates you can transfer the values from the old column to the new one without blocking the table for other users.
Then in one transaction do the following:
update any rows inserted after the batched update finished
drop the old column
rename the new column
Note: if you have an index on this column, you need to drop it before dropping the old column and recreate it after renaming the new one.
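A hedged sketch of that sequence, using the table and column from the question (the batch size is arbitrary):

ALTER TABLE dbo.TARGETTABLE ADD [_date] DATE NULL;
GO
-- Backfill in small batches so locks stay short:
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) t
    SET t.[_date] = CAST(t.XXX_DATE AS DATE)
    FROM dbo.TARGETTABLE t
    WHERE t.[_date] IS NULL AND t.XXX_DATE IS NOT NULL;
    IF @@ROWCOUNT = 0 BREAK;
END
GO
-- Final swap in one transaction:
BEGIN TRANSACTION;
UPDATE dbo.TARGETTABLE
SET [_date] = CAST(XXX_DATE AS DATE)
WHERE [_date] IS NULL AND XXX_DATE IS NOT NULL;  -- catch stragglers
ALTER TABLE dbo.TARGETTABLE DROP COLUMN XXX_DATE;
EXEC sp_rename 'dbo.TARGETTABLE._date', 'XXX_DATE', 'COLUMN';
COMMIT;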
I want to rename a column in a table. I see two approaches:
1. Use sp_rename() and modify stored procedures to refer to the new name.
2. Create a new column, copy data from the old column to the new one, modify stored procedures etc. to refer to the new column, and eventually drop the old column.
We can't use #1, as renaming the column might leave stored procedures broken and we cannot afford any downtime.
If we go with #2, there is a possibility that both old and new columns would co-exist for some time after the data is copied over, but before the stored procedures are deployed to use the new column.
Is there any way to keep the new column in sync with any updates/inserts/deletes done to the old column?
Can AFTER triggers help here? Triggers usually increase transaction time, though, so they may not be a favorable solution.
Can I replicate data between two columns of the same table?
Any other possible solutions?
Also, does sp_rename() cleanly update all of the references to the column, like stored procedures, functions, indexes, etc.?
First confirm there are no references to the column from outside the database like application code directly querying the column without going through stored procedures.
Here is how I renamed the column without causing downtime:
1. Add a new column beside the existing column. The new column has the same data type as the old one but the new name. Also create indexes, modify replication, etc. on the new column based on your use cases.
2. Modify all stored procedures that write to (insert/update) the old column to also insert/update the new column.
3. Copy over the data from the old column to the new column for existing records. Steps 2 and 3, together, now ensure that the new column remains in sync with the old one.
4. Modify all stored procedures that read from the old column to read from the new column.
Now that all code has transitioned to the new column, clean up:
5. Modify the stored procedures from Step 2 to stop referring to the old column.
6. Drop the old column.
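For completeness, the AFTER trigger idea from the question could also keep the two columns in sync during the transition, instead of touching every writing procedure in Step 2. A hedged sketch with hypothetical names (dbo.Orders keyed by OrderId, old varchar column OldRef, new column NewRef); the trigger must be dropped before the old column is:

CREATE TRIGGER dbo.trg_SyncNewRef ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Mirror writes to the old column into the new one:
    UPDATE o
    SET NewRef = i.OldRef
    FROM dbo.Orders o
    JOIN inserted i ON i.OrderId = o.OrderId
    WHERE ISNULL(o.NewRef, '') <> ISNULL(i.OldRef, '');
END

But as the question itself notes, triggers add to transaction time, so measure before committing to this.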
You could rename your table and create a view using the old table name and have the view include an alias for your column.
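Something like this, sketched with hypothetical names; the view exposes the renamed column under its old name, so existing procedures keep working, and a simple single-table view like this remains updatable:

EXEC sp_rename 'dbo.Orders.OldRef', 'NewRef', 'COLUMN';
EXEC sp_rename 'dbo.Orders', 'Orders_base';
GO
CREATE VIEW dbo.Orders
AS
SELECT OrderId,
       NewRef AS OldRef  -- old name preserved for existing callers
FROM dbo.Orders_base;
GO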
SQL Prompt by Redgate has a feature called Smart Rename which can rename a column and update all of the references to the new name.
From SQL Prompt 7 documentation:
SQL Prompt can create a script that allows you to rename objects in your database without breaking dependencies. You can rename the following:
Tables (including columns)
Views (including columns)
Stored procedures (including parameters)
Functions (including parameters)
When an object is renamed:
SQL Prompt also modifies any objects that reference, or are referenced by, the renamed object to ensure that dependency links are not broken.
If you have previously renamed an object using SQL Server Management Studio or Enterprise Manager Rename, or the T-SQL sp_rename command, the object definition will contain the original name.
Any objects that reference this original name are not updated.
To help you locate objects that reference objects that no longer exist, see Finding invalid objects.
The original permissions and extended properties of the object are preserved.
I need to change a database column from integer to string/text but I am not sure how to go about it.
This column is meant to store identification numbers, but recently the ID format changed and now the IDs contain ASCII characters as well (so with this change the new IDs cannot be stored as integers).
The application I am updating is written in Delphi 7 and uses the odbcexpress components for the SQL Server library.
Is it possible to use ALTER TABLE for this? Or does the data need to be copied to a new string column, the old column deleted, and the new column renamed to the old name?
Can you provide an example on how I might do this? I am not very familiar with the workings of SQL Server.
Thanks!
ALTER TABLE is precisely what you want to do.
Your SQL might look something like this:
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn VARCHAR(20) NOT NULL;
Note that if you have columns that reference this one, you will have to update those as well, generally by dropping the foreign key constraints temporarily, making your changes, then recreating your foreign key constraints.
Don't forget to change anything that is dependent or downstream as well, such as any variables in stored procedures or your Delphi code.
Additional info related to comments (thanks, all):
This ALTER COLUMN operation will preserve the data, as it is implicitly cast to the new type. An int casts to varchar without a problem, as long as the varchar is wide enough to accommodate the largest converted value. For total safety with ints, I often use varchar(11) or larger in order to handle the widest int value, -2147483648, which is 11 characters.
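A quick sanity check of that width, as a sketch:

DECLARE @v INT = -2147483648;              -- the minimum int value
SELECT CAST(@v AS VARCHAR(11)) AS widest;  -- '-2147483648', exactly 11 characters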
ALTER TABLE your_table ALTER COLUMN your_column_name varchar(255) NULL;
My company has an application with a bunch of database tables that used to use a sequence table to determine the next value to use. Recently, we switched this to using an identity property. The problem is that in order to upgrade a client to the latest version of the software, we have to change about 150 tables to identity. To do this manually, you can right-click on a table, choose Design, change (Is Identity) to "Yes" and then save the table. From what I understand, in the background, SQL Server exports the data to a temporary table, drops the table and then copies everything back into a new table. Clients may have their own unique indexes and possibly other things specific to the client, so making a generic script isn't really an option.
It would be really awesome if there was a stored procedure for scripting this task rather than doing it in the GUI (which takes FOREVER). We made a macro that can go through and do this, but even then, it takes a long time to run and is error-prone. Something like: exec sp_change_to_identity 'table_name', 'column name'
Does something like this exist? If not, how would you handle this situation?
Update: This is SQL Server 2008 R2.
This is what SSMS seems to do:
1. Obtain and drop all the foreign keys pointing to the original table.
2. Obtain the indexes, triggers, foreign keys and statistics of the original table.
3. Create a temp_table with the same schema as the original table, with the identity field.
4. Insert into temp_table all the rows from the original table (with IDENTITY_INSERT ON).
5. Drop the original table (this will drop its indexes, triggers, foreign keys and statistics).
6. Rename temp_table to the original table name.
7. Recreate the foreign keys obtained in (1).
8. Recreate the objects obtained in (2).
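A hedged sketch of scripting that swap for a single table, assuming a hypothetical dbo.MyTable(ID int, Name varchar(50)) and leaving out the index/trigger/FK recreation steps:

CREATE TABLE dbo.Tmp_MyTable (
    ID INT IDENTITY(1, 1) NOT NULL,
    Name VARCHAR(50) NULL
);
SET IDENTITY_INSERT dbo.Tmp_MyTable ON;
INSERT INTO dbo.Tmp_MyTable (ID, Name)
SELECT ID, Name FROM dbo.MyTable;
SET IDENTITY_INSERT dbo.Tmp_MyTable OFF;
DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.Tmp_MyTable', 'MyTable';
-- Belt and braces: make sure new identity values continue past the copied ones.
DBCC CHECKIDENT ('dbo.MyTable', RESEED);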
I'm pretty good around Oracle but I've been struggling to find a decent solution to a problem I'm having with Sybase.
I have a table with an IDENTITY column that uses a user-defined datatype (UDD) "id", which is numeric(10,0). I've decided to replace the UDD with the native datatype, but I get an error when I do this.
I've found that the only way to do this is:
Rename the original table (table_a to table_a_backup) using the procedure sp_rename
Recreate the original table (table_a) but use native data types
Copy the contents of the backup table to the original (i.e. insert into table_a select * from table_a_backup)
This works; however, I have over 10M records and it eventually runs out of log segment and halts (I can't increase the segment any more due to physical constraints).
Does anybody have a solution, preferably not a solution which would involve processing the records as anything but one large set?
Cheers,
JLove
Conceptually, something like this works (in Sybase ASE 12.5.x):
Do an "alter table ... drop" on your current ID column.
Do an "alter table ... add" statement to add the new column (with the native datatype) and the IDENTITY attribute.
Note that the ID field might not have the same numbers, so be very wary of doing the above if the ID field is used as an explicit or implicit key to other tables.