I have about 100 columns in my table, 50 of which need to be changed to (smallmoney, null) format. Currently they are all (varchar(3), null).
How do I do that for only the columns I want? Is there a quick way? Let's pretend I have 5 columns:
col1 (varchar(3), null)
col2 (varchar(3), null)
col3 (varchar(3), null)
col4 (varchar(3), null)
col5 (varchar(3), null)
how do I make them look like this:
col1 (smallmoney, null)
col2 (smallmoney, null)
col3 (smallmoney, null)
col4 (varchar(3), null)
col5 (varchar(3), null)
You can programmatically create the ALTER script and then execute it. I just chopped this out; you'll need to validate the syntax:
SELECT
    'ALTER TABLE "' + TABLE_NAME + '" ALTER COLUMN "' + COLUMN_NAME + '" SMALLMONEY'
FROM
    INFORMATION_SCHEMA.COLUMNS
WHERE
    TABLE_NAME = 'MyTable'
    AND COLUMN_NAME LIKE 'Pattern%'
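Against the five-column example above (assuming the target columns share a pattern like 'col%' that you supply), the query emits one row per matching column, ready to paste into a new window and run:
ALTER TABLE "MyTable" ALTER COLUMN "col1" SMALLMONEY
ALTER TABLE "MyTable" ALTER COLUMN "col2" SMALLMONEY
ALTER TABLE "MyTable" ALTER COLUMN "col3" SMALLMONEY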
Give this a shot, but make a backup of the table first... no idea how the automatic conversion of that data will go. Note that SQL Server only accepts one column per ALTER TABLE ... ALTER COLUMN, so you need a separate statement for each:
alter table badlyDesignedTable alter column col1 smallmoney;
alter table badlyDesignedTable alter column col2 smallmoney;
alter table badlyDesignedTable alter column col3 smallmoney;
edit: changed syntax
You can query the system tables or ANSI views for the columns in question and generate the ALTER table statements. This
select SQL = 'alter table'
    + ' ' + TABLE_SCHEMA
    + '.'
    + TABLE_NAME
    + ' ' + 'alter column'
    + ' ' + COLUMN_NAME
    + ' ' + 'smallmoney'
from INFORMATION_SCHEMA.COLUMNS
where TABLE_SCHEMA = 'dbo'
and TABLE_NAME = 'MyTable'
order by ORDINAL_POSITION
will generate an alter table statement for every column in the table. You'll need to either filter it in the where clause or paste the results into a text editor and remove the ones you don't want.
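If the 50 target columns happen to be exactly the varchar(3) ones, the metadata can do the filtering for you; a sketch (assuming all of the columns to convert, and only those, are currently varchar(3)):
where TABLE_SCHEMA = 'dbo'
  and TABLE_NAME = 'MyTable'
  and DATA_TYPE = 'varchar'
  and CHARACTER_MAXIMUM_LENGTH = 3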
Read up on ALTER TABLE ... ALTER COLUMN though... modifying a column with ALTER COLUMN comes with constraints and limitations, especially if the column is indexed. The ALTER TABLE will fail if any value can't be converted to the target data type. For a varchar-to-smallmoney conversion, you'll fail if any row contains anything that doesn't look like a literal of the appropriate type; if a value won't convert with CONVERT(smallmoney, ...), the ALTER TABLE will fail. If the column contains the empty string ('') or whitespace (' '), the conversion will most likely succeed (in the case of a smallmoney target, I suspect you'll get 0.0000 as a result).
Bear in mind that multiple values may wind up converted to the same value in the target datatype. This can hose indexes.
If you're trying to convert a nullable column to a non-nullable one, you'll first need to ensure that every row has a non-null value. Otherwise the conversion will fail.
Good luck.
I have a table import.hugo (import is the schema) and I need to change the data type of all its columns from text to numeric. The table already has data, so to alter a column (for example y) I use this:
ALTER TABLE import.hugo ALTER COLUMN y TYPE numeric USING (y::numeric);
But the table has many columns and I don't want to do it manually.
I found something here and tried it like this:
do $$
declare
    t record;
begin
    for t IN select column_name, table_name
             from information_schema.columns
             where table_name = 'hugo' AND data_type = 'text'
    loop
        execute 'alter table ' || t.table_name || ' alter column ' || t.column_name || ' type numeric';
    end loop;
end$$;
When I run it, it doesn't work, it says: relation "hugo" does not exist
I tried many many more variants of this code, but I can't make it work.
I also don't know how to work the USING (y::numeric) part (from the very first command) into this code.
Ideally, I need it in a function, where the user can define the name of a table, in which we are changing the columns data type. So function to call like this SELECT function_name(table_name_to_alter_columns);
Thanks.
Selecting table_name from information_schema.columns isn't enough, since the import schema is not in your search path. You can add the import schema to your search_path by appending import to whatever shows up in SHOW search_path, or you can just add the table_schema column to your DO block:
do $$
declare
    t record;
begin
    for t IN select column_name, table_schema || '.' || table_name as full_name
             from information_schema.columns
             where table_name = 'hugo' AND data_type = 'text'
    loop
        execute 'alter table ' || t.full_name || ' alter column ' || t.column_name || ' type numeric USING (' || t.column_name || '::numeric)';
    end loop;
end$$;
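To get the SELECT function_name(table_name_to_alter_columns) call style you asked for, the same loop can live in a plpgsql function. A minimal sketch, with an illustrative function name, using the pg_attribute catalog plus quote_ident/format to survive odd identifiers:
create or replace function text_cols_to_numeric(_tbl regclass)
returns void
language plpgsql as
$func$
declare
    _col text;
begin
    for _col in
        select quote_ident(attname)
        from pg_attribute
        where attrelid = _tbl              -- regclass resolves 'import.hugo', schema included
          and attnum > 0                   -- skip system columns
          and not attisdropped
          and atttypid = 'text'::regtype   -- only text columns
    loop
        execute format('alter table %s alter column %s type numeric using (%s::numeric)',
                       _tbl, _col, _col);
    end loop;
end
$func$;

-- usage:
-- select text_cols_to_numeric('import.hugo');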
I'm creating a grid that has two columns: Name and HotelId. The problem is that data for this grid should be sent with a single parameter of VARCHAR type and should look like this:
#Parameter = 'Name1:5;Name2:10;Name3:6'
As you can see, the parameter contains a Name and a number that represents the ID value, and there can be multiple such entries, separated by the ";" symbol.
My first idea was to write a query that creates a temp table that will have two columns and populate it with data from the parameter.
How could I achieve this? It seems like I need to split the parameter two times: by the ";" symbol for each row and then by ":" symbol for each column.
How should I approach this?
Also, if there is any other more appropriate solution, I'm open to suggestions.
First, drop the #temp table if it exists...
IF OBJECT_ID('tempdb..#temp', 'U') IS NOT NULL
/*Then it exists*/
DROP TABLE #temp
Then create the #temp table:
CREATE TABLE #temp (v1 VARCHAR(100))
Then declare the @Parameter and the delimiter...
DECLARE @Parameter VARCHAR(50)
SET @Parameter = 'Name1:5;Name2:10;Name3:6'
DECLARE @delimiter nvarchar(1)
SET @delimiter = N';';
Here we insert all the @Parameter values into the #temp table, split on ';'...
INSERT INTO #temp (v1)
SELECT * FROM (
    SELECT v1 = LTRIM(RTRIM(vals.node.value('(./text())[1]', 'nvarchar(4000)')))
    FROM (
        SELECT x = CAST('<root><data>' + REPLACE(@Parameter, @delimiter, '</data><data>') + '</data></root>' AS XML).query('.')
    ) v
    CROSS APPLY x.nodes('/root/data') vals(node)
) abc
After inserting the values into the #temp table, split each row on ':' to get the two columns...
SELECT LEFT(v1, CHARINDEX(':', v1) - 1) AS Name, STUFF(v1, 1, CHARINDEX(':', v1), '') AS HotelId FROM #temp
Then you will get this type of output:
Name   HotelId
Name1  5
Name2  10
Name3  6
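On SQL Server 2016 and later there is a shorter route (a sketch, not part of the answer above): the built-in STRING_SPLIT replaces the XML trick entirely.
DECLARE @Parameter VARCHAR(50) = 'Name1:5;Name2:10;Name3:6';

SELECT
    LEFT(value, CHARINDEX(':', value) - 1)     AS Name,    -- text before the ':'
    STUFF(value, 1, CHARINDEX(':', value), '') AS HotelId  -- text after the ':'
FROM STRING_SPLIT(@Parameter, ';');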
I'm using SQL Server 2008 R2. I have 2 tables old and new and about 500k rows.
I need to convert data from old to new. Some column types were changed; for example, many columns that are varchar in the old table are int in the new one.
I'm executing query like this:
INSERT INTO new (xxx)
SELECT yyy FROM old
And get following error:
Msg 245, Level 16, State 1, Line 4
Conversion failed when converting the nvarchar value 'Tammi ' to data type int.
This error shows that the old table has some rows with bad data in certain columns. (Human factor.)
But how can I find these wrong rows? Is it possible?
How can I find in what column wrong data is present?
This is a pain. But, to find values that cannot be converted to ints, try this:
select yyyy
from old
where yyyy like '%[^0-9]%';
In SQL Server 2012+, you can use try_convert():
select yyyy
from old
where try_convert(int, yyyy) is null;
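One caveat: try_convert also returns null when the input itself is null, so on a nullable column you'll want to exclude genuine NULLs:
select yyyy
from old
where try_convert(int, yyyy) is null
  and yyyy is not null;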
You could execute the code that this T-SQL statement generates (just change the table name):
DECLARE @TableName SYSNAME = 'DataSource'

SELECT 'SELECT * FROM ' + @TableName + ' WHERE ' +
STUFF
(
    (
        SELECT 'OR ISNUMERIC([' + name + '] + ''.e0'') = 0 '
        FROM sys.columns
        WHERE object_id = OBJECT_ID(@TableName)
        FOR XML PATH(''), TYPE
    ).value('.', 'VARCHAR(MAX)')
    ,1
    ,3
    ,''
);
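The '.e0' suffix is the usual ISNUMERIC trick for integer checks: '1' becomes '1.e0', which still parses as a number, while '0.5' becomes '0.5.e0' and 'A' becomes 'A.e0', which do not, so only integer-shaped values pass.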
For example, if we have the following table:
IF OBJECT_ID('DataSource') IS NOT NULL
BEGIN
DROP TABLE DataSource;
END;
GO
CREATE TABLE DataSource
(
A VARCHAR(12)
,B VARCHAR(12)
,C VARCHAR(12)
);
GO
INSERT DataSource ([A], [B], [C])
VALUES ('1', '2', '3')
,('0.5', '4', '2')
,('1', '2', 'A');
GO
The script will generate this statement:
SELECT * FROM DataSource WHERE ISNUMERIC([A] + '.e0') = 0 OR ISNUMERIC([B] + '.e0') = 0 OR ISNUMERIC([C] + '.e0') = 0
returning two of the rows (because '0.5' and 'A' cannot be converted to int):
A    B    C
0.5  4    2
1    2    A
I have just completed the process of loading new tables with data. I'm currently trying to validate the data. The way I have designed my database there really shouldn't be any values anywhere that are NULL so i'm trying to find all rows with any NULL value.
Is there a quick and easy way to do this instead of writing a lengthy WHERE clause with OR statements checking each column?
UPDATE: A little more detail... NULL values are valid initially as sometimes the data is missing. It just helps me find out what data I need to hunt down elsewhere. Some of my tables have over 50 columns so writing out the whole WHERE clause is not convenient.
Write a query against Information_Schema.Columns that outputs the SQL for your very long where clause.
Here's something to get you started:
select 'OR ([' + TABLE_SCHEMA + '].[' + TABLE_NAME + '].[' + COLUMN_NAME + '] IS NULL)'
from mydatabase.Information_Schema.Columns
order by TABLE_NAME, ORDINAL_POSITION
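Paste the generated lines under a seed predicate to form the finished query; a sketch against a hypothetical dbo.MyTable (table and column names illustrative):
SELECT * FROM [dbo].[MyTable]
WHERE 1 = 0
   OR ([dbo].[MyTable].[col1] IS NULL)
   OR ([dbo].[MyTable].[col2] IS NULL)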
The short answer: use SET CONCAT_NULL_YIELDS_NULL ON, bung the whole thing together as a string, and check that for NULL (once). That way any null will propagate through and make the whole row comparison null.
Here's the silly sample code to demo the principle; up to you if you want to wrap that in an auto-generating schema script (to only check nullable columns and do all the appropriate conversions). Efficient it ain't, but almost any way you cut it you will need to do a table scan anyway.
CREATE TABLE dbo.Example
(
    PK INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
    A nchar(10) NULL,
    B int NULL,
    C nvarchar(50) NULL
) ON [PRIMARY]
GO
INSERT dbo.Example(A, B, C)
VALUES('Your Name', 1, 'Not blank'),
('My Name', 3, NULL),
('His Name', NULL, 'Not blank'),
(NULL, 5, 'It''s blank');
SET CONCAT_NULL_YIELDS_NULL ON
SELECT E.PK
FROM dbo.Example E
WHERE (E.A + CONVERT(VARCHAR(32), E.B) + E.C) IS NULL
SET CONCAT_NULL_YIELDS_NULL OFF
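Against the four sample rows, this returns PK values 2, 3 and 4: every row where at least one of A, B or C is NULL.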
As mentioned in a comment, if you really expect columns to not be null, then put NOT NULL constraints on them. That said...
Here's a slightly different approach, using INFORMATION_SCHEMA:
DECLARE @sql NVARCHAR(max) = '';

SELECT @sql = @sql + 'UNION ALL SELECT ''' + cnull.TABLE_NAME + ''' as TableName, '''
    + cnull.COLUMN_NAME + ''' as NullColumnName, '''
    + pk.COLUMN_NAME + ''' as PkColumnName,'
    + 'CAST(' + pk.COLUMN_NAME + ' AS VARCHAR(500)) as PkValue '
    + ' FROM ' + cnull.TABLE_SCHEMA + '.' + cnull.TABLE_NAME
    + ' WHERE ' + cnull.COLUMN_NAME + ' IS NULL '
FROM INFORMATION_SCHEMA.COLUMNS cnull
INNER JOIN (SELECT Col.Column_Name, col.TABLE_NAME, col.TABLE_SCHEMA
            FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS Tab
            INNER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE Col
                ON Col.Constraint_Name = Tab.Constraint_Name AND Col.Table_Name = Tab.Table_Name
            WHERE CONSTRAINT_TYPE = 'PRIMARY KEY') pk
    ON pk.TABLE_NAME = cnull.TABLE_NAME AND cnull.TABLE_SCHEMA = pk.TABLE_SCHEMA
WHERE cnull.IS_NULLABLE = 'YES'

SET @sql = SUBSTRING(@sql, 11, LEN(@sql)) -- remove the initial 'UNION ALL '
EXEC (@sql)
Rather than one huge where clause, this will tell you the primary key of every row that has a NULL in any field of its table. Note that I'm CASTing all primary key values to avoid operand clashes if you have some that are int/varchar/uniqueidentifier etc. If you have a PK that doesn't fit into a VARCHAR(500), you probably have other problems....
This would probably need some tweaking if you have any tables with composite primary keys - as it is, I'm pretty sure it would just output separate rows for each member of the key instead of concatenating them, and wouldn't necessarily group them together the way you'd want.
One other thought would be to just SELECT * from every table and save the output to a format (Excel, plain-text CSV) you can easily search for the string NULL.
I have a table and some columns in this table contain empty spaces for some records. Now I need to move the data to another table and replace the empty spaces with a NULL value.
I tried to use:
REPLACE(ltrim(rtrim(col1)),' ',NULL)
but it doesn't work. It will convert all of the values of col1 to NULL. I just want to convert only those values that have empty spaces to NULL.
I solved a similar problem using NULLIF function:
UPDATE table
SET col1 = NULLIF(col1, '')
From the T-SQL reference:
NULLIF returns the first expression if the two expressions are not equal. If the expressions are equal, NULLIF returns a null value of the type of the first expression.
Did you try this?
UPDATE table
SET col1 = NULL
WHERE col1 = ''
As the commenters point out, you don't have to do ltrim() or rtrim(), and NULL columns will not match ''.
SQL Server ignores trailing spaces when comparing strings, so ' ' = ''. Just use the following query for your update:
UPDATE table
SET col1 = NULL
WHERE col1 = ''
NULL values in your table will stay NULL, and col1 values made up of any number of space-only characters will be changed to NULL.
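A quick throwaway check (not part of the fix) shows the padding behavior:
SELECT CASE WHEN '   ' = '' THEN 'equal' ELSE 'not equal' END;
-- returns 'equal': trailing spaces are ignored when comparing strings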
If you want to do it during your copy from one table to another, use this:
INSERT INTO newtable ( col1, othercolumn )
SELECT
NULLIF(col1, ''),
othercolumn
FROM table
This code generates some SQL which can achieve this on every table and column in the database:
SELECT
'UPDATE ['+T.TABLE_SCHEMA+'].[' + T.TABLE_NAME + '] SET [' + COLUMN_NAME + '] = NULL
WHERE [' + COLUMN_NAME + '] = '''''
FROM
INFORMATION_SCHEMA.columns C
INNER JOIN
INFORMATION_SCHEMA.TABLES T ON C.TABLE_NAME=T.TABLE_NAME AND C.TABLE_SCHEMA=T.TABLE_SCHEMA
WHERE
DATA_TYPE IN ('char','nchar','varchar','nvarchar')
AND C.IS_NULLABLE='YES'
AND T.TABLE_TYPE='BASE TABLE'
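For a hypothetical table dbo.Hotels with nullable varchar columns Name and City (names illustrative, not from the question), the generator would emit one UPDATE per column:
UPDATE [dbo].[Hotels] SET [Name] = NULL
WHERE [Name] = ''
UPDATE [dbo].[Hotels] SET [City] = NULL
WHERE [City] = ''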
A case statement should do the trick when selecting from your source table:
CASE
WHEN col1 = ' ' THEN NULL
ELSE col1
END col1
Also, one thing to note is that your LTRIM and RTRIM reduce the value from a space (' ') to blank (''). If you need to remove white space, then the case statement should be modified appropriately:
CASE
WHEN LTRIM(RTRIM(col1)) = '' THEN NULL
ELSE LTRIM(RTRIM(col1))
END col1
Maybe something like this?
UPDATE [MyTable]
SET [SomeField] = NULL
WHERE [SomeField] is not NULL
AND LEN(LTRIM(RTRIM([SomeField]))) = 0
here's a pattern-matching one for ya.
update table
set col1 = null
where col1 not like '%[a-z0-9]%'
essentially finds any columns that don't have letters or numbers in them and sets them to null. You might have to adjust the pattern if you have values made up of just special characters, since those would be nulled too.