I have too many columns with DECIMAL(A,B). Some of them have a column default, some of them are nullable, etc.
Instead of using:
ALTER TABLE TABLE_NAME ALTER COLUMN COLUMN_NAME DECIMAL(A,C)
Is there a method that simply updates the SCALE of the DECIMAL?
You can try with the MODIFY keyword:
ALTER TABLE "table_name" MODIFY "column_name" "New Data Type";
Here is one way
DECLARE @sql VARCHAR(max) = ''
SET @sql = (SELECT 'ALTER TABLE TABLE_NAME ALTER COLUMN ' + COLUMN_NAME + ' DECIMAL(A,C);'
FROM information_schema.columns
WHERE table_name = 'TABLE_NAME'
AND data_type = 'DECIMAL'
AND NUMERIC_SCALE = --B
--add relevant filters
FOR xml path(''))
--print @sql
EXEC (@sql)
Don't forget to replace A and C with the proper precision and scale.
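For example, with TABLE_NAME, A and C replaced, and assuming a hypothetical Invoices table whose DECIMAL(18,2) columns Amount and Tax should become DECIMAL(18,4), PRINT @sql would show a batch along these lines (FOR XML PATH('') concatenates the statements without separators):
ALTER TABLE Invoices ALTER COLUMN Amount DECIMAL(18,4);ALTER TABLE Invoices ALTER COLUMN Tax DECIMAL(18,4);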
You can achieve it by executing a dynamic SQL query. Use the STUFF function to concatenate the ALTER statements, retrieving the column names and other details from information_schema.columns. Create another variable to hold the new numeric_scale value and take numeric_precision from information_schema.columns itself.
Query
declare @sql as varchar(max);
declare @i as int = 3; -- change accordingly
select @sql = stuff((
select 'alter table ' + [table_name]
+ ' alter column ' + [column_name] + ' decimal('
+ cast([numeric_precision] as varchar(100)) + ',' + cast(@i as varchar(100)) + ');'
from information_schema.columns
where table_name = 'your_table_name'
and data_type = 'decimal'
for xml path('')
)
, 1, 0, ''
);
exec(@sql);
I am trying to set a default value for a column (Inserted_Time), but first I need to check whether the column exists in the tables. If the column doesn't exist, I need to add it and give it a default value.
I am working with SQL Server Management Studio.
So far I have written this code:
IF EXISTS ( select TABLE_NAME from INFORMATION_SCHEMA.COLUMNS where TABLE_CATALOG = 'DB_COPY' and COLUMN_NAME = 'Inserted_Time')
begin
ALTER TABLE table_name ADD CONSTRAINT [Inserted_Time_Def] SET DEFAULT (sysdatetimeoffset()) FOR [Inserted_Time]
end
else
ALTER TABLE table_name ADD COLUMN [Inserted_Time] CONSTRAINT [Inserted_Time_Def] DEFAULT (sysdatetimeoffset()) WITH VALUES
Once I retrieve the tables that have the column, I need to add that table name to the ALTER command, but I am not able to do that. Can someone please tell me how to use the table names retrieved from the SELECT statement in the ALTER statement?
First, you want to put all the table names in a temporary table so you can loop through it.
Afterwards, you can use a cursor to execute a command for each table name.
In my example, I only printed the command I wanted to execute. That way you can be sure the code will do what you want first.
Example:
select TABLE_NAME As TableName INTO #TablesList from INFORMATION_SCHEMA.COLUMNS where TABLE_CATALOG = 'DB_COPY' and COLUMN_NAME = 'Inserted_Time'
DECLARE @TablesCursor as CURSOR;
DECLARE @TableName as NVARCHAR(max);
DECLARE @CommandToExecute as NVARCHAR(max);
SET @TablesCursor = CURSOR FOR SELECT TableName FROM #TablesList;
OPEN @TablesCursor;
FETCH NEXT FROM @TablesCursor INTO @TableName;
WHILE @@FETCH_STATUS = 0
BEGIN
SET @CommandToExecute = 'ALTER TABLE ' + @TableName + ' WHAT YOU WANNA DO '
PRINT @CommandToExecute
--EXEC(@CommandToExecute)
FETCH NEXT FROM @TablesCursor INTO @TableName;
END
CLOSE @TablesCursor;
DEALLOCATE @TablesCursor;
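For this question specifically, the placeholder could be filled in along these lines (a sketch that assumes the tables in #TablesList already contain the Inserted_Time column, so only the default constraint is added):
SET @CommandToExecute = 'ALTER TABLE ' + @TableName
+ ' ADD CONSTRAINT [Inserted_Time_Def] DEFAULT (sysdatetimeoffset()) FOR [Inserted_Time];'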
Assuming that every table is in a different schema, you could do something like this:
DECLARE @SQL nvarchar(MAX);
SET @SQL = STUFF((SELECT NCHAR(13) + NCHAR(10) +
CASE WHEN EXISTS (SELECT 1
FROM INFORMATION_SCHEMA.COLUMNS C
WHERE T.TABLE_SCHEMA = C.TABLE_SCHEMA
AND T.TABLE_NAME = C.TABLE_NAME
AND C.COLUMN_NAME = N'Inserted_Time') THEN N'ALTER TABLE ' + QUOTENAME(T.TABLE_SCHEMA) + N'.' + QUOTENAME(T.TABLE_NAME) + N' ADD CONSTRAINT [Inserted_Time_Def] DEFAULT (sysdatetimeoffset()) FOR [Inserted_Time];'
ELSE N'ALTER TABLE ' + QUOTENAME(T.TABLE_SCHEMA) + N'.' + QUOTENAME(T.TABLE_NAME) + N' ADD [Inserted_Time] datetimeoffset CONSTRAINT [Inserted_Time_Def] DEFAULT (sysdatetimeoffset());' --note: ADD (not ADD COLUMN), and the new column needs a data type; adjust datetimeoffset as required
END
FROM INFORMATION_SCHEMA.TABLES T
WHERE T.TABLE_CATALOG = N'DB_COPY'
FOR XML PATH(N''),TYPE).value('.','nvarchar(MAX)'),1,2,N'');
PRINT @SQL; --Your best friend. If more than 4,000 characters, use SELECT
EXECUTE sp_executesql @SQL;
This will very likely hugely outperform a CURSOR solution if you have a large number of schemas.
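For illustration, with two hypothetical tables dbo.Orders (which already has the column) and sales.Invoices (which does not), PRINT @SQL would show something along these lines:
ALTER TABLE [dbo].[Orders] ADD CONSTRAINT [Inserted_Time_Def] DEFAULT (sysdatetimeoffset()) FOR [Inserted_Time];
ALTER TABLE [sales].[Invoices] ADD [Inserted_Time] datetimeoffset CONSTRAINT [Inserted_Time_Def] DEFAULT (sysdatetimeoffset());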
I need to convert all float data type columns to decimal with precision in all tables in a particular database.
I have the following code:
DECLARE @sql VARCHAR(8000)
SELECT
@sql = COALESCE(@sql + ',', '') +
CASE DATA_TYPE
WHEN 'float' THEN 'CAST(' + COLUMN_NAME + ' AS DECIMAL(28, 10))'
ELSE COLUMN_NAME
END
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
DATA_TYPE = 'float'
EXEC ('SELECT '+ @sql + ' FROM TABLE_NAME')
I also have another version with the decimal scale as a variable:
DECLARE
@sql NVARCHAR(MAX),
@DecimalPlace INT = 10
SELECT
@sql = COALESCE(@sql + ',', '') +
CASE DATA_TYPE
WHEN 'float'
THEN 'CAST(' + COLUMN_NAME + ' AS DECIMAL(28, ' + CAST(@DecimalPlace AS NVARCHAR) + '))'
ELSE COLUMN_NAME
END
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
DATA_TYPE = 'float'
EXEC ('SELECT '+ @sql + ' FROM TABLE_NAME')
I am currently getting the following error for my topmost script:
Msg 156, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'from'.
I see my issue... 'from TABLE_NAME' is invalid. Here's the SQL generated...
SELECT
CAST(DWF_ORDERS_EXCHANGE_RATE_DOL AS DECIMAL(28,10)),
CAST(DWF_ORDERS_ITEM_DISCOUNT_PRC AS DECIMAL(28,10)),
CAST(DWF_ORDERS_SALES_QTY AS DECIMAL(28,10)),
CAST(DWF_ORDERS_OPEN_QTY
FROM
TABLE_NAME
It appears as though my logic can only accommodate one table at a time. Is there a relatively easy way to modify my logic in order to accommodate multiple tables fed into a variable?
Additionally, I'm not sure what would be the best practice in order to keep the original float precision/scale if applicable.
Using SQL Server 2014.
The following script will iterate through all the tables with a float column and generate the dynamic SQL with the correct table name:
declare @Table table ([Name] nvarchar(max), Handled bit default(0));
declare @Sql nvarchar(max), @TableName nvarchar(max);
insert into @Table ([Name])
select TABLE_NAME
from INFORMATION_SCHEMA.COLUMNS
where DATA_TYPE = 'float'
group by TABLE_NAME;
while exists (select 1 from @Table where Handled = 0) begin
select top 1 @TableName = [Name] from @Table where Handled = 0;
set @Sql = null; -- reset between tables, otherwise the previous statement gets concatenated into the next one
select @Sql = coalesce(@Sql+',','') +
case DATA_TYPE
when 'float' then 'cast(' + COLUMN_NAME + ' as decimal(28,10))'
else COLUMN_NAME
end
from INFORMATION_SCHEMA.COLUMNS
where DATA_TYPE = 'float' and TABLE_NAME = @TableName;
set @Sql = 'select '+ @Sql + ' from ' + @TableName;
print(@Sql);
--exec (@Sql);
update @Table set Handled = 1 where @TableName = [Name];
end
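As a usage illustration, for a hypothetical table Rates with a single float column ExchangeRate, the printed statement for that iteration would be roughly:
select cast(ExchangeRate as decimal(28,10)) from Rates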
I have a table with 100 columns, and I need to get the count of distinct values for every column in this table based on some condition (a WHERE clause).
The query below works fine, but I'm not able to use the WHERE clause, so it gives the result for all the records of the table. I want it to be based on some condition, let's say file_id = 1. My question is how to use a WHERE clause with the query below, or whether there is an alternative way to solve this problem.
declare @SQL nvarchar(max)
set @SQL = ''
;with cols as (
select Table_Schema, Table_Name, Column_Name, Row_Number() over(partition by Table_Schema, Table_Name
order by ORDINAL_POSITION) as RowNum
from INFORMATION_SCHEMA.COLUMNS
)
select @SQL = @SQL + case when RowNum = 1 then '' else ' union all ' end
+ ' select ''' + Column_Name + ''' as Column_Name, count(distinct ' + quotename (Column_Name) + ' ) As DistinctCountValue,
count( '+ quotename (Column_Name) + ') as CountValue FROM ' + quotename (Table_Schema) + '.' + quotename (Table_Name)
from cols
where Table_Name = 'table_name' --print @SQL
execute (@SQL)
I am using the dynamic query because I need to reuse this query for other tables also.
First get the columns and use STUFF to generate the SELECT in this way:
SELECT COUNT(DISTINCT ColumnA) AS ColumnA, COUNT(DISTINCT ColumnB) AS ColumnB, COUNT(DISTINCT ColumnC) AS ColumnC ...
That way you only select from your table once to get all the counts. After that, use CROSS APPLY to "unpivot" those columns and return the output as one row per column:
CROSS APPLY(
VALUES(1, 'ColumnA', ColumnA), (2, 'ColumnB', ColumnB), (3, 'ColumnC', ColumnC)
) B (ID, ColumnName, DistinctCountValue)
For the filter, use sp_executesql and send the file_id as a parameter:
exec sp_executesql @SQL, N'@FID INT', @FID = @FileID
Since you are using all the columns of the table, Row_Number() over(partition by Table_Schema, Table_Name order by ORDINAL_POSITION) as RowNum becomes redundant; ORDINAL_POSITION already has the value you are looking for.
declare @tablename nvarchar(50) = 'MyTestTable'
declare @fileID int = 1
declare @SQL nvarchar(max)
set @SQL = ''
;with cols as (
select TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME = @tablename
)
select @SQL = ';WITH CTE AS (SELECT
' +
STUFF((
SELECT ', COUNT(DISTINCT ' + QUOTENAME(COLUMN_NAME) + ') AS ' + QUOTENAME(COLUMN_NAME)
FROM cols
ORDER BY ORDINAL_POSITION
FOR XML PATH('')
), 1, 1, '')
+ '
FROM ' + @tablename + '
WHERE File_ID = @FID
)
SELECT B.*
FROM CTE
CROSS APPLY (
VALUES ' +STUFF((
SELECT ',( ' + CAST(ORDINAL_POSITION AS VARCHAR) + ',' + QUOTENAME(COLUMN_NAME,'''') + ',' + QUOTENAME(COLUMN_NAME) + ')'
FROM cols
ORDER BY ORDINAL_POSITION
FOR XML PATH('')
), 1, 1, '') + '
)B (ID,ColumnName,DistinctCountValue)
'
from cols
exec sp_executesql @SQL, N'@FID INT', @FID = @fileID
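For a hypothetical MyTestTable with columns File_ID, Amount and Status, the generated statement comes out roughly like this, and is then executed with @FID bound to @fileID:
;WITH CTE AS (SELECT COUNT(DISTINCT [File_ID]) AS [File_ID], COUNT(DISTINCT [Amount]) AS [Amount], COUNT(DISTINCT [Status]) AS [Status]
FROM MyTestTable
WHERE File_ID = @FID
)
SELECT B.*
FROM CTE
CROSS APPLY (
VALUES ( 1,'File_ID',[File_ID]),( 2,'Amount',[Amount]),( 3,'Status',[Status])
)B (ID,ColumnName,DistinctCountValue)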
The query below builds a table of all the column names and uses a WHILE loop to select the count for whatever WHERE clause you want to use. This should be pretty flexible for any table; just update the variables at the top. Note that this will not count a column where its value is null; you can add a CASE to the @Query parameter if that's what you want. Since it processes each column individually, I added a temp table so you only hit the source table once.
IF OBJECT_ID('tempdb..##SourceValues') IS NOT NULL
DROP TABLE ##SourceValues
DECLARE @Schema VARCHAR(50) = 'SomeSchema'
DECLARE @Table VARCHAR(50) = 'SomeTable'
DECLARE @WhereClause VARCHAR(MAX) = ' Some WHERE clause'
DECLARE @ColumnName VARCHAR(50)
DECLARE @ProcessedRows TABLE(ColumnName VARCHAR(50), DistinctCount INT)
DECLARE @Columns TABLE(RowNumber INT, ColumnName VARCHAR(100))
INSERT INTO @Columns SELECT ROW_NUMBER() OVER(ORDER BY COLUMN_NAME DESC), COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @Table
DECLARE @Count INT = (SELECT MAX(RowNumber) FROM @Columns)
DECLARE @Counter INT = 0
DECLARE @DistinctCount INT
DECLARE @Query NVARCHAR(MAX)
EXEC('SELECT * INTO ##SourceValues FROM ' + @Schema + '.' + @Table + ' (NOLOCK)')
WHILE @Counter < @Count
BEGIN
SET @Counter += 1
SET @ColumnName = (SELECT ColumnName FROM @Columns WHERE RowNumber = @Counter)
SET @Query = 'SELECT @Output = COUNT(DISTINCT ' + @ColumnName + ') FROM ##SourceValues ' + @WhereClause
EXECUTE sp_executesql @Query, N'@Output INT OUT', @DistinctCount OUT
INSERT INTO @ProcessedRows(ColumnName, DistinctCount) VALUES (@ColumnName, @DistinctCount)
END
SELECT * FROM @ProcessedRows
Let's try a different approach.
Get all the values unpivoted as Param/Value pairs.
1) Collect the list of tables and columns to be used in the dynamic SQL:
DROP TABLE IF EXISTS #Base;
;WITH SchemaData AS (
SELECT t.name AS [TableName],c.name AS [ColumnName],c.column_id AS [ColumnOrderID]
FROM sys.tables t
INNER JOIN sys.columns c ON c.object_id = t.object_id
)
SELECT t.TableName
,STUFF((SELECT ',CONVERT(NVARCHAR(MAX),' + QUOTENAME([ColumnName]) + ') AS ' + QUOTENAME([ColumnName])
FROM SchemaData a WHERE (a.TableName = t.TableName) FOR XML PATH(''),TYPE).value('(./text())[1]','NVARCHAR(MAX)'),1,1,'') AS [SelectClause]
,STUFF((SELECT ',' + QUOTENAME([ColumnName]) FROM SchemaData a WHERE (a.TableName = t.TableName) FOR XML PATH(''),TYPE).value('(./text())[1]','NVARCHAR(MAX)'),1,1,'') AS [UnpivotClause]
INTO #Base
FROM SchemaData t
GROUP BY t.TableName
;
2) Get all data inside a temp table
DROP TABLE IF EXISTS #Result;
CREATE TABLE #Result(TableName NVARCHAR(255),ColumnName NVARCHAR(255),[Value] NVARCHAR(MAX));
DECLARE @TableName NVARCHAR(255),@SelectClause NVARCHAR(MAX),@UnpivotClause NVARCHAR(MAX);
DECLARE crPopulateResult CURSOR LOCAL FAST_FORWARD READ_ONLY FOR SELECT b.TableName,b.SelectClause,b.UnpivotClause FROM #Base b;
OPEN crPopulateResult;
FETCH NEXT FROM crPopulateResult INTO @TableName,@SelectClause,@UnpivotClause;
DECLARE @dSql NVARCHAR(MAX);
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @dSql = N' INSERT INTO #Result(TableName,[ColumnName],[Value])
SELECT up.TableName,up.Param AS [ColumnName],up.[Value]
FROM (
SELECT ''' + @TableName + N''' AS [TableName]
,' + @SelectClause + N'
FROM ' + QUOTENAME(@TableName) + N'
) a
UNPIVOT(Value FOR Param IN (' + @UnpivotClause + N')) up
';
EXEC sp_executesql @stmt = @dSql;
FETCH NEXT FROM crPopulateResult INTO @TableName,@SelectClause,@UnpivotClause;
END
CLOSE crPopulateResult;
DEALLOCATE crPopulateResult;
3) Any filters can be applied to #Result, including table names, column names, data filters, etc.:
SELECT r.TableName,r.ColumnName,COUNT(*) AS [CountValue],COUNT(DISTINCT r.[Value]) AS [DistinctCountValue]
FROM #Result r
--
--WHERE r.ColumnName = 'file_id' AND r.[Value] = '1'
--
GROUP BY r.TableName,r.ColumnName
ORDER BY r.TableName,r.ColumnName
;
To use a WHERE clause with this query, you just have to put it in the construction after the table name. So if you wanted to filter on file_id = '1', you would have:
FROM ' + quotename (Table_Schema) + '.' + quotename (Table_Name) + ' where file_id = ''1'' '
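With that in place, each branch of the generated UNION ALL would come out roughly as follows for a hypothetical column OrderDate:
select 'OrderDate' as Column_Name, count(distinct [OrderDate]) As DistinctCountValue, count([OrderDate]) as CountValue FROM [dbo].[table_name] where file_id = '1'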
You can add a @where variable and concatenate that with your big union construction (as part of your select ... from cols). For example:
declare @SQL nvarchar(max)
declare @where nvarchar(max) = ' where file_id = 1'
set @SQL = ''
;with cols as (
select Table_Schema, Table_Name, Column_Name, Row_Number() over(partition by Table_Schema, Table_Name
order by ORDINAL_POSITION) as RowNum
from INFORMATION_SCHEMA.COLUMNS
)
select @SQL = @SQL + case when RowNum = 1 then '' else ' union all ' end
+ ' select ''' + Column_Name + ''' as Column_Name, count(distinct ' + quotename (Column_Name) + ' ) As DistinctCountValue,
count( '+ quotename (Column_Name) + ') as CountValue FROM ' + quotename (Table_Schema) + '.' + quotename (Table_Name)
+ @where
from cols
where Table_Name = 'table_name' --print @SQL
execute (@SQL)
Note that you'll need to escape single quotes in @where if you're searching for a string. For example, declare @where nvarchar(max) = ' where state = ''CT'''.
I want some way to automate table creation, as every day the customer can add some columns and remove others. My idea is to pass the table name and columns into a table, then use that table in a stored procedure to automatically create the table.
This is the table that will hold the table structures:
create table nada_test
(
table_name varchar(500),
col_name varchar(100),
col_type varchar(100)
)
Sample data:
insert into nada_test
values ('data', 'salary', 'int'), ('data', 'id', 'int'),
('data', 'job', 'varchar(100)')
Could someone show me how to achieve this?
How about this:
CREATE TABLE T
(
TableName varchar(500),
ColName varchar(100),
ColType varchar(100)
);
INSERT INTO T VALUES
('data','salary','int'),
('data', 'id', 'int'),
('data', 'job', 'varchar(100)');
DECLARE @SQL NVARCHAR(MAX);
SELECT @SQL = N'CREATE TABLE Data ('+ STUFF((
SELECT ',' + ColName + ' ' + ColType
FROM T
FOR XML PATH('')
), 1, 1, '') + N' );'
FROM T;
SELECT @SQL [CreateTable];
--EXECUTE sp_executesql @SQL;
But that won't help you:
What will happen to the data that already exists in your table?
What if the table already exists? OK, you can get around that with IF OBJECT_ID() ..., but still, what will happen to the data already in your table?
You will face another problem even if you store the data in a temp table, because the structure of the two tables is not the same, nor are the datatypes of the columns.
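For the "table already exists" point, here is a minimal sketch of the IF OBJECT_ID() guard mentioned above (it only sidesteps the error; it does nothing about preserving existing data):
IF OBJECT_ID(N'dbo.Data', N'U') IS NULL
EXECUTE sp_executesql @SQL; -- create the table only when it does not exist yet
ELSE
PRINT 'dbo.Data already exists; the existing data would have to be migrated manually.';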
As has already been mentioned, your approach is very vulnerable to SQL injection.
See this example:
insert into #nada_test
values ('TestTable] (TestColumn int);SELECT * FROM sys.tables--', 'TestColumn', 'INT')
GO
DECLARE @TableName sysname, @ColumnName sysname, @Type VARCHAR(100), @SQL VARCHAR(2000)
WHILE EXISTS (SELECT TOP 1 1 FROM #nada_test)
BEGIN
SELECT TOP 1 @TableName = table_name, @ColumnName = [col_name], @Type = col_type FROM #nada_test
DELETE FROM #nada_test WHERE @TableName = table_name and @ColumnName = [col_name]
IF NOT EXISTS ( SELECT TOP 1 1 FROM sys.tables WHERE name = @TableName)
SET @SQL = 'CREATE TABLE [' + @TableName + '] ([' + @ColumnName + '] ' + @Type + ');'
ELSE IF NOT EXISTS ( SELECT TOP 1 1 FROM sys.columns WHERE name = @ColumnName AND object_id = OBJECT_ID(@TableName))
SET @SQL = 'ALTER TABLE [' + @TableName + '] ADD [' + @ColumnName + '] ' + @Type + ';'
ELSE
SET @SQL = 'PRINT ''TABLE [' + @TableName + '] with column [' + @ColumnName + '] already exists'';'
PRINT @SQL
EXEC (@SQL)
END
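One way to blunt that particular injection (not a complete defense) is to build the identifiers with QUOTENAME instead of concatenating raw brackets; @Type would still need validation, e.g. against sys.types:
SET @SQL = 'CREATE TABLE ' + QUOTENAME(@TableName) + ' (' + QUOTENAME(@ColumnName) + ' ' + @Type + ');' -- QUOTENAME escapes any ] inside the name, so the payload above stays a (strange) table name instead of executable SQL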
Generally we can use something like
create table x as select * from y;
using some existing table structure, say y in this case.
You can create a DDL trigger for your existing requirement, i.e. if there's any change to this table, then fire the same query above.
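Note that CREATE TABLE ... AS SELECT is Oracle/MySQL syntax; in SQL Server the closest equivalent is SELECT ... INTO. A sketch that copies only the structure of a hypothetical table y (column definitions only; constraints, defaults and indexes are not copied):
SELECT *
INTO x
FROM y
WHERE 1 = 0; -- the false predicate copies no rows, just the columns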
I want to delete the ABCDEF column from all tables of my database. I am trying this code:
declare @SQL nvarchar(max)
SELECT
@SQL = STUFF((SELECT ' DROP column ' + quotename(TABLE_SCHEMA) + '.' + quotename(table_NAME ) +'.ABCDEF;'
FROM
information_schema.columns
FOR XML PATH('')),1,2,'')
PRINT @SQL
EXECUTE (@SQL)
But I am getting an error
incorrect syntax near column
How to do this?
You can use an undocumented feature of SQL Server, sys.sp_msforeachtable. The script below will basically iterate over all the tables in the database and alter them if required.
select '[dbo].['+tab.name+']' name into #table from
sys.tables tab join sys.columns col on tab.object_id = col.object_id and col.name = 'ABCDEF'
exec sys.sp_msforeachtable 'if exists (select 1 from #table where name = ''?'')
alter table ? drop column [ABCDEF]'
That's not the right way to drop a column from a table. It should be
ALTER TABLE table_name DROP COLUMN column_name
Build your dynamic query something like this:
DECLARE @sql NVARCHAR(max)=''
SELECT @sql += 'Alter table ' + Quotename(table_catalog)
+ '.' + Quotename(table_schema) + '.'
+ Quotename(TABLE_NAME) + ' DROP column '
+ Quotename(column_name) + ';'
FROM information_schema.columns
WHERE COLUMN_NAME = 'abcd' -- here alone mention the column to be removed
EXEC Sp_executesql @sql
While dropping the columns from multiple tables, I faced the following default constraint error:
The object 'DF_TableName_ColumnName' is dependent on column 'ColumnName'.
To resolve this, I had to drop all those constraints first, using the following query:
DECLARE @sql NVARCHAR(max)=''
SELECT @sql += 'Alter table ' + Quotename(tbl.name) + ' DROP constraint ' + Quotename(cons.name) + ';'
FROM SYS.DEFAULT_CONSTRAINTS cons
JOIN SYS.COLUMNS col ON col.default_object_id = cons.object_id
JOIN SYS.TABLES tbl ON tbl.object_id = col.object_id
WHERE col.[name] IN ('Column1','Column2')
--PRINT @sql
EXEC Sp_executesql @sql
After that, I dropped all those columns by using the answer above.
DECLARE @sql NVARCHAR(max)=''
SELECT @sql += 'Alter table ' + Quotename(table_catalog)+ '.' + Quotename(table_schema) + '.'+ Quotename(TABLE_NAME)
+ ' DROP column ' + Quotename(column_name) + ';'
FROM information_schema.columns where COLUMN_NAME IN ('Column1','Column2')
--PRINT @sql
EXEC Sp_executesql @sql
I posted this here in case someone finds the same issue.
Happy coding!