Altering data types of all columns in a table (PostgreSQL 13) - database

I have a table import.hugo (import is the schema) and I need to change the data type of all its columns from text to numeric. The table already has data, so to alter a column (for example) y I use this:
ALTER TABLE import.hugo ALTER COLUMN y TYPE numeric USING (y::numeric);
But the table has many columns and I don't want to do it manually.
I found something here and tried it like this:
do $$
declare
t record;
begin
for t IN select column_name, table_name
from information_schema.columns
where table_name='hugo' AND data_type='text'
loop
execute 'alter table ' || t.table_name || ' alter column ' || t.column_name || ' type numeric';
end loop;
end$$;
When I run it, it fails with: relation "hugo" does not exist
I tried many more variants of this code, but I can't make it work.
I also don't know how to incorporate the USING (y::numeric) clause (from the very first command) into this command.
Ideally, I need this in a function, where the user can pass the name of the table whose columns we are changing, called like this: SELECT function_name(table_name_to_alter_columns);
Thanks.

Selecting table_name from information_schema.columns isn't enough, because the import schema is not on your search_path. Either add import to whatever SHOW search_path reports, or simply include the table_schema column in your DO block:
do $$
declare
t record;
begin
for t IN select column_name, table_schema || '.' || table_name as full_name
from information_schema.columns
where table_name='hugo' AND data_type='text'
loop
execute 'alter table ' || t.full_name || ' alter column ' || t.column_name || ' type numeric USING (' || t.column_name || '::numeric)';
end loop;
end$$;
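Before wrapping this in a callable function, it can help to sanity-check the statement-building logic outside the database. Here is a minimal Python sketch, with hypothetical table and column names, of what the loop generates per text column:

```python
# Sketch: build the ALTER TABLE statements the DO block would execute.
# Table and column names here are hypothetical examples.

def build_alter_statements(schema, table, text_columns):
    """Return one ALTER TABLE ... USING statement per text column."""
    statements = []
    for col in text_columns:
        statements.append(
            f'ALTER TABLE {schema}.{table} '
            f'ALTER COLUMN {col} TYPE numeric USING ({col}::numeric)'
        )
    return statements

stmts = build_alter_statements("import", "hugo", ["x", "y"])
for s in stmts:
    print(s)
```

The real function would do exactly this concatenation in PL/pgSQL and EXECUTE each string, as the DO block above already does.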

Related

Make variable's data type match column's data type

A local variable's data type needs to match the data type of an existing table column.
In the past, I would look up the column's data type and manually match, like so:
-- schema follows...
CREATE TABLE [dbo].[TestTable]
(
[Id] BIGINT NOT NULL PRIMARY KEY,
[valueholder] NVARCHAR(MAX) NULL
)
...
-- manually set data type to match above
DECLARE @tempvalueholder AS NVARCHAR(MAX)
The trouble is, if the schema changes somewhere along the line, I'd have to manually look up and update.
Assuming the column and table names remain constant, is there some way to tie a local variable's data type to a column's data type?
I know how to get the data type in a way similar to this, but can't figure out how to hook up to a variable declaration:
SELECT DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME='testtable'
AND COLUMN_NAME='valueholder'
You need to create "DECLARE @VariableName DATATYPE" dynamically. This means that the variable and the rest of your code must live inside the scope of the dynamic SQL. If that fits your needs, you can try this:
DECLARE
@DataType1 VARCHAR(16)
, @DataType2 VARCHAR(16)
, @DynamicSQL NVARCHAR(MAX) = N'';
SELECT
@DataType1 = UPPER(DATA_TYPE)
, @DataType2 =
CASE
WHEN (DATA_TYPE IN ('char', 'nchar')) THEN CONCAT(UPPER(DATA_TYPE), '(', CHARACTER_MAXIMUM_LENGTH, ')')
WHEN (DATA_TYPE IN ('varchar', 'nvarchar')) THEN CASE WHEN (CHARACTER_MAXIMUM_LENGTH = -1) THEN CONCAT(UPPER(DATA_TYPE), '(MAX)') ELSE CONCAT(UPPER(DATA_TYPE), '(', CHARACTER_MAXIMUM_LENGTH, ')') END
WHEN (DATA_TYPE IN ('decimal', 'numeric')) THEN CONCAT(UPPER(DATA_TYPE), '(', NUMERIC_PRECISION, ', ', NUMERIC_SCALE, ')')
WHEN (DATA_TYPE = 'float') THEN CASE WHEN (NUMERIC_PRECISION = 53) THEN UPPER(DATA_TYPE) ELSE CONCAT(UPPER(DATA_TYPE), '(', NUMERIC_PRECISION, ')') END
WHEN (DATA_TYPE = 'real') THEN CASE WHEN (NUMERIC_PRECISION = 24) THEN UPPER(DATA_TYPE) ELSE CONCAT(UPPER(DATA_TYPE), '(', NUMERIC_PRECISION, ')') END
WHEN (DATA_TYPE = 'image') THEN NULL
WHEN (DATA_TYPE = 'time') THEN CASE WHEN (DATETIME_PRECISION = 7) THEN UPPER(DATA_TYPE) ELSE CONCAT(UPPER(DATA_TYPE), '(', DATETIME_PRECISION, ')') END
WHEN (DATA_TYPE = 'varbinary') THEN CASE WHEN (CHARACTER_MAXIMUM_LENGTH = -1) THEN CONCAT(UPPER(DATA_TYPE), '(MAX)') ELSE CONCAT(UPPER(DATA_TYPE), '(', CHARACTER_MAXIMUM_LENGTH, ')') END
ELSE UPPER(DATA_TYPE)
END
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_SCHEMA = 'SchemaName'
AND TABLE_NAME = 'TableName'
AND COLUMN_NAME = 'ColumnName';
IF (@DataType2 IS NULL)
BEGIN RAISERROR (N'Data type %s is invalid for local variables.', 16, 1, @DataType1); END
ELSE
BEGIN
SET @DynamicSQL += N'
DECLARE @VariableName ' + @DataType2 + ';'
SET @DynamicSQL += N'
SET @VariableName = 15;
SELECT @VariableName AS [@VariableName];
';
EXEC (@DynamicSQL);
END
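The CASE expression above is just a mapping from an INFORMATION_SCHEMA.COLUMNS row to the type string used in the DECLARE. As a rough illustration (not T-SQL, and covering only the branches shown above), here is the same mapping in Python:

```python
# Sketch of the CASE expression as a plain function: map column metadata
# from INFORMATION_SCHEMA.COLUMNS to the type string used in a DECLARE.
# Only the branches shown in the T-SQL above are covered.

def declared_type(data_type, char_max_len=None,
                  num_precision=None, num_scale=None, dt_precision=None):
    t = data_type.upper()
    if data_type in ('char', 'nchar'):
        return f"{t}({char_max_len})"
    if data_type in ('varchar', 'nvarchar', 'varbinary'):
        return f"{t}(MAX)" if char_max_len == -1 else f"{t}({char_max_len})"
    if data_type in ('decimal', 'numeric'):
        return f"{t}({num_precision}, {num_scale})"
    if data_type == 'float':
        return t if num_precision == 53 else f"{t}({num_precision})"
    if data_type == 'real':
        return t if num_precision == 24 else f"{t}({num_precision})"
    if data_type == 'time':
        return t if dt_precision == 7 else f"{t}({dt_precision})"
    if data_type == 'image':
        return None  # image is not valid for local variables
    return t

print(declared_type('nvarchar', char_max_len=-1))  # NVARCHAR(MAX)
```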
Unlike Oracle, this is not supported in SQL Server.
The only thing you can do when the schema of a table changes is find all procedures, triggers, and functions that access the table and check/correct their declarations.
Here is a small query that helps you find all procedures, functions, and triggers where that table is used:
SELECT DISTINCT
o.name AS Object_Name,
o.type_desc,
m.*
FROM sys.sql_modules m
INNER JOIN sys.objects o ON m.object_id = o.object_id
WHERE m.definition Like '%MyTableName%';
I can suggest using a temp table that has exactly the same structure as the originating table.
That way, even if the table columns' data types change, your stored procedure code will not be affected.
But it also adds coding overhead for developers.
Here is a sample:
create procedure TestTable10_readbyId (@Id bigint)
as
begin
DECLARE @tempvalueholder AS NVARCHAR(MAX)
select * into #t from TestTable10 where 1 = 0
insert into #t (Id) values (@Id)
update #t
set valueholder = TestTable10.valueholder
from #t
inner join TestTable10 on TestTable10.Id = #t.Id
and TestTable10.Id = @Id
select @tempvalueholder = valueholder
from TestTable10
where Id = @Id
select @tempvalueholder as valueholder
select valueholder from #t
end
You can see I have kept both methods.
One uses a data variable declaration.
The other creates a temp table, which logically gives you a variable for each column in the select list.
Of course, the second solution needs a row with NULL values.
I actually don't prefer this method, because you have to keep all of this in mind while coding. So I inserted a dummy row with NULL values and only the PK field set, which I believe will be useful in the following code blocks.
Instead of returning a variable by setting its value first, in this example I update the temp table column and then return that row's column.

Oracle: update multiple columns with dynamic query

I am trying to update all the columns of type NVARCHAR2 to some random string in my database. I iterated through all the columns in the database of type nvarchar2 and executed an update statement for each column.
for i in (
select
table_name,
column_name
from
user_tab_columns
where
data_type = 'NVARCHAR2'
) loop
execute immediate
'update ' || i.table_name || ' set ' || i.column_name ||
' = DBMS_RANDOM.STRING(''X'', length(' || i.column_name || '))
where ' || i.column_name || ' is not null';
end loop;
Instead of running an UPDATE statement for every NVARCHAR2 column, I want to update all the NVARCHAR2 columns of a particular table with a single UPDATE statement for efficiency (that is, one UPDATE statement per table). For this, I tried to bulk collect all the NVARCHAR2 columns of a table into temporary storage, but I am stuck writing the dynamic UPDATE statement. Could you please help me with this? Thanks in advance!
You can try this one. However, depending on your table, it may not be the fastest solution.
for aTable in (
select table_name,
listagg(column_name||' = nvl2('||column_name||', DBMS_RANDOM.STRING(''XX'', length('||column_name||')), NULL)', ', ') WITHIN GROUP (ORDER BY column_name) as upd,
listagg(column_name, ', ') WITHIN GROUP (ORDER BY column_name) as con
from user_tab_columns
where DATA_TYPE = 'NVARCHAR2'
group by table_name
) loop
execute immediate
'UPDATE '||aTable.table_name ||
' SET '||aTable.upd ||
' WHERE COALESCE('||aTable.con||') IS NOT NULL';
end loop;
Resulting UPDATE (verify with DBMS_OUTPUT.PUT_LINE(..)) should look like this:
UPDATE MY_TABLE SET
COL_A = nvl2(COL_A, DBMS_RANDOM.STRING('XX', length(COL_A)), NULL),
COL_B = nvl2(COL_B, DBMS_RANDOM.STRING('XX', length(COL_B)), NULL)
WHERE COALESCE(COL_A, COL_B) IS NOT NULL;
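To double-check the shape of that generated statement without an Oracle instance, the same string can be assembled in a quick Python sketch (table and column names are hypothetical):

```python
# Sketch: reproduce the per-table UPDATE that the LISTAGG-based loop
# generates, so its shape can be inspected without running it in Oracle.
# Table and column names are hypothetical examples.

def build_update(table, columns):
    cols = sorted(columns)  # mirrors WITHIN GROUP (ORDER BY column_name)
    sets = ",\n".join(
        f"{c} = nvl2({c}, DBMS_RANDOM.STRING('XX', length({c})), NULL)"
        for c in cols
    )
    where = ", ".join(cols)
    return f"UPDATE {table} SET\n{sets}\nWHERE COALESCE({where}) IS NOT NULL"

print(build_update("MY_TABLE", ["COL_A", "COL_B"]))
```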
Please try this:
DECLARE
CURSOR CUR IS
SELECT
TABLE_NAME,
LISTAGG(COLUMN_NAME||' = DBMS_RANDOM.STRING(''X'', length(NVL('||
COLUMN_NAME ||',''A'')))', ', ')
WITHIN GROUP (ORDER BY COLUMN_ID) COLUMN_NAME
FROM DBA_TAB_COLUMNS
WHERE DATA_TYPE = 'NVARCHAR2'
GROUP BY TABLE_NAME;
TYPE TAB IS TABLE OF CUR%ROWTYPE INDEX BY PLS_INTEGER;
T TAB;
S VARCHAR2(4000);
BEGIN
OPEN CUR;
LOOP
FETCH CUR BULK COLLECT INTO T LIMIT 1000;
EXIT WHEN T.COUNT = 0;
FOR i IN 1..T.COUNT LOOP
S := 'UPDATE ' || T(i).TABLE_NAME || ' SET ' || T(i).COLUMN_NAME;
EXECUTE IMMEDIATE S;
END LOOP;
END LOOP;
COMMIT;
END;
/
I think that would do it. But as I said in the comments, you need to validate the syntax since I don't have an Oracle instance to test it.
for i in (
select table_name,
'update ' || table_name || ' set ' ||
listagg( column_name || ' = NVL2( ' || column_name || ', '
|| 'DBMS_RANDOM.STRING(''X'', length(' || column_name || ')), NULL )',
', '
) WITHIN GROUP (ORDER BY column_name) as updCommand
from user_tab_columns
where DATA_TYPE = 'NVARCHAR2'
group by table_name
) loop
execute immediate i.updCommand;
end loop;
If you find any error, let me know in the comments so I can fix it.

Apply replace to all columns of SQL server query

I know how to replace something in one field:
Select Replace(Clmn1, char(9), ' ') as Clmn1
From TableA
Now I would like to apply replace statement to all columns in the table. Is there an easy way to do it?
Sorry, there's no shortcut here (short of dynamic SQL, but you don't want to do that!)
Select Replace(Clmn1, char(9), ' ') as Clmn1,
Replace(Clmn2, char(9), ' ') as Clmn2,
Replace(Clmn3, char(9), ' ') as Clmn3
From TableA
If this is something you do often, you could write a user defined function, which would slightly shorten your statement
Select dbo.CleanChar9(Clmn1) as Clmn1,
dbo.CleanChar9(Clmn2) as Clmn2,
dbo.CleanChar9(Clmn3) as Clmn3
From TableA
with
CREATE FUNCTION dbo.CleanChar9( @val NVARCHAR(MAX) ) -- use the appropriate type
RETURNS NVARCHAR(MAX)
AS
BEGIN
RETURN REPLACE(@val, CHAR(9), ' ');
END
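Either way the column list still has to be written out; when there are many columns, generating the SELECT is the usual workaround. Here is a small Python sketch (hypothetical column names) of that generation step:

```python
# Sketch: generate the SELECT that applies the same REPLACE to every
# column, which is the only real shortcut short of dynamic SQL.
# Table and column names are hypothetical examples.

def build_select(table, columns):
    exprs = ",\n       ".join(
        f"Replace({c}, char(9), ' ') as {c}" for c in columns
    )
    return f"Select {exprs}\nFrom {table}"

print(build_select("TableA", ["Clmn1", "Clmn2", "Clmn3"]))
```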

Execute Immediate in cursor on ibm db2

I'm having difficulties creating an SP in which I pass in the name of a table and query the QSYS2 library to find out if it has an auto-increment field. If it does, I query for the max value of that field in the table and then alter the table so the next used value is that result plus 1. This is for use when migrating production data over to development.
I'm not sure if it is possible to use "Execute Immediate" as part of a cursor declaration. I'm still fairly new to db2 in general, never mind for IBM. So any assistance would be greatly appreciated. If "Execute Immediate" is not allowed in a cursor declaration, how would I go about doing this?
I'm getting the error on the Cursor declaration (line 10), but here is the exact error code I'm getting:
SQL State: 42601
Vendor Code: -199
Message: [SQL0199] Keyword IMMEDIATE not expected. Valid tokens: <END-OF-STATEMENT>. Cause . . . . . : The keyword IMMEDIATE was not expected here. A syntax error was detected at keyword IMMEDIATE. The partial list of valid tokens is <END-OF-STATEMENT>. This list assumes that the statement is correct up to the unexpected keyword. The error may be earlier in the statement but the syntax of the statement seems to be valid up to this point. Recovery . . . : Examine the SQL statement in the area of the specified keyword. A colon or SQL delimiter may be missing. SQL requires reserved words to be delimited when they are used as a name. Correct the SQL statement and try the request again.
And then finally here is my SP
/* Creating procedure DLLIB.SETNXTINC# */
CREATE OR REPLACE PROCEDURE DLLIB.SETNXTINC#(IN TABLE CHARACTER (10) ) LANGUAGE SQL CONTAINS SQL PROGRAM TYPE SUB CONCURRENT ACCESS RESOLUTION DEFAULT DYNAMIC RESULT SETS 0 OLD SAVEPOINT LEVEL COMMIT ON RETURN NO
SET #STMT1 = 'SELECT COLUMN_NAME ' ||
'FROM QSYS2.SYSCOLUMNS ' ||
'WHERE TABLE_SCHEMA =''DLLIB'' and table_name = ''' || TRIM(TABLE) || '''' ||
'AND HAS_DEFAULT = ''I'' ' ||
'OR HAS_DEFAULT = ''J'';';
DECLARE cursor1 CURSOR FOR
EXECUTE IMMEDIATE #STMT1;
OPEN cursor1;
WHILE (sqlcode == 0){
FETCH cursor1 INTO field;
SET #STMT2 = 'ALTER TABLE DLLIB.' || TRIM(TABLE) || ''' ' ||
'ALTER COLUMN ' || TRIM(field) || ' RESTART WITH ( ' ||
'SELECT MAX(' || TRIM(field) || ') ' ||
'FROM DLLIB.' || TRIM(TABLE) || ');';
EXECUTE IMMEDIATE #STMT2;
};;
/* Setting label text for DLLIB.SETNXTINC# */
LABEL ON ROUTINE DLLIB.SETNXTINC# ( CHAR() ) IS 'Set the next auto-increment';
/* Setting comment text for DLLIB.SETNXTINC# */
COMMENT ON PARAMETER ROUTINE DLLIB.SETNXTINC# ( CHAR() ) (TABLE IS 'Table from DLLIB' ) ;
First off, you don't need to dynamically prepare your first statement.
Secondly, you can't use a SELECT in the RESTART WITH, you'll have to use 2 statements
Thirdly, if you use VARCHAR instead of CHAR, you don't need to use any TRIM()s
Lastly, using TABLE as a parameter name is bad practice as it is a reserved word.
You want something like so
CREATE OR REPLACE PROCEDURE QGPL.SETNXTINC#(IN MYTABLE VARCHAR (128) )
LANGUAGE SQL
MODIFIES SQL DATA
PROGRAM TYPE SUB
CONCURRENT ACCESS RESOLUTION DEFAULT
DYNAMIC RESULT SETS 0
OLD SAVEPOINT LEVEL
COMMIT ON RETURN NO
BEGIN
declare mycolumn varchar(128);
declare stmt2 varchar(1000);
declare stmt3 varchar(1000);
declare mymaxvalue integer;
-- Table known at runtime, a static statement is all we need
SELECT COLUMN_NAME INTO mycolumn
FROM QSYS2.SYSCOLUMNS
WHERE TABLE_SCHEMA = 'DLLIB'
AND TABLE_NAME = mytable
AND HAS_DEFAULT = 'I'
OR HAS_DEFAULT = 'J';
-- Need to use a dynamic statement here
-- as the affected table is not known till runtime
-- need VALUES INTO as SELECT INTO can not be used dynamically
SET STMT2 = 'VALUES (SELECT MAX(' || mycolumn || ') ' ||
'FROM DLLIB.' || mytable || ') INTO ?';
PREPARE S2 from stmt2;
EXECUTE S2 using mymaxvalue;
-- we want to restart with a value 1 more than the current max
SET mymaxvalue = mymaxvalue + 1;
-- Need to use a dynamic statement here
-- as the affected table is not known till runtime
SET STMT3 = 'ALTER TABLE DLLIB.' || mytable || ' ALTER COLUMN '
|| mycolumn || ' RESTART WITH ' || char(mymaxvalue);
EXECUTE IMMEDIATE STMT3;
END;
One more thing to consider: you might want to LOCK the table in exclusive mode prior to running STMT2; otherwise a record with a higher value could be added between the execution of STMT2 and STMT3.
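The core of the two-step pattern, reading the current maximum and then restarting the identity at max + 1, can be sketched as plain string-building (the names here are hypothetical, and the database calls are left out):

```python
# Sketch of the two-step pattern in the procedure above: given the current
# MAX of the identity column (read by the first statement), build the
# ALTER ... RESTART WITH for max + 1. Schema/table/column names are
# hypothetical; the actual DB calls are omitted.

def build_restart_statement(schema, table, column, current_max):
    restart_at = current_max + 1  # next value to hand out
    return (f"ALTER TABLE {schema}.{table} "
            f"ALTER COLUMN {column} RESTART WITH {restart_at}")

print(build_restart_statement("DLLIB", "ORDERS", "ORDER_ID", 4711))
# ALTER TABLE DLLIB.ORDERS ALTER COLUMN ORDER_ID RESTART WITH 4712
```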

Cast Varchar at smallmoney

I have about 100 columns in my table, 50 of which need to be changed to (smallmoney, null) format. Currently they are all (varchar(3), null).
How do I do that with the columns I want? Is there a quick way? Let's pretend I have 5 columns:
col1 (varchar(3), null)
col2 (varchar(3), null)
col3 (varchar(3), null)
col4 (varchar(3), null)
col5 (varchar(3), null)
how do I make them look like this:
col1 (smallmoney, null)
col2 (smallmoney, null)
col3 (smallmoney, null)
col4 (varchar(3), null)
col5 (varchar(3), null)
You can programmatically create the ALTER script, and then execute it. I just chopped this out, you'll need to validate the syntax:
SELECT
'ALTER TABLE "' + TABLE_NAME + '" ALTER COLUMN "' + COLUMN_NAME + '" SMALLMONEY'
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = 'MyTable'
AND COLUMN_NAME LIKE 'Pattern%'
Give this a shot, but make a backup of the table first... no idea how the automatic conversion of that data will go.
alter table badlyDesignedTable alter column col1 smallmoney
alter table badlyDesignedTable alter column col2 smallmoney
alter table badlyDesignedTable alter column col3 smallmoney
edit: changed syntax; ALTER COLUMN only takes one column at a time, so each column needs its own statement
You can query the system tables or ANSI views for the columns in question and generate the ALTER table statements. This
select SQL = 'alter table'
+ ' ' + TABLE_SCHEMA
+ '.'
+ TABLE_NAME
+ ' ' + 'alter column'
+ ' ' + COLUMN_NAME
+ ' ' + 'smallmoney'
from INFORMATION_SCHEMA.COLUMNS
where TABLE_SCHEMA = 'dbo'
and TABLE_NAME = 'MyView'
order by ORDINAL_POSITION
will generate an ALTER TABLE statement for every column in the table. You'll need to either filter them in the WHERE clause or paste the results into a text editor and remove the ones you don't want.
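That filtering step can also be done outside the database. Here is a small Python sketch (hypothetical table and column names) that generates statements only for the wanted columns:

```python
# Sketch: generate ALTER statements only for a chosen subset of columns,
# mirroring the INFORMATION_SCHEMA-driven approach above.
# Table and column names are hypothetical examples.

def build_alters(schema, table, all_columns, wanted):
    return [
        f"alter table {schema}.{table} alter column {c} smallmoney"
        for c in all_columns if c in wanted
    ]

stmts = build_alters("dbo", "MyTable",
                     ["col1", "col2", "col3", "col4", "col5"],
                     {"col1", "col2", "col3"})
for s in stmts:
    print(s)
```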
Read up on ALTER TABLE ALTER COLUMN, though... modifying a column with ALTER COLUMN comes with constraints and limitations, especially if the column is indexed. The ALTER TABLE will fail if any value can't be converted to the target data type. For a varchar-to-smallmoney conversion, that means it will fail if any row contains anything that doesn't look like a numeric literal of the appropriate type; if a value won't convert with CONVERT(smallmoney, ...), the ALTER TABLE will fail. If the column contains an empty string ('') or whitespace (' '), the conversion will most likely succeed (in the case of a smallmoney target, I suspect you'll get 0.0000 as a result).
Bear in mind that multiple values may wind up converted to the same value in the target datatype. This can hose indexes.
If you're trying to convert from a nullable column to a non-nullable column, you'll need to first ensure that every row has a non-null value first. Otherwise the conversion will fail.
Good luck.
