I am trying to update every column of type NVARCHAR2 in my database to some random string. I iterated through all the NVARCHAR2 columns in the database and executed an update statement for each column.
begin
for i in (
select
table_name,
column_name
from
user_tab_columns
where
data_type = 'NVARCHAR2'
) loop
execute immediate
'update ' || i.table_name || ' set ' || i.column_name ||
' = DBMS_RANDOM.STRING(''X'', length(' || i.column_name || '))
where ' || i.column_name || ' is not null';
end loop;
end;
Instead of running an update statement for every NVARCHAR2 column, I want to update all the NVARCHAR2 columns of a particular table with a single update statement for efficiency (that is, one update statement per table). For this, I tried to bulk collect all the NVARCHAR2 columns of a table into temporary storage, but I am stuck at writing a dynamic update statement for it. Could you please help me with this? Thanks in advance!
You can try this one. However, depending on your tables, it may not be the fastest solution.
for aTable in (
select table_name,
listagg(column_name||' = nvl2('||column_name||', DBMS_RANDOM.STRING(''XX'', length('||column_name||')), NULL)', ', ') WITHIN GROUP (ORDER BY column_name) as upd,
listagg(column_name, ', ') WITHIN GROUP (ORDER BY column_name) as con
from user_tab_columns
where DATA_TYPE = 'NVARCHAR2'
group by table_name
) loop
execute immediate
'UPDATE '||aTable.table_name ||
' SET '||aTable.upd ||
' WHERE COALESCE('||aTable.con||') IS NOT NULL';
end loop;
Resulting UPDATE (verify with DBMS_OUTPUT.PUT_LINE(..)) should look like this:
UPDATE MY_TABLE SET
COL_A = nvl2(COL_A, DBMS_RANDOM.STRING('XX', length(COL_A)), NULL),
COL_B = nvl2(COL_B, DBMS_RANDOM.STRING('XX', length(COL_B)), NULL)
WHERE COALESCE(COL_A, COL_B) IS NOT NULL;
Please try this:
DECLARE
CURSOR CUR IS
SELECT
TABLE_NAME,
LISTAGG(COLUMN_NAME||' = DBMS_RANDOM.STRING(''X'', length(NVL('||
COLUMN_NAME ||',''A'')))',', ')
WITHIN GROUP (ORDER BY COLUMN_ID) COLUMN_NAME
FROM USER_TAB_COLUMNS
WHERE DATA_TYPE = 'NVARCHAR2'
GROUP BY TABLE_NAME;
TYPE TAB IS TABLE OF CUR%ROWTYPE INDEX BY PLS_INTEGER;
T TAB;
S VARCHAR2(4000);
BEGIN
OPEN CUR;
LOOP
FETCH CUR BULK COLLECT INTO T LIMIT 1000;
EXIT WHEN T.COUNT = 0;
FOR i IN 1..T.COUNT LOOP
S := 'UPDATE ' || T(i).TABLE_NAME || ' SET ' || T(i).COLUMN_NAME;
EXECUTE IMMEDIATE S;
END LOOP;
END LOOP;
COMMIT;
END;
/
I think that would do it. But as I said in the comments, you need to validate the syntax since I don't have an Oracle instance to test it.
for i in (
select table_name,
'update ' || table_name || ' set ' ||
listagg( column_name || ' = NVL2(' || column_name || ', '
|| 'DBMS_RANDOM.STRING(''X'', length(' || column_name || ')), NULL)'
, ', ') WITHIN GROUP (ORDER BY column_name) as updCommand
from user_tab_columns
where DATA_TYPE = 'NVARCHAR2'
group by table_name
) loop
execute immediate i.updCommand;
end loop;
If you find any error, let me know in the comments so I can fix it.
Related
I have a table import.hugo (import is the schema) and I need to change all columns' data type from text to numeric. The table already has data, so to alter a column (for example y) I use this:
ALTER TABLE import.hugo ALTER COLUMN y TYPE numeric USING (y::numeric);
But the table has many columns and I don't want to do it manually.
I found something here and tried it like this:
do $$
declare
t record;
begin
for t IN select column_name, table_name
from information_schema.columns
where table_name='hugo' AND data_type='text'
loop
execute 'alter table ' || t.table_name || ' alter column ' || t.column_name || ' type numeric';
end loop;
end$$;
When I run it, it doesn't work; it says: relation "hugo" does not exist
I tried many more variants of this code, but I can't make it work.
I also don't know how to implement this part: USING (y::numeric); (from the very first command) into this command.
Ideally, I need this in a function where the user can pass in the name of the table whose columns we are changing, so it can be called like this: SELECT function_name(table_name_to_alter_columns);
Thanks.
Selecting table_name from information_schema.columns isn't enough, since the import schema is not in your search path. You can add the import schema to your search_path by appending import to whatever shows up in SHOW search_path, or you can just add the table_schema column to your DO block:
do $$
declare
t record;
begin
for t IN select column_name, table_schema || '.' || table_name as full_name
from information_schema.columns
where table_name='hugo' AND data_type='text'
loop
execute 'alter table ' || t.full_name || ' alter column ' || t.column_name || ' type numeric USING (' || t.column_name || '::numeric)';
end loop;
end$$;
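Since you also want this wrapped in a function that the user can call with a table name, here is a minimal sketch of that wrapper, assuming the same text-to-numeric conversion as above (the function name is illustrative):
create or replace function convert_text_columns_to_numeric(p_table text)
returns void
language plpgsql
as $$
declare
    t record;
begin
    for t in select column_name, table_schema || '.' || table_name as full_name
             from information_schema.columns
             where table_name = p_table and data_type = 'text'
    loop
        -- the USING clause converts the existing data in place
        execute 'alter table ' || t.full_name || ' alter column ' || t.column_name
             || ' type numeric using (' || t.column_name || '::numeric)';
    end loop;
end;
$$;
-- called as requested:
SELECT convert_text_columns_to_numeric('hugo');
For production use, consider building the statement with format() and %I placeholders so identifiers are quoted safely.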
I want to create a generic query that will allow me to create a view (from a table) and convert all Array columns into strings.
Something like:
CREATE OR REPLACE VIEW view_1 AS
SELECT *
for each column_name in columns
CASE WHEN pg_typeof(column_name) == TEXT[] THEN array_to_string(column_name)
ELSE column_name
FROM table_1;
I guess I could do that with a stored procedure, but I'm looking for a solution in pure SQL, if it isn't too complex.
Here is a query to do such a conversion. You can then customize it to create the view and execute it; in psql, ending the query with \gexec instead of a semicolon runs each generated statement.
SELECT
'CREATE OR REPLACE VIEW my_table_view AS SELECT ' || string_agg(
CASE
WHEN pg_catalog.format_type(pg_attribute.atttypid, pg_attribute.atttypmod) LIKE '%[]' THEN 'array_to_string(' || pg_attribute.attname || ', '','') AS ' || pg_attribute.attname
ELSE pg_attribute.attname
END, ', ' ORDER BY attnum ASC)
|| ' FROM ' || min(pg_class.relname) || ';'
FROM
pg_catalog.pg_attribute
INNER JOIN
pg_catalog.pg_class ON pg_class.oid = pg_attribute.attrelid
INNER JOIN
pg_catalog.pg_namespace ON pg_namespace.oid = pg_class.relnamespace
WHERE
pg_attribute.attnum > 0
AND NOT pg_attribute.attisdropped
AND pg_namespace.nspname = 'my_schema'
AND pg_class.relname = 'my_table'
\gexec
Example:
create table tarr (id integer, t_arr1 text[], regtext text, t_arr2 text[], int_arr integer[]);
==>
CREATE OR REPLACE VIEW my_table_view AS SELECT id, array_to_string(t_arr1, ',') AS t_arr1, regtext, array_to_string(t_arr2, ',') AS t_arr2, array_to_string(int_arr, ',') AS int_arr FROM tarr;
A local variable's data type needs to match the data type of an existing table column.
In the past, I would look up the column's data type and manually match, like so:
-- schema follows...
CREATE TABLE [dbo].[TestTable]
(
[Id] BIGINT NOT NULL PRIMARY KEY,
[valueholder] NVARCHAR(MAX) NULL
)
...
-- manually set data type to match above
DECLARE @tempvalueholder AS NVARCHAR(MAX)
The trouble is, if the schema changes somewhere along the line, I'd have to manually look up and update.
Assuming the column and table names remain constant, is there some way to tie a local variable's data type to a column's data type?
I know how to get the data type in a way similar to this, but can't figure out how to hook it up to a variable declaration:
SELECT DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME='testtable'
AND COLUMN_NAME='valueholder'
You need to create "DECLARE @VariableName DATATYPE" dynamically. This means that the variable and the rest of your code should be in the scope of the dynamic SQL. If this fits your needs, you can try this:
DECLARE
@DataType1 VARCHAR(16)
, @DataType2 VARCHAR(16)
, @DynamicSQL NVARCHAR(MAX) = N'';
SELECT
@DataType1 = UPPER(DATA_TYPE)
, @DataType2 =
CASE
WHEN (DATA_TYPE IN ('char', 'nchar')) THEN CONCAT(UPPER(DATA_TYPE), '(', CHARACTER_MAXIMUM_LENGTH, ')')
WHEN (DATA_TYPE IN ('varchar', 'nvarchar')) THEN CASE WHEN (CHARACTER_MAXIMUM_LENGTH = -1) THEN CONCAT(UPPER(DATA_TYPE), '(MAX)') ELSE CONCAT(UPPER(DATA_TYPE), '(', CHARACTER_MAXIMUM_LENGTH, ')') END
WHEN (DATA_TYPE IN ('decimal', 'numeric')) THEN CONCAT(UPPER(DATA_TYPE), '(', NUMERIC_PRECISION, ', ', NUMERIC_SCALE, ')')
WHEN (DATA_TYPE = 'float') THEN CASE WHEN (NUMERIC_PRECISION = 53) THEN UPPER(DATA_TYPE) ELSE CONCAT(UPPER(DATA_TYPE), '(', NUMERIC_PRECISION, ')') END
WHEN (DATA_TYPE = 'real') THEN CASE WHEN (NUMERIC_PRECISION = 24) THEN UPPER(DATA_TYPE) ELSE CONCAT(UPPER(DATA_TYPE), '(', NUMERIC_PRECISION, ')') END
WHEN (DATA_TYPE = 'image') THEN NULL
WHEN (DATA_TYPE = 'time') THEN CASE WHEN (DATETIME_PRECISION = 7) THEN UPPER(DATA_TYPE) ELSE CONCAT(UPPER(DATA_TYPE), '(', DATETIME_PRECISION, ')') END
WHEN (DATA_TYPE = 'varbinary') THEN CASE WHEN (CHARACTER_MAXIMUM_LENGTH = -1) THEN CONCAT(UPPER(DATA_TYPE), '(MAX)') ELSE CONCAT(UPPER(DATA_TYPE), '(', CHARACTER_MAXIMUM_LENGTH, ')') END
ELSE UPPER(DATA_TYPE)
END
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_SCHEMA = 'SchemaName'
AND TABLE_NAME = 'TableName'
AND COLUMN_NAME = 'ColumnName';
IF(@DataType2 IS NULL)
BEGIN RAISERROR (N'Data type %s is invalid for local variables.', 16, 1, @DataType1); END
ELSE
BEGIN
SET @DynamicSQL += N'
DECLARE @VariableName ' + @DataType2 + ';'
SET @DynamicSQL += N'
SET @VariableName = 15;
SELECT @VariableName AS [@VariableName];
';
EXEC (@DynamicSQL);
END
Unlike Oracle, this is not supported in SQL Server.
The only thing you can do when the schema of a table changes is to find all procedures, triggers, and functions that access the table and check/correct the declarations.
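For contrast, this is what the Oracle side looks like: a %TYPE anchored declaration tracks the column's current type automatically (a minimal sketch reusing the question's table and column names):
-- Oracle only: the variable's type follows TestTable.valueholder
DECLARE
  v_valueholder TestTable.valueholder%TYPE;
BEGIN
  SELECT valueholder INTO v_valueholder FROM TestTable WHERE Id = 1;
END;
/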
Here is a small query that helps you find all procedures, functions and triggers where that table is used :
SELECT DISTINCT
o.name AS Object_Name,
o.type_desc,
m.*
FROM sys.sql_modules m
INNER JOIN sys.objects o ON m.object_id = o.object_id
WHERE m.definition Like '%MyTableName%';
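Note that a LIKE search over sys.sql_modules can return false positives (matches in comments, or on similarly named tables). On SQL Server 2008 and later, sys.dm_sql_referencing_entities gives a more precise dependency list; a minimal sketch against the question's table:
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.TestTable', 'OBJECT');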
I can suggest using a temp table which will have exactly the same structure as the originating table.
That way, even if the table columns' data types change, your stored procedure code will not be affected.
But it also adds coding overhead for developers.
Here is a sample:
create procedure TestTable10_readbyId (@Id bigint)
as
begin
DECLARE @tempvalueholder AS NVARCHAR(MAX)
select * into #t from TestTable10 where 1 = 0
insert into #t (Id) values (@Id)
update #t
set valueholder = TestTable10.valueholder
from #t
inner join TestTable10 on TestTable10.Id = #t.Id
and TestTable10.Id = @Id
select @tempvalueholder = valueholder
from TestTable10
where Id = @Id
select @tempvalueholder as valueholder
select valueholder from #t
end
You can see I have kept both methods: one uses a local variable declaration, the other creates a temp table, which effectively gives you a variable for each column in the select list.
Of course, the second solution needs a row with NULL values to work with.
I don't actually prefer this method, because you have to keep all of this in mind while coding. So I insert a dummy row that is NULL in every column except the PK field, which I expect to be useful in the following code blocks.
Instead of setting a variable's value and returning it, in this example I update the temp table column and then return that row's column.
I want to know the best way/approach to stream millions of records from a DB table (Oracle 11) to the UI.
I tried a simple
stmt.executeQuery() with setFetchSize on the statement, result set, etc., but it did not return any records when the table has millions of records.
I tried the Oracle rownum clause; it still returns nothing, with no exception.
There are procedures like the one below:
http://www.sqlines.com/postgresql-to-oracle/copy_export_csv_from_procedure
but it needs the file package...
My question: is there a good approach to stream that many records to the UI from the backend?
For example, what is the best way to store large data (and how much) in local storage in the browser? The storage has to be secure, as the data is sensitive.
Please help me with this.
NOTE: ORM/JPA cannot be used as the table generation is dynamic
Thanks!
Well, this is how I have solved it...
step 1 : an Oracle stored procedure to get the total row count, from which the caller derives the number of 'pages' for your desired number of records per page (see the caller sketch after the procedure)
create or replace
procedure sp_dynlkp_pages (pTable in varchar2, rowCnt out number)
as
begin
EXECUTE IMMEDIATE 'select count(*) from ' || pTable into rowCnt;
end;
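The procedure returns the raw row count, so the caller derives the page count from it; a minimal sketch (the page size of 50 and the table name are illustrative):
declare
v_rows number;
v_pages number;
begin
sp_dynlkp_pages('MY_TABLE', v_rows);
v_pages := ceil(v_rows / 50); -- pages = ceil(rows / page size)
dbms_output.put_line(v_pages || ' pages');
end;
/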
step 2 : display the page numbers in a drop-down and, when a page number is selected, fire a REST call with that page number as input
create or replace procedure sp_dynlkp_data_view (pTable in varchar2, pageNumber in number, pageSize in number, p_data_cursor out sys_refcursor)
as
v_sql varchar2(4000);
v_sql1 varchar2(4000);
begin
v_sql := '';
for r in (select column_name from user_tab_cols where table_name = pTable) loop
-- decrypt fn
--v_sql := v_sql || 'f_decrypt(' || r.column_name ||', ' || dKey || ')' || ',';
v_sql := v_sql || r.column_name || ',';
end loop;
v_sql := rtrim(v_sql, ',');-- || ' from ' || pTable; --|| ' where rownum < ' || r_cnt;
v_sql1 := 'SELECT ' || v_sql || ' FROM
(
SELECT a.*, rownum r__
FROM
( select * from '
|| pTable
|| ') a
WHERE rownum < ' || ((pageNumber * pageSize) + 1)
|| ')
WHERE r__ >= ' || (((pageNumber-1) * pageSize) + 1);
--DBMS_OUTPUT.PUT_LINE(v_sql1);
open p_data_cursor for v_sql1;
end;
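To sanity-check the procedure from SQL*Plus or SQLcl before wiring up the REST endpoint, something like this should work (the bind variable name and arguments are illustrative):
variable rc refcursor
exec sp_dynlkp_data_view('MY_TABLE', 2, 50, :rc)
print rc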
NOTE: On the UI front I am using ng-grid (because of the project spec); you can use any feature-rich UI grid component.
I have a requirement I need your help with.
Number of rows in a table : 130
That is the only data I have. Based on this, is it possible to find the names of the tables in an Oracle database that contain 130 rows?
Thanks
Sam
SELECT TABLE_NAME FROM dba_tables WHERE num_rows = 130
-- num_rows = 130 can be replaced with any requirement you have
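Keep in mind that num_rows comes from optimizer statistics, so it is only as accurate as the last statistics gathering. You can refresh the statistics first (the schema name is a placeholder):
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('YOUR_SCHEMA');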
You can try with some dynamic SQL:
declare
n number;
begin
for t in (
select owner || '.' || table_name as tab
from dba_tables
where owner = 'YOUR_SCHEMA' /* IF YOU KNOW THE SCHEMA */
)
loop
execute immediate 'select count(1) from ' || t.tab into n;
if n = 130 then
dbms_output.put_line('Table ' || t.tab );
end if;
end loop;
end;
Please consider that, depending on the number of tables/records in your DB, this can take a very long time to run.
I hope these queries may help you:
Query 1 : SELECT CONCAT('SELECT COUNT(*) as cnt FROM ', table_name, ' union all') FROM information_schema.tables WHERE table_schema = 'aes';
Query 2: select max(cnt) from ( paste the results obtained from the above query and remove the last union all ) as tmptable;
Reference:
Find Table with maximum number of rows in a database in mysql