I'm currently getting the following Oracle error when calling a procedure:
ORA-54033: column to be modified is used in a virtual column
expression
How can I track down the root cause of this error?
To find the table and column names of all virtual columns in your database you can run the following query:
SELECT c.OWNER, c.TABLE_NAME, c.COLUMN_NAME
FROM DBA_TAB_COLS c
WHERE c.VIRTUAL_COLUMN = 'YES' AND
c.OWNER NOT IN ('SYS', 'XDB')
ORDER BY c.OWNER, c.TABLE_NAME, c.COLUMN_NAME;
And you can use the following script to dump the DDL of all tables in your database which contain virtual columns to DBMS_OUTPUT:
DECLARE
  lobDDL CLOB;

  PROCEDURE dump_clob(aCLOB IN CLOB) IS
    nCLOB_length    NUMBER;
    nCLOB_offset    NUMBER := 1;
    nMax_chunk_size NUMBER := 32767;
    strChunk        VARCHAR2(32767);
  BEGIN
    nCLOB_length := DBMS_LOB.GETLENGTH(aCLOB);
    WHILE nCLOB_offset <= nCLOB_length LOOP
      strChunk := DBMS_LOB.SUBSTR(aCLOB, nMax_chunk_size, nCLOB_offset);
      DBMS_OUTPUT.PUT(strChunk);
      nCLOB_offset := nCLOB_offset + LENGTH(strChunk);
    END LOOP;
    DBMS_OUTPUT.PUT_LINE(';');
  END dump_clob;
BEGIN
  FOR aRow IN (SELECT DISTINCT c.OWNER, c.TABLE_NAME
               FROM DBA_TAB_COLS c
               WHERE c.VIRTUAL_COLUMN = 'YES'
                 AND c.OWNER NOT IN ('SYS', 'XDB')
               ORDER BY c.OWNER, c.TABLE_NAME)
  LOOP
    lobDDL := DBMS_METADATA.GET_DDL(object_type => 'TABLE',
                                    name        => aRow.TABLE_NAME,
                                    schema      => aRow.OWNER);
    dump_clob(lobDDL);
  END LOOP;
END;
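If you only need the DDL for a single table rather than a whole-schema dump, a one-off DBMS_METADATA call is enough (a sketch with placeholder names):
SELECT DBMS_METADATA.GET_DDL('TABLE', 'YOUR_TABLE_NAME', 'YOUR_SCHEMA') FROM DUAL;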
I got the same error and fixed it as follows.
First, get the hidden columns along with the columns they depend on:
SELECT COLUMN_NAME, DATA_DEFAULT, HIDDEN_COLUMN
FROM USER_TAB_COLS
WHERE TABLE_NAME = 'YOUR_TABLE_NAME';
You should see something like this:
SYS_STUMF_$2WEF286CDZ1WPC4V_F5 |SYS_OP_COMBINED_HASH("ID","FIRST_NAME","ANOTHER_COLUMN_NAME") | YES
Drop this hidden column, passing the column names used in its expression:
exec dbms_stats.drop_extended_stats(user, 'YOUR_TABLE_NAME', '("ID","FIRST_NAME","ANOTHER_COLUMN_NAME")');
Now run your procedure or alter your columns.
alter table YOUR_TABLE_NAME modify (ID VARCHAR2(10));
Create the hidden column again:
exec dbms_stats.create_extended_stats(user, 'YOUR_TABLE_NAME', '("ID","FIRST_NAME","ANOTHER_COLUMN_NAME")');
I am just sharing my experience with this issue here. My table has the virtual column below. There are other columns as well; I am only including the impacted columns here.
Create Table Virtual_test
(ColumnA varchar2(100),
CLEAN_ColumnA VARCHAR2(4000 BYTE) GENERATED ALWAYS
AS(GET_CLEAN_ColumnA(ColumnA)) VIRTUAL
);
Now when I want to modify the size of ColumnA:
Alter Table Virtual_test
MODIFY ColumnA varchar2(255);
I got:
ORA-54033: column to be modified is used in a virtual column expression
So I dropped the extended statistics first:
exec dbms_stats.drop_extended_stats(USER, 'Virtual_test', '(GET_CLEAN_ColumnA(ColumnA))');
and after that the ALTER went through:
Alter Table Virtual_test
MODIFY ColumnA varchar2(255);
One major mistake on my part: this virtual column was not hidden and not system generated in my table. The dbms_stats.drop_extended_stats step deleted my existing column, and dbms_stats.create_extended_stats would only create an invisible column with a system-generated name. So I think this approach won't work with a non-hidden column, and you may instead need to follow the steps below and drop the column explicitly:
Alter Table Virtual_test drop column CLEAN_ColumnA;
Alter Table Virtual_test
MODIFY ColumnA varchar2(255);
Alter Table Virtual_test add
CLEAN_ColumnA VARCHAR2(4000 BYTE) GENERATED ALWAYS
AS(GET_CLEAN_ColumnA(ColumnA)) VIRTUAL;
The virtual column names starting with 'SYS_ST%' do indeed belong to extended statistics, and since Oracle 12c the system tries to recognize and create them automatically.
Just google for "Oracle 12c Automatic Column Group Detection".
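If you want to check which of those SYS_ST% virtual columns in your own schema come from extended statistics, and whether Oracle created them automatically, a query against USER_STAT_EXTENSIONS can help (a sketch; CREATOR shows SYSTEM for automatically created column groups):
SELECT table_name, extension_name, extension, creator, droppable
FROM USER_STAT_EXTENSIONS
ORDER BY table_name, extension_name;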
Related
I have a computed column created with the following line:
alter table tbPedidos
add restricoes as (cast(case when restricaoLicenca = 1 or restricaoLote = 1 then 1 else 0 end as bit))
But now I need to change this column to something like:
alter table tbPedidos
alter column restricoes as (cast(case when restricaoLicenca = 1 or restricaoLote = 1 or restricaoValor = 1 then 1 else 0 end as bit))
I'm trying to add another condition to the CASE statement, but it's not working.
Thanks a lot!
Something like this:
ALTER TABLE dbo.MyTable
DROP COLUMN OldComputedColumn
ALTER TABLE dbo.MyTable
ADD OldComputedColumn AS OtherColumn + 10
Source
If you're trying to change an existing column, you can't use ADD. Instead, try this:
alter table tbPedidos
alter column restricoes as
(cast(case when restricaoLicenca = 1 or restricaoLote = 1 or restricaoValor = 1
then 1 else 0 end as bit))
EDIT: The above is incorrect. When altering a computed column the only thing you can do is drop it and re-add it.
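Applied to the table from the question, the drop-and-re-add would look roughly like this (a sketch reusing the asker's column names):
alter table tbPedidos
drop column restricoes

alter table tbPedidos
add restricoes as (cast(case when restricaoLicenca = 1 or restricaoLote = 1 or restricaoValor = 1 then 1 else 0 end as bit))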
This is one of those situations where it can be easier and faster to just use the diagram feature of SQL Server Management Studio.
Create a new diagram, add your table, and choose to show the formula column in the diagram's table view.
Change the column's formula to an empty string ('') or something equally innocuous (preferably such that you don't change the column's datatype).
Save the diagram (which should save the table).
Alter your function.
Put the function back in the formula for that column.
Save once again.
Doing it this way in SSMS will retain the ordering of the columns in your table, which a simple drop...add will not guarantee. This may be important to some.
Another thing that might be helpful to someone is how to modify a function that is used by a computed column in a table (the following is for SQL Server):
ALTER <table>
DROP COLUMN <column>
ALTER FUNCTION <function>
(
<parameters>
)
RETURNS <type>
BEGIN
...
END
ALTER <table>
ADD <column> as dbo.<function>(parameters)
Notes:
Parameters can be other columns from the table
You may not be able to run all of these statements at once (I had trouble with this); run them one at a time
SQL Server automatically populates computed columns, so dropping and re-adding one won't affect the underlying data (I was unaware of this)
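As a concrete sketch of that template (all names here, dbo.Orders, Total and dbo.fn_ComputeTotal, are hypothetical), run each batch separately:
ALTER TABLE dbo.Orders
DROP COLUMN Total

ALTER FUNCTION dbo.fn_ComputeTotal
(
    @Quantity int,
    @UnitPrice money
)
RETURNS money
AS
BEGIN
    RETURN @Quantity * @UnitPrice;
END

ALTER TABLE dbo.Orders
ADD Total AS dbo.fn_ComputeTotal(Quantity, UnitPrice)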
Is it possible to set a variable during a query (valid only for the query in question) that can be captured by a TRIGGER procedure?
For example, I want to record the ID of the executor of a query (current_user is always the same).
So I would do something like this:
tbl_executor (
id PRIMARY KEY,
name VARCHAR
);
tbl_log (
executor REFERENCE tbl_executor(id),
op VARCHAR
);
tbl_other ...
CREATE TRIGGER t AFTER INSERT OR UPDATE OR DELETE ON tbl_executor
FOR EACH ROW
EXECUTE PROCEDURE (INSERT INTO tbl_log VALUES( ID_VAR_OF_THIS_QUERY ,TG_OP))
Now if I run a query like:
INSERT INTO tbl_other
VALUES(.......) - and set ID_VAR_OF_THIS_QUERY='id of executor' -
I get the following result:
tbl_log
-----------------------------
id | op |
-----------------------------
'id of executor' | 'INSERT'|
I hope I have made the idea clear... I suspect it is hardly feasible, but is there anyone who could help me?
To answer the question
You can SET a customized option like this:
SET myvar.role_id = '123';
But that requires a literal value. There is also the function set_config(). Quoting the manual:
set_config(setting_name, new_value, is_local) ... set parameter and return new value
set_config sets the parameter setting_name to new_value. If is_local is true, the new value will only apply to the current transaction.
Correspondingly, read option values with SHOW or current_setting(). Related:
How to use variable settings in trigger functions?
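A minimal sketch of the pair (the option name myvar.role_id is just an example):
BEGIN;
SELECT set_config('myvar.role_id', '123', true);  -- true = applies to this transaction only
SELECT current_setting('myvar.role_id');          -- returns '123'
COMMIT;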
But your trigger is on the wrong table (tbl_executor) and uses the wrong syntax. It looks like Oracle code, where you can provide code to CREATE TRIGGER directly. In Postgres you need a trigger function first:
How to use PostgreSQL triggers?
So:
CREATE OR REPLACE FUNCTION trg_log_who()
RETURNS trigger AS
$func$
BEGIN
INSERT INTO tbl_log(executor, op)
VALUES(current_setting('myvar.role_id')::int, TG_OP); -- !
RETURN NULL; -- irrelevant for AFTER trigger
END
$func$ LANGUAGE plpgsql;
Your example setup requires the type cast ::int.
Then:
CREATE TRIGGER trg_log_who
AFTER INSERT OR UPDATE OR DELETE ON tbl_other -- !
FOR EACH ROW EXECUTE PROCEDURE trg_log_who(); -- !
Finally, fetching id from the table tbl_executor to set the variable:
BEGIN;
SELECT set_config('myvar.role_id', id::text, true) -- !
FROM tbl_executor
WHERE name = current_user;
INSERT INTO tbl_other VALUES( ... );
INSERT INTO tbl_other VALUES( ... );
-- more?
COMMIT;
Set the third parameter (is_local) of set_config() to true so that the setting only applies to the current transaction, as requested (the equivalent of SET LOCAL).
But why per row? It would seem more reasonable to make it per statement:
...
FOR EACH STATEMENT EXECUTE PROCEDURE trg_foo();
Different approach
All that aside, I'd consider a different approach: a simple function returning the id, used as a column default:
CREATE OR REPLACE FUNCTION f_current_role_id()
RETURNS int LANGUAGE sql STABLE AS
'SELECT id FROM tbl_executor WHERE name = current_user';
CREATE TABLE tbl_log (
executor int DEFAULT f_current_role_id() REFERENCES tbl_executor(id)
, op VARCHAR
);
Then, in the trigger function, ignore the executor column; it will be filled in automatically:
...
INSERT INTO tbl_log(op) VALUES(TG_OP);
...
Be aware of the difference between current_user and session_user. See:
How to check role of current PostgreSQL user from Qt application?
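A quick illustration of the difference (some_role is a placeholder): after SET ROLE, current_user reflects the new role while session_user still reports the login role.
SET ROLE some_role;
SELECT current_user, session_user;
RESET ROLE;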
One option is to create a shared table to hold this information. Since it's per-connection, the primary key should be pg_backend_pid().
create table connection_global_vars(
backend_pid bigint primary key,
id_of_executor varchar(50)
);
insert into connection_global_vars(backend_pid) select pg_backend_pid() on conflict do nothing;
update connection_global_vars set id_of_executor ='id goes here' where backend_pid = pg_backend_pid();
-- in the trigger:
CREATE TRIGGER t AFTER INSERT OR UPDATE OR DELETE ON tbl_executor
FOR EACH ROW
EXECUTE PROCEDURE (INSERT INTO tbl_log VALUES( (select id_of_executor from connection_global_vars where backend_pid = pg_backend_pid()) ,TG_OP))
Another option is to create a temporary table (which exists per-connection).
create temporary table if not exists connection_global_vars(
id_of_executor varchar(50)
) on commit delete rows;
insert into connection_global_vars(id_of_executor) select null where not exists (select 1 from connection_global_vars);
update connection_global_vars set id_of_executor ='id goes here';
-- in the trigger:
CREATE TRIGGER t AFTER INSERT OR UPDATE OR DELETE ON tbl_executor
FOR EACH ROW
EXECUTE PROCEDURE (INSERT INTO tbl_log VALUES( (select id_of_executor from connection_global_vars where backend_pid = pg_backend_pid()) ,TG_OP))
For PostgreSQL in particular it probably won't make much difference to performance, except that the temporary table (which is not WAL-logged) may possibly be slightly faster.
If you run into performance issues because the planner does not recognise that it is a single-row table, you can run ANALYZE on it.
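Note that the EXECUTE PROCEDURE (INSERT INTO ...) lines above only mirror the pseudocode from the question; in Postgres the trigger still needs a real trigger function, and it belongs on tbl_other. A sketch for the temporary-table variant (the ::int cast assumes integer ids, as in the earlier answer):
CREATE OR REPLACE FUNCTION trg_log_who_from_table()
  RETURNS trigger AS
$func$
BEGIN
   INSERT INTO tbl_log(executor, op)
   SELECT id_of_executor::int, TG_OP
   FROM   connection_global_vars;  -- for the shared-table variant add: WHERE backend_pid = pg_backend_pid()
   RETURN NULL;
END
$func$ LANGUAGE plpgsql;

CREATE TRIGGER t AFTER INSERT OR UPDATE OR DELETE ON tbl_other
FOR EACH ROW EXECUTE PROCEDURE trg_log_who_from_table();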
I have a stored procedure that merges a local temp table into an existing table.
ALTER PROCEDURE [dbo].[SyncProductVariantsFromServices]
#Items ProductVariantsTable readonly
AS
BEGIN
CREATE TABLE #ProductVariantsTemp
(
ItemCode nvarchar(10) collate SQL_Latin1_General_CP1_CI_AS,
VariantCode nvarchar(10) collate SQL_Latin1_General_CP1_CI_AS,
VariantDescriptionBG nvarchar(100) collate SQL_Latin1_General_CP1_CI_AS,
VariantDescriptionEN nvarchar(100) collate SQL_Latin1_General_CP1_CI_AS
)
insert into #ProductVariantsTemp
select ItemCode, VariantCode, VariantDescriptionBG, VariantDescriptionEN
from #Items
MERGE ProductVariants AS TARGET
USING #ProductVariantsTemp AS SOURCE
ON (TARGET.ItemCode = SOURCE.ItemCode AND TARGET.VariantCode= SOURCE.VariantCode)
WHEN NOT MATCHED BY TARGET THEN
INSERT (ItemCode, VariantCode, VariantDescriptionBG, VariantDescriptionEN)
VALUES (SOURCE.ItemCode, SOURCE.VariantCode, SOURCE.VariantDescriptionBG, SOURCE.VariantDescriptionEN)
OUTPUT INSERTED.ItemCode, INSERTED.VariantCode, GETDATE() INTO SyncLog;
The problem is: I know that in the OUTPUT clause I have access to the inserted or deleted records, for example in the NOT MATCHED BY SOURCE case. But for rows that are not matched by source, I want to run an update instead:
Update ProductVariants Set Active = 0
-- when not matched by source
What is the most efficient way to do this?
Use WHEN NOT MATCHED BY SOURCE when you want to delete a target record that does not exist in the source table. If you want to 'inactivate' a record instead, that would have to be done in a MATCHED clause by adding conditions.
If you want to keep a history of the records, consider using "Slowly Changing Dimensions". Here are some examples of how Kimball handles this treatment of historical data:
Slowly Changing Dimensions - Part 1
Slowly Changing Dimensions - Part 2
Use the WHEN NOT MATCHED BY SOURCE clause of the MERGE with an UPDATE statement.
MERGE
ProductVariants AS TARGET
USING
#ProductVariantsTemp AS SOURCE ON (TARGET.ItemCode = SOURCE.ItemCode AND TARGET.VariantCode= SOURCE.VariantCode)
WHEN
NOT MATCHED BY TARGET THEN
INSERT (ItemCode, VariantCode, VariantDescriptionBG, VariantDescriptionEN)
VALUES (SOURCE.ItemCode, SOURCE.VariantCode, SOURCE.VariantDescriptionBG, SOURCE.VariantDescriptionEN)
WHEN
NOT MATCHED BY SOURCE THEN
UPDATE SET Active = 0
OUTPUT
INSERTED.ItemCode, INSERTED.VariantCode, GETDATE() INTO SyncLog;
Since the OUTPUT clause for the INSERTED table might now return either inserted or updated records, you can add the special column $action, which tells you whether the original operation was an INSERT or an UPDATE. You will have to change the SyncLog table to receive this value, though.
OUTPUT
INSERTED.ItemCode, INSERTED.VariantCode, GETDATE(), $action
INTO SyncLog;
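A sketch of that change (the column name SyncAction is hypothetical; adjust it to how SyncLog is actually defined):
ALTER TABLE SyncLog
ADD SyncAction nvarchar(10) NULL;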
Is it possible to write a script in HANA that creates a temporary table based on an existing table (without having to hard-code the columns and column types)? That is, instead of:
create local temporary table #mytemp (id integer, name varchar(20));
can I create a temporary table with the same column definitions, containing the same data? If so, I would be glad to see some examples.
I have been searching the internet for 2 days and couldn't find anything useful.
Thanks
Creating local temporary tables based on dynamic structure definition is not supported in SQLScript.
The question would be: what do you want to use it for?
Instead of a local temporary table, you can use a table variable in most cases.
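For example, a table variable simply takes over the structure of whatever it selects; a minimal sketch in an anonymous block (MySourceTable is a placeholder):
DO BEGIN
    -- the table variable my_rows gets MySourceTable's columns and types automatically
    my_rows = SELECT * FROM MySourceTable;
    SELECT COUNT(*) AS row_count FROM :my_rows;
END;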
By querying the sys.table_columns view, you can get the list of columns and their properties for the source table, build a dynamic CREATE statement, and then execute it to create the table.
You can find SQL code for a sample case at Create Table Dynamically on HANA Database.
To read the table columns:
select * from sys.table_columns where table_name = 'TABLENAME';
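A rough sketch of that dynamic approach, assuming your source table is called MYTABLE and that STRING_AGG, anonymous blocks and EXEC behave as in recent HANA releases (illustrative only, not production-ready; precision/scale handling is omitted):
DO BEGIN
    DECLARE v_ddl NVARCHAR(5000);
    -- assemble a column list like "ID INTEGER, NAME NVARCHAR(20)" from the catalog
    SELECT 'CREATE LOCAL TEMPORARY TABLE #MYTEMP (' ||
           STRING_AGG(column_name || ' ' || data_type_name ||
                      CASE WHEN data_type_name IN ('VARCHAR', 'NVARCHAR')
                           THEN '(' || TO_VARCHAR(length) || ')' ELSE '' END,
                      ', ' ORDER BY position) || ')'
      INTO v_ddl
      FROM sys.table_columns
     WHERE table_name = 'MYTABLE';
    EXEC :v_ddl;
    EXEC 'INSERT INTO #MYTEMP SELECT * FROM MYTABLE';
END;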
This seems to work in the HANA version I have (I'm not sure how to find out which version that is).
PROCEDURE "xxx.yyy.zzz::MY_TEST"(
OUT "OUT_COL" NVARCHAR(200)
)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
AS
BEGIN
create LOCAL TEMPORARY TABLE #LOCALTEMPTABLE
as
(
SELECT distinct 'Cola' as out_col
FROM "SYNONYMS1"
);
select * from #LOCALTEMPTABLE ;
DROP TABLE #LOCALTEMPTABLE;
END
The newer HANA version (HANA 2 SPS 04 Patch 5, Build 4.4.17) supports your request:
create local temporary table #tempTableName like "tableTypeName";
This should inherit the data types and all exact values from whatever query is in the parentheses:
CREATE LOCAL COLUMN TEMPORARY TABLE #mytemp AS (
SELECT
"COLUMN1",
"COLUMN2",
"COLUMN3"
FROM MyTable
);
-- Now you can add the rest of your query here as such:
SELECT * FROM #mytemp
I suppose you can just write:
create column table #MyTempTable as ( select * from MySourceTable);
BR,
I have an Oracle trigger, and I need to create a column after rows are inserted into the first table.
So, in my scenario:
When a record is inserted into NEWS_TBL, I need to get it (here I get it via the last inserted record), read its NAME from NEWS_TBL, return that value into the NewsName variable, and then add that returned value to NEWS_TYPE_TBL as a column.
The code below is not working. Can anyone please give me a solution for this?
My code:
CREATE OR REPLACE TRIGGER trigNews
BEFORE DELETE OR INSERT OR UPDATE
ON NEWS.NEWS_FIRST
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
DECLARE
NewsName varchar2(50);
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
EXECUTE IMMEDIATE
'select *
from ( select a.NAME,a.ID, max(ID) over () as max_pk
from NEWS_TBL a)
where ID = max_pk
RETURNING NAME INTO NewsName';
'ALTER TABLE NEWS_TYPE_TBL ADD [NewsName] NUMBER(50) NULL';
-- I want to add the returned news name here from the first query.
END trigNews;
/