I am working on a project and I want to automatically create a file partition whenever an insert into a SessionTerm table occurs.
I tried to execute a trigger on that table after the insert but I encountered an error:
ALTER DATABASE statement not allowed within multi-statement transaction
After researching the error, I discovered that a trigger implicitly runs inside a transaction, so you cannot use an ALTER DATABASE statement within it.
Next, I tried adding a computed column based on a user-defined function that calls a stored procedure, hoping the function would be evaluated at least once whenever the table is read and the partitions would be created behind the scenes. That did not work either; I got another error:
Only functions and some extended stored procedures can be executed from within a function.
Does anyone have an idea, or an alternative method for accomplishing this? I will appreciate any input.
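One common workaround, sketched below with hypothetical names (the PartitionRequests queue table, the TermId column, and the trigger name are all placeholders; only SessionTerm comes from the question), is to decouple the DDL from the trigger: the trigger merely records that work is needed, and a SQL Server Agent job polling from outside the trigger's transaction performs the ALTER DATABASE.
-- queue table: the trigger writes a request here instead of running DDL
create table dbo.PartitionRequests
(
    RequestId int identity(1,1) primary key,
    TermId    int not null,
    Processed bit not null default 0
);
GO
create trigger trg_SessionTerm_QueuePartition
on dbo.SessionTerm
after insert
as
begin
    set nocount on;
    -- no ALTER DATABASE here; just record that work is needed
    insert into dbo.PartitionRequests (TermId)
    select TermId from inserted;
end
GO
-- a SQL Server Agent job (or any external process) then polls
-- dbo.PartitionRequests and runs the ALTER DATABASE / partition DDL
-- outside the trigger's transaction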
I have two separate procedures. One procedure alters the existing table with new columns; the other adds data to those columns. They aren't being executed, only created. When I hit run, the procedure that adds data to the columns throws an error saying the column does not exist. I understand the column isn't there yet because I didn't exec the procedure containing the ALTER code, but I'm not sure why the code inside the procedure is even checked, since I thought the script only creates the procedure.
Some of the code is repetitive, I know; this is simply to get a working solution before making it dynamic.
To answer this more fully than my comment: stored procedures are parsed and compiled when they are created. Resolution of tables that don't exist yet is deferred to run time, but if a referenced table does exist and you use a column it doesn't have, creation fails on the spot. So it is not only checked at runtime.
Try this and it will fail every time:
create table junk(a int)
GO
create procedure p as
update junk set b=1  -- fails at CREATE time: junk exists but has no column b
GO
If you want this to work, run the procedure that creates the columns before you attempt to create the procedure that inserts the data, or change the insert procedure so that it uses dynamic SQL, as sketched below.
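A minimal sketch of the dynamic SQL route, reusing the junk table from above: because the statement lives in a string, it is only resolved when the procedure actually runs, by which time the column can exist.
create procedure p as
begin
    -- resolved at run time, not at create time, so this compiles
    -- even while junk has no column b
    exec sp_executesql N'update junk set b = 1';
end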
Note that if you're desperate to have a db that has no columns but has a procedure that references them for insert, you can create the columns, create the insert procedure and then drop the columns again. The procedure won't run, because dropping the columns invalidated it, but it will still exist.
Not quite sure why you'd want to, though; db schema is very much a design-time thing, so the design should be evolutionary. If you're doing this as part of wider work in a front-end language, take a look at a database migration tool: a device that runs scripts, typically on app startup, that ensures the db has all the columns and data the app needs for that version to run. It's typically bidirectional too, so if you downgrade, the migration tool can remove the columns and data it added.
If I call a stored procedure which recursively calls itself, I am wondering whether @vars, #tables and ##tables get non-conflicting per-call copies. I am guessing @vars and #tables are OK, but ## should create problems.
I think the question further expands to: when a stored procedure calls itself, does it create a new session?
No, it does not. Variables are scoped to one level (so they are not visible to nested calls), #temp tables are scoped to the session, ##temp tables are scoped globally. There is no way in T-SQL to even create a new session (even EXEC creates a new batch, but not a new session). Well, you could create a scheduled job on the fly (or maybe use OPENROWSET with the local server), but that's cheating.
Be wary of creating temp tables in stored procedures that nest, though: you'll run into trouble if you're not careful. Specifically, per the docs:
A local temporary table created within a stored procedure or trigger can have the same name as a temporary table that was created before the stored procedure or trigger is called. However, if a query references a temporary table and two temporary tables with the same name exist at that time, it is not defined which table the query is resolved against. Nested stored procedures can also create temporary tables with the same name as a temporary table that was created by the stored procedure that called it. However, for modifications to resolve to the table that was created in the nested procedure, the table must have the same structure, with the same column names, as the table created in the calling procedure.
That means the "obvious" case where you create "the same" temp table in every step of the nesting works as you'd expect: every nested call has its "own" table and won't see the parent table. If you don't create the table in a nested call, though, you'll get the parent table, and if you create one with a different structure (for whatever reason) you can actually get a compilation error when SQL Server detects this bizarre set of circumstances.
You can therefore either use a temp table as a way to keep state across calls, or specifically avoid that by "recreating" it in each nested call, but it's up to you to keep things sane.
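A minimal sketch of both behaviors (the procedure names are hypothetical): outer_proc creates #state, and inner_shadow hides it with its own same-structure copy, so the parent's row survives the nested call.
create procedure inner_shadow as
begin
    -- same name and structure as the caller's table: modifications here
    -- resolve to this call's own copy, not the parent's
    create table #state (v int);
    insert into #state values (2);
end
GO
create procedure outer_proc as
begin
    create table #state (v int);
    insert into #state values (1);
    exec inner_shadow;
    select v from #state;  -- 1: the nested call never touched this table
end
GO
exec outer_proc;
Drop the create table line from inner_shadow and its insert lands in the caller's table instead, i.e. shared state.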
All recursions are in the same batch.
Each stored procedure (recursion) has its own scope within the batch.
In simple terms:
A connection has many batches (one after the other).
A batch has many scopes (one per code unit, such as a stored proc, function, etc.).
So:
@vars are scoped to that code unit = OK per recursion
#tables are scoped to the connection = NOT OK, visible to recursions
##tables are scoped to all connections using them = NOT OK, visible to recursions
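A sketch of those rules in action (the procedure name is hypothetical): each call gets fresh @vars, while every level of the recursion writes into the one #t created by the calling batch.
create procedure recurse @depth int as
begin
    insert into #t values (@depth);  -- #t from the outer batch is visible here
    declare @next int = @depth - 1;  -- @vars are private to this invocation
    if @next >= 1
        exec recurse @depth = @next;
end
GO
create table #t (d int);
exec recurse @depth = 3;
select d from #t;  -- 3, 2, 1: every level wrote to the same table
GO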
I have a stored procedure which makes use of a global temporary table, ##temp, created on the fly with select * into ##temp from tablename.
The problem I am having is that the table seems to be dropped, or only available at the moment the query runs, despite ## being global and usable by other sessions, from what I know.
I am using SSRS to run the stored procedure, with a drill-through from this report to the same report: the first one shows only the charts, and the second, driven by the same stored procedure via the action link and a parameter, doesn't recognize the ##temp table.
Now that you have the background: is there a way around this, or a better way of doing it? Keep in mind we don't have a data warehouse at the moment, so we're just using temporary tables as a workaround.
Thanks
From MSDN:
Global temporary tables are automatically dropped when the session that created the table ends and all other tasks have stopped referencing them. The association between a task and a table is maintained only for the life of a single Transact-SQL statement. This means that a global temporary table is dropped at the completion of the last Transact-SQL statement that was actively referencing the table when the creating session ended.
If you have admin access to the server, try this answer.
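One way around the lifetime problem entirely, sketched under the assumption that you can create a small permanent staging table (dbo.ReportStaging and RunId are hypothetical names): key the rows by a run ID that the first report passes to the drill-through report, so both reads hit a real table instead of ##temp.
-- one-time setup: a permanent staging table instead of ##temp
create table dbo.ReportStaging
(
    RunId    uniqueidentifier not null,
    LoadedAt datetime2 not null default sysdatetime()
    -- ...plus the columns the reports actually need
);
GO
-- inside the stored procedure: load under a fresh run ID, return it,
-- and pass it to the drill-through report as a parameter
declare @RunId uniqueidentifier = newid();
insert into dbo.ReportStaging (RunId)
select @RunId from tablename;
select @RunId as RunId;  -- the second report filters on this value
Old runs can then be cleared by a scheduled cleanup keyed on LoadedAt.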
I am trying to create a generic trigger in SQL Server which can copy all column data from Table A and insert it into the corresponding fields in Table B.
There are a few problems I am facing.
I need this copy to occur under three conditions: INSERT, DELETE and UPDATE.
The trigger needs to fire after CUD operations, but using AFTER throws a SQL error saying ntext etc. are not supported in the inserted table. How do I resolve this error?
An INSTEAD OF trigger, as far as I can tell, works for INSERT but not for DELETE. Is there a way to handle delete operations?
Is there a way I can write generic code inside the trigger that works for all sorts of tables? (We can assume that every column in Table A exists in Table B.)
I am not well versed with triggers, or for that matter DDL, in SQL Server.
I'd appreciate it if someone could provide some solutions.
Thanks
Ben
CREATE TRIGGER (Transact-SQL)
Use nvarchar(max) instead of ntext.
You can have an INSTEAD OF trigger for delete.
You can have one trigger that handles insert/update/delete for one table, but you cannot connect a trigger to more than one table.
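A minimal sketch of such a trigger (dbo.TableA, dbo.TableB and the Id key are hypothetical stand-ins, and it assumes the ntext columns have already been converted to nvarchar(max)): a single AFTER trigger covers all three operations by deleting the old row images and re-inserting the new ones.
create trigger trg_TableA_Copy on dbo.TableA
after insert, update, delete
as
begin
    set nocount on;
    -- deleted holds the old images (DELETE, and the "before" half of UPDATE)
    delete b
    from dbo.TableB as b
    join deleted as d on d.Id = b.Id;
    -- inserted holds the new images (INSERT, and the "after" half of UPDATE);
    -- select * assumes the column lists line up, otherwise name them explicitly
    insert into dbo.TableB
    select * from inserted;
end
Since a trigger binds to exactly one table, making this "generic" means generating one such trigger per table, e.g. from a script that reads each table's columns out of sys.columns.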
In DB2 for IBM System i, I created this trigger to record in MYLOGTABLE every insert operation made on MYCHECKEDTABLE:
SET SCHEMA MYSCHEMA;
CREATE TRIGGER MYTRIGGER AFTER INSERT ON MYCHECKEDTABLE
REFERENCING NEW AS ROWREF
FOR EACH ROW BEGIN ATOMIC
INSERT INTO MYLOGTABLE -- after creation becomes MYSCHEMA.MYLOGTABLE
(MMACOD, OPTYPE, OPDATE)
VALUES (ROWREF.ID, 'I', CURRENT TIMESTAMP);
END;
The DBMS stores the trigger body with MYSCHEMA.MYLOGTABLE hardcoded.
Now imagine that we copy the entire schema to a new schema, NEWSCHEMA. When I insert a record into NEWSCHEMA.MYCHECKEDTABLE, a log record is added to MYSCHEMA.MYLOGTABLE instead of NEWSCHEMA.MYLOGTABLE, i.e. instead of the log table in the schema where the copied trigger now lives. This causes big issues, not least because many users can copy the schema without my control...
So, is there a way to specify, in the trigger body, the schema where the trigger lives? That way we would write the log record to the correct MYLOGTABLE. Something like PARENT SCHEMA... Or is there a workaround?
Many thanks!
External triggers defined in an HLL have access to a trigger buffer that includes the library name of the table that fired the trigger. This could be used to qualify the reference to MYLOGTABLE.
See chapter 11.2 "Trigger program structure" of the IBM Redbook Stored Procedures, Triggers, and User-Defined Functions on DB2 Universal Database for iSeries for more information.
Alternatively you may be able to use the CURRENT SCHEMA special register or the GET DESCRIPTOR statement to find out where the trigger and/or table are currently located.
Unfortunately, I realized that the schema where a trigger lives can't be detected from inside the trigger's body.
But there are some workarounds (thanks to @krmilligan too):
Take away the user's authority to execute CPYLIB and make them use a utility.
Create a background agent on the system that periodically runs, looking for triggers that are out of sync.
For the CPYLIB command, set the default for the TRG option to *NO. That way triggers will never be copied unless the user explicitly asks for it.
I chose the last one because it's the simplest, even though there may be contexts where copying triggers is required. In such cases I'd take the first workaround.
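For the record, changing that command default can be done with the CHGCMDDFT CL command (a one-line sketch; it requires authority to change command defaults):
CHGCMDDFT CMD(CPYLIB) NEWDFT('TRG(*NO)')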