In Sybase: how to conditionally drop a table and deallocate a cursor

In Postgres I can do:
drop table if exists ...
But in Sybase I have to do:
if exists(
select 1 from sysobjects where name=... and type='U'
)
drop table ...
Is this the way to go in Sybase?
But how do I do this for:
deallocate cursor ...
? Which system table holds cursors?
Also, I saw that in a procedure the cursor is gone after the procedure has executed, so in a procedure I do not have to deallocate the cursor at the end. But if the procedure creates a table, it is still there after the procedure has executed. Is this really true?
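A minimal sketch to check the table part (the procedure and table names here are made up):
create procedure proc_makes_table as
create table t_made_in_proc (id int)   -- a permanent table, not a #temp table
go
exec proc_makes_table
go
-- the table is still there after the procedure finishes:
select name from sysobjects where name = 't_made_in_proc' and type = 'U'
drop table t_made_in_proc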

Related

Why doesn't this alter after insert statement work?

I have a stored procedure with dynamic SQL that I have embedded as below:
delete from #temp_table
begin tran
set @sql = 'select * into #temp_table from sometable'
exec (@sql)
commit tran
begin
set @sql = 'alter table #temp_table add column1 float'
exec(@sql)
end
update #temp_table
set column1 = column1*100
select *
into Primary_Table
from #temp_table
However, I noticed that all the statements work but the alter does not. When I run the procedure, I get an error message: "Invalid Column name column1"
What am I doing wrong here?
EDIT: Realized I didn't mention that the first insert is dynamic SQL as well. Updated it.
Alternate approach I tried, but it throws the same error:
delete from #temp_table
begin tran
set @sql = 'select * into #temp_table from sometable'
exec (@sql)
commit tran
alter table #temp_table add column1 float
update #temp_table set column1 = column1*100
Local temporary tables exhibit something like dynamic scope. When you create a local temporary table inside a call to exec, it goes out of scope and ceases to exist on the return from exec.
EXEC (N'create table #x (c int)')
GO
SELECT * FROM #x
Msg 208, Level 16, State 0, Line 4
Invalid object name '#x'.
The select is parsed after the dynamic SQL that creates #x has run, but #x is no longer there because it was dropped on exit from exec.
Update
Depending on the situation there are different ways to work around the issue.
Put everything into the same string:
DECLARE @Sql NVARCHAR(MAX) = N'SELECT 1 AS source INTO #table_name;
ALTER TABLE #table_name ADD target float;
UPDATE #table_name SET target = 100 * source;';
EXEC (@Sql);
Create the table ahead of the dynamic sql that populates it.
CREATE TABLE #table_name (source INT);
EXEC (N'insert into #table_name (source) select 1;');
ALTER TABLE #table_name ADD target FLOAT;
UPDATE #table_name SET target = 100 * source;
In this option, the alter table statement can be removed by adding the additional column to the create table statement. Note also that the alter table and update statements could be in separate invocations of dynamic SQL, if that was beneficial to your context.
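For example, a sketch of that separate-invocations variant (using the same assumed #table_name):
CREATE TABLE #table_name (source INT);                      -- created in the outer batch, so it survives each EXEC
EXEC (N'insert into #table_name (source) select 1;');       -- populate in one invocation
EXEC (N'alter table #table_name add target float;');        -- alter in a second invocation
EXEC (N'update #table_name set target = 100 * source;');    -- each EXEC compiles separately, so the update sees the new column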
1) It should be ALTER TABLE #temp... Not ALTER #temp.
2) Even if #1 weren't an issue, you're adding column1 as a NULLable column with no default value and, in the next statement, setting its value to itself * 100...
NULL * 100 = NULL
3) Why are you using dynamic SQL to alter the #temp table? It can just as easily be done with a regular ALTER TABLE statement... or, better yet, the column can be included in the original table definition.
This is because the #temp_table reference in the outer batch is a different temp table than the one created in dynamic SQL. Consider:
use tempdb
drop table if exists sometable
drop table if exists #temp_table
go
create table sometable(id int, a int)
create table #temp_table(id int, b int)
exec( 'select * into #temp_table from sometable; select * from #temp_table;' )
select * from #temp_table
Outputs
id a
----------- -----------
(0 rows affected)
id b
----------- -----------
(0 rows affected)
A temp table created in a nested batch is scoped to the nested batch and automatically dropped afterwards. A "nested batch" is either a dynamic SQL query or a stored procedure. This behavior is explained in the CREATE TABLE documentation, but that only mentions stored procedures; dynamic SQL behaves the same way.
If you create the temp table in a top-level batch, you can access it in dynamic SQL; you just can't create a new temp table in dynamic SQL and see it in the outer batch or in subsequent same-level dynamic SQL. So try to use INSERT INTO instead of SELECT INTO.
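Applied to the example above, a sketch of that INSERT INTO approach (still using the made-up sometable):
create table #results(id int, a int)                          -- created at the outer level
exec( 'insert into #results select id, a from sometable;' )   -- dynamic SQL fills the existing table
select * from #results                                        -- visible in the outer batch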

Should I drop temp table in this scenario?

I have a stored procedure called in a .Net webservice that works like this (pseudo-code):
CREATE PROC SomeProc AS
BEGIN TRY
BEGIN TRAN
IF EXISTS (SELECT * FROM sys.tables WHERE name LIKE '#temp%')
DROP TABLE #temp;
CREATE TABLE #temp (...);
/* lots of logic here */
-- clear up
IF EXISTS (SELECT * FROM sys.tables WHERE name LIKE '#temp%')
DROP TABLE #temp;
COMMIT TRAN
END TRY
BEGIN CATCH
IF EXISTS (SELECT * FROM sys.tables WHERE name LIKE '#temp%')
DROP TABLE #temp;
ROLLBACK TRAN;
END CATCH
The proc is always accessed via the same connection to the database (as defined in a config file).
Somebody has raised a concern that if the webservice, and thus the procedure, gets called twice in quick succession, there is a danger that the temp table for the second call would be deleted by the first call.
Is this correct? I thought SQL Server was synchronous, so that two procedures couldn't be called at the same time and SQL would queue the requests? This post seems to suggest I am doing the right thing but the multi-thread answer concerns me. Any clarification would be helpful please.
Local temporary (#) tables are session scoped, there is no way that some other session could interfere with a temp table created in your session.
If you do a select on sys.tables in tempdb, you will see that every temp table name is suffixed with a session identifier.
Also, there is no need to explicitly drop a temp table in a stored procedure; SQL Server will do the automatic cleanup, and it also caches the metadata for a possible performance benefit.
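A short sketch illustrating both points (the #temp name is just an example):
CREATE TABLE #temp (id int);
-- the name stored in tempdb is padded and suffixed with a session-specific identifier
SELECT name FROM tempdb.sys.tables WHERE name LIKE '#temp%';
-- an existence check that only sees the current session's table:
IF OBJECT_ID('tempdb..#temp') IS NOT NULL
    DROP TABLE #temp;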
It is possible to use TRUNCATE TABLE.
TRUNCATE TABLE #temp;

SQL Server: Can I pass the results of a table-valued function to a stored procedure?

I have a conceptual way I'd like to code a set of related functions and stored procedures. I'm hoping to get a little feedback on whether or not that way is doable.
In a stored procedure, I'd like to assign the values of a table-valued function to a temporary table, then pass that table to another stored procedure...
Can I do this without creating table types?
A quick sample of the #temp table solution:
CREATE PROCEDURE dbo.B
AS
BEGIN
SET NOCOUNT ON;
SELECT * FROM #foo;
END
GO
CREATE PROCEDURE dbo.A
AS
BEGIN
SET NOCOUNT ON;
SELECT TOP 1 * INTO #foo FROM sys.objects;
EXEC dbo.B;
DROP TABLE #foo;
END
GO
EXEC dbo.A;
DROP PROCEDURE dbo.A, dbo.B;
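To populate the temp table from a table-valued function instead of sys.objects, the same pattern applies; here is a sketch that assumes a hypothetical function dbo.MyTvf:
CREATE PROCEDURE dbo.A2
AS
BEGIN
SET NOCOUNT ON;
SELECT * INTO #foo FROM dbo.MyTvf(42);   -- dbo.MyTvf is a placeholder for your function
EXEC dbo.B;                              -- dbo.B can read #foo from the nested scope
DROP TABLE #foo;
END
GO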

Temporary table trouble in SQL Server

I have 2 stored procedures:
The first one creates #TempTable:
CREATE PROCEDURE CreateTempTable
AS
BEGIN
IF OBJECT_ID('tempdb..#TempTable') IS NOT NULL
BEGIN
DROP TABLE #TempTable;
END
CREATE TABLE #TempTable(
Value real NOT NULL
)
END
The second one inserts data into my #TempTable:
CREATE PROCEDURE InsertData
@Value real
AS
BEGIN
INSERT #TempTable (Value) VALUES (@Value)
END
When I call these procedures I get an error:
exec CreateTempTable
exec InsertData 1
go
Name '#TempTable' not valid in InsertData
Can you help me?
A temp table created inside a sproc is automatically dropped after the sproc ends.
You have a few choices:
Create the temp table outside of the sproc as a standalone query. Then it will be dropped after the connection closes
Create a sproc that first creates the temp table and then calls the other sprocs (see the sketch below)
Use a global temp table (careful - concurrency issues can crop up with this)
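A minimal sketch of the second option, assuming InsertData still references a local #TempTable (the wrapper proc name is made up):
CREATE PROCEDURE RunAll
AS
BEGIN
CREATE TABLE #TempTable (Value real NOT NULL);   -- owned by the wrapper, visible to the procs it calls
EXEC InsertData 1;                               -- InsertData inserts into this #TempTable
SELECT * FROM #TempTable;
END                                              -- #TempTable is dropped automatically when RunAll returns
GO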
I guess the problem here is that you are creating a local temporary table, which cannot be accessed outside CreateTempTable. You should create a global temporary table, by using ## instead of #.
Edit Yep, that's it. Here is your fixed script:
CREATE PROCEDURE CreateTempTable
AS
BEGIN
IF OBJECT_ID('tempdb..##TempTable') IS NOT NULL
BEGIN
DROP TABLE ##TempTable;
END
CREATE TABLE ##TempTable(
Value real NOT NULL
)
END
GO
CREATE PROCEDURE InsertData
@Value real
AS
BEGIN
INSERT ##TempTable (Value) VALUES (@Value)
END
GO
exec CreateTempTable
exec InsertData 1
go

Nested insert exec workaround

I have 2 stored procedures usp_SP1 and usp_SP2. Both of them make use of insert into #tt exec sp_somesp. I wanted to create a 3rd stored procedure which will decide which stored proc to call. Something like:
create proc usp_Decision
(
@value int
)
as
begin
if (@value = 1)
exec usp_SP1 -- this proc already has insert into #tt exec usp_somestoredproc
else
exec usp_SP2 -- this proc too has insert into #tt exec usp_somestoredproc
end
Later, I realized I needed some structure defined for the return value from usp_Decision so that I can populate the SSRS dataset field. So here is what I tried:
Within usp_Decision I created a temp table and tried to do "insert into #tt exec usp_SP1". This didn't work out; I got the error "insert exec cannot be nested".
Within usp_Decision I tried passing a table variable to each of the stored procs, updating the table within the stored procs, and doing "select * from ". That didn't work out either: a table variable passed as a parameter cannot be modified within the stored proc.
Please suggest what can be done.
Can you modify usp_SP1 and usp_SP2?
If so, in usp_Decision, create a local temporary table with the proper schema to insert the results:
create table #results (....)
Then, in the called procedure, test for the existence of this temporary table. If it exists, insert into the temporary table. If not, return the result set as usual. This helps preserve existing behavior, if the nested procedures are called from elsewhere.
if object_id('tempdb..#results') is not null begin
insert #results (....)
select .....
end
else begin
select ....
end
When control returns to the calling procedure, #results will have been populated by the nested proc, whichever one was called.
If the result sets don't share the same schema, you may need to create two temporary tables in usp_Decision.
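A concrete sketch of this pattern, with made-up column names and a made-up source table:
-- in usp_Decision:
create table #results (id int, amount money)
if (@value = 1)
exec usp_SP1
else
exec usp_SP2
select * from #results   -- populated by whichever proc ran
-- inside usp_SP1 (and likewise usp_SP2):
if object_id('tempdb..#results') is not null
begin
insert #results (id, amount)
select id, amount from SomeSource   -- SomeSource is a placeholder
end
else
begin
select id, amount from SomeSource   -- original behavior for other callers
end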
Have you had a look at table-valued user-defined functions (either inline or multi-statement)? Similar to HLGEM's suggestion, this will return a set which you may not have to insert anywhere.
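For example, a sketch of an inline table-valued function standing in for one of the stored procs (names are illustrative):
create function dbo.fn_mytest1()
returns table
as
return (select top 1 id from MYDATABASE..MYTABLE);
go
-- the caller selects from it directly, with no INSERT ... EXEC:
-- select * from dbo.fn_mytest1()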
Not a fan of global temp tables in any event (other processes can read these tables and may interfere with the data in them).
Why not have each proc use a local temp table and select * from that table as the last step?
Then you can insert into a local temp table in the calling proc.
Simple example:
create proc usp_mytest1
as
select top 1 id into #test1
from MYDATABASE..MYTABLE (nolock)
select * from #test1
go
--drop table #test
create proc usp_mytest2
as
select top 10 MYTABLE_id into #test2
from MYDATABASE..MYTABLE (nolock)
select * from #test2
go
create proc usp_mytest3 (@myvalue int)
as
create table #test3 (MYTABLE_id int)
if @myvalue = 1
Begin
insert #test3
exec ap2work..usp_mytest1
end
else
begin
insert #test3
exec ap2work..usp_mytest2
end
select * from #test3
go
exec ap2work..usp_mytest3 1
exec ap2work..usp_mytest3 0
See this blog article for one workaround (it uses OPENROWSET to essentially create a loopback connection on which one of the INSERT EXEC calls happens).
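The gist of that loopback workaround, as a rough sketch (the provider name, connection string, and object names are assumptions, and 'Ad Hoc Distributed Queries' must be enabled on the server):
insert into #tt
select *
from openrowset('SQLNCLI',
'Server=(local);Trusted_Connection=yes;',
'set fmtonly off; exec YourDb.dbo.usp_SP1;')   -- the proc runs on a separate loopback connection, so there is no nested INSERT EXEC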
