Table valued function in Oracle - sql-server

I have used table-valued functions in SQL Server for a long time. Such a function can be used in the FROM clause, and a WHERE clause can be applied to its result.
In SQL Server the WHERE clause is effectively pushed into the function itself, whereas in Oracle the WHERE clause is applied only after the function has produced its full result set.
The difference matters when the function, without the WHERE, returns many rows.
Furthermore, in SQL Server the indexes on the tables inside the function are used if the WHERE filters on an indexed column.
Example:
CREATE TABLE table_test (
col1 varchar(50),
col2 varchar(50)
)
--INSERT TEST DATA
Declare @Id int
Set @Id = 1
While @Id <= 1000000
Begin
Insert Into table_test values ('col1' + CAST(@Id as nvarchar(10)), 'col2' + CAST(@Id as nvarchar(10)))
Set @Id = @Id + 1
End
End
CREATE FUNCTION func_test() RETURNS TABLE
AS
RETURN
(
SELECT * FROM table_test
)
GO
CREATE NONCLUSTERED INDEX ixd_test ON table_test (col1) INCLUDE (col2)
SELECT * FROM func_test() WHERE col1 like 'col132%'
Is there a similar type of function in Oracle?

Create table
CREATE TABLE table_test (
col1 varchar(50),
col2 varchar(50)
)
Once you fill test data into table_test (using something like the following):
Begin
for i in 1..1000000 loop
Insert Into table_test values ('col1' || TO_CHAR(i), 'col2' || TO_CHAR(i));
end loop;
commit;
End;
/
Something like the following may work (tested using Oracle Live SQL).
-- Create row definition
CREATE OR REPLACE TYPE SOME_ROW_TYPE AS OBJECT
(
COL1 varchar2(50),
COL2 varchar2(50)
)
/
-- Create table definition
CREATE TYPE SOME_TABLE_TYPE AS TABLE OF SOME_ROW_TYPE
/
CREATE OR REPLACE FUNCTION DoStuff RETURN SOME_TABLE_TYPE AS
-- Declarations
RET_TABLE SOME_TABLE_TYPE := SOME_TABLE_TYPE();
CURSOR DATA_FETCH IS
SELECT
COL1,
COL2
FROM TABLE_TEST;
BEGIN
FOR ITEM IN DATA_FETCH LOOP
RET_TABLE.extend;
RET_TABLE(RET_TABLE.LAST) := SOME_ROW_TYPE(
ITEM.COL1,
ITEM.COL2);
END LOOP;
RETURN RET_TABLE;
END;
/
Creating the index:
CREATE INDEX index_name ON table_test(col1)
Fetch via (older Oracle releases require wrapping the call in the TABLE() operator, i.e. TABLE(DoStuff)):
select * from DoStuff() WHERE col1 like 'col132%'
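As an alternative sketch (the function name DoStuffPiped is illustrative), a PIPELINED function streams rows to the caller instead of building the whole collection in memory first, which matters when the underlying query returns many rows:

```sql
CREATE OR REPLACE FUNCTION DoStuffPiped RETURN SOME_TABLE_TYPE PIPELINED AS
BEGIN
  -- stream each row out as it is fetched, rather than extending a collection
  FOR item IN (SELECT col1, col2 FROM table_test) LOOP
    PIPE ROW (SOME_ROW_TYPE(item.col1, item.col2));
  END LOOP;
  RETURN;
END;
/
-- fetch (older Oracle releases: SELECT * FROM TABLE(DoStuffPiped) ...)
SELECT * FROM DoStuffPiped() WHERE col1 LIKE 'col132%';
```

Note that in both variants the WHERE is still applied to the function's output; the pipelined form only avoids materializing the full collection before rows start flowing.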

Related

Adding constraints to list items in SQL Server database

I have a table holding items for a given list id in my MS SQL Server database (2008 R2).
I would like to add constraints so that no two list ids have the same item list. Below illustrates my schema.
ListID , ItemID
1 a
1 b
2 a
3 a
3 b
In the above example, ListID 3 should fail. I guess you can't put a constraint/check within the database itself (triggers, check constraints) and the logic can only be enforced from the frontend?
Thanks in advance for any help.
Create a function that performs the logic you want and then create a check constraint or index that leverages that function.
Here is a functional example; the final insert fails. The function is evaluated row by row, so if you need to insert as a set and evaluate afterwards, you'd need an "instead of" trigger:
CREATE TABLE dbo.Test(ListID INT, ItemID CHAR(1))
GO
CREATE FUNCTION dbo.TestConstraintPassed(@ListID INT, @ItemID CHAR(1))
RETURNS TINYINT
AS
BEGIN
DECLARE @retVal TINYINT = 0;
DECLARE @data TABLE (ListID INT, ItemID CHAR(1),[Match] INT)
INSERT INTO @data(ListID,ItemID,[Match]) SELECT ListID,ItemID,-1 AS [Match] FROM dbo.Test
UPDATE @data
SET [Match]=1
WHERE ItemID IN (SELECT ItemID FROM @data WHERE ListID=@ListID)
DECLARE @MatchCount INT
SELECT @MatchCount=SUM([Match]) FROM @data WHERE ListID=@ListID
IF NOT EXISTS(
SELECT *
FROM (
SELECT ListID,SUM([Match]) AS [MatchCount]
FROM @data
WHERE ListID<>@ListID
GROUP BY ListID
) dat
WHERE @MatchCount=[MatchCount]
)
BEGIN
SET @retVal=1;
END
RETURN @retVal;
END
GO
ALTER TABLE dbo.Test
ADD CONSTRAINT chkTest
CHECK (dbo.TestConstraintPassed(ListID, ItemID) = 1);
GO
INSERT INTO dbo.Test(ListID,ItemID) SELECT 1,'a'
INSERT INTO dbo.Test(ListID,ItemID) SELECT 1,'b'
INSERT INTO dbo.Test(ListID,ItemID) SELECT 2,'a'
INSERT INTO dbo.Test(ListID,ItemID) SELECT 2,'b'
Related

How to store the result of an select statement into a variable in sql server stored procedure

I have a condition like this:
IF @aaa = 'high'
set @bbb = select * from table1
else
set @bbb = select * from table2
I am going to use this variable (@bbb) throughout my stored procedure.
Is it possible to save a table into a variable?
I tried using a temporary table but I am not able to assign it twice.
IF @aaa = 'high'
set @bbb = select * into #temp from table1
else
set @bbb = select * into #temp from table2
It shows #temp is already declared.
No, it does not work like that. You can declare a table variable and insert into it.
DECLARE @bbbTable TABLE(
Id int NOT NULL,
SampleColumn varchar(50) NOT NULL
);
insert into @bbbTable (Id,SampleColumn)
select Id,SampleColumn from table1
If table1 and table2 are completely different tables, you should declare two different table variables:
DECLARE @bbbTable TABLE(
Id int NOT NULL,
SampleColumn varchar(50) NOT NULL
);
DECLARE @aaaTable TABLE(
Id int NOT NULL,
SampleColumn varchar(50) NOT NULL
);
IF @aaa = 'high'
insert into @bbbTable (Id,SampleColumn)
select Id,SampleColumn from table1
else
insert into @aaaTable (Id,SampleColumn)
select Id,SampleColumn from table2
You can't store more than one value in a scalar variable.
You can use a table variable to achieve this:
DECLARE @TableResult AS TABLE (Column1 INT, Column2 INT)
IF @aaa = 'high'
BEGIN
INSERT INTO @TableResult (Column1,Column2)
SELECT Column1FromTable, Column2FromTable
FROM table1
END
ELSE
BEGIN
INSERT INTO @TableResult (Column1,Column2)
SELECT Column1FromTable, Column2FromTable
FROM table2
END
Of course you can declare more than 2 columns.
You can store only one column/row value in a scalar variable, so you can't say *.
Suppose I want to store the value of Column1 from TableA in a variable; I can use this:
SELECT @MyVariable = Column1 FROM TableA
But I can't say
SELECT @MyVariable = * FROM TableA
even if there is only one column in TableA.
Also, if the SELECT returns more than one record, the variable ends up with the value from the last row processed (without an ORDER BY, which row that is is undefined).
If what you need is to store entire rows, you can use either a temporary table or a table variable.
Temporary Table
SELECT * INTO #Temp FROM TableA
Table Variable
DECLARE @MyVarTable TABLE
(
Column1 VARCHAR(50),
Column2 VARCHAR(50)
)
INSERT INTO @MyVarTable
(
Column1 ,
Column2
)
SELECT
Column1 ,
Column2
From MyTable
This temporary table and table variable can be accessed the same way you access a normal table, using SELECT/UPDATE/DELETE queries, except:
Temporary tables are created per session and automatically dropped when the session ends or the query window is closed
Table variables exist only for the batch in which they are declared, so you must declare the table variable in the same batch as the query that uses it
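A quick sketch of that scoping difference (the names #scratch and @scratch are illustrative):

```sql
-- Temp table: lives for the whole session, across batches
SELECT 1 AS n INTO #scratch
GO
SELECT n FROM #scratch   -- still accessible in a later batch of the same session
GO
-- Table variable: scoped to a single batch
DECLARE @scratch TABLE (n int)
INSERT INTO @scratch VALUES (1)
SELECT n FROM @scratch   -- must run in the same batch as the DECLARE
GO
SELECT n FROM @scratch   -- fails: @scratch no longer exists in this batch
```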

Multiple Row Param String to Single Stored Procedure

I have a stored procedure that mimics the MYSQL 'UPSERT' command. ie. insert if new / update existing if record exists.
I wish to keep the number of calls to SQL Server to an absolute minimum ie. 1
So, can I pass a param string to a stored procedure (SP_MAIN) and, in this stored procedure, call my 'UPSERT' stored procedure for every unique table row that is passed as a param to SP_MAIN?
If so, can anyone illustrate with a simple example please..?
Thank you in advance.
You can use the MERGE statement. See the sample below: the table to be updated is dbo.[Table]. We use a table-valued parameter to pass the data to update/insert. The MERGE statement is within a stored procedure.
CREATE TABLE dbo.[Table]
(
PrimaryKey INT IDENTITY (1, 1) NOT NULL
,Column1 INT NOT NULL
,Column2 INT NOT NULL
)
GO
CREATE TYPE dbo.[TableTVP] AS TABLE (
PrimaryKey INT NULL
,Column1 INT NULL
,Column2 INT NULL
)
GO
CREATE PROCEDURE dbo.CRUD_Table
@TableTVP dbo.TableTVP READONLY
AS
SET NOCOUNT ON
DECLARE @OutPut TABLE (Action VARCHAR(10) NULL,EntityKey INT NULL)
MERGE dbo.[Table] AS TARGET
USING (SELECT
PrimaryKey
,Column1
,Column2
,BINARY_CHECKSUM (Column1, Column2) as DataCheckSum
FROM
@TableTVP) AS SOURCE ON SOURCE.PrimaryKey = TARGET.PrimaryKey
WHEN MATCHED AND SOURCE.DataCheckSum <> BINARY_CHECKSUM (TARGET.Column1, TARGET.Column2) THEN
UPDATE SET
Column1 = SOURCE.Column1
,Column2 = SOURCE.Column2
WHEN NOT MATCHED THEN
INSERT (
Column1
,Column2
)
VALUES (
SOURCE.Column1
,SOURCE.Column2
)
OUTPUT $action as [Action]
,CASE WHEN $action IN ('INSERT', 'UPDATE') THEN Inserted.PrimaryKey ELSE Deleted.PrimaryKey END as [EntityKey] INTO @OutPut;
SELECT Action,EntityKey FROM @OutPut
GO
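Calling the procedure then looks something like the following (the sample key values are illustrative; a PrimaryKey of NULL never matches, so MERGE inserts that row):

```sql
DECLARE @rows dbo.TableTVP
INSERT INTO @rows (PrimaryKey, Column1, Column2)
VALUES (NULL, 10, 20),  -- no matching key: MERGE inserts this row
       (1,    11, 21)   -- existing key 1: MERGE updates it if the checksum differs
EXEC dbo.CRUD_Table @TableTVP = @rows
```

This keeps the round trips to one: the client sends the whole batch in the TVP and the MERGE decides per row whether to insert or update.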

SQL Server Scalar variable in Inline table valued function

I have a multi-statement table-valued function which I would like to change to an inline table-valued function for optimization purposes (I don't really know if that will be an optimization, but I want to try it anyway). My problem is that I have a scalar variable which I don't know how to put in my WITH statement.
Code example:
CREATE FUNCTION [dbo].[function]()
RETURNS
@return_table table (id INT,value NVARCHAR(500))
AS
BEGIN
DECLARE @tmp_table TABLE(id INT, value VARCHAR(500))
DECLARE @variable BIGINT
INSERT INTO @tmp_table [...insert code...]
SET @variable = (SELECT MAX(id) FROM @tmp_table)
INSERT INTO @return_table SELECT id,value FROM @tmp_table WHERE id = @variable
RETURN
END
This code is an example, the actual function is more complex but the problem is exactly the same
I could easily change this to a single WITH statement like this:
CREATE FUNCTION [dbo].[function]()
RETURNS TABLE
AS
RETURN
(
WITH tmp_table AS (
SELECT [...Select Code...]
)
SELECT id,value FROM tmp_table
WHERE id = [variable]
);
GO
My problem lies into the [variable] which I don't know how to put into the query. Also, the variable is used more than once in my function so I'd rather not just replace it with the query.
I also tried this approach:
CREATE FUNCTION [dbo].[function]()
RETURNS TABLE
AS
RETURN
(
WITH tmp_table AS (
SELECT [...Select Code...]
), variable AS (SELECT MAX(id) value FROM tmp_table)
SELECT id,value FROM tmp_table
WHERE id = (SELECT TOP 1 value FROM variable)
);
GO
But it seems like it made the function way slower.
Thank you.
Just try
WITH tmp_table AS (
SELECT [...Select Code...]
)
SELECT id,value FROM tmp_table WHERE id = (SELECT MAX(id) FROM tmp_table)
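Folded back into an inline table-valued function, that suggestion might look like the sketch below (dbo.fn_MaxIdRows and dbo.SomeSource are placeholder names standing in for the real function and the elided select code):

```sql
CREATE FUNCTION dbo.fn_MaxIdRows()
RETURNS TABLE
AS
RETURN
(
    WITH tmp_table AS (
        SELECT id, value FROM dbo.SomeSource  -- placeholder for [...Select Code...]
    )
    SELECT id, value
    FROM tmp_table
    WHERE id = (SELECT MAX(id) FROM tmp_table)  -- the "variable" becomes a subquery
);
GO
```

The CTE can be referenced multiple times within the one statement, so each place the old scalar variable was used can become a subquery against tmp_table.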
I would actually just change it to
SELECT TOP 1 *
FROM [whatever]
ORDER BY id DESC

Do Inserted Records Always Receive Contiguous Identity Values

Consider the following SQL:
CREATE TABLE Foo
(
ID int IDENTITY(1,1),
Data nvarchar(max)
)
INSERT INTO Foo (Data)
SELECT TOP 1000 Data
FROM SomeOtherTable
WHERE SomeColumn = @SomeParameter
DECLARE @LastID int
SET @LastID = SCOPE_IDENTITY()
I would like to know if I can depend on the 1000 rows that I inserted into table Foo having contiguous identity values. In other words, if this SQL block produces a @LastID of 2000, can I know for certain that the ID of the first record I inserted was 1001? I am mainly curious about multiple statements inserting records into table Foo concurrently.
I know that I could add a serializable transaction around my insert statement to ensure the behavior that I want, but do I really need to? I'm worried that introducing a serializable transaction will degrade performance, but if SQL Server won't allow other statements to insert into table Foo while this statement is running, then I don't have to worry about it.
I disagree with the accepted answer. This can easily be tested and disproved by running the following.
Setup
USE tempdb
CREATE TABLE Foo
(
ID int IDENTITY(1,1),
Data nvarchar(max)
)
Connection 1
USE tempdb
SET NOCOUNT ON
WHILE NOT EXISTS(SELECT * FROM master..sysprocesses WHERE context_info = CAST('stop' AS VARBINARY(128) ))
BEGIN
INSERT INTO Foo (Data)
VALUES ('blah')
END
Connection 2
USE tempdb
SET NOCOUNT ON
SET CONTEXT_INFO 0x
DECLARE @Output TABLE(ID INT)
WHILE 1 = 1
BEGIN
/*Clear out table variable from previous loop*/
DELETE FROM @Output
/*Insert 1000 records*/
INSERT INTO Foo (Data)
OUTPUT inserted.ID INTO @Output
SELECT TOP 1000 NEWID()
FROM sys.all_columns
IF EXISTS(SELECT * FROM @Output HAVING MAX(ID) - MIN(ID) <> 999 )
BEGIN
/*Set Context Info so other connection inserting
a single record in a loop terminates itself*/
DECLARE @stop VARBINARY(128)
SET @stop = CAST('stop' AS VARBINARY(128))
SET CONTEXT_INFO @stop
/*Return results for inspection*/
SELECT ID, DENSE_RANK() OVER (ORDER BY Grp) AS ContigSection
FROM
(SELECT ID, ID - ROW_NUMBER() OVER (ORDER BY [ID]) AS Grp
FROM @Output) O
ORDER BY ID
RETURN
END
END
Yes, they will be contiguous because the INSERT is atomic: complete success or full rollback. It is also performed as a single unit of work: you won't get any "interleaving" with other processes.
However (or to put your mind at rest!), consider the OUTPUT clause
DECLARE @KeyStore TABLE (ID int NOT NULL)
INSERT INTO Foo (Data)
OUTPUT INSERTED.ID INTO @KeyStore (ID) --this line
SELECT TOP 1000 Data
FROM SomeOtherTable
WHERE SomeColumn = @SomeParameter
If you want the Identity values for multiple rows use OUTPUT:
DECLARE @NewIDs table (PKColumn int)
INSERT INTO Foo (Data)
OUTPUT INSERTED.ID
INTO @NewIDs (PKColumn)
SELECT TOP 1000 Data
FROM SomeOtherTable
WHERE SomeColumn = @SomeParameter
You now have the entire set of values in the @NewIDs table variable. You can add any columns from the Foo table into @NewIDs and output those columns as well.
It is not good practice to attach any sort of meaning whatsoever to identity values. You should assume that they are nothing more than integers guaranteed to be unique within the scope of your table.
Try adding the following:
option(maxdop 1)
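Applied to the insert from the question (names as in the original), the hint goes at the end of the statement; it forces a serial plan, which is presumably what this suggestion is aiming at:

```sql
INSERT INTO Foo (Data)
SELECT TOP 1000 Data
FROM SomeOtherTable
WHERE SomeColumn = @SomeParameter
OPTION (MAXDOP 1)  -- force a serial plan for this statement only
```

Note this only constrains the plan for this one statement; it does not by itself guarantee contiguous identity values under concurrent inserts.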
