SQL query taking a long time to execute - sql-server

USE Pooja
GO
----Create TestTable
CREATE TABLE TestTable(RtJobCode VARCHAR(20), RtProfCode smallint,RtTestCode smallint,ProfCode smallint,TestCode smallint)
----INSERT INTO TestTable using SELECT
INSERT INTO TestTable (RtJobCode, RtProfCode,RtTestCode,ProfCode,TestCode)
SELECT RtJobCode,RtTestCode,TestCode,RtProfCode,ProfCode
FROM dbo.ResultTest,dbo.Test,dbo.Profiles
WHERE RtTestCode=ANY(Select TestCode from dbo.Test)
----Verify that Data in TestTable
SELECT *
FROM TestTable
GO
The above code tries to pull entries out of three tables: ResultTest, Profiles and Test.
The problem was that while building a cube I encountered data that was not consistent across all the tables.
I tried a join on the tables, but as the tables contain a huge number of columns that wasn't feasible, so I wrote this code instead, which just keeps on executing without stopping and never displays any data.
ResultTest's RtTestCode is a foreign key referencing Test's TestCode.

Your query is very slow because it is producing a Cartesian product between ResultTest, Test and Profiles. You need to provide join conditions to link the tables together.
SELECT RtJobCode
, RtTestCode
, TestCode
, RtProfCode
, ProfCode
FROM dbo.ResultTest r
JOIN dbo.Test t
ON r.RtTestCode = t.TestCode
JOIN dbo.Profiles p
ON r.RtProfCode = p.ProfCode
I speculate that this is the query you are looking for. Note the condition that links ResultTest and Test together and the condition that links ResultTest and Profiles together.
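Since the original problem was finding data that is not consistent across the tables, a LEFT JOIN variant may also help. This is only a sketch, assuming the same tables and columns, that returns the ResultTest rows with no match in Test or Profiles:
SELECT r.RtJobCode
, r.RtTestCode
, r.RtProfCode
FROM dbo.ResultTest r
LEFT JOIN dbo.Test t
ON r.RtTestCode = t.TestCode
LEFT JOIN dbo.Profiles p
ON r.RtProfCode = p.ProfCode
WHERE t.TestCode IS NULL -- no matching test
OR p.ProfCode IS NULL -- no matching profile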

USE Pooja
GO
----Create TestTable
CREATE TABLE TestTable(
RtJobCode VARCHAR(20),
RtProfCode smallint,
RtTestCode smallint,
RtCenCode smallint,
LabNo int,
ProfCode smallint,
ProfRate money,
ProfName varchar(100),
TestCode smallint,
TestRate money,
TestName varchar(100),
TestCategory varchar(50),
Cost money)
----INSERT INTO TestTable using SELECT
INSERT INTO TestTable (RtJobCode, RtProfCode,RtTestCode,RtCenCode,LabNo,ProfCode,ProfRate,ProfName,TestCode,TestRate,TestName,TestCategory,Cost)
SELECT RtJobCode
, RtProfCode
, RtTestCode
, RtCenCode
, LabNo
, ProfCode
, ProfRate
, ProfName
, TestCode
, TestRate
, TestName
, TestCategory
, Cost
FROM dbo.ResultTest
JOIN dbo.Test
ON ResultTest.RtTestCode = Test.TestCode
JOIN dbo.Profiles
ON ResultTest.RtProfCode = Profiles.ProfCode

Related

TSQL - subquery inside Begin End

Consider the following query:
begin
;with
t1 as (
select top(10) x from tableX
),
t2 as (
select * from t1
),
t3 as (
select * from t1
)
-- --------------------------
select *
from t2
join t3 on t3.x=t2.x
end
go
I was wondering: is t1 evaluated twice, hence tableX being read twice (meaning t1 acts like a view over the table)?
Or just once, with its rows saved in t1 for the whole query (like a variable in a programming language)?
I'm just trying to figure out how the T-SQL engine optimises this. This is important to know, because if t1 has millions of rows and is evaluated many times in the same query, generating the same result each time, there should be a better way to do it.
Just create the table:
CREATE TABLE tableX
(
x int PRIMARY KEY
);
INSERT INTO tableX
VALUES (1)
,(2)
Turn on execution-plan generation and execute the query; the plan will show that tableX is scanned twice.
So, yes, the table is queried two times. If you are using complex common table expressions and you are working with huge amounts of data, I would advise storing the result in a temporary table.
Sometimes I get very bad execution plans for complex CTEs which were working nicely in the past. Also, you are allowed to define indexes on temporary tables and improve performance further.
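As a minimal sketch of that advice applied to the query above (the temp table name and the index are my own choices, not from the original post):
SELECT TOP(10) x
INTO #t1 -- materialize the result once
FROM tableX;
CREATE CLUSTERED INDEX IX_t1_x ON #t1 (x); -- optional index to help the join
SELECT t2.x AS x2, t3.x AS x3
FROM #t1 t2
JOIN #t1 t3 ON t3.x = t2.x; -- tableX itself is now read only once
DROP TABLE #t1;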
To be honest, there is no general answer... The only answer is: race your horses (Eric Lippert).
The way you write your query does not determine how the engine will execute it. That depends on many, many influences...
You tell the engine what you want to get, and the engine decides how to get it.
This may even differ between identical calls, depending on statistics, currently running queries, existing cached results, etc.
Just as a hint, try this:
USE master;
GO
CREATE DATABASE testDB;
GO
USE testDB;
GO
--I create a physical test table with 1,000,000 rows
CREATE TABLE testTbl(ID INT IDENTITY PRIMARY KEY, SomeValue VARCHAR(100));
WITH MioRows(Nr) AS (SELECT TOP 1000000 ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) FROM master..spt_values v1 CROSS JOIN master..spt_values v2 CROSS JOIN master..spt_values v3)
INSERT INTO testTbl(SomeValue)
SELECT CONCAT('Test',Nr)
FROM MioRows;
--Now we can start to test this
GO
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
DECLARE @dt DATETIME2 = SYSUTCDATETIME();
--Your approach with CTEs
;with t1 as (select * from testTbl)
,t2 as (select * from t1)
,t3 as (select * from t1)
select t2.ID AS t2_ID,t2.SomeValue AS t2_SomeValue,t3.ID AS t3_ID,t3.SomeValue AS t3_SomeValue INTO target1
from t2
join t3 on t3.ID=t2.ID;
SELECT 'Final CTE',DATEDIFF(MILLISECOND,@dt,SYSUTCDATETIME());
GO
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
DECLARE @dt DATETIME2 = SYSUTCDATETIME();
--Writing the intermediate result into a physical table
SELECT * INTO test1 FROM testTbl;
SELECT 'Write into test1',DATEDIFF(MILLISECOND,@dt,SYSUTCDATETIME());
select t2.ID AS t2_ID,t2.SomeValue AS t2_SomeValue,t3.ID AS t3_ID,t3.SomeValue AS t3_SomeValue INTO target2
from test1 t2
join test1 t3 on t3.ID=t2.ID
SELECT 'Final physical table',DATEDIFF(MILLISECOND,@dt,SYSUTCDATETIME());
GO
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
DECLARE @dt DATETIME2 = SYSUTCDATETIME();
--Same as before, but with a primary key on the intermediate table
SELECT * INTO test2 FROM testTbl;
SELECT 'Write into test2',DATEDIFF(MILLISECOND,@dt,SYSUTCDATETIME());
ALTER TABLE test2 ADD PRIMARY KEY (ID);
SELECT 'Add PK',DATEDIFF(MILLISECOND,@dt,SYSUTCDATETIME());
select t2.ID AS t2_ID,t2.SomeValue AS t2_SomeValue,t3.ID AS t3_ID,t3.SomeValue AS t3_SomeValue INTO target3
from test2 t2
join test2 t3 on t3.ID=t2.ID
SELECT 'Final physical table with PK',DATEDIFF(MILLISECOND,@dt,SYSUTCDATETIME());
--Clean up (Careful with real data!!!)
GO
USE master;
GO
--DROP DATABASE testDB;
GO
On my system the first takes 674 ms, the second 1,205 ms (297 ms of that for writing into test1), and the third 1,727 ms (285 ms for writing into test2 and ~650 ms for creating the index).
Although the query is performed twice, the engine can take advantage of cached results.
Conclusion
The engine is really smart... Don't try to be smarter...
If the table covered a lot more columns and much more data per row, the whole test might return something else...
If your CTEs (sub-queries) involve much more complex data with joins, views, functions and so on, the engine might have trouble finding the best approach.
If performance matters, you can race your horses to test it out. One hint: I have sometimes used the query hint FORCE ORDER quite successfully. It performs the joins in the order specified in the query.
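As a sketch, applied to the physical-table test above, the hint is simply appended to the query:
SELECT t2.ID, t3.SomeValue
FROM test1 t2
JOIN test1 t3 ON t3.ID = t2.ID
OPTION (FORCE ORDER); -- perform the joins in the order written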
Here is a simple example to test the theories:
First, via a table variable, which evaluates the expression only once.
declare @r1 table (id int, v uniqueidentifier);
insert into @r1
SELECT * FROM
(
select id=1, NewId() as 'v' union
select id=2, NewId()
) t
-- -----------
begin
;with
t1 as (
select * from @r1
),
t2 as (
select * from t1
),
t3 as (
select * from t1
)
-- ----------------
select * from t2
union all select * from t3
end
go
On the other hand, if we put the expression inside t1 instead of in the table variable, it gets evaluated twice:
t1 as (
select id=1, NewId() as 'v' union
select id=2, NewId()
)
Hence, my conclusion is to use a temporary table and not rely on cached results.
Also, I implemented this on a large-scale query that evaluated the "matter" only twice, and after moving it to a temporary table the execution time was cut straight in half!

Splitting multiple fields by delimiter

I have to write an SP that can perform partial updates on our databases; the changes are stored in records of the PU table. A Values field contains all the values, delimited by a fixed delimiter. A Table field refers to a Schemes table containing the column names for each table, stored in a similar fashion in a Columns field.
Now for my SP I need to split the Values field and the Columns field into a temp table with column/value pairs; this happens for each record in the PU table.
An example:
Our PU table looks something like this:
CREATE TABLE [dbo].[PU](
[Table] [nvarchar](50) NOT NULL,
[Values] [nvarchar](max) NOT NULL
)
Insert SQL for this example:
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Person','John Doe;26');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Person','Jane Doe;22');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Person','Mike Johnson;20');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Person','Mary Jane;24');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Course','Mathematics');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Course','English');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Course','Geography');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Campus','Campus A;Schools Road 1;Educationville');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Campus','Campus B;Schools Road 31;Educationville');
INSERT INTO [dbo].[PU]([Table],[Values]) VALUES ('Campus','Campus C;Schools Road 22;Educationville');
And we have a Schemes table similar to this:
CREATE TABLE [dbo].[Schemes](
[Table] [nvarchar](50) NOT NULL,
[Columns] [nvarchar](max) NOT NULL
)
Insert SQL for this example:
INSERT INTO [dbo].[Schemes]([Table],[Columns]) VALUES ('Person','[Name];[Age]');
INSERT INTO [dbo].[Schemes]([Table],[Columns]) VALUES ('Course','[Name]');
INSERT INTO [dbo].[Schemes]([Table],[Columns]) VALUES ('Campus','[Name];[Address];[City]');
As a result, the first record of the PU table should produce a temp table like:
Column | Value
[Name] | John Doe
[Age] | 26
The 5th will have:
Column | Value
[Name] | Mathematics
Finally, the 8th PU record should result in:
Column | Value
[Name] | Campus A
[Address] | Schools Road 1
[City] | Educationville
You get the idea.
I tried using the following query to create the temp tables, but alas, it fails when there's more than one value in the PU record:
DECLARE @Fields TABLE
(
[Column] INT,
[Value] VARCHAR(MAX)
)
INSERT INTO @Fields
SELECT TOP 1
(SELECT Value FROM STRING_SPLIT([dbo].[Schemes].[Columns], ';')),
(SELECT Value FROM STRING_SPLIT([dbo].[PU].[Values], ';'))
FROM [dbo].[PU] INNER JOIN [dbo].[Schemes] ON [dbo].[PU].[Table] = [dbo].[Schemes].[Table]
TOP 1 correctly gets the first PU record as each PU record is removed once processed.
The error is:
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
In the case of a Person record, the splits are indeed returning 2 values/columns at a time; I just want to store the values in 2 records instead of getting an error.
Any help on rewriting the above query?
Also, do note that the data is just generic nonsense. The point of the question is being able to have 2 fields that both hold delimited values, always equal in number (e.g. a 'Person' in the PU table will always have 2 delimited values in the field), and break them up into several column/value rows.
UPDATE: Working implementation
Based on the (accepted) answer of Sean Lange, I was able to work out the following implementation to overcome the issue:
As I need to reuse it, the combine-column/value functionality is performed by a new function, declared as such:
CREATE FUNCTION [dbo].[JoinDelimitedColumnValue]
(@splitValues VARCHAR(8000), @splitColumns VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
WITH MyValues AS
(
SELECT ColumnPosition = x.ItemNumber,
ColumnValue = x.Item
FROM dbo.DelimitedSplit8K(@splitValues, @pDelimiter) x
)
, ColumnData AS
(
SELECT ColumnPosition = x.ItemNumber,
ColumnName = x.Item
FROM dbo.DelimitedSplit8K(@splitColumns, @pDelimiter) x
)
SELECT cd.ColumnName,
v.ColumnValue
FROM MyValues v
JOIN ColumnData cd ON cd.ColumnPosition = v.ColumnPosition
;
In case of the above sample data, I'd call this function with the following SQL:
DECLARE @FieldValues VARCHAR(8000), @FieldColumns VARCHAR(8000)
SELECT TOP 1 @FieldValues=[dbo].[PU].[Values], @FieldColumns=[dbo].[Schemes].[Columns] FROM [dbo].[PU] INNER JOIN [dbo].[Schemes] ON [dbo].[PU].[Table] = [dbo].[Schemes].[Table]
INSERT INTO @Fields
SELECT [Column] = x.[ColumnName],[Value] = x.[ColumnValue] FROM [dbo].[JoinDelimitedColumnValue](@FieldValues, @FieldColumns, @Delimiter) x
This data structure makes this way more complicated than it should be. You can leverage the splitter from Jeff Moden here: http://www.sqlservercentral.com/articles/Tally+Table/72993/. The main difference between that splitter and all the others is that it returns the ordinal position of each element. Why all the other splitters don't do this is beyond me; for things like this it is needed. You have two sets of delimited data and you must ensure that they are both reassembled in the correct order.
The biggest issue I see is that you don't have anything in your main table to act as an anchor for ordering the results correctly. You need something, even an identity column, to ensure the output rows stay "together". To accomplish this, I just added an identity to the PU table.
alter table PU add RowOrder int identity not null
Now that we have an anchor, this is still a little cumbersome for what should be a simple query, but it is achievable.
Something like this will now work.
with MyValues as
(
select p.[Table]
, ColumnPosition = x.ItemNumber
, ColumnValue = x.Item
, RowOrder
from PU p
cross apply dbo.DelimitedSplit8K(p.[Values], ';') x
)
, ColumnData as
(
select ColumnName = replace(replace(x.Item, ']', ''), '[', '')
, ColumnPosition = x.ItemNumber
, s.[Table]
from Schemes s
cross apply dbo.DelimitedSplit8K(s.Columns, ';') x
)
select cd.[Table]
, v.ColumnValue
, cd.ColumnName
from MyValues v
join ColumnData cd on cd.[Table] = v.[Table]
and cd.ColumnPosition = v.ColumnPosition
order by v.RowOrder
, v.ColumnPosition
I recommend not storing values like this in the first place. I recommend having a key value in the tables, preferably not using Table and Columns as a composite key, and avoiding reserved words. I also don't know what version of SQL Server you are using; I am going to assume a fairly recent version of Microsoft SQL Server that will support the stored procedure I provide.
Here is an overview of the solution:
1) You need to convert both the PU and the Schemes table into a form where each "column" value in the list of columns is isolated in its own row. If you can store the data in this format rather than the provided format, you will be a little better off.
What I mean is
Table|Columns
Person|Jane Doe;22
needs to be converted to
Table|Column|OrderInList
Person|Jane Doe|1
Person|22|2
There are multiple ways to do this, but I prefer an XML trick that I picked up. You can find multiple split-string examples online, so I will not focus on that; use whatever gives you the best performance. Unfortunately, you might not be able to get away from a table-valued function here.
Update:
Thanks to Shnugo's performance-enhancement comment, I have updated my XML splitter to give you the row number, which reduces some of my code. I do the exact same thing to the Schemes list.
2) Since the new Schemes table and the new PU table now record the order in which each column appears, the PU table and the Schemes table can be joined on the "Table" value and the OrderInList.
CREATE FUNCTION [dbo].[fnSplitStrings_XML]
(
@List NVARCHAR(MAX),
@Delimiter VARCHAR(255)
)
RETURNS TABLE
AS
RETURN
(
SELECT y.i.value('(./text())[1]', 'nvarchar(4000)') AS Item,ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) as RowNumber
FROM
(
SELECT CONVERT(XML, '<i>'
+ REPLACE(@List, @Delimiter, '</i><i>')
+ '</i>').query('.') AS x
) AS a CROSS APPLY x.nodes('i') AS y(i)
);
GO
CREATE Procedure uspGetColumnValues
as
Begin
--Split each value in PU
select p.[Table],p.[Values],a.[Item],CHARINDEX(a.Item,p.[Values]) as LocationInStringForSorting,a.RowNumber
into #PuWithOrder
from PU p
cross apply [fnSplitStrings_XML](p.[Values],';') a --use whatever string split function is working best for you (performance wise)
--Split each value in Schema
select s.[Table],s.[Columns],a.[Item],CHARINDEX(a.Item,s.[Columns]) as LocationInStringForSorting,a.RowNumber
into #SchemaWithOrder
from Schemes s
cross apply [fnSplitStrings_XML](s.[Columns],';') a --use whatever string split function is working best for you (performance wise)
DECLARE @Fields TABLE --If this is an ETL process, maybe make this a permanent table with an auto-incrementing Id and reference this table in all steps after this.
(
[Table] NVARCHAR(50),
[Columns] NVARCHAR(MAX),
[Column] VARCHAR(MAX),
[Value] VARCHAR(MAX),
OrderInList int
)
INSERT INTO @Fields([Table],[Columns],[Column],[Value],OrderInList)
Select pu.[Table],pu.[Values] as [Columns],s.Item as [Column],pu.Item as [Value],pu.RowNumber
from #PuWithOrder pu
join #SchemaWithOrder s on pu.[Table]=s.[Table] and pu.RowNumber=s.RowNumber
Select [Table],[Columns],[Column],[Value],OrderInList
from @Fields
order by [Table],[Columns],OrderInList
END
GO
EXEC uspGetColumnValues
GO
Update:
Since your working implementation is a table-valued function, I have another recommendation. The problem I see is that a table-valued function used this way ultimately handles one record at a time. You are going to get better performance with set-based operations, batching as needed. With a table-valued function, you are likely going to be looping through each row. If this is some sort of ETL process, your team will be better off if you have a stored procedure that processes the rows in bulk. It might make sense to stage the results into a better table that your team can work with downstream rather than have them use a potentially slow table-valued function.
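As a sketch of that idea, the function from the working implementation above can be applied to all rows at once with CROSS APPLY instead of row by row; the staging table dbo.FieldStage here is hypothetical:
INSERT INTO dbo.FieldStage ([Table], [Column], [Value]) -- hypothetical staging table
SELECT pu.[Table], x.ColumnName, x.ColumnValue
FROM dbo.PU pu
JOIN dbo.Schemes s ON s.[Table] = pu.[Table]
CROSS APPLY dbo.JoinDelimitedColumnValue(pu.[Values], s.[Columns], ';') x;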

Multiple SQL running two queries then querying the result for one result

I have one table with user records spread over multiple rows. I'm trying to find out who changed their PIN and who did not.
I imported the table into Access, where I can produce the result I'm looking for in 3 queries. I want to combine these 3 queries into one on SQL Server.
Query 1:
SELECT [dbo].[USER].USERNAME
FROM [dbo].[USER]
WHERE [dbo].[USER].SYSTEMNAME NOT LIKE 'DOMAIN/$SPARE'
AND [dbo].[USER].TYPE LIKE 'voice'
Query 2:
SELECT [dbo].[USER].[USERNAME], [dbo].[USER].[KEYNAME], [dbo].[USER].[STRINGVALUE]
FROM [dbo].[USER]
WHERE [dbo].[USER].TYPE = 'PIN_UPDATED'
The third query, which returns the final result:
SELECT
[query1].[USERNAME], [query2].[USERNAME],
[query2].[KEYNAME], [query2].[STRINGVALUE]
FROM
[query1]
LEFT JOIN
[query2] ON [query1].USERNAME = [query2].USERNAME
Using UNION, INNER JOIN and others, I get various errors and no results.
I'd be interested to know what your queries looked like and what errors you got, but here is one potential way, using CTEs:
;WITH Query1 AS
(
SELECT [USERNAME]
FROM [dbo].[USER]
WHERE [dbo].[USER].SYSTEMNAME NOT LIKE 'DOMAIN/$SPARE'
AND [dbo].[USER].[TYPE] LIKE 'voice'
)
,Query2 AS
(
SELECT [USERNAME] ,
[KEYNAME] ,
[STRINGVALUE]
FROM [dbo].[USER]
WHERE [dbo].[USER].[TYPE] = 'PIN_UPDATED'
)
SELECT Q1.[USERNAME] ,
Q2.[USERNAME] ,
Q2.[KEYNAME] ,
Q2.[STRINGVALUE]
FROM [Query1] AS Q1
LEFT JOIN [query2] AS Q2 ON Q1.USERNAME = Q2.USERNAME;
Alternatively, a single self-join gives the same result without the CTEs:
SELECT u1.USERNAME,u2.[KEYNAME], u2.[STRINGVALUE]
FROM [dbo].[USER] AS u1
LEFT JOIN [dbo].[USER] AS u2 ON u1.USERNAME=u2.USERNAME AND u2.TYPE Like 'PIN_UPDATED'
WHERE u1.SYSTEMNAME Not Like 'DOMAIN/$SPARE' AND u1.TYPE Like 'voice'

I would like to know on how to convert Oracle triggers into SQL Server triggers

As I understand it, SQL Server triggers do not support FOR EACH ROW. I am also aware that you have to use the inserted and deleted tables. Other than that, I have no clue how to write SQL Server triggers; they look so different. Can someone help, please?
Below is the code for Oracle Triggers
create or replace TRIGGER Ten_Percent_Discount
BEFORE INSERT OR UPDATE ON Bookings
FOR EACH ROW
DECLARE CURSOR C_Passengers IS
SELECT StatusName
FROM Passengers
WHERE PassengerNumber = :NEW.Passengers_PassengerNumber;
l_status_name Passengers.StatusName%TYPE;
BEGIN
OPEN C_Passengers;
FETCH C_Passengers INTO l_status_name;
CLOSE C_Passengers;
IF l_status_name = 'Regular'
THEN
:New.TotalCost := 0.90 * :New.TotalCost;
END IF;
END;
Below is what I have written so far. I know I am using the inserted tables wrong:
create TRIGGER Ten_Percent_Discount
ON Customer
FOR INSERT ,UPDATE
AS
DECLARE C_Passengers CURSOR FOR
SELECT StatusLevel
FROM Customer
WHERE CustomerID = inserted.CustomerID
Thanks for all the help in advance.
Table structure for customer
Table structure for Order
The answer below is only for reference purposes; you can use it to build gradually towards the final solution:
create table dbo.customer
(
customerid varchar(10),
firstname nvarchar(50),
statuslevel varchar(50)
)
go
create table dbo.customerorder
(
orderid varchar(10),
totalprice numeric(5,2),
productid varchar(10),
customerid varchar(10)
)
go
create trigger dbo.tr_customer on dbo.customer for insert,update
as
begin
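-- Note: this trigger fires on every INSERT/UPDATE of dbo.customer, so the 10%
-- discount is re-applied to all of the customer's orders each time a 'Standard'
-- customer row is touched; a real implementation should guard against that.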
update co
set co.totalprice = .9*co.totalprice
from dbo.customerorder co
inner join inserted i
on co.customerid = i.customerid
where i.statuslevel = 'Standard'
end
go
--test for above code
insert into dbo.customer values (1,'jayesh','')
insert into dbo.customerorder values (1,500.25,1,1)
insert into dbo.customerorder values (1,600.25,2,1)
select * from dbo.customer
select * from dbo.customerorder
update dbo.customer set statuslevel = 'Standard' where customerid = 1
select * from dbo.customer
select * from dbo.customerorder
But what I am pretty sure of is that when a customer is created for the first time, there will not be any orders to apply discounts on, so you will certainly need an UPDATE trigger as well.
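For reference, a more direct translation of the Oracle trigger onto the Bookings table might look like the sketch below. It assumes the Bookings and Passengers tables from the question plus a BookingId key column (my assumption); since SQL Server has no BEFORE trigger, the row is adjusted right after it is inserted:
create trigger Ten_Percent_Discount
on Bookings
after insert
as
begin
set nocount on;
update b
set b.TotalCost = 0.90 * b.TotalCost
from Bookings b
inner join inserted i
on b.BookingId = i.BookingId -- assumed key column
inner join Passengers p
on p.PassengerNumber = i.Passengers_PassengerNumber
where p.StatusName = 'Regular';
end
go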

SELECT INTO a table variable in T-SQL

Got a complex SELECT query, from which I would like to insert all rows into a table variable, but T-SQL doesn't allow it.
Along the same lines, you cannot use a table variable with SELECT INTO or INSERT EXEC queries.
http://odetocode.com/Articles/365.aspx
Short example:
declare @userData TABLE(
name varchar(30) NOT NULL,
oldlocation varchar(30) NOT NULL
)
SELECT name, location
INTO @userData
FROM myTable
INNER JOIN otherTable ON ...
WHERE age > 30
The data in the table variable would later be used to insert/update it back into different tables (mostly a copy of the same data with minor updates). The goal of this is simply to make the script a bit more readable and more easily customisable than doing the SELECT INTO directly into the right tables.
Performance is not an issue, as the rowcount is fairly small and it's only manually run when needed.
...or just tell me if I'm doing it all wrong.
Try something like this:
DECLARE @userData TABLE(
name varchar(30) NOT NULL,
oldlocation varchar(30) NOT NULL
);
INSERT INTO @userData (name, oldlocation)
SELECT name, location FROM myTable
INNER JOIN otherTable ON ...
WHERE age > 30;
The purpose of SELECT INTO is (per the docs, my emphasis)
To create a new table from values in another table
But you already have a target table! So what you want is
The INSERT statement adds one or more new rows to a table
You can specify the data values in the
following ways:
...
By using a SELECT subquery to specify
the data values for one or more rows,
such as:
INSERT INTO MyTable
(PriKey, Description)
SELECT ForeignKey, Description
FROM SomeView
And in this syntax, it's allowed for MyTable to be a table variable.
You can also use common table expressions to store temporary datasets. They are more elegant and ad hoc friendly:
WITH userData (name, oldlocation)
AS
(
SELECT name, location
FROM myTable INNER JOIN
otherTable ON ...
WHERE age>30
)
SELECT *
FROM userData -- you can also reuse the recordset in subqueries and joins
You could try using temporary tables...if you are not doing it from an application. (It may be ok to run this manually)
SELECT name, location INTO #userData FROM myTable
INNER JOIN otherTable ON ...
WHERE age>30
You skip the effort of declaring the table that way...
Helps for ad hoc queries... This creates a local temp table which won't be visible to other sessions unless you are in the same session. That may be a problem if you are running the query from an app.
If you require it to run from an app, use a table variable declared this way:
DECLARE @userData TABLE(
name varchar(30) NOT NULL,
oldlocation varchar(30) NOT NULL
);
INSERT INTO @userData
SELECT name, location FROM myTable
INNER JOIN otherTable ON ...
WHERE age > 30;
Edit: as many of you mentioned, I updated the visibility wording from "connection" to "session". Creating temp tables is not an option for web applications, as sessions can be reused; stick to table variables in those cases.
Try to use INSERT instead of SELECT INTO:
DECLARE @UserData TABLE(
name varchar(30) NOT NULL,
oldlocation varchar(30) NOT NULL
)
INSERT @UserData
SELECT name, location FROM myTable;
First create a temp table :
Step 1:
create table #tblOm_Temp (
Name varchar(100),
Age Int ,
RollNumber bigint
)
Step 2: Insert some values into the temp table.
insert into #tblom_temp values('Om Pandey',102,1347)
Step 3: Declare a table variable to hold the temp table data.
declare @tblOm_Variable table(
Name Varchar(100),
Age int,
RollNumber bigint
)
Step 4: Select the values from the temp table and insert them into the table variable.
insert into @tblOm_Variable select * from #tblom_temp
Finally, the values are inserted from the temp table into the table variable.
Step 5: Check the inserted values in the table variable.
select * from @tblOm_Variable
OK, now with enough effort I am able to insert into the table variable using the below:
INSERT @TempWithheldTable SELECT
a.SuspendedReason,
a.SuspendedNotes,
a.SuspendedBy ,
a.ReasonCode FROM OPENROWSET( BULK 'C:\DataBases\WithHeld.csv', FORMATFILE =
N'C:\DataBases\Format.txt',
ERRORFILE=N'C:\Temp\MovieLensRatings.txt'
) AS a;
The main thing here is selecting the columns to insert.
One reason to use SELECT INTO is that it allows you to use IDENTITY:
SELECT IDENTITY(INT,1,1) AS Id, name
INTO #MyTable
FROM (SELECT name FROM AnotherTable) AS t
This would not work with a table variable, which is too bad...
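A possible workaround, as a sketch: declare the IDENTITY column directly in the table variable definition and let it number the rows on insert:
DECLARE @MyTable TABLE (Id INT IDENTITY(1,1), name varchar(30));
INSERT INTO @MyTable (name)
SELECT name FROM AnotherTable;
SELECT Id, name FROM @MyTable;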
