How to use INSERT SELECT? - sql-server

I have a table's structure:
[Subjects]:
id int Identity Specification yes
Deleted bit
[Juridical]:
id int
Name varchar
typeid int
[Individual]:
id int
Name varchar
Juridical and Individual are child classes of the Subjects class, which means that corresponding rows in the Individual and Subjects tables share the same id.
Now I have a table:
[MyTable]:
typeid varchar
Name varchar
I want to select data from this table and insert it into my table structure, but I don't know how. I tried to use OUTPUT:
INSERT INTO [Individual](Name)
OUTPUT false
INTO [Subjects].[Deleted]
SELECT [MyTable].[Name] as Name
FROM [MyTable]
WHERE [MyTable].[type] = 'Indv'
But the syntax is not correct.

Just use:
INSERT INTO Individual(Name)
SELECT [MyTable].[Name] as Name
FROM [MyTable]
WHERE [MyTable].[type] = 'Indv'
and
INSERT INTO Subjects(Deleted)
SELECT 0 -- Deleted is a bit column, so insert 0 (false) rather than the Name
FROM [MyTable]
WHERE [MyTable].[type] = 'Indv'
You can't insert into two tables with a single query; you need two separate statements for that. For that reason I split your initial query into two INSERT statements, one adding records to your Individual table and one to your Subjects table.
As #marc_s said, the number of columns returned by your SELECT statement must match the number of columns you are inserting into.
Other than these two constraints, which are both syntactic, you are free to do any filtering in the SELECT part or add any complex logic, just as you would in a normal SELECT query.

You need to use this syntax:
INSERT INTO [Individual] (Name)
SELECT [MyTable].[Name]
FROM [MyTable]
WHERE [MyTable].[type] = 'Indv'
You define the list of columns to insert into on the INSERT INTO line, and then you must have a SELECT that returns exactly that many columns (and the column types need to match, too).
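None of the INSERTs above tie the generated Subjects.id values to the matching Individual rows. A minimal sketch of one way to do that, assuming Subjects.id is the IDENTITY column and Individual.id is a plain int that must hold the same value (the tgt/src aliases and the @NewSubjects table variable are illustrative, not from the question):
-- Sketch: MERGE ... OUTPUT can reference source columns, so each new Subjects.id
-- can be mapped back to the Name it was generated for.
DECLARE @NewSubjects TABLE (SubjectId int, Name varchar(100)); -- the length is an assumption

MERGE INTO [Subjects] AS tgt
USING (SELECT [Name] FROM [MyTable] WHERE [type] = 'Indv') AS src
   ON 1 = 0                              -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (Deleted) VALUES (0)          -- Deleted is a bit, 0 = false
OUTPUT inserted.id, src.Name INTO @NewSubjects (SubjectId, Name);

INSERT INTO [Individual] (id, Name)
SELECT SubjectId, Name
FROM @NewSubjects;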

Related

How to append data from one table to another table in Snowflake

I have a table of all employees (employees_all) and have created a new table (employees_new) with the same structure, which I would like to append to the original table to include the new employees.
I was looking for the right command to use and found that INSERT lets me add data as in the following example:
create table t1 (v varchar);
insert into t1 (v) values
('three'),
('four');
But how do I append data coming from another table without specifying the fields (both tables have the same structure and hundreds of columns)?
With additional research, I found this specific way to insert data from another table:
insert into employees_all
select * from employees_new;
This script lets you append all rows from a table into another one without specifying the fields.
Hope it helps!
Your INSERT with a SELECT statement is the simplest answer, but just for fun, here are some extra options that provide different kinds of flexibility.
You can generate the desired results in a select query using
SELECT * FROM employees_all
UNION ALL
SELECT * FROM employees_new;
This allows you to have a few more options with how you use this data downstream.
--use a view to preview the results without impacting the table
CREATE VIEW employees_all_preview
AS
SELECT * FROM employees_all
UNION ALL
SELECT * FROM employees_new;
--recreate the table using a sort,
-- generally not super common, but could help with clustering in some cases when the table
-- is very large and isn't updated very frequently.
INSERT OVERWRITE INTO employees_all
SELECT * FROM (
SELECT * FROM employees_all
UNION ALL
SELECT * FROM employees_new
) e ORDER BY name;
Lastly, you can also do a MERGE to give you some extra options. In this example, if your new table might have records that already match an existing record, then instead of inserting them and creating duplicates you can run an update for those records:
MERGE INTO employees_all a
USING employees_new n ON a.employee_id = n.employee_id
WHEN MATCHED THEN UPDATE SET attrib1 = n.attrib1, attrib2 = n.attrib2
WHEN NOT MATCHED THEN INSERT (employee_id, name, attrib1, attrib2)
VALUES (n.employee_id, n.name, n.attrib1, n.attrib2)

Unique entries over two tables in database

I have a problem where I need to check that two columns, one in each of two tables in a database, are unique.
We have a database with barcode columns called uid and rid.
Table 1: T1.uid
And
Table 2: T2.rid
No barcode may appear in both table columns.
How can we ensure that?
If an insertion of a barcode into T1.uid matches an entry in
T2.rid, we want to throw an error.
The tables have been cleaned up and are in a consistent state, where the entries in
T1.uid and T2.rid are unique across both table columns.
It is not possible to insert NULL values into the respective uid and rid columns (T1.uid and T2.rid).
It is not possible to create a new table for all barcodes,
because we don't have full control of the database server.
EDIT 19-02-2015
This solution cannot work for us, because we cannot make a new table
to keep track of the unique names (see table illustration).
We want a constraint over two columns in different tables without changing the schema.
Per the illustration, we want to make it impossible for John to exist in
T2 because he already exists in table T1. So an error must be "thrown"
when we try to insert John into T2.Name.
The reason is that we have different suppliers that insert into these tables
in different ways; if we change the schema layout, all suppliers would
need to change their database queries. The total work is just too much
if we force every supplier to make changes.
So we need something unobtrusive that doesn't require the suppliers to change
their code.
An example would be that T1.Name is unique and does not accept NULL values.
If we try to insert an existing name, like "Alan", an exception will occur
because the column only allows unique values.
But we want to check for uniqueness in T2.Name at the same time:
the newly inserted value should be unique across the two tables.
Maybe something like this:
SELECT uid FROM Table1
WHERE EXISTS (
    SELECT rid FROM Table2
    WHERE Table1.uid = rid )
This will show all rows from Table1 where their column uid has an equivalent in column rid of Table2.
The condition before the insertion happens could look like below. @Id is the id you need to insert the data for.
DECLARE @allowed INT;
SELECT @allowed = COUNT(*)
FROM
(
    SELECT T1.uid FROM T1 WHERE T1.uid = @Id
    UNION ALL
    SELECT T2.rid FROM T2 WHERE T2.rid = @Id
) AS existing -- the derived table needs an alias
WHERE
    @Id IS NOT NULL;
IF @allowed = 0
BEGIN
    ---- insert allowed
    SELECT 0;
END
Thanks to all who answered.
I have solved the problem. A trigger has been added to the database:
every time an insert or update is executed, we catch it and
check that the value(s) to be inserted don't already exist in the columns of the two
tables. If that check is successful, we execute the original query.
Otherwise we roll back the query.
http://www.codeproject.com/Articles/25600/Triggers-SQL-Server
Instead Of Triggers
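A minimal sketch of such an INSTEAD OF INSERT trigger on T1 (the trigger name is illustrative, and a mirror-image trigger would be needed on T2):
-- Sketch: reject the insert if any incoming uid already exists in T2.rid,
-- otherwise perform the original insert.
CREATE TRIGGER trg_T1_CheckBarcode ON T1
INSTEAD OF INSERT
AS
BEGIN
    IF EXISTS (SELECT 1
               FROM inserted i
               JOIN T2 ON T2.rid = i.uid)
    BEGIN
        RAISERROR('Barcode already exists in T2.rid', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END

    INSERT INTO T1 (uid)          -- assumes uid is the only column; add others as needed
    SELECT uid FROM inserted;
END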

SQL Script add records with identity FK

I am trying to create an SQL script to insert a new row and use that row's identity column as an FK when inserting into another table.
This is what I use for a one-to-one relationship:
INSERT INTO userTable(name) VALUES(N'admin')
INSERT INTO adminsTable(userId,permissions) SELECT userId,255 FROM userTable WHERE name=N'admin'
But now I also have a one-to-many relationship, and I asked myself whether I can use fewer SELECT queries than this:
INSERT INTO bonusCodeTypes(name) VALUES(N'1500 pages')
INSERT INTO bonusCodeInstances(codeType,codeNo,isRedeemed) SELECT name,N'123456',0 FROM bonusCodeTypes WHERE name=N'1500 pages'
INSERT INTO bonusCodeInstances(codeType,codeNo,isRedeemed) SELECT name,N'012345',0 FROM bonusCodeTypes WHERE name=N'1500 pages'
I could also use something like this:
INSERT INTO bonusCodeInstances(codeType,codeNo,isRedeemed)
SELECT name,bonusCode,0 FROM bonusCodeTypes CROSS JOIN
(SELECT N'123456' AS bonusCode UNION SELECT N'012345' AS bonusCode) codes
WHERE name=N'1500 pages'
but this is also a very complicated way of inserting all the codes, and I don't know whether it is even faster.
So, is there a possibility to use a variable inside SQL statements? Like
var lastinsertID = INSERT INTO bonusCodeTypes(name) OUTPUT inserted.id VALUES(N'300 pages')
INSERT INTO bonusCodeInstances(codeType,codeNo,isRedeemed) VALUES(lastinsertID,N'123456',0)
OUTPUT can only insert into a table. If you're only inserting a single record, it's much more convenient to use SCOPE_IDENTITY(), which holds the most recently inserted identity value. If you need a range of values, one technique is to OUTPUT all the identity values into a temp table or table variable along with the business keys, and join on that -- but provided the table you are inserting into has an index on those keys (and why shouldn't it), this buys you nothing over simply joining the base table in a transaction, other than lots more I/O.
So, in your example:
INSERT INTO bonusCodeTypes(name) VALUES(N'300 pages');
DECLARE @lastInsertID INT = SCOPE_IDENTITY();
INSERT INTO bonusCodeInstances(codeType,codeNo,isRedeemed) VALUES (@lastInsertID, N'123456',0);
SELECT @lastInsertID AS id; -- if you want to return the value to the client, as OUTPUT implies
Instead of VALUES, you can of course join on a table, provided you need the same @lastInsertID value everywhere.
As to your original question, yes, you can also assign variables from statements -- but not with OUTPUT. However, SELECT TOP(1) @x = something FROM table is perfectly OK.
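For the one-to-many case, a sketch of the OUTPUT-into-a-table-variable technique mentioned above (the @NewTypes name and the VALUES list of codes are illustrative):
-- Sketch: capture the generated identity together with the business key (name),
-- then join against it to insert many child rows.
DECLARE @NewTypes TABLE (id int, name nvarchar(100));

INSERT INTO bonusCodeTypes(name)
OUTPUT inserted.id, inserted.name INTO @NewTypes(id, name)
VALUES (N'1500 pages');

INSERT INTO bonusCodeInstances(codeType, codeNo, isRedeemed)
SELECT t.id, c.codeNo, 0
FROM @NewTypes t
CROSS JOIN (VALUES (N'123456'), (N'012345')) AS c(codeNo)
WHERE t.name = N'1500 pages';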

Insert column values from different records in the same table

I have a table, Table_A with columns Id_A1 and Id_A2. I have another table, Table_B with columns Id and Description_B.
Now, I have with me some pairs of descriptions from Table_B, like ("desc 1","desc 3"), ("desc 1","desc 4"), ("desc 4","desc 2"), etc. And I want to insert the Ids corresponding to these descriptions in columns Id_A1 and Id_A2 of Table_A.
How can I do this in a single insert statement per pair?
Thanks in advance!
I think this stored procedure would do it, or you can just copy out the SQL and replace the parameters with what you're matching on if you prefer...
CREATE PROCEDURE dbo.usp_InsertPairs @Desc1 varchar(255), @Desc2 varchar(255)
AS
BEGIN
    INSERT INTO Table_A (Id_A1, Id_A2)
    SELECT
        (SELECT Id FROM Table_B WHERE Description_B = @Desc1) AS IDA1,
        (SELECT Id FROM Table_B WHERE Description_B = @Desc2) AS IDA2
END
Then call it as exec dbo.usp_InsertPairs 'desc 1','desc 3'
Note that it's a single SQL statement, but there's no way of doing it without multiple SELECTs one way or another.
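If you have many pairs to load at once, a set-based variant is also possible (a sketch; the VALUES list of pairs is illustrative):
-- Sketch: insert several description pairs in one statement by joining Table_B twice.
INSERT INTO Table_A (Id_A1, Id_A2)
SELECT b1.Id, b2.Id
FROM (VALUES ('desc 1', 'desc 3'),
             ('desc 1', 'desc 4'),
             ('desc 4', 'desc 2')) AS p(Desc1, Desc2)
JOIN Table_B b1 ON b1.Description_B = p.Desc1
JOIN Table_B b2 ON b2.Description_B = p.Desc2;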

Table Valued Parameter has slow performance because of table scan

I have an application that passes parameters to a procedure in SQL Server. One of the parameters is a table-valued parameter containing items to include in a WHERE clause.
Because the table-valued parameter has no statistics attached to it, when I join my TVP to a table that has 2 million rows I get a very slow query.
What alternatives do I have?
Again, the goal is to pass certain values to a procedure that will be included in a WHERE clause:
select * from table1 where id in
(select id from @mytvp)
or
select * from table1 t1 join @mytvp
tvp on t1.id = tvp.id
Although it looks like it would need to run the query once for each row in table1, EXISTS often optimizes to be more efficient than a JOIN or an IN. So, try this:
select * from table1 t where exists (select 1 from @mytvp p where t.id=p.id)
Also, be sure that t.id is the same datatype as p.id and that t.id has an index.
You can use a temp table with an index to boost performance (assuming you have more than a couple of records in your @mytvp).
Just before you join the table, you could insert the data from the variable @mytvp into a temp table.
Here's sample code to create a temp table with an index. The primary key and unique constraints determine which columns to index on:
CREATE TABLE #temp_employee_v3
(rowID int not null identity(1,1)
,lname varchar (30) not null
,fname varchar (30) not null
,city varchar (20) not null
,state char (2) not null
,PRIMARY KEY (lname, fname, rowID)
,UNIQUE (state, city, rowID) )
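Applied to the TVP case, a minimal sketch (assuming @mytvp exposes a single id column; the #ids temp table name is illustrative) would be:
-- Sketch: copy the TVP into an indexed temp table, then join against it.
CREATE TABLE #ids (id int NOT NULL PRIMARY KEY);

INSERT INTO #ids (id)
SELECT id FROM @mytvp;

SELECT t1.*
FROM table1 t1
JOIN #ids i ON t1.id = i.id;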
I had the same issue that table-valued parameters were very slow in my context. I came up with a solution that passed the list of values as a comma-separated string to the stored procedure. The procedure then made a PATINDEX(...) > 0 comparison. This was faster by roughly a factor of 6.
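A sketch of what that comparison might look like (the @csv parameter name is illustrative; this is a workaround rather than a general recommendation):
-- Sketch: @csv is a comma-separated list such as '101,205,307'.
-- Wrapping both sides in commas avoids partial matches (e.g. '20' matching '205').
SELECT t1.*
FROM table1 t1
WHERE PATINDEX('%,' + CAST(t1.id AS varchar(20)) + ',%', ',' + @csv + ',') > 0;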
As mentioned here and explained here you can have primary key and unique constraints on the table type. E.g.
CREATE TYPE IdList AS TABLE ( Id UNIQUEIDENTIFIER NOT NULL PRIMARY KEY )
However, check whether it improves performance in your case: these indexes now exist while the TVP is being populated, which might have the opposite effect, depending on whether your input is sorted and/or you use more than one column.
In common with table variables, table-valued parameters have no statistics (see the section "restrictions"); the query optimiser works on the assumption that they contain only one row, which if your parameter contains a lot of rows is likely to result in an inappropriate query plan.
One way to improve your chances of a better plan is to add a statement level recompile; this should enable the optimiser to take the size of the TVP into account when selecting a plan.
select * from table1 t where exists (select 1 from @mytvp p where t.id=p.id) OPTION (RECOMPILE)
(incorporating KM's suggestion)
