insert query in sql from another table with running number - sql-server

I am inserting rows into a table from another table.
I need the Id column to be a running number like the example below. How can I do that?
I have set the Id column as a unique key, so the code below throws an error:
insert into Tbl1 (Id, DislayName, IsEnabled)
select 16000, Names, 0 from Tbl2
The inserted rows should look like this:
16000 | John | false
16001 | Deo | false
16002 | Jake | false
NOTE: auto-increment must not be used, because it is already assigned to another column.

Add the row_number() window function (minus one):
insert into Tbl1 (Id, DislayName, IsEnabled)
select 16000 - 1 + row_number() over (order by Names),
       Names, 0
from Tbl2;
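If the starting value should continue from whatever is already in Tbl1 rather than a hard-coded 16000, a minimal sketch (my own variation, assuming Tbl1.Id is numeric) is:
-- Sketch: start numbering after the current MAX(Id),
-- falling back to 16000 when Tbl1 is still empty.
insert into Tbl1 (Id, DislayName, IsEnabled)
select coalesce(m.MaxId, 15999) + row_number() over (order by t2.Names),
       t2.Names,
       0
from Tbl2 t2
cross join (select max(Id) as MaxId from Tbl1) m;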

Related

Extracting data from a table into another table based on a common value

I have a table which looks somewhat like this:
Table A:
Voter_id | Id
----------------------
null     | DEPT 1f7h
null     | DEPT 3k9n
null     | DEPT 2lp0
null     | DEPT 2f6k
(250,000 rows like this)
Table A has close to 250,000 rows.
I have another table, Table B, which looks like this:
Name_of_variable | Id        | value_of_variable
--------------------------------------------------
Voter_id         | DEPT 1f7h | 12OK9MJL
First_Name       | DEPT adas | Umar
DOB              | DEPT opwe | 20-02-199
Age              | DEPT jqwq | 24
Voter_id         | DEPT 90aa | 189H8MLI
(almost 1 million rows like this)
Table B's Id column has an index.
I want to fill the Voter_id column of Table A from Table B, so that TableA.Voter_id = TableB.value_of_variable where TableB.Name_of_variable is 'Voter_id' and TableA.Id = TableB.Id.
I have used the query below to extract the data, and it works fine on my development database, which has 15,000 records in Table A. I want to know if I can optimize it further, because it may not perform that well on bigger data.
update TableA
set Voter_id = (select value_of_variable
                from TableB
                where Name_of_variable like 'Voter_id'
                  and TableA.Id = TableB.id
                limit 1);
You need to create an index on TableA.Id
CREATE UNIQUE INDEX Id_idx ON TableA (Id);
In case your TableA.Id can contain duplicate entries, please remove UNIQUE
You might also want to play with
CREATE UNIQUE INDEX Id_idx ON TableB (Id) INCLUDE (Name_of_variable);
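If this is SQL Server, another variant worth trying is a filtered covering index that only contains the 'Voter_id' rows of TableB (a sketch; the index name is illustrative):
-- Sketch: index only the 'Voter_id' rows of TableB and cover the lookup,
-- so the UPDATE can be answered entirely from the index.
CREATE INDEX IX_TableB_VoterId
    ON TableB (Id)
    INCLUDE (value_of_variable)
    WHERE Name_of_variable = 'Voter_id';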
I have resolved this question by changing my update query like this:
update TableA
set Voter_id = TableB.value_of_variable
from TableB
where TableA.id = TableB.id
  and TableB.Name_of_variable = 'Voter_id';

Lookup delimited values in a table in sql-server

In a table A I have a column city-id (varchar(30)) with values like '1,2,3' or '2,4'.
The description of each value is stored in another table B, e.g.:
1 Amsterdam
2 The Hague
3 Maastricht
4 Rotterdam
How must I join table A with table B to get the descriptions in one or maybe more rows?
Assuming this is what you meant:
Table A:
id
-------
1
2
3
Table B:
id | Place
-----------
1 | Amsterdam
2 | The Hague
3 | Maastricht
4 | Rotterdam
Keep the id column in both tables as auto-increment, and as the PK.
Then just do a simple inner join.
select * from A inner join B on (A.id = B.id);
The ideal way to deal with such scenarios is to have a normalized table, as Collin suggested. In case that can't be done, here is the way to go about it:
You would need to use a table-valued function to split the comma-separated value. If you are on SQL Server 2016, there is a built-in STRING_SPLIT function; if not, you would need to create one as shown in this link.
create table dbo.sCity(
CityId varchar(30)
);
create table dbo.sCityDescription(
CityId int
,CityDescription varchar(30)
);
insert into dbo.sCity values
('1,2,3')
,('2,4');
insert into dbo.sCityDescription values
(1,'Amsterdam')
,(2,'The Hague')
,(3,'Maastricht')
,(4,'Rotterdam');
select ctds.CityDescription
      ,sst.Value as 'CityId'
from dbo.sCity ct
cross apply dbo.SplitString(ct.CityId, ',') sst
join dbo.sCityDescription ctds
  on sst.Value = ctds.CityId;
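For reference, on SQL Server 2016 or later the same query can use the built-in STRING_SPLIT instead of a user-defined splitter. A sketch (note that STRING_SPLIT returns a column named value, and the pieces come back as strings, so the join casts them):
-- Sketch for SQL Server 2016+: built-in STRING_SPLIT replaces dbo.SplitString.
select ctds.CityDescription
      ,ss.value as CityId
from dbo.sCity ct
cross apply string_split(ct.CityId, ',') ss
join dbo.sCityDescription ctds
  on ctds.CityId = try_cast(ss.value as int);  -- value is returned as a string; cast before comparing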

Max Value with unique values in more than one column

I feel like I'm missing something really obvious here.
Using T-SQL/SQL-Server:
I have unique values in more than one column but want to select the max version based on one particular column.
Dataset:
Example
ID | Name | Version | Code
--------------------------
 1 | Car  |       3 | NULL
 1 | Car  |       2 | 1000
 1 | Car  |       1 | 2000
Target: I want my query to select only the row with the highest Version value. Running a MAX on the Version column pulls all three rows because of the distinct values in the Code column:
SELECT ID
,Name
,MAX(Version)
,Code
FROM Table
GROUP BY ID, Name, Code
The net result is that I get all three entries as per the data set due to the unique values in the Code column, but I only want the top row (Version 3).
Any help would be appreciated.
You need to identify the row with the highest version in one query, and then use an outer query to pull out all the fields for that row. Like so:
SELECT t.ID, t.Name, GRP.Version, t.Code
FROM (
SELECT ID
,Name
,MAX(Version) as Version
FROM Table
GROUP BY ID, Name
) GRP
INNER JOIN Table t on GRP.ID = t.ID and GRP.Name = t.Name and GRP.Version = t.Version
You can also use row_number() for this kind of logic, partitioning by ID and Name so that each group keeps only its highest version, for example like this:
select ID, Name, Version, Code
from (
    select *, row_number() over (partition by ID, Name order by Version desc) as RN
    from Table1
) X
where RN = 1
Example in SQL Fiddle
Add the TOP clause to force the return of a single row. Also add the ORDER BY clause:
SELECT top 1 ID
,Name
,MAX(Version)
,Code
FROM Table
GROUP BY ID, Name, Code
order by max(version) desc
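A compact variant of the same idea, in case the table holds several ID/Name groups and each should keep its highest version (a sketch using the TOP (1) WITH TIES pattern; Table1 is the placeholder table name used above):
-- Sketch: one row per (ID, Name) group, keeping the highest Version in each.
SELECT TOP (1) WITH TIES ID, Name, Version, Code
FROM Table1
ORDER BY ROW_NUMBER() OVER (PARTITION BY ID, Name ORDER BY Version DESC);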

Keep nulls with two IN()

I'm refactoring very old code. Currently, PHP generates a separate select for every value. Say loc contains 1,2 and data contains a,b, it generates
select val from tablename where loc_id=1 and data_id=a;
select val from tablename where loc_id=1 and data_id=b;
select val from tablename where loc_id=2 and data_id=a;
select val from tablename where loc_id=2 and data_id=b;
...etc, which all return either a single value or nothing. That meant I always had n(loc_id)*n(data_id) results, including nulls, which is necessary for subsequent processing. Knowing the order, this was used to generate an HTML table. Both data_id and loc_id can in theory scale up to a couple of thousand values (which is obviously not great in a table, but that's another concern).
+----------+-----------+-----------+
|          | data_id 1 | data_id 2 |
+----------+-----------+-----------+
| loc_id 1 |     -     |  999.99   |
+----------+-----------+-----------+
| loc_id 2 |  888.88   |     -     |
+----------+-----------+-----------+
To speed things up, I was looking at replacing this with a single query:
select val from tablename where loc_id in (1,2) and data_id in (a,b) order by loc_id asc, data_id asc;
to get a result like the one below and iterate over it to build my table.
Rownum VAL
------- --------
1 null
2 999.99
3 777.77
4 null
Unfortunately that approach drops the nulls from the resultset so I end up with
Rownum VAL
------- --------
1 999.99
2 777.77
Note that it is possible that neither data_id nor loc_id has any match, in which case I would still need a null, null row.
So I don't know which value matches which. I could match against the expected loc_id/data_id combination in PHP if I also select loc_id and data_id... but that's getting messy.
I'm still a novice in SQL in general, and this is absolutely the first time I've worked with PostgreSQL, so hopefully this isn't too obvious... As I post this I'm looking at two ways to solve it: ANY in ARRAY[] and joins. I will update if anything new is found.
tl;dr question
How do I do a where loc_id in (1,2) and data_id in (a,b) and keep the nulls so that I always get n(loc)*n(data) results?
You can achieve that in a single query with two steps:
Generate a matrix of all desired rows in the output.
LEFT [OUTER] JOIN to actual rows.
You get at least one row for every cell in your table.
If (loc_id, data_id) is unique, you get exactly one row.
SELECT t.val
FROM (VALUES (1), (2)) AS l(loc_id)
CROSS JOIN (VALUES ('a'), ('b')) AS d(data_id) -- generate total grid of rows
LEFT JOIN tablename t USING (loc_id, data_id) -- attach matching rows (if any)
ORDER BY l.loc_id, d.data_id;
Works for any number of columns with any number of values.
For your simple case:
SELECT t.val
FROM (
VALUES
(1, 'a'), (1, 'b')
, (2, 'a'), (2, 'b')
) AS ld (loc_id, data_id) -- total grid of rows
LEFT JOIN tablename t USING (loc_id, data_id) -- attach matching rows (if any)
ORDER BY ld.loc_id, ld.data_id;
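Since the question mentions looking at ANY in ARRAY[], the same grid can also be generated from arrays with unnest() (a sketch under that assumption; table and column names follow the question):
-- Sketch: build the grid from arrays instead of VALUES lists.
SELECT l.loc_id, d.data_id, t.val
FROM unnest(ARRAY[1, 2])           AS l(loc_id)
CROSS JOIN unnest(ARRAY['a', 'b']) AS d(data_id)
LEFT JOIN tablename t USING (loc_id, data_id)   -- missing combinations keep val = NULL
ORDER BY l.loc_id, d.data_id;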
where (loc_id in (1,2) or loc_id is null)
and (data_id in (a,b) or data_id is null)
Select the fields you use for filtering, so you know where the values came from:
select loc,data,val from tablename where loc in (1,2) and data in (a,b);
You won't get nulls this way either, but it's not a problem anymore. You know which fields are missing, and you know those are nulls.

A trigger to update one column value to equal the pkid of the record

I need to write a trigger that will set the value in column 2 equal to the value in column 1 after a record has been created.
This is what I have so far:
create trigger update_docindex2_to_docid
ON dbo.TABLENAME
after insert
AS BEGIN
set DOCINDEX2 = DOCID
END;
I answered my own question once I sat and thought about it long enough...
This seems way too simple. I'm concerned that I'm going to break something because I don't have a WHERE condition that would identify the correct record. I want this to update docindex2 to the newly created DOCID after a record is created in the database. The docid is the pkid.
Any ideas/suggestions are appreciated....
Are you looking for something like this?
CREATE TABLE Table1 (docid INT IDENTITY PRIMARY KEY, docindex2 INT);
CREATE TRIGGER tg_mytrigger
ON Table1 AFTER INSERT
AS
UPDATE t
SET t.docindex2 = t.docid
FROM Table1 t JOIN INSERTED i
ON t.docid = i.docid;
INSERT INTO Table1 (docindex2) VALUES(0), (0);
Contents of Table after insert
| DOCID | DOCINDEX2 |
---------------------
| 1 | 1 |
| 2 | 2 |
Here is SQLFiddle demo
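For completeness, a hedged alternative (only applicable if docindex2 never needs to hold anything other than the pkid): a computed column mirrors docid without any trigger at all.
-- Sketch: docindex2 as a computed column that always equals the primary key.
-- (Assumes the column never has to be set to a different value later.)
CREATE TABLE Table2 (
    docid     INT IDENTITY PRIMARY KEY,
    docindex2 AS docid PERSISTED
);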
