Join TSQL Tables in same DB - sql-server

I want to do a simple join of two tables in the same DB.
The expected result is:
To get all Node_ID values from the table T_Tree that match the TREE_CATEGORY values from the table T_Documents.
My T_Documents table:
+--------+---------------+---------------------+
| Doc_ID | TREE_CATEGORY | Desc                |
+--------+---------------+---------------------+
| 89893  | 1363          | Test                |
| 89894  | 1364          | with a tab or 4 spa |
+--------+---------------+---------------------+
T_Tree table:
+---------+-------+
| Node_ID | Name  |
+---------+-------+
| 89893   | Hallo |
| 89894   | BB    |
+---------+-------+
Doc_ID is the primary key in the T_Documents table and Tree_Category is the foreign key.
Node_ID is the primary key in the T_Tree table.
SELECT DBName.dbo.T_Tree.NODE_ID
FROM DBName.dbo.T_Documents
inner join TREE_CATEGORY on T_Documents.TREE_CATEGORY = DBName.dbo.T_Tree.NODE_ID
I can't figure out how to do it correctly. Is this even the right approach?

You were close. Try this:
SELECT t2.NODE_ID
FROM DBName.dbo.T_Documents t1
INNER JOIN DBName.dbo.T_Tree t2
ON t1.Doc_ID = t2.NODE_ID
Comments:
I used aliases in the query, which are a sort of shorthand for the table names. Aliases can make a query easier to read because they remove the need to repeat full table names.
You need to specify table names in the JOIN clause, and the columns used for joining in the ON clause.

Your SQL should be:
SELECT t.NODE_ID
FROM DBName.dbo.T_Documents d
inner join DBName.dbo.T_Tree t on d.Doc_ID = t.Node_ID
Remember: you join relations (tables), not fields.
Also, for it to work, you need to have common values on Node_ID and Doc_ID fields. That is, for each value in Doc_ID of T_Documents table there must be an equal value in Node_ID field of T_Tree table.
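Note that the question as written asks to match TREE_CATEGORY against Node_ID rather than Doc_ID; if that is the intended relationship, a sketch of the join under that assumption:
SELECT t.Node_ID
FROM DBName.dbo.T_Documents d
INNER JOIN DBName.dbo.T_Tree t
    ON d.TREE_CATEGORY = t.Node_ID;
With the sample data shown, this returns no rows, since the TREE_CATEGORY values (1363, 1364) do not appear in Node_ID; the Doc_ID join above is what matches the sample data.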

Related

Simplify multiple joins

I have a Claims table with 70 columns, 16 of which contain diagnosis codes. The codes mean nothing on their own, so I need to pull the description for each code from a separate table.
There has to be a simpler way of pulling these code descriptions:
-- This is the claims table
FROM
[database].[schema].[claimtable] AS claim
-- [StagingDB].[schema].[Diagnosis] table where the codes located
-- [ICD10_CODE] column contains the code
LEFT JOIN
[StagingDB].[schema].[Diagnosis] AS diag1 ON claim.[ICDDiag1] = diag1.[ICD10_CODE]
LEFT JOIN
[StagingDB].[schema].[Diagnosis] AS diag2 ON claim.[ICDDiag2] = diag2.[ICD10_CODE]
LEFT JOIN
[StagingDB].[schema].[Diagnosis] AS diag3 ON claim.[ICDDiag3] = diag3.[ICD10_CODE]
-- and so on, up to ....
LEFT JOIN
[StagingDB].[schema].[Diagnosis] AS diag16 ON claim.[ICDDiag16] = diag16.[ICD10_CODE]
-- reported column will be [code_desc]
-- ie:
-- diag1.[code_desc] AS Diagnosis1
-- diag2.[code_desc] AS Diagnosis2
-- diag3.[code_desc] AS Diagnosis3
-- diag4.[code_desc] AS Diagnosis4
-- etc.
I think what you are doing is already correct for the given scenario.
There are a couple of alternatives you could try, comparing their performance against the current query:
i) Unpivot the claim table on those 16 code columns.
ii) Join the unpivoted column with [StagingDB].[schema].[Diagnosis].
Another option is to put the [StagingDB].[schema].[Diagnosis] table into a #temp table instead of touching the large staging table 16 times; a sketch follows below.
Either way, some analysis of the data has to be done to decide which approach fits best.
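A minimal sketch of that #temp table idea, using the names from the question (the temp table name #Diag and the clustered index choice are assumptions):
-- Stage the lookup once so the 16 joins hit a small local copy
-- instead of the large staging table each time.
SELECT [ICD10_CODE], [code_desc]
INTO #Diag
FROM [StagingDB].[schema].[Diagnosis];

CREATE CLUSTERED INDEX IX_Diag_Code ON #Diag ([ICD10_CODE]);

SELECT diag1.[code_desc] AS Diagnosis1,
       diag2.[code_desc] AS Diagnosis2
       -- ... and so on, up to diag16
FROM [database].[schema].[claimtable] AS claim
LEFT JOIN #Diag AS diag1 ON claim.[ICDDiag1] = diag1.[ICD10_CODE]
LEFT JOIN #Diag AS diag2 ON claim.[ICDDiag2] = diag2.[ICD10_CODE];
-- ... remaining joins follow the same pattern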
You can go for UNPIVOT of the claimTable and then join with the Diagnosis table.
TEST SETUP
CREATE TABLE #claimTable (ClaimId INT, Diag1 VARCHAR(10), Diag2 VARCHAR(10));
CREATE TABLE #Diagnosis (code VARCHAR(10), code_Desc VARCHAR(255));
INSERT INTO #claimTable
VALUES (1, 'Fever', 'Cold'), (2, 'Headache', 'toothache');
INSERT INTO #Diagnosis
VALUES ('Fever', 'Fever Desc'), ('cold', 'cold desc'), ('headache', 'headache desc'), ('toothache', 'toothache desc');
Query to Run
;WITH CTE_Claims AS
(
    SELECT ClaimId, DiagnosisNumeral, code
    FROM #claimTable
    UNPIVOT
    (
        code FOR DiagnosisNumeral IN ([Diag1], [Diag2])
    ) AS t
)
SELECT c.ClaimId, c.code, d.code_Desc
FROM CTE_Claims AS c
INNER JOIN #Diagnosis AS d
    ON c.code = d.code;
ResultSet
+---------+-----------+----------------+
| ClaimId | code      | code_Desc      |
+---------+-----------+----------------+
| 1       | Fever     | Fever Desc     |
| 1       | Cold      | cold desc      |
| 2       | Headache  | headache desc  |
| 2       | toothache | toothache desc |
+---------+-----------+----------------+
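If the output must keep the original Diagnosis1..Diagnosis16 column layout, the unpivoted rows can be re-pivoted with conditional aggregation; a sketch built on the same test setup:
;WITH CTE_Claims AS
(
    SELECT ClaimId, DiagnosisNumeral, code
    FROM #claimTable
    UNPIVOT
    (
        code FOR DiagnosisNumeral IN ([Diag1], [Diag2])
    ) AS t
)
SELECT c.ClaimId,
       MAX(CASE WHEN c.DiagnosisNumeral = 'Diag1' THEN d.code_Desc END) AS Diagnosis1,
       MAX(CASE WHEN c.DiagnosisNumeral = 'Diag2' THEN d.code_Desc END) AS Diagnosis2
FROM CTE_Claims AS c
INNER JOIN #Diagnosis AS d
    ON c.code = d.code
GROUP BY c.ClaimId;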

Postgres - join on array values

Say I have a table with schema as follows
id | name | tags |
1 | xyz | [4, 5] |
Where tags is an array of references to ids in another table called tags.
Is it possible to join these tags onto the row? I.e., replacing the id numbers with the values for those rows in the tags table, such as:
id | name | tags |
1 | xyz | [[tag_name, description], [tag_name, description]] |
If not, I wonder if this an issue with the design of the schema?
Example tags table:
create table tags(id int primary key, name text, description text);
insert into tags values
(4, 'tag_name_4', 'tag_description_4'),
(5, 'tag_name_5', 'tag_description_5');
You should unnest the column tags, use its elements to join the table tags, and aggregate the columns of that table. You can aggregate arrays into an array:
select t.id, t.name, array_agg(array[g.name, g.description])
from my_table as t
cross join unnest(tags) as tag
join tags g on g.id = tag
group by t.id;
id | name | array_agg
----+------+-----------------------------------------------------------------
1 | xyz | {{tag_name_4,tag_description_4},{tag_name_5,tag_description_5}}
(1 row)
or strings to array:
select t.id, t.name, array_agg(concat_ws(', ', g.name, g.description))
...
or maybe strings inside a string:
select t.id, t.name, string_agg(concat_ws(', ', g.name, g.description), '; ')
...
or the last but not least, as jsonb:
select t.id, t.name, jsonb_object_agg(g.name, g.description)
from my_table as t
cross join unnest(tags) as tag
join tags g on g.id = tag
group by t.id;
id | name | jsonb_object_agg
----+------+------------------------------------------------------------------------
1 | xyz | {"tag_name_4": "tag_description_4", "tag_name_5": "tag_description_5"}
(1 row)
Live demo: db<>fiddle.
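Note that array_agg makes no ordering guarantee without an ORDER BY; if the tags should keep their original array order, a sketch using WITH ORDINALITY (same tables as above):
select t.id, t.name,
       array_agg(array[g.name, g.description] order by u.ord)
from my_table as t
cross join unnest(t.tags) with ordinality as u(tag, ord)
join tags g on g.id = u.tag
group by t.id;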
Not sure if this is still helpful for anyone, but unnesting the tags is quite a bit slower than letting Postgres work directly from the array. You can rewrite the query as follows; this is generally more performant because g.id = ANY(tags) is a simple pkey index scan without the expansion step:
SELECT t.id, t.name, ARRAY_AGG(ARRAY[g.name, g.description])
FROM my_table AS t
LEFT JOIN tags AS g
ON g.id = ANY(tags)
GROUP BY t.id;
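One caveat with this LEFT JOIN version: a row whose tags match nothing still comes back, but its aggregate contains a {NULL,NULL} element. If those should be dropped, a FILTER clause works (an assumption about the desired output):
SELECT t.id, t.name,
       ARRAY_AGG(ARRAY[g.name, g.description])
           FILTER (WHERE g.id IS NOT NULL) AS tags
FROM my_table AS t
LEFT JOIN tags AS g
    ON g.id = ANY(t.tags)
GROUP BY t.id;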

Adding constant in the results from a multiple SQL Server join

I have four tables to join, and I need to identify which table the data is coming from in a column within the results. I join Tbl1 to Tbl2 using a left outer join, I join Tbl1 to Tbl3 using a left outer join, and the same with Tbl1 to Tbl4. SerialNo is the key field I join all tables by. The results need to indicate which table the data came from. For matching results between tables I want the right table to be identified. For example, in my sample tables I want to show that Tbl2 is in the results for those records where SerialNo is ABC123, DEF987, and HJK321.
Because of how the data will be extracted from the database, I'm unable to initiate a stored procedure, so I'm planning on having a view to pull the data from, unless I can utilize a temp table in the process.
Tbl1
*Hostname* | *SerialNo*
Laptop1    | ABC123
Laptop2    | DEF987
Desktop1   | WER987
Desktop2   | YRT848
Desktop3   | YTT876
Laptop2    | HJK321
Tbl2
*Location* | *SerialNo*
MS         | ABC123
CO         | DEF987
CA         | ZYC342
AZ         | XYZ789
IN         | HJK321
What I'd like to see in the results...
Result1
*Hostname* | *SerialNo* | *Location* | *RecordOrigin*
Laptop1    | ABC123     | MS         | Tbl2
Laptop2    | DEF987     | CO         | Tbl2
Desktop1   | WER987     | NULL       | Tbl1
Desktop2   | YRT848     | NULL       | Tbl1
Desktop3   | YTT876     | NULL       | Tbl1
Laptop2    | HJK321     | IN         | Tbl2
I tried creating an additional table for RecordOrigin information, but I wasn't able to join to the other tables correctly.
I should also note that I'm not able to edit the data in, or change the structure of, the source tables (i.e. Tbl1, Tbl2, etc.).
You can use a case expression to see whether the joined table returned a row or not, like so:
select
t.Hostname
, t.SerialNo
, l.Location
, RecordOrigin = case
when l.SerialNo is not null
then 'Tbl2'
else 'Tbl1'
end
from Tbl1 as t
left join Tbl2 as l
on t.SerialNo = l.SerialNo
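The question mentions four tables; a sketch extending the same pattern (the Tbl3 and Tbl4 joins are assumptions, since their columns aren't shown, and the CASE order decides which table wins when several match):
select
      t.Hostname
    , t.SerialNo
    , l.Location
    , RecordOrigin = case
                         when l.SerialNo  is not null then 'Tbl2'
                         when t3.SerialNo is not null then 'Tbl3'
                         when t4.SerialNo is not null then 'Tbl4'
                         else 'Tbl1'
                     end
from Tbl1 as t
left join Tbl2 as l  on t.SerialNo = l.SerialNo
left join Tbl3 as t3 on t.SerialNo = t3.SerialNo
left join Tbl4 as t4 on t.SerialNo = t4.SerialNo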
I think I might have figured it out by UNIONing the tables and using GROUP BY with MAX to remove duplication.
SELECT Hostname, MAX(SerialNo) AS SerialNo, MAX(Location) AS Location, RecordOrigin
FROM
(
    SELECT Hostname, SerialNo, NULL AS Location, 'Tbl1' AS RecordOrigin
    FROM Tbl1
    UNION
    SELECT NULL AS Hostname, SerialNo, Location, 'Tbl2' AS RecordOrigin
    FROM Tbl2
) AS u
GROUP BY Hostname, RecordOrigin

ON CONFLICT DO UPDATE/DO NOTHING not working on FOREIGN TABLE

The ON CONFLICT DO UPDATE/DO NOTHING feature was introduced in PostgreSQL 9.5.
CREATE SERVER and FOREIGN TABLE support has existed since PostgreSQL 9.2.
When I use ON CONFLICT DO UPDATE on a FOREIGN TABLE it does not work,
but when I run the same query on a normal table it works. The queries are given below.
-- For normal table
INSERT INTO app (app_id, app_name, app_date)
SELECT p.app_id, p.app_name, p.app_date
FROM app p
WHERE p.app_id = 2422
ON CONFLICT (app_id) DO
UPDATE SET app_date = excluded.app_date;
O/P : Query returned successfully: one row affected, 5 msec execution time.
-- For foreign table concept
-- foreign_app is a foreign table and app is a normal table
INSERT INTO foreign_app (app_id, app_name, app_date)
SELECT p.app_id, p.app_name, p.app_date
FROM app p
WHERE p.app_id = 2422
ON CONFLICT (app_id) DO
UPDATE SET app_date = excluded.app_date;
O/P : ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
Can anyone explain why this is happening?
There are no constraints on foreign tables, because PostgreSQL cannot enforce data integrity on the foreign server – that is done by constraints defined on the foreign server.
To achieve what you want to do, you'll have to stick with the “traditional” way of doing this (e.g. this code sample).
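Since the linked code sample is not reproduced here, a minimal sketch of that traditional approach using the question's tables (not atomic: concurrent writers would need locking or retry handling, because the foreign server enforces no uniqueness for us):
-- Update the rows that already exist on the foreign side...
UPDATE foreign_app f
SET app_date = p.app_date
FROM app p
WHERE p.app_id = 2422
  AND f.app_id = p.app_id;

-- ...then insert the ones that don't exist yet.
INSERT INTO foreign_app (app_id, app_name, app_date)
SELECT p.app_id, p.app_name, p.app_date
FROM app p
WHERE p.app_id = 2422
  AND NOT EXISTS (SELECT 1 FROM foreign_app f WHERE f.app_id = p.app_id);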
I know this is an old question, but in some cases there is a way to do it with ROW_NUMBER() OVER (PARTITION BY ...). In my case, my first take was to try ON CONFLICT ... DO UPDATE, but that doesn't work on foreign tables (as stated above; hence my finding this question).
My problem was very specific, in that I had a foreign table (f_zips) to be populated with the best zip code (postal code) information possible. I also had a local table, postcodes, with very good data and another local table, zips, with lower-quality zip code information but much more of it. For every record in postcodes there is a corresponding record in zips, but the postal codes may not match. I wanted f_zips to hold the best data.
I solved this with a union, with a value of ind = 0 as the indicator that a record came from the better data set. A value of ind = 1 indicates lesser-quality data. Then I used row_number() over a partition to get the answer (where get_valid_zip5() is a local function that returns either a five-digit zip code or a null value):
insert into f_zips (recnum, postcode)
select s2.recnum, s2.zip5
from (
    select s1.recnum, s1.zip5, s1.ind,
           row_number() over (partition by recnum order by s1.ind) as rn
    from (
        select recnum, get_valid_zip5(postcode) as zip5, 0 as ind
        from postcodes
        where get_valid_zip5(postcode) is not null
        union
        select recnum, get_valid_zip5(zip9) as zip5, 1 as ind
        from zips
        where get_valid_zip5(zip9) is not null
        order by 1, 3
    ) s1
) s2
where s2.rn = 1;
I haven't run any performance tests, but for me this runs in cron and doesn't directly affect the users.
Verified on more than 900,000 records (SQL formatting omitted for brevity):
/* yes, the preferred data was entered when it existed in both tables */
select t1.recnum, t1.postcode, t2.zip9 from postcodes t1 join zips t2 on t1.recnum = t2.recnum where t1.postcode is not null and t2.zip9 is not null and t2.zip9 not in ('0') and length(t1.postcode)=5 and length(t2.zip9)=5 and t1.postcode <> t2.zip9 order by 1 limit 5;
recnum | postcode | zip9
----------+----------+-------
12022783 | 98409 | 98984
12022965 | 98226 | 98225
12023113 | 98023 | 98003
select * from f_zips where recnum in (12022783, 12022965, 12023113) order by 1;
recnum | postcode
----------+----------
12022783 | 98409
12022965 | 98226
12023113 | 98023
/* yes, entries came from the less-preferred dataset when they didn't exist in the better one */
select t1.recnum, t1.postcode, t2.zip9 from postcodes t1 right join zips t2 on t1.recnum = t2.recnum where t1.postcode is null and t2.zip9 is not null and t2.zip9 not in ('0') and length(t2.zip9)= 5 order by 1 limit 3;
recnum | postcode | zip9
----------+----------+-------
12021451 | | 98370
12022341 | | 98501
12022695 | | 98597
select * from f_zips where recnum in (12021451, 12022341, 12022695) order by 1;
recnum | postcode
----------+----------
12021451 | 98370
12022341 | 98501
12022695 | 98597
/* yes, entries came from the preferred dataset when the less-preferred one had invalid values */
select t1.recnum, t1.postcode, t2.zip9 from postcodes t1 left join zips t2 on t1.recnum = t2.recnum where t1.postcode is not null and t2.zip9 is null order by 1 limit 3;
recnum | postcode | zip9
----------+----------+------
12393585 | 98118 |
12393757 | 98101 |
12393835 | 98101 |
select * from f_zips where recnum in (12393585, 12393757, 12393835) order by 1;
recnum | postcode
----------+----------
12393585 | 98118
12393757 | 98101
12393835 | 98101

Set-based approach to updating multiple tables, rather than a WHILE loop?

Apparently I'm far too used to procedural programming, and I don't know how to handle this with a set-based approach.
I have several temporary tables in SQL Server, each with thousands of records. Some of them have tens of thousands of records each, but they're all part of a record set. I'm basically loading a bunch of xml data that looks like this:
<root>
  <entry>
    <id-number>12345678</id-number>
    <col1>blah</col1>
    <col2>heh</col2>
    <more-information>
      <col1>werr</col1>
      <col2>pop</col2>
      <col3>test</col3>
    </more-information>
    <even-more-information>
      <col1>czxn</col1>
      <col2>asd</col2>
      <col3>yyuy</col3>
      <col4>moat</col4>
    </even-more-information>
    <even-more-information>
      <col1>uioi</col1>
      <col2>qwe</col2>
      <col3>rtyu</col3>
      <col4>poiu</col4>
    </even-more-information>
  </entry>
  <entry>
    <id-number>12345679</id-number>
    <col1>bleh</col1>
    <col2>sup</col2>
    <more-information>
      <col1>rrew</col1>
      <col2>top</col2>
      <col3>nest</col3>
    </more-information>
    <more-information>
      <col1>234k</col1>
      <col2>fftw</col2>
      <col3>west</col3>
    </more-information>
    <even-more-information>
      <col1>asdj</col1>
      <col2>dsa</col2>
      <col3>mnbb</col3>
      <col4>boat</col4>
    </even-more-information>
  </entry>
</root>
Here's a brief display of what the temporary tables look like:
Temporary Table 1 (entry):
+----------+------+------+
| UniqueID | col1 | col2 |
+----------+------+------+
| 732013   | blah | heh  |
| 732014   | bleh | sup  |
+----------+------+------+
Temporary Table 2 (more-information):
+----------+------+------+------+
| UniqueID | col1 | col2 | col3 |
+----------+------+------+------+
| 732013   | werr | pop  | test |
| 732014   | rrew | top  | nest |
| 732014   | 234k | ffw  | west |
+----------+------+------+------+
Temporary Table 3 (even-more-information):
+----------+------+------+------+------+
| UniqueID | col1 | col2 | col3 | col4 |
+----------+------+------+------+------+
| 732013   | czxn | asd  | yyuy | moat |
| 732013   | uioi | qwe  | rtyu | poiu |
| 732014   | asdj | dsa  | mnbb | boat |
+----------+------+------+------+------+
I am loading this data from an XML file, and have found that this is the only way I can tell which information belongs to which record, so every single temporary table has the following inserted at the top:
T.value('../../id-number[1]', 'VARCHAR(8)') UniqueID,
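For reference, a simplified sketch of the kind of shredding query that fragment comes from (the variable name @x, the column sizes, and the exact parent-axis depth are assumptions; the sample data is from above):
DECLARE @x XML = N'<root><entry><id-number>12345678</id-number>
  <more-information><col1>werr</col1><col2>pop</col2><col3>test</col3></more-information>
</entry></root>';

SELECT T.value('../id-number[1]', 'VARCHAR(8)') AS UniqueID, -- the parent entry id-number
       T.value('col1[1]', 'VARCHAR(50)') AS col1,
       T.value('col2[1]', 'VARCHAR(50)') AS col2,
       T.value('col3[1]', 'VARCHAR(50)') AS col3
INTO #TempMoreInfo
FROM @x.nodes('/root/entry/more-information') AS X(T);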
As you can see, each temporary table has a UniqueID assigned to its particular record to indicate that it belongs to the main record. I have a large set of items in the database, and I want to update every single column in each non-temporary table using a set-based approach, but it must be restricted by UniqueID.
In tables other than the first one, there is a Foreign_ID based on the PrimaryKey_ID of the main table, and the UniqueID will not be inserted... it's just to help tell what goes where.
Here's the exact logic that I'm trying to figure out:
If id-number currently exists in the main table, update tables based on the PrimaryKey_ID number of the main table, which is the same exact number in every table's Foreign_ID. The foreign-key'd tables will have a totally different number than the id-number -- they are not the same.
If id-number does not exist, insert the record. I have done this part.
However, I'm currently stuck in the mind-set that I have to set temporary variables, such as @IDNumber and @ForeignID, and then loop through them. Not only am I getting multiple results instead of the correct result, but everyone says WHILE shouldn't be used, especially for such a large volume of data.
How do I update these tables using a set-based approach?
Assuming you already have this XML extracted, you could do something similar to:
UPDATE ent
SET ent.col1 = tmp1.col1,
ent.col2 = tmp1.col2
FROM dbo.[Entry] ent
INNER JOIN #TempEntry tmp1
ON tmp1.UniqueID = ent.UniqueID;
UPDATE mi
SET mi.col1 = tmp2.col1,
mi.col2 = tmp2.col2,
mi.col3 = tmp2.col3
FROM dbo.[MoreInformation] mi
INNER JOIN dbo.[Entry] ent -- mapping of Foreign_ID ->UniqueID
ON ent.PrimaryKey_ID = mi.Foreign_ID
INNER JOIN #TempMoreInfo tmp2
ON tmp2.UniqueID = ent.UniqueID
AND tmp2.SomeOtherField = mi.SomeOtherField; -- need 1 more field
UPDATE emi
SET emi.col1 = tmp3.col1,
    emi.col2 = tmp3.col2,
    emi.col3 = tmp3.col3,
    emi.col4 = tmp3.col4
FROM dbo.[EvenMoreInformation] emi
INNER JOIN dbo.[Entry] ent -- mapping of Foreign_ID -> UniqueID
    ON ent.PrimaryKey_ID = emi.Foreign_ID
INNER JOIN #TempEvenMoreInfo tmp3
    ON tmp3.UniqueID = ent.UniqueID
    AND tmp3.SomeOtherField = emi.SomeOtherField; -- need 1 more field
Now, I should point out that if the goal is truly to
update every single column in each non-temporary table
then there is a conceptual issue for any sub-tables that have multiple records. If there is no record in that table that will remain the same outside of the Foreign_ID field (and I guess the PK of that table?), then how do you know which row is which for the update? Sure, you can find the correct Foreign_ID based on the UniqueID mapping already in the non-temporary Entry table, but there needs to be at least one field that is not an IDENTITY (or UNIQUEIDENTIFIER populated via NEWID or NEWSEQUENTIALID) that will be used to find the exact row.
If it is not possible to find a stable, matching field, then you have no choice but to do a wipe-and-replace method instead.
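A sketch of that wipe-and-replace method for one sub-table, reusing the names above (the column list is an assumption):
BEGIN TRANSACTION;

-- Wipe the child rows belonging to the entries being reloaded...
DELETE emi
FROM dbo.[EvenMoreInformation] emi
INNER JOIN dbo.[Entry] ent
    ON ent.PrimaryKey_ID = emi.Foreign_ID
INNER JOIN #TempEvenMoreInfo tmp3
    ON tmp3.UniqueID = ent.UniqueID;

-- ...and replace them wholesale from the temp table.
INSERT INTO dbo.[EvenMoreInformation] (Foreign_ID, col1, col2, col3, col4)
SELECT ent.PrimaryKey_ID, tmp3.col1, tmp3.col2, tmp3.col3, tmp3.col4
FROM #TempEvenMoreInfo tmp3
INNER JOIN dbo.[Entry] ent
    ON ent.UniqueID = tmp3.UniqueID;

COMMIT TRANSACTION;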
P.S. I used to recommend the MERGE command but have since stopped due to learning of all of the bugs and issues with it. The "nicer" syntax is just not worth the potential problems. For more info, please see Use Caution with SQL Server's MERGE Statement.
You can use MERGE, which does an upsert (update and insert) in a single statement.
First merge the entries into the main table.
For the other tables, you can join with the main table to get the foreign ID mapping:
MERGE Table2 AS Dest
USING ( SELECT t2.*, m.PrimaryKey_ID AS Foreign_ID
        FROM #TempTable2 t2
        JOIN mainTable m
            ON t2.[id-number] = m.[id-number]
      ) AS Source
    ON Dest.Foreign_ID = Source.Foreign_ID
WHEN MATCHED THEN
    UPDATE SET Dest.col1 = Source.col1
WHEN NOT MATCHED THEN
    INSERT (Foreign_ID, col1, col2 /* , ... */)
    VALUES (Source.Foreign_ID, Source.col1, Source.col2 /* , ... */);
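The first step, merging the entries into the main table, might look like this sketch (the temp table name #TempEntry and the col1/col2 columns are assumptions based on the question's entry table):
MERGE mainTable AS Dest
USING #TempEntry AS Source
    ON Dest.[id-number] = Source.[id-number]
WHEN MATCHED THEN
    UPDATE SET Dest.col1 = Source.col1,
               Dest.col2 = Source.col2
WHEN NOT MATCHED THEN
    INSERT ([id-number], col1, col2)
    VALUES (Source.[id-number], Source.col1, Source.col2);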
