How to perform a LEFT JOIN in SOQL - salesforce

I have a query in SQL that I want to convert to SOQL.
I know that a LEFT JOIN is not possible in SOQL, but I don't know how to write this in SOQL.
I want to check Cases without Decision__c. There is a Lookup relation between Case(Id) and Decision__c (Case__c).
In SQL that would be:
SELECT Case.Id
FROM Case
LEFT JOIN Decision__c D
    ON D.Case__c = Case.Id
WHERE D.Case__c IS NULL
I exported all Cases (Case) and all Decisions (Decision__c) to Excel. With a VLOOKUP I connected each Case with its Decision; a lookup error meant there was no linked Decision.
I also loaded the objects into Power Query and performed a left join to merge the two queries. Those with no Decision were easily filtered (null value).
So I got my list of Cases without Decision, but I want to know if I can get this list with a SOQL query, instead of these extra steps.

To put it simply: you literally select the Cases that have no Decision__c. The query should look like this:
SELECT Id FROM Case WHERE Id NOT IN (SELECT Case__c FROM Decision__c)
Although we don't have JOINs in Salesforce, we can use several "subqueries" like this to help filter records.
Refer to the following link:
https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_select.htm
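For completeness, the inverse check (Cases that do have at least one Decision__c) is the same pattern with IN instead of NOT IN; a small sketch, not from the original post:
SELECT Id FROM Case WHERE Id IN (SELECT Case__c FROM Decision__c)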

Related

Merging Legacy Data on Best Key

I am bringing in a field from a legacy system that does not have a Primary Key-Foreign Key relationship with the new table. The data is transactional, and each line has a customer and sales rep.
The legacy field has a many-to-many relationship with customer (but only for some), and it becomes one-to-many when you link customer and sales rep. However, the data is messy and a transaction may not match a sales rep exactly.
It seems that the best way to tackle this problem is to join on customer and sales rep when possible, if there is not a match, then just join on customer.
I was able to do this in Excel by using the following:
=IFERROR(VLOOKUP(Customer_SalesRep_Combo, DataTable, 3, FALSE),VLOOKUP(Customer,Datatable,3,FALSE))
This formula works in Excel, but the spreadsheet is so large that it tends to crash, so I am trying to duplicate it with SQL code.
Note that the legacy system just outputs CSV files, so I uploaded that CSV to the cloud, and now I am using Databricks to convert that into a Spark dataframe, so I can use SQL logic on it.
Initially, my idea was to do a left join using both conditions (which matches 50k of my 80k rows), and a second left join using one condition. I would then bring in the legacy field twice (twice if matched, once if not), and use a CASE statement to only bring in the "soft match" if there was not a hard match. However, due to the many-to-many relationship, I would experience join duplication on the left join. Since I am also bringing in sales data, I cannot have any duplication. However, I could live with some inaccuracy if I could just use the first match and suppress any duplication.
I have seen examples of using case statements in joins, but I do not know how to use that in this case. If I cannot get this to work, I will resort to iterating over the dataframes to match the logic in Scala, but I would prefer a SQL solution.
My code is below. The real version contains more fields, but this is the simplest I could get while retaining the basic logic.
SELECT
    InnerQry.Customer,
    InnerQry.SalesRep,
    InnerQry.Sales,
    CASE
        WHEN InnerQry.LegacyFieldHard IS NULL
            THEN InnerQry.LegacyFieldSoft
        ELSE InnerQry.LegacyFieldHard
    END AS LegacyField
FROM
    (SELECT
        A.Customer,
        A.SalesRep,
        A.Sales,
        B.LegacyFieldHard,
        C.LegacyFieldSoft
    FROM
        DBS AS A
    LEFT JOIN
        LEGACY AS B ON A.Customer = B.Customer AND A.SalesRep = B.SalesRep
    LEFT JOIN
        LEGACY AS C ON A.Customer = C.Customer) AS InnerQry
The main problem here is that you get multiple rows when you map based just on Customer (LEGACY C). To avoid this you can create a row number field and restrict it to 1, provided you don't really care which of that customer's records gets mapped:
SELECT
    A.Customer,
    A.SalesRep,
    A.Sales,
    COALESCE(B.LegacyField, C.LegacyField) AS LegacyField
FROM DBS AS A
LEFT JOIN LEGACY AS B ON A.Customer = B.Customer AND A.SalesRep = B.SalesRep
LEFT JOIN
    (SELECT *,
        ROW_NUMBER() OVER (PARTITION BY Customer ORDER BY SalesRep) AS rownum1
    FROM LEGACY) AS C ON A.Customer = C.Customer AND C.rownum1 = 1
Also, you could use the COALESCE function directly, instead of the CASE statement. It automatically takes the first non-null value, i.e. the C value is used only when B's is NULL. Hope this helps.
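Since the question mentions Databricks, one way to make the query above runnable is to first register the uploaded CSVs as temporary views in Spark SQL; this is only a sketch, and the paths and options are assumptions, not details from the post:
CREATE TEMPORARY VIEW LEGACY
USING csv
OPTIONS (path '/mnt/uploads/legacy.csv', header 'true', inferSchema 'true');
CREATE TEMPORARY VIEW DBS
USING csv
OPTIONS (path '/mnt/uploads/dbs.csv', header 'true', inferSchema 'true');
-- the COALESCE / ROW_NUMBER query above can then be run as-is against these views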

SQL Multiple Table Join - Best Optimization

Hi, I am hoping someone can help my SQL theory out. I have to create a set of reports which use joins from multiple tables. These reports are running far slower than I would like and I am hoping to optimize my SQL, although my knowledge has hit a wall and I can't seem to find anything on Google.
I am hoping someone here can give me some best practice guidance.
Essentially I am trying to filter the result set as it comes back, to reduce the number of rows included in later joins:
Items INNER JOIN BlueItems ON Items.ItemID = BlueItems.ItemID AND BlueItems.shape = 'square'
LEFT JOIN ItemHistory ON Items.ItemID = ItemHistory.ItemsID
LEFT JOIN ItemDates ON Items.ItemID = ItemDates.ItemID
WHERE ItemDates.ManufactureDate BETWEEN '01/01/2017' AND '01/05/2017'
I figure that Inner Joining on Blue items that are squares vastly reduces the data set at this point?
I also understand that the WHERE clause is intelligent enough to reduce the data set at run time? Am I mistaken? Is it returning all the data and then just filtering on that data?
Any guidance on how to speed this kind of query up would be fantastic; indexes and such have already been put in place. Unfortunately the database is actually looked after by someone else and I am simply creating reports based on their database. This limits me to optimizing my queries rather than the data itself.
I guess at this point it's time for me to improve my knowledge of how SQL handles the various ways you can filter data, and to try to understand which actually reduce the dataset used and which simply filter on it. Any guidance would be very appreciated!
You mentioned that the primary keys are all indexed, but this is always the case for primary key fields. The only portion of your current query which would possibly benefit from this is the first join with Items. For the other joins and the WHERE clause, these primary key fields are not being used.
For this particular query, I would suggest the following indices:
ALTER TABLE BlueItems ADD INDEX bi_item_idx (ItemID, shape)
ALTER TABLE ItemHistory ADD INDEX ih_item_idx (ItemID)
ALTER TABLE ItemDates ADD INDEX id_idx (ItemID, ManufactureDate)
For the ItemHistory table, the index ih_item_idx should speed up the join involving the ItemID foreign key. A column by the same name is also involved with the other two joins, and hence is part of the other indices. The reason for the composite indices (i.e. indices involving more than one column) is that we want to cover all the columns which appear in either the join or the WHERE clause.
These comments are not really an answer but too big to put in a comment...
If the dates are being passed in as parameters (I'm guessing they are), then it might be parameter sniffing that is causing the issue. The query may be using a bad plan.
I've seen this a lot, especially when using the BETWEEN operator. A quick thing to try is adding OPTION (RECOMPILE) to the end of your query. This might seem counterintuitive, but just try it: although compiled queries should be faster than recompiling, if a bad plan is being used, it can slow things down A LOT.
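For illustration, assuming the dates really do arrive as parameters, the hint goes at the very end of the statement (the select list and parameter names here are placeholders):
SELECT Items.ItemID
FROM Items
INNER JOIN BlueItems ON Items.ItemID = BlueItems.ItemID AND BlueItems.shape = 'square'
LEFT JOIN ItemHistory ON Items.ItemID = ItemHistory.ItemsID
LEFT JOIN ItemDates ON Items.ItemID = ItemDates.ItemID
WHERE ItemDates.ManufactureDate BETWEEN @StartDate AND @EndDate
OPTION (RECOMPILE)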
Also, if ItemDates is big, try dumping your filtered results to a temp table and joining to that, something like:
SELECT * INTO #id FROM ItemDates i WHERE i.ManufactureDate BETWEEN '01/01/2017' AND '01/05/2017'
Then change your main query to be something like:
SELECT *
FROM Items
JOIN BlueItems ON Items.ItemID = BlueItems.ItemID AND BlueItems.shape = 'square'
JOIN #id i ON Items.ItemID = i.ItemID
LEFT JOIN ItemHistory ON Items.ItemID = ItemHistory.ItemsID
I also changed the JOIN from being a LEFT JOIN to a JOIN (implicitly an inner join) as you are only selecting items which have a match in ItemDates so LEFT joining makes no sense.

Is it more efficient to use the CASE in the original query or in a separate query?

I can't show the real query, but here is an example of the type of thing I'm doing:
select
t1.contract,
t1.state,
t1.status,
t2.product,
case
when t2.product_cost > 100 and t3.status = 'Closed' then 'Success'
when t2.product_cost <= 100 and t3.status = 'Closed' then 'Semi-Success'
else 'Failure'
end as performance,
t3.sales_date
from contract_table as t1
left join product_table as t2 on t1.prodkey = t2.prodkey
left join sales_table as t3 on (t1.client_number = t3.client_number and t1.contract=t3.contract)
where t1.client_number in (1, 2, 5, 8, 10)
The tables involved currently have 27 million records in them and are growing fast.
This query will be put in a view and then used to generate various summaries.
My question is this. Would it be more efficient to join the 3 tables into 1 view that has the detail needed to do the case statements and then run a second query that creates the new variables using the case statements? Or is it more efficient to do what I'm doing here?
Basically, I'm ignorant as to how SQL processes the select statement and accounts for the where statement filtering on the clients from the contract table but not the sales table even though the client_number field is in both.
All other things being equal, the only thing I can see that would change the efficiency one way or the other is whether you have WHERE clause conditions in your outer query. If the outer query performed on the view has WHERE clauses that limit the number of records returned, then it would be more efficient to put the CASE expressions in that outer query. That way the CASE operation is only performed on the records that pass those conditions, rather than on every record that passes the view's conditions, only to have those values thrown away by the outer query.
With views, I tend to keep the data pretty raw, as much as possible, partly for this reason: any query operating on the view, after deciding which rows to use, can then do the necessary operations only on those rows.
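A minimal sketch of that split, reusing the example query above (the view name is made up):
CREATE VIEW contract_sales_raw AS
SELECT
    t1.contract,
    t1.state,
    t1.status,
    t1.client_number,
    t2.product,
    t2.product_cost,
    t3.status AS sales_status,
    t3.sales_date
FROM contract_table AS t1
LEFT JOIN product_table AS t2 ON t1.prodkey = t2.prodkey
LEFT JOIN sales_table AS t3 ON (t1.client_number = t3.client_number AND t1.contract = t3.contract);

-- outer query: filter first, so the CASE only runs on the surviving rows
SELECT
    contract, state, status, product,
    CASE
        WHEN product_cost > 100 AND sales_status = 'Closed' THEN 'Success'
        WHEN product_cost <= 100 AND sales_status = 'Closed' THEN 'Semi-Success'
        ELSE 'Failure'
    END AS performance,
    sales_date
FROM contract_sales_raw
WHERE client_number IN (1, 2, 5, 8, 10);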
As for how SQL accounts for filtering on the clients from the contract table but not the sales table, think through both the WHERE clause and the joins. The WHERE clause says grab only the records where the contract table's client is 1, 2, 5, 8, 10. But the join conditions tell it to only grab the records from sales where that table's client number matches the contract table's client number. If the only records it's grabbing from contract are 1, 2, 5, 8, 10, then the only records from sales that will match that join will be the ones where the client numbers are also 1, 2, 5, 8, 10. Does that make sense?

TSQL Joining three tables where none have a common column

This will probably be pretty simple, as I am very much a novice.
I have two tables I have imported from Excel, and pretty much I need to update an existing table of e-mail addresses based off of the email addresses from the spreadsheet.
My only issue is I cannot join on a common column, as none of the tables share a column.
So I am wondering if, in a Join, I can just put something like
FROM table a
INNER JOIN table b ON b.column 'name' = a.column 'nameplus'
Any help would be appreciated!
A join without matching predicates can effectively be implemented as a cross join: i.e. every row in table A matched with every row in table B.
If you specify an INNER JOIN then you have to have an ON term, which either matches something or it doesn't: in your example you may have a technical match (i.e. b.column really does - perhaps totally coincidentally - match a.column) that makes no business sense.
So you either have:
- a CROSS JOIN: no way of linking the tables, but the result is all possible combinations of rows, or
- an inner join, where you must specify how the rows are to be combined (or a left/right outer join if you want to include all rows from either side, regardless of matched-ness).
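If some expression does relate the two tables (say, the spreadsheet holds a bare name and the e-mail table holds name@domain), the ON clause can be that expression rather than a shared column; a sketch with made-up table and column names, in T-SQL:
SELECT a.person_name, b.email
FROM imported_sheet AS a
INNER JOIN email_table AS b
    ON b.email = a.person_name + '@example.com';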

join versus explicit in condition

Are there valid reasons, in an Oracle DB, to prefer, in a generic query, a filter condition expressed through a join table instead of a filter with an IN condition with a large number of elements (some hundreds)? I mean, you can write something like
SELECT .... FROM t1
WHERE t1. IN (......) with 100-200 items
or whether it is better to change it to
SELECT .... FROM t1
JOIN t2 ON t1. = T2.
where the t2 table contains the values needed for the filter
many thanks
Thanks for the answers.
I'll try to explain the situation and my doubt.
I have a user interface where the user can choose many items in a control (for example, one or more people from a list of professionals).
I can use this list directly by adding it to an IN condition, that is
SELECT .... FROM t1 WHERE t1. IN (p1... p200)
but this solution could raise some problems:
- if a lot of items are selected, the query string can exceed a SQL string limit (I remember Oracle had a limit of 4000 bytes)
- an IN condition with many values may be inefficient
So an alternative solution could be:
1. create a temporary table with the selected items
2. use a join between the temporary table and the main table
Usually filling a temporary table is fast, and my question is whether this second solution is more efficient than the first.
The two queries are not functionally equivalent, so the question is somewhat odd--I can't imagine this comes up very often (if ever).
That said, if you have a table that contains exactly the rows that need to be filtered, a JOIN would be a more natural/standard way to handle it.
Is the idea in the first example to query t2 to get all the values, then add them to a collection and generate an IN clause? If so, I would say this would be a very bad practice.
From what I see, there are two different questions.
a) Using a Static List/table.
If the (100-200) item list is a list of static values, e.g. a list of countries or currencies, I think it would be better to add them to a static/parameter table and change the query to use that table instead. If you need to track a new code/country etc. later, all you need to do is insert a new row in the look-up table.
Also, if there are other queries that use the same conditions (and there usually are), this look up table will promote re-use.
select * from t1 where id in (select id from t2);
and
select * from t1,t2
where t1.id = t2.id
are both equivalent and better than
select * from t1 where
id in ('USD','EUR'..... ); -- 100 to 200 items to track.
b) The choice of Join vs IN:
It really does not matter a lot. The final query that Oracle executes will be a transformed version of your query, which might evaluate to the same thing in both cases.
You should see which of the two queries is easier to read and conveys the intent more clearly.
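For reference, a sketch of the temporary-table variant described in the question (names are illustrative; in Oracle a global temporary table is created once and then filled per session):
CREATE GLOBAL TEMPORARY TABLE selected_items (
    item_id VARCHAR2(30)
) ON COMMIT PRESERVE ROWS;

INSERT INTO selected_items (item_id) VALUES ('p1');
-- ... one INSERT (or a bulk bind) per item picked in the UI ...

SELECT t1.*
FROM t1
JOIN selected_items s ON t1.item_id = s.item_id;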
Useful Link : http://explainextended.com/2009/09/30/in-vs-join-vs-exists-oracle/
