I've come across a scenario where I need to return a complex set of calculated values at a crossover point from "legacy" to current.
To cut a long story short I have something like this ...
with someofit as
(
select id, col1, col2, col3 from table1
)
select someofit.*,
case when id < #lastLegacyId then
(select ... from table2 where something = id) as 'bla'
,(select ... from table2 where something = id) as 'foo'
,(select ... from table2 where something = id) as 'bar'
else
(select ... from table3 where something = id) as 'bla'
,(select ... from table3 where something = id) as 'foo'
,(select ... from table3 where something = id) as 'bar'
end
from someofit
Now here lies the problem ...
I don't want to be constantly doing that case check for each sub selection, but at the same time, when that condition applies, I need all of the selections within the relevant case block.
Is there a smarter way to do this?
If I was in a proper OO language I would use something like this ...
var common = GetCommonStuff();
foreach (object item in common)
{
if(item.id <= lastLegacyId)
{
AppendLegacyValuesTo(item);
}
else
{
AppendCurrentValuesTo(item);
}
}
I did initially try doing 2 complete selections with a union all, but this doesn't work very well due to efficiency / the number of rows to be evaluated.
The sub selections are looking for total row counts where some condition is met other than the id match on either table 2 or 3, but those tables may have millions of rows in them.
The cte is used for 2 reasons ...
Firstly, it pulls only the rows from table 1 I am interested in, so straight away I'm only doing a fraction of the sub selections in each case.
Secondly, it returns the common stuff in a single lookup on table 1.
Any ideas?
EDIT 1 :
Some context to the situation ...
I have a table called "imports" (table 1 above); this represents an import job where we take data from a file (CSV or similar) and pull the records into the db.
I then have a table called "steps"; this represents the processing / cleaning rules we go through, and each record contains a sproc name and a bunch of other stuff about the rule.
There is then a join table, "ImportSteps" (table 2 above - for current data), that represents the rules for a particular import; this contains a "rowsaffected" column and the import id.
So for the current jobs my SQL is quite simple ...
select 123 456
from imports
join importsteps
For the older legacy stuff, however, I have to look through table 3 ... table 3 is the holding table; it contains every record ever imported, each row has an import id, and each row contains key values.
On the new data, rowsaffected on table 2 for import id x where step id is y will return my value.
On the legacy data I have to count the rows in holding where col z = something.
I need data on about 20 imports, and this data is bound to a "datagrid" on my MVC web app (if that makes any difference).
The CTE I use determines, through some parameters, the "current 20 I'm interested in"; those params represent the start and end record (ordered by import id).
My biggest issue is that holding table ... it's massive. Individual jobs have been known to contain 500k+ records on their own, and this table holds years of imported rows, so I need my lookups on that table to be as fast as possible and as few as possible.
EDIT 2:
The actual solution (pseudocode only) ...
-- declare and populate the subset to reduce reads on the big holding table
create table #holding ( ... )
insert into #holding
select .. from holding
select
... common stuff from inner select in "from" below
... bunch of ...
case when id < #legacy then (select getNewValue(id, stepid))
else (select x from #holding where id = ID and ... ) end as 'bla'
from
(
select ROW_NUMBER() over (order by importid desc) as 'RowNum'
, ...
) as I
-- this bit handles the paging
where RowNum >= #StartIndex
and RowNum < #EndIndex
I'm still confident I can clean it up more, but my original query, which looked something like Bill's solution, took about 45 seconds to execute; this one takes about 7.
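Fleshed out a little, the pattern looks roughly like this (sketch only - every table, column and parameter name below is a placeholder rather than the real schema, @ parameters stand in for the # placeholders above, and the branch order mirrors the pseudocode):
declare @lastLegacyId int = 1000, @stepId int = 1, @StartIndex int = 1, @EndIndex int = 21;
-- 1. Pre-filter the big holding table once into a temp table.
create table #holding (ImportId int not null, MatchCount int not null);
insert into #holding (ImportId, MatchCount)
select ImportId, count(*)
from Holding
where SomeCondition = 'x'   -- whatever narrows the holding rows you care about
group by ImportId;
-- 2. Page the imports with ROW_NUMBER(), then branch once per row.
select i.ImportId,
       i.Description,
       case when i.ImportId < @lastLegacyId
            then (select s.RowsAffected from ImportSteps s
                  where s.ImportId = i.ImportId and s.StepId = @stepId)
            else (select h.MatchCount from #holding h
                  where h.ImportId = i.ImportId)
       end as bla
from (
    select row_number() over (order by ImportId desc) as RowNum, *
    from Imports
) as i
where i.RowNum >= @StartIndex
  and i.RowNum < @EndIndex;
drop table #holding;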
I take it the subqueries must return a single scalar value, correct? This point is important because it is what ensures the LEFT JOINs will not multiply the result.
;with someofit as
(
select id, col1, col2, col3 from table1
)
select someofit.*,
bla = coalesce(t2.col1, t3.col1),
foo = coalesce(t2.col2, t3.col2),
bar = coalesce(t2.bar, t3.bar)
from someofit
left join table2 t2 on t2.something=someofit.id and someofit.id < #lastLegacyId
left join table3 t3 on t3.something=someofit.id and someofit.id >= #lastLegacyId
Beware that I have used id >= #lastLegacyId as the complement of the condition, by assuming that id is not nullable. If it is, you need an IsNull there, i.e. someofit.id >= isnull(#lastLegacyId, someofit.id).
Your edit to the question doesn't change the fact that this is an almost literal translation of the O-O syntax.
foreach (object item in common) --> "from someofit"
{
if(item.id <= lastLegacyId) --> the precondition to the t2 join
{
AppendLegacyValuesTo(item); --> putting t2.x as first argument of coalesce
}
else --> sql would normally join to both tables
--> hence we need an explicit complement
--> condition as an "else" clause
{
AppendCurrentValuesTo(item); --> putting t3.x as 2nd argument
--> tbh, the order doesn't matter since t2/t3
--> are mutually exclusive
}
}
function AppendCurrentValuesTo --> the correlation between t2/t3 to someofit.id
Now, if you have actually tried this and it doesn't solve your problem, I'd like to know where it broke.
Assuming you know that there are no conflicting IDs between the two tables, you can do something like this (DB2 syntax, because that's what I know, but it should be similar):
with combined_tables as (
select ... as id, ... as bla, ... as bar, ... as foo from table2
union all
select ... as id, ... as bla, ... as bar, ... as foo from table3
)
select someofit.*, combined_tables.bla, combined_tables.foo, combined_tables.bar
from someofit
join combined_tables on someofit.id = combined_tables.id
If you had cases like overlapping ids, you could handle that within the combined_tables CTE.
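For example (a sketch only, keeping the same placeholder style, reusing someofit from above, and assuming the row from table2 should win whenever both tables contain the same id):
with someofit as (
select id, col1, col2, col3 from table1
),
combined_tables as (
select ... as id, ... as bla, ... as bar, ... as foo, 1 as priority from table2
union all
select ... as id, ... as bla, ... as bar, ... as foo, 2 as priority from table3
),
deduped as (
select ct.*, row_number() over (partition by id order by priority) as rn
from combined_tables ct
)
select someofit.*, deduped.bla, deduped.foo, deduped.bar
from someofit
join deduped on someofit.id = deduped.id and deduped.rn = 1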
I am working on an ETL optimization problem that requires creating a temp table that can be merged with the final table. Currently I have a couple of views that are used to load the final table, and that is taking a lot of time. I took the SQL logic from the view and created a temp table, and noticed that the values in the temp table do not match the values in the final table. To look deeper I ran count(*) on the view a couple of times and noticed that the total row count differs on every run by about 10-15 rows, give or take. The view has 16 columns from 9 tables, which load only once a day, so at the time I run the count(*) the underlying data does not change, but the result of the count from the view does.
This is on a SQL Server 2016 server. I have looked into the view logic and nothing stands out as odd. I have tried doing a count(*) on the tables that load this view, and the counts for those tables do not change. I have also tried creating a 2-column table from the view logic to simplify the problem and tried an EXCEPT command, and that still yields about 20 rows of inconsistent values between the 2-column tables created from the same exact view logic.
Here is a reproduction of the VIEW definition that has the row count inconsistency
USE [PROD]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW Base_View
AS
select
concat(x, y, z)feild1
,*
,ROW_NUMBER() OVER(PARTITION BY a,b ORDER BY some_Date) AS rec_num
,count(a) OVER(PARTITION BY a) AS rec_total
from (
SELECT
case when RESULT='stored value' and e.code is not null then 'x' else '' end x
,case when RESULT='stored value 2' and r.l_id is not null then 'y' else '' end y
,case when RESULT in ('stored value 3','stored value 4') and t.amount is not null then 'z' else '' end z
,case when
CASE WHEN
(m.status = 'stored value 4' OR m.status = 'stored value 5')
AND m.bal < 0
THEN
CASE WHEN DATEDIFF(day,m.due,m.SNAP_DATE) < 0
THEN 0
ELSE DATEDIFF(day,m.due,m.SNAP_DATE)
END
ELSE 0
END=0 AND w.W_ID is null AND m.status<>'stored value 5'
then case
when RESULT in ('stored value 5','stored value 4')
then case when isnull(AMOUNT,0)<>0
then 'abc'
else 'def' end
else 'abc' end
else 'def'
end imp_feild
,result
,es.emp_id
,concat(es.fname,' ',es.lname)task_emp
,concat(e.fname,' ',e.lname)ext_emp
,case when RESULT ='stored value' then t.P_STATUS else null end p_status
,t.CREATE_DATE
,t.l_key
,t.l_id
,m.status
,cast(w.wodate as date)wo_date
,rm.balance refi_balance,rnl.LOAN_key refi_loan,r.effective refi_effective
,case trancode when 'ext' then m.payment else null end ext_amount,e.entered ext_entered,e.effective ext_effective
FROM
(
select t0.*,ROW_NUMBER() OVER(PARTITION BY t0.some_KEY,cast(t0.CREATE_DATE as date),t0.output
ORDER BY t0.some_KEY,cast(t0.CREATE_DATE as date),t0.output ) AS SEQ_NUM
from base_table_1 t0
left join base_table_2 e0
on t0.c_e_key=e0.e_key
where t0.active_rec_ind='Y'
and t0.output in (d,e,f,g)
and (t0.output2 in (j,k)
or ISNULL(e0.some_KEY,'h') in ('u','w'))
) t
join
base_table_3 l
on t.loan_sf_id=l.loan_sf_id
and t.active_rec_ind='Y'
join base_table_4 m
on
t.SOME_DATE=m.SNAP_DATE
and t.L_ID=m.L_ID
left
join base_table_5 es
on t.c_emp_key=es.emp_key
left
join base_table_6 r
on l.l_id=r.l_old_id
and r.entered between dateadd(day,0,cast(t.CREATE_DATE as date)) and dateadd(day,0,t.SOME_DATE)
left
join base_table_7 w
on l.l_id=w.l_id
and w.wodate between cast(t.CREATE_DATE_ETZ as date) and dateadd(day,0,t.SOME_DATE)
left
join base_table_8 wl
on w.l_id=wl.l_id
left
join base_table_8 rnl
on r.l_new_id=rnl.l_id
left
join base_table_8 rol
on r.l_old_id=rol.l_id
left
join base_table_4 rm
on
dateadd(day,-1,r.effective)=rm.SNAP_DATE
and rol.L_ID=rm.L_ID
left
join
(select e0.*,ew.value_1,ew.new_key,ROW_NUMBER() OVER(PARTITION BY e0.L_ID,e0.ENT ORDER BY e0.L_ID,e0.ENT) AS SEQ_NUM
from base_table_9 e0
join base_table_5 ew
on e0.EMP_ID=ew.EMP_ID
where e0.code='a'
) e
on l.sid=e.sid
and e.code='a' and RESULT='stored value 5'
and e.entered between cast(t.CREATE_DATE as date) and dateadd(day,0,t.HOLD_DATE)
AND e.SEQ_NUM=t.SEQ_NUM
and ((isnumeric(e.roll_key)=1 and isnumeric(es.roll_key)=1 and e.roll_key=es.roll_key)
or ((isnumeric(e.roll_key)=0 or isnumeric(es.roll_key)=0) and e.FNAME+e.LNAME=es.FNAME+es.LNAME))
where t.RESULT in ('abc','def')
and cast(t.CREATE_DATE as date) between cast(dateadd(month,-12,getdate()) as date) and cast(getdate() as date)
and (AGENT in ('lmn', 'pqr')
or ISNULL(es.VKEY,'stored value 8') in ('xx','yy','zz'))
)x
where imp_feild='abc'
and concat(x, y, z)<>''
or imp_feild='def'
GO
The expected result is that it should return a consistent row count, and that should hopefully solve the inconsistent values problem on the temp table.
Your query has between cast(dateadd(month,-12,getdate()) as date) and cast(getdate() as date) near the bottom. Of course the result of getdate() will be different with each execution and each call to getdate(). That will affect the result.
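One common workaround (a sketch only, not the original view) is to compute the date window once and pass it in, for example through an inline table-valued function; dbo.Base_View_fn below is a hypothetical, heavily simplified stand-in for the view body:
-- Hypothetical, simplified stand-in for the view: the date window is a parameter,
-- so the boundaries are fixed up front instead of coming from getdate() inside the view.
create function dbo.Base_View_fn (@Start date, @AsOf date)
returns table
as
return
(
    select t.*
    from base_table_1 t
    where cast(t.CREATE_DATE as date) between @Start and @AsOf
);
go
-- Repeated counts now compare against the same boundaries until you choose to refresh them.
declare @AsOf  date = cast(getdate() as date);
declare @Start date = cast(dateadd(month, -12, @AsOf) as date);
select count(*) from dbo.Base_View_fn(@Start, @AsOf);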
BTW, having * in your SELECT list is not a good idea. You should only return the columns needed. It makes the view results vulnerable to changes in the underlying tables.
There are a few other things that wouldn't pass code review where I work but that's kinda OT, I think.
This is too long for a comment. Using * in a view is a very bad idea. Not only does the view NOT update when you change the base table (unless you execute sp_refreshview), you can actually get some very interesting things happening.
Check this out as an example of just how bad this can be.
create table ViewExample (Col1 int, Col2 int)
go
create view ViewExampleView as select * from ViewExample
go
insert ViewExample select 1, 2
go
select * from ViewExampleView --obviously we get our single row with Col1 and Col2
alter table ViewExample add Col3 int --add a new column to the table, surely the view will pick this up?
go
insert ViewExample select 3, 4, 5 --insert a new row with data in all three columns
go
select * from ViewExampleView --what??? The view says select * but we only get Col1 and Col2?
alter table ViewExample drop column Col2 --Oops we decide to drop this column because we don't need it anymore
select * from ViewExampleView --What in the world? Col2 doesn't exist in the table, why is it in the view? And what the heck is going on here. The data from Col3 is now moved to Col2
drop view ViewExampleView
drop table ViewExample
Notice how in the last select from the view that the data from Col3 is being displayed in Col2. If this doesn't convince you to stop using * in views (and pretty much everywhere) I don't know what will.
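If you are stuck with an existing SELECT * view, refreshing its metadata after every table change at least realigns the columns; a minimal sketch against the example above:
-- Rebuild the view's cached column list so Col3 appears and dropped columns
-- stop being mis-mapped (this does not remove the underlying problem with *).
exec sp_refreshview N'ViewExampleView';
select * from ViewExampleView;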
I'm still fairly new to SQL. This is a stripped down version of the query I'm trying to run. This query is supposed to find those customers with more than 3 cases and display either the top 1 case or all cases, but still show the overall count of cases per customer in each row, in addition to all the case numbers.
The TOP 1 subquery approach didn't work, but is there another way to get the results I need? Hope that makes sense.
Here's the code:
SELECT t1.StoreID, t1.CustomerID, t2.LastName, t2.FirstName
,COUNT(t1.CaseNo) AS CasesCount
,(SELECT TOP 1 t1.CaseNo)
FROM MainDatabase t1
INNER JOIN CustomerDatabase t2
ON t1.StoreID = t2.StoreID
WHERE t1.SubmittedDate >= '01/01/2017' AND t1.SubmittedDate <= '05/31/2017'
GROUP BY t1.StoreID, t1.CustomerID, t2.LastName, t2.FirstName
HAVING COUNT (t1.CaseNo) >= 3
ORDER BY t1.StoreID, t1.CustomerID
I would like it to look something like this: either one row with just the most recent case and its detail, or several rows showing all details of each case, in addition to the store id, customer id, last name, first name, and case count.
Data Example
For these I usually like to make a temp table of aggregates:
DROP TABLE IF EXISTS #tmp;
CREATE TABLE #tmp (
CustomerID int NOT NULL DEFAULT 0,
case_count int NOT NULL DEFAULT 0,
case_max int NOT NULL DEFAULT 0
);
INSERT INTO #tmp
(CustomerID, case_count, case_max)
SELECT CustomerID, COUNT(CaseNo), MAX(CaseNo)
FROM MainDatabase
GROUP BY CustomerID;
Then you can join this "tmp" table back to any other table you want to display the number of cases on, or the max case number on. And you can limit it to customers that have more than 3 cases with WHERE case_count > 3
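For example, joining it back to the question's tables might look like this (a sketch; it assumes the highest CaseNo is the most recent case, which is what case_max captures):
SELECT t1.StoreID, t1.CustomerID, t2.LastName, t2.FirstName,
       agg.case_count AS CasesCount,
       agg.case_max   AS MostRecentCaseNo
FROM MainDatabase t1
INNER JOIN CustomerDatabase t2 ON t1.StoreID = t2.StoreID
INNER JOIN #tmp agg ON agg.CustomerID = t1.CustomerID
WHERE agg.case_count > 3
  AND t1.CaseNo = agg.case_max   -- keep one row per customer: the latest case
ORDER BY t1.StoreID, t1.CustomerID;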
This is my first post - so I apologise if it's in the wrong section!
I'm joining two tables with a one-to-many relationship using their respective ID numbers, but I only want to return the most recent record from the joined table, and I'm not entirely sure where to even start!
My original code for returning everything is shown below:
SELECT table_DATES.[date-ID], *
FROM table_CORE LEFT JOIN table_DATES ON [table_CORE].[core-ID] = table_DATES.[date-ID]
WHERE table_CORE.[core-ID] Like '*'
ORDER BY [table_CORE].[core-ID], [table_DATES].[iteration];
This returns a group of records: showing every matching ID between table_CORE and table_DATES:
table_CORE date-ID iteration
1 1 1
1 1 2
1 1 3
2 2 1
2 2 2
3 3 1
4 4 1
But I need to return only the date with the maximum value in the "iteration" field as shown below
table_CORE date-ID iteration Additional data
1 1 3 MoreInfo
2 2 2 MoreInfo
3 3 1 MoreInfo
4 4 1 MoreInfo
I really don't even know where to start - obviously it's going to be a JOIN query of some sort - but I'm not sure how to get the subquery to return only the highest iteration for each item in table 2's ID field?
Hope that makes sense - I'll reword if it comes to it!
--edit--
I'm wondering how to integrate that when I need all the fields from table 1 (table_CORE in this case) and all the fields from table 2 (table_DATES) joined as well.
Both tables have additional fields that will need to be merged.
I'm pretty sure I can just add the fields into the "SELECT" and "GROUP BY" clauses, but there are around 40 fields altogether (and typing all of them will be tedious!)
Try using the MAX aggregate function like this with a GROUP BY clause.
SELECT
[ID1],
[ID2],
MAX([iteration])
FROM
table_CORE
LEFT JOIN table_DATES
ON [table_CORE].[core-ID] = table_DATES.[date-ID]
WHERE
table_CORE.[core-ID] Like '*' --LIKE '%something%' ??
GROUP BY
[ID1],
[ID2]
Your example field names don't match your sample query so I'm guessing a little bit.
Just to make sure that I have everything you’re asking for right, I am going to restate some of your question and then answer it.
Your source tables look like this:
table_core:
table_dates:
And your outputs are like this:
Current:
Desired:
In order to make that happen all you need to do is use a subquery (or a CTE) as a “cross-reference” table. (I used temp tables to recreate your data example and _ in place of the - in your column names).
--Loading the example data
create table #table_core
(
core_id int not null
)
create table #table_dates
(
date_id int not null
, iteration int not null
, additional_data varchar(25) null
)
insert into #table_core values (1), (2), (3), (4)
insert into #table_dates values (1,1, 'More Info 1'),(1,2, 'More Info 2'),(1,3, 'More Info 3'),(2,1, 'More Info 4'),(2,2, 'More Info 5'),(3,1, 'More Info 6'),(4,1, 'More Info 7')
--select query needed for desired output (using a CTE)
; with iter_max as
(
select td.date_id
, max(td.iteration) as iteration_max
from #table_dates as td
group by td.date_id
)
select tc.*
, td.*
from #table_core as tc
left join iter_max as im on tc.core_id = im.date_id
inner join #table_dates as td on im.date_id = td.date_id
and im.iteration_max = td.iteration
select *
from
(
SELECT table_CORE.*, table_DATES.*
, row_number() over (partition by table_CORE.[core-ID] order by table_DATES.[iteration] desc) as rn
FROM table_CORE
LEFT JOIN table_DATES
ON [table_CORE].[core-ID] = table_DATES.[date-ID]
WHERE table_CORE.[core-ID] Like '*'
) tt
where tt.rn = 1
ORDER BY [core-ID]
I have a table Emp like this for example.
----------------------
eName | eId
----------------------
Anusha 1
Sunny 2
Say I am looking for an entry whose id is 3. I want to write a query which finds the row and displays it. But if it doesn't find it, it is expected to display a default row (temp, 999).
select case
when (total != 0) then (select eName from Emp where eId = 3)
when (total == 0) then "temp"
end as eName,
case
when (total != 0) then (select eId from Emp where eId = 3)
when (total == 0) then 999
end as eId
from Emp,(select count(*) as total from Emp where eId = 3);
Using this query that I wrote, it gives me two rows as a result:
temp 999
temp 999
I assume it is because of
(select count(*) as total from Emp where eId = 3) this query in the from list of the query.
I tried using the distinct clause and it gives me just a single row, but I am a little doubtful whether I am messing up the query and merely employing a hack. Please suggest if there is a better way to do this or if I am wrong.
I'll get to how to do this right, but first let me give you a long answer to maybe help you with your understanding of SQL. What's happening to you is this:
Your select clause does not affect the number of records you get. So to understand what's happening, let's simplify the query a little. Let's change it to:
select * from emp, (select count(*) as total from emp where eid=3)
I'm not sure what you think the comma after "emp" does here, but SQL sees this as an old-style join on two tables: emp and the temporary table created by the select count(*), etc. There is no WHERE clause, so this is a cross join, but the second table only has one record anyway, so that part doesn't matter. But the fact that there is no WHERE clause means that you will get every record in emp, joined to the count. So the output of this query is:
ename eid count(*)
Anusha 1 0
Sunny 2 0
If you had 100 records, you would get 100 results.
Frankly there is no really clean way to do what you want in SQL. It's the sort of thing that's cleaner to do in code: do a plain "select ... where eid=3", and if you get no records, fill in the default at run-time.
But assuming that you need to do it in SQL for some reason, I think the simplest way would be:
select eid, ename from emp where eid=3
union
select 999 as eid, 'temp' as ename
where not exists (select 1 from emp where eid=3)
In some versions of SQL you need to give a dummy table name on the second select, like Oracle requires you to say "from dual".
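For example, in Oracle the second branch would read from the built-in one-row table DUAL; union all also works here, since the two branches can never both return rows:
select eid, ename from emp where eid = 3
union all
select 999 as eid, 'temp' as ename
from dual
where not exists (select 1 from emp where eid = 3);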
I am trying to output all of my database reports into one report. I'm currently using nested select statements to get each line for each ID (the number of ID's is unknown). Now I would like to return all the rows for every ID (e.g. 1-25 if there are 25 rows) in one query. How would I do this?
SELECT
(SELECT ... FROM ... WHERE id = x) As Col1,
(SELECT ... FROM ... WHERE id = x) As Col2,
(SELECT ... FROM ... WHERE id = x) As Col3
EDIT: Here's an example:
SELECT
(select post_id from posts where report_id = 1) As ID,
(select isnull(rank, 0) from results where report_id = 1 and url like '%www.testsite.com%') As Main,
(select isnull(rank, 0) from results where report_id = 1 and url like '%.testsite%' and url not like '%www.testsite%') As Sub
This will return the rank of a result for the main domain and the sub-domain, as well as the ID for the posts table.
ID Main Sub
--------------------------------------
1 5 0
I'd like to loop through this query and change report_id to 2, then 3, then 4 and carry on until all results are displayed. Nothing else needs to change other than the report_id.
Here's a basic example of what is inside the tables
POSTS
post_id post report_id
---------------------------------------------------------
1 "Hello, I am..." 1
2 "This may take..." 2
3 "Bla..." 2
4 "Bla..." 3
5 "Bla..." 4
RESULTS
result_id url title report_id
--------------------------------------------------------
1 http://... "Intro" 1
2 http://... "Hello!" 1
3 http://... "Question" 2
4 http://... "Help" 3
REPORTS
report_id description
---------------------------------
1 Introductions
2 Q&A
3 Starting Questions
4 Beginner Guides
5 Lectures
The query will want to pull the first post, the first result from the main website (www) and the first result from a subdomain by their report_id. These tables are part of a complicated join structure with many other tables but for these purposes these tables are the only ones that are needed.
I've managed to solve the problem by creating a table, setting variables to take all the contents and insert them in a while loop, then selecting them and dropping the table. I'll leave this open for a bit to see if anyone picks up a better way of doing it because I hate doing it this way.
If you need each report id on its own column, take a look at the PIVOT/UNPIVOT commands.
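A minimal sketch of the PIVOT idea, just to illustrate the syntax rather than the full report (it assumes the values you want per report_id come from results.rank, and hard-codes report ids 1-4):
SELECT [1] AS Report1, [2] AS Report2, [3] AS Report3, [4] AS Report4
FROM (
    SELECT report_id, ISNULL([rank], 0) AS rank_value
    FROM results
) AS src
PIVOT (
    MAX(rank_value) FOR report_id IN ([1], [2], [3], [4])
) AS p;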
Here's one way of doing it:
SELECT posts.post_id AS ID,
IsNull(tblMain.Rank, 0) AS Main,
IsNull(tblSub.Rank, 0) AS Sub
FROM posts
LEFT JOIN results AS tblMain ON posts.report_id = tblMain.report_id AND tblMain.url like '%www.testsite.com%'
LEFT JOIN results AS tblSub ON posts.report_id = tblSub.report_id AND tblSub.url like '%.testsite%' and tblSub.url not like '%www.testsite%'
That is one query? You've provided your own answer?
If you mean you want to return a series of 'rows' as, for some reason, 'columns', this ability does exist, but I can't remember the exact name. Possibly PIVOT. But it's a little odd.
See if this is what you are looking for:
SELECT
CASE WHEN reports.id = 1 THEN reports.Name
ELSE '' END AS Col1,
CASE WHEN reports.id = 2 THEN reports.Name
ELSE '' END AS Col2
....
FROM reports
Best Regards,
Iordan
Assuming you have a "master" table of IDs (if not, I suggest you create one for Foreign Key purposes):
SELECT
(SELECT ... FROM ... WHERE id = m.ID) As Col1,
(SELECT ... FROM ... WHERE id = m.ID) As Col2,
(SELECT ... FROM ... WHERE id = m.ID) As Col3
FROM MasterIDs m
Depending on how similar each report is, you may be able to speed that up by moving some of the logic out of the nested statements and into the main body of the query.
Possibly a better way of thinking about this is to alter each report statement to return (ID,value) and do something like:
SELECT
report1.Id
,report1.Value AS Col1
,report2.Value AS Col2
FROM (SELECT Id, ... AS Value FROM ...) report1
JOIN (SELECT Id, ... AS Value FROM ...) report2 ON report1.Id = report2.Id
Again, depending on the similarity of your reports, you could probably combine these in some way.
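For instance, if report1 and report2 are really just different filters over the same results table, conditional aggregation can collapse them into one pass (a sketch; the table and column names follow the question's examples, and the url filters are assumptions):
SELECT p.post_id AS ID,
       MAX(CASE WHEN r.url LIKE '%www.testsite.com%' THEN ISNULL(r.[rank], 0) ELSE 0 END) AS Main,
       MAX(CASE WHEN r.url LIKE '%.testsite%'
                 AND r.url NOT LIKE '%www.testsite%' THEN ISNULL(r.[rank], 0) ELSE 0 END) AS Sub
FROM posts p
LEFT JOIN results r ON r.report_id = p.report_id
GROUP BY p.post_id;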