Script to find the number of rows that were updated in the last 1 hour - sql-server

This script gives me the list of changed rows, but I also need to make sure that the number of changed rows equals the number of rows that were updated in the last hour, which would give me more confidence in the validation.
Here is the code which gives me the list of changed rows. EXCEPT returns all the rows from the first SELECT statement which are not in the second SELECT, if I am not wrong. I am just wondering how to check that the number of rows updated in the last hour matches the row count returned when I run the query below.
Select [Accounting_Period_ID]
,[Policy_Number]
,[Risk_ID]
,[Product_ID]
,[Inception_Date_ID]
,[Effective_Date_ID]
,[Expiration_Date_ID]
,[Cancellation_Date_ID]
,[Reinstate_Date_ID]
,[Policy_Source_System_ID]
,[Risk_Geo_ID]
,[Risk_Profile_ID]
,[Policy_Status_ID]
,[Agency_ID]
,[Limit_Selection_ID]
,[Written_Premium_MTD]
,[Written_Premium_ITD]
,[Fees_MTD]
,[Fees_ITD]
,[Commission_MTD]
,[Commission_ITD]
,[Earned_Premium_MTD]
,[Earned_Premium_ITD]
,[In_Force_Count]
,[New_Business_Count]
,[Renewed_Count]
,[Cancelled_Count]
,[Reinstated_Count]
,[Dwelling_Limit]
,[Other_Structures_Base_Limit]
,[Other_Structures_Extended_Limit]
,[Other_Structures_Total_Limit]
,[Contents_Limit]
,[Additional_Living_Expense_Limit]
,[Liability_Limit]
,[Medical_Limit]
,[Total_Insured_Value]
,[Replacement_Value]
,[AOP_Deductible]
,[Days_in_Force]
,[Earned_House_Years]
,[Cancellation_Entry_Date_ID]
,[Reinstate_Entry_Date_ID]
,[Seq]
,[Inserted_Date]
,[Inserted_By]
,[Last_Updated_Date]
,[Last_Updated_By]
,[Insurance_score]
,[Rewrite_Count]
,[Entry_Date_ID] from Datamart.Policy.Fact_Monthly_Policy_Snap_20190403
where Policy_Source_System_ID = 8
EXCEPT
Select [Accounting_Period_ID]
,[Policy_Number]
,[Risk_ID]
,[Product_ID]
,[Inception_Date_ID]
,[Effective_Date_ID]
,[Expiration_Date_ID]
,[Cancellation_Date_ID]
,[Reinstate_Date_ID]
,[Policy_Source_System_ID]
,[Risk_Geo_ID]
,[Risk_Profile_ID]
,[Policy_Status_ID]
,[Agency_ID]
,[Limit_Selection_ID]
,[Written_Premium_MTD]
,[Written_Premium_ITD]
,[Fees_MTD]
,[Fees_ITD]
,[Commission_MTD]
,[Commission_ITD]
,[Earned_Premium_MTD]
,[Earned_Premium_ITD]
,[In_Force_Count]
,[New_Business_Count]
,[Renewed_Count]
,[Cancelled_Count]
,[Reinstated_Count]
,[Dwelling_Limit]
,[Other_Structures_Base_Limit]
,[Other_Structures_Extended_Limit]
,[Other_Structures_Total_Limit]
,[Contents_Limit]
,[Additional_Living_Expense_Limit]
,[Liability_Limit]
,[Medical_Limit]
,[Total_Insured_Value]
,[Replacement_Value]
,[AOP_Deductible]
,[Days_in_Force]
,[Earned_House_Years]
,[Cancellation_Entry_Date_ID]
,[Reinstate_Entry_Date_ID]
,[Seq]
,[Inserted_Date]
,[Inserted_By]
,[Last_Updated_Date]
,[Last_Updated_By]
,[Insurance_score]
,ISNULL([Rewrite_Count],0) Rew
,[Entry_Date_ID] from Datamart.Policy.Fact_Monthly_Policy_Snap
where Policy_Source_System_ID = 8

DATEADD(hh, -1, GETDATE()) gives you the current time minus 1 hour; you can compare this with Last_Updated_Date.
COUNT(*) gives you the number of rows.
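Putting those two pieces together, a minimal sketch (assuming Last_Updated_Date is a datetime column on the live snapshot table, as the question suggests) might look like:

```sql
-- Count the rows touched in the last hour; this number should match
-- the row count returned by the EXCEPT query above.
SELECT COUNT(*) AS Updated_Last_Hour
FROM Datamart.Policy.Fact_Monthly_Policy_Snap
WHERE Policy_Source_System_ID = 8
  AND Last_Updated_Date >= DATEADD(hh, -1, GETDATE());
```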


Peewee select query with multiple joins and multiple counts

I've been attempting to write a peewee select query which results in a table with 2 counts (one for the number of prizes associated with the lottery, and one for the number of packages associated with the lottery), as well as the fields in the Lottery model.
I've managed to write select queries with 1 count working (seen below), but then I've had to convert the ModelSelects to lists and join them manually (which I think is very hacky).
I did manage to write a select query where the results were joined, but it would multiply the packages count with the prizes count (I've since lost that query).
I also tried using a .switch(Lottery) but I didn't have any luck with this.
query1 = (Lottery.select(Lottery, fn.count(Package.id).alias('packages'))
          .join(LotteryPackage)
          .join(Package)
          .order_by(Lottery.id)
          .group_by(Lottery)
          .dicts())
query2 = (Lottery.select(Lottery.id.alias('lotteryID'), fn.count(Prize.id).alias('prizes'))
          .join(LotteryPrize)
          .join(Prize)
          .group_by(Lottery)
          .order_by(Lottery.id)
          .dicts())
lottery = list(query1)
query3 = list(query2)
for x in range(len(lottery)):
    lottery[x]['prizes'] = query3[x]['prizes']
While the above code works, is there a cleaner way to write this query?
Your best bet is to do this with subqueries.
# Create query which gets lottery id and count of packages.
L1 = Lottery.alias()
subq1 = (L1
         .select(L1.id, fn.COUNT(LotteryPackage.package).alias('packages'))
         .join(LotteryPackage, JOIN.LEFT_OUTER)
         .group_by(L1.id))
# Create query which gets lottery id and count of prizes.
L2 = Lottery.alias()
subq2 = (L2
         .select(L2.id, fn.COUNT(LotteryPrize.prize).alias('prizes'))
         .join(LotteryPrize, JOIN.LEFT_OUTER)
         .group_by(L2.id))
# Select from lottery, joining on each subquery and returning
# the counts.
query = (Lottery
         .select(Lottery, subq1.c.packages, subq2.c.prizes)
         .join(subq1, on=(Lottery.id == subq1.c.id))
         .join(subq2, on=(Lottery.id == subq2.c.id))
         .order_by(Lottery.name))
for row in query.objects():
    print(row.name, row.packages, row.prizes)

column ambiguously defined on using offset and fetch first

SELECT
qplt.description,
qplab.status_code,
qplab.start_date,
qplc.start_date,
qplc.end_date
FROM
price_lists_dur qplab,
PRICE_LISTS_Tbl qplt,
PRICE_LIST_CHARGES qplc
WHERE
qplt.price_list_id = qplab.price_list_id
AND qplt.price_list_id = qplc.price_list_id
OFFSET 10 ROWS FETCH FIRST 40 ROWS ONLY
The above code returns an error.
But when I remove the last line offset fetch first, it works fine.
Can someone help with the query?
An ORDER BY clause is mandatory in order to use the OFFSET and FETCH clauses.
So use:
SELECT
qplt.description,
qplab.status_code,
qplab.start_date,
qplc.start_date,
qplc.end_date
FROM price_lists_dur qplab,
PRICE_LISTS_Tbl qplt,
PRICE_LIST_CHARGES qplc
WHERE qplt.price_list_id=qplab.price_list_id
AND qplt.price_list_id =qplc.price_list_id
ORDER BY qplab.status_code --(or the column you want)
OFFSET 10 ROWS FETCH FIRST 40 ROWS ONLY
To fix this, give different aliases to the duplicate column names, for example:
SELECT
qplt.description,
qplab.status_code,
qplab.start_date,
qplc.start_date as qplc_start_date, --notice the alias
qplc.end_date
FROM
price_lists_dur qplab,
PRICE_LISTS_Tbl qplt,
PRICE_LIST_CHARGES qplc
WHERE
qplt.price_list_id = qplab.price_list_id
AND qplt.price_list_id = qplc.price_list_id
OFFSET 10 ROWS FETCH FIRST 40 ROWS ONLY

Count the 'X' then arrange the value as row in SQL

I have a table in SQL with many columns, where the value of each column in every row is either ' ' or 'X'. I need to count the 'X' values for every column, which can be done with the following code:
SELECT COUNT(GVI0) AS GVI0, COUNT(GVI1) AS GVI1, COUNT(GVI2) AS GVI2
FROM dbo.HullInspectionProgram
WHERE (StructureEntry='1' AND Year='2016')
The result of the query is:
GVI0 NDT0 GVI1 NDT1 GVI2 NDT2
11 11 2 4 11 11
However, (in my understanding) in order for these count values to be bound to an ASP.NET Chart control with the two series named 'GVI' and 'NDT', I need to turn the columns into rows for the DataTable.
I tried to use UNPIVOT in SQL like this:
SELECT GVI0Count
FROM (
SELECT COUNT(GVI0) AS GVI0, COUNT(GVI1) AS GVI1, COUNT(GVI2) AS GVI2
FROM dbo.HullInspectionProgram
WHERE (StructureEntry='1' AND Year='2016')
)
UNPIVOT (GVI0Count FOR ListOfColumns IN (GVI0)) AS unpivott
but it seems that the code is wrong.
How do I do this?
I think the following might work for you. At least, as a start.
SELECT *
FROM (
SELECT COUNT(GVI0) AS GVI0, COUNT(GVI1) AS GVI1, COUNT(GVI2) AS GVI2
FROM dbo.HullInspectionProgram
WHERE (StructureEntry='1' AND Year='2016')
) P
UNPIVOT (GVI0Count FOR ListOfColumns IN (GVI0, GVI1, GVI2)) AS unpivott

Convert Statement to Crystal Reports SQL Expression

I have a SQL command that works great in SQL Server. Here's the query:
SELECT TOP 1000
(
SELECT COUNT(LINENUM)
FROM OEORDD D1
WHERE D1.ORDUNIQ = OEORDD.ORDUNIQ
)
- (SELECT COUNT(LINENUM)
FROM OEORDD D1
WHERE D1.ORDUNIQ = OEORDD.ORDUNIQ
AND D1.LINENUM > OEORDD.LINENUM)
FROM OEORDD
ORDER BY ORDUNIQ, LINENUM
The query looks at the total number of lines on an order, then at the current LINENUM value. It counts how many lines on the order have a greater LINENUM, and subtracts that from the order's total line count to get the correct line number.
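As a side note, the same per-order line number can be computed more directly with a window function. This is a sketch of the equivalent logic (assuming LINENUM is unique within an order), separate from the Crystal Reports expression question:

```sql
-- ROW_NUMBER() numbers each order's lines by LINENUM, which equals
-- total-lines minus count-of-greater-LINENUMs from the original query.
SELECT ORDUNIQ,
       LINENUM,
       ROW_NUMBER() OVER (PARTITION BY ORDUNIQ ORDER BY LINENUM) AS Line_No
FROM OEORDD
ORDER BY ORDUNIQ, LINENUM;
```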
When I try to add it as a SQL expression in version 14.0.2.364 as follows:
(
(
SELECT COUNT("OEORDD"."LINENUM")
FROM "OEORDD" "D1"
WHERE "D1"."ORDUNIQ" = "OEORDD"."ORDUNIQ"
)
- (SELECT COUNT("OEORDD"."LINENUM")
FROM "OEORDD" "D1"
WHERE "D1"."ORDUNIQ" = "OEORDD"."ORDUNIQ"
AND "D1"."LINENUM" > "OEORDD"."LINENUM"
)
)
I get the error "Column 'SAMDB.dbo.OEORDD.ORDUNIQ' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause."
If I try to add GROUP BY "OEORDD"."ORDUNIQ" at the end, I get "Incorrect syntax near the keyword 'GROUP'." I've tried adding "FROM OEORDD" at the end of the query and it errors out on the word "FROM". I have the correct tables linked in the Database Expert.
EDIT --------------
I was able to get the first query working by getting rid of the alias, it's as follows:
(
SELECT COUNT(LINENUM)
FROM OEORDD
WHERE OEORDH.ORDUNIQ = OEORDD.ORDUNIQ
)
However, I believe I need to use the alias in the second query to compare line numbers. I'm still stuck on that one.

Eliminating extra rows in nested selects

I am trying to get the specific UserRating by month for a specified range. When I use this below, it works, with no extra rows:
Select
Distinct(Table2.AccountNumber),
Jan11 = Case
    When Datepart(yy, Org.Billdate) = 2011 and Datepart(mm, Org.Billdate) = 01 then Table2.UserRating
END
From (
    Select Distinct(Table1.AccountNumber) as UseThisNumber, Table1.RegionID as UseThisRegionID
    From AccountDetail Table1
    Where Table1.RegionID in (
        Select Distinct(Reg.RegionID)
        From RegionOrganizationTable Reg
        Where Datepart(yy, Reg.Billdate) = 2011 and Datepart(mm, Reg.Billdate) = 01)
    and Table1.UserRating in ('Very Satisfied', 'Mostly Satisfied', 'Satisfied')
    Group by Table1.AccountNumber, Table1.RegionID) GroupedValues,
AccountDetail Table2,
RegionOrganizationTable Org
Where Table2.AccountNumber = GroupedValues.UseThisNumber
and Table2.RegionID = GroupedValues.UseThisRegionID
and Org.RegionID = GroupedValues.UseThisRegionID
Order by Table2.AccountNumber
However, when I change the Datepart criteria in the nested component to:
Datepart(yy,Reg.Billdate)>2010
(as this is really the date range I want to examine), and remove:
Datepart(mm,Reg.Billdate)=01
all of the previously qualifying AccountNumbers from January 2011 are repeated but return a NULL value. This is compounded when I add the other months (i.e. Feb11=Case when....)
Here is what the output looks like in first scenario:
AccountNumber.....Jan11
123456...................Very Satisfied
143457...................Mostly Satisfied
163458...................Satisfied
183459...................Very Satisfied
203460...................Very Satisfied
And here's the second (I BOLDED the duplicates here for easier recognition)
AccountNumber.....Jan11
123456...................Very Satisfied
123456...................NULL
123499...................NULL
133499...................NULL
143457...................Mostly Satisfied
143457...................NULL
143499...................NULL
153499...................NULL
163458...................NULL
163458...................Satisfied
173458...................NULL
173499...................NULL
183459...................Very Satisfied
183459...................NULL
183499...................NULL
193459...................NULL
203460...................NULL
203460...................Very Satisfied
