Optimization of SQL Server Query - sql-server

My problem is that I created a query that takes too long to execute. The table looks like this:
City | Department | Employee | Attendance Date | Attendance Status
------------------------------------------------------------------------
C1 | Dept 1 | Emp 1 | 2016-01-01 | ABSENT
C1 | Dept 1 | Emp 2 | 2016-01-01 | LATE
C1 | Dept 2 | Emp 3 | 2016-01-01 | VACANCY
So I want to create a view that contains the same data plus a column with the total number of employees (which I later use in an SSRS project to determine the percentage of each status).
I created a function that does a simple SELECT, filtering by department and date.
This is the query that uses the function:
SELECT City, Department, Employee, [Attendence Date], [Attendance Status],
       dbo.[Get Department Employees By Date](Department, [Attendence Date]) AS TOTAL
FROM attendenceTable
This is the function:
CREATE FUNCTION [dbo].[Get Department Employees By Date]
(
@deptID int = null,
@date datetime = null
)
RETURNS int
AS
BEGIN
declare @result int = 0;
select @result = count(*) from attendenceTable where DEPT_ID = @deptID and ATT_DATE_G = @date;
RETURN @result;
END
The problem is that the query takes too long (I mean a very long time) to execute.
Any suggestions for optimization?

Your function is a scalar function, which is run once for every row in the result set (~600,000 times) and is a known performance killer. It can be rewritten as an inline table-valued function (a sketch follows the query below), or, if the logic is not required elsewhere, a simple group, count & join will suffice:
WITH EmployeesPerDeptPerDate AS
(
    SELECT DEPT_ID,
           ATT_DATE_G,
           COUNT(DISTINCT Employee) AS EmployeeCount
    FROM attendenceTable
    GROUP BY DEPT_ID,
             ATT_DATE_G
)
SELECT A.City,
       A.Department,
       A.Employee,
       A.[Attendence Date],
       A.[Attendance Status],
       ISNULL(B.EmployeeCount, 0) AS EmployeeCount
FROM attendenceTable AS A
LEFT OUTER JOIN EmployeesPerDeptPerDate AS B
    ON A.DEPT_ID = B.DEPT_ID
   AND A.ATT_DATE_G = B.ATT_DATE_G;
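If the count is needed elsewhere, the scalar function can be turned into an inline table-valued function that the optimizer can expand into the calling query. This is only a sketch: the function name GetDepartmentEmployeesByDate is hypothetical, and DEPT_ID / ATT_DATE_G are assumed to be the underlying key columns, as in the query above.
-- Hypothetical inline TVF sketch (name and columns assumed from the question):
CREATE FUNCTION dbo.GetDepartmentEmployeesByDate
(
    @deptID int,
    @date datetime
)
RETURNS TABLE
AS
RETURN
(
    SELECT COUNT(*) AS EmployeeCount
    FROM attendenceTable
    WHERE DEPT_ID = @deptID
      AND ATT_DATE_G = @date
);
GO

-- Applied per row with CROSS APPLY; the optimizer inlines the definition:
SELECT A.City,
       A.Department,
       A.Employee,
       A.[Attendence Date],
       A.[Attendance Status],
       T.EmployeeCount AS TOTAL
FROM attendenceTable AS A
CROSS APPLY dbo.GetDepartmentEmployeesByDate(A.DEPT_ID, A.ATT_DATE_G) AS T;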

Related

SQL SERVER update or insert after left join

I have a Table Animals
Id | Name | Count | -- (other columns not relevant)
1 | horse | 11
2 | giraffe | 20
I want to try to insert or update values from a CSV string
Is it possible to do something like the following in 1 query?
;with results as
(
select * from
(
values ('horse'), ('giraffe'), ('lion')
)
animal_csv(aName)
left join animals on
animals.[Name] = animal_csv.aName
)
update results
set
[Count] = 1 + animals.[Count]
-- various other columns are set here
where Id is not null
--else
--insert into results ([Name], [Count]) values (results.aName, 1)
-- (essentially Where id is null)
It looks like what you're looking for is a table variable or temporary table rather than a common table expression.
If I understand your problem correctly, you are building a result set based on data you're getting from a CSV, merging it by incrementing values, and then returning that result set.
As I read your code, it looks as if your results would look like this:
aName | Id | Name | Count
horse | 1 | horse | 12
giraffe | 2 | giraffe | 21
lion | | |
I think what you're looking for in your final result set is this:
Name | Count
horse | 12
giraffe | 21
lion | 1
First, you can get from your csv and table to a resultset in a single CTE statement:
;WITH animal_csv AS (SELECT * FROM (VALUES('horse'),('giraffe'), ('lion')) a(aName))
SELECT ISNULL(Name, aName) Name
, CASE WHEN [Count] IS NULL THEN 1 ELSE 1 + [Count] END [Count]
FROM animal_csv
LEFT JOIN animals
ON Name = animal_csv.aName
Or, if you want to build your resultset using a table variable:
DECLARE @Results TABLE
(
    Name VARCHAR(30)
    , [Count] INT
)
;WITH animal_csv AS (SELECT * FROM (VALUES('horse'),('giraffe'), ('lion')) a(aName))
INSERT @Results
SELECT ISNULL(Name, aName) Name
, CASE WHEN [Count] IS NULL THEN 1 ELSE 1 + [Count] END [Count]
FROM animal_csv
LEFT JOIN animals
ON Name = animal_csv.aName
SELECT * FROM @Results
Or, if you just want to use a temporary table, you can build it like this (temp tables are deleted when the connection is released/closed or when they're explicitly dropped):
;WITH animal_csv AS (SELECT * FROM (VALUES('horse'),('giraffe'), ('lion')) a(aName))
SELECT ISNULL(Name, aName) Name
, CASE WHEN [Count] IS NULL THEN 1 ELSE 1 + [Count] END [Count]
INTO #results
FROM animal_csv
LEFT JOIN animals
ON Name = animal_csv.aName
SELECT * FROM #results
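If the goal is to actually write the incremented counts back into animals in a single statement (the original insert-or-update question), a MERGE along these lines may work. This is a sketch only; it assumes Id is an identity column, so it is not supplied on insert.
;WITH animal_csv AS (SELECT * FROM (VALUES('horse'),('giraffe'), ('lion')) a(aName))
MERGE animals AS tgt
USING animal_csv AS src
    ON tgt.[Name] = src.aName
WHEN MATCHED THEN
    UPDATE SET [Count] = tgt.[Count] + 1                 -- existing animal: bump the count
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([Name], [Count]) VALUES (src.aName, 1);      -- new animal: start at 1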

Joining two tables and need to have MAX aggregate function in ON clause

This is my code. I want to give a part id and purchase order id to my report and have it bring back all the related information for that specification. The important thing is that, if we have the same purchase order id and part id, we need the code to return the row with the highest transaction id. The following code is not giving me what I expected. Could you please help me?
SELECT MAX(INVENTORY_TRANS.TRANSACTION_ID), INVENTORY_TRANS.PART_ID
, INVENTORY_TRANS.PURC_ORDER_ID, TRACE_INV_TRANS.QTY, TRACE_INV_TRANS.CREATE_DATE, TRACE_INV_TRANS.TRACE_ID
FROM INVENTORY_TRANS
JOIN TRACE_INV_TRANS ON INVENTORY_TRANS.TRANSACTION_ID = TRACE_INV_TRANS.TRANSACTION_ID
WHERE INVENTORY_TRANS.PART_ID = @PartID
AND INVENTORY_TRANS.PURC_ORDER_ID = @PurchaseOrderID
GROUP BY TRACE_INV_TRANS.QTY, TRACE_INV_TRANS.CREATE_DATE, TRACE_INV_TRANS.TRACE_ID, INVENTORY_TRANS.PART_ID
, INVENTORY_TRANS.PURC_ORDER_ID
A sample of the trace_inventory_trans table:
part_id  trace_id  transaction_id  qty  create_date
x        1         10
x        2         11
x        3         12
A sample of the inventory_trans table:
transaction_id  part_id  purc_order_id
11              x        p20
12              x        p20
I wanted the result for the biggest transaction, which is transaction 12, but it shows me transaction 11.
I would use a sub-query to find the MAX value, then join that result to the other table.
The ORDER BY + TOP (1) returns the MAX value for transaction_id.
SELECT
inv.transaction_id
,inv.part_id
,inv.purc_order_id
,tr.qty
,tr.create_date
,tr.trace_id
FROM
(
SELECT TOP (1)
transaction_id,
part_id,
purc_order_id
FROM
INVENTORY_TRANS
WHERE
part_id = @PartID
AND
purc_order_id = @PurchaseOrderID
ORDER BY
transaction_id DESC
) AS inv
JOIN
TRACE_INV_TRANS AS tr
ON inv.transaction_id = tr.transaction_id;
Results:
+----------------+---------+---------------+------+-------------+----------+
| transaction_id | part_id | purc_order_id | qty | create_date | trace_id |
+----------------+---------+---------------+------+-------------+----------+
| 12 | x | p20 | NULL | NULL | 3 |
+----------------+---------+---------------+------+-------------+----------+
Rextester Demo
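If the report ever needs the latest transaction for several part / purchase-order combinations at once (instead of the single pair filtered by the parameters), a ROW_NUMBER() variant of the same idea might look like the sketch below; column names are taken from the question.
SELECT transaction_id, part_id, purc_order_id, qty, create_date, trace_id
FROM
(
    SELECT inv.transaction_id,
           inv.part_id,
           inv.purc_order_id,
           tr.qty,
           tr.create_date,
           tr.trace_id,
           -- rank transactions per part/PO, newest first
           ROW_NUMBER() OVER (PARTITION BY inv.part_id, inv.purc_order_id
                              ORDER BY inv.transaction_id DESC) AS rn
    FROM INVENTORY_TRANS AS inv
    JOIN TRACE_INV_TRANS AS tr
        ON inv.transaction_id = tr.transaction_id
) AS ranked
WHERE rn = 1;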

TSQL - Return duplicate rows with highest value and longest date

I have a list of staff who are contractors, and it includes duplicates because some work on multiple contracts at the same time. I need to find the row with the most hours for each person and, if the hours are the same, the one with the end date furthest away. I guess this is the current main contract. I also need to make sure the current date falls between the Date From and the Date To. How can this be done?
+------------+----------+------+-------+------------+------------+
| ContractID | PersonID | Name | Hours | Date From | Date To |
+------------+----------+------+-------+------------+------------+
| 8 | 1 | John | 30 | 20/02/2018 | 26/02/2018 |
| 8 | 2 | Paul | 5 | 20/02/2018 | 26/02/2018 |
| 7 | 3 | John | 7 | 20/02/2018 | 26/02/2018 |
+------------+----------+------+-------+------------+------------+
In the above example, I would need to bring back the John (30 hours) row and the Paul (5 hours) row. PS: the PersonID is different for each row, but the "Name" is the same for a person who is on multiple contracts.
Thanks
One approach is simply to use a correlated subquery with appropriate ordering logic:
select c.*
from contracts c
where c.contractid = (select top 1 c2.contractid
from contracts c2
where c2.name = c.name and
getdate() >= c2.datefrom and
getdate() < c2.dateto
order by c2.hours desc, c2.dateto desc
);
You can put similar logic into a window function:
select c.*
from (select c.*,
row_number() over (partition by c.name order by c.hours desc, c.dateto desc) as seqnum
from contracts c
where getdate() >= c.datefrom and getdate() < c.dateto
) c
where seqnum = 1;
If you need the full row, I'd do something like this:
with
rankedByHours as (
select
ContractID,
PersonID,
Name,
Hours,
[Date From],
[Date To],
row_number() over (partition by Name order by Hours desc) as RowID
from
Contracts
)
select
ContractID,
PersonID,
Name,
Hours,
[Date From],
[Date To],
case
when getdate() between [Date From] and [Date To] then 'Current'
when getdate() < [Date From] then 'Not Started'
else 'Expired'
end as ContractStatus
from
RankedByHours
where
RowID = 1;
Use the CTE to inject a row_number() sorting all rows by your sort criteria, then select out the top one in the main body. It can be easily extended to also capture your farthest-out end date.
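As a sketch of that extension: adding the furthest-out end date as a tie-break and restricting to contracts active today (both asked for in the question) could look like this, partitioning by Name since the question notes that PersonID differs per row.
with rankedByHours as
(
    select
        ContractID, PersonID, Name, Hours, [Date From], [Date To],
        row_number() over (
            partition by Name
            order by Hours desc, [Date To] desc   -- most hours, then furthest end date
        ) as RowID
    from Contracts
    where getdate() >= [Date From]
      and getdate() <  [Date To]                  -- only contracts active today
)
select ContractID, PersonID, Name, Hours, [Date From], [Date To]
from rankedByHours
where RowID = 1;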

How to increment data by week in SQL

I am trying to find out the number of stores that had no sales for the last 4 weeks. Because of the way the grid is structured in DevExpress, I need to show the weeks as Wk1, Wk2, Wk3, Wk4 as columns (the data in these columns would be the store count of stores with zero sales).
Right now my data has the Sale Fiscal Week as a row. What I need it to look like is basically this:
Vendor | Store | City | State | Category | Wk1 | Wk2 | Wk3| Wk4
------------------------------------------------------------------------
Prairie Farms | #16141 | Adrian | MI | 2% Gallon | 1 | 0 | 1 | 1
Prairie Farms | #16141 | Adrian | MI | Whole Gallon | 1 | 1 | 0 | 0
instead of
Vendor | Store | City | State | Category | Sale Cal Wk | Sale Fiscal Wk
--------------------------------------------------------------------------
Prairie Farms | #16141 | Adrian | MI | 2% Gallon | 2015-10-23 | 38
Prairie Farms | #16141 | Adrian | MI | Whole Gallon | 2015-10-23 | 38
I have included my code, which works so far. I have a COUNT(Store) expression aliased as 'ZeroStore' in the final SELECT statement to flag a store as a zero-sales store.
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
IF 1=0
BEGIN
SET FMTONLY OFF
END
DECLARE @4WeeksAgo date
DECLARE @Today date
SET @4WeeksAgo = DATEADD(day,-28,getdate())
SET @Today = cast(GETDATE() as date)
DECLARE @maxweek as tinyint
SET @maxweek = case when (SELECT DGFiscalWeek
FROM scans.dbo.DimDate
WHERE CAST(Actualdate as DATE)=dateadd(dd,0,CAST(getdate() as DATE))) = 1 THEN 52 ELSE
(SELECT DGFiscalWeek FROM scans.dbo.DimDate WHERE CAST(Actualdate as DATE)=dateadd(dd,0,CAST(getdate() as DATE)))-1 END
DECLARE @LYear as int
SET @LYear = cast ((select max(FiscalYr) from Scans.dbo.Scans)-1 as varchar(6))
DECLARE @Year as int
SET @Year = @LYear+1
CREATE TABLE #Initialize
(
[Master Vendor] varchar (100),
[Dairy Vendor] varchar (100),
Store float,
City varchar (50),
State varchar(5),
District int,
Category varchar(20),
[Sale Cal Wk] date,
SaleYear int,
[Sale Fiscal Week] tinyint,
Units int,
Sales money,
Grade int
)
INSERT INTO #Initialize
SELECT DISTINCT
d.[Master Vendor], d.[Dairy Vendor], d.[Store],
City, State, District,
'2% Gallon',
CAST(MAX(ActualDate) AS DATE) AS [Sale Cal Wk],
DGFiscalYear,
DGFiscalWeek,
Units = 0,
Sales = 0,
Grade = 0
FROM
Scans.dbo.DGStores s
FULL JOIN
Scans.dbo.DimDate dd ON s.State <> 'XX'
FULL JOIN
Tableau.dbo.DollarGeneralDairyDistributors d ON d.Store = s.Store
WHERE
((DGFiscalYear = @LYear AND DGFiscalWeek >= @maxweek) OR
(DGFiscalYear = @Year AND DGFiscalWeek BETWEEN 1 AND @maxweek))
AND d.Store IN (SELECT Store
FROM DollarGeneralDairyDistributors)
GROUP BY
d.[Master Vendor], d.[Dairy Vendor], District, d.Store,
DGFiscalYear, DGFiscalWeek, City, State
INSERT INTO #Initialize
SELECT DISTINCT
d.[Master Vendor] ,
d.[Dairy Vendor] ,
d.[Store],
City,
[State],
District,
Category = 'Whole Gallon',
cast(max(ActualDate) as DATE) as [Sale Cal Wk],
DGFiscalYear,
DGFiscalWeek,
Units=0,
Sales=0,
Grade=0
FROM
Scans.dbo.DGStores s
FULL JOIN
Scans.dbo.DimDate dd
ON
s.State <> 'XX'
FULL JOIN
Tableau.dbo.DollarGeneralDairyDistributors d
ON
d.Store = s.Store
WHERE
((DGFiscalYear = @LYear AND DGFiscalWeek >= @maxweek) OR (DGFiscalYear = @Year AND DGFiscalWeek BETWEEN 1 AND @maxweek))
AND
d.Store IN
(SELECT Store
FROM DollarGeneralDairyDistributors)
GROUP BY d.[Master Vendor], d.[Dairy Vendor] ,District, d.Store, DGFiscalYear, DGFiscalWeek, City, State
CREATE TABLE #Update
(Store varchar(15),
Category varchar(25),
FiscalYr int,
FiscalWk tinyint,
Units int,
Sales money)
INSERT #Update
SELECT DISTINCT Store, Category, FiscalYr, FiscalWk,
isnull(sum(isnull(SumUnits,0)),0) as Units,
isnull(sum(isnull(SumSales,0)),0) as Sales
FROM scans.dbo.Scans sc
JOIN [Scans].[dbo].DollarGeneralDairyCategory c
ON sc.ItemSku = c.ItemSku
WHERE
FiscalYr >= @LYear
GROUP BY[Store], FiscalYr, FiscalWk, Category
UPDATE #Initialize
SET Units = u.Units,
Sales = u.Sales,
Grade=100
FROM #Update u
WHERE #Initialize.[Sale Fiscal Week] = u.FiscalWk AND #Initialize.SaleYear = u.FiscalYr
AND #Initialize.Store=u.Store AND #Initialize.Category = u.Category
SELECT *
, COUNT(store) AS 'ZeroStore'
FROM #Initialize
GROUP BY [Sale Cal Wk], [Master Vendor], [Dairy Vendor], Store, City, State, District, Category, SaleYear, [Sale Fiscal Week], Units, Sales, Grade
HAVING SUM(Units) = 0 AND [Sale Cal Wk] BETWEEN @4WeeksAgo AND @Today
drop table #Initialize,#Update
END
Thank you very much for any input or help.
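Not part of the original post, but one possible way to get the Wk1..Wk4 shape is to pivot the final SELECT over #Initialize with conditional aggregation. This sketch assumes @maxweek is the most recent fiscal week as set above and ignores fiscal-year wrap-around.
SELECT [Master Vendor], [Dairy Vendor], Store, City, State, Category,
       -- 1 if the store/category had zero sales in that fiscal week, else 0
       MAX(CASE WHEN [Sale Fiscal Week] = @maxweek - 3 AND Units = 0 THEN 1 ELSE 0 END) AS Wk1,
       MAX(CASE WHEN [Sale Fiscal Week] = @maxweek - 2 AND Units = 0 THEN 1 ELSE 0 END) AS Wk2,
       MAX(CASE WHEN [Sale Fiscal Week] = @maxweek - 1 AND Units = 0 THEN 1 ELSE 0 END) AS Wk3,
       MAX(CASE WHEN [Sale Fiscal Week] = @maxweek     AND Units = 0 THEN 1 ELSE 0 END) AS Wk4
FROM #Initialize
WHERE [Sale Cal Wk] BETWEEN @4WeeksAgo AND @Today
GROUP BY [Master Vendor], [Dairy Vendor], Store, City, State, Category;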

Sequential SQL inserts when triggered by CROSS APPLY

This process has several steps which are reflected in various tables of a database:
Production --> UPDATE to the inventory table using something like
UPDATE STOR SET
STOR.BLOC1 = T.BLOC1,
STOR.BLOC2 = T.BLOC2,
STOR.BLOC3 = T.BLOC3,
STOR.PRODUCTION = T.PROD,
STOR.DELTA = T.DELTA
FROM BLDG B INNER JOIN STOR S
ON S.B_ID = B.B_ID
CROSS APPLY dbo.INVENTORIZE(B.B_ID) AS T;
The above feeds a log table with a TRIGGER like this:
CREATE TRIGGER trgrCYCLE
ON STOR
FOR UPDATE
AS
INSERT INTO dbo.INVT
(TS, BLDG, PROD, ACT, VAL)
SELECT CURRENT_TIMESTAMP, B_ID, PRODUCTION,
CASE WHEN DELTA < 0 THEN 'SELL' ELSE 'BUY' END,
DELTA
FROM inserted WHERE COALESCE(DELTA,0) <> 0
And finally, every update should INSERT a row into a financials table which I added to the TRIGGER above:
INSERT INTO dbo.FINS
(COMPANY, TS, COST2, BAL)
SELECT CORP, CURRENT_TIMESTAMP, COST,
((SELECT TOP 1 BAL FROM FINS WHERE COMPANY = CORP ORDER BY TS DESC)- COST)
FROM inserted WHERE COALESCE(COST,0) <> 0
The problem is with this line:
((SELECT TOP 1 BAL FROM FINS WHERE COMPANY = CORP ORDER BY TS DESC)- COST)
which is meant to calculate the latest balance of an account. But because the CROSS APPLY treats all the INSERTS as a batch, the calculation is done off of the same last record and I get an incorrect balance figure. Example:
COST BALANCE
----------------
1,000 <-- initial balance
-150 850
-220 780 <-- should be 630
What would be the way to solve that? A trigger on the FINS table instead for the balance calculation?
Understanding existing logic in your query
An UPDATE statement fires the trigger only once for the whole set (batch) of rows that satisfy the join conditions, and the inserted pseudo-table contains all the records being updated. This is because of batch processing by UPDATE, not because of CROSS APPLY.
In this query of yours
SELECT CORP, CURRENT_TIMESTAMP, COST,
((SELECT TOP 1 BAL FROM FINS WHERE COMPANY = CORP ORDER BY TS DESC)- COST)
FROM inserted WHERE COALESCE(COST,0) <> 0
For each CORP in the outer query, the same BAL is returned:
(SELECT TOP 1 BAL FROM FINS WHERE COMPANY = CORP ORDER BY TS DESC)
In other words, the inner query effectively evaluates to 1000 (the value you used in your example) every time CORP = 'XYZ':
SELECT CORP, CURRENT_TIMESTAMP, COST, (1000- COST)
FROM inserted WHERE COALESCE(COST,0) <> 0
The inserted table holds all the records in the batch, so every record's cost is subtracted from that same 1000. Hence you are getting an unexpected result.
Suggested solution
As I understand it, you want to calculate something like a running total (a last running balance).
Data preparation for the problem statement; I used my own dummy data to give you an idea.
--Sort data based on timestamp in desc order
SELECT PK_LoginId AS Bal, FK_RoleId AS Cost, AddedDate AS TS
, ROW_NUMBER() OVER (ORDER BY AddedDate DESC) AS Rno
INTO ##tmp
FROM dbo.M_Login WHERE AddedDate IS NOT NULL
--Check how data looks
SELECT Bal, Cost, Rno, TS FROM ##tmp
--Considering ##tmp as your inserted table,
--I just added Row_Number to apply Top 1 Order by desc logic
+-----+------+-----+-------------------------+
| Bal | Cost | Rno | TS |
+-----+------+-----+-------------------------+
| 172 | 10 | 1 | 2012-12-05 08:16:28.767 |
| 171 | 10 | 2 | 2012-12-04 14:36:36.483 |
| 169 | 12 | 3 | 2012-12-04 14:34:36.173 |
| 168 | 12 | 4 | 2012-12-04 14:33:37.127 |
| 167 | 10 | 5 | 2012-12-04 14:31:21.593 |
| 166 | 15 | 6 | 2012-12-04 14:30:36.360 |
+-----+------+-----+-------------------------+
Alternative logic for subtracting cost from last running balance.
--Start a recursive query to subtract balance based on cost
;WITH cte(Bal, Cost, Rno)
AS
(
SELECT t.Bal, 0, t.Rno FROM ##tmp t WHERE t.Rno = 1
UNION ALL
SELECT c.Bal - t.Cost, t.Cost, t.Rno FROM ##tmp t
INNER JOIN cte c ON t.RNo - 1 = c.Rno
)
SELECT * INTO ##Fin FROM cte;
SELECT * FROM ##Fin
Output
+-----+------+-----+
| Bal | Cost | Rno |
+-----+------+-----+
| 172 | 0 | 1 |
| 162 | 10 | 2 |
| 150 | 12 | 3 |
| 138 | 12 | 4 |
| 128 | 10 | 5 |
| 113 | 15 | 6 |
+-----+------+-----+
You will have to tweak your columns a little to get this functionality into your trigger.
I think you can try a trigger on FINS instead.
You can use IDENT_CURRENT('Table') to get the last primary key from the table and select against it.
I think it's better than "select top 1".
To take the last balance value, set a variable: last_bal = select bal from FINS where primary_key = IDENT_CURRENT('FINS').
Well, first: SQL works with groups, or rather sets, so you always have to think in those terms.
If you really need to process one row at a time, the following may be a better approach:
declare @myinsert table(id int identity(1,1), company varchar(35), ts datetime, cost2 smallmoney, bal smallmoney)
insert into @myinsert(company, ts, cost2)
SELECT CORP, CURRENT_TIMESTAMP, COST
FROM inserted WHERE COALESCE(COST,0) <> 0
declare @current int
select @current = min(id) from @myinsert
while exists(select * from @myinsert where id = @current)
begin
    INSERT INTO dbo.FINS
    (COMPANY, TS, COST2, BAL)
    SELECT COMPANY, CURRENT_TIMESTAMP, COST2,
    ((SELECT TOP 1 BAL FROM FINS WHERE COMPANY = my.COMPANY ORDER BY TS DESC) - COST2)
    from @myinsert my where id = @current
    select @current = min(id) from @myinsert where id > @current
end
I am not giving you the exact query. For a moment, forget the trigger, because you cannot easily test your query inside it.
I suggest using the OUTPUT clause. This will at least help you construct a proper query and test it.
The query below runs OK (if you can use MERGE, that is best).
Declare @t table
(
    BLOC1, BLOC2, BLOC3, PRODUCTION, DELTA --whatever columns are required here, with their types
)
UPDATE STOR SET
    STOR.BLOC1 = T.BLOC1,
    STOR.BLOC2 = T.BLOC2,
    STOR.BLOC3 = T.BLOC3,
    STOR.PRODUCTION = T.PROD,
    STOR.DELTA = T.DELTA
OUTPUT inserted.BLOC1, inserted.BLOC2 /* and so on */ INTO @t
FROM BLDG B INNER JOIN STOR S
    ON S.B_ID = B.B_ID
CROSS APPLY dbo.INVENTORIZE(B.B_ID) AS T;
Now you have the inserted values in the table variable @t:
SELECT CORP, CURRENT_TIMESTAMP, COST,
       BAL, Row_Number() over(partition by company order by TS desc) RN
FROM @t inner join FINS on COMPANY = CORP
WHERE COALESCE(COST,0) <> 0
Verify the query up to this point; think about optimizing it or wiring it into the trigger later on.
I think this is a good suggestion, and the subtraction itself is not the problem: the point is to put everything into the OUTPUT clause, then analyze and test the query.
You can also use a CTE inside the trigger, but how will you test it there?
;With CTE as
(
    SELECT CORP, CURRENT_TIMESTAMP AS NowTS, COST, BAL,
           ROW_NUMBER() over(ORDER BY TS DESC) rn
    FROM inserted
    inner join FINS on COMPANY = CORP
    WHERE COALESCE(COST,0) <> 0
)
select * from CTE --check what you are getting
Something like that; it isn't complete.
CREATE TRIGGER trgrCYCLE
ON STOR
FOR UPDATE
AS
begin
    declare @last_bal int
    declare @company varchar(50)
    declare @ts --type
    declare @cost int
    declare @bal --type
    --etc., whatever you need
    select @company = company, @ts = ts, @cost = cost, @bal = bal from INSERTED
    --other selects and sets
    set @last_bal = (select bal from dbo.FINS where your_primary_key = IDENT_CURRENT('FINS'))
    set @last_bal = @last_bal - @cost
    Insert INTO FINS (company, ts, cost2, bal) VALUES (@company, @ts, @cost, @last_bal) --apply your conditions as needed
end
If, similar to @Shantanu's method, you could associate a sequence with inserted (the virtual table associated with the trigger), you could do this by subtracting all the COSTs that come before the current record.
This could be accomplished by adding a rowversion column to STOR, which is updated automatically with each update.
Then instead of:
((SELECT TOP 1 BAL FROM FINS WHERE COMPANY = CORP ORDER BY TS DESC)- COST)
from inserted ...
call the rowversion column RV, and use:
(SELECT SUM(X.B) FROM
    (SELECT TOP 1 BAL B
     FROM FINS
     WHERE COMPANY = i.CORP
     ORDER BY TS DESC
     UNION ALL
     SELECT -COST B
     FROM inserted ii
     WHERE ii.RV >= i.RV AND ii.CORP = i.CORP
    ) AS X)
FROM inserted i WHERE COALESCE(COST,0) <> 0
This should do what you want. You could conceivably do this with a timestamp that is more fine-grained than CURRENT_TIMESTAMP (which, I believe, only goes down to seconds), but that requires you to update it in the UPDATE statement. The rowversion may cause problems with your STOR insert statements.
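As an aside, on SQL Server 2012 or later the per-row balances for the whole batch can also be computed in one pass with a windowed running total. This is only a sketch; it assumes the hypothetical RV rowversion column suggested above as the ordering key within the batch.
INSERT INTO dbo.FINS (COMPANY, TS, COST2, BAL)
SELECT i.CORP,
       CURRENT_TIMESTAMP,
       i.COST,
       -- last stored balance minus the running total of costs up to this row
       ISNULL(prev.BAL, 0)
         - SUM(i.COST) OVER (PARTITION BY i.CORP
                             ORDER BY i.RV
                             ROWS UNBOUNDED PRECEDING) AS BAL
FROM inserted AS i
OUTER APPLY (SELECT TOP (1) f.BAL
             FROM dbo.FINS AS f
             WHERE f.COMPANY = i.CORP
             ORDER BY f.TS DESC) AS prev
WHERE COALESCE(i.COST, 0) <> 0;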
