I have a question about collapsing, or rolling up, data based on the logic below. How can I implement it?
The rule that allows episodes to be condensed into a single continuous care episode is: a discharge code of 22 followed by an admission code of 4 on the same day.
Continuous care implementation notes:
EPN is a business key.
episode_continuous_care_key is an artificial key that can be generated with a row-number function.
Below is the table structure.
DROP TABLE IF EXISTS #source
CREATE TABLE #source(patidid varchar(20),epn int,preadmitdate datetime,admittime varchar(10),
admitcode varchar(10),datedischarge datetime,disctime varchar(10),disccode varchar(10))
INSERT INTO #source VALUES
(1849,1,'4/23/2020','7:29',1,'7/31/2020','9:03',22)
,(1849,2,'7/31/2020','11:00',4,'7/31/2020','12:09',22)
,(1849,3,'7/31/2020','13:10',4,'8/24/2020','10:36',10)
,(1849,4,'8/26/2020','12:25',2,null,null,null)
,(1850,1,'4/23/2020','7:33',1,'6/29/2020','7:30',22)
,(1850,2,'6/29/2020','9:35',4,'7/8/2020','10:51',7)
,(1850,3,'7/10/2020','11:51',3,'7/29/2020','9:12',7)
,(1850,4,'7/31/2020','11:00',2,'8/6/2020','10:24',22)
,(1850,5,'8/6/2020','12:26',4,null,null,null)
,(1851,1,'4/23/2020','7:35',1,'6/24/2020','13:45',22)
,(1851,2,'6/24/2020','15:06',4,'9/24/2020','15:00',2)
,(1851,3,'12/4/2020','8:59',0,null,null,null)
,(1852,1,'4/23/2020','7:37',1,'7/6/2020','11:15',20)
,(1852,2,'7/8/2020','10:56',0,'7/10/2020','11:46',2)
,(1852,3,'7/10/2020','11:47',2,'7/28/2020','13:16',22)
,(1852,4,'7/28/2020','15:17',4,'8/4/2020','11:37',22)
,(1852,5,'8/4/2020','13:40',4,'11/18/2020','15:43',2)
,(1852,6,'12/2/2020','15:23',2,null,null,null)
,(1853,1,'4/23/2020','7:40',1,'7/1/2020','8:30',22)
,(1853,2,'7/1/2020','14:57',4,'12/4/2020','12:55',7)
,(1854,1,'4/23/2020','7:44',1,'7/31/2020','13:07',20)
,(1854,2,'8/3/2020','16:30',0,'8/5/2020','9:32',2)
,(1854,3,'8/5/2020','10:34',2,'8/24/2020','8:15',22)
,(1854,4,'8/24/2020','10:33',4,'12/4/2020','7:30',22)
,(1854,5,'12/4/2020','9:13',4,null,null,null)
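One possible gaps-and-islands sketch against #source (assuming SQL Server 2012+ for LAG and windowed SUM, and that rows are ordered by epn within a patient): flag every row that does not continue the previous episode, then take a running sum of the flags per patient as episode_continuous_care_key.

```sql
-- Flag a row as a NEW episode unless the previous row discharged with code 22
-- and this row admits with code 4 on the same calendar day.
;WITH flagged AS
(
    SELECT *,
           CASE WHEN LAG(disccode) OVER (PARTITION BY patidid ORDER BY epn) = '22'
                 AND admitcode = '4'
                 AND LAG(CAST(datedischarge AS date)) OVER (PARTITION BY patidid ORDER BY epn)
                     = CAST(preadmitdate AS date)
                THEN 0 ELSE 1 END AS new_episode
    FROM #source
)
SELECT *,
       -- Running sum of the flags yields the artificial continuous-care key.
       SUM(new_episode) OVER (PARTITION BY patidid ORDER BY epn
                              ROWS UNBOUNDED PRECEDING) AS episode_continuous_care_key
FROM flagged;
```

For patient 1849, for example, EPNs 1 to 3 share key 1 (each transition is a 22-then-4 pair on the same day) and EPN 4 starts key 2.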
That Excel sheet image says little about your database design, so I invented my own version that more or less resembles your image. With a proper database design, the first step of the solution would not be required.
1. Unpivot the timestamp information so that the admission and discharge timestamps become a single column. I used a common table expression, Log1, for this action.
2. Use the codes to filter out the starts of the continuous care periods. Those are the admissions, marked with Code.IsAdmission = 1 in my database design. Also add the next period start as another column by using the lead() function. These are the actions in Log2.
3. Add a row number as the continuous care key.
4. Using the next period start date, find the current continuous period's end date with a cross apply.
5. Replace empty period end dates with the current date using the coalesce() function.
6. Calculate the difference as the continuous care period's duration with the datediff() function.
Sample data
create table Codes
(
Code int,
Description nvarchar(50),
IsAdmission bit
);
insert into Codes (Code, Description, IsAdmission) values
( 1, 'First admission', 1),
( 2, 'Re-admission', 1),
( 4, 'Campus transfer IN', 0),
(10, 'Trial visit', 0),
(22, 'Campus transfer OUT', 0);
create table PatientLogs
(
PatientId int,
AdmitDateTime smalldatetime,
AdmitCode int,
DischargeDateTime smalldatetime,
DischargeCode int
);
insert into PatientLogs (PatientId, AdmitDateTime, AdmitCode, DischargeDateTime, DischargeCode) values
(1849, '2020-04-23 07:29', 1, '2020-07-31 09:03', 22),
(1849, '2020-07-31 11:00', 4, '2020-07-31 12:09', 22),
(1849, '2020-07-31 13:10', 4, '2020-08-24 10:36', 10),
(1849, '2020-08-26 12:25', 2, null, null);
Solution
with Log1 as
(
select updt.PatientId,
case updt.DateTimeType
when 'AdmitDateTime' then updt.AdmitCode
when 'DischargeDateTime' then updt.DischargeCode
end as Code,
updt.LogDateTime,
updt.DateTimeType
from PatientLogs pl
unpivot (LogDateTime for DateTimeType in (AdmitDateTime, DischargeDateTime)) updt
),
Log2 as (
select l.PatientId,
l.Code,
l.LogDateTime,
lead(l.LogDateTime) over(partition by l.PatientId order by l.LogDateTime) as LogDateTimeNext
from Log1 l
join Codes c
on c.Code = l.Code
where c.IsAdmission = 1
)
select la.PatientId,
row_number() over(partition by la.PatientId order by la.LogDateTime) as ContCareKey,
la.LogDateTime as AdmitDateTime,
coalesce(ld.LogDateTime, convert(smalldatetime, getdate())) as DischargeDateTime,
datediff(day, la.LogDateTime, coalesce(ld.LogDateTime, convert(smalldatetime, getdate()))) as ContStay
from Log2 la -- log admission
outer apply ( select top 1 l1.LogDateTime
from Log1 l1
where l1.PatientId = la.PatientId
and l1.LogDateTime < la.LogDateTimeNext
order by l1.LogDateTime desc ) ld -- log discharge
order by la.PatientId,
la.LogDateTime;
Result
PatientId ContCareKey AdmitDateTime DischargeDateTime ContStay
--------- ----------- ---------------- ----------------- --------
1849 1 2020-04-23 07:29 2020-08-24 10:36 123
1849 2 2020-08-26 12:25 2021-02-03 12:49 161
Fiddle to see things in action with intermediate results.
Here is a T-SQL solution that contains primary and foreign key relationships.
To make it a bit more realistic, I added a simple "Patient" table.
I put all your "codes" into a single table which should make it easier to manage the codes.
I do not understand the purpose of your concept of "continuous care" so I just added an "is first" binary column to the Admission table.
You might also consider adding something about the medical condition for which the patient is being treated.
CREATE SCHEMA Codes
GO
CREATE TABLE dbo.Code
(
codeNr int NOT NULL,
description nvarchar(50),
CONSTRAINT Code_PK PRIMARY KEY(codeNr)
)
GO
CREATE TABLE dbo.Patient
(
patientNr int NOT NULL,
birthDate date NOT NULL,
firstName nvarchar(max) NOT NULL,
lastName nvarchar(max) NOT NULL,
CONSTRAINT Patient_PK PRIMARY KEY(patientNr)
)
GO
CREATE TABLE dbo.Admission
(
admitDateTime datetime NOT NULL,
patientNr int NOT NULL,
admitCode int,
isFirst bit,
CONSTRAINT Admission_PK PRIMARY KEY(patientNr, admitDateTime)
)
GO
CREATE TABLE dbo.Discharge
(
dischargeDateTime datetime NOT NULL,
patientNr int NOT NULL,
dischargeCode int NOT NULL,
CONSTRAINT Discharge_PK PRIMARY KEY(patientNr, dischargeDateTime)
)
GO
ALTER TABLE dbo.Admission ADD CONSTRAINT Admission_FK1 FOREIGN KEY (patientNr) REFERENCES dbo.Patient (patientNr) ON DELETE NO ACTION ON UPDATE NO ACTION
GO
ALTER TABLE dbo.Admission ADD CONSTRAINT Admission_FK2 FOREIGN KEY (admitCode) REFERENCES dbo.Code (codeNr) ON DELETE NO ACTION ON UPDATE NO ACTION
GO
ALTER TABLE dbo.Discharge ADD CONSTRAINT Discharge_FK1 FOREIGN KEY (patientNr) REFERENCES dbo.Patient (patientNr) ON DELETE NO ACTION ON UPDATE NO ACTION
GO
ALTER TABLE dbo.Discharge ADD CONSTRAINT Discharge_FK2 FOREIGN KEY (dischargeCode) REFERENCES dbo.Code (codeNr) ON DELETE NO ACTION ON UPDATE NO ACTION
GO
I'm stuck trying to figure out how to get one of the MERGE statements to work. See below code snippet:
DECLARE @PipelineRunID VARCHAR(100) = 'testestestestest'
MERGE [TGT].[AW_Production_Culture] as [Target]
USING [SRC].[AW_Production_Culture] as [Source]
ON [Target].[MD5Key] = [Source].[MD5Key]
WHEN MATCHED AND [Target].[MD5Others] != [Source].[MD5Others]
THEN UPDATE SET
[Target].[CultureID] = [Source].[CultureID]
,[Target].[ModifiedDate] = [Source].[ModifiedDate]
,[Target].[Name] = [Source].[Name]
,[Target].[MD5Others] = [Source].[MD5Others]
,[Target].[PipelineRunID] = @PipelineRunID
WHEN NOT MATCHED BY TARGET THEN
INSERT VALUES (
[Source].[AW_Production_CultureKey]
,[Source].[CultureID]
,[Source].[ModifiedDate]
,[Source].[Name]
,@PipelineRunID
,[Source].[MD5Key]
,[Source].[MD5Others]);
When I try and run this query I receive the following error:
Msg 257, Level 16, State 3, Line 16
Implicit conversion from data type varchar to varbinary is not allowed. Use the CONVERT function to run this query.
The only VARBINARY column types are MD5Key and MD5Others. As they are both linked to their corresponding columns I don't understand why my error message indicates there is a VARCHAR problem involved. Does anybody understand how and why I should use a CONVERT() function here?
Thanks!
--EDIT: Schema definitions
CREATE VIEW [SRC].[AW_Production_Culture]
WITH SCHEMABINDING
as
SELECT
CAST(CONCAT('',[CultureID]) as VARCHAR(100)) as [AW_Production_CultureKey]
,CAST(HASHBYTES('MD5',CONCAT('',[CultureID])) as VARBINARY(16)) as [MD5Key]
,CAST(HASHBYTES('MD5',CONCAT([ModifiedDate],'|',[Name])) as VARBINARY(16)) as [MD5Others]
,[CultureID],[ModifiedDate],[Name]
FROM
[SRC].[tbl_AW_Production_Culture]
CREATE TABLE [TGT].[AW_Production_Culture](
[AW_Production_CultureKey] [varchar](100) NOT NULL,
[CultureID] [nchar](6) NULL,
[ModifiedDate] [datetime] NULL,
[Name] [nvarchar](50) NULL,
[MD5Key] [varbinary](16) NOT NULL,
[MD5Others] [varbinary](16) NOT NULL,
[RecordValidFrom] [datetime2](7) GENERATED ALWAYS AS ROW START NOT NULL,
[RecordValidUntil] [datetime2](7) GENERATED ALWAYS AS ROW END NOT NULL,
[PipelineRunID] [varchar](36) NOT NULL,
PRIMARY KEY CLUSTERED
(
[MD5Key] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY],
PERIOD FOR SYSTEM_TIME ([RecordValidFrom], [RecordValidUntil])
) ON [PRIMARY]
WITH
(
SYSTEM_VERSIONING = ON ( HISTORY_TABLE = [TGT].[AW_Production_Culture_History] )
)
Reposting my comment as an answer for the sweet, sweet, internet points:
You're getting that error because your varbinary value is being inserted into a varchar column. As your columns have the correct types already then it means your INSERT clause has mismatched columns.
As it is, your MERGE statement is not explicitly listing the destination columns - you should always explicitly list columns in production code so that your DML queries won't break if columns are added or reordered or marked HIDDEN.
So to fix this, change your INSERT clause to explicitly list destination column names.
Also, when using MERGE you should use HOLDLOCK (Or a more suitable lock, if applicable) - otherwise you’ll run into concurrency issues. MERGE is not concurrency-safe by default!
Minor nit-picks that are largely subjective:
I personally prefer avoiding [escapedName] wherever possible and prefer using short table aliases.
e.g. use s and t instead of [Source] and [Target].
"Id" (for "identity" or "identifier") is an abbreviation, not an acronym - so it should be cased as Id and not ID.
Consider using an OUTPUT clause to help diagnose/debug issues too.
So I'd write it like so:
DECLARE @PipelineRunId VARCHAR(100) = 'testestestestest'
MERGE INTO
tgt.AW_Production_Culture WITH (HOLDLOCK) AS t
USING
src.AW_Production_Culture AS s ON t.MD5Key = s.MD5Key
WHEN MATCHED AND t.MD5Others != s.MD5Others THEN UPDATE SET
t.CultureId = s.CultureId,
t.ModifiedDate = s.ModifiedDate,
t.Name = s.Name,
t.MD5Others = s.MD5Others,
t.PipelineRunID = @PipelineRunId
WHEN NOT MATCHED BY TARGET THEN INSERT
(
AW_Production_CultureKey,
CultureId,
ModifiedDate,
[Name],
PipelineRunId,
MD5Key,
MD5Others
)
VALUES
(
s.AW_Production_CultureKey,
s.CultureId,
s.ModifiedDate,
s.[Name],
@PipelineRunId,
s.MD5Key,
s.MD5Others
)
OUTPUT
$action AS [Action],
inserted.*,
deleted.*;
I have this code, which works, but I want to insert into the temp table the same values (DateTime and Value) for another variable (UBB_PreT_Line_LA.If_TotalInFeddWeight) present in the same table ([Runtime].[dbo].[History]). Then I show the result in a table in SQL Report Builder 3.0.
SET NOCOUNT ON
DECLARE @fechaItem DATETIME;
DECLARE @fechaFinTotal DATETIME;
SET @fechaItem = DateAdd(hh,7,@Fecha)
SET @fechaFinTotal = DateAdd(hh,23,@Fecha)
SET NOCOUNT OFF
DECLARE @tblTotales TABLE
(
VALOR_FECHA DATETIME,
VALOR_VALUE float
)
WHILE @fechaItem < @fechaFinTotal
BEGIN
DECLARE @fechaFin DATETIME;
SET @fechaFin = DATEADD(minute, 15, @fechaItem );
INSERT INTO @tblTotales
SELECT
MAX( [DateTime] ),
MAX( [Value] )
FROM [Runtime].[dbo].[History]
WHERE
[DateTime] >= @fechaItem
AND [DateTime] <= @fechaFin
AND (History.TagName='UBB_PreT_Belt_PF101A.Time_Running')
SET @fechaItem = @fechaFin;
END
SELECT TOP 64 VALOR_FECHA as Fecha,VALOR_VALUE as Valor
FROM @tblTotales
order by Valor ASC
What I want is to combine into a single query the results I get from these two queries, which are identical except for the variable being queried.
The purpose is to create a unique Dataset in Report Builder to display in a single table, the data of the two tables of the image. The 15 minute interval is because I just want to show the variation of the values every 15 minutes.
I have modified the code (Image_02), and with the Query Designer of the Report Builder I have obtained what is shown in the Image_03. The final goal would be to have the data of the second variable, in two more columns on the right (Fecha_Ton and Valor_Ton). How can I do it?
If I've understood your question correctly, I think that this query replaces your code entirely (and adds the second value):
declare @sample table (DateTime datetime not null, Value int not null,
TagName varchar(50) not null)
insert into @sample (DateTime, Value, TagName) values
('2018-08-16T10:14:00',6,'UBB_PreT_Belt_PF101A.Time_Running'),
('2018-08-16T10:08:00',8,'UBB_PreT_Belt_PF101A.Time_Running'),
('2018-08-16T10:23:00',7,'UBB_PreT_Belt_PF101A.Time_Running'),
('2018-08-16T10:07:00',7,'UBB_PreT_Line_LA.If_TotalInFeddWeight')
declare @Fecha datetime
set @Fecha = '20180816'
select
MAX(DateTime),
MAX(CASE WHEN TagName='UBB_PreT_Line_LA.If_TotalInFeddWeight' THEN Value END) as Fed,
MAX(CASE WHEN TagName='UBB_PreT_Belt_PF101A.Time_Running' THEN Value END) as Running
from
@sample
where
DateTime >= DATEADD(hour,7,@Fecha) and
DateTime < DATEADD(hour,23,@Fecha) and
TagName in ('UBB_PreT_Line_LA.If_TotalInFeddWeight',
'UBB_PreT_Belt_PF101A.Time_Running')
group by DATEADD(minute,((DATEDIFF(minute,0,DateTime)/15)*15),0)
order by MAX(DateTime) asc
Results:
                        Fed         Running
----------------------- ----------- -----------
2018-08-16 10:14:00.000 7 8
2018-08-16 10:23:00.000 NULL 7
(You may want two separate date columns, following the same CASE pattern as the values.)
You shouldn't be building your data up row by agonising row¹; you should find a way (such as the one above) to express what the entire result set should look like as a single query. Let SQL Server itself decide whether it's going to do that by searching through the rows in date order, etc.
¹There may be circumstances where you end up having to do this, but exhaust any likely set-based options first.
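The GROUP BY expression in the query above is what does the 15-minute bucketing: DATEDIFF counts whole minutes from the epoch (date 0, 1900-01-01), integer division by 15 floors that count to a bucket boundary, and DATEADD converts it back to a datetime. A quick standalone check:

```sql
-- Floor '2018-08-16 10:23' to the start of its 15-minute bucket.
SELECT DATEADD(minute, (DATEDIFF(minute, 0, '2018-08-16T10:23:00') / 15) * 15, 0);
-- Returns 2018-08-16 10:15:00
```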
I wish to make sure that my data has the following check constraint in place:
This table can only have one BorderColour per hub/category. (eg. #FFAABB)
But it can have multiple nulls. (all the other rows are nulls, for this field)
Table Schema
ArticleId INT PRIMARY KEY NOT NULL IDENTITY
HubId TINYINT NOT NULL
CategoryId INT NOT NULL
Title NVARCHAR(100) NOT NULL
Content NVARCHAR(MAX) NOT NULL
BorderColour VARCHAR(7) -- Can be nullable.
I'm guessing I would have to make a check constraint? But I'm not sure how, etc.
Sample data:
1, 1, 1, 'test', 'blah...', '#FFAACC'
1, 1, 1, 'test2', 'sfsd', NULL
1, 1, 2, 'Test3', 'sdfsd dsf s', NULL
1, 1, 2, 'Test4', 'sfsdsss', '#AABBCC'
Now, if I add the following line, I should get some SQL error:
INSERT INTO tblArticle VALUES (1, 2, 'aaa', 'bbb', '#ABABAB')
any ideas?
CHECK constraints are ordinarily applied to a single row, however, you can cheat using a UDF:
CREATE FUNCTION dbo.CheckSingleBorderColorPerHubCategory
(
@HubID tinyint,
@CategoryID int
)
RETURNS BIT
AS BEGIN
RETURN CASE
WHEN EXISTS
(
SELECT HubID, CategoryID, COUNT(*) AS BorderColorCount
FROM Articles
WHERE HubID = @HubID
AND CategoryID = @CategoryID
AND BorderColor IS NOT NULL
GROUP BY HubID, CategoryID
HAVING COUNT(*) > 1
) THEN 0
ELSE 1
END
END
Then create the constraint and reference the UDF:
ALTER TABLE Articles
ADD CONSTRAINT CK_Articles_SingleBorderColorPerHubCategory
CHECK (dbo.CheckSingleBorderColorPerHubCategory(HubID, CategoryID) = 1)
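With the constraint in place, a second non-null colour for the same hub/category should now be rejected. A quick sanity check, using hypothetical values and assuming the question's columns exist on Articles:

```sql
INSERT INTO Articles (HubID, CategoryID, Title, Content, BorderColor)
VALUES (1, 1, 'test', 'blah...', '#FFAACC');   -- succeeds: first colour for (1, 1)

INSERT INTO Articles (HubID, CategoryID, Title, Content, BorderColor)
VALUES (1, 1, 'test2', 'more...', NULL);       -- succeeds: NULL colours are not counted

INSERT INTO Articles (HubID, CategoryID, Title, Content, BorderColor)
VALUES (1, 1, 'test3', 'etc...', '#ABABAB');   -- fails the CHECK constraint
```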
Another option is available if you are running SQL 2008. This version of SQL Server has a feature called filtered indexes.
Using this feature you can create a unique index that includes all rows except those where BorderColour is null.
CREATE TABLE [dbo].[UniqueExceptNulls](
[HubId] [tinyint] NOT NULL,
[CategoryId] [int] NOT NULL,
[BorderColour] [varchar](7) NULL,
)
GO
CREATE UNIQUE NONCLUSTERED INDEX UI_UniqueExceptNulls
ON [UniqueExceptNulls] (HubID,CategoryID)
WHERE BorderColour IS NOT NULL
This approach is cleaner than the approach in my other answer because it doesn't require creating extra computed columns. It also doesn't require you to have a unique column in the table, although you should have that anyway.
Finally, it will also be much faster than the UDF/Check Constraint solutions.
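A quick way to confirm the behaviour of the filtered index, using hypothetical values:

```sql
INSERT INTO UniqueExceptNulls VALUES (1, 1, '#FFAACC'); -- succeeds
INSERT INTO UniqueExceptNulls VALUES (1, 1, NULL);      -- succeeds: NULL rows are excluded from the index
INSERT INTO UniqueExceptNulls VALUES (1, 1, NULL);      -- succeeds: any number of NULLs is allowed
INSERT INTO UniqueExceptNulls VALUES (1, 1, '#AABBCC'); -- fails: second non-null colour for (1, 1)
```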
You can also do a trigger with something like this (this is actually overkill; you can make it cleaner by assuming the database is already in a valid state, i.e. UNION instead of UNION ALL, etc.):
IF EXISTS (
SELECT COUNT(BorderColour)
FROM (
SELECT INSERTED.HubId, INSERTED.CategoryId, INSERTED.BorderColour
FROM INSERTED
UNION ALL
SELECT HubId, CategoryId, BorderColour
FROM tblArticle
WHERE EXISTS (
SELECT *
FROM INSERTED
WHERE tblArticle.HubId = INSERTED.HubId
AND tblArticle.CategoryId = INSERTED.CategoryId
)
) AS X
GROUP BY HubId, CategoryId
HAVING COUNT(BorderColour) > 1
)
RAISERROR ('Only one BorderColour is allowed per hub/category.', 16, 1);
If you have a unique column in your table, then you can accomplish this by creating a unique constraint on a computed column.
The following sample creates a table that behaves as you described in your requirements and should perform better than a UDF-based check constraint. You might also be able to improve the performance further by making the computed column persisted.
CREATE TABLE [dbo].[UQTest](
[Id] INT IDENTITY(1,1) NOT NULL,
[HubId] TINYINT NOT NULL,
[CategoryId] INT NOT NULL,
[BorderColour] varchar(7) NULL,
[BorderColourUNQ] AS (CASE WHEN [BorderColour] IS NULL
THEN cast([ID] as varchar(50))
ELSE cast([HuBID] as varchar(3)) + '_' +
cast([CategoryID] as varchar(20)) END
),
CONSTRAINT [UQTest_Unique]
UNIQUE ([BorderColourUNQ])
)
The one possibly undesirable facet of the above implementation is that it allows a category/hub to have both a Null AND a color defined. If this is a problem, let me know and I'll tweak my answer to address that.
PS: Sorry about my previous (incorrect) answer. I didn't read the question closely enough.