Check-in and check-out time in SQL - sql-server

My table is:
SELECT TOP (1000)
[ID]
,[UserName]
,[CheckTime]
,[Checktype]
,[CheckinLocation]
,[lat]
,[lng]
FROM
[dbo].[CheckTime]
INSERT INTO [dbo].[CheckTime] ([UserName], [CheckTime], [Checktype],[CheckinLocation], [lat], [lng])
VALUES (<UserName, nchar(10),>
,<CheckTime, datetime,>
,<Checktype, nvarchar(50),>
,<CheckinLocation, nvarchar(50),>
,<lat, float,>
,<lng, float,>)
GO
Create table script:
CREATE TABLE [dbo].[CheckTime]
(
[ID] [int] IDENTITY(1,1) NOT NULL,
[UserName] [nchar](10) NULL,
[CheckTime] [datetime] NULL,
[Checktype] [nvarchar](50) NULL,
[CheckinLocation] [nvarchar](50) NULL,
[lat] [float] NULL,
[lng] [float] NULL,
CONSTRAINT [PK_CheckTime]
PRIMARY KEY CLUSTERED ([ID] ASC)
WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
I need to select, for each distinct user and day, the minimum and maximum value of CheckTime:
max CheckTime as check out
min CheckTime as check in
I need a result like this:
id | Username | check in | check out
---+----------+-------------------+-------------------
1 | 10 | 2017-1-2 08:02:05 | 2017-1-2 10:02:05
1 | 12 | 2017-1-2 08:02:05 | 2017-1-2 10:02:05
1 | 12 | 2017-1-3 08:02:05 | 2017-1-3 10:02:05
1 | 10 | 2017-1-3 08:02:05 | 2017-1-3 10:02:05
I have tried:

You can try the following query. Note that it groups by the user and the calendar date only; including the identity ID in the GROUP BY would put every row in its own group.
Select UserName
, Cast(CheckTime as Date) as CheckDate
, min(CheckTime) as [check in]
, max(CheckTime) as [check out]
From dbo.CheckTime
Group by UserName, Cast(CheckTime as Date)
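If the Checktype column already distinguishes check-ins from check-outs, conditional aggregation is a possible variant. This is only a sketch: the 'in' and 'out' literals are assumptions, so substitute whatever values Checktype actually stores.
Select UserName
, Cast(CheckTime as Date) as CheckDate
, min(case when Checktype = 'in' then CheckTime end) as [check in]   -- 'in' is an assumed Checktype value
, max(case when Checktype = 'out' then CheckTime end) as [check out] -- 'out' is an assumed Checktype value
From dbo.CheckTime
Group by UserName, Cast(CheckTime as Date)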

Related

JSON Many-to-Many Relationship Group By

I'm trying to create an SQL query allowing me to do this:
I have 3 tables in SQL Server 2017:
CREATE TABLE [dbo].[PRODUCTCATEGORY]
(
[PROD_ID] [int] NOT NULL,
[CAT_ID] [int] NOT NULL,
CONSTRAINT [PK_PRODUCTCATEGORY]
PRIMARY KEY CLUSTERED ([PROD_ID] ASC, [CAT_ID] ASC)
)
CREATE TABLE [dbo].[CATEGORY]
(
[CAT_ID] [int] IDENTITY(1,1) NOT NULL,
[CAT_TITLE] [varchar](50) NOT NULL,
CONSTRAINT [PK_CATEGORY]
PRIMARY KEY CLUSTERED ([CAT_ID] ASC)
)
CREATE TABLE [dbo].[PRODUCT]
(
[PROD_ID] [int] IDENTITY(1,1) NOT NULL,
[PROD_TITLE] [varchar](50) NOT NULL,
CONSTRAINT [PK_PRODUCT]
PRIMARY KEY CLUSTERED ([PROD_ID] ASC)
)
A product can have 1 to many categories
A category can have 1 to many products
PROD_ID | PROD_TITLE
--------+-----------
1       | Book 1
2       | Book 2

CAT_ID | CAT_TITLE
-------+----------
1      | Cat 1
2      | Cat 2
3      | Cat 3

PROD_ID | CAT_ID
--------+-------
1       | 1
1       | 2
2       | 1
2       | 3
I would like to retrieve this:
| CAT_ID |CAT_TITLE | PRODUCTS |
|:------- |:--------:|:------------------------------------------------------------------------|
| 1 | Cat 1 |[{"PROD_ID":1,"PROD_TITLE":"Book 1"},{"PROD_ID":2,"PROD_TITLE":"Book 2"}]|
| 2 | Cat 2 |[{"PROD_ID":1,"PROD_TITLE":"Book 1"}] |
| 3 | Cat 3 |[{"PROD_ID":2,"PROD_TITLE":"Book 2"}] |
Thanks for your help
I just found this, using FOR JSON:
https://learn.microsoft.com/en-us/sql/relational-databases/json/format-query-results-as-json-with-for-json-sql-server?view=sql-server-ver15
I think something like this might work:
SELECT c.CAT_ID, c.CAT_TITLE,
(
SELECT p.PROD_ID, p.PROD_TITLE
FROM PRODUCT p
JOIN PRODUCTCATEGORY pc ON pc.PROD_ID = p.PROD_ID
WHERE pc.CAT_ID = c.CAT_ID
FOR JSON PATH
) AS ProductsAsJson
FROM CATEGORY c
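One caveat: a category with no products would get NULL instead of an empty JSON array from the correlated subquery. If that matters, ISNULL can supply the empty array (a sketch on the same schema):
SELECT c.CAT_ID, c.CAT_TITLE,
ISNULL((
SELECT p.PROD_ID, p.PROD_TITLE
FROM PRODUCT p
JOIN PRODUCTCATEGORY pc ON pc.PROD_ID = p.PROD_ID
WHERE pc.CAT_ID = c.CAT_ID
FOR JSON PATH
), '[]') AS PRODUCTS -- fall back to an empty JSON array
FROM CATEGORY c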

Create 2 paired data rows sharing one unique id

I want to insert 2 data rows with one query, both carrying the same findable unique id (pairId).
The 2 differences are the side column (1 = buyer, 0 = seller) and the userId column.
Trying to visualize the result table:
id | userId | side | price | qty | pairId
---+--------+------+-------+-----+-------
 1 |      6 |    0 |    60 |  10 |      1
 2 |      9 |    1 |    60 |  10 |      1
In SQL Server I tried SCOPE_IDENTITY():
insert into [dbo].[deals] (side, price, qty, pairId)
values (1, 60, 10, SCOPE_IDENTITY()),
(0, 60, 10, SCOPE_IDENTITY())
create table command:
CREATE TABLE [demonstration].[dbo].[Deals](
[id] [bigint] IDENTITY(1,1) NOT NULL,
[userId] [int] NULL,
[side] [smallint] NULL,
[qty] [decimal](18, 4) NULL,
[price] [decimal](18, 4) NULL,
[pairId] [bigint] NULL
) ON [PRIMARY]
GO
Add an IDENTITY column to the deals table (or alter an existing column into an identity), then use your query with IDENT_CURRENT:
insert into [dbo].[deals] (side, price, qty, pairId)
values (1, 60, 10, IDENT_CURRENT('deals')+1),
(0, 60, 10, IDENT_CURRENT('deals')+1)
The +1 is added because IDENT_CURRENT('deals') returns the last identity value already generated, not the one the new rows are about to receive.
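Be aware that IDENT_CURRENT is not safe under concurrent inserts: another session can advance the identity between reading the value and inserting. A sequence object is one concurrency-safe alternative; this is a sketch, and dbo.PairIdSeq is a hypothetical name:
-- hypothetical sequence dedicated to pair ids
CREATE SEQUENCE dbo.PairIdSeq START WITH 1 INCREMENT BY 1;
GO
DECLARE @pairId bigint;
SET @pairId = NEXT VALUE FOR dbo.PairIdSeq; -- both rows get the same value
insert into [dbo].[deals] (userId, side, price, qty, pairId)
values (9, 1, 60, 10, @pairId),
(6, 0, 60, 10, @pairId);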

Get all rows between start and end flag

I've got a data structure similar to this:
Parameter | Value | DateTime
----------------------------
Switch | "on" | 2019-10-13 15:01:25
Temp | 25 | 2019-10-13 15:01:37
Pressure | 1006 | 2019-10-13 15:01:53
...
Temp | 22 | 2019-10-13 15:04:41
Switch | "off" | 2019-10-13 15:04:59
...
Switch | "on" | 2019-10-13 17:14:51
Temp | 27 | 2019-10-13 17:15:07
...
Switch | "off" | 2019-10-13 17:17:43
Between each pair of Switch "on" and "off" I have to calculate values for the parameters, i.e. average or max/min and so on. How can I split the data into multiple groups so I can run the calculation per on/off cycle?
I think this should be solvable with
- Stored Procedure (statement?)
- SSIS package (how?)
- .NET application.
What might be the best way to solve this issue?
Thanks in advance.
Update
This is the full structure of the table.
CREATE TABLE [schema].[foo]
(
[Id] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
[Group] VARCHAR(20) NOT NULL,
[Parameter] VARCHAR(50) NOT NULL,
[Type] VARCHAR(50) NOT NULL,
[Timestamp] DATETIME NOT NULL,
[Value] NVARCHAR(255) NOT NULL,
[Unit] VARCHAR(10) NOT NULL,
-- Only for logging. No logic for the use case.
[InsertedTimestampUtc] DATETIME NOT NULL DEFAULT(GetUtcDate()),
[IsProcessed] INT NOT NULL DEFAULT(0)
)
If I understand your question correctly, the next approach may help to get the expected results:
Table:
CREATE TABLE #Data (
[DateTime] datetime,
[Parameter] varchar(50),
[Value] varchar(10)
)
INSERT INTO #Data
([DateTime], [Parameter], [Value])
VALUES
('2019-10-13T15:01:25', 'Switch', 'on'),
('2019-10-13T15:01:37', 'Temp', '25'),
('2019-10-13T15:01:53', 'Pressure', '1006'),
('2019-10-13T15:04:41', 'Temp', '22'),
('2019-10-13T15:04:59', 'Switch', 'off'),
('2019-10-13T17:14:51', 'Switch', 'on'),
('2019-10-13T17:15:07', 'Temp', '27'),
('2019-10-13T17:17:43', 'Switch', 'off')
Statement:
;WITH ChangesCTE AS (
SELECT
*,
CASE WHEN [Parameter] = 'Switch' AND [Value] = 'on' THEN 1 ELSE 0 END AS ChangeIndex
FROM #Data
), GroupsCTE AS (
SELECT
*,
SUM(ChangeIndex) OVER (ORDER BY [DateTime]) AS GroupIndex
FROM ChangesCTE
)
SELECT [GroupIndex], [Parameter], AVG(TRY_CONVERT(int, [Value]) * 1.0) AS [AvgValue]
FROM GroupsCTE
WHERE [Parameter] <> 'Switch'
GROUP BY [GroupIndex], [Parameter]
Results:
GroupIndex | Parameter | AvgValue
-----------+-----------+------------
1          | Pressure  | 1006.000000
1          | Temp      | 23.500000
2          | Temp      | 27.000000
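The running SUM(ChangeIndex) OVER (ORDER BY [DateTime]) is the key step: every Switch/'on' row increments the counter, so all rows from one 'on' up to the next 'on' share the same GroupIndex. Since the question also asks for max/min, the final SELECT can simply carry more aggregates; a small sketch reusing the same CTEs:
SELECT [GroupIndex], [Parameter],
AVG(TRY_CONVERT(int, [Value]) * 1.0) AS [AvgValue],
MIN(TRY_CONVERT(int, [Value])) AS [MinValue],
MAX(TRY_CONVERT(int, [Value])) AS [MaxValue]
FROM GroupsCTE
WHERE [Parameter] <> 'Switch'
GROUP BY [GroupIndex], [Parameter]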

How could I make a series of joins work with max value when aggregates do not work in them?

I'm looking only to get classification ids which are between the valid year range in classification. I'm using left joins because NULLs should be permitted.
I have tables:
CREATE TABLE classifications (
[id] [bigint] IDENTITY(1,1) NOT NULL,
[classification_code] [varchar](20) NOT NULL,
[description] [varchar](255) NULL,
[valid_from] [int] NULL,
[valid_to] [int] NULL
--Rest of constraints...
)
insert into classifications (classification_code, description, valid_from, valid_to)
values ('05012','Classification Number 1',2007,2012),
('05012','Classification Number 1',2013,2016),
('05012','Classification Number 1',2017,2020),
('12043','Classification Number 2',2007,2010),
('12043','Classification Number 2',2011,2020),
('12345','Classification Number 3',2013,2015),
('12345','Classification Number 3',2016,2020),
('54321','Classification Number 4',2007,2009),
('54321','Classification Number 4',2010,2013),
('54321','Classification Number 4',2014,2020)
CREATE TABLE comm_info_a (
[id] [bigint] IDENTITY(1,1) NOT NULL,
[comm_code] [nchar](10) NOT NULL, /*should be unique*/
[classification_code] [nchar](6) NULL,
[thing] [nchar](6) NULL
--Rest of constraints...
)
insert into comm_info_a (comm_code, classification_code)
values ('0100100000','54321'),
('8090010000','05012'),
('5002310010','12043'),
('0987654321','54321')
CREATE TABLE comm_info_b (
[id] [bigint] IDENTITY(1,1) NOT NULL,
[comm_code] [nchar](10) NOT NULL, /*should be unique*/
[classification_code] [nchar](6) NULL
--Rest of constraints...
)
insert into comm_info_b (comm_code, classification_code)
values ('0100100000','12043'),
('8090010000','00000'),
('5002310010','05012'),
('1234567890','12345')
CREATE TABLE transactions (
[comm_code] [varchar](50) NULL,
[year] [varchar](255) NULL
--Rest of constraints...
)
insert into transactions (comm_code, year) values
('0100100000', 2013),
('0100100000', 2015),
('0100100000', 2017),
('8090010000', 2009),
('8090010000', 2010),
('8090010000', 2011),
('8090010000', 2015),
('8090010000', 2017),
('8090010000', 2018),
('5002310010', 2008),
('5002310010', 2014)
And finally:
CREATE TABLE comm (
[id] [bigint] IDENTITY(1,1) NOT NULL,
[comm_code] [varchar](20) NULL, /*should be unique*/
[fk_classification_id_a] [bigint] NULL,
[fk_classification_id_b] [bigint] NULL
--Rest of constraints...
)
I am working on a query to insert comms from transactions; comms should have a unique comm_code.
The query is as follows:
INSERT INTO comm
(comm_code,
fk_classification_id_a,
fk_classification_id_b)
SELECT comm_code,
ca.id,
cb.id,
MAX(year)
FROM transactions t
LEFT JOIN comm_info_a mia ON mia.comm_code=t.comm_code
LEFT JOIN comm_info_b mib ON mib.comm_code=t.comm_code
--these next two joins obviously do not work so I'm looking for something like it. Treat them as 'pseudo-code'
LEFT JOIN classifications ca ON ca.classification_code=mia.classification_code AND
MAX(t.year) BETWEEN ca.valid_from AND ca.valid_to
LEFT JOIN classifications cb ON cb.classification_code=mib.classification_code AND
MAX(t.year) BETWEEN cb.valid_from AND cb.valid_to
-- end of the two joins
WHERE NOT EXISTS
(SELECT DISTINCT comm_code FROM comm)
GROUP BY
t.comm_code
t.classification_code
So in the end I'm looking to get something like this as a result:
comm_code | fk_classification_id_a | fk_classification_id_b
-----------|------------------------|-----------------------
0100100000 | 5 | 10
8090010000 | 3 | NULL
5002310010 | 5 | 2
Please note that the comm_code is unique in this table! Therefore I want the comms based on the newest transactions (hence the aggregate max year), and they should carry the ids of the classifications whose validity range contains that transaction year.
The real query is much more complex and longer but this pretty much covers all bases. Take a look into what is commented. I understand that it should be doable with a sub query of some sort, and I've tried, but so far I haven't found a way to pass aggregates to subqueries.
How could I tackle this problem?
This revised answer uses a common table expression to calculate the maximum year per comm_code and to exclude the comm_codes not wanted in the final result. After that, the joins to the classification tables are straightforward, as we have the comm_max_year value on each row to use in the joins.
with transCTE as (
select
t.*
, max(t.year) over(partition by comm_code) comm_max_year
from transactions t
left join comm on t.comm_code = comm.comm_code -- this table not in sample given
where comm.comm_code IS NULL -- use instead of NOT EXISTS
)
SELECT DISTINCT
t.comm_code
, ca.id as fk_classification_id_a
, cb.id as fk_classification_id_b
, t.comm_max_year
FROM transCTE t
LEFT JOIN comm_info_a mia ON mia.comm_code = t.comm_code
LEFT JOIN classifications ca ON mia.classification_code = ca.classification_code
AND t.comm_max_year BETWEEN ca.valid_from AND ca.valid_to
LEFT JOIN comm_info_b mib ON mib.comm_code = t.comm_code
LEFT JOIN classifications cb ON mib.classification_code = cb.classification_code
AND t.comm_max_year BETWEEN cb.valid_from AND cb.valid_to
ORDER BY
t.comm_code
;
GO
comm_code | fk_classification_id_a | fk_classification_id_b | comm_max_year
:--------- | :--------------------- | :--------------------- | :------------
0100100000 | 10 | 5 | 2017
5002310010 | 5 | 2 | 2014
8090010000 | 3 | null | 2018
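Since the end goal is to populate comm, the same query can feed the INSERT directly; a sketch reusing the CTE (the ORDER BY and the display-only comm_max_year column are dropped, as neither belongs in an INSERT ... SELECT):
with transCTE as (
select
t.*
, max(t.year) over(partition by comm_code) comm_max_year
from transactions t
left join comm on t.comm_code = comm.comm_code
where comm.comm_code IS NULL -- skip comm_codes already present
)
INSERT INTO comm (comm_code, fk_classification_id_a, fk_classification_id_b)
SELECT DISTINCT
t.comm_code
, ca.id
, cb.id
FROM transCTE t
LEFT JOIN comm_info_a mia ON mia.comm_code = t.comm_code
LEFT JOIN classifications ca ON mia.classification_code = ca.classification_code
AND t.comm_max_year BETWEEN ca.valid_from AND ca.valid_to
LEFT JOIN comm_info_b mib ON mib.comm_code = t.comm_code
LEFT JOIN classifications cb ON mib.classification_code = cb.classification_code
AND t.comm_max_year BETWEEN cb.valid_from AND cb.valid_to;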
Demo at dbfiddle here

SQL Server query plan using Hash Aggregate vs Stream Aggregate

I have three tables
CREATE TABLE [dbo].[caja]
(
[orden] [int] IDENTITY(1,1) NOT NULL,
[ejercicio] [int] NOT NULL,
[numero] [int] NOT NULL,
[tipo] [char](1) NOT NULL CONSTRAINT [DF_caja_tipo] DEFAULT ('N'),
[inicial] [int] NOT NULL,
[final] [int] NOT NULL,
[total] [int] NOT NULL,
CONSTRAINT [PK_caja]
PRIMARY KEY CLUSTERED ([orden] ASC),
CONSTRAINT [IX___caja__ejercicio_numero]
UNIQUE NONCLUSTERED ([ejercicio] ASC, [numero] ASC, [tipo] ASC),
CONSTRAINT [IX___caja__tipo_inicial]
UNIQUE NONCLUSTERED ([tipo] ASC, [inicial] ASC)
) ON [PRIMARY]
CREATE TABLE [dbo].[holograma]
(
[orden] [int] IDENTITY(1,1) NOT NULL,
[taller] [int] NOT NULL,
[tipo] [nchar](1) NOT NULL,
[inicial] [int] NOT NULL,
[final] [int] NOT NULL,
[total] [int] NOT NULL,
[fecha] [smalldatetime] NOT NULL,
CONSTRAINT [PK_holograma]
PRIMARY KEY CLUSTERED ([tipo] ASC, [inicial] ASC)
)
CREATE TABLE [dbo].[Tally]
(
[N] [int] IDENTITY(1,1) NOT NULL
CONSTRAINT [PK_Tally_N]
PRIMARY KEY CLUSTERED ([N] ASC)
)
The Tally table contains one million records, from N=1 to 1,000,000.
The caja table contains the list of valid values to insert into the holograma table.
Example:
orden | ejercicio | numero | tipo | inicial | final
888 | 2015 | 74 | R | 50144001 | 50144660
889 | 2015 | 75 | R | 50144661 | 50146660
and holograma:
taller | tipo | inicial  | final    | total | fecha
160    | A    | 50144651 | 50144750 | 100   | 15/04/2015  <-- spans values of two cajas
(missing data)
49     | A    | 50144826 | 50145025 | 200   | 15/04/2015
I'm trying to get the missing data. Using the example, it must show me the range from 50144751 to 50144825, i.e. 75 numbers.
The problem is with the COUNT aggregate: it takes too much time when I delimit the values. This is my query:
declare @tipo nchar(1)
, @numero int
, @ejercicio int
, @largo int
;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
select @tipo='A', @numero=176;
select @ejercicio=2015, @largo=10000000;
with
c as ( /* La caja a buscar */
select tipo, numero, inicial / @largo as serie, inicial as cInicial, final as cFinal, total, entregados, orden as cOrden
from caja
where 1=1
and (tipo=@tipo)
and (numero=@numero)
and ejercicio=@ejercicio
)
, h as ( /* Los hologramas que corresponden a esa caja */
select serie, inicial - (serie*@largo) as hInicial, final - (serie*@largo) as hFinal
, h1.orden as hOrden
, cOrden, cInicial, cFinal
from holograma as h1
inner join c
on h1.tipo=c.tipo
and h1.inicial>=40000000
and (h1.inicial between c.cInicial and c.cFinal or h1.final between c.cInicial and c.cFinal)
)
, t2 as ( /* se usa para corregir */
select n
from tally
inner join c on (n between cInicial - (serie*@largo) and cFinal - (serie*@largo))
)
, t as ( /* Generar los números individuales según la ENTREGA de hologramas */
select serie,n as nHolograma
, hOrden, h.cOrden
from t2
inner join h on (n between hInicial and hFinal)
)
, e as ( /* cuantos hologramas por caja se han entregado. este se usa para corregir tabla caja */
select cOrden, COUNT(nHolograma) as totalG
from t
group by cOrden
)
select * from e
And the query plan is
https://pastee.org/kpt3t
but if, in the "t" subquery, I change the "t2" table to the "tally" table
, t as ( /* Generar los números individuales según la ENTREGA de hologramas */
select serie,n as nHolograma
, hOrden, cOrden
from tally, h
where 1=1
and (n between hInicial and hFinal)
)
the query is almost instant. This is the query plan:
https://pastee.org/hfpnz
The difference is a Stream Aggregate when using the tally table versus a Hash Match aggregate when using the t2 subquery (line 196 of the plan).
I use the t2 subquery to delimit the tally numbers to the values of the current caja.
Why does the aggregate change? It makes a minute of difference: about 1 second without delimiting the numbers versus 1:02 when delimiting.
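Not part of the original question, but one way to confirm whether the aggregate strategy itself drives the difference is to force it with the documented {HASH | ORDER} GROUP query hints and compare the two plans. For example, appended to the final select:
select * from e
option (order group); -- forces stream aggregates; option (hash group) forces hash aggregates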
