I created a table managers:
create table managers(
ManagerId int identity(1,1) not for replication not null,
M_name varchar(20),
Salary varchar(20),
joining_year varchar(20),
city varchar(20),
IdCode int
)
then inserted some data into this table:
ManagerId | M_name  | Salary | joining_year | city    | IdCode
---------------------------------------------------------------
1         | riva    | 50000  | 1998         | pune    | 4
2         | tanmay  | 48500  | 1990         | gurgaon | 2
3         | david   | 49000  | 2001         | goa     | 2
4         | null    | null   | null         | null    | null
5         | null    | null   | null         | null    | null
6         | dannial | 52185  | 2010         | kanpur  | 6
And I have a second table, managerEmp:
create table managerEmp(
employeId int identity(1,1) not for replication not null,
family_member varchar(20),
wife_name varchar(20),
age int
)
I have some data in that table:
employeId | family_member | wife_name  | age
----------------------------------------------
1         | 6             | mrs.kapoor | 31
2         | 5             | mrs.mishra | 25
3         | null          | null       | null
4         | 2             | mrs.khan   | 21
5         | 4             | mrs.bajaj  | 22
Now I want to select the uncommon data from these two tables, i.e. the rows where either the manager data or the employee data is missing. The result would be:
M_name  | Salary | city   | wife_name | age
---------------------------------------------
null    | null   | null   | mrs.khan  | 21
null    | null   | null   | mrs.bajaj | 22
dannial | 52185  | kanpur | null      | null
A query based on your expected output:
SELECT M_name, Salary, city, wife_name, age
FROM managers
LEFT JOIN managerEmp ON managers.ManagerId = managerEmp.employeId
WHERE M_name IS NULL OR employeId IS NULL
Link to SQL fiddle http://sqlfiddle.com/#!6/e1776/1
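As an aside, the LEFT JOIN can only surface rows coming from managers; an employeId in managerEmp with no matching ManagerId would never show up. If that case matters, a FULL OUTER JOIN with the same filter is a sketch that covers both directions:
-- Sketch: same filter, but unmatched rows from either table survive the join
SELECT M_name, Salary, city, wife_name, age
FROM managers
FULL OUTER JOIN managerEmp ON managers.ManagerId = managerEmp.employeId
WHERE M_name IS NULL OR employeId IS NULL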
I have a SQL Server table as follows. I would like to group by name and place of the test taken, ordered by date ascending, partitioned by that grouping.
A configurable window of e.g. 4 days is provided. In the table below, if the first test is taken on 02/01/2019 (1st Feb), its score is taken, and any other test retaken within the next 4-day window shall not be considered. If a record also falls within the 4-day window of an already excluded item (for example row id 4), it shall be excluded as well.
Any SQL statement for this logic is much appreciated.
CREATE TABLE test(
[recordid] int IDENTITY(1,1) PRIMARY KEY,
[name] [nvarchar](25) NULL,
[testcentre] [nvarchar](25) NULL,
[testdate] [smalldatetime] NOT NULL,
[testscore] [int],
[Preferred_Output] [int],
[Result] [nvarchar](75) NULL
)
GO
INSERT INTO test
(
[name],
[testcentre],
[testdate],
[testscore],
[Preferred_Output],
[Result] )
VALUES
('George','bangalore',' 02/01/2019',1,1,'Selected as first item -grouped by name and location'),
('George','bangalore',' 02/02/2019',0,0,'ignore as within 4 days'),
('George','bangalore',' 02/04/2019',1,0,'ignore as within 4 days'),
('George','bangalore',' 02/06/2019',3,0,'ignore as within 4 days from already ignored item -04-02-2019'),
('George','bangalore',' 02/15/2019',2,2,'Selected as second item -grouped by name and location'),
('George','bangalore',' 02/18/2019',5,0,'ignore as within 4 days of previous'),
('George','Pune',' 02/15/2019',4,3,'Selected as third item'),
('George','Pune',' 02/18/2019',6,0,'ignore as within 4 days of previous'),
('George','Pune',' 02/19/2019',7,0,'ignore as within 4 days of previous'),
('George','Pune',' 02/20/2019',8,0,'ignore as within 4 days of previous')
GO
select * from test
GO
+----------+--------+------------+------------+-----------+------------------+
| recordid | name | testcentre | testdate | testscore | Preferred_Output |
+----------+--------+------------+------------+-----------+------------------+
| 1 | George | bangalore | 02/01/2019 | 1 | 1 |
| 2 | George | bangalore | 02/02/2019 | 0 | 0 |
| 3 | George | bangalore | 02/04/2019 | 1 | 0 |
| 4 | George | bangalore | 02/06/2019 | 3 | 0 |
| 5 | George | bangalore | 02/15/2019 | 2 | 2 |
| 6 | George | bangalore | 02/18/2019 | 5 | 0 |
| 7 | George | Pune | 02/15/2019 | 4 | 3 |
| 8 | George | Pune | 02/18/2019 | 6 | 0 |
| 9 | George | Pune | 02/19/2019 | 7 | 0 |
| 10 | George | Pune | 02/20/2019 | 8 | 0 |
+----------+--------+------------+------------+-----------+------------------+
I don't think that a recursive query is required for this. You want to compare the dates across consecutive records, so this is a kind of gaps-and-islands problem, where you want to identify the start of each island.
Window functions can do that:
select t.*,
       -- keep the score only when this row starts a new island: either it is the
       -- first test for this name/centre, or it is more than 4 days after the previous one
       case when lag_testdate is null or testdate > dateadd(day, 4, lag_testdate)
            then testscore
            else 0
       end as new_score
from (
    select t.*, lag(testdate) over (partition by name, testcentre order by testdate) as lag_testdate
    from test t
) t
Demo on DB Fiddle
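If only the selected tests are needed (the rows where Preferred_Output is non-zero), the same derived table can simply be filtered; a sketch:
-- Sketch: keep only the rows that start a new 4-day island per name/centre
select t.recordid, t.name, t.testcentre, t.testdate, t.testscore
from (
    select t.*, lag(testdate) over (partition by name, testcentre order by testdate) as lag_testdate
    from test t
) t
where lag_testdate is null
   or testdate > dateadd(day, 4, lag_testdate)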
I want to log any field changes in the Items table to a log table called Events.
CREATE TABLE [dbo].[Items]
(
[Id] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](100) NULL,
[Description] [nvarchar](max) NULL,
[ParentId] [int] NULL,
[EntityStatusId] [int] NOT NULL,
[ItemTypeId] [int] NOT NULL,
[StartDate] [datetimeoffset](7) NULL,
[DueDate] [datetimeoffset](7) NULL,
[Budget] [decimal](18, 2) NULL,
[Cost] [decimal](18, 2) NULL,
[Progress] [int] NULL,
[StatusTypeId] [int] NULL,
[ImportanceTypeId] [int] NULL,
[PriorityTypeId] [int] NULL,
[CreatedDate] [datetimeoffset](7) NULL,
[HideChildren] [bit] NOT NULL,
[TenantId] [int] NOT NULL,
[OwnedBy] [int] NOT NULL,
[Details] [nvarchar](max) NULL,
[Inserted] [datetimeoffset](0) NOT NULL,
[Updated] [datetimeoffset](0) NOT NULL,
[InsertedBy] [int] NULL,
[UpdatedBy] [int] NULL,
)
For each changed column, I want to add a row to this table. The table will hold changes for the Items table, but later it will hold changes for other tables as well. I would like the trigger to be as dynamic as possible, so the same basic trigger can be used for other tables too. If columns are added to or removed from a table, the trigger should discover that and not break.
CREATE TABLE [dbo].[Events]
(
[Id] [int] IDENTITY(1,1) NOT NULL,
[RecordId] [int] NOT NULL, -- Item.Id
[EventTypeId] [int] NOT NULL, -- Always 2
[EventDate] [datetimeoffset](0) NOT NULL, --GetUTCDate()
[ColumnName] [nvarchar](50) NULL, --The column name that changed
[OriginalValue] [nvarchar](max) NULL, --The original Value
[NewValue] [nvarchar](max) NULL, --The New Value
[TenantId] [int] NOT NULL, --Item.TentantId
[AppUserId] [int] NOT NULL, --Item.ModifiedBy
[TableName] [int] NOT NULL --The Name of the Table (Item in this case, but later there will be others)
)
I am trying to write an Update trigger, but am finding it difficult.
I know there are Inserted and Deleted tables that hold the new and old values.
So how do I actually achieve that? It seems that it ought to be dynamic so that if columns are added, it doesn't break anything.
If I were writing this in C#, I would get all the column names, loop through them to find the changed fields, and then create an Event for each of them. But I don't see how to do this with SQL.
UPDATE TO RESPOND TO ANSWER:
This answer works when editing in SSMS. However, in practice the app uses Entity Framework, and it appears to be doing something strange, as this is what gets logged. Note that only one column actually had different values in OriginalValue/NewValue, which is why I was trying to check that the values were actually different before doing the insert.
+----+----------+-------------+----------------------------+------------------+----------------------------+----------------------------+----------+-----------+---------+-----------+
| Id | RecordId | EventTypeId | EventDate | ColumnName | OriginalValue | NewValue | TenantId | AppUserId | TableId | TableName |
+----+----------+-------------+----------------------------+------------------+----------------------------+----------------------------+----------+-----------+---------+-----------+
| 21 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | Name | Task 2 | Task 2A | 8 | 11 | NULL | Item |
| 22 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | Description | NULL | NULL | 8 | 11 | NULL | Item |
| 23 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | ParentId | 238 | 238 | 8 | 11 | NULL | Item |
| 24 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | EntityStatusId | 1 | 1 | 8 | 11 | NULL | Item |
| 25 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | ItemTypeId | 3 | 3 | 8 | 11 | NULL | Item |
| 26 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | StartDate | NULL | NULL | 8 | 11 | NULL | Item |
| 27 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | DueDate | NULL | NULL | 8 | 11 | NULL | Item |
| 28 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | Budget | NULL | NULL | 8 | 11 | NULL | Item |
| 29 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | Cost | NULL | NULL | 8 | 11 | NULL | Item |
| 30 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | Progress | NULL | NULL | 8 | 11 | NULL | Item |
| 31 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | StatusTypeId | 1 | 1 | 8 | 11 | NULL | Item |
| 32 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | ImportanceTypeId | NULL | NULL | 8 | 11 | NULL | Item |
| 33 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | PriorityTypeId | NULL | NULL | 8 | 11 | NULL | Item |
| 34 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | OwnedBy | 11 | 11 | 8 | 11 | NULL | Item |
| 35 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | Details | <p><span></span></p> | <p><span></span></p> | 8 | 11 | NULL | Item |
| 36 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | Inserted | 0001-01-01 00:00:00 +00:00 | 0001-01-01 00:00:00 +00:00 | 8 | 11 | NULL | Item |
| 37 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | Updated | 0001-01-01 00:00:00 +00:00 | 0001-01-01 00:00:00 +00:00 | 8 | 11 | NULL | Item |
| 38 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | InsertedBy | 11 | 11 | 8 | 11 | NULL | Item |
| 39 | 397 | 2 | 2018-04-22 15:42:16 +00:00 | UpdatedBy | 11 | 11 | 8 | 11 | NULL | Item |
+----+----------+-------------+----------------------------+------------------+----------------------------+----------------------------+----------+-----------+---------+-----------+
Here's one way of doing it using COLUMNS_UPDATED. The trigger does not depend on column names, so you can add or remove columns without a problem. I have added some comments in the query.
create trigger audit on Items
after update
as
begin
    set nocount on;

    create table #updatedCols (Id int identity(1, 1), updateCol nvarchar(200))

    --find all columns that were updated and write them to a temp table
    insert into #updatedCols (updateCol)
    select
        column_name
    from
        information_schema.columns
    where
        table_name = 'Items'
        and convert(varbinary, reverse(columns_updated())) & power(convert(bigint, 2), ordinal_position - 1) > 0

    --temp tables are used because inserted and deleted tables are not available in dynamic SQL
    select * into #tempInserted from inserted
    select * into #tempDeleted from deleted

    declare @cnt int = 1
    declare @rowCnt int
    declare @columnName nvarchar(128)
    declare @sql nvarchar(max)

    select @rowCnt = count(*) from #updatedCols

    --execute an insert statement for each updated column
    while @cnt <= @rowCnt
    begin
        select @columnName = updateCol from #updatedCols where id = @cnt

        --only log rows whose value actually changed for this column
        set @sql = N'
        insert into [Events] ([RecordId], [EventTypeId], [EventDate], [ColumnName], [OriginalValue], [NewValue], [TenantId], [AppUserId], [TableName])
        select
            i.Id, 2, GetUTCDate(), ''' + @columnName + ''', d.' + @columnName + ', i.' + @columnName + ', i.TenantId, i.UpdatedBy, ''Item''
        from
            #tempInserted i
            join #tempDeleted d on i.Id = d.Id
                and isnull(cast(i.' + @columnName + ' as nvarchar(max)), '''') <> isnull(cast(d.' + @columnName + ' as nvarchar(max)), '''')
        '
        exec sp_executesql @sql

        set @cnt = @cnt + 1
    end
end
I have changed the data type of the TableName column of the Events table to nvarchar.
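For reference, the Events DDL above declares [TableName] as int, so a change along these lines (a sketch; it assumes any existing int values can simply be converted) is needed before the trigger can store the table name as text:
-- Assumption: Events.TableName was created as int, as in the DDL above
ALTER TABLE [dbo].[Events] ALTER COLUMN [TableName] nvarchar(128) NOT NULL;
With that in place, a quick way to exercise the trigger is an ordinary update (hypothetical row id and value):
UPDATE dbo.Items SET [Name] = N'Renamed item' WHERE Id = 1;   -- hypothetical test row
SELECT TOP (10) * FROM dbo.Events ORDER BY Id DESC;           -- newest audit rows first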
You could query the catalog (sys.columns, sys.tables, sys.schemas, etc.) to get the columns of the current table into a cursor, then iterate over that cursor and build your single inserts into the log table as strings, and execute them with EXECUTE or sp_executesql or similar; a minimal sketch follows below.
(Note that the linked documentation does not necessarily match the version of your DBMS and is only meant as a first hint.)
By the way, you might want to change the data type of [TableName] and [ColumnName] to sysname, which is the type the catalog itself uses for such columns.
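A minimal sketch of that idea (the table name, schema, and the per-column statement are assumptions; change detection and the actual log insert are left out):
-- Sketch: enumerate the columns of a table from the catalog and build one
-- dynamic statement per column. Table and schema names are assumed.
DECLARE @table sysname = N'Items', @column sysname, @sql nvarchar(max);

DECLARE col_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT c.name
    FROM sys.columns AS c
    JOIN sys.tables  AS t ON t.object_id = c.object_id
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE s.name = N'dbo' AND t.name = @table;

OPEN col_cursor;
FETCH NEXT FROM col_cursor INTO @column;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build whatever per-column statement is needed; here it is only printed.
    SET @sql = N'SELECT ' + QUOTENAME(@column) + N' FROM dbo.' + QUOTENAME(@table) + N';';
    PRINT @sql;    -- inside the trigger this would be EXEC sp_executesql @sql;
    FETCH NEXT FROM col_cursor INTO @column;
END
CLOSE col_cursor;
DEALLOCATE col_cursor;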
I have two tables:
- @CAMERC
- @CAMERC_LOG
I have to update column @CAMERC.MERC_LPR with values from column @CAMERC_LOG.MERC_LPR.
Records must match on MERC_KEY, but only one record must be taken from @CAMERC_LOG (the one with the highest MERC_KEY_LOG), and @CAMERC_LOG.MERC_LPR must not be NULL or 0.
My problem is updating one table based on results from a second table. I don't know how to properly write such an update.
Table @CAMERC:
+----------+----------+
| MERC_KEY | MERC_LPR |
+----------+----------+
| 1 | 0.0000 |
| 2 | NULL |
| 3 | 0.0000 |
| 4 | 0.0000 |
+----------+----------+
Table @CAMERC_LOG:
+----------+--------------+----------+
| MERC_KEY | MERC_KEY_LOG | MERC_LPR |
+----------+--------------+----------+
| 1 | 1 | 1.1000 |
| 1 | 2 | 2.3000 |
| 2 | 3 | 3.4000 |
| 2 | 4 | 4.4000 |
| 1 | 5 | 7.8000 |
| 1 | 6 | NULL |
| 2 | 7 | 0.0000 |
| 2 | 8 | 12.4000 |
| 3 | 1 | 12.1000 |
| 3 | 2 | 42.3000 |
| 3 | 3 | 43.4000 |
| 3 | 4 | 884.4000 |
| 4 | 5 | 57.8000 |
| 4 | 6 | NULL |
| 4 | 7 | 0.0000 |
| 4 | 8 | 412.4000 |
+----------+--------------+----------+
Code for table creation:
DECLARE @CAMERC TABLE
(
    MERC_KEY INT,
    MERC_LPR DECIMAL(10,4)
)
DECLARE @CAMERC_LOG TABLE
(
    MERC_KEY INT,
    MERC_KEY_LOG INT,
    MERC_LPR DECIMAL(10,4)
)
INSERT INTO @CAMERC(MERC_LPR, MERC_KEY) VALUES(0, 1),(NULL,2),(0,3),(0,4)
INSERT INTO @CAMERC_LOG(MERC_LPR, MERC_KEY, MERC_KEY_LOG) VALUES(1.1, 1,1),(2.3,1,2),(3.4,2,3),(4.4,2,4),(7.8, 1,5),(NULL,1,6),(0,2,7),(12.4,2,8),
(12.1, 3,1),(42.3,3,2),(43.4,3,3),(884.4,3,4),(57.8, 4,5),(NULL,4,6),(0,4,7),(412.4,4,8)
Try this:
WITH DataSource AS
(
    SELECT MERC_KEY
          ,ROW_NUMBER() OVER (PARTITION BY MERC_KEY ORDER BY MERC_KEY_LOG DESC) AS [RowID]
          ,MERC_LPR
    FROM @CAMERC_LOG
    WHERE MERC_LPR IS NOT NULL
      AND MERC_LPR <> 0
)
UPDATE A
SET MERC_LPR = B.[MERC_LPR]
FROM @CAMERC A
INNER JOIN DataSource B
    ON A.[MERC_KEY] = B.[MERC_KEY]
   AND B.[RowID] = 1

SELECT *
FROM @CAMERC
The idea is to eliminate the invalid records from @CAMERC_LOG and then use ROW_NUMBER to rank the rows by MERC_KEY_LOG descending. After that, we perform the UPDATE using only the rows where RowID = 1.
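An equivalent sketch uses CROSS APPLY to pick the single newest valid log row per key instead of the ROW_NUMBER CTE:
-- Sketch: take the newest non-null, non-zero log value for each MERC_KEY
UPDATE A
SET MERC_LPR = L.MERC_LPR
FROM @CAMERC A
CROSS APPLY (
    SELECT TOP (1) B.MERC_LPR
    FROM @CAMERC_LOG B
    WHERE B.MERC_KEY = A.MERC_KEY
      AND B.MERC_LPR IS NOT NULL
      AND B.MERC_LPR <> 0
    ORDER BY B.MERC_KEY_LOG DESC
) L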
I made an INNER JOIN in a stored procedure, but I don't know what to put in my WHERE clause to filter out the rows with null values and only show the rows that are not null in a particular column.
CREATE PROCEDURE [dbo].[25]
    @param1 int
AS
    SELECT c.Name, c.Age, c2.Name, c2.Country
    FROM Cus C
    INNER JOIN Cus2 C2 ON c.id = c2.id
    WHERE c2.country is not null and c2.id = @param1
    ORDER BY c2.Country
RETURN 0
ID 1
+-----+----+---------+---------+
| QID | ID | Name | Country |
+-----+----+---------+---------+
| 1 | 1 | Null | PH |
| 2 | 1 | Null | CN |
| 3 | 1 | Japhet | USA |
| 4 | 1 | Abegail | UK |
| 5 | 1 | Norlee | Ger |
+-----+----+---------+---------+
ID 2
+-----+----+----------+---------+
| QID | ID | Name | Country |
+-----+----+----------+---------+
| 1 | 2 | Null | PH |
| 2 | 2 | Null | CN |
| 3 | 2 | Reynaldo | USA |
| 4 | 2 | Abegail | UK |
| 5 | 2 | Norlee | Ger |
+-----+----+----------+---------+
ID 3
+-----+----+----------+---------+
| QID | ID | Name | Country |
+-----+----+----------+---------+
| 1 | 3 | Gab | PH |
| 2 | 3 | Null | CN |
| 3 | 3 | Reynaldo | USA |
| 4 | 3 | Abegail | UK |
| 5 | 3 | Norlee | Ger |
+-----+----+----------+---------+
I want that, when I choose any of the users in the C table, it displays the C2 child table data, removes the rows with a null Name, and keeps only the rows where the Name column is not null.
Desired Result:
C Table (Parent)
+----+---------+-----+
| ID | Name | Age |
+----+---------+-----+
| 3 | Abegail | 31 |
+----+---------+-----+
C2 Table (Child)
+-----+----+----------+---------+
| QID | ID | Name | Country |
+-----+----+----------+---------+
| 1 | 3 | Gab | PH |
| 3 | 3 | Reynaldo | USA |
| 4 | 3 | Abegail | UK |
| 5 | 3 | Norlee | Ger |
+-----+----+----------+---------+
WHERE column IS NOT NULL is the syntax to filter out NULL values.
Solution 1: test for a non-null value
Example:
WHERE yourcolumn IS NOT NULL
Solution 2: test a comparison value in your WHERE clause (a comparison excludes NULL values)
Examples:
WHERE yourcolumn = value
WHERE yourcolumn <> value
WHERE yourcolumn IN (value)
WHERE yourcolumn NOT IN (value)
WHERE yourcolumn BETWEEN value1 AND value2
WHERE yourcolumn NOT BETWEEN value1 AND value2
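Applied to the procedure in the question, a sketch (assuming the rows to drop are the ones where Cus2.Name is NULL):
SELECT c.Name, c.Age, c2.Name, c2.Country
FROM Cus C
INNER JOIN Cus2 C2 ON c.id = c2.id
WHERE c2.Name IS NOT NULL      -- keep only child rows that have a name
  AND c2.id = @param1
ORDER BY c2.Country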
I have the following table. How do I select the first non-null value of the reviewers and voting columns for rows that share the same product_id? "First" here means the first such row when sorting by created_at.
+------------+-----------+--------+---------------------+
| product_id | reviewers | voting | created_at |
+------------+-----------+--------+---------------------+
| B0021ZFV9M | null | null | 2015-03-20 00:34:09 |
| B0021ZFV9M | 4 | 3 | 2015-03-24 00:34:09 |
| B0021ZFV9M | null | null | 2015-04-13 00:55:51 |
| B0021ZFV9M | 30 | 4 | 2015-04-15 00:44:38 |
| B00JKO4CHO | null | null | 2015-09-17 00:41:40 |
| B00JKO4CHO | null | null | 2015-09-19 00:41:47 |
| B00JKO4CHO | 50 | 1 | 2015-09-21 00:41:31 |
+------------+-----------+--------+---------------------+
Expected
+------------+-----------+--------+---------------------+
| product_id | reviewers | voting | created_at |
+------------+-----------+--------+---------------------+
| B0021ZFV9M | 4 | 3 | 2015-03-20 00:34:09 |
| B0021ZFV9M | 4 | 3 | 2015-03-24 00:34:09 |
| B0021ZFV9M | 30 | 4 | 2015-04-13 00:55:51 |
| B0021ZFV9M | 30 | 4 | 2015-04-15 00:44:38 |
| B00JKO4CHO | 50 | 1 | 2015-09-17 00:41:40 |
| B00JKO4CHO | 50 | 1 | 2015-09-19 00:41:47 |
| B00JKO4CHO | 50 | 1 | 2015-09-21 00:41:31 |
+------------+-----------+--------+---------------------+
Try this:
select
    product_id,
    case
        when reviewers is null then (
            -- take reviewers from the next non-null row for this product
            select reviewers from test
            where product_id = a.product_id
              and created_at > a.created_at
              and reviewers is not null
            order by created_at
            limit 1)
        else reviewers
    end as reviewers,
    case
        when voting is null then (
            -- same lookup for voting
            select voting from test
            where product_id = a.product_id
              and created_at > a.created_at
              and voting is not null
            order by created_at
            limit 1)
        else voting
    end as voting,
    created_at
from test a;
Example: http://sqlfiddle.com/#!9/546dff/3
create table test (
product_id varchar(20),
reviewers int,
voting int,
created_at datetime
);
insert into test values
('B0021ZFV9M',null , null ,'2015-03-20 00:34:09')
,('B0021ZFV9M',4 , 3 ,'2015-03-24 00:34:09')
,('B0021ZFV9M',null , null ,'2015-04-13 00:55:51')
,('B0021ZFV9M',30 , 4 ,'2015-04-15 00:44:38')
,('B00JKO4CHO',null , null ,'2015-09-17 00:41:40')
,('B00JKO4CHO',null , null ,'2015-09-19 00:41:47')
,('B00JKO4CHO',50 , 1 ,'2015-09-21 00:41:31');
Result:
| product_id | reviewers | voting | created_at |
|------------|-----------|--------|-----------------------------|
| B0021ZFV9M | 4 | 3 | March, 20 2015 00:34:09 |
| B0021ZFV9M | 4 | 3 | March, 24 2015 00:34:09 |
| B0021ZFV9M | 30 | 4 | April, 13 2015 00:55:51 |
| B0021ZFV9M | 30 | 4 | April, 15 2015 00:44:38 |
| B00JKO4CHO | 50 | 1 | September, 17 2015 00:41:40 |
| B00JKO4CHO | 50 | 1 | September, 19 2015 00:41:47 |
| B00JKO4CHO | 50 | 1 | September, 21 2015 00:41:31 |
EDIT:
To update old data, you could do this:
-- create a duplicate empty table
create table test1 like test;
-- insert good data into this duplicate table
insert into test1
select
    product_id,
    case
        when reviewers is null then (
            -- take reviewers from the next non-null row for this product
            select reviewers from test
            where product_id = a.product_id
              and created_at > a.created_at
              and reviewers is not null
            order by created_at
            limit 1)
        else reviewers
    end as reviewers,
    case
        when voting is null then (
            -- same lookup for voting
            select voting from test
            where product_id = a.product_id
              and created_at > a.created_at
              and voting is not null
            order by created_at
            limit 1)
        else voting
    end as voting,
    created_at
from test a;
-- remove data from original table
truncate table test;
-- re-insert good data into original table
insert into test select * from test1;
-- drop the duplicate table
drop table test1;
Make a backup of test (original) table before you try this.
-- PostgreSQL syntax: DISTINCT ON keeps one row per (product_id, created_at);
-- ordering by b.created_at makes the surviving row the earliest later non-null one
select distinct on (a.product_id, a.created_at)
       a.product_id,
       coalesce(a.reviewers, b.reviewers) reviewers,
       coalesce(a.voting, b.voting) voting,
       a.created_at
from a_table a
left join a_table b
       on a.product_id = b.product_id
      and b.reviewers notnull
      and b.created_at > a.created_at
order by a.product_id, a.created_at, b.created_at;
SqlFiddle.
Note: it is assumed that if reviewers is not null then voting is not null too.
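If window functions are available (MySQL 8+, PostgreSQL, SQL Server 2012+), a join-free sketch is a reverse running count that puts every null row in the same group as the next non-null row, then takes that group's value (this also relies on reviewers and voting being null together):
select product_id,
       max(reviewers) over (partition by product_id, grp) as reviewers,
       max(voting)    over (partition by product_id, grp) as voting,
       created_at
from (
    -- grp counts non-null reviewers from the latest row backwards
    select t.*,
           count(reviewers) over (partition by product_id order by created_at desc) as grp
    from test t
) x
order by product_id, created_at;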