I have some tables and want to populate an attribute in one table based on interval values stored in another table.
The basic idea is to populate the 'eye-age' attribute with one of the values young, pre-presbyopic, or presbyopic, depending on the patient's age.
I have the patient's birthdate in the Patient table, and need to populate the last attribute with a value from BirthToEyeAge based on the patient's birthdate, inferring the age from it.
How can I do this, or which documentation should I read to learn about these types of things?
INSERT INTO BirthToEyeAge(bId, minAge, maxAge, eyeAge)
VALUES(1, 0, 28, 'young'),
      (2, 29, 59, 'pre-presbyopic'),
      (3, 60, 120, 'presbyopic');
INSERT INTO Patient( patId, firstName, lastName, birthDate )
VALUES( 1, 'Ark', 'May', '1991-7-22' );
INSERT INTO Diagnostic( diagId, date, tear_rate, consId_Consulta, eyeAge )
VALUES( 1, '2019-08-10', 'normal', 1, ??? );
You can join table Patient with BirthToEyeAge, taking advantage of the handy Postgres function age() to compute the age of the patient at the time he was diagnosed. Here is an insert query based on this logic:
insert into Diagnostic( diagId, date, tear_rate, consId_Consulta, eyeAge )
select d.*, b.bId
from
(select 1 diagId, '2018-08-10'::date date, 'normal' tear_rate, 1 consId_Consulta ) d
inner join patient p
on d.consId_Consulta = p.patId
inner join BirthToEyeAge b
on extract(year from age(d.date, p.birthDate)) between b.minAge and b.maxAge;
In this demo on DB Fiddle, after creating the tables, initializing their content, and running the above query, the content of Diagnostic is:
| diagid | date | tear_rate | consid_consulta | eyeage |
| ------ | ------------------------ | --------- | --------------- | ------ |
| 1 | 2018-08-10T00:00:00.000Z | normal | 1 | 1 |
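If Diagnostic.eyeAge is meant to hold the text label ('young', etc.) rather than the BirthToEyeAge key, a small variant of the same query selects b.eyeAge instead of b.bId (a sketch, assuming that column is a text type):
insert into Diagnostic( diagId, date, tear_rate, consId_Consulta, eyeAge )
select d.diagId, d.date, d.tear_rate, d.consId_Consulta, b.eyeAge
from
(select 1 diagId, '2018-08-10'::date date, 'normal' tear_rate, 1 consId_Consulta ) d
inner join patient p
on d.consId_Consulta = p.patId
inner join BirthToEyeAge b
on extract(year from age(d.date, p.birthDate)) between b.minAge and b.maxAge;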
This is my first post here. I'm still a novice SQL user at this point though I've been using it for several years now. I am trying to find a solution to the following problem and am looking for some advice, as simple as possible, please.
I have this 'recordTable' with the following columns related to transactions: 'personID', 'recordID', 'item', 'txDate' and 'daySupply'. The recordID is the primary key. Almost every personID should have many distinct recordIDs with distinct txDates.
My focus is on one particular 'item' for all of 2017. It's expected that once the item daySupply has elapsed for a recordID that we would see a newer recordID for that person with a more recent txDate somewhere between five days before and five days after the end of the daySupply.
What I'm trying to uncover is the number of distinct recordIDs for which there wasn't an expected new recordID during this ten-day window. I think this is probably very simple to solve, but I am having a lot of difficulty trying to create a query for it, let alone explain it to someone.
My thought thus far is to create two temp tables. The first temp table stores all of the records associated with the desired items and I'm just storing the personID, recordID and txDate columns. The second temp table has the personID, recordID and the two derived columns from the txDate and daySupply; these would represent the five days before and five days after.
I am trying to find some way to determine the number of recordIDs from the first table that don't have expected refills for that personID in the second. I thought a simple EXCEPT would do this, but I don't think there's any way of getting around a recursive-type statement to answer this, and I have never gotten comfortable with recursive queries.
I searched Stackoverflow and elsewhere but couldn't come up with an answer to this one. I would really appreciate some help from some more clever data folks. Here is the code so far. Thanks everyone!
CREATE TABLE #temp1 (personID VARCHAR(20), recordID VARCHAR(10), txDate DATE)
CREATE TABLE #temp2 (personID VARCHAR(20), recordID VARCHAR(10), startDate DATE, endDate DATE)
INSERT INTO #temp1
SELECT [personID], [recordID], txDate
FROM recordTable
WHERE item = 'desiredItem'
AND txDate > '12/31/16'
AND txDate < '1/1/18';
INSERT INTO #temp2
SELECT [personID], [recordID], DATEADD(DAY, daySupply - 5, txDate), DATEADD(DAY, daySupply + 5, txDate)
FROM recordTable
WHERE item = 'desiredItem'
AND txDate > '12/31/16'
AND txDate < '1/1/18';
I agree with mypetlion that you could have been more concise with your question, but I think I can figure out what you are asking.
SQL Window Functions to the rescue!
Here's the basic idea...
CREATE TABLE #fills(
personid INT,
recordid INT,
item NVARCHAR(MAX),
filldate DATE,
dayssupply INT
);
INSERT #fills
VALUES (1, 1, 'item', '1/1/2018', 30),
(1, 2, 'item', '2/1/2018', 30),
(1, 3, 'item', '3/1/2018', 30),
(1, 4, 'item', '5/1/2018', 30),
(1, 5, 'item', '6/1/2018', 30)
;
SELECT *,
ABS(
DATEDIFF(
DAY,
LAG(DATEADD(DAY, dayssupply, filldate)) OVER (PARTITION BY personid, item ORDER BY filldate),
filldate
)
) AS gap
FROM #fills
ORDER BY filldate;
... outputs ...
+----------+----------+------+------------+------------+------+
| personid | recordid | item | filldate | dayssupply | gap |
+----------+----------+------+------------+------------+------+
| 1 | 1 | item | 2018-01-01 | 30 | NULL |
| 1 | 2 | item | 2018-02-01 | 30 | 1 |
| 1 | 3 | item | 2018-03-01 | 30 | 2 |
| 1 | 4 | item | 2018-05-01 | 30 | 31 |
| 1 | 5 | item | 2018-06-01 | 30 | 1 |
+----------+----------+------+------------+------------+------+
You can insert the results into a temp table and pull out only the ones you want (gap > 5), or use the query above as a CTE and pull out the results without the temp table.
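For instance, a sketch of the CTE form, still against the #fills sample data and treating a gap of more than 5 days as a missed refill:
WITH gaps AS (
    SELECT *,
           ABS(DATEDIFF(DAY,
               LAG(DATEADD(DAY, dayssupply, filldate)) OVER (PARTITION BY personid, item ORDER BY filldate),
               filldate)) AS gap
    FROM #fills
)
SELECT COUNT(*) AS missedRefills  -- or SELECT * to list the offending rows
FROM gaps
WHERE gap > 5;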
This could be stated as follows: "Given a set of orders, return a subset for which there is no order within +/- 5 days of the expected resupply date (defined as txDate + daySupply)."
This can be solved simply with NOT EXISTS. Define the range of orders you wish to examine, and this query will find the subset of those orders for which there is no resupply order (NOT EXISTS) within 5 days of either side of the expected resupply date (txDate + daySupply).
SELECT
gappedOrder.personID
, gappedOrder.recordID
, gappedOrder.item
, gappedOrder.txDate
    , gappedOrder.daySupply
FROM
recordTable as gappedOrder
WHERE
gappedOrder.item = 'desiredItem'
AND gappedOrder.txDate > '12/31/16'
AND gappedOrder.txDate < '1/1/18'
--order not refilled within date range tolerance
AND NOT EXISTS
(
SELECT
1
FROM
recordTable AS refilledOrder
WHERE
refilledOrder.personID = gappedOrder.personID
AND refilledOrder.item = gappedOrder.item
        --5 days prior to (txDate + daySupply)
        AND refilledOrder.txDate >= DATEADD(day, -5, DATEADD(day, gappedOrder.daySupply, gappedOrder.txDate))
        --5 days after (txDate + daySupply)
        AND refilledOrder.txDate <= DATEADD(day, 5, DATEADD(day, gappedOrder.daySupply, gappedOrder.txDate))
);
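Not part of the original answer, but if recordTable is large, an index shaped like the one below (the name is my own invention) typically keeps the NOT EXISTS probe cheap, since it lets SQL Server seek on item, personID and txDate:
CREATE INDEX IX_recordTable_item_person_txDate
    ON recordTable (item, personID, txDate)
    INCLUDE (daySupply);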
I'm trying to figure out how to do a multi-row insert as one statement in SQL Server, but where one of the columns is computed based on the data as it stands after each inserted row.
Let's say I run this simple query and get back 3 records:
SELECT *
FROM event_courses
WHERE event_id = 100
Results:
id | event_id | course_id | course_priority
---+----------+-----------+----------------
10 | 100 | 501 | 1
11 | 100 | 502 | 2
12 | 100 | 503 | 3
Now I want to insert 3 more records into this table, except I need to be able to calculate the priority for each record. The priority should be the count of all courses in this event. But if I run a sub-query, I get the same priority for all new courses:
INSERT INTO event_courses (event_id, course_id, course_priority)
VALUES (100, 504,
        (SELECT COUNT(id) + 1 AS cnt_event_courses
         FROM event_courses
         WHERE event_id = 100)),
       (100, 505,
        (SELECT COUNT(id) + 1 AS cnt_event_courses
         FROM event_courses
         WHERE event_id = 100)),
       (100, 506,
        (SELECT COUNT(id) + 1 AS cnt_event_courses
         FROM event_courses
         WHERE event_id = 100))
Results:
id | event_id | course_id | course_priority
---+----------+-----------+-----------------
10 | 100 | 501 | 1
11 | 100 | 502 | 2
12 | 100 | 503 | 3
13 | 100 | 504 | 4
14 | 100 | 505 | 4
15 | 100 | 506 | 4
Now I know I could easily do this in a loop outside of SQL and just run a bunch of insert statements, but that's not very efficient. There's got to be a way to calculate the priority on the fly during a multi-row insert.
Big thanks to @Sean Lange for the answer. I was able to simplify it even further for my application. Great lead! Learned 2 new syntax tricks today ;)
DECLARE #eventid int = 100
INSERT event_courses
SELECT #eventid AS event_id,
course_id,
course_priority = existingEventCourses.prioritySeed + ROW_NUMBER() OVER(ORDER BY tempid)
FROM (VALUES
(1, 501),
(2, 502),
(3, 503)
) courseInserts (tempid, course_id) -- This basically creates a temp table in memory at run-time
CROSS APPLY (
SELECT COUNT(id) AS prioritySeed
FROM event_courses
WHERE event_id = #eventid
) existingEventCourses
SELECT *
FROM event_courses
WHERE event_id = #eventid
Here is an example of how you might be able to do this. I have no idea where your new rows' values are coming from so I just tossed them in a derived table. I doubt your final solution would look like this but it demonstrates how you can leverage ROW_NUMBER to accomplish this type of thing.
declare @EventCourse table
(
    id int identity
    , event_id int
    , course_id int
    , course_priority int
)

insert @EventCourse values
(100, 501, 1)
,(100, 502, 2)
,(100, 503, 3)

select *
from @EventCourse

insert @EventCourse
(
    event_id
    , course_id
    , course_priority
)
select x.eventID
    , x.courseID
    , NewPriority = y.MaxPriority + ROW_NUMBER() over(partition by x.eventID order by x.courseID)
from
(
    values(100, 504)
    ,(100, 505)
    ,(100, 506)
)x(eventID, courseID)
cross apply
(
    select max(course_priority) as MaxPriority
    from @EventCourse ec
    where ec.event_id = x.eventID
) y

select *
from @EventCourse
Actually I'm a noob and have been stuck on this problem for a week. I will try to explain it.
I have a table for USER,
and a table for product.
I want to store data about every user for every product, like if_product_bought, num_of_items, and so on.
So the only solution I can think of is a database within a database, that is, creating a copy of the products table inside a per-user database and storing the data there.
If this is possible, how? Or is there another, better solution?
Thanks in advance
You actually don't create a database within a database (or a table within a table) when you use PostgreSQL or any other SQL RDBMS.
You use tables, and JOIN them. You normally would have an orders table, together with an items_x_orders table, on top of your users and items.
This is a very simplified scenario:
CREATE TABLE users
(
user_id INTEGER /* SERIAL */ NOT NULL PRIMARY KEY,
user_name text
) ;
CREATE TABLE items
(
item_id INTEGER /* SERIAL */ NOT NULL PRIMARY KEY,
item_description text NOT NULL,
item_unit text NOT NULL,
item_standard_price decimal(10,2) NOT NULL
) ;
CREATE TABLE orders
(
order_id INTEGER /* SERIAL */ NOT NULL PRIMARY KEY,
user_id INTEGER NOT NULL REFERENCES users(user_id),
order_date DATE NOT NULL DEFAULT now(),
other_data TEXT
) ;
CREATE TABLE items_x_orders
(
order_id INTEGER NOT NULL REFERENCES orders(order_id),
item_id INTEGER NOT NULL REFERENCES items(item_id),
-- You're supposed not to have the item more than once in an order
-- This makes the following the "natural key" for this table
PRIMARY KEY (order_id, item_id),
item_quantity DECIMAL(10,2) NOT NULL CHECK(item_quantity <> /* > */ 0),
item_percent_discount DECIMAL(5,2) NOT NULL DEFAULT 0.0,
other_data TEXT
) ;
This is all based on the so-called Relational Model. What you were thinking about is something else called a Hierarchical model, or a document model used in some NoSQL databases (where you store your data as a JSON or XML hierarchical structure).
You would fill those tables with data like:
INSERT INTO users
(user_id, user_name)
VALUES
(1, 'Alice Cooper') ;
INSERT INTO items
(item_id, item_description, item_unit, item_standard_price)
VALUES
(1, 'Oranges', 'kg', 0.75),
(2, 'Cookies', 'box', 1.25),
(3, 'Milk', '1l carton', 0.90) ;
INSERT INTO orders
(order_id, user_id)
VALUES
(100, 1) ;
INSERT INTO items_x_orders
(order_id, item_id, item_quantity, item_percent_discount, other_data)
VALUES
(100, 1, 2.5, 0.00, NULL),
(100, 2, 3.0, 0.00, 'I don''t want Oreo'),
(100, 3, 1.0, 0.05, 'Make it promo milk') ;
And then you would produce queries like the following one, where you JOIN all relevant tables:
SELECT
user_name, item_description, item_quantity, item_unit,
item_standard_price, item_percent_discount,
CAST(item_quantity * (item_standard_price * (1-item_percent_discount/100.0)) AS DECIMAL(10,2)) AS items_price
FROM
items_x_orders
JOIN orders USING (order_id)
JOIN items USING (item_id)
JOIN users USING (user_id) ;
...and get these results:
user_name | item_description | item_quantity | item_unit | item_standard_price | item_percent_discount | items_price
:----------- | :--------------- | ------------: | :-------- | ------------------: | --------------------: | ----------:
Alice Cooper | Oranges | 2.50 | kg | 0.75 | 0.00 | 1.88
Alice Cooper | Cookies | 3.00 | box | 1.25 | 0.00 | 3.75
Alice Cooper | Milk | 1.00 | 1l carton | 0.90 | 5.00 | 0.86
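If you also want the per-user, per-product figures mentioned in the question (whether a product was bought and how many items), the same tables support a grouped query; this is a minimal sketch that only lists products the user has actually ordered:
SELECT
    u.user_name,
    i.item_description,
    COUNT(*)               AS times_ordered,
    SUM(ixo.item_quantity) AS total_quantity
FROM users u
JOIN orders o           ON o.user_id = u.user_id
JOIN items_x_orders ixo ON ixo.order_id = o.order_id
JOIN items i            ON i.item_id = ixo.item_id
GROUP BY u.user_name, i.item_description ;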
You can get all the code and test at dbfiddle here
I need to create a "rolled up" slash "grouped" view of Customer Data for our client.
A simplified explanation would be that the data needs to be grouped by geographical attributes (e.g. Country, Province, City) and rolled up by the number of people that have an email address and/or a phone number.
The problem is that a person can be in more than one City (the lowest level) and is then counted multiple times at any higher level (e.g. Province).
Here is an example using GROUPING SETS:
DECLARE @Customer TABLE
(
CustomerId VARCHAR(50),
Phone BIT,
Email BIT,
ProvinceId VARCHAR(50),
CityId VARCHAR(50)
)
INSERT INTO @Customer(CustomerId, Phone, Email, ProvinceId, CityId) VALUES ('Customer A', 1, NULL, 'Province A', 'City A')
INSERT INTO @Customer(CustomerId, Phone, Email, ProvinceId, CityId) VALUES ('Customer A', 1, NULL, 'Province A', 'City B')
INSERT INTO @Customer(CustomerId, Phone, Email, ProvinceId, CityId) VALUES ('Customer B', 1, 1, 'Province A', 'City B')
SELECT COUNT(Phone) PersonWithPhoneCount, COUNT(Email) PersonWithEmailCount, ProvinceId, CityId FROM @Customer
GROUP BY GROUPING SETS ((ProvinceId), (ProvinceId, CityId))
and this is the result:
----------------------------------------------------------------------------
| PersonWithPhoneCount | PersonWithEmailCount | ProvinceId | CityId |
----------------------------------------------------------------------------
| 1 | 0 | Province A | City A |
| 2 | 1 | Province A | City B |
| 3 | 1 | Province A | NULL |
----------------------------------------------------------------------------
The result is correct for the lowest level (City) but for the Province level "Customer A" is counted twice. I understand why, but is there a way to not count "Customer A" twice?
Do I have to group all the different levels individually or is there a better way?
Performance is also a major issue as the live data adds up to 100+ million rows.
Thanks in advance.
Even though your data will be wrong, because there is no way Customer A can be in City A and City B, this sql will get you what you are asking for. I used the ROW_NUMBER() function so I only count the first occurrence of the customer.
SELECT COUNT(Phone) PersonWithPhoneCount, COUNT(Email) PersonWithEmailCount, ProvinceId, CityId
FROM (
SELECT *
,ROW_NUMBER() OVER(PARTITION BY CustomerId
ORDER BY ProvinceId, CityId) Row
        FROM @Customer c1
) Tmp
Where Row = 1
GROUP BY GROUPING SETS ((ProvinceId), (ProvinceId, CityId))
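An alternative sketch (my own, not part of the answer above) is to count distinct customers per grouping level instead of discarding duplicate city rows up front; it counts each person at most once per level while still counting them in every city they appear in. With 100+ million rows it is worth comparing the execution plans of both approaches:
SELECT
    COUNT(DISTINCT CASE WHEN Phone = 1 THEN CustomerId END) AS PersonWithPhoneCount,
    COUNT(DISTINCT CASE WHEN Email = 1 THEN CustomerId END) AS PersonWithEmailCount,
    ProvinceId, CityId
FROM @Customer
GROUP BY GROUPING SETS ((ProvinceId), (ProvinceId, CityId))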
I need to extract data from a third party system (I have no influence over its design). It's a SQL Server 2005 database that uses bitmaps to store user privileges. It has five INT fields, giving a maximum of 5 * 32 = 160 privileges. It stores a number of types of privilege and re-uses the bitmaps for each type. So in total there are 6 fields that drive privileges. Each privilege can be assigned to a specific item of a given type.
An example of a type is “table” so items in that context would be table names.
The privilege table looks like this:
ID | PRIVTYPE | USERNAME | ITEMNAME | BITMAP1 | BITMAP2 | BITMAP3 | BITMAP4 | BITMAP5
For example
123 | Table | Joe | Customers | 0x408 | 0x1 | 0x5c | 0x1000 | 0x0
Another table contains the privileges represented by each bit. It looks like this:
PRIVTYPE | BITMAP_ID | BITVALUE | PRIVILEGE_NAME
For example, entries relating to the above bitmaps would be:
Table | 1 |0x8 | View
Table | 1 |0x400 | Edit
Table | 2 |0x1 | Report
Table | 3 |0x4 | View Address Data
Table | 3 |0x8 | View Order Data
Table | 3 |0x10 | View Payment Data
Table | 3 |0x40 | View System Data
Table | 4 |0x1000| View Hidden Fields
I want to somehow parse the privilege table into a new table or view that will have one record per user per item privilege. Like this:
USERNAME | PRIVTYPE | ITEMNAME | PRIVILEGE_NAME
Joe | Table | Customers | View
Joe | Table | Customers | Edit
Joe | Table | Customers | Report
Joe | Table | Customers | View Address Data
Joe | Table | Customers | View Order Data
Joe | Table | Customers | View Payment Data
Joe | Table | Customers | View System Data
Joe | Table | Customers | View Hidden Fields
I think I need to create a view by running a select statement that will return multiple rows for each row in the privilege table: one row for every set bit in a bitmask field. So, for example, a single row in the privilege table that has 3 bits set in the bitmasks will cause three rows to be returned.
I have searched for answers about breaking tables into multiple rows. I’ve looked at various joins and pivots but I can’t find something that will do what I need. Is the above possible? Any guidance appreciated…
You could unpivot the first table (called @UserPrivileges below) and join it to the second one (@Privileges) on privilege type, bitmap ID and the result of a bitwise AND between the bitmap in the first table and BITVALUE in the second table.
Below is my implementation.
Setup:
DECLARE @UserPrivileges TABLE (
ID int,
PRIVTYPE varchar(50),
USERNAME varchar(50),
ITEMNAME varchar(50),
BITMAP1 int,
BITMAP2 int,
BITMAP3 int,
BITMAP4 int,
BITMAP5 int
);
INSERT INTO @UserPrivileges
(ID, PRIVTYPE, USERNAME, ITEMNAME, BITMAP1, BITMAP2, BITMAP3, BITMAP4, BITMAP5)
SELECT 123, 'Table', 'Joe', 'Customers', 0x408, 0x1, 0x5c, 0x1000, 0x0
;
DECLARE @Privileges TABLE (
PRIVTYPE varchar(50),
BITMAP_ID int,
BITVALUE int,
PRIVILEGE_NAME varchar(50)
);
INSERT INTO @Privileges (PRIVTYPE, BITMAP_ID, BITVALUE, PRIVILEGE_NAME)
SELECT 'Table', 1, 0x8 , 'View ' UNION ALL
SELECT 'Table', 1, 0x400 , 'Edit ' UNION ALL
SELECT 'Table', 2, 0x1 , 'Report ' UNION ALL
SELECT 'Table', 3, 0x4 , 'View Address Data ' UNION ALL
SELECT 'Table', 3, 0x8 , 'View Order Data ' UNION ALL
SELECT 'Table', 3, 0x10 , 'View Payment Data ' UNION ALL
SELECT 'Table', 3, 0x40 , 'View System Data ' UNION ALL
SELECT 'Table', 4, 0x1000, 'View Hidden Fields'
;
Query:
WITH unpivoted AS (
SELECT
ID,
PRIVTYPE,
USERNAME,
ITEMNAME,
RIGHT(BITMAP_ID, 1) AS BITMAP_ID, -- OR: STUFF(BITMAP_ID, 1, 6, '')
-- OR: SUBSTRING(BITMAP_ID, 7, 999)
-- OR: REPLACE(BITMAP_ID, 'BITMAP', '')
BITMAP_VAL
  FROM @UserPrivileges
UNPIVOT (
BITMAP_VAL FOR BITMAP_ID IN (
BITMAP1, BITMAP2, BITMAP3, BITMAP4, BITMAP5
)
) u
),
joined AS (
SELECT
u.USERNAME,
u.PRIVTYPE,
u.ITEMNAME,
p.PRIVILEGE_NAME
FROM unpivoted u
    INNER JOIN @Privileges p
ON u.PRIVTYPE = p.PRIVTYPE
AND u.BITMAP_ID = p.BITMAP_ID
AND u.BITMAP_VAL & p.BITVALUE <> 0
)
SELECT * FROM joined
Results:
USERNAME PRIVTYPE ITEMNAME PRIVILEGE_NAME
-------- -------- --------- ------------------
Joe Table Customers View
Joe Table Customers Edit
Joe Table Customers Report
Joe Table Customers View Address Data
Joe Table Customers View Order Data
Joe Table Customers View Payment Data
Joe Table Customers View System Data
Joe Table Customers View Hidden Fields
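Since the goal is a new table or view against the real third-party tables (whose names I don't know, so dbo.UserPrivileges and dbo.Privileges below are placeholders), the same logic can be wrapped in a view along these lines:
CREATE VIEW dbo.vUserItemPrivileges
AS
SELECT u.USERNAME, u.PRIVTYPE, u.ITEMNAME, p.PRIVILEGE_NAME
FROM (
    SELECT PRIVTYPE, USERNAME, ITEMNAME,
           RIGHT(BITMAP_ID, 1) AS BITMAP_ID,   -- 'BITMAP3' -> '3'
           BITMAP_VAL
    FROM dbo.UserPrivileges
    UNPIVOT (
        BITMAP_VAL FOR BITMAP_ID IN (BITMAP1, BITMAP2, BITMAP3, BITMAP4, BITMAP5)
    ) unp
) u
INNER JOIN dbo.Privileges p
    ON  p.PRIVTYPE  = u.PRIVTYPE
    AND p.BITMAP_ID = u.BITMAP_ID
    AND u.BITMAP_VAL & p.BITVALUE <> 0;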
If this is a ONE OFF task or something that will only run RARELY, then CURSORS!
Otherwise, just do 6 distinct select statements that are unionized in your insert:
INSERT INTO FOO (myName, myValue)
SELECT myName, myCol1
From BAR
UNION
SELECT myName, myCol2
FROM BAR
UNION
SELECT myName, myCol3
FROM BAR
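A side note that isn't part of the answer above: UNION removes duplicate rows from the combined result (and sorts to do so), so if the same name/value pair shows up in more than one column only one copy survives. If you want one output row per source column regardless, UNION ALL is the usual (and cheaper) choice:
INSERT INTO FOO (myName, myValue)
SELECT myName, myCol1
FROM BAR
UNION ALL
SELECT myName, myCol2
FROM BAR
UNION ALL
SELECT myName, myCol3
FROM BAR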