SQL Server - Multiple Identity Ranges in the Same Column

Yesterday, I was asked the same question by two different people. Their tables have a field that groups records together, like a year or location. Within those groups, they want a unique ID that starts at 1 and increments sequentially. Obviously, you could query for MAX(ID), but if these applications have a lot of traffic, they'd need to lock the entire table to ensure the same ID wasn't returned multiple times. I thought about using sequences, but that would mean dynamically creating a sequence for each group.
Example 1:
Records created during the year should increment by one and then restart at 1 at the beginning of the next year.
| Year | ID |
|------|----|
| 2016 | 1 |
| 2016 | 2 |
| 2017 | 1 |
| 2017 | 2 |
| 2017 | 3 |
Example 2:
A company has many locations, and they want to generate a unique ID for each customer by combining the location ID with an incrementing ID.
| Site | ID |
|------|----|
| XYZ | 1 |
| ABC | 1 |
| XYZ | 2 |
| XYZ | 3 |
| DEF | 1 |
| ABC | 2 |

One often under-used trick is to create a clustered index on Site/ID or Year/ID, but with the ID column sorted DESC rather than ASC.
That way, when SQL Server reads the clustered index to get the next ID value, it only needs to check one row. I've used this on multi-billion-row tables and it runs quite quickly. You can get even better performance by partitioning the table by Site or Year; then you also get the benefit of partition elimination when you run your MAX(ID) queries.
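A minimal sketch of the idea for Example 1 (T-SQL); the table and constraint names are made up, and the locking hints are an extra assumption (not part of the answer above) to keep two concurrent inserts from reading the same value:

```sql
-- Hypothetical table; the clustered key sorts Id descending within each year
CREATE TABLE dbo.YearlyRecords (
    RecordYear smallint      NOT NULL,
    Id         int           NOT NULL,
    Payload    nvarchar(100) NULL,
    CONSTRAINT PK_YearlyRecords PRIMARY KEY CLUSTERED (RecordYear ASC, Id DESC)
);

-- MAX(Id) for one year is satisfied by reading a single row at the top of the index;
-- UPDLOCK/HOLDLOCK (an assumption) serializes concurrent callers asking for the
-- same year's next value.
DECLARE @NextId int;

SELECT @NextId = COALESCE(MAX(Id), 0) + 1
FROM dbo.YearlyRecords WITH (UPDLOCK, HOLDLOCK)
WHERE RecordYear = 2017;
```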

Related

Postgresql - Find overlapping time ranges for different users in the same session and present them as pairs

I have a table that has records of the sessions players have played in a group music play (musical instruments).
If a user joins a session and then leaves, one row is created. If they join the same session twice, two rows are created.
Table: music_sessions_user_history
| Column | Type | Default |
| --- | --- | --- |
| id | character varying(64) | uuid_generate_v4() |
| user_id | user_id | |
| created_at | timestamp without time zone | now() |
| session_removed_at | timestamp without time zone | |
| max_concurrent_connections | integer | |
| music_session_id | character varying(64) | |
Each row basically records the amount of time a user was in a given session, so you can think of it as a timerange or tsrange in PG. max_concurrent_connections is a count of the number of users who were in the session at once.
At its heart, the query needs to find overlapping time ranges for different users in the same session, and then count them up as a pair that played together.
The query needs to report each user that played in a music session with others - and who those other users were.
So for example, if a userA played with userB, and that's the only data in the database, then two rows would be returned like:
| User | Other users in the session |
| --- | --- |
|userA | [userB] |
|userB | [userA] |
But if userA played with both userB and userC, then three rows would be returned:
| User | Other users in the session |
| --- | --- |
|userA | [userB, userC]|
|userB | [userA, userC]|
|userC | [userA, userB]|
Any help constructing this query is much appreciated.
Update:
I am able to get overlapping records using this query:
```sql
select m1.user_id, m1.created_at, m1.session_removed_at, m1.max_concurrent_connections, m1.music_session_id
from music_sessions_user_history m1
where exists (select 1
              from music_sessions_user_history m2
              where tsrange(m2.created_at, m2.session_removed_at, '[]') && tsrange(m1.created_at, m1.session_removed_at, '[]')
                and m2.music_session_id = m1.music_session_id
                and m2.id <> m1.id);
```
I need to find a way to convert these results into pairs.
Create a cursor, and for each fetched record determine which other records intersect using the start and end times.
Append the intersecting results to a temporary table.
Select the results from the temporary table.
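Alternatively, the pairing can usually be done set-based rather than with a cursor. A hedged sketch (PostgreSQL) that reuses the column names from the question; the output alias is mine:

```sql
-- For every row, collect the distinct other users whose range in the same
-- session overlaps it, then aggregate per user.
SELECT m1.user_id,
       array_agg(DISTINCT m2.user_id) AS other_users_in_session
FROM music_sessions_user_history m1
JOIN music_sessions_user_history m2
  ON  m2.music_session_id = m1.music_session_id
  AND m2.user_id <> m1.user_id
  AND tsrange(m2.created_at, m2.session_removed_at, '[]')
      && tsrange(m1.created_at, m1.session_removed_at, '[]')
GROUP BY m1.user_id;
```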

SQL Server - same table for multiple customers

I have a need to manage a dataset for multiple customers - each customer manages a small table to update procedure volumes for the next five years. The table is structured like so:
+-------------+--------+--------+--------+--------+--------+
| | Year 1 | Year 2 | Year 3 | Year 4 | Year 5 |
+-------------+--------+--------+--------+--------+--------+
| Procedure A | 5 | 10 | 14 | 12 | 21 |
+-------------+--------+--------+--------+--------+--------+
| Procedure B | 23 | 23 | 2 | 3 | 4 |
+-------------+--------+--------+--------+--------+--------+
| Procedure C | 5 | 6 | 7 | 8 | 12 |
+-------------+--------+--------+--------+--------+--------+
The values in this table will be managed by each customer via MS PowerApps.
This same structure exists for every single customer. What is the best way to put all of these in one dataset?
Should I just add a CUSTOMER ID column and put all the data in there?
The process:
Utilizing PowerApps, a new customer deal will be generated and a row will be added for them in the SQL DB in a customer records table.
Simultaneously, the blank template of the above table should be generated for them.
Now, the customer can interface with this SQL table within PowerApps and add their respective procedure volumes.
The question isn't explained well but:
I would assume all of the customer-specific data has at least one column in common, for instance CustomerName. You could create your own table with CustomerId, CustomerName, and any other fields you would like to see. If there isn't a concept of CustomerId in the customers' tables, you would have to join them on CustomerName and populate your own CustomerId in the new table.
I would be happy to help more if you could clarify the question and show a few examples.
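A minimal sketch of that shape (T-SQL); every object and column name here is an assumption, not something from the original post:

```sql
CREATE TABLE dbo.Customers (
    CustomerId   int IDENTITY(1,1) PRIMARY KEY,
    CustomerName nvarchar(200) NOT NULL
);

-- One shared table holds every customer's five-year procedure volumes,
-- keyed by the customer rather than by a per-customer table.
CREATE TABLE dbo.ProcedureVolumes (
    CustomerId    int NOT NULL REFERENCES dbo.Customers (CustomerId),
    ProcedureName nvarchar(100) NOT NULL,
    Year1 int NULL, Year2 int NULL, Year3 int NULL, Year4 int NULL, Year5 int NULL,
    CONSTRAINT PK_ProcedureVolumes PRIMARY KEY (CustomerId, ProcedureName)
);
```

PowerApps could then simply filter the shared table on CustomerId when a customer opens their own grid.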

Using tables to categorise resources

I'm trying to design a database that allows filtering according to whether a specific resource fits certain categories. I've gotten to the point where I can input data in what seems to be the right shape, but I'm not sure how to pull it out again.
The main resource table looks like this:
Table1 - resources

| Field | Type |
| --- | --- |
| resourceID | AutoNum |
| title | short text |
| author | short text |
| publish date | date |
| type | short text |

Table2 - Department Categories

| Field | Type |
| --- | --- |
| ID | AutoNum |
| 1 | Yes/No |
| 2 | Yes/No |
| fID | Number |

Table3 - Categories

| Field | Type |
| --- | --- |
| ID | AutoNum |
| cat | Yes/No |
| dog | Yes/No |
| bird | Yes/No |
| fID | Number |
I have built a form where you can fill in the fields for a resource and, at the same time, check off the Yes/No boxes in tables 2 & 3.
I'm copying the primary key ID from table 1 into tables 2 & 3, with referential integrity set to cascade updates and deletes, which I think is the right way to do this.
I've learnt that I can implement a search function for the columns in table 1, and this seems to work fine. However, I am stuck on applying the relevant columns in tables 2 and 3 as filters.
apply search>
[X] - Cats
This should only return records from table 1 where the relevant Yes/No column in table 3 is ticked.
I hope I have explained this properly; I'm very new to Access and databases, so if you need clarity, just ask.
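For what it's worth, a hedged sketch of the kind of query such a filter could run (Access SQL); it assumes Table3 is named Categories and that its fID holds the resourceID from table 1:

```sql
SELECT r.resourceID, r.title, r.author, r.[publish date], r.type
FROM resources AS r
INNER JOIN Categories AS c ON c.fID = r.resourceID
WHERE c.cat = True;
```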

How best to calculate the Nth subscription renewal?

So I have a transaction table (Postgres) into which a new row is inserted whenever a user renews their subscription for our service. The subscription table looks like this:
+--------+--------+------------+
| userId | prodId | renew_date |
+--------+--------+------------+
| 1 | 1 | 2018-05-01 |
| 1 | 1 | 2018-06-01 |
| 1 | 1 | 2018-07-01 |
| 2 | 3 | 2017-04-16 |
| 2 | 3 | 2017-05-16 |
+--------+--------+------------+
If the analysts want to figure out the Nth renewal or the latest renewal for a particular user or product, I have two ways to give them that:
1.) During my ETL process, I truncate the data warehouse target table and re-populate it with:

```sql
select *
     , row_number() over (partition by userId, prodId order by renew_date asc) as nth_renewal
from subscription
```
I can't think of a way to +1 from the previous renewal if I were to do incremental updates - and what if this is the customer's first renewal? (See the sketch after option 2.)
2.) I just copy the exact OLTP table over to the data warehouse and do incremental updates every day. This way, I let the analysts calculate the nth renewal themselves. (Also, as a follow-up question: is it ever OK to have a duplicate copy of a transactional table in my data warehouse?)
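Regarding the "+1" worry in option 1, an incremental load can usually seed from the renewals already stored in the warehouse. A hedged sketch (PostgreSQL); the dw/staging table names are assumptions:

```sql
-- New rows are numbered after whatever is already stored for that user/product;
-- COALESCE(..., 0) covers a customer's very first renewal.
INSERT INTO dw.subscription_renewals (userId, prodId, renew_date, nth_renewal)
SELECT s.userId,
       s.prodId,
       s.renew_date,
       COALESCE(d.max_renewal, 0)
         + row_number() OVER (PARTITION BY s.userId, s.prodId ORDER BY s.renew_date)
FROM staging.subscription AS s
LEFT JOIN (
    SELECT userId, prodId, max(nth_renewal) AS max_renewal
    FROM dw.subscription_renewals
    GROUP BY userId, prodId
) AS d ON d.userId = s.userId AND d.prodId = s.prodId;
```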

What's the fastest way to perform large inserts with foreign key relationships and preprocessing?

I need to regularly import large (hundreds of thousands of lines) tsv files into multiple related SQL Server 2008 R2 tables.
The input file looks something like this (it's actually even more complex and the data is of a different nature, but what I have here is analogous):
January_1_Lunch.tsv
+-------+----------+-------------+---------+
| Diner | Beverage | Food | Dessert |
+-------+----------+-------------+---------+
| Nancy | coffee | salad_steak | pie |
| Joe | milk | soup_steak | cake |
| Pat | coffee | soup_tofu | pie |
+-------+----------+-------------+---------+
Notice that one column contains a character-delimited list that needs preprocessing to split it up.
The schema is highly normalized -- each record has multiple many-to-many foreign key relationships. Nothing too unusual here...
Meals
+----+-----------------+
| id | name |
+----+-----------------+
| 1 | January_1_Lunch |
+----+-----------------+
Beverages
+----+--------+
| id | name |
+----+--------+
| 1 | coffee |
| 2 | milk |
+----+--------+
Food
+----+-------+
| id | name |
+----+-------+
| 1 | salad |
| 2 | soup |
| 3 | steak |
| 4 | tofu |
+----+-------+
Desserts
+----+------+
| id | name |
+----+------+
| 1 | pie |
| 2 | cake |
+----+------+
Each input column is ultimately destined for a separate table.
This might seem an unnecessarily complex schema -- why not just have a single table that matches the input? But consider that a diner may come into the restaurant and order only a drink or a dessert, in which case there would be many null rows. Considering that this DB will ultimately store hundreds of millions of records, that seems like a poor use of storage. I also want to be able to generate reports for just beverages, just desserts, etc., and I figure those will perform much better with separate tables.
The orders are tracked in relationship tables like this:
BeverageOrders
+--------+---------+------------+
| mealId | dinerId | beverageId |
+--------+---------+------------+
| 1 | 1 | 1 |
| 1 | 2 | 2 |
| 1 | 3 | 1 |
+--------+---------+------------+
FoodOrders
+--------+---------+--------+
| mealId | dinerId | foodId |
+--------+---------+--------+
| 1 | 1 | 1 |
| 1 | 1 | 3 |
| 1 | 2 | 2 |
| 1 | 2 | 3 |
| 1 | 3 | 2 |
| 1 | 3 | 4 |
+--------+---------+--------+
DessertOrders
+--------+---------+-----------+
| mealId | dinerId | dessertId |
+--------+---------+-----------+
| 1 | 1 | 1 |
| 1 | 2 | 2 |
| 1 | 3 | 1 |
+--------+---------+-----------+
Note that there are more records for Food because the input contained those nasty little lists that were split into multiple records. This is another reason it helps to have separate tables.
So the question is, what's the most efficient way to get the data from the file into the schema you see above?
Approaches I've considered:
Parse the tsv file line-by-line, performing the inserts as I go. Whether using an ORM or not, this seems like a lot of trips to the database and would be very slow.
Parse the tsv file to data structures in memory, or multiple files on disk, that correspond to the schema. Then use SqlBulkCopy to import each one. While it's fewer transactions, it seems more expensive than simply performing lots of inserts, due to having to either cache a lot of data or perform many writes to disk.
Per How do I bulk insert two datatables that have an Identity relationship and Best practices for inserting/updating large amount of data in SQL Server 2008, import the tsv file into a staging table, then merge into the schema, using DB functions to do the preprocessing. This seems like the best option, but I'd think the validation and preprocessing could be done more efficiently in C# or really anything else.
Are there any other possibilities out there?
The schema is still under development so I can revise it if that ends up being the sticking point.
You can import your file into a table with the following structure: Diner, Beverage, Food, Dessert, ID (identity, primary key NOT CLUSTERED - for performance reasons).
After this, simply add the columns Diner_ID, Beverage_ID and Dessert_ID and fill them from your separate tables (it's simple to group each of the columns, add the missing values to the lookup tables such as Beverages, Desserts and Meals, and then update the imported table with the IDs for both existing and newly added records).
The situation with the Food table is more complex because foods can be combined, but the same trick can be used: add the individual dishes to your lookup table and, in addition, store the food combinations in another temp table (with a unique ID per combination) along with their split-out single dishes.
When the parsing is finished, you will have three temp tables:
- a table with all your imported data and IDs for all text columns
- a table with the distinct food lists (with IDs)
- a table with the IDs of the foods in each food combination
From these tables you can insert the parsed values into whatever structure you want.
In this case only one (bulk) insert is done to the DB from the code side. All other data manipulation is performed in the DB.
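A hedged sketch of the first two steps (T-SQL); the object and column names are assumptions, and the bulk load itself (BULK INSERT or bcp, possibly with a format file because of the identity column) is only referenced in a comment:

```sql
-- Staging table matching the tsv, plus an identity key and ID columns to fill in later
CREATE TABLE dbo.MealStaging (
    Diner      nvarchar(100) NOT NULL,
    Beverage   nvarchar(100) NULL,
    Food       nvarchar(400) NULL,    -- underscore-delimited list, e.g. 'salad_steak'
    Dessert    nvarchar(100) NULL,
    Id         int IDENTITY(1,1) NOT NULL,
    BeverageId int NULL,
    DessertId  int NULL,
    CONSTRAINT PK_MealStaging PRIMARY KEY NONCLUSTERED (Id)
);

-- After bulk loading the tsv into the first four columns:
-- add any beverages missing from the lookup table, then stamp the staging rows.
INSERT INTO dbo.Beverages (name)
SELECT DISTINCT s.Beverage
FROM dbo.MealStaging AS s
WHERE s.Beverage IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM dbo.Beverages AS b WHERE b.name = s.Beverage);

UPDATE s
SET s.BeverageId = b.id
FROM dbo.MealStaging AS s
JOIN dbo.Beverages AS b ON b.name = s.Beverage;
```

The Dessert and Diner columns would get the same treatment, and the Food column would first be split into a combinations temp table before its lookup IDs are resolved.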
