SQL Database Constraint | Multi-table Constraint - sql-server

I need to create two database constraints, each of which ties two different tables together:
1. The total score of the four quarters equals the total score of the game the quarters belong to.
2. The total points of all the players equal their team's score in that game.
Here is what my tables look like.
quarter table
+------+--------+--------+--------+
| gNum | Period | hScore | aScore |
+------+--------+--------+--------+
|    1 |      1 |     13 |     18 |
|    1 |      2 |     12 |     19 |
|    1 |      3 |     23 |     31 |
|    1 |      4 |     32 |     18 |
|      |        |  Total |  Total |
|      |        |     80 |     86 |
+------+--------+--------+--------+
Game Table
+-----+--------+--------+--------+
| gID | hScore | lScore | tScore |
+-----+--------+--------+--------+
|   1 |     86 |     80 |    166 |
+-----+--------+--------+--------+
Player Table
+-----+------+--------+--------+
| pID | gNum | Period | Points |
+-----+------+--------+--------+
|   1 |    1 |      1 |     20 |
|     |      |      2 |     20 |
|     |      |      3 |     20 |
|     |      |      4 |     20 |
+-----+------+--------+--------+
So I think I essentially need CHECK constraints to make sure that the players' points add up to their team's score (i.e. hScore or aScore), and also that hScore and aScore add up to the total score in the Game table.
I was thinking of creating a foreign key column on one of the tables and setting up constraints on that. Would that be the best way of going about it?
Thanks
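
Note that a SQL Server CHECK constraint cannot reference another table directly, so cross-table rules like these are usually enforced with a trigger or with a CHECK that calls a scalar function. Below is a minimal sketch of the function approach for rule 1, assuming tables named dbo.Quarter and dbo.Game with the columns shown above (all names are taken from the question, not a known schema):

    -- Hypothetical helper: sum of both teams' scores over a game's quarters.
    CREATE FUNCTION dbo.QuarterTotal (@gNum INT)
    RETURNS INT
    AS
    BEGIN
        RETURN (SELECT SUM(hScore + aScore) FROM dbo.Quarter WHERE gNum = @gNum);
    END;
    GO

    -- Check that a game's total score matches the sum of its quarters.
    -- This only fires when a Game row is inserted or updated; changes to
    -- Quarter rows would still need a trigger (or a matching constraint on
    -- Quarter) to be caught.
    ALTER TABLE dbo.Game
    ADD CONSTRAINT CK_Game_TotalMatchesQuarters
    CHECK (tScore = dbo.QuarterTotal(gID));

The same pattern (a function that sums the players' Points for the relevant game and team, wrapped in a CHECK or an AFTER trigger) would cover rule 2. A foreign key on its own only guarantees that the referenced row exists; it cannot make the sums agree.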

Related

Best way of storing enumerated fields with ability to change order (Postgres)

What is the best way of storing enumerated fields with the ability to change their order?
Let's say my database looks like this:
| id | name | order |
|----|------|-------|
|  1 | 1st  |     1 |
|  2 | 2nd  |     2 |
|  3 | 3rd  |     3 |
|  4 | 4th  |     4 |
Now, when the user changes the order like this:
| id | name | order |
|----|------|-------|
|  1 | 1st  |     1 |
|  4 | 4th  |     2 |
|  2 | 2nd  |     3 |
|  3 | 3rd  |     4 |
I would have to update all the rows in this table.
I am considering two solutions.
Solution 1)
When inserting row X between, for example, order 2 and order 3, I would set row X's order field to 2.5, i.e. to the number in the middle of the two adjacent orders.
The table above would then look like this:
| id | name | order |
|----|------|-------|
|  1 | 1st  |     1 |
|  4 | 4th  |   2.5 |
|  2 | 2nd  |     2 |
|  3 | 3rd  |     3 |
Then, after say 16 changes, I would normalize all the order fields in a single update, so the table after normalization would look like this:
| id | name | order |
|----|------|-------|
|  1 | 1st  |     1 |
|  4 | 4th  |     2 |
|  2 | 2nd  |     3 |
|  3 | 3rd  |     4 |
Solution 2)
I also considered adding "next" (or "next" and "prev") fields to each row, but that looks like a waste of memory to me.
I really don't want to update the whole table every time somebody changes the order. What is the best way of solving this problem?
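
A minimal sketch of Solution 1 in Postgres, assuming a table named items(id, name, "order") with a NUMERIC order column (the table name and column type are illustrative, not from the question):

    -- Move id 4 between the rows currently at positions 2 and 3:
    UPDATE items
    SET "order" = (2 + 3) / 2.0   -- midpoint of the two adjacent positions
    WHERE id = 4;

    -- Periodic renormalization: rewrite the positions as 1..N in one statement.
    UPDATE items AS i
    SET "order" = r.rn
    FROM (
        SELECT id, ROW_NUMBER() OVER (ORDER BY "order") AS rn
        FROM items
    ) AS r
    WHERE i.id = r.id;

Quoting "order" is needed because it is a reserved word; naming the column something like sort_order avoids that.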

Multi-dimensional data structure management in R

I have a question about data organisation and the best approach to simplifying some multi-layered data. Simply put, I have 10 replicate small wood beams (BeamID, ~10) subjected to 10 different treatments (TreatID, ~10), and each beam is load tested, which produces a series of Load values with corresponding Displacement values (ranging from 10 to 50 rows per test; I have code that corrects for disparities in row length). Each wood beam is tested multiple times (Rep, ~10).
My plan was to lump all this data into a 5-D array:
Array[Load, Deflection, BeamID, TreatID, Rep]
This way, I should be able to plot the load~deflection curves for a given BeamID and TreatID, across all Reps, by using Array[ , , 1, 1, ], right? So the hypothetical output for Array[ , , 1, 1, 1] would be:
+------------+--------+-----+
| Deflection | Load | Rep |
+------------+--------+-----+
|          0 |      0 |   1 |
|       6.35 |   10.5 |   1 |
|       12.7 |   20.8 |   1 |
|      19.05 |   45.3 |   1 |
|       25.4 |   75.2 |   1 |
+------------+--------+-----+
And Array[ , ,1,1,2] would be:
+------------+--------+-----+
| Deflection | Load | Rep |
+------------+--------+-----+
|          0 |      0 |   2 |
|     7.3025 | 12.075 |   2 |
|     14.605 |  23.92 |   2 |
|    21.9075 | 52.095 |   2 |
|      29.21 |  86.48 |   2 |
+------------+--------+-----+
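
A rough sketch of how that 5-D array could be set up in R: every test has to be padded to a common number of rows, and the array needs a dimension for the measured quantity rather than using Load and Deflection themselves as dimensions (the sizes and dimension names below are illustrative, not from the question):

    # rows x {Deflection, Load} x BeamID x TreatID x Rep, padded with NA
    # so that every test fits into the same number of rows.
    n_rows <- 50
    arr <- array(NA_real_,
                 dim = c(n_rows, 2, 10, 10, 10),
                 dimnames = list(NULL, c("Deflection", "Load"),
                                 paste0("Beam", 1:10),
                                 paste0("Treat", 1:10),
                                 paste0("Rep", 1:10)))

    # Fill one test (BeamID 1, TreatID 1, Rep 1) from two numeric vectors:
    # arr[seq_along(defl), "Deflection", 1, 1, 1] <- defl
    # arr[seq_along(load), "Load",       1, 1, 1] <- load

    # All reps for BeamID 1, TreatID 1 (an n_rows x 2 x 10 slice):
    slice <- arr[, , 1, 1, ]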
Or I think I could keep it as a simpler, 'melted' dataframe, which would have columns for Load and Deflection, and BeamID, TreatID, and Rep would be repeated for each row of the test output.
+------------+--------+-----+--------+---------+
| Deflection | Load | Rep | BeamID | TreatID |
+------------+--------+-----+--------+---------+
|          0 |      0 |   1 |      1 |       1 |
|       6.35 |   10.5 |   1 |      1 |       1 |
|       12.7 |   20.8 |   1 |      1 |       1 |
|      19.05 |   45.3 |   1 |      1 |       1 |
|       25.4 |   75.2 |   1 |      1 |       1 |
|          0 |      0 |   2 |      1 |       1 |
|     7.3025 | 12.075 |   2 |      1 |       1 |
|     14.605 |  23.92 |   2 |      1 |       1 |
|    21.9075 | 52.095 |   2 |      1 |       1 |
|      29.21 |  86.48 |   2 |      1 |       1 |
+------------+--------+-----+--------+---------+
However, with the latter I'm not sure how I could easily and cleanly pull out all the Rep test values for a specific BeamID and TreatID, especially since I use a linear model to fit a 3rd-order polynomial to a specific test to extract the slope of the curve. Having it all in one continuous data frame means I'd have to specify starting and stopping points for each linear model, correct?
Thoughts, suggestions? Am I headed in the right direction in using a 5-D array? R is a new programming language for me, so please pardon my misunderstandings.
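
With the melted layout, the subsetting concern largely goes away, because rows are selected by their ID columns rather than by start and stop positions. A sketch, assuming the long data frame is called beams and has the columns shown above (the cubic fit mirrors the 3rd-order polynomial mentioned in the question):

    # Pull out one test by its identifiers, not by row positions:
    one_test <- subset(beams, BeamID == 1 & TreatID == 1 & Rep == 2)

    # 3rd-order polynomial fit of Load on Deflection for that test:
    fit <- lm(Load ~ poly(Deflection, 3, raw = TRUE), data = one_test)
    initial_slope <- coef(fit)[2]   # linear term = slope at Deflection 0

    # Or fit every BeamID/TreatID/Rep combination in one go:
    groups <- split(beams, list(beams$BeamID, beams$TreatID, beams$Rep), drop = TRUE)
    fits   <- lapply(groups, function(d)
                     lm(Load ~ poly(Deflection, 3, raw = TRUE), data = d))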

Database schema for product prices which can change daily

I am looking for suggestions on designing a database schema based on the following requirements:
1. A product can have variants.
2. Each variant can have a different price.
3. The price can be different for a specific day of the week.
4. The price can be different for a specific time of the day.
5. All prices will be valid for specific dates only.
6. Prices can be defined for peak, high or medium seasons.
7. Suppliers offering a product can define their own prices, and the above rules still apply.
What would be the best possible schema where data is easy to retrieve without impacting performance?
Thanks in advance for your suggestions.
Regards
Harmeet
SO isn't really a forum for suggestions so much as for answers, and no answer anyone gives here can be definitively correct. With that said, I would keep the tables as granular as possible to allow for easy changes across products.
In regard to #5, I would place start and end dates on products; if the price is no longer valid, the product should no longer be available.
This includes relations for the different seasonal prices; however, you would either need to hard-code the seasons or create another table to define them.
For prices, if this covers more than one region you may want a Regions table, in which case a currency column would be appropriate.
This is operational data, not temporal data. If you want pricing available for historical analysis, you would need to create temporal tables as well.
Product Table
+-------------+-----------+------------------+----------------+
| ProductName | ProductID | ProductStartDate | ProductEndDate |
+-------------+-----------+------------------+----------------+
| Product1    |         1 | 01/01/2017       | 01/01/2018     |
| Product2    |         2 | 01/01/2017       | 01/01/2018     |
+-------------+-----------+------------------+----------------+
Variant Table
+-----------+-----------+-------------+---------------+-------------+-------------+
| ProductID | VariantID | VariantName | NormalPriceID | HighPriceID | PeakPriceID |
+-----------+-----------+-------------+---------------+-------------+-------------+
|         1 |         1 | Blue        |             1 |           3 |           5 |
|         1 |         2 | Black       |             2 |           4 |           5 |
+-----------+-----------+-------------+---------------+-------------+-------------+
Price Table
+---------+-----+-----+-----+-----+-----+-----+-----+
| PriceID | Mon | Tue | Wed | Thu | Fri | Sat | Sun |
+---------+-----+-----+-----+-----+-----+-----+-----+
|       1 |  30 |  30 |  30 |  30 |  35 |  35 |  35 |
|       2 |  35 |  35 |  35 |  35 |  40 |  40 |  40 |
|       3 |  33 |  33 |  33 |  33 |  39 |  39 |  39 |
|       4 |  38 |  38 |  38 |  38 |  44 |  44 |  44 |
|       5 |  40 |  40 |  40 |  40 |  50 |  50 |  50 |
+---------+-----+-----+-----+-----+-----+-----+-----+
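
A rough DDL sketch of those three tables (the column types and foreign keys are assumptions for illustration, not a definitive schema):

    CREATE TABLE Product (
        ProductID        INT PRIMARY KEY,
        ProductName      VARCHAR(100) NOT NULL,
        ProductStartDate DATE NOT NULL,
        ProductEndDate   DATE NOT NULL
    );

    CREATE TABLE Price (
        PriceID INT PRIMARY KEY,
        Mon DECIMAL(10,2), Tue DECIMAL(10,2), Wed DECIMAL(10,2),
        Thu DECIMAL(10,2), Fri DECIMAL(10,2), Sat DECIMAL(10,2),
        Sun DECIMAL(10,2)
    );

    -- Each variant points at one Price row per season tier, as in the tables
    -- above; a Supplier table carrying its own price references could follow
    -- the same pattern for requirement 7.
    CREATE TABLE Variant (
        VariantID     INT PRIMARY KEY,
        ProductID     INT NOT NULL REFERENCES Product(ProductID),
        VariantName   VARCHAR(100) NOT NULL,
        NormalPriceID INT REFERENCES Price(PriceID),
        HighPriceID   INT REFERENCES Price(PriceID),
        PeakPriceID   INT REFERENCES Price(PriceID)
    );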

Generate variables that move information between rows in hierarchical data with spss syntax

I was wondering if you could help me with the following problem in SPSS syntax.
My dataset has a nested structure.
The data are nested within companies; each company has 1 or 2 bosses, but in this case I only care about boss 1. At a previous point in time the boss graded the workers (not all of them). Currently, each worker's ID and grade are on that worker's own row.
I would like to take the information obtained during the workers' assessment and create a new set of variables (worker ID and grade) for each graded worker on the boss's row.
+---------+------+--------+--------------+---------+---------+--------+---------+
| company | boss |workerID|worker's grade|N:workID1|N:grade1 |N:work2 |N:grade2 |
+---------+------+--------+--------------+---------+---------+--------+---------+
| A       |    1 |      1 |              |       3 | A       |      4 | A       |
| A       |    2 |      2 |              |         |         |        |         |
| A       |    0 |      3 | A            |         |         |        |         |
| A       |    0 |      4 | A            |         |         |        |         |
| A       |    0 |      5 |              |         |         |        |         |
| B       |    1 |      1 |              |       3 | B       |      4 | A       |
| B       |    0 |      2 |              |         |         |        |         |
| B       |    0 |      3 | B            |         |         |        |         |
| B       |    0 |      4 | A            |         |         |        |         |
| C       |    1 |      1 |              |       2 | D       |     -1 | -1      |
| C       |    0 |      2 | D            |         |         |        |         |
+---------+------+--------+--------------+---------+---------+--------+---------+
I would like to move each graded worker's ID and grade into the NEW variables on the boss's row, without losing the existing workerID and worker's grade variables.
Basically, I need to feed the information forward into the new variables on the row where boss EQ 1, separately for each company.
I have no idea how to proceed with this. I assume I need a loop that creates a new pair of variables for each worker ID that has a valid grade and then feeds the information forward from the worker's row into the boss's newly generated variables.
Any suggestions are very welcome :-)
Take a look at VARSTOCASES (Data > Restructure)
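
For this direction (many worker rows folded into new variables on one row per company), the cases-to-variables side of that Restructure wizard, CASESTOVARS, is the relevant one. A rough sketch, assuming the variables are literally named company, boss, workerID and grade (the dataset names main and graded are placeholders, not from the question):

    * Step 1: copy the data, keep only graded workers, and spread them so each
      company becomes one row holding workerID.1, grade.1, workerID.2, grade.2 etc.
    DATASET NAME main.
    DATASET COPY graded.
    DATASET ACTIVATE graded.
    SELECT IF (grade NE ' ').
    SORT CASES BY company.
    CASESTOVARS
      /ID = company
      /DROP = boss
      /GROUPBY = INDEX.

    * Step 2: merge the new variables back onto the original file by company;
      a table lookup attaches them to every row of the company, so clear them
      on rows where boss NE 1 if only the boss's row should carry them.
    DATASET ACTIVATE main.
    SORT CASES BY company.
    MATCH FILES /FILE=* /TABLE=graded /BY company.
    EXECUTE.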

Transform ranged data in an Access table

I have a table in an Access database as below:
Name | Range   | X  | Y  | Z
----------------------------
A    | 100-200 |  1 |  2 |  3
A    | 200-300 |  4 |  5 |  6
B    | 100-200 | 10 | 11 | 12
B    | 200-300 | 13 | 14 | 15
C    | 200-300 | 16 | 17 | 18
C    | 300-400 | 19 | 20 | 21
I have been trying to write a query that converts this into the following format:
Name | X_100_200 | Y_100_200 | Z_100_200 | X_200_300 | Y_200_300 | Z_200_300 | X_300_400 | Y_300_400 | Z_300_400
A    |         1 |         2 |         3 |         4 |         5 |         6 |           |           |
B    |        10 |        11 |        12 |        13 |        14 |        15 |           |           |
C    |           |           |           |        16 |        17 |        18 |        19 |        20 |        21
After trying for a while, the best method I could come up with was to write a bunch of short queries that select the data for each Range and then put them back together with a UNION query. The problem is that in this example I have shown three columns (X, Y and Z), but I actually have many more, and Access is starting to strain under the amount of SQL I have come up with.
Is there a better way to achieve this?
The answer was simple: just use the Access PivotView. I'm finding it hard to export the results to Excel, though.
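
An alternative that stays in SQL is one crosstab query per measure, joined on Name. A sketch in Access SQL, assuming the source table is called MyTable and the crosstabs are saved as qryX, qryY and qryZ (names are illustrative; TRANSFORM only allows a single aggregated value, hence one crosstab per column):

    -- Crosstab for X (qryY and qryZ are the same with t.Y / t.Z and "Y_" / "Z_").
    TRANSFORM First(t.X)
    SELECT t.Name
    FROM MyTable AS t
    GROUP BY t.Name
    PIVOT "X_" & Replace(t.Range, "-", "_");

    -- Stitch the three saved crosstabs together on Name:
    SELECT x.Name,
           x.X_100_200, y.Y_100_200, z.Z_100_200,
           x.X_200_300, y.Y_200_300, z.Z_200_300,
           x.X_300_400, y.Y_300_400, z.Z_300_400
    FROM (qryX AS x INNER JOIN qryY AS y ON x.Name = y.Name)
         INNER JOIN qryZ AS z ON x.Name = z.Name;

This still means one saved query per measure, but each one stays short, and the final join can be exported to Excel like any other query.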
