DB Design: Sort Order for Lookup Tables

I have an application where the database back-end has around 15 lookup tables. For instance there is a table for Counties like this:
CountyID(PK) County
49001 Beaver
49005 Cache
49007 Carbon
49009 Daggett
49011 Davis
49015 Emery
49029 Morgan
49031 Piute
49033 Rich
49035 Salt Lake
49037 San Juan
49041 Sevier
49043 Summit
49045 Tooele
49049 Utah
49051 Wasatch
49057 Weber
The UI for this app has a number of combo boxes in various places for these lookup tables, and my client has asked that the boxes list the entries in a specific order, in this case:
CountyID(PK) County
49035 Salt Lake
49049 Utah
49011 Davis
49057 Weber
49045 Tooele
(the rest alphabetically)
The best plan I have for accomplishing this is to add a column to each lookup table for SortOrder(numeric). I had a colleague tell me he thought that would cause the tables to violate 3rd-Normal-Form, but I think the sort order still depends on the key and only the key (even though the rest of the list is alphabetical).
Is adding the SortOrder column the best way to do this, or is there a better way I am just not seeing?

I agree with cletus that a sort order column is a good way to go and it does not violate 3NF (because, as you said, the sort order column entries are functionally dependent on the candidate keys of the table).
I'm not sure I agree that alphanumeric is better than numeric. In the specific case of counties, there are seldom new ones created. But there is no requirement that the numbers assigned are sequential; you can allocate them with numbers that are a multiple of a hundred, for example, leaving ample room for insertions.
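A rough sketch of that spacing idea (the County table and SortOrder column are the ones proposed in the question; the specific numbers are arbitrary):
UPDATE County SET SortOrder = 100 WHERE County = 'Salt Lake';
UPDATE County SET SortOrder = 200 WHERE County = 'Utah';
UPDATE County SET SortOrder = 300 WHERE County = 'Davis';
-- a county that later needs to sort between Utah and Davis can simply take 250, with no renumbering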

Yes, I agree a sort order column is the best solution when the requirements call for a custom sort order like the one you cite. I wouldn't go with a numeric column, however. If the data is alphanumeric, the sort order should be alphanumeric. That way you can seed the value with whatever is in the county field.
If you use a numeric field you'll have to resequence the entire table (potentially) whenever you add a new entry. So:
Columns: ID, County, SortOrder
Seed:
UPDATE County SET SortOrder = CONCAT('M-', County)
and for the special cases:
UPDATE County
SET SortOrder = CONCAT('E-', County)
WHERE County IN ('Salt Lake', 'Utah', 'Davis', 'Weber', 'Tooele')
Arguably you may want to put another marker column in to indicate those entries are special.
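If you do add such a marker, a minimal sketch might look like this (the IsSpecial column name is made up; the flag type will vary by RDBMS):
ALTER TABLE County ADD IsSpecial INT NOT NULL DEFAULT 0;
-- flag the entries that should sort ahead of the alphabetical remainder
UPDATE County SET IsSpecial = 1
WHERE County IN ('Salt Lake', 'Utah', 'Davis', 'Weber', 'Tooele');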

I went with numeric and large multiples.
Even with the CONCAT('E-', ...) example, I don't get the required sort order. That would give me Davis, Salt Lake, Tooele... and Salt Lake needs to be first.
I ended up using multiples of 10 and assigned the non-special-sort entries a value like 10000. That way the view for each lookup can have
ORDER BY SortOrder ASC, OtherField ASC
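A sketch of the lookup query behind each combo box, assuming the special counties got SortOrder values of 10 through 50 and everything else was seeded with 10000:
SELECT CountyID, County
FROM County
ORDER BY SortOrder ASC, County ASC;
-- the five special counties come first in their assigned order; the rest tie at 10000 and fall back to alphabetical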
Another programmer suggested using DECODE in Oracle, or CASE statements in SQL Server, but this is a more general solution. YMMV.

Related

Modeling a correct star schema for SSAS Tabular

I'm using SSAS Tabular (PowerPivot) and need to design a data model and write some DAX.
I have 4 tables in my relational database-model:
Orders(order_id, order_name, order_type)
Spots (spot_id,order_id, spot_name, spot_time, spot_price)
SpotDiscount (spot_id, discount_id, discount_value)
Discounts (discount_id, discount_name)
One order can include multiple spots, but one spot (spot_id 1) can only belong to one order.
One spot can include different discounts, and every discount has one discount_value.
Ex:
Order_1 has spot_1 (spot_price 10), spot_2 (spot_price 20)
Spot_1 has discount_name_1(discount_value 10) and discount_name_2 (discount_value 20)
Spot_2 has discount_name_1(discount_value 15) and discount_name_3 (discount_value 30)
I need to write two measures: price(sum) and discount_value(average)
How do I correctly design a star schema with a fact table (or maybe two fact tables) so that in my PowerPivot cube I can get:
If I choose discount_name_1 I should get
order_1 with spot_1 and spot_2, and price on order_1 level will have value 50 and discount_value = 12.5
If I choose discount_name_3 I should get
order_1 with only spot_2 and price on order level = 20 and discount_value = 30
Fact(OrderKey, SpotKey, DiscountKey, DateKey, TimeKey, Spot_Price, Discount_Value, ...)
DimOrder, DimSpot, DimDiscount, etc....
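For reference, a rough relational sketch of that fact table (column types are assumptions, not given in the question):
CREATE TABLE Fact (
    OrderKey       INT NOT NULL,
    SpotKey        INT NOT NULL,
    DiscountKey    INT NOT NULL,
    DateKey        INT NOT NULL,
    TimeKey        INT NOT NULL,
    Spot_Price     DECIMAL(10, 2) NOT NULL,   -- repeated on every discount row of the same spot
    Discount_Value DECIMAL(10, 2) NOT NULL
);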
TotalPrice:=
SUMX(
    SUMMARIZE(
        Fact
        ,Fact[OrderKey]
        ,Fact[SpotKey]
        ,Fact[Spot_Price]
    )
    ,Fact[Spot_Price]
)
AverageDiscount:=
AVERAGE(Fact[Discount_Value])
The fact table is denormalized and you end up with the simplest star schema you can have.
The first measure deserves some explanation. [Spot_Price] is duplicated for any spot with multiple discounts, and we would get wrong results with a simple SUM(). SUMMARIZE() does a group-by on all the columns passed to it, following relationships if necessary (we're looking at a single table here, so there is nothing to follow).
SUMX() iterates over this table and accumulates the value of the expression in its second argument. The SUMMARIZE() has removed our duplicate [Spot_Price]s so we accumulate the unique ones (per unique combination of [OrderKey] and [SpotKey]) in a sum.
You say
One order can include multiple spots but one spot (spot_id 1) can only
belong to one order.
That is not supported by the table definitions you give just above that statement. In the table definitions, one order has only one spot but (unless you've added a unique index to Orders on spot_id) each Spot can have multiple orders. Each Spot can also have multiple discounts.
If you want to have the relationship described in your words, the table definitions should be:
Orders(order_id, order_name, order_type)
OrderSpot(order_id, spot_id) -- with a Unique index on spot_id)
Spots (spot_id, spot_name, spot_time, price)
or:
Orders(order_id, order_name, order_type)
Spots (spot_id, spot_name, spot_time, order_id, price)
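A rough DDL sketch of that second option (column types are guesses, not taken from the question):
CREATE TABLE Orders (
    order_id   INT PRIMARY KEY,
    order_name VARCHAR(100),
    order_type VARCHAR(50)
);

CREATE TABLE Spots (
    spot_id   INT PRIMARY KEY,
    spot_name VARCHAR(100),
    spot_time DATETIME,
    order_id  INT NOT NULL REFERENCES Orders(order_id),  -- each spot belongs to exactly one order
    price     DECIMAL(10, 2)
);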
You can create the SSAS cube with Order as the fact table, with one dimension in the Spot table. If you then add the SpotDiscount and Discount tables with their relations (SpotDiscount to Spot, Discount to SpotDiscount), you have a one-dimensional cube.
EDIT as per comments
Well, the Fact table would have order_id, order_name, order_type
The Dimension would be made up of the other 3 tables and have the columns you're interested in: probably spot_name, spot_time, spot_price, discount_name, discount_value.

How to enforce unique 2-tuple on oracle table?

I am trying to enforce the property that table Match should have all unique tuples (Team 1, Team 2). For example, let Team 1 = Detroit Pistons and Team 2 = Chicago Bulls. I do not want to allow (Detroit Pistons, Chicago Bulls) to be inserted into the table if (Chicago Bulls, Detroit Pistons) already exists.
How can I enforce this constraint?
A) The tuples are semantically identical. (I think this is your case.)
That means the tuple {Chicago Bulls, Detroit Pistons} means exactly the same thing as the tuple {Detroit Pistons, Chicago Bulls}. Use a CHECK constraint to impose an order on the two columns.
CHECK (column_1 < column_2)
That kind of constraint would allow {Chicago Bulls, Detroit Pistons}, but it would reject {Detroit Pistons, Chicago Bulls}. This is kind of like imposing a canonical form on otherwise free-form data.
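A minimal Oracle sketch (constraint names and column types are made up for illustration). Note that the CHECK only fixes the order; you still pair it with a unique constraint so the same canonical pair can't be inserted twice:
CREATE TABLE match (
    team1 VARCHAR2(50) NOT NULL,
    team2 VARCHAR2(50) NOT NULL,
    CONSTRAINT match_order_chk CHECK (team1 < team2),  -- canonical form: "lesser" team stored first
    CONSTRAINT match_pair_uq UNIQUE (team1, team2)     -- no duplicate pairings
);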
B) The tuples are semantically distinct.
That means the tuple {Chicago Bulls, Detroit Pistons} means one thing, and the tuple {Detroit Pistons, Chicago Bulls} means something else. For example, the first attribute might mean "home team", and the second might mean "visiting team". In this case, all you need is some kind of unique constraint on the pair of columns.
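In that case a sketch is just a plain unique constraint over the pair (names illustrative):
ALTER TABLE match ADD CONSTRAINT match_home_away_uq UNIQUE (team1, team2);
-- (Detroit Pistons, Chicago Bulls) and (Chicago Bulls, Detroit Pistons) are both allowed, but each only once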
You can create a unique function-based index:
CREATE UNIQUE INDEX unq_match ON match ( LEAST(team1,team2), GREATEST(team1,team2) );
LEAST() will get the "lesser" of the two teams (whether by ID or name, it doesn't matter) while GREATEST() will get the "greater" of the two. Unfortunately this particular solution doesn't scale up to tuples of three or more.

How to select the first record prior to/after a given timestamp in KDB?

I am currently just pulling in all records 1min leading up to the timestamp (e.g. if the timestamp I'm interested in is 2014.04.14T09:30):
select from Prices where timestamp within 2014.04.14T09:29 2014.04.14T09:30, stock=`GOOG
However, this is clearly not very robust. Sometimes the previous record may be at 09:25am and then the query returns nothing. Sometimes the query may return hundreds of records if there have been a lot of price changes, even though all I need is the last record returned.
I know this can be done with an asof join, but want to avoid it for the time being as Prices is simply too big at present.
I am also interested in doing the same, but in finding the first record after a given timestamp.
Note also that Prices is a splayed table
Select last record before the given timestamp:
q)select from Prices where timestamp<2014.04.14T09:30,stock=`GOOG,i=last i
Select first record after the given timestamp:
q)select from Prices where timestamp>2014.04.14T09:30,stock=`GOOG,i=first i
Use asof or aj to get the performance kdb+ is known for. The bigger Prices is, the more reason for doing so.
I would question your logic for avoiding aj. aj and asof use the bin operator which is binary search and hence more performant than scanning the timestamp column.
Let's create your table and run the solution from the other answer:
q)Prices:([]stock:`g#1000000?`GOOG,9?`4;timestamp:asc 2014.04.14+1000000?0t;price:1000000?100f;size:1000000?100j)
q)\t do[1000;select from Prices where timestamp<2014.04.14T09:30,stock=`GOOG,i=last i]
10205
We can make this a lot better by reordering the constraints:
q)\t do[1000;select from Prices where stock=`GOOG,timestamp<2014.04.14T09:30,i=last i]
2030
But nothing will beat this:
q)\t do[1000;Prices asof `stock`timestamp!(`GOOG;2014.04.14D09:30)]
9
By the way, you are using datetime in your question, which is deprecated, so I've replaced it with timestamp. This has no impact on performance.
A few more things to remember while using aj:
in-memory prices - the table should be `g#sym and time sorted within sym
on-disk prices - `p#sym and time sorted within sym
Also, in the case of partitioned/splayed tables, using where constraints (other than the date in a date-partitioned table) can severely impact performance.

Is there any problem with having too many columns in a table (database)?

I was wondering if there are any drawbacks or future problems with a table in a database which contains about 80 columns. There will be only VARCHARs, a few INTs and maybe 1 or 2 MESSAGE columns. I did some research on the net but there's nothing really talking about that kind of problem... In other terms, is it okay or even 'normal' to put that many columns in a table??
Thanks in advance!
You shouldn't have any real problems if the fields are mostly integers. Most DBMSes have a limit on row length, so a bunch of long columns can cause issues...but unless the varchar columns are very long, you're probably OK.
I've honestly never even needed to think about that, though -- with a properly normalized database, it's quite rare to ever need that many columns in a table.
The more columns you have, the more memory the server needs to process the records.
I recommend using a many-to-one relation scheme in this case.
Example of tables:
customer
id
name
email
...
ins_app_form (Insurance application form)
id
customer_id (relation with customer)
date
... (here comes some other data if you need)
ins_app_item (Insurance application form items/fields)
id
ins_app_form_id (relation with Insurance application form)
question (the name of a question in application form)
answer (customer's answer)
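In DDL form, that scheme might look roughly like this (column types are assumptions):
CREATE TABLE customer (
    id    INT PRIMARY KEY,
    name  VARCHAR(100),
    email VARCHAR(100)
);

CREATE TABLE ins_app_form (
    id          INT PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES customer(id),
    date        DATE
);

CREATE TABLE ins_app_item (
    id              INT PRIMARY KEY,
    ins_app_form_id INT NOT NULL REFERENCES ins_app_form(id),
    question        VARCHAR(255),  -- the name of a question on the form
    answer          VARCHAR(255)   -- the customer's answer
);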
So to show the application form with this scheme you will need to run a query:
SELECT
iaf.id AS application_id,
iaf.date AS `date`,
iai.question,
iai.answer
FROM ins_app_form AS iaf
LEFT JOIN ins_app_item AS iai ON iai.ins_app_form_id=iaf.id
WHERE iaf.customer_id=<ID of a customer>
This query will bring you something like this:
id date question answer
1 2014-03-31 "Year" "2008"
1 2014-03-31 "Car make" "Audi"
1 2014-03-31 "Car model" "Q7"
...

How to find the minimum fields needed to identify a unique row in a set of data

Say I have a bunch of data on some people. This could include Name, DOB, Address, Email, etc... Assume there are no unique identifiers (like an id column) on this data, but also assume that there are no repeating rows. I need to figure out the minimum set of fields I can use to query that data and return a unique row.
An example of a solution would be: "I can make a query that specifies a first name, dob, email, and zip, and that would return exactly one or zero rows."
Did I ask that in a way that makes sense? I am looking for a technique, algorithm, or software package that would solve this problem for a given set of data. Anything that could provide an answer would work. Thanks!
EXAMPLE DATA (the real stuff is much more complex):
FNAME LNAME DOB ZIP email
John Smith 1/1/12 77777 dude@fake.com
Sean Smith 1/2/08 77777 dude@fake.com
Sean William 4/2/07 77789 stuff@fake.com
Richard Ross 1/1/12 78989 foo@fake.com
The solution for this set of data would be (FNAME, LNAME) or (EMAIL, DOB) or (EMAIL, FNAME).
I think you will need an iterative approach.
Perhaps you can begin with each column and attempt to create a unique index.
If you succeed, you're done.
If you are unable to create the unique index, add another column and try again.
Repeat, adding columns, until you can successfully create the index.
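One way to test a candidate combination without actually building the index is a GROUP BY/HAVING check (a sketch; the people table name is made up for the example data):
-- No rows returned means (FNAME, LNAME) identifies every row uniquely;
-- otherwise add another column to the GROUP BY and try again.
SELECT FNAME, LNAME, COUNT(*) AS duplicates
FROM people
GROUP BY FNAME, LNAME
HAVING COUNT(*) > 1;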
