PostgreSQL multidimensional array search - arrays

I am a newbie to PostgreSQL and have been experimenting with it.
I have created a simple table:
CREATE TABLE items_tags (
    ut_id SERIAL PRIMARY KEY,
    item_id integer,
    item_tags_weights text[]
);
where:
item_id - the item ID with which these tags are associated
item_tags_weights - tags associated with the item, including their weights
Example entry:
--------------------
ut_id | item_id | item_tags_weights
---------+---------+-------------------------------------------------------------------------------------------------------------------------------
3 | 2 | {{D,1},{B,9},{W,3},{R,18},{F,9},{L,15},{G,12},{T,17},{0,3},{I,7},{E,14},{S,2},{O,5},{M,4},{V,3},{H,2},{X,14},{Q,9},{U,6},{P,16},{N,11},{J,1},{A,12},{Y,15},{C,15},{K,4},{Z,17}}
1000003 | 3 | {{Q,4},{T,19},{P,15},{M,14},{O,20},{S,3},{0,6},{Z,6},{F,4},{U,13},{E,18},{B,14},{V,14},{X,10},{K,18},{N,17},{R,14},{J,12},{L,15},{Y,3},{D,20},{I,18},{H,20},{W,15},{G,7},{A,11},{C,14}}
4 | 4 | {{Q,2},{W,7},{A,6},{T,19},{P,8},{E,10},{Y,19},{N,11},{Z,13},{U,19},{J,3},{O,1},{C,2},{L,7},{V,2},{H,12},{G,19},{K,15},{D,7},{B,4},{M,9},{X,6},{R,14},{0,9},{I,10},{F,12},{S,11}}
5 | 5 | {{M,9},{B,3},{I,6},{L,12},{J,2},{Y,7},{K,17},{W,6},{R,7},{V,1},{0,12},{N,13},{Q,2},{G,14},{C,2},{S,6},{O,19},{P,19},{F,4},{U,11},{Z,17},{T,3},{E,10},{D,2},{X,18},{H,2},{A,2}}
(4 rows)
where:
{D,1} - D = tag, 1 = tag weight
Well, I just want to list the item_ids that have the tag 'U', ordered by tag weight.
One way is to select ALL the tags from the database and do the processing (sorting) in a high-level language, then use the result set.
For this, I can do the following:
1) SELECT * FROM items_tags WHERE 'U' = ANY (item_tags_weights)
2) Extract and sort the information and display.
But considering that multiple items can be associated with a single tag, and assuming 10 million entries, this method will surely be sluggish.
Any idea how to produce this listing on the database side, perhaps with CREATE FUNCTION or similar?
Any pointers will be helpful.
Many thanks.

Have you considered normalization, i.e. moving the array field into another table? Apart from being easy to query and extend, it's likely to have better performance on larger databases.
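For example, a normalized layout and the query against it might look like this (a sketch; the item_tag table name and the index are illustrative, not from the question):

-- One row per (item, tag) pair instead of a text[] of {tag,weight} pairs
CREATE TABLE item_tag (
    item_id integer NOT NULL,
    tag     text    NOT NULL,
    weight  integer NOT NULL,
    PRIMARY KEY (item_id, tag)
);

-- Index so lookups by tag (ordered by weight) stay fast at 10 million rows
CREATE INDEX item_tag_tag_weight_idx ON item_tag (tag, weight DESC);

-- List item_ids having tag 'U', ordered by tag weight
SELECT item_id, weight
FROM item_tag
WHERE tag = 'U'
ORDER BY weight DESC;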

Can anyone suggest a method of versioning ATTRIBUTE (rather than OBJECT) data in DB

Take MySQL as an example DB to perform this in (although I'm not restricted to relational flavours at this stage), and Java-style syntax for model/DB interaction.
I'd like the ability to allow versioning of individual column values (and their corresponding types) as and when users edit objects. This is primarily in an attempt to drop the amount of storage required for frequent edits of complex objects.
A simple example might be
- Food (Table)
- id (INT)
- name (VARCHAR(255))
- weight (DECIMAL)
So we could insert an object into the database that looks like...
Food banana = new Food("Banana",0.3);
giving us
+----+--------+--------+
| id | name | weight |
+----+--------+--------+
| 1 | Banana | 0.3 |
+----+--------+--------+
if we then want to update the weight we might use
banana.weight = 0.4;
banana.save();
+----+--------+--------+
| id | name | weight |
+----+--------+--------+
| 1 | Banana | 0.4 |
+----+--------+--------+
Obviously though this is going to overwrite the data.
I could add a revision column to this table, which could be incremented as items are saved, and set a composite key that combines id and revision, but this would still mean storing ALL attributes of this object for every single revision:
- Food (Table)
- id (INT)
- name (VARCHAR(255))
- weight (DECIMAL)
- revision (INT)
+----+--------+--------+----------+
| id | name | weight | revision |
+----+--------+--------+----------+
| 1 | Banana | 0.3 | 1 |
| 1 | Banana | 0.4 | 2 |
+----+--------+--------+----------+
But in this instance we're going to be storing every single piece of data about every single item. This isn't massively efficient if users are making minor revisions to larger objects where Text fields or even BLOB data may be part of the object.
What I'd really like would be the ability to selectively store data discretely, so the weight could possibly be saved in a separate DB in its own right, able to reference the table, row and column that it relates to.
This could then be smashed together with a VIEW of the table that could sort of impose any later revisions of individual column data into the mix to create the latest version, but without the need to store ALL data for each small revision.
+----+--------+--------+
| id | name | weight |
+----+--------+--------+
| 1 | Banana | 0.3 |
+----+--------+--------+
+-----+------------+-------------+-----------+-----------+----------+
| ID | TABLE_NAME | COLUMN_NAME | OBJECT_ID | BLOB_DATA | REVISION |
+-----+------------+-------------+-----------+-----------+----------+
| 456 | Food | weight | 1 | 0.4 | 2 |
+-----+------------+-------------+-----------+-----------+----------+
I'm not sure how successful storing any data as a BLOB and then CASTing it back to its original data type might be, but I thought, since I was inventing functionality here, why not go nuts.
This method of storage would also be fairly dangerous, since table and column names are entirely subject to change, but hopefully this at least outlines the sort of behaviour I'm thinking of.
A table in 6NF has one CK (candidate key) (in SQL, a PK) and at most one other column. Essentially, 6NF allows each pre-6NF table column's update time/version and value to be recorded in an anomaly-free way. You decompose a table by dropping a non-prime column while adding a table holding that column plus the old CK's columns. For temporal/versioning applications you further add a time/version column, and the new CK is the old one plus it.
Adding a column of time/whatever interval (in SQL, start time and end time columns) instead of a time to a CK allows a kind of data compression, by recording the longest uninterrupted stretches of time or other dimension through which a column had the same value. One queries by an original CK plus the time whose value you want. You don't need this for your purposes, but the initial process of normalizing to 6NF and the addition of a time/whatever column should be explained in temporal tutorials.
Read about temporal databases (which deal both with "valid" data, i.e. times and time intervals, and with "transaction" times/versions of database updates) and 6NF and its role in them. (Snodgrass/TSQL2 is bad, Date/Darwen/Lorentzos is good, and SQL is problematic.)
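For the Food example, that decomposition might look roughly like this (a sketch with illustrative table names, using MySQL-style DDL):

-- 6NF-style decomposition: one non-key column per table,
-- with a revision column added to each key for versioning
CREATE TABLE food (
    id INT PRIMARY KEY
);

CREATE TABLE food_name (
    id       INT NOT NULL,
    revision INT NOT NULL,
    name     VARCHAR(255) NOT NULL,
    PRIMARY KEY (id, revision)
);

CREATE TABLE food_weight (
    id       INT NOT NULL,
    revision INT NOT NULL,
    weight   DECIMAL(10,2) NOT NULL,
    PRIMARY KEY (id, revision)
);

-- Changing only the weight stores one small row, not the whole object
INSERT INTO food_weight (id, revision, weight) VALUES (1, 2, 0.4);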
Your final suggested table is an example of EAV. This is usually an anti-pattern. It encodes a database into one or more tables that are effectively metadata. But since the DBMS doesn't know that, you lose much of its functionality. EAV is not called for if DDL is sufficient to manage tables with the columns that you need. Just declare appropriate tables in each database. (Which is really one database, since you expect transactions affecting both.) From that link:
You are using a DBMS anti-pattern EAV. You are (trying to) build part of a DBMS into your program + database. The DBMS already exists to manage data and metadata. Use it.
Do not have a class/table of metadata. Just have attributes of movies be fields/columns of Movies.
The notion that one needs to use EAV "so every entity type can be extended with custom fields" is mistaken. Just implement via calls that update metadata tables sometimes instead of just updating regular tables: DDL instead of DML.
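Applied to the Food example, the contrast might look like this (a sketch; the calories column and the generic attribute table are purely illustrative):

-- DDL: extend the entity's own table when a new attribute is needed
ALTER TABLE Food ADD COLUMN calories INT NULL;

-- ...rather than DML against a generic attribute/value table like the
-- one sketched in the question, e.g.:
-- INSERT INTO attribute_revisions (table_name, column_name, object_id, blob_data, revision)
-- VALUES ('Food', 'calories', 1, '105', 1);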

Can a 'skinny table' design be compensated with a view?

Context: simple webapp game for personal learning purposes, using postgres. I can design it however I want.
2 tables, 1 view (there are additional tables the view references that aren't important here)
Table: Research
col: research_id (foreign key to an outside table)
col: category (integer foreign key to category table)
col: percent (integer)
constraint (unique combination of the three columns)
Table: Category
col: category_id (primary key auto inc)
col: name(varchar(255))
notes: this table exists to capture the 4 categories of research I want in business logic, which I assume it is not best practice to hardcode as columns in the db
View: Research_view
col: research_id (from research table)
col: foo1 (one of the categories from category table)
col: foo2 (etc...)
col: other cols from other joins
notes: has insert/update/delete statements that use the above tables appropriately
The research table itself, I worry, qualifies as a "skinny table" (I hadn't heard the term until I just saw it in the iBATIS Manning book). For example, test data within it looks like:
| research_id | percent | category |
| 1 | 25 | 1 |
| 1 | 25 | 2 |
| 1 | 25 | 3 |
| 1 | 25 | 4 |
| 2 | 20 | 1 |
| 2 | 30 | 2 |
| 2 | 25 | 3 |
| 2 | 25 | 4 |
1) Does it make sense to have all columns in a table collectively define unique entries?
2) Does this 'smell' to you?
Couple of notes to start:
constraint (unique combination of the three columns)
It makes no sense to have a unique constraint that includes a single-column primary key. Including that column will cause every row to be unique.
notes: this table exists to capture the 4 categories of research I want in business logic and which I assume is not best practice to hardcode as columns in the db
If a research item/entity is required to have all four categories defined for it to be valid, they should absolutely be columns in the research table. I can't tell definitively from your statement whether this is the case or not, but your assumption is faulty if looked at in isolation. Let your model reflect reality as closely as possible.
Another factor is whether it's a requirement that additional categories may be added to the system post-deployment. Whether the categories are intended to be flexible vs. fixed should absolutely influence the design.
1) Does it make sense to have all columns in a table collectively define unique entries?
I would say it's not common, but I can imagine there are situations where it might be appropriate.
2) Does this 'smell' to you?
Hard to say without more details.
All that said, if the intent is to view and add research items with all four categories, I would say (again) that you should consider whether the four categories are semantically attributes of the research entity.
As a random example, things like height and weight might be considered categories of a person, but they would likely be stored flat on the person table, and not in a separate table.
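For completeness, if the skinny design is kept, the pivot view described in the question could be built with conditional aggregation, roughly like this (a sketch, assuming PostgreSQL, category ids 1-4, and the illustrative column names foo1..foo4):

CREATE VIEW research_view AS
SELECT r.research_id,
       MAX(CASE WHEN r.category = 1 THEN r.percent END) AS foo1,
       MAX(CASE WHEN r.category = 2 THEN r.percent END) AS foo2,
       MAX(CASE WHEN r.category = 3 THEN r.percent END) AS foo3,
       MAX(CASE WHEN r.category = 4 THEN r.percent END) AS foo4
FROM research r
GROUP BY r.research_id;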

Virtual fields in CakePHP 2 using multiple tables

I seem to really be struggling to find any information actually covering the use of virtual fields in CakePHP. Yes, I know there is official documentation on the CakePHP site; however, it does not cover the use of separate tables.
E.g.
Table: Products
ID | PRODUCT | PRICE | QUANTITY
1 | Butter | 2.50 | 250
2 | Flour | 6.00 | 16000
3 | Egg | 0.99 | 6
Table: Products_Recipes
Product_ID | Recipe_ID | Quantity | COST ("VIRTUAL FIELD")
1 | 1 | 200 |= SUM (Products.Price/Products.Quantity) * Products_Recipes.Quantity
2 | 1 | 400
3 | 1 | 3
Table: Recipes
ID | RECIPE | COST ("Virtual Field")
1 | Pastry | Sum of Costs where Recipe_id is 1
I'm a bit of a newbie to MySQL; however, I think this is the way I should be doing it. How do I then use virtual fields to access this information? I can get it to work in one model, but not to access other models.
Doug.
I assume your tables are such:
Products (id, product, price, quantity)
Products_Recipes (product_id, recipe_id, quantity)
Recipes (id, recipe)
(The missing fields are those you are trying to create.)
I find it easier to create this in MySQL first, then, once it works, decide whether MySQL or CakePHP is the right place for the production implementation. (And you require a bit more complexity than the CakePHP examples cover.)
To create a working solution in MySQL:
Create a select view for the products_recipes
CREATE VIEW vw_recipe_products AS
SELECT products_recipes.product_id,
       products_recipes.recipe_id,
       products_recipes.quantity,
       (products.price / products.quantity) * products_recipes.quantity AS cost
FROM products_recipes
JOIN products
  ON products_recipes.product_id = products.id;
(I don't think you should have the SUM operator here, since there is no need for a GROUP BY clause at this level.)
Then the recipes view:
CREATE VIEW vw_recipes_with_cost AS
SELECT recipes.id, recipes.recipe, SUM(vw_recipe_products.cost) AS cost
FROM recipes
JOIN vw_recipe_products
  ON recipes.id = vw_recipe_products.recipe_id
GROUP BY recipes.id, recipes.recipe;
Ultimately, I think you should be implementing the solution in MySQL due to the GROUP BY clause and the use of the intermediary view. (Obviously, there are other solutions, but take advantage of MySQL. I find the CakePHP implementation of virtual fields best suited to simple cases, e.g. concatenating two fields.) Once you create the views, you will need to create new models, or change the useTable variable of the existing models, to use the views instead.
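Once both views exist, the per-recipe cost can be queried directly (a usage sketch):

-- Cost of the Pastry recipe (id 1), computed from its ingredient costs
SELECT id, recipe, cost
FROM vw_recipes_with_cost
WHERE id = 1;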

Database design - storing a sequence

Imagine the following: there is a "recipe" table and a "recipe-step" table. The idea is to allow different recipe-steps to be reused in different recipes. The problem I'm having relates to the fact that in the recipe context, the order in which the recipe-steps show up is important, even if it does not follow the recipe-step table primary-key order, because this order will be set by the user.
I was thinking of doing something like:
recipe-step table:
id | stepName | stepDescription
-------------------------------
1 | step1 | description1
2 | step2 | description2
3 | step3 | description3
...
recipe table:
recipeId | step
---------------
1 | 1
1 | 2
1 | 3
...
This way, the order in which the steps show up in the step column is the order I need to maintain.
My concerns with this approach are:
if I have to add a new step between two existing steps, how do I query it? What if I just need to switch the order of two steps already in the sequence?
how do I make sure the order maintains its consistency? If I just insert or update something in the recipe table, it will pop up at the end of the table, right?
Is there any other way you would think of doing this? I also thought of having a previous-step and a next-step column in the recipe-step table, but I think it would be more difficult to make the recipe-steps reusable that way.
In SQL, tables are not ordered.
Unless you are using an ORDER BY clause, database engines are allowed to return records in any order they feel is fastest (for example, a covering index might have the data in a different order, and sometimes even SQLite creates temporary covering indexes automatically).
If the steps have a specific order in a specific recipe, then you have to store this information in the database.
I'd suggest adding this to the recipe table:
recipeId | step | stepOrder
---------------------------
1 | 1 | 1
1 | 2 | 2
1 | 3 | 3
2 | 4 | 1
2 | 2 | 2
Note:
The recipe table stores the relationship between recipes and steps, so it should be called recipe-step.
The recipe-step table is independent of recipes, so it should be called step.
You probably need a table that stores recipe information that is independent of steps; this table should be called recipe.
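A sketch of how that design might be queried and re-ordered (assuming the renamed tables recipe_step and step, the columns shown above, and PostgreSQL; the specific ids and positions are illustrative):

-- Steps of recipe 1, in their defined order
SELECT s.id, s.stepName, s.stepDescription
FROM recipe_step rs
JOIN step s ON s.id = rs.step
WHERE rs.recipeId = 1
ORDER BY rs.stepOrder;

-- Swap the order of two steps (positions 2 and 3) in recipe 1
UPDATE recipe_step
SET stepOrder = CASE stepOrder WHEN 2 THEN 3 WHEN 3 THEN 2 END
WHERE recipeId = 1 AND stepOrder IN (2, 3);

-- Insert a new step between positions 1 and 2:
-- shift the later steps down, then insert into the gap...
UPDATE recipe_step SET stepOrder = stepOrder + 1
WHERE recipeId = 1 AND stepOrder >= 2;
INSERT INTO recipe_step (recipeId, step, stepOrder) VALUES (1, 4, 2);
-- ...or number steps 10, 20, 30, ... so there is always room in between.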

Retrieving data from 2 tables that have a 1 to many relationship - more efficient with 1 query or 2?

I need to selectively retrieve data from two tables that have a 1 to many relationship. A simplified example follows.
Table A is a list of events:
Id | TimeStamp | EventTypeId
--------------------------------
1 | 10:26... | 12
2 | 11:31... | 13
3 | 14:56... | 12
Table B is a list of properties for the events. Different event types have different numbers of properties. Some event types have no properties at all:
EventId | Property | Value
------------------------------
1 | 1 | dog
1 | 2 | cat
3 | 1 | mazda
3 | 2 | honda
3 | 3 | toyota
There are a number of conditions that I will apply when I retrieve the data, however they all revolve around table A. For instance, I may want only events on a certain day, or only events of a certain type.
I believe I have two options for retrieving the data:
Option 1
Perform two queries: first query table A (with a WHERE clause) and store data somewhere, then query table B (joining on table A in order to use same WHERE clause) and "fill in the blanks" in the data that I retrieved from table A.
This option requires SQL Server to perform 2 searches through table A, however the resulting 2 data sets contain no duplicate data.
Option 2
Perform a single query, joining table A to table B with a LEFT JOIN.
This option only requires one search of table A but the resulting data set will contain many duplicated values.
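For illustration, Option 2 as a single query might look something like this (a sketch; TableA and TableB are placeholder names, and the WHERE clause is just an example condition):

SELECT a.Id, a.TimeStamp, a.EventTypeId,
       b.Property, b.Value
FROM TableA a
LEFT JOIN TableB b ON b.EventId = a.Id
WHERE a.EventTypeId = 12   -- same WHERE clause Option 1 would use
ORDER BY a.Id, b.Property;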
Conclusion
Is there a "correct" way to do this or do I need to try both ways and see which one is quicker?
For example, consider a join:
SELECT E.Id, E.Name FROM Employee E JOIN Dept D ON E.DeptId = D.Id
and a subquery something like this:
SELECT E.Id, E.Name FROM Employee E WHERE E.DeptId IN (SELECT Id FROM Dept)
When I consider performance, which of the two queries would be faster, and why?
I would EXPECT the first query to be quicker, mainly because you have an equivalence and an explicit JOIN. In my experience, IN is a very slow operator, since SQL normally evaluates it as a series of WHERE clauses separated by "OR" (WHERE x=Y OR x=Z OR ...).
As with ALL THINGS SQL, though, your mileage may vary. The speed will depend a lot on indexes (do you have indexes on both ID columns? That will help a lot) among other things.
The only REAL way to tell with 100% certainty which is faster is to turn on performance tracking (IO Statistics is especially useful) and run them both. Make sure to clear your cache between runs!
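On SQL Server, for example, that measurement might look something like this (a sketch; clearing caches this way is only appropriate on a test system):

-- Report logical/physical reads and timings for each statement
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Clear plan and buffer caches between runs (test systems only!)
DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;

SELECT E.Id, E.Name FROM Employee E JOIN Dept D ON E.DeptId = D.Id;

DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;

SELECT E.Id, E.Name FROM Employee E WHERE E.DeptId IN (SELECT Id FROM Dept);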