I have threads and their replies stored in the same table, named posts:
ID | PARENT_ID | CATEGORY_ID | CREATED_AT | UPDATED_AT
If the "PARENT_ID" not null then is a thread otherwise is a reply.
With a "CATEGORY_ID=3" i want to get all threads with pagination ordered by "UPDATED_AT" of last reply if there is one.
I think the best solution is not to go in this direction, but rather to update the parent whenever a child is created, using:
$parent->touch();
With this solution we don't need to look at the children to know the order of the parents.
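As a plain-SQL sketch of what this buys you (column names taken from the question; MySQL-style LIMIT assumed for the pagination, adjust for your database):
```
-- When a reply is saved, bump the parent's UPDATED_AT (roughly what touch() does).
UPDATE posts
SET    updated_at = CURRENT_TIMESTAMP
WHERE  id = :reply_parent_id;          -- PARENT_ID of the reply just created

-- Threads can then be paged with a simple ORDER BY, without joining to replies.
SELECT *
FROM   posts
WHERE  category_id = 3
  AND  parent_id IS NULL               -- threads only
ORDER  BY updated_at DESC
LIMIT  20 OFFSET 0;
```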
I am using SQL Server 2017.
What I want to achieve: The database contains tables for customers, coupon codes and suppliers. A customer is a company. A customer may be an independent company, it may belong to another company, and it may have "child companies" (the connection between child and parent companies are established through a ParentID column in the customers table).
Coupon codes are related to customers. When a "parent company" activates access a coupon code, the code is supposed to become available for all the parent company's child companies. Likewise, if a parent company deactivates access to a coupon code, child companies should also lose access. As of today, this has to be fixed manually. I am writing a trigger that will take over this manual task.
A table CompanyCodes stores the relationship between companies and coupon codes. Column "Customer" stores the company ID number, column "Supplier" stores the ID of the supplier of the coupon code and column "Code" stores the coupon code.
Whenever a row is inserted, updated or deleted into/from the CompanyCodes table, the trigger first checks whether or not the relevant customer has any child companies. If it does, the trigger is supposed to insert/update/delete a row for each of the company's child companies, so that the changes are reflected in all of them.
E.g. customer 1 is the parent of customer 2 and customer 3. Customer 1 activates access to a new coupon code for supplier X, and a new row is inserted into CompanyCodes with the relevant information. In this case, the trigger is supposed to insert two new rows into the table, one for customer 2 and one for customer 3, where all the information (except the company ID) is the same as what was inserted for customer 1.
Where I am stuck: How do I make this work in cases where the customer has several child companies?
Here's the relevant part of my query as it currently stands (this is all within an IF statement that is executed when the trigger-firing action is "INSERTED"):
DECLARE @insertedSupplier AS INT
DECLARE @insertedCode AS VARCHAR(50)
SET @insertedSupplier = (SELECT Supplier FROM inserted)
SET @insertedCode = (SELECT Code FROM inserted)
INSERT INTO dbo.CompanyCodes (Customer, Supplier, Code)
VALUES ((SELECT id FROM Customers WHERE Customers.ParentID = (SELECT Customer FROM inserted))
,@insertedSupplier, @insertedCode)
If there were a maximum of one possible child company, the above should work. But when there are several, "(SELECT id FROM Customers WHERE Customers.ParentID = (SELECT Customer FROM inserted))" will return several rows.
What would be the simplest (and preferably best-practice) way of doing what I want to do here?
Edit with example showing desired output:
Customers table
| Customer | ID | ParentID |
|----------|----|----------|
|Customer1 | 1 | NULL |
|Customer2 | 2 | 1 |
|Customer3 | 3 | 1 |
Customer1 then proceeds to activate access to a new coupon code, and the CompanyCodes table changes to the following:
| Customer | Supplier | Code |
|----------|----------|----------|
|Customer1 | 123456 | abcdefgh |
The trigger then fires, and adds one row for each of Customer1's two child companies (where all the info, except for the Customer name, is the same as for Customer1):
| Customer | Supplier | Code |
|----------|----------|----------|
|Customer1 | 123456 | abcdefgh |
|Customer2 | 123456 | abcdefgh |
|Customer3 | 123456 | abcdefgh |
Edit 2: Also, the way our systems work, there is never more than a single insert/update/delete of a row in the CompanyCodes table (no batch jobs).
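For reference, a set-based INSERT along these lines (a sketch using the table and column names described above, treating CompanyCodes.Customer as the company ID) handles any number of child companies in one statement, and would also keep working if several rows were ever inserted at once:
```
INSERT INTO dbo.CompanyCodes (Customer, Supplier, Code)
SELECT c.ID,          -- one row per child company of the inserted customer
       i.Supplier,
       i.Code
FROM   inserted AS i
JOIN   dbo.Customers AS c
  ON   c.ParentID = i.Customer;
```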
I'm currently designing my tables. I have three types of user: pyd, ppp and ppk. Which is better: inserting the data in one row or in multiple rows?
which is better?
or
Or any other suggestion? Thanks.
I would go for 3 tables:
user_type
typeID | typeDescription
Main_table
id_main_table | id_user | id_type
table_bhg_i
id_bhg_i | id_main_table | data1 | data2 | data3
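A rough sketch of those three tables (MySQL-style syntax assumed; the data columns are only placeholders):
```
CREATE TABLE user_type (
    typeID          INT AUTO_INCREMENT PRIMARY KEY,
    typeDescription VARCHAR(50) NOT NULL              -- 'pyd', 'ppp', 'ppk'
);

CREATE TABLE Main_table (
    id_main_table INT AUTO_INCREMENT PRIMARY KEY,
    id_user       INT NOT NULL,
    id_type       INT NOT NULL,
    FOREIGN KEY (id_type) REFERENCES user_type (typeID)
);

CREATE TABLE table_bhg_i (
    id_bhg_i      INT AUTO_INCREMENT PRIMARY KEY,
    id_main_table INT NOT NULL,
    data1         VARCHAR(255),
    data2         VARCHAR(255),
    data3         VARCHAR(255),
    FOREIGN KEY (id_main_table) REFERENCES Main_table (id_main_table)
);
```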
Although I see you are inserting IDs for each user, I don't quite understand how you are going to differentiate between the users. Had I designed this DB, I would have gone for tables like the following.
tableName: UserTypes
This table would contain two fields: the first would be the ID and the second would be the type of user, like
UsertypeID | UserType
UsertypeID is the primary key and can be auto-increment, while UserType would hold your user types: pyd, ppk and so on. Designing it this way gives you the flexibility to add more types later without changing the schema of the table.
Next, you can add a table for creating multiple users of a particular type. This table would reference the UsertypeID of the previous table, which makes adding new users easy and removes redundancy.
tableName: Users
This table would contain the user ID, the user name and the UserTypeID:
UserId | UserName | UserTypeID
The next thing you can do is make a table to hold the data; let the table be called DataTable.
tableName: DataTable
This table will contain the data of the users and will reference them easily:
DataTabID | DataFields (can be any in number) | UserID (references Users table)
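Putting that together, a rough sketch (MySQL-style syntax assumed; the data columns are just placeholders):
```
CREATE TABLE UserTypes (
    UsertypeID INT AUTO_INCREMENT PRIMARY KEY,
    UserType   VARCHAR(20) NOT NULL                  -- 'pyd', 'ppp', 'ppk', ...
);

CREATE TABLE Users (
    UserId     INT AUTO_INCREMENT PRIMARY KEY,
    UserName   VARCHAR(100) NOT NULL,
    UserTypeID INT NOT NULL,
    FOREIGN KEY (UserTypeID) REFERENCES UserTypes (UsertypeID)
);

CREATE TABLE DataTable (
    DataTabID INT AUTO_INCREMENT PRIMARY KEY,
    UserID    INT NOT NULL,
    Data1     VARCHAR(255),                          -- any number of data fields
    Data2     VARCHAR(255),
    FOREIGN KEY (UserID) REFERENCES Users (UserId)
);
```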
These tables should be more than sufficient. If you have any doubts, ask me in the chatbox.
I'm designing the database for a solution. I'm facing the following scenario:
The user can add a product. This product will belong to a specific operation: "SELL", "BUY", etc.
Another user can mark a product as one they are interested in, so I'll have a table that records which users are interested in which products.
I'm struggling to decide which approach to go with:
I can create one table for each operation, something like "ProductSell", "ProductBuy", etc. The same for interested users ("InterestedProductSell", "InterestedProductBuy", etc).
```
User            ProductSell     ProductBuy      InterestBuy                   InterestSell
____________    ___________     __________      ___________                   ____________
Id              Id              Id              ProductId (ProductBuy PK)     ProductId (ProductSell PK)
Name            Title           Title           UserId                        UserId
Username        UserId          UserId          Date                          Date
```
I can create one table for all operations, with a column named "Operation". Same for interested users.
```
User            Operation                 Product         Interest
____________    _________                 ___________     __________
Id              Id                        Id              ProductId (ProductBuy or ProductSell PK)
Name            Name (Buy, sell, etc)     Title           UserId
Username                                  UserId          Date
                                          Operation
```
Can you give me your opinions about these two approaches, or even a third approach that I haven't considered? I'm interested in things like performance, optimization, maintenance and coding; I'd like perspectives other than my own on this.
If it matters, I'm working with SQL Server.
Your 2nd approach, with a separate column for Operation, looks good:
user Table
uid
name
product Table
pid
name
userproduct Table
uid
pid
operation
time
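A sketch of that layout in SQL Server syntax (table and column names as above; the operation values are assumed to be plain strings like 'SELL' and 'BUY', though they could also be normalized into their own table):
```
CREATE TABLE [user] (
    uid  INT IDENTITY(1,1) PRIMARY KEY,
    name NVARCHAR(100) NOT NULL
);

CREATE TABLE product (
    pid  INT IDENTITY(1,1) PRIMARY KEY,
    name NVARCHAR(100) NOT NULL
);

CREATE TABLE userproduct (
    uid       INT NOT NULL REFERENCES [user] (uid),
    pid       INT NOT NULL REFERENCES product (pid),
    operation VARCHAR(20) NOT NULL,                 -- 'SELL', 'BUY', 'INTERESTED', ...
    [time]    DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    PRIMARY KEY (uid, pid, operation)
);
```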
I have a pair of tables in an Oracle database with a one-to-one parent-child relationship. Unfortunately the foreign key is defined in the parent, not the child:
----------------- -----------------
| messages | | payloads |
----------------- -----------------
| id | | id |
| payload_id |------->| content |
| creation_date | -----------------
-----------------
The relationship from messages.payload_id to payloads.id is enforced by a non-deferrable foreign key.
We have a query that deletes all messages and payloads where message creation date is after a certain time. Unfortunately, due to the backwards foreign key, the current query looks like this:
DELETE FROM messages WHERE creation_date < deletion_date;
DELETE FROM payloads WHERE id NOT IN (SELECT payload_id FROM messages);
The second nasty delete statement is the problem, as it takes more than an hour when we have ~50 million records in each table.
Is there a better way to delete all messages and payloads?
Note that unfortunately the schema is beyond our control...
You could log the IDs that you're going to delete into a global temporary table and then issue the deletes, optimising the delete from messages by storing the rowid as well:
insert into my_temp_table (messages_rowid, payload_id)
select rowid, payload_id
from messages
where creation_date < deletion_date;
delete from messages
where rowid in (select messages_rowid from my_temp_table);
delete from payloads
where id in (select payload_id from my_temp_table);
commit;
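For completeness, the global temporary table itself would be created once up front, something like this (a sketch; the column types are assumed from the schema above):
```
CREATE GLOBAL TEMPORARY TABLE my_temp_table (
    messages_rowid ROWID,
    payload_id     NUMBER
) ON COMMIT DELETE ROWS;   -- rows vanish automatically at commit
```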
How about
DELETE FROM payloads WHERE id IN
( SELECT payload_id FROM messages WHERE creation_date < deletion_date)
This needs to run before deleting from messages, of course.
I have 2 models that need to be linked by a HABTM relationship, with this table structure:
CATEGORIES:
id | name | ..
-----------------------
1 | test | ..
POSTS:
id | name | other_id | ..
---------------------------------
1 | test | 5 | ..
CATEGORIES_POSTS:
id | category_id | other_id
--------------------------------
1 | 1 | 5
I need to get the posts from the category side, but I don't seem to be able to set the HABTM relation correctly. The important thing that I didn't mention so far is that the key used in the Post model is not id but other_id. This is what I have tried so far (all in the Category model):
set the associationForeignKey to 'other_id'
in the SQL query it has the fragment CategoriesPost.other_id = Post.id -> wrong relation (it should be CategoriesPost.other_id = Post.other_id)
set the associationForeignKey to false and add a condition CategoriesPost.other_id = Post.other_id
now the sql fragment is CategoriesPost. = Post.id --> sql error
set the associationForeignKey to CategoriesPost.other_id = Post.other_id
well .. this is an error as well, as Cake takes the input as 1 field: CategoriesPost.other_id = Post.other_id = Post.id
I know I could achieve the relation through two hasMany links, but that gives me a lot of queries instead of one.
Thanks in advance!
Just change the Post model's primaryKey on the fly for the operations that need it.
To do so, you just need to do $this->primaryKey = 'other_id', or in a controller, $this->Post->primaryKey = 'other_id';
that will do the trick.
But remember: if you are retrieving data from all associations and you have more associations than this one, then any other association that uses Post.id is going to fail, since the primary key is now Post.other_id.
You should add a find function in your Post model for when you are using this association, something like this:
function otherFind($type, $options) {
    // switch the primary key so the HABTM join uses other_id
    $this->primaryKey = 'other_id';
    $result = $this->find($type, $options);
    // restore the default key so other associations keep working
    $this->primaryKey = 'id';
    return $result;
}
If you need to join it with other models it gets a little more tricky; I would recommend using joins for that (try looking at the Linkable behaviour code to see how).
I strongly suggest using only ONE primary key, since a second one is not really helpful. A primary key should be unique anyway, and you can associate anything with just one.
Cake can't customise the primary key to use on the join when doing a normal find.
You could use a custom join, if you really want: http://book.cakephp.org/view/1047/Joining-tables
Why exactly do you need two IDs? You are trying to join a post to a category; the IDs will be unique anyway, and as far as relating the two goes, the primary key should work just fine.