How to modify an already defined composite key in Liquibase - database

I need to alter a table to modify the order of the indexes created from the composite key for the below mentioned changeset.
<changeSet author="demo (generated)" id="demo-11">
    <createTable tableName="customersalesdata">
        <column name="id" type="BIGINT">
            <constraints unique="true" primaryKey="true" primaryKeyName="customersalesdata_pkey"/>
        </column>
        <column name="customerid" type="NVARCHAR(255)">
            <constraints primaryKey="true" unique="true" primaryKeyName="customersalesdata_pkey"/>
        </column>
    </createTable>
</changeSet>
The reason for altering it is that the order of the columns in an index makes a big difference. Since customerid is the second column of the index, the index will not be used for queries that filter only on customerid; the query performs an index scan because of this. Since the table has two indexes that start with id, having the columns in the order (id, customerid) is a waste in most cases.
So I need to change the column order to (customerid, id). Another problem is that customerid, which is part of the composite key, is referenced as a foreign key from another table.
My question is: should I drop the FK first, then drop the composite key, and then re-create the composite key in the order shown below?
<changeSet id="2">
<addPrimaryKey columnNames="customerid, id"
constraintName="customersalesdata_pkey"
tableName="customersalesdata"
validate="true"/>
</changeSet>
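(For reference, I gather the full sequence would be something like the following. This is a sketch only: the referencing table sales_order and the constraint name fk_salesorder_customer are placeholders, since I haven't shown that table; substitute the real names.)
<changeSet author="demo" id="demo-reorder-pk">
    <dropForeignKeyConstraint baseTableName="sales_order"
            constraintName="fk_salesorder_customer"/>
    <dropPrimaryKey tableName="customersalesdata"
            constraintName="customersalesdata_pkey"/>
    <addPrimaryKey tableName="customersalesdata"
            columnNames="customerid, id"
            constraintName="customersalesdata_pkey"/>
    <addForeignKeyConstraint baseTableName="sales_order"
            baseColumnNames="customerid"
            referencedTableName="customersalesdata"
            referencedColumnNames="customerid"
            constraintName="fk_salesorder_customer"/>
</changeSet>
Dropping and re-adding constraints only rebuilds constraint metadata and the underlying index; the table rows themselves are untouched.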
Or should I just create another index on top of the composite key, combining both fields as shown below?
<changeSet author="demo" id="demo-id">
    <createIndex tableName="customersalesdata" indexName="idxn_customer_id_id">
        <column name="customerid"/>
        <column name="id"/>
    </createIndex>
</changeSet>
Also, in both cases, is there any chance of data loss? Can you please suggest the best approach here?
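To make the index-order point concrete, here is a small repro I put together in SQLite (the real system uses a different engine, so plans will differ; the table is trimmed to just the two key columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customersalesdata (
    id         INTEGER,
    customerid TEXT,
    PRIMARY KEY (id, customerid)   -- original order: customerid is second
);
""")
conn.executemany("INSERT INTO customersalesdata VALUES (?, ?)",
                 [(1, "A"), (2, "B"), (3, "C")])

# The composite PK index leads with id, so a filter on customerid alone
# cannot seek it. Adding an index that leads with customerid fixes that:
conn.execute("CREATE INDEX idxn_customer_id_id "
             "ON customersalesdata (customerid, id)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM customersalesdata WHERE customerid = ?", ("B",)).fetchall()
print(plan)  # the plan detail should mention idxn_customer_id_id

# Creating an index never touches rows, so no data is lost:
count = conn.execute("SELECT COUNT(*) FROM customersalesdata").fetchone()[0]
print(count)
```

So option 2 carries no data-loss risk at all; it only adds a second access path.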

A similar question was asked a while back on this post.
I'll paraphrase the two most popular answers from that thread.
#1
You can read the Liquibase documentation, and there is also a similar problem to reference here. In the case of the situation presented in "Adding composite unique constraint in Liquibase" linked above, the solution is:
<changeSet author="liquibase-docs" id="addUniqueConstraint-example">
    <addUniqueConstraint
            columnNames="product_id, tournament_id"
            constraintName="your_constraint_name"
            tableName="person"/>
</changeSet>
#2
I am pretty certain that:
1. You can't do it inside the createTable tag itself, but you can do it within the same changeset in which the table is created.
2. It does create a composite unique constraint on the two columns. One way to check is to have Liquibase generate the SQL for an update rather than running the update itself, and then inspect what it would do to your database. On the command line, rather than running liquibase update, you would run liquibase updateSQL.

Related

Delete all records of a table which are not referenced in any other table; dozens of foreign tables; dynamic solution

I am looking for a solution to detect and delete all records of a table "UniqueKeys" which are no longer referenced by records in any other table. As my question seemed unclear, I have rephrased it.
Challenge:
There is a table called "UniqueKeys" consisting of an ID and a uniqueIdentifier column, and there are dozens of tables that reference the ID field of the "UniqueKeys" table. Some records in the "UniqueKeys" table have IDs that are not used by any of these other tables' references, and I want to be able to detect and delete them with a SQL query without hard-coding joins to all of these other tables.
The solutions found so far involved explicitly writing joins with each of the "other" tables, which I want to avoid here.
Like this: Other SO answer
The goal: a generic solution, so that at any time devs can add additional foreign tables and the solution is still able (without modification) to detect any references to table "X" (and avoid deleting such referenced records).
I know that I could simply iterate programmatically (in the programming language of my choice) through all records of table "UniqueKeys" and use exception handling to continue whenever a record cannot be deleted because of an active constraint.
This is what I am currently doing, and it yields the desired result, but IMHO this is a very ugly approach.
As I am no SQL expert, let me know how to better rephrase the above if that helps clarify what I am trying to achieve.
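To illustrate the kind of generic solution I am after, here is a sketch in Python against SQLite: it introspects the schema for every FK that points at "UniqueKeys", builds one NOT EXISTS clause per referencing column, and runs a single DELETE. (SQLite's PRAGMA foreign_key_list stands in for the catalog views of other engines, e.g. sys.foreign_key_columns on SQL Server; the tables OrderA/OrderB are invented for the demo.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE UniqueKeys (ID INTEGER PRIMARY KEY, uniqueIdentifier TEXT);
CREATE TABLE OrderA (id INTEGER PRIMARY KEY,
                     uk_id INTEGER REFERENCES UniqueKeys(ID));
CREATE TABLE OrderB (id INTEGER PRIMARY KEY,
                     uk_id INTEGER REFERENCES UniqueKeys(ID));
INSERT INTO UniqueKeys VALUES (1, 'a'), (2, 'b'), (3, 'c');
INSERT INTO OrderA VALUES (1, 1);   -- references ID 1
INSERT INTO OrderB VALUES (1, 2);   -- references ID 2; ID 3 is orphaned
""")

def delete_unreferenced(conn, target="UniqueKeys", pk="ID"):
    # Discover every (table, column) pair that references the target table,
    # so newly added referencing tables are picked up without code changes.
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name <> ?",
        (target,))]
    refs = []
    for t in tables:
        for fk in conn.execute(f"PRAGMA foreign_key_list({t})"):
            # row layout: (id, seq, table, from, to, on_update, on_delete, match)
            if fk[2] == target:
                refs.append((t, fk[3]))
    conds = " AND ".join(
        f"NOT EXISTS (SELECT 1 FROM {t} WHERE {t}.{col} = {target}.{pk})"
        for t, col in refs) or "1=1"
    conn.execute(f"DELETE FROM {target} WHERE {conds}")

delete_unreferenced(conn)
remaining = [r[0] for r in conn.execute("SELECT ID FROM UniqueKeys ORDER BY ID")]
print(remaining)  # only the unreferenced ID 3 is gone -> [1, 2]
```

The same shape works on any engine that exposes its FK metadata; only the introspection query changes.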

how to create sql server trigger and call it in mvc3 model

I have four tables, i.e.
custAddress
custCompany
custContact and
custInfo
All of the tables have the 'cId' field in common.
I have an interface in an MVC3 view from which I take input for all fields except 'cId'. When I submit input from the interface, all of the above tables must be filled.
Also, when I delete data from one table, all the corresponding data in the other three tables should be deleted.
I don't know how to use a trigger for this. Please explain how I can do this using a trigger or any other way. Any help is appreciated.
To me it sounds more like a database design problem. You do not need a trigger here. Keep one of your tables as the primary one (maybe custInfo) and make the other tables dependent on it (foreign key relationships). Use cascade delete constraints on the dependent tables. When you delete data from custInfo, the cascade delete constraint will take care of deleting the corresponding data from the dependent tables.
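A runnable sketch of that design, using SQLite for brevity (the non-cId columns are invented; on SQL Server you would declare the same FOREIGN KEY ... ON DELETE CASCADE constraints in your table definitions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per connection
conn.executescript("""
CREATE TABLE custInfo    (cId INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE custAddress (id INTEGER PRIMARY KEY, street TEXT,
    cId INTEGER REFERENCES custInfo(cId) ON DELETE CASCADE);
CREATE TABLE custCompany (id INTEGER PRIMARY KEY, company TEXT,
    cId INTEGER REFERENCES custInfo(cId) ON DELETE CASCADE);
CREATE TABLE custContact (id INTEGER PRIMARY KEY, phone TEXT,
    cId INTEGER REFERENCES custInfo(cId) ON DELETE CASCADE);
INSERT INTO custInfo    VALUES (1, 'Alice');
INSERT INTO custAddress VALUES (1, 'Main St', 1);
INSERT INTO custCompany VALUES (1, 'Acme', 1);
INSERT INTO custContact VALUES (1, '555-0100', 1);
""")

# One delete on the primary table is enough; no trigger needed.
conn.execute("DELETE FROM custInfo WHERE cId = 1")
counts = [conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
          for t in ("custAddress", "custCompany", "custContact")]
print(counts)  # cascade removed the dependent rows -> [0, 0, 0]
```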

hibernate - mapping PK to 2 columns

I searched the web for an answer, but all the answers refer to a composite-id PK.
I want to map two columns of type long to the PK.
One should be a regular generated id, and the other should be a regular long field.
I have the following mapping:
<class name="com.company.MyTable" table="My_Table">
    <id name="id" column="id">
        <generator class="assigned"/>
    </id>
    <property name="jobId" column="job_id" type="long" index="oes_job_id_idx"/>
    <property name="serverId" column="server_id" type="long"/>
</class>
I want to add the job_id column to the PK.
How do I do that?
Primary keys, by definition, should be the unique key with the fewest columns possible:
you can't have multiple primary keys
you shouldn't use an additional column in the primary key if the first column is already unique
It doesn't really give you a benefit to create a separate index either, so stick with the generated field as the primary key. Hibernate doesn't support this because it is the wrong thing to do.
Please look at these questions; there is a full solution for composite primary keys:
Mapping same class relation
And then
Mapping same class relation - continuation
Hope it helps.
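For reference, the composite-id pattern covered in those posts looks roughly like this in hbm.xml (sketch only; a composite-id class must be Serializable and implement equals/hashCode, and generated ids are not supported there, which is another reason to keep the single generated id):
<class name="com.company.MyTable" table="My_Table">
    <composite-id>
        <key-property name="id" column="id" type="long"/>
        <key-property name="jobId" column="job_id" type="long"/>
    </composite-id>
    <property name="serverId" column="server_id" type="long"/>
</class>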

Cascade Delete Use Case

I am pretty new to business analysis. I have to write requirements that cover both (for now) cascade delete (for two tables) and explicit delete for the rest of the tables.
I need some guidance on how to write the requirements for cascade deletion.
Delete child entities on parent deletion.
Delete collection members if the collection entity is deleted.
Actually it is hard to understand the task without context, and it also smells like university/college homework (we had one very similar to this).
Use the ON DELETE CASCADE option to specify whether you want rows deleted in a child table when corresponding rows are deleted in the parent table. If you do not specify cascading deletes, the default behavior of the database server prevents you from deleting data in a table if other tables reference it.
If you specify this option, later when you delete a row in the parent table, the database server also deletes any rows associated with that row (foreign keys) in a child table. The principal advantage to the cascading-deletes feature is that it allows you to reduce the quantity of SQL statements you need to perform delete actions.
For example, the all_candy table contains the candy_num column as a primary key. The hard_candy table refers to the candy_num column as a foreign key. The following CREATE TABLE statement creates the hard_candy table with the cascading-delete option on the foreign key:
CREATE TABLE all_candy (
    candy_num   SERIAL PRIMARY KEY,
    candy_maker CHAR(25)
);
CREATE TABLE hard_candy (
    candy_num    INT,
    candy_flavor CHAR(20),
    FOREIGN KEY (candy_num) REFERENCES all_candy
        ON DELETE CASCADE
);
Because ON DELETE CASCADE is specified for the dependent table, when a row of the all_candy table is deleted, the corresponding rows of the hard_candy table are also deleted. For information about syntax restrictions and locking implications when you delete rows from tables that have cascading deletes, see Considerations When Tables Have Cascading Deletes.
Source: http://publib.boulder.ibm.com/infocenter/idshelp/v10/index.jsp?topic=/com.ibm.sqls.doc/sqls292.htm
You don't write use cases for functionality; that is why it is hard to properly answer your question: we don't know the actor who interacts with the system, and of course we know nothing about the system, so we cannot tell you how to write a description of their interactions.
You should write your use cases first and from them derive the functionality.

SQL Server Mgmt Studio messing up my Database!

This has effectively ruined my day. I have a large number of tables with many FK relationships between them. One of the tables (let's call it table A) has a computed column, which is computed via a UDF with schemabinding and is also fulltext indexed.
If I edit any table (let's call it table B) that is in any way related (e.g., via FK) to the table with the fulltext-indexed computed column (table A), and I save it, the following happens:
Changes to the table (table B) are saved
I get the error "Column 'abcd' is not fulltext indexed." regarding table A, which I didn't even edit, and then "User canceled out of save dialog"
All FK relationships from table B to ALL TABLES are DELETED
What the hell is going on??? Can someone explain to me how this can happen?
I've had the same kind of problem. As Will A said, Management Studio will do the following steps to update a table and its foreign keys:
Create a new table called temp_
Copy contents from old table into new
Drop all constraints, indexes and foreign keys
Drop old table
Rename new table to be = old table
Recreate the foreign keys, indexes and constraints
I may have the first 3 in the wrong order but you get the idea.
In my case I've lost entire tables, not just the foreign keys. Personally I don't like the way it does it, as it can be VERY time consuming to recreate indexes on a table with lots of data in it. If it's a small change, I usually do it myself in T-SQL.
Review the change script before it executes; make sure it looks sensible.
@OMGPonies, why can't you drop a foreign key if there is data in the table? Of course you can. There are only restrictions on creating foreign keys on tables with data, and only if the data breaks the constraint. Even that can be avoided by using the WITH NOCHECK option when creating the key. Yes, I know it'll break when you try to update a broken result set.
