This has effectively ruined my day. I have a large number of tables with many FK relationships between them. One of the tables (let's call it table A) has a computed column, which is computed via a UDF with schemabinding and is also full-text indexed.
If I edit any table (let's call it table B) that is related in any way (e.g. via FK) to the table with the full-text indexed computed column (table A), and I save it, the following happens:
Changes to the table (table B) are saved
I get the error "Column 'abcd' is not full-text indexed." regarding table A, which I didn't even edit, and then "User canceled out of save dialog"
All FK relationships from table B to ALL TABLES are DELETED
What the hell is going on??? Can someone explain to me how this can happen?
I've had the same kind of problem. As Will A said, Management Studio will perform the following steps to update a table and its foreign keys:
Create a new table called temp_
Copy contents from old table into new
Drop all constraints, indexes and foreign keys
Drop old table
Rename the new table to the old table's name
Recreate the foreign keys, indexes and constraints
I may have the first 3 in the wrong order but you get the idea.
In my case I've lost entire tables, not just the foreign keys. Personally I don't like the way it does this, as it can be VERY time-consuming to have to recreate indexes on a table with lots of data in it. If it's a small change I usually do it myself in T-SQL.
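For example (a minimal sketch; the table and column names here are made up), small changes like these can be done directly and avoid the designer's drop-and-recreate cycle:

-- hypothetical small changes that don't require a table rebuild
ALTER TABLE dbo.TableB ADD Notes nvarchar(255) NULL;
ALTER TABLE dbo.TableB ALTER COLUMN ExistingColumn nvarchar(200) NULL;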
Review the change script before SSMS executes it, and make sure it looks sensible.
@OMGPonies, why can't you drop a foreign key if there is data in the table? Of course you can. There are restrictions on creating foreign keys on tables with data, but only if the existing data breaks the constraint. Even that can be avoided by using the WITH NOCHECK option when creating the key. Yes, I know it'll fail later when you try to update a row that violates the constraint.
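For reference, creating a foreign key with WITH NOCHECK might look roughly like this (a sketch with made-up names; existing rows are not validated, so SQL Server marks the constraint as not trusted):

ALTER TABLE dbo.TableB WITH NOCHECK
    ADD CONSTRAINT FK_TableB_TableA
    FOREIGN KEY (TableA_id) REFERENCES dbo.TableA (id);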
I want to delete all the rows from a table in an Oracle DB, i.e. table name: Address.
The table is referenced as a foreign key by other tables, for example by Customers.
What I want is: when I delete all the rows of the table Address, all rows of other tables which reference these records should also be deleted.
NOTE: I have not specified "on delete cascade" at the time of creating the table.
Any help is appreciated.
That really depends on what you mean.
By your description you probably mean a cascading delete.
But that makes no sense, since your table is the target of a foreign key: every "customer" would have an AddressID (int) column, and probably a NOT NULL one at that. So deleting all addresses would mean ... deleting the entire customer table? Or maybe DELETE FROM customer WHERE AddressID IS NOT NULL? Either way, that does not make sense.
Oh, I get it now. You are testing the boundaries of your ability. That actually makes sense in a DEV environment, but make sure you don't do stuff like that in production. A couple of principles which I have found to be very good practice:
Don't delete. If you want to "delete", simply add a column
IsDeleted bit NOT NULL DEFAULT (0)
because once a row is gone, it is gone forever. Changing a row from IsDeleted=0 to IsDeleted=1 is always easily reversible.
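A minimal sketch of that soft-delete approach, using the same T-SQL-style syntax as the answer (the Customers table and CustomerID value are just for illustration):

ALTER TABLE Customers ADD IsDeleted bit NOT NULL DEFAULT (0);

-- "delete" a row
UPDATE Customers SET IsDeleted = 1 WHERE CustomerID = 42;

-- and, because nothing is actually gone, it is trivial to undo
UPDATE Customers SET IsDeleted = 0 WHERE CustomerID = 42;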
Separate in your mind, with a clear line, DML (data manipulation language, the act of changing data) from DDL (data definition language, the act of changing table definitions etc., aka changing the schema). To delete all rows is to "reset" the table. Use TRUNCATE TABLE Customers or DROP and CREATE syntax (but really, don't try this at home, kids).
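To make that DML/DDL line concrete, a small sketch (same hypothetical Customers table; note that TRUNCATE TABLE will refuse to run if other tables still reference Customers through foreign keys):

DELETE FROM Customers;      -- DML: removes rows one by one, fires delete triggers, is fully logged
TRUNCATE TABLE Customers;   -- DDL-style reset: deallocates the data pages and resets any identity seed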
I've read a lot of tips and tutorials about normalization but I still find it hard to understand how and when we need normalization. So right now I need to know if this database design for an electricity monitoring system needs to be normalized or not.
So far I have one table with fields:
monitor_id
appliance_name
brand
ampere
uptime
power_kWh
price_kWh
status (ON/OFF)
This monitoring system monitors multiple appliances (TV, Fridge, washing machine) separately.
So does it need to be normalized further? If so, how?
Honestly, you can get away without normalizing every database. Normalization is good if the database is going to be a project that affects many people, or if there are performance issues and the database handles OLTP workloads. Database normalization in many ways boils down to having a larger number of tables, each with fewer columns. Denormalization involves having fewer tables with a larger number of columns.
I've never seen a real database with only one table, but that's ok. Some people denormalize their database for reporting purposes. So it isn't always necessary to normalize a database.
How do you normalize it? You need to have a primary key (on a column that is unique or a combination of two or more columns that are unique in their combined form). You would need to create another table and have a foreign key relationship. A foreign key relationship is a pair of columns that exist in two or more tables. These columns need to share the same data type. These act as a map from one table to another. The tables are usually separated by real-world purpose.
For example, you could have a table with status, uptime and monitor_id. This would have a foreign key relationship on monitor_id back to your original table. Your original table could then drop the uptime and status columns. You could have a third table with brands, models and the things that all models have in common (e.g., power_kWh, ampere, etc.). There could be a foreign key relationship to the first table based on model. Then the brand column could be eliminated (via a DDL command) from the first table, as this third table will relate to it via the model name.
To create new tables, you'll need to invoke the DDL command CREATE TABLE newTable, with a foreign key on the column that will in effect be shared by the new table and the original table. With foreign key constraints, the new tables will share a column. The tables will have less information in them (fewer columns) when they are highly normalized, but there will be more tables to accommodate and store all the data. This way you can update one table without locking all the other columns, as you would in a denormalized database with one big table.
Once the new tables have the data from the column or columns of the original table, you can drop those columns from the original table (except for the foreign key column). To drop columns, you need to invoke DDL commands (e.g., ALTER TABLE originalTable DROP COLUMN brand).
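A hedged sketch of what such a split might look like in SQL (all names and data types here are illustrative, and it assumes monitor_id is the primary key of the original table):

-- new table holding the frequently changing readings
CREATE TABLE monitor_status (
    monitor_id int NOT NULL,
    uptime     int,
    status     varchar(3),   -- ON / OFF
    CONSTRAINT FK_monitor_status_original
        FOREIGN KEY (monitor_id) REFERENCES originalTable (monitor_id)
);

-- then remove the moved columns from the original table
ALTER TABLE originalTable DROP COLUMN uptime;
ALTER TABLE originalTable DROP COLUMN status;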
In many ways, performance will be improved in a normalized database if you do many reads and writes (commit many transactions) against a table. If you use the table for reporting, and want to present all the data as it normally sits in one big table, normalizing the database will hurt performance.
By the way, normalizing the database can prevent redundant data. This can make the database consume less storage space and use less memory.
It is nice to have our database normalized. It helps us have efficient data because we can prevent redundancy, and it also saves storage. When normalizing tables we need to have a primary key in each table and use it to connect to other tables; when the primary key (unique in each table) appears in another table it is called a foreign key (used to connect to another table).
Sample: say you already have this table:
Table name : appliances_tbl
-inside here you have
-appliance_id : as the primary key
-appliance_name
-brand
-model
and so on about this appliance...
Next you have another table :
Table name : appliance_info_tbl (any name will do for a table, but it should relate to its fields)
-appliance_info_id : primary key
-appliance_price
-appliance_uptime
-appliance_description
-appliance_id : foreign key (so you can get the name of the appliance by using only its id)
and so on....
You can add more tables like that, but just make sure that you have a primary key in each table. You can also add the cardinality to make your normalization more understandable.
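A rough DDL sketch of the two sample tables described above (the column data types are assumptions, not part of the original answer):

CREATE TABLE appliances_tbl (
    appliance_id   int PRIMARY KEY,
    appliance_name varchar(100),
    brand          varchar(50),
    model          varchar(50)
);

CREATE TABLE appliance_info_tbl (
    appliance_info_id     int PRIMARY KEY,
    appliance_price       decimal(10, 2),
    appliance_uptime      int,
    appliance_description varchar(255),
    appliance_id          int,
    CONSTRAINT FK_appliance_info_appliance
        FOREIGN KEY (appliance_id) REFERENCES appliances_tbl (appliance_id)
);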
For some mystical reason I started the database design with the built-in Database Diagrams GUI designer (SQL Server Management Studio); actually I only did the first 2 tables (users and product) there, the rest were done using query commands.
It turns out that in the end there’s something I didn’t expect between:
users (table)
product(table)
I’ve created a foreign key column (“users_id”) in the “product” table pointing to the “users” table (column “users_id”).
Instead of having a one-to-many relation, it seems to be a one-to-one relation?
The users table is referencing the product table, and I don’t want this.
What is the problem?
edit: 4-sep-2014 10:48
I've dropped the FK_product_TO_users constraint and created a new one, but the results are still the same.
ALTER TABLE product
DROP CONSTRAINT FK_product_TO_users
GO
ALTER TABLE product
ADD
CONSTRAINT FK_product_TO_users
FOREIGN KEY (users_id)
REFERENCES users (users_id)
edit: 4-sep-2014 12:51
I've rebuilt the database using just queries, with no GUI help in the table design. The problem related to FK_product_TO_users was fixed; I still don't know why.
It turns out that after this the same issue is present in two other tables with 2 FK relations.
Besides this, inputting data in those tables seems to work fine.
I'm wondering if this is just a bug in the Database Diagram GUI?
This is a really interesting one.
You can do one thing: just delete the key FK_product_to_users and rebuild it.
You do NOT need to delete users_id from the product table.
My company has an application with a bunch of database tables that used to use a sequence table to determine the next value to use. Recently, we switched this to using an identity property. The problem is that in order to upgrade a client to the latest version of the software, we have to change about 150 tables to identity. To do this manually, you can right click on a table, choose design, change (Is Identity) to "Yes" and then save the table. From what I understand, in the background, SQL Server exports this to a temporary table, drops the table and then copies everything back into the new table. Clients may have their own unique indexes and possibly other things specific to the client, so making a generic script isn't really an option.
It would be really awesome if there was a stored procedure for scripting this task rather than doing it in the GUI (which takes FOREVER). We made a macro that can go through and do this, but even then, it takes a long time to run and is error prone. Something like: exec sp_change_to_identity 'table_name', 'column name'
Does something like this exist? If not, how would you handle this situation?
Update: This is SQL Server 2008 R2.
This is what SSMS seems to do:
1. Obtain and drop all the foreign keys pointing to the original table.
2. Obtain the indexes, triggers, foreign keys and statistics of the original table.
3. Create a temp_table with the same schema as the original table, with the Identity field.
4. Insert into temp_table all the rows from the original table (Identity_Insert ON).
5. Drop the original table (this will drop its indexes, triggers, foreign keys and statistics).
6. Rename temp_table to the original table name.
7. Recreate the foreign keys obtained in (1).
8. Recreate the objects obtained in (2).
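For a single table, a hedged T-SQL sketch of those steps might look like this (simplified: the table and column names are invented, and a real script also has to recreate the indexes, triggers, statistics and incoming foreign keys gathered in steps 1 and 2):

BEGIN TRANSACTION;

-- step 3: new table with the same schema, but with the IDENTITY property
CREATE TABLE dbo.Tmp_MyTable (
    id   int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    name varchar(100) NULL
);

-- step 4: copy the existing rows, preserving the old key values
SET IDENTITY_INSERT dbo.Tmp_MyTable ON;
INSERT INTO dbo.Tmp_MyTable (id, name)
SELECT id, name FROM dbo.MyTable;
SET IDENTITY_INSERT dbo.Tmp_MyTable OFF;

-- steps 5 and 6: swap the tables
DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.Tmp_MyTable', 'MyTable';

-- steps 7 and 8: recreate foreign keys, indexes, triggers and statistics here

COMMIT TRANSACTION;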
First, I want to talk a little about the foreign key constraint rule and how helpful it is. Suppose I have two tables: a primary table with a primary column called ID, and a foreign table which also has a primary column called ID. This column in the foreign table refers to the ID column in the primary table. If we don't establish any foreign key relation/constraint between those tables, we may run into many problems related to integrity.
If we create the foreign key relation between them, any changes to the ID column in the primary table will automatically be reflected in the ID column of the foreign table (changes here can be made by DELETE or UPDATE queries). Moreover, any changes to the ID in the foreign table should be constrained by the ID column in the primary table; for example, no new value should be inserted into or updated in the ID column of the foreign table unless it already exists in the ID column of the primary table.
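In an engine that enforces foreign keys, the behaviour described above corresponds roughly to a declaration like this (the table names are illustrative, not from the original post):

CREATE TABLE foreign_table (
    id INTEGER PRIMARY KEY
        REFERENCES primary_table (id)
        ON UPDATE CASCADE   -- changes to the parent ID propagate to the child row
        ON DELETE CASCADE   -- deleting a parent row removes its child row
);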
I know that SQLite doesn't support foreign key constraints (with the full behaviour detailed above) and I have to use TRIGGERs to work around this problem. I have used TRIGGERs to work around it successfully in one direction (any changes to the ID column in the primary table are reflected in the ID column of the foreign table), but the reverse direction (an error should be raised if a conflict occurs; for example, there are only the values 1, 2, 3 in the ID column of the primary table, but the value 2 in the ID column of the foreign table is updated to 4 -> it doesn't exist in the primary table -> an error should be thrown) is not easy. The difficulty is that SQLite also doesn't support the IF statement or the RAISERROR function. If these features were supported, I could work around it easily.
I wonder how you can use SQLite when it doesn't support some important features. Even working around it with TRIGGERs is not easy, and I think it's impossible unless you don't care about the reverse direction. (In fact, the reverse direction is not really necessary if you set up your SQL queries carefully, but who can be sure? Raising an error is a mechanism that reminds us to fix and correct things so they work exactly as intended, without corrupting data or letting bugs stay invisible.)
If you still don't know what I want, some last words: my purpose is to achieve the full functionality of the foreign key constraint, which is not supported in SQLite (you can create such a relationship, but it's fake, not real in the way you can benefit from it in SQL Server, SQL Server CE, MS Access or MySQL).
Your help would be highly appreciated.
PS: I really like SQLite because it is file-based, easy to deploy and supports large file sizes (an advantage over SQL Server CE), but some missing features have made me rethink many times; I'm afraid that if I go for it, my application may be unreliable and corrupt data unpredictably.
To answer the question that you have skillfully hidden in your rant:
SQLite allows the RAISE function inside triggers; because of the lack of control flow statements, this must be used with a SELECT:
CREATE TRIGGER check_that_id_exists_in_parent
BEFORE UPDATE OF id ON child_table
FOR EACH ROW
BEGIN
SELECT RAISE(ABORT, 'parent ID does not exist')
WHERE NOT EXISTS (SELECT 1
FROM parent_table
WHERE id = NEW.id);
END;
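A companion BEFORE INSERT trigger (sketched here along the same lines, not part of the original answer) guards new child rows the same way:

CREATE TRIGGER check_that_inserted_id_exists_in_parent
BEFORE INSERT ON child_table
FOR EACH ROW
BEGIN
    -- abort the insert if the new child id has no matching parent row
    SELECT RAISE(ABORT, 'parent ID does not exist')
    WHERE NOT EXISTS (SELECT 1
                      FROM parent_table
                      WHERE id = NEW.id);
END;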