Our company uses Sybase and we are planning on setting up a MobiLink system (a data replication system). We therefore need to change from using autoincrement columns to global autoincrement columns.
My question is: what steps do I need to take to get this working properly? There are already thousands of rows of data that used the regular autoincrement default.
I'm thinking I need to create a new column with a default of global autoincrement, fill it with data (number(*)), switch the PK to it, drop the old FKs, drop the old column, rename the new column to the old name, then re-apply the FKs.
Is there an easier way to accomplish what I need here?
thanks!
That's generally the way to go about it, but some of the specific statements you make cause me concern, as does the sequence. I am not sure of your experience level, so the terms you use may or may not be accurate.
For each table ...
... switch the PK to it
What about the FK values in the child tables? Or do you mean you will change them as well?
... drop the old FK's
Ok, that's the constraint.
... drop the old column, rename the new column to the old one, then re-apply the FK's.
What exactly do you mean by that? Add the FK constraints back in? That won't change the existing data; it will only apply to any new rows added.
Hope you see what I mean when I say the sequence of your tasks is suspect. Before you drop the old_PK_column in the parent, you need to:
1) Add the dropped FK constraints back in each child table.
2) For each child table: UPDATE all the FK values to the new_PK_column.
3) Then drop the old_PK_column.
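For what it's worth, a minimal sketch of that order of operations in SQL Anywhere-style SQL, assuming a parent table Orders (PK column order_id) and one child table OrderItems; all table, column and constraint names are illustrative, and the exact ALTER TABLE syntax varies by version. Copying the old key values into the new column (rather than renumbering with number(*)) means the child FK values need no remapping:
ALTER TABLE OrderItems DROP CONSTRAINT FK_OrderItems_Orders;   -- drop the old FK constraint
ALTER TABLE Orders ADD new_id INTEGER DEFAULT GLOBAL AUTOINCREMENT (1000000);
UPDATE Orders SET new_id = order_id;                           -- keep the existing key values
ALTER TABLE Orders DROP CONSTRAINT PK_Orders;                  -- drop the old PK constraint
ALTER TABLE Orders DROP order_id;                              -- drop the old column
ALTER TABLE Orders RENAME new_id TO order_id;                  -- rename the new column to the old name
ALTER TABLE Orders ADD PRIMARY KEY (order_id);
ALTER TABLE OrderItems ADD CONSTRAINT FK_OrderItems_Orders
    FOREIGN KEY (order_id) REFERENCES Orders (order_id);       -- re-apply the FK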
You're just changing the way PK values are generated, so it's enough to run:
ALTER TABLE <table>
modify <column> default global autoincrement (1000000);
to use a partition size of 1,000,000.
Also make sure you set the global database identifier in each db, for example:
SET OPTION PUBLIC.global_database_id = 10;
So the next PK that will be generated is 10,000,001 (global_database_id * partition size + 1).
Related
I want to delete all the rows from a table in an Oracle DB, i.e. table name: Address.
The table is referenced by foreign keys in other tables, for example in Customers.
What I want is: when I delete all the rows of the table Address, all rows of other tables which reference these records should also be deleted.
NOTE: I did not specify "on delete cascade" when creating the table.
Any help is appreciated.
That really depends on what you mean.
By your description you probably mean a cascading delete.
But that makes no sense: since your table is the target of a foreign key, every "customer" would have an AddressID (int) column, and probably a NOT NULL column as well. So deleting all addresses would be ... deleting the entire customer table? Or maybe DELETE FROM Customers WHERE AddressID IS NOT NULL? Either way, that does not make sense.
Oh, I get it now: you are testing the boundaries of your ability. That actually makes sense in a DEV environment, but make sure you don't do stuff like that in production. A couple of principles which I have found to be very good practice:
Don't delete. If you want to "delete", simply add a column:
IsDeleted bit NOT NULL DEFAULT (0)
because once a row is gone, it is gone forever. Changing the row from IsDeleted=0 to IsDeleted=1 is always easily reversible.
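A minimal sketch of the soft-delete idea (the bit type is T-SQL; in Oracle a NUMBER(1) column would play the same role; table and ID are illustrative):
ALTER TABLE Customers ADD IsDeleted bit NOT NULL DEFAULT (0);
UPDATE Customers SET IsDeleted = 1 WHERE CustomerID = 42;   -- "delete" the row
UPDATE Customers SET IsDeleted = 0 WHERE CustomerID = 42;   -- trivially undone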
Separate in your mind, with a clear line, DML (data manipulation language, the act of changing data) from DDL (data definition language, the act of changing the table definitions etc., aka changing the schema). To delete all rows is to "reset" the table. Use TRUNCATE TABLE Customers or DROP-and-CREATE syntax. (But really, don't try this at home, kids.)
I have a tool which uses SQL scripts to apply changes to a customer database. Often this involves changing a column definition (datatype etc.). The problem is that often there are primary keys applied by the user that we don't know about (and they don't remember), which trips up the process (e.g. when changing columns belonging to indexes or primary keys).
The requirement given to me is that this update process should be 'seamless', with no human involvement to prepare the ground. I have also researched this on this forum, and as far as I can see my particular question has not yet been asked.
I know how to disable and later rebuild all indexes on a database, and even those only in certain tables, but if the index is on a primary key I still can't change any column that is part of the primary key unless I explicitly drop the PK by name and later recreate it explicitly, which means I have to know about it at code time. I can probably write a query to find the name of the primary key on a table if one is there, but how do I know how to recreate it?
How can I, using Transact-SQL (or PL/SQL), detect, drop and then recreate the primary keys on given tables, without knowing at code time what they are or what columns belong to them? The key is that the tool cannot know in advance what the primary keys are on any given table, nor what they comprise. The SQL code must handle this itself.
Better still would be to detect if a known column belongs to a primary key, then drop and later recreate that after I have changed the column.
This needs to be done in both Oracle and SQL Server, ideally purely with SQL code.
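For reference, the detection half can be read from the system catalogs; here's a sketch for both platforms (MyTable is a placeholder):
-- SQL Server: PK name and its columns for one table
SELECT kc.name AS pk_name, c.name AS column_name
FROM sys.key_constraints kc
JOIN sys.index_columns ic
  ON ic.object_id = kc.parent_object_id AND ic.index_id = kc.unique_index_id
JOIN sys.columns c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE kc.[type] = 'PK' AND kc.parent_object_id = OBJECT_ID('dbo.MyTable');
-- Oracle: the same information from the data dictionary
SELECT cons.constraint_name, cols.column_name, cols.position
FROM user_constraints cons
JOIN user_cons_columns cols ON cols.constraint_name = cons.constraint_name
WHERE cons.constraint_type = 'P' AND cons.table_name = 'MYTABLE';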
TIA
I really don't understand why a customer would define their own primary keys for the tables. Moreover, I don't understand why you would let them. In my world, if a customer changes the schema in any way, this automatically means the end of support for them.
I would strongly advise against dropping and recreating primary keys on a production database. Any number of bad things can happen, leading to data loss.
And it's not just the PKs; you will have to drop the foreign key constraints first. And FKs may reference not only the PKs but unique constraints as well, so you have to deal with those too.
Your best bet would be to create a new table with the required schema, copy the data, drop the original table and rename the new one. Of course, you will have to handle the FKs, but it's easier. Check this link for an example:
http://sqlblog.com/blogs/john_paul_cook/archive/2009/09/17/script-to-create-all-foreign-keys.aspx
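A rough T-SQL sketch of that approach, with illustrative names and the FK handling omitted:
CREATE TABLE dbo.MyTable_New (ID int NOT NULL PRIMARY KEY, Amount decimal(18, 4) NOT NULL);  -- the desired schema
INSERT INTO dbo.MyTable_New (ID, Amount)
SELECT ID, Amount FROM dbo.MyTable;              -- copy the data across
DROP TABLE dbo.MyTable;                          -- drop any referencing FKs before this step
EXECUTE sp_rename 'dbo.MyTable_New', 'MyTable';  -- swap in the new table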
I'm inserting new rows into a SQLite table, but I don't want to insert duplicate rows.
I also don't want to specify every column in the database if possible.
I don't even know if this is possible.
I should be able to take my values and create a new row with them, but if they duplicate another row they should either overwrite the existing row or do nothing.
This is one of the very first steps in database design and normalization. You have to be able to explicitly define what you mean by a duplicate row, and then place a primary key constraint (or a unique constraint) on the columns in your table that represent that definition.
Before you can define what duplicate means, you have to define (or decide) exactly what the table is to contain, i.e., what real-world business domain entity or abstraction each row in the table represents, or will hold data for...
Once you have done this, the PK or unique constraint will stop you from inserting duplicate rows... The same PK will help you find the duplicate row when it does exist, and update it with the values of the non-duplicate-defining (non-PK) columns that differ from the values in the existing duplicate row. Only after all this has been done can an INSERT OR REPLACE (as defined by SQLite) process help. This command checks whether a duplicate row (as defined by your PK constraint) exists, and if it does, instead of inserting a new row, it updates the non-PK columns in that row with the values supplied by your query.
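For example, a minimal sketch where the PK defines what "duplicate" means (table and column names illustrative):
CREATE TABLE parts (name TEXT PRIMARY KEY, qty INTEGER);
INSERT OR REPLACE INTO parts (name, qty) VALUES ('widget', 5);
INSERT OR REPLACE INTO parts (name, qty) VALUES ('widget', 9);  -- replaces the first row instead of failing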
Your desires appear mutually contradictory. While Andrey's INSERT OR REPLACE answer will get you close to what you say you want, you should probably clarify for yourself what you really want.
If you don't want to specify every column, and you want a (presumably) partial row to update rather than insert, you should probably look at the unique constraint, and know that the ambiguity in your requirements was also faced by the SQL92 committee.
http://www.sqlite.org/lang_insert.html
insert or replace might interest you
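And if "do nothing" is the behaviour you want instead of overwriting, INSERT OR IGNORE is the companion form; a sketch, assuming a table whose name column is the PK or carries a unique constraint:
INSERT OR IGNORE INTO parts (name, qty) VALUES ('widget', 5);  -- silently skipped if 'widget' already exists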
I have a quick question for you SQL gurus. I have existing tables without a primary key column, and Identity is not set. Now I am trying to modify those tables by making an existing integer column the primary key and adding identity values for that column. My question is: should I first copy all the records from the table to a temp table before making those changes? Do I lose all the previous records if I run the T-SQL command to make the primary key and add the identity column on those tables? What approach should I take, such as:
1) Create a temp table to copy all the records from the table to be modified
2) Load all the records into the temp table
3) Make changes on the table schema
4) Finally load the records from the temp table to the original table.
Or
are there better ways than this? I really appreciate your help.
Thanks
Tools>Options>Designers>Table and Database Designers
Uncheck "Prevent saving changes that require table re-creation"
[Edit] I've tried this with populated tables and I didn't lose data, but I don't really know much about this.
Hopefully you don't have too many records in the table. What happens if you use Management Studio to change an existing field to identity is that it creates another table with the identity field set, turns identity insert on, inserts the records from the original table, then turns identity insert off. Then it drops the old table and renames the table it just created. This can be quite a lengthy process if you have many records. If so, I would script this out and then do it in a job that runs during off hours, because the table will be completely locked while you do this.
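A rough sketch of the shape of the script Management Studio generates (names illustrative, constraints and error handling omitted):
CREATE TABLE dbo.Tmp_MyTable (ID int NOT NULL IDENTITY(1, 1), Name varchar(50) NULL);
SET IDENTITY_INSERT dbo.Tmp_MyTable ON;
INSERT INTO dbo.Tmp_MyTable (ID, Name)
SELECT ID, Name FROM dbo.MyTable WITH (HOLDLOCK TABLOCKX);  -- locks the table for the copy
SET IDENTITY_INSERT dbo.Tmp_MyTable OFF;
DROP TABLE dbo.MyTable;
EXECUTE sp_rename 'dbo.Tmp_MyTable', 'MyTable';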
Just do all of your changes in Management Studio and copy/paste the generated script into a file. DON'T SAVE CHANGES at this point. Look over and edit that script as necessary; it will probably do almost exactly what you are thinking (it will drop the original table and rename the temp one to the original's name), but handle all constraints and FKs as well.
If your existing integer column is unique and suitable, there should be no problem converting it to a PK.
Another alternative, if you don't want to use the existing column: you can add a new PK column to the main table, populate it and seed it, then run update statements to update all other tables with the new PK.
Whatever way you do it, make sure you do a back-up first!!
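A minimal sketch of that conversion in T-SQL, assuming the existing column really is unique (names illustrative):
ALTER TABLE dbo.MyTable ALTER COLUMN ExistingID int NOT NULL;               -- PK columns must be NOT NULL
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (ExistingID);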
You can always add the IDENTITY column after you have finished copying your data around. You can also then reset the IDENTITY seed to the max integer + 1. That should solve your problems.
DBCC CHECKIDENT ('MyTable', RESEED, n)
Where n is the new seed value; the next row inserted will get n + 1.
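For example, to reseed from the current data (table and column names illustrative):
DECLARE @max int;
SELECT @max = MAX(ID) FROM dbo.MyTable;
DBCC CHECKIDENT ('dbo.MyTable', RESEED, @max);  -- the next row inserted gets @max + 1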
I want to learn the answer for different DB engines but in our case;
we have some records that are not unique for a column, and now we want to make that column unique, which forces us to remove duplicate values.
We use Oracle 10g. Is this reasonable? Or is this something like a goto statement :)? Should we really delete? What if we had millions of records?
To answer the question as posted: No, it can't be done on any RDBMS that I'm aware of.
However, like most things, you can work around it by doing the following.
Create a composite key, using a new column plus the existing column.
You can make it unique without deleting anything by adding a new column; call it PartialKey.
For existing rows you set PartialKey to a unique value (starting at zero).
Create a unique constraint on the existing column and PartialKey (you can do this because each of these rows will now be unique).
For new rows, only use a default value of zero for PartialKey; because zero has already been used, this will force the existing column to have unique values in the table.
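A minimal Oracle sketch of those steps, with t and val as illustrative table and column names:
ALTER TABLE t ADD (PartialKey NUMBER DEFAULT 0 NOT NULL);
UPDATE t SET PartialKey = ROWNUM - 1;                      -- existing rows get 0, 1, 2, ...
ALTER TABLE t ADD CONSTRAINT uq_val_partial UNIQUE (val, PartialKey);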
IMPORTANT EDIT
This is weak: if you delete the row with PartialKey 0, another row can then be added with a value that is already in the existing column, because the 0 in PartialKey will guarantee uniqueness.
You would need to ensure that either:
1) you never delete the row with PartialKey 0, or
2) you always have a dummy row with PartialKey 0, and you never delete it (or you immediately reinsert it automatically).
Edit: Bite the bullet and clean the data
If, as you said, you've just realised that the column should be unique, then you should (if possible) clean up the data. The above approach is a hack, and you'll find yourself writing more hacks when accessing the table (you may find you have two sets of logic for dealing with queries against that table: one for where the column IS unique, and one for where it's NOT). I'd clean this now or it'll come back and bite you in the arse a thousand times over.
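A minimal Oracle sketch of the cleanup, which arbitrarily keeps the first-stored row of each duplicate set (t and val are illustrative, and a backup first is a given):
DELETE FROM t
 WHERE ROWID NOT IN (SELECT MIN(ROWID) FROM t GROUP BY val);  -- keep one row per value
ALTER TABLE t ADD CONSTRAINT uq_val UNIQUE (val);             -- now the real constraint fits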
This can be done in SQL Server.
When you create a check constraint, you can set an option to apply it either to new data only or to existing data as well. The option of applying the constraint to new data only is useful when you know that the existing data already meets the new check constraint, or when a business rule requires the constraint to be enforced only from this point forward.
For example:
ALTER TABLE myTable
WITH NOCHECK
ADD CONSTRAINT myConstraint CHECK ( column > 100 )
You can do this using the ENABLE NOVALIDATE constraint state, but deleting is the much preferred way.
You have to set your records straight before adding the constraints.
In Oracle you can put a constraint in an ENABLE NOVALIDATE state. When a constraint is in the ENABLE NOVALIDATE state, all subsequent statements are checked for conformity to the constraint; however, any existing data in the table is not checked. A table with ENABLE NOVALIDATE constraints can contain invalid data, but it is not possible to add new invalid data to it. Enabling constraints in the NOVALIDATE state is most useful in data warehouse configurations that are uploading valid OLTP data.
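A sketch in Oracle, with t and val illustrative. Note that while duplicates are still present, the unique constraint has to be backed by a non-unique index, hence the explicit CREATE INDEX:
CREATE INDEX idx_val ON t (val);
ALTER TABLE t ADD CONSTRAINT uq_val UNIQUE (val)
  USING INDEX idx_val ENABLE NOVALIDATE;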