I want to do horizontal fragmentation on a table, say employee(attributes), which is in the 'Employee' database on 'server1', and the fragments need to be stored in tables in other databases on the same server. I am currently using rules so that changes can be replicated: for example, if an insert happens on one of the fragments, the inserted values must also be stored in the main table. For this I need to write cross-database referencing queries. Can anyone tell me how to write such queries in pgAdmin, especially for creating rules? For example:
CREATE RULE employee1 AS
ON INSERT TO employee
WHERE ( NEW.dno BETWEEN 1 AND 10 )
DO INSTEAD
INSERT INTO employee1 VALUES ( NEW.eno, NEW.ename, NEW.title, NEW.dno );
where tables employee and employee1 reside in different databases on the same server.
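PostgreSQL rules cannot reference tables in another database directly, so the cross-database step usually goes through a connection function such as dblink. A minimal sketch of the idea, assuming the dblink extension is available and a fragment database named fragments_db (a hypothetical name):
-- Run once in the 'Employee' database:
CREATE EXTENSION IF NOT EXISTS dblink;

-- Redirect qualifying inserts to employee1 in the other database via dblink.
CREATE RULE employee1 AS
ON INSERT TO employee
WHERE ( NEW.dno BETWEEN 1 AND 10 )
DO INSTEAD SELECT dblink_exec(
    'dbname=fragments_db',          -- hypothetical connection string
    format('INSERT INTO employee1 VALUES (%s, %L, %L, %s)',
           NEW.eno, NEW.ename, NEW.title, NEW.dno)
);
A trigger calling dblink_exec (or a foreign table via postgres_fdw) is usually more robust than a rule for this, but the sketch shows the cross-database call itself.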
I have multiple environments for an application, like DEV, TEST, UAT, PROD.
I need to copy some objects from the database in the UAT environment into the PROD environment. Each object is stored in the DB spread across multiple tables. Most of the tables have an IDENTITY (autogenerated) PK. I don't have access to the PROD db data (it is sensitive data in general).
What I need is to generate a SQL script for inserting the object that does not preserve the ID values but instead uses the IDs assigned in the target environment for the related records.
Example: let's say an object Order is composed of an [Order] row and a list of [OrderItem] rows. I would need to select one specific row in the [Order] table, specify that the related rows from [OrderItem] should be included, and generate a script that inserts a new [Order] row, gets the value of the assigned Order.Id, keeps it in a variable, and uses it for inserting the [OrderItem] rows. This is a trivial example; my object is spread across many more tables, but the concept is the same.
Is there any tool for doing this? All the scripting utilities I tried preserve the values of identity columns.
I think you would need to write custom code to achieve this: first load the parent table, then load the child table based on the SCOPE_IDENTITY() value.
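For example, the custom code could follow this pattern (a minimal sketch; the non-key column names are hypothetical):
DECLARE @NewOrderId int;

-- Insert the parent row; the identity value is generated here.
INSERT INTO [Order] (CustomerName, OrderDate)
VALUES ('Acme', GETDATE());

-- Capture the identity value assigned in this scope.
SET @NewOrderId = SCOPE_IDENTITY();

-- Use it for the child rows.
INSERT INTO [OrderItem] (OrderID, ProductName, Quantity)
VALUES (@NewOrderId, 'Widget', 3);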
Instead, if you don't have a huge number of rows, I would suggest the following steps:
Load the data from Production to another environment using an SSIS package or other means
Add a foreign key constraint on the child table with UPDATE CASCADE:
ALTER TABLE ChildTable ADD CONSTRAINT FK_OrderDetail_Order FOREIGN KEY([OrderID])
REFERENCES [dbo].[Order] ([OrderID])
ON UPDATE CASCADE
Now, renumber the key values using some logic. Note that SQL Server does not allow an IDENTITY column to be UPDATEd, even with SET IDENTITY_INSERT ON (that setting only affects explicit INSERTs), so in practice you would first need to drop the IDENTITY property or renumber by re-inserting the rows. If OrderID were a plain int column, the renumbering would look like:
UPDATE [Order] SET OrderID = OrderID + 1000000 -- have some other logic for random generation
With the UPDATE CASCADE constraint in place, the child table then picks up the new OrderID values automatically.
UPDATE
This SO link talks about automating the code generation for INSERT scripts: What is the best way to auto-generate INSERT statements for a SQL Server table?
Also, I would suggest scripting out the parent tables first, followed by the child tables.
You can identify parent and child tables using the script below, and then generate your scripts according to the parent/child dependency.
SELECT object_name(parent_object_id) AS childTable,
       object_name(referenced_object_id) AS parentTable
FROM sys.foreign_keys
WHERE object_name(parent_object_id) IN ('Table1', 'Table2') -- your comma-separated list of tables
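If you need a full ordering rather than just the direct pairs, a recursive CTE along these lines can rank each table by its depth in the FK graph (a sketch; it assumes there are no circular FK chains between different tables):
;WITH TableDeps AS (
    -- Tables that reference no other table sit at level 0.
    SELECT t.object_id, 0 AS DepLevel
    FROM sys.tables t
    WHERE t.object_id NOT IN (SELECT parent_object_id FROM sys.foreign_keys)
    UNION ALL
    -- A referencing (child) table sits one level below each table it references.
    SELECT fk.parent_object_id, d.DepLevel + 1
    FROM sys.foreign_keys fk
    JOIN TableDeps d ON fk.referenced_object_id = d.object_id
    WHERE fk.parent_object_id <> fk.referenced_object_id -- skip self-references
)
SELECT object_name(object_id) AS TableName, MAX(DepLevel) AS DepLevel
FROM TableDeps
GROUP BY object_id
ORDER BY MAX(DepLevel); -- script lower levels (parents) first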
I have a unique requirement: I have a data list in Excel format, and once every year I import this data into SQL Server 2008 R2 using SQL Server's import functionality. In the table "Patient_Info" I have a primary key set on the column "MemberID", and when I import the data without any duplicates, all is well.
But sometimes when I get this data, some of the patients' info is repeated with an updated address / telephone, etc., under the same MemberID. Since I set MemberID as the primary key, that record gets left out of the import, and thus I don't have an updated record for that patient.
EDIT
I am not sure how to achieve this, i.e. updating the rows that have existing MemberIDs, and any pointer to this is greatly appreciated.
This is not a terribly unique requirement.
One acceptable pattern you can use to resolve this problem would be to import your data into a "staging" table. The staging table would have the same structure as the target table to which you're importing, but it would be a heap: it would not have a primary key.
Once the data is imported, you would then use queries to consolidate newer data records with older data records by MemberID.
Once you've consolidated all same MemberID records, there will be no duplicate MemberID values, and you can then insert all the staging table records into the target table.
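For example, the consolidation step could keep only the newest staging row per MemberID (a sketch; it assumes the staging table, named Patient_Info_Stage as in the MERGE example below, has a column such as LoadDate identifying the newest row, which is an assumption here):
;WITH Ranked AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY MemberID
                              ORDER BY LoadDate DESC) AS rn -- newest row gets rn = 1
    FROM Patient_Info_Stage
)
DELETE FROM Ranked
WHERE rn > 1; -- remove everything but the newest row per MemberID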
EDIT
As @Panagiotis Kanavos suggests, you can use a SQL MERGE statement to both insert new records and update existing records from your staging table to the target table.
Assume that the Staging table is named Patient_Info_Stage, the target table is named Patient_Info, and that these tables have similar schemas. Also assume that field MemberId is the primary key of table Patient_Info.
The following MERGE statement will merge the staging table data into the target table:
BEGIN TRAN;

MERGE Patient_Info WITH (SERIALIZABLE) AS Target
USING Patient_Info_Stage AS Source
    ON Target.MemberId = Source.MemberId
WHEN MATCHED THEN UPDATE SET
     Target.FirstName = Source.FirstName
    ,Target.LastName = Source.LastName
    ,Target.Address = Source.Address
    ,Target.PhoneNumber = Source.PhoneNumber
WHEN NOT MATCHED THEN INSERT (
     MemberId
    ,FirstName
    ,LastName
    ,Address
    ,PhoneNumber
) VALUES (
     Source.MemberId
    ,Source.FirstName
    ,Source.LastName
    ,Source.Address
    ,Source.PhoneNumber
);

COMMIT TRAN;
NOTE: The T-SQL MERGE operation is not atomic, and it is possible to get into a race condition with it. To ensure it will work properly, do these things:
Ensure that your SQL Server is up-to-date with service packs and patches (current rev of SQL Server 2008 R2 is SP3, version 10.50.6000.34).
Wrap your MERGE in a transaction (BEGIN TRAN; ... COMMIT TRAN;)
Use the SERIALIZABLE hint to help prevent a potential race condition with the T-SQL MERGE statement.
Is there any way to do the following:
I have an empty SQLite database and another SQLite database with one table A. Table A has some records with unpredictable rowids because of several deletes and inserts in the past.
I'd like to copy/clone table A into the empty database (for example with ATTACH DATABASE and CREATE TABLE ... AS SELECT), but I'd also like to preserve the old rowids of the original table A, so that the copied/cloned table A has the same rowid for each row.
Is there any way to do that? (No backup/rollback tools.)
You can copy over the values in the rowid columns like those in any other one:
INSERT INTO db2.MyTable(rowid, Name, whatever)
SELECT rowid, Name, whatever FROM MyTable;
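Put together with the ATTACH step mentioned in the question, the whole copy could look like this (a sketch; the file name new.db and the column names are hypothetical):
-- Run inside the original database:
ATTACH DATABASE 'new.db' AS db2;

-- Create the empty copy explicitly: CREATE TABLE ... AS SELECT would
-- assign fresh rowids, so create the table and insert instead.
CREATE TABLE db2.MyTable(Name TEXT, whatever TEXT);

INSERT INTO db2.MyTable(rowid, Name, whatever)
SELECT rowid, Name, whatever FROM MyTable;

DETACH DATABASE db2;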
Is there a natural option to establish a relationship between a table and a view, or should I use a trigger as a workaround to check data consistency?
I have a lookup view (for some reason I need it to be a view and not a table).
I want to insert records into a different table. One of the values of the record I want to insert MUST be one of the IDs from the lookup view.
For example:
ViewCities (CityId, CityName) -- this is the lookup view; the table behind the view is located in a different database.
Now I want to insert a new row into tblUsers. One of the row's columns is CityId. I want to ensure that no one can insert a row into tblUsers with a CityId that does not exist in ViewCities.
You have two options that I am aware of to maintain referential integrity. You cannot use a foreign key constraint because you said that the tables are in two separate databases. The options are:
1. Use triggers, as you had mentioned.
2. Use a check constraint that calls a user-defined function to do the check.
For example:
Let's say I have a database named test, and the other database is the Northwind database. In my test database I want to create a table which records names of users. The check I want to enforce is that the user name must be one of the LastNames in the Employees table of the Northwind database. I first create a UDF like so:
create function chk_name (@name varchar(50))
returns bit
as
begin
    declare @name_found bit = 0
    if exists (select * from Northwind..Employees where LastName = @name)
    begin
        set @name_found = 1
    end
    return @name_found
end
Then, I create the table with a check constraint like so:
create table tst
(
    name varchar(50) check (dbo.chk_name(name) = 1)
)
Now, if you try to insert a row into the tst table, the name must be one of the LastNames in the Employees table of the Northwind database.
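For completeness, option 1 (a trigger) could look like the sketch below, using the tblUsers and ViewCities names from the question:
create trigger trg_tblUsers_CheckCity
on tblUsers
after insert, update
as
begin
    -- Reject the statement if any inserted CityId is missing from the view.
    if exists (select 1
               from inserted i
               where not exists (select 1 from ViewCities c
                                 where c.CityId = i.CityId))
    begin
        raiserror('CityId does not exist in ViewCities.', 16, 1)
        rollback transaction
    end
end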
I have a situation where I need to change the order of the columns / add new columns to an existing table in SQL Server 2008.
Existing column order:
MemberName
MemberAddress
Member_ID(pk)
and I want this order
Member_ID(pk)
MemberName
MemberAddress
I found the answer:
1. Go to SQL Server Management Studio → Tools → Options → Designers → Table and Database Designers and unselect "Prevent saving changes that require table re-creation".
2. Open the table in design view, drag your columns up and down, and save your changes.
It is not possible with an ALTER statement. If you wish to have the columns in a specific order, you will have to create a new table, use INSERT INTO newtable (col-x, col-a, col-b) SELECT col-x, col-a, col-b FROM oldtable to transfer the data from the old table to the new table, delete the old table, and rename the new table to the old table's name.
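Spelled out for the table from the question (a sketch; the table name Member and the column types are assumptions):
CREATE TABLE Member_New
(
    Member_ID int NOT NULL PRIMARY KEY,
    MemberName varchar(100),
    MemberAddress varchar(200)
);

INSERT INTO Member_New (Member_ID, MemberName, MemberAddress)
SELECT Member_ID, MemberName, MemberAddress FROM Member;

DROP TABLE Member;
EXEC sp_rename 'Member_New', 'Member';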
This is not necessarily recommended because it does not matter which order the columns are in the database table. When you use a SELECT statement, you can name the columns and have them returned to you in the order that you desire.
If your table doesn't have any records, you can just drop and re-create the table.
If it has records, you can do it using SQL Server Management Studio:
Right-click your table, click Design, arrange the columns by dragging the fields into the order you want, then click Save.
I tried this and don't see any way of doing it directly, so here is my approach.
Right-click the table and choose Script Table as → CREATE To, and keep that script open in a SQL query window.
EXEC sp_rename 'Employee', 'Employee1' -- original table name is Employee
Execute the Employee CREATE script, making sure you arrange the columns in the order you need.
Then copy the data over, listing the columns in the new order (SELECT * would map columns by position):
INSERT INTO Employee (Name, Company) SELECT Name, Company FROM Employee1
DROP TABLE Employee1
Relying on column order is generally a bad idea in SQL. SQL is based on relational theory, where order is never guaranteed, by design. You should treat all your columns and rows as having no order, and instead write your queries to produce the correct results:
For Columns:
Try not to use SELECT *; instead, specify the order of columns in the select list, as in: SELECT Member_ID, MemberName, MemberAddress FROM TableName. This guarantees the order and eases maintenance if columns get added.
For Rows:
Row order in your result set is only guaranteed if you specify the ORDER BY clause.
If no ORDER BY clause is specified, the result set may differ, because the query plan or the physical layout of the database pages might have changed.
This can be an issue when using Source Control and automated deployments to a shared development environment. Where I work we have a very large sample DB on our development tier to work with (a subset of our production data).
Recently I did some work to remove one column from a table and then add some extra ones on the end. I then had to undo the column removal, so I re-added the column on the end. The table and all references are now correct in the environment, but the Source Control automated deployment no longer works because it complains about the table definition changing.
The real problem here is that the table + indexes are ~120GB and the environment only has ~60GB free so I'll need to either:
a) Rename the existing columns which are in the wrong order, add new columns in the right order, update the data then drop the old columns
OR
b) Rename the table, create a new table with the correct order, insert to the new table from the old and delete from the old as I go along
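Option (b) can be done in batches to keep space usage down. A sketch (the table and column names are hypothetical, and the OUTPUT INTO target must not yet have foreign keys or triggers):
-- Move rows in small batches so disk usage and the transaction log stay manageable.
WHILE EXISTS (SELECT 1 FROM BigTable_Old)
BEGIN
    DELETE TOP (10000)
    FROM BigTable_Old
    OUTPUT deleted.Id, deleted.Col1, deleted.Col2
    INTO BigTable_New (Id, Col1, Col2);
END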
The SSMS/TFS schema compare option of using a temp table won't work because there isn't enough room on disk to do it.
I'm not trying to say this is the best way to go about things, or that column order really matters; I just have a scenario where it is an issue, and I'm sharing the options I've thought of to fix it.
MySQL query to move the id column into first position:
ALTER TABLE `student` CHANGE `id` `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT FIRST;
or, to place it after a specific column:
ALTER TABLE `student` CHANGE `id` `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT AFTER `column_name`;