I have two fields (Utter and Misery) in table MassConfusion in database A that I need to move to two fields (also named Utter and Misery) in a table also called MassConfusion in database B. There are two keys (Primary and SubKey) that keep this data sorted correctly with the rest of the information in databases A and B.
(Basically, we somehow lost most of the information in the two fields and are trying to recover it from an old copy of our database; all of the easy methods of restoration have not worked.)
I am a total newbie at scripting in SQL, so I am pleading: HELP! Thanks in advance.
-- Note: rows in B with no match in A will have these columns set to NULL; see the join versions below.
UPDATE b
SET Utter = (SELECT a.Utter FROM A.dbo.MassConfusion a WHERE a.PrimeKey = b.PrimeKey)
FROM B.dbo.MassConfusion b

UPDATE b
SET Misery = (SELECT a.Misery FROM A.dbo.MassConfusion a WHERE a.PrimeKey = b.PrimeKey)
FROM B.dbo.MassConfusion b
You may be better off inserting into a new table, though, depending on the number of records and how messed up they are. UPDATE can be slow and expensive depending on how many indexes you have, etc.
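For instance, here is a minimal sketch of that insert-into-a-new-table approach, run from database B (the MassConfusion_Fixed name and the exact column list are assumptions; adjust to your real schema):

SELECT b.PrimeKey, a.Utter, a.Misery  -- plus whatever other columns you need from B
INTO dbo.MassConfusion_Fixed
FROM dbo.MassConfusion b
INNER JOIN A.dbo.MassConfusion a
    ON a.PrimeKey = b.PrimeKey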
I wasn't clear on exactly what the primary key for your MassConfusion table was. The first version assumes that the primary key for MassConfusion is just Primary. If the primary key is actually a composite of Primary and SubKey, then use the second version.
Version 1: Primary key consists of one column
/* Just to make it clear that this is run from Database B */
Use B
go
update MCB
set Utter = MCA.Utter,
    Misery = MCA.Misery
from MassConfusion MCB
inner join A.dbo.MassConfusion MCA
    on MCB.[Primary] = MCA.[Primary]  -- PRIMARY is a reserved word, so bracket it
Version 2: Primary key is a composite of two columns
/* Just to make it clear that this is run from Database B */
Use B
go
update MCB
set Utter = MCA.Utter,
    Misery = MCA.Misery
from MassConfusion MCB
inner join A.dbo.MassConfusion MCA
    on MCB.[Primary] = MCA.[Primary]
    and MCB.SubKey = MCA.SubKey
I have two tables, Engineering and Electrical. Work is done in the Engineering table, then the Electrical team starts work after that. They share some of the same columns. Those columns are
Tag
Service Description
Horsepower
RPM
Project Number
I want to create an AFTER UPDATE trigger so that when the Tag column gets filled in on the Electrical table and that value matches one of the Tag values in the Engineering table, the four other shared columns are automatically copied from the Engineering table into the corresponding columns of the Electrical table.
Below is what I tried, which obviously doesn't work:
CREATE TRIGGER [dbo].[tr_Electrial_Update]
ON [dbo].[ENGINEERING]
AFTER UPDATE
AS
BEGIN
INSERT INTO ELECTRICAL ([ICM_SERVICE_DESCRIPTION_],[PROJECT_NUMBER_], [ICM_POWER_HP_], [ICM_POWER_KW_], [ICM_RPM_])
SELECT
i.[ICM_SERVICE_DESCRIPTION_], i.[PROJECT_NUMBER_],
i.[ICM_POWER_HP_], i.[ICM_POWER_KW_], i.[ICM_RPM_]
FROM
ENGINEERING m
JOIN
inserted i ON i.[TAG_] = m.[TAG_]
END
I'm someone trying to teach myself SQL on the fly, so be kind. As always, I'm very appreciative of any help.
From your post, I'm assuming you already have an entry in the Electrical table, and its Tag column gets updated from NULL to some other value. This syntax is for SQL Server - you didn't explicitly specify which RDBMS you're using, but it looks like SQL Server to me. If it's not, adapt as needed.
Assuming you have only a single row in Engineering that matches that Tag value, you can do something like this. It has to be an UPDATE statement, since you already have a row in Electrical: you want to update some columns, not insert a completely new row:
CREATE TRIGGER [dbo].[tr_Electrical_Update]
ON [dbo].[Electrical]
AFTER UPDATE
AS
BEGIN
    IF UPDATE(Tag)
        UPDATE dbo.Electrical
        SET [ICM_SERVICE_DESCRIPTION_] = eng.[ICM_SERVICE_DESCRIPTION_],
            [PROJECT_NUMBER_] = eng.[PROJECT_NUMBER_],
            [ICM_POWER_HP_] = eng.[ICM_POWER_HP_],
            [ICM_POWER_KW_] = eng.[ICM_POWER_KW_],
            [ICM_RPM_] = eng.[ICM_RPM_]
        FROM Inserted i
        INNER JOIN dbo.Engineering eng ON i.Tag = eng.Tag
        WHERE Electrical.Tag = i.Tag;
END
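As a quick sanity check, an update like this should fire the trigger and pull the Engineering values across (the 'P-101' value is made up; use a Tag that actually exists in Engineering):

UPDATE dbo.Electrical
SET Tag = 'P-101'   -- made-up tag; must match a row in Engineering
WHERE Tag IS NULL;  -- i.e. the rows the Electrical team hasn't tagged yet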
I have an ETL process (CSV to SQL database) that runs daily, but the data in the source sometimes changes, so I want to have it run again the next day with an updated file.
How do I write a SQL statement to find all the differences?
For example, let's say Table_1 has a composite PRIMARY KEY consisting of FK_1, FK_2 and FK_3.
Do I do this in SQL or in the ETL process?
Thanks.
Edit
I realize now this question is too broad. Disregard.
You can use EXCEPT to find which IDs are missing. For example:
SELECT FK_1, FK_2, FK_3
FROM new_data_table
EXCEPT
SELECT FK_1, FK_2, FK_3
FROM current_data_table;
It will be better (from a performance perspective) to materialize these IDs and then join that new table to new_data_table in order to insert all of the columns, as in the sketch below.
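Here is a minimal sketch of that materialize-then-join approach, assuming SQL Server (the #missing_keys temp table name is my own):

SELECT FK_1, FK_2, FK_3
INTO #missing_keys
FROM new_data_table
EXCEPT
SELECT FK_1, FK_2, FK_3
FROM current_data_table;

INSERT INTO current_data_table
SELECT n.*
FROM new_data_table n
INNER JOIN #missing_keys m
    ON n.FK_1 = m.FK_1
    AND n.FK_2 = m.FK_2
    AND n.FK_3 = m.FK_3;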
If you need to do this in one query, you can use a simple LEFT JOIN. For example:
INSERT INTO current_data_table
SELECT A.*
FROM new_data_table A
LEFT JOIN current_data_table B
ON A.FK_1 = B.FK_1
AND A.FK_2 = B.FK_2
AND A.FK_3 = B.FK_3
WHERE B.[FK_1] IS NULL;
The idea is to get all records in new_data_table for which there is no match in current_data_table (WHERE B.[FK_1] IS NULL).
I've been reading documentation and looking at FAQs and haven't found an answer for this one, which probably means it can't be done. My actual situation is a little more complex, but I'll try to simplify it for this question. For each of the past years, I have header/detail tables with a foreign key linking them. The year datum is in the header records! I want to be able to query all tables concatenated across years.
I have set up views that follow a 'SELECT + UNION ALL' format. I've also put check constraints on the header tables to restrict their values to their respective year. This allows the SQL Server query optimizer to only query specific tables when running a query that is restricted with a WHERE clause. Awesome. Up to this point, this information can be found anywhere and everywhere by searching for Partitioned Views.
I want to do the same sort of query optimization with the detail tables but can't figure it out. There is nothing in the detail record that indicates what year it belongs to without joining with the header record; meaning, the foreign key constraint is the only constraint I have to go on.
The only solution I've thought of is adding a 'year' column to the detail tables and then adding another WHERE subclause to the queries. Is there anything I can do to create a partitioned view of the detail tables using the existing foreign key constraint?
Here is some DDL for reference:
CREATE TABLE header2008 (
hid INT PRIMARY KEY,
dt DATE CHECK ('2008-01-01' <= dt AND dt < '2009-01-01')
)
CREATE TABLE header2009 (
hid INT PRIMARY KEY,
dt DATE CHECK ('2009-01-01' <= dt AND dt < '2010-01-01')
)
CREATE TABLE detail2008 (
did INT PRIMARY KEY,
hid INT FOREIGN KEY REFERENCES header2008(hid),
value INT
)
CREATE TABLE detail2009 (
did INT PRIMARY KEY,
hid INT FOREIGN KEY REFERENCES header2009(hid),
value INT
)
GO
CREATE VIEW headerAll AS
SELECT * FROM header2008 UNION ALL
SELECT * FROM header2009
GO
CREATE VIEW detailAll AS
SELECT * FROM detail2008 UNION ALL
SELECT * FROM detail2009
GO
--This only hits the header2008 table (GOOD)
SELECT *
FROM headerAll h
WHERE dt = '2008-04-04'
--This hits the header2008, detail2008, and detail2009 tables. (BAD)
SELECT *
FROM headerAll h
INNER JOIN detailAll d ON h.hid = d.hid
WHERE dt = '2008-04-04'
Since you're not going for partitioned tables, I'm assuming you can't target SQL Server 2005 Enterprise Edition or higher.
Here is an alternative to adding a new physical column to your tables:
CREATE VIEW detailAll AS
SELECT 2008 AS Year, * FROM detail2008
UNION ALL
SELECT 2009, * FROM detail2009
then,
SELECT *
FROM headerAll h
INNER JOIN detailAll d ON h.hid = d.hid
WHERE dt = '2008-04-04' AND d.Year = 2008
Before you run off and implement this, there is a catch; well, two catches actually.
This solution, like the headerAll view as it's written, cannot accommodate parameters on the partitioning column and still do partition elimination. Using a search predicate of WHERE dt = @date AND d.Year = YEAR(@date) causes table scans across all tables in both views, because the query optimizer assumes @date is an arbitrary value (and there's no way to fix that). This is a recipe for a performance disaster if the view is exposed publicly in your database API: there is no restriction on parameterization in queries, and most query authors and ORMs use parameterized queries wherever possible (which is almost always a good thing!).
To get the views to do partition elimination in a real application, you will have to resort to dynamic string execution. How you accomplish this will depend on your business requirements, data requirements, and application architecture. It will be a bit trickier if you're grabbing data from multiple years.
Note also that using dynamic string execution would allow you to write queries directly against the base tables instead of introducing a UNIONed view for each "table". I don't think there's anything wrong with the latter, but this is an option you may not have considered.
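For illustration, here is a hedged sketch of the dynamic-string approach: the year is spliced into the statement as a literal (so the optimizer can eliminate partitions), while the date itself remains a proper parameter:

DECLARE @date DATE = '2008-04-04';
DECLARE @sql NVARCHAR(MAX) = N'
    SELECT *
    FROM headerAll h
    INNER JOIN detailAll d ON h.hid = d.hid
    WHERE h.dt = @dt AND d.Year = ' + CAST(YEAR(@date) AS NVARCHAR(10)) + N';';

EXEC sp_executesql @sql, N'@dt DATE', @dt = @date;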
I have a database infrastructure where we are regularly (at least once a day) replicating the full content of tables from a source database to approximately 20 target databases. Due to the replication code in use (we have to use regular Oracle queries, with no control over, or direct access to, the source database), this results in 20 full-table sorts of the source table.
Is there any way to optimize for this in the query? I'm looking for something that would basically tell Oracle, "I'm going to be repeatedly sorting this entire table." MySQL had an option with myisamchk where you could tell it to sort a table and keep it in sorted order, but obviously that wouldn't apply here, for multiple reasons.
Currently, there are also some intermediate tables involved (sync from A to B, then from B to C.) We do have control over the intermediate tables, so if there are tuning options there, that would be useful as well.
Generally, the queries are almost all of the very simplistic form:
select a, b, c, d, e, ... z from tbl1 order by a, b, c, d, e, ... z;
I'm aware of Streams, but as described above, the primary source tables are outside of our control, so we won't be able to use Streams there. (Additionally, those source tables are rebuilt completely from a snapshot daily, so Streams wouldn't really work anyway.)
You could look into the multi-table INSERT feature. It should perform a single FULL SCAN and insert into multiple tables. Consider (10gR2):
SQL> CREATE TABLE t1 (ID NUMBER);
Table created
SQL> CREATE TABLE t2 (ID NUMBER);
Table created
SQL> INSERT ALL
2 INTO t1 VALUES (d_id)
3 INTO t2 VALUES (d_id)
4 /* your select goes here */
5 SELECT ROWNUM d_id FROM dual d CONNECT BY LEVEL <= 5;
10 rows inserted
SQL> SELECT COUNT(*) FROM t1;
COUNT(*)
----------
5
SQL> SELECT COUNT(*) FROM t2;
COUNT(*)
----------
5
You will have to check if it works over database links.
Some things that would help the sorting issue are indexes on the columns that you are sorting on (and also joining the tables on, if they're not there already). You could also create materialized views which are already sorted, and Oracle would keep the sorted results cached; see the sketch below.
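A minimal sketch of the materialized-view idea, using the column names from the sample query (the view name is an assumption; note that Oracle applies the ORDER BY only when the materialized view is first created, not on subsequent refreshes):

CREATE MATERIALIZED VIEW tbl1_sorted
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT a, b, c, d, e
FROM tbl1
ORDER BY a, b, c, d, e;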
You don't say exactly how the replication is done or the data volumes involved (or why you are sorting the data).
If the aim is to minimise the impact on the source database, your best bet may be to extract into an intermediate file and load the file into the destination databases. The sort could be done on the intermediate file (if plain text), or as part of either the export or import into the destination databases.
In the source database:
create table export_emp_info
organization external
( type oracle_datapump
default directory DATA_PUMP_DIR
location ('emp.dmp')
) as select emp_id, emp_name, dept_id from emp order by dept_id
/
Copy the file, then import it in the destination database:
create table import_emp_info
(EMP_ID NUMBER(12),
EMP_NAME VARCHAR2(100),
DEPT_ID NUMBER)
organization external
( type oracle_datapump
default directory DATA_PUMP_DIR
location ('emp.dmp')
)
/
insert into emp_info select * from import_emp_info;
If you don't want or can't have the external table on the source db, you can use a straight expdp of the emp table (possibly using NETWORK_LINK if you have limited access to the source database directory structure) and QUERY to do the ordering.
You could load data from source table A into an intermediate table B and then do a partition exchange between B and destination table C. Exact replication, no sorting involved. A sketch follows.
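For example (a hypothetical sketch: it assumes C is partitioned and that B matches the shape of the partition being swapped):

ALTER TABLE c
    EXCHANGE PARTITION p_current
    WITH TABLE b
    INCLUDING INDEXES
    WITHOUT VALIDATION;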
This I/U/D form of replication is what the MERGE command is there for. It's very doubtful that an expensive sort-merge would be required, and I'd expect to see hash joins instead. As long as the hash table can be stored in memory the hash join is barely more expensive than scanning the tables.
A handy optimisation is to store a hash value based on the non-key attributes, so that you can join between source and target tables on the key column(s) and compare small hash values instead of the full set of columns - change detection made easy.
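A minimal sketch of that hash idea combined with MERGE (Oracle; the table and column names are assumptions, and ORA_HASH stands in for whatever hash function you prefer):

MERGE INTO emp_target t
USING (
    SELECT emp_id, emp_name, dept_id,
           ORA_HASH(emp_name || '|' || dept_id) AS row_hash  -- hash of the non-key attributes
    FROM emp_source
) s
ON (t.emp_id = s.emp_id)
WHEN MATCHED THEN UPDATE
    SET t.emp_name = s.emp_name,
        t.dept_id  = s.dept_id,
        t.row_hash = s.row_hash
    WHERE t.row_hash <> s.row_hash  -- only touch rows that actually changed
WHEN NOT MATCHED THEN INSERT (emp_id, emp_name, dept_id, row_hash)
    VALUES (s.emp_id, s.emp_name, s.dept_id, s.row_hash);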
Sorry if the title is poorly descriptive, but I can't do better right now =(
So, I have this master-detail scheme, with the detail being a tree structure (a one-to-many self relation) with n levels (on SQL Server 2005).
I need to copy a detail structure from one master to another using a stored procedure, by passing the source master id and the target master id as parameters (the target is new, so it doesn't have details).
I'm having trouble, and I'm asking for your kind help in finding a way to keep track of parent ids and insert the children without using cursors or nasty things like that...
This is a sample model, of course, and what I'm trying to do is copy the detail structure from one master to another. In fact, I'm creating a new master using an existing one as a template.
If I understand the problem, this might be what you want:
INSERT dbo.Master VALUES (@NewMaster_ID, @NewDescription)

INSERT dbo.Detail (parent_id, master_id, [name])
SELECT detail_ID, @NewMaster_ID, [name]
FROM dbo.Detail
WHERE master_id = @OldMaster_ID

UPDATE NewChild
SET parent_id = NewParent.detail_id
FROM dbo.Detail NewChild
JOIN dbo.Detail OldChild
    ON NewChild.parent_id = OldChild.detail_id
JOIN dbo.Detail NewParent
    ON NewParent.parent_id = OldChild.parent_ID
WHERE NewChild.master_id = @NewMaster_ID
    AND NewParent.master_id = @NewMaster_ID
    AND OldChild.master_id = @OldMaster_ID
The trick is to use the old detail_id as the new parent_id in the initial insert. Then join back to the old set of rows using this relationship, and update the new parent_id values.
I assumed that detail_id is an IDENTITY value. If you assign them yourself, you'll need to provide details, but there's a similar solution.
You'll have to provide CREATE TABLE and INSERT INTO statements for a little sample data, plus the expected results based on that sample data.