I am working with a database and was told the table USERS has a single-column primary key, USERID. Querying ALL_CONS_COLUMNS gives me the following:
OWNER   | TABLE_NAME | CONSTRAINT_NAME | COLUMN_NAME | POSITION
--------+------------+-----------------+-------------+---------
MONSTER | USERS      | USER_ID_PK      | USERREF     | 2
MONSTER | USERS      | USER_ID_NN      | USERID      |
MONSTER | USERS      | USER_ID_PK      | USERID      | 1
This suggests the PK is a composite key comprising USERREF and USERID.
There are a few things I do not follow:
Why does USERID have both a NOT NULL constraint and also a PK constraint? PK by definition means NN.
Secondly, as USERID has a NN constraint, why does USERREF not have a NN constraint?
Thirdly, why is CONSTRAINT_NAME "suggesting" the PK is USER_ID (USERID), i.e., why is the constraint named after a single column; wouldn't USERS_PK be a better constraint name?
So, ultimately, is it a Composite PK or not?
So, ultimately, is it a Composite PK or not?
Yes. The primary key is defined on the two columns together. POSITION tells you the order of the columns: the USER_ID_PK primary key constraint is defined with USERID as the leading column, followed by USERREF.
Why does USERID have both a NOT NULL constraint and also a PK constraint?
Because someone created a NOT NULL constraint on the primary key column explicitly.
Let's test and see.
Single primary key
SQL> create table t(a number primary key);
Table created.
SQL> SELECT a.table_name,
2 b.column_name,
3 a.constraint_type,
4 b.position
5 FROM user_constraints a
6 JOIN user_cons_columns b
7 ON a.owner = b.owner
8 AND a.constraint_name = b.constraint_name
9 AND a.table_name = b.table_name
10 AND a.constraint_type IN ('P', 'C');
TABLE_NAME COLUMN_NAM C POSITION
---------- ---------- - ----------
T A P 1
Composite primary key
SQL> drop table t purge;
Table dropped.
SQL> create table t(a number, b number);
Table created.
SQL> alter table t add constraint t_pk primary key (a, b);
Table altered.
SQL> SELECT a.table_name,
2 b.column_name,
3 a.constraint_type,
4 b.position
5 FROM user_constraints a
6 JOIN user_cons_columns b
7 ON a.owner = b.owner
8 AND a.constraint_name = b.constraint_name
9 AND a.table_name = b.table_name
10 AND a.constraint_type IN ('P', 'C');
TABLE_NAME COLUMN_NAM C POSITION
---------- ---------- - ----------
T A P 1
T B P 2
Primary key and NOT NULL
SQL> drop table t purge;
Table dropped.
SQL> create table t(a number primary key not null);
Table created.
SQL> SELECT a.table_name,
2 b.column_name,
3 a.constraint_type,
4 b.position
5 FROM user_constraints a
6 JOIN user_cons_columns b
7 ON a.owner = b.owner
8 AND a.constraint_name = b.constraint_name
9 AND a.table_name = b.table_name
10 AND a.constraint_type IN ('P', 'C');
TABLE_NAME COLUMN_NAM C POSITION
---------- ---------- - ----------
T A C
T A P 1
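To apply the same check to your own table, you can query the data dictionary and keep only the primary key constraint. A sketch along these lines, using the standard ALL_CONSTRAINTS and ALL_CONS_COLUMNS views with the owner and table name from the question, should confirm whether the key is composite:
SELECT c.constraint_name,
       c.constraint_type,
       cc.column_name,
       cc.position
  FROM all_constraints c
  JOIN all_cons_columns cc
    ON cc.owner = c.owner
   AND cc.constraint_name = c.constraint_name
 WHERE c.owner = 'MONSTER'
   AND c.table_name = 'USERS'
   AND c.constraint_type = 'P'
 ORDER BY cc.position;
Two rows back (USERID at position 1, USERREF at position 2) mean the primary key really is composite; the NOT NULL constraint shows up separately because Oracle stores it as a check constraint (constraint_type = 'C').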
Related
I am new to this, so this may be silly, but...
I have 3 created tables:
create table faculty (id_faculty number(2) Primary Key,        -- id, 2 digits, primary key
  faculty_name varchar2(30) constraint f_name Not Null,        -- name, up to 30 symbols, not null
  dean varchar2(20),                                           -- dean, up to 20 symbols
  telephone varchar2(8));                                      -- number, up to 8 symbols

create table student (id_student number(3) Primary Key,        -- id, 3 digits, primary key
  student_name varchar(20) constraint s_name Not Null,         -- name, up to 20 symbols, not null
  student_surname varchar(20) constraint s_surname Not Null,   -- surname, up to 20 symbols, not null
  course_year number(1) constraint s_course_year Check(course_year > 0 and course_year <= 6), -- 1 digit, in range 1-6
  faculty_id number(2),                                        -- id, 2 digits
  constraint s_fkey Foreign Key (faculty_id) References faculty(id_faculty)); -- foreign key to faculty(id_faculty)

create table money (id_payout number(4) Primary Key,
  student_id number(3),
  constraint m_fkey Foreign Key (student_id) References student(id_student),
  payout_date date constraint p_date Not Null,
  stipend number(5,2) Check(stipend < 151),
  compensation number(5,2));
create table money (id_payout number(4) Primary Key,
student_id number(3),
constraint m_fkey Foreign key (student_ID) References student(ID_student),
payout_date date constraint p_date Not Null,
stipend number(5,2) Check(stipend<151),
compensation number(5,2));
Then I created a view:
create or replace view stud_stip AS
Select f.faculty_name, student_surname, student_name, s.course_year,
m.stipend, m.payout_date
from
faculty f inner join student s on f.id_faculty=s.faculty_id
inner join money m on m.student_id=s.id_student
where
m.payout_date = ( select distinct max(payout_date)
from money
where money.student_id=money.student_id)
with check option;
Then I wanted to add information to the view:
insert into stud_stip values ('Partikas tehnologiju', 'Lapa', 'Jana', 2, 120, '03.12.1998');
This is the error I got:
ORA-01779: cannot modify a column which maps to a non key-preserved table
I have googled all over the internet and have not solved my error; please don't point me to other topics about this question, because I have probably already read them.
I will be very grateful for any answers. Thank you in advance.
The way to do it is to create an INSTEAD OF trigger on the view which would, in turn, insert rows into the appropriate tables.
As all ID columns are primary keys and you didn't explain how you populate them, I'll do it using a sequence in the same INSTEAD OF trigger.
SQL> create sequence seqs;
Sequence created.
SQL> create or replace trigger trg_stud_stip
2 instead of insert on stud_stip
3 for each row
4 begin
5 insert into faculty (id_faculty, faculty_name)
6 values (seqs.nextval, :new.faculty_name);
7
8 insert into student (id_student, student_name, student_surname, course_year)
9 values (seqs.nextval, :new.student_name, :new.student_surname, :new.course_year);
10
11 insert into money (id_payout, stipend, payout_date)
12 values (seqs.nextval, :new.stipend, :new.payout_date);
13 end;
14 /
Trigger created.
Testing (don't insert strings into DATE datatype columns!):
SQL> insert into stud_stip
2 (faculty_name, student_surname, student_name, course_year, stipend, payout_date)
3 values
4 ('Partikas tehnologiju', 'Lapa', 'Jana', 2, 120, date '1998-12-03');
1 row created.
SQL> select * from faculty;
ID_FACULTY FACULTY_NAME DEAN TELEPHON
---------- ------------------------------ -------------------- --------
1 Partikas tehnologiju
SQL> select * from student;
ID_STUDENT STUDENT_NAME STUDENT_SURNAME COURSE_YEAR FACULTY_ID
---------- -------------------- -------------------- ----------- ----------
2 Jana Lapa 2
SQL> select * from money;
ID_PAYOUT STUDENT_ID PAYOUT_DAT STIPEND COMPENSATION
---------- ---------- ---------- ---------- ------------
3 03.12.1998 120
SQL>
It works. Now that you know how, improve it if necessary.
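One possible improvement, sketched below on the assumption that you also want the foreign key columns (student.faculty_id and money.student_id) populated, is to capture the generated IDs with RETURNING INTO and reuse them. Note it still creates a new faculty row on every insert; reusing an existing faculty by name is a further refinement left out here.
create or replace trigger trg_stud_stip
  instead of insert on stud_stip
  for each row
declare
  l_faculty_id  faculty.id_faculty%type;
  l_student_id  student.id_student%type;
begin
  -- create the faculty row and remember its id
  insert into faculty (id_faculty, faculty_name)
  values (seqs.nextval, :new.faculty_name)
  returning id_faculty into l_faculty_id;

  -- create the student row, pointing at the faculty just created
  insert into student (id_student, student_name, student_surname, course_year, faculty_id)
  values (seqs.nextval, :new.student_name, :new.student_surname, :new.course_year, l_faculty_id)
  returning id_student into l_student_id;

  -- create the payout row, pointing at the student just created
  insert into money (id_payout, student_id, stipend, payout_date)
  values (seqs.nextval, l_student_id, :new.stipend, :new.payout_date);
end;
/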
Table1
PK_Table1 | Name | DoYouGoToSchool | DoYouhaveACar | DoYouWorkFullTime | DoYouWorkPartTime | Score
1         | joe  | Yes             | Yes           | No                | Yes               |
2         | amy  | No              | Yes           | Yes               | No                |
Table2
PK_Table2 | Question          | Answer (bit) | Value
1         | DoYouGoToSchool   | True         | 3
2         | DoYouhaveACar     | True         | 2
3         | DoYouWorkFullTime | True         | 4
4         | DoYouWorkPartTime | True         | 2
Based on the information in Table2, what I need to do is UPDATE the Score column in Table1 by summing up the Value from Table2 for the answers each person has provided.
For example, I expect the Score column in Table1 to be 7 for record 1 and 5 for record 2.
Here is a query to play with
IF OBJECT_ID('tempdb..#Table2') IS NOT NULL DROP TABLE #Table2
GO
IF OBJECT_ID('tempdb..#Table1') IS NOT NULL DROP TABLE #Table1
GO
create table #Table1
(
PK_Table1 int,
Name Varchar(50),
DoYouGoToSchool Varchar(8),
DoYouhaveACar Varchar(8),
DoYouWorkFullTime Varchar(8),
DoYouWorkPartTime Varchar(8),
Score INT NULL,
)
create table #Table2
(
PK_Table2 int,
Questions Varchar(50),
Answer BIT NOT NULL DEFAULT(0),
VALUE INT NULL
)
INSERT INTO #Table1 (Name,DoYouGoToSchool,DoYouhaveACar,DoYouWorkFullTime,DoYouWorkPartTime)
VALUES ('joe','Yes','Yes','No','Yes'), ('amy','NO','Yes','Yes','No')
INSERT INTO #Table2(Questions,Answer,VALUE)
VALUES ('DoYouGoToSchool','True',3 ),('DoYouhaveACar','True',2 ),('DoYouWorkFullTime','True',4 ),('DoYouWorkPartTime','True',2 )
This is what is missing from the answer below that tells you to create a new FK constraint on Table2 -- inserting data into the table with the new FK column:
insert into #Table2 (FK_Table1, Questions, Answer)
select t.PK_Table1,
       t1.cols,
       case when t1.colsval = 'Yes' then 1 else 0 end   -- Answer is a BIT column
from #Table1 t
cross apply (values (PK_Table1, 'DoYouGoToSchool',   DoYouGoToSchool),
                    (PK_Table1, 'DoYouhaveACar',     DoYouhaveACar),
                    (PK_Table1, 'DoYouWorkFullTime', DoYouWorkFullTime),
                    (PK_Table1, 'DoYouWorkPartTime', DoYouWorkPartTime)
            ) t1 (PK_Table1, cols, colsval);
First create a relation between these two tables by adding the primary key of Table1 to Table2 as a foreign key, so your Table2 becomes:
Table2 columns:
FK_Table1 | PK_Table2 | Question          | Answer (bit) | Value
1         | 1         | DoYouGoToSchool   | True         | 3
1         | 2         | DoYouhaveACar     | True         | 2
1         | 3         | DoYouWorkFullTime | True         | 4
1         | 4         | DoYouWorkPartTime | True         | 2
You can add the column with this query:
ALTER TABLE Table2
ADD FK_Table1 INTEGER,
    FOREIGN KEY (FK_Table1) REFERENCES Table1 (PK_Table1);
The FK_Table1 value of 1 in the rows above means those answers belong to the person whose PK_Table1 = 1.
Then you can extract his score from this query:
SELECT Sum(Value) FROM Table2 WHERE FK_Table1 = 1;
And then update query:
UPDATE Table1
SET score = (enter here the returned score from above query)
WHERE PK_Table1 = 1;
Or you can do in a single query like this:
UPDATE Table1
SET score = (SELECT Sum(Value) FROM Table2 WHERE FK_Table1 = 1)
WHERE PK_Table1 = 1;
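If you want to score every person in one go rather than one PK_Table1 at a time, a correlated version of the same idea should work (a sketch, assuming the FK_Table1 column added above):
UPDATE t1
SET Score = (SELECT SUM(t2.Value)
             FROM Table2 t2
             WHERE t2.FK_Table1 = t1.PK_Table1)
FROM Table1 t1;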
You will need to add another table. This table will be your relational table; it can be called Table1_Table2 and has three columns. The first column is the primary key of the relational table, the second is the primary key of Table1, and the third is the primary key of Table2.
When an instance of Table2 occurs that relates to Table1, insert a record into Table1_Table2 that ties the two tables together through each other's primary keys. A query against the relational table Table1_Table2 then lets you sum the relationships (a table definition sketch follows the example below).
Table1_Table2
| PK | PK_Table1 | PK_Table2 |
| 1  | 1         | 1         |
| 2  | 1         | 3         |
| 3  | 2         | 1         |
| 4  | 2         | 4         |
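A minimal definition of that relational table might look like this (a sketch; the names follow the example above):
CREATE TABLE Table1_Table2
(
    PK        INT IDENTITY(1, 1) PRIMARY KEY,
    PK_Table1 INT NOT NULL FOREIGN KEY REFERENCES Table1 (PK_Table1),
    PK_Table2 INT NOT NULL FOREIGN KEY REFERENCES Table2 (PK_Table2)
);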
As we can see, we can now perform an update on Table1
UPDATE Table1
SET Score = (SELECT SUM(B.Value)
             FROM Table2 B
             INNER JOIN Table1_Table2 C ON C.PK_Table2 = B.PK_Table2
             WHERE C.PK_Table1 = Table1.PK_Table1);
I'm trying to add values in a junction table of a many to many relationship.
Tables look like these (all IDs are integers):
Table A
+------+----------+
| id_A | ext_id_A |
+------+----------+
| 1 | 100 |
| 2 | 101 |
| 3 | 102 |
+------+----------+
Table B is conceptually similar
+------+----------+
| id_B | ext_id_B |
+------+----------+
| 1 | 200 |
| 2 | 201 |
| 3 | 202 |
+------+----------+
The tables' PKs are id_A and id_B, and the columns in my junction table are FKs to those columns, but I have to insert values having only the external ids (ext_id_A, ext_id_B).
The external IDs are unique columns (and therefore 1:1 with the table's own id), so given an ext_id I can look up the exact row and get the id I need to insert into the junction table.
This is an example of what I've done so far, but it doesn't look like an optimized SQL statement:
-- Example table I receive with test values
declare #temp as table (
ext_id_a int not null,
ext_id_b int not null
);
insert into #temp values (100, 200), (101, 200), (101, 201);
--Insertion - code from my sp
declare #final as table (
id_a int not null,
id_b int not null
);
insert into #final
select a.id_a, b.id_b
from #temp as t
inner join table_a a on a.ext_id_a = t.ext_id_a
inner join table_b b on b.ext_id_b = t.ext_id_b
merge into junction_table as jt
using #final as f
on f.id_a = jt.id_a and f.id_b = jt.id_b
when not matched by target then
insert (id_a, id_b) values (f.id_a, f.id_b);
I was thinking about a MERGE statement since my stored procedure receives the data in a table-valued parameter and I also have to check for already existing references.
Is there anything I can do to improve the insertion of these values?
No need to use the #final table variable:
; with cte as (
select tA.id_A, tB.id_B
from #temp t
join table_A tA on t.ext_id_a = tA.ext_id_A
join table_B tB on t.ext_id_B = tB.ext_id_B
)
merge into junction_table
using cte
on cte.id_A = junction_table.id_A and cte.id_B = junction_table.id_B
when not matched by target then
insert (id_A, id_B) values (cte.id_A, cte.id_B);
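If you'd rather avoid MERGE altogether, a plain INSERT ... SELECT with a NOT EXISTS check gives the same "insert only the missing pairs" behaviour (a sketch using the same table names):
insert into junction_table (id_A, id_B)
select tA.id_A, tB.id_B
from #temp t
join table_A tA on t.ext_id_a = tA.ext_id_A
join table_B tB on t.ext_id_B = tB.ext_id_B
where not exists (select 1
                  from junction_table jt
                  where jt.id_A = tA.id_A
                    and jt.id_B = tB.id_B);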
Consider this datatable :
word wordCount documentId
---------- ------- ---------------
Ball 10 1
School 11 1
Car 4 1
Machine 3 1
House 1 2
Tree 5 2
Ball 4 2
I want to insert these data into two tables with this structure :
Table WordDictionary
(
Id int,
Word nvarchar(50),
DocumentId int
)
Table WordDetails
(
Id int,
WordId int,
WordCount int
)
FOREIGN KEY (WordId) REFERENCES WordDictionary(Id)
But because I have thousands of records in the initial table, I have to do this in just a single transaction (a batch query); for example, using a bulk insert could help me achieve this.
But the question here is how I can separate this data into these two tables WordDictionary and WordDetails.
For more details :
Final result must be like this :
Table WordDictionary:
Id word
---------- -------
1 Ball
2 School
3 Car
4 Machine
5 House
6 Tree
and table WordDetails :
Id wordId WordCount DocumentId
---------- ------- ----------- ------------
1 1 10 1
2 2 11 1
3 3 4 1
4 4 3 1
5 5 1 2
6 6 5 2
7 1 4 2
Notice:
The words in the source can be duplicated, so I must check whether a word already exists in WordDictionary before inserting into these tables; if a word is already in WordDictionary, that existing word's Id must be inserted into WordDetails (see the word Ball).
Finally, the million-dollar problem: this insertion must be done as fast as possible.
If you're looking to just load the tables for the first time, without any updates to them over time, you could potentially do it this way (note that SELECT ... INTO creates the target tables, so this assumes they do not exist yet):
You can put all of the distinct words from the datatable into the WordDictionary table first (generating the Id values the next step relies on):
SELECT ROW_NUMBER() OVER (ORDER BY word) AS Id, word
INTO WordDictionary
FROM (SELECT DISTINCT word FROM datatable) AS d;
Then after you populate your WordDictionary you can then use the ID values from it and the rest of the information from datatable to load your WordDetails table:
SELECT WD.Id as wordId, DT.wordCount as WordCount, DT.documentId AS DocumentId
INTO WordDetails
FROM datatable as DT
INNER JOIN WordDictionary AS WD ON WD.word = DT.word
There is a little discrepancy between the declared table schema and your example data, but it is resolved below:
1) Setup
-- this the table with the initial data
-- drop table DocumentWordData
create table DocumentWordData
(
Word NVARCHAR(50),
WordCount INT,
DocumentId INT
)
GO
-- these are result table with extra information (identity, primary key constraints, working foreign key definition)
-- drop table WordDictionary
create table WordDictionary
(
Id int IDENTITY(1, 1) CONSTRAINT PK_WordDictionary PRIMARY KEY,
Word nvarchar(50)
)
GO
-- drop table WordDetails
create table WordDetails
(
Id int IDENTITY(1, 1) CONSTRAINT PK_WordDetails PRIMARY KEY,
WordId int CONSTRAINT FK_WordDetails_Word REFERENCES WordDictionary,
WordCount int,
DocumentId int
)
GO
2) The actual script to put data in the last two tables
begin tran
-- this is to make sure that if anything in this block fails, then everything is automatically rolled back
set xact_abort on
-- the dictionary is obtained by considering all distinct words
insert into WordDictionary (Word)
select distinct Word
from DocumentWordData
-- details are generating from initial data joining the word dictionary to get word id
insert into WordDetails (WordId, WordCount, DocumentId)
SELECT W.Id, DWD.WordCount, DWD.DocumentId
FROM DocumentWordData DWD
JOIN WordDictionary W ON W.Word = DWD.Word
commit
-- just to test the results
select * from WordDictionary
select * from WordDetails
I expect this script to run very fast, if you do not have a very large number of records (millions at most).
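If the dictionary does grow large, the join in the second INSERT benefits from an index on the word column; something like this, created before loading WordDetails (the index name is just illustrative):
CREATE INDEX IX_WordDictionary_Word ON WordDictionary (Word);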
This is the query; I'm using temp tables to be able to test it.
If you use the two CTEs below, you'll be able to generate the final result.
1. Setting up sample data for the test.
create table #original (word varchar(10), wordCount int, documentId int)
insert into #original values
('Ball', 10, 1),
('School', 11, 1),
('Car', 4, 1),
('Machine', 3, 1),
('House', 1, 2),
('Tree', 5, 2),
('Ball', 4, 2)
2. Use cte1 and cte2. In your real database, you need to replace #original with the actual table that holds all your initial records.
;with cte1 as (
select ROW_NUMBER() over (order by word) Id, word
from #original
group by word
)
select * into #WordDictionary
from cte1
;with cte2 as (
select ROW_NUMBER() over (order by #original.word) Id, Id as wordId,
#original.word, #original.wordCount, #original.documentId
from #WordDictionary
inner join #original on #original.word = #WordDictionary.word
)
select * into #WordDetails
from cte2
select * from #WordDetails
This will be the data in #WordDetails:
+----+--------+---------+-----------+------------+
| Id | wordId | word | wordCount | documentId |
+----+--------+---------+-----------+------------+
| 1 | 1 | Ball | 10 | 1 |
| 2 | 1 | Ball | 4 | 2 |
| 3 | 2 | Car | 4 | 1 |
| 4 | 3 | House | 1 | 2 |
| 5 | 4 | Machine | 3 | 1 |
| 6 | 5 | School | 11 | 1 |
| 7 | 6 | Tree | 5 | 2 |
+----+--------+---------+-----------+------------+
I am trying to update a lead table to assign a random person from a lookup table. Here is the generic schema:
TableA (header),
ID int,
name varchar (30)
TableB (detail),
ID int,
fkTableA int, (foreign key to TableA.ID)
recordOwner varchar(30) null
other detail columns..
TableC (owners),
ID int,
fkTableA int (foreign key to TableA.ID)
name varchar(30)
TableA has 10 entries, one for each type of sales lead pool. TableB has thousands of entries for each row in TableA. I want to assign the correct recordOwners from TableC to an even number of rows each (or as close as I can get). TableC will have anywhere from one up to 10 entries for each row in TableA.
Can this be done in one statement? It doesn't have to be. I can't seem to figure out the best approach for speed. Any thoughts or samples are appreciated.
Updated:
TableA has a one-to-many relationship with TableC. For every record of TableA, TableC will have at least one row, which represents an owner that will need to be assigned to a row in TableB.
TableA
int name
1 LeadSourceOne
2 LeadSourceTwo
TableC
int(id) int(fkTableA) varchar(name)
1 1 Tom
2 1 Bob
3 2 Timmy
4 2 John
5 2 Steve
6 2 Bill
TableB initial data
int(id) int(fkTableA) varchar(recordOwner) (other detail columns)
1 1 NULL ....
2 1 NULL ....
3 1 NULL ....
4 2 NULL ....
5 2 NULL ....
6 2 NULL ....
7 2 NULL ....
8 2 NULL ....
9 2 NULL ....
TableB end result
int(id) int(fkTableA) varchar(recordOwner) (other detail columns)
1 1 TOM ....
2 1 BOB ....
3 1 TOM ....
4 2 TIMMY ....
5 2 JOHN ....
6 2 STEVE ....
7 2 BILL ....
8 2 TIMMY ....
9 2 BILL ....
Basically I need to randomly assign a record from tableC to tableB based on the relationship to tableA.
UPDATE TabB
SET name = (SELECT TOP 1 coalesce(TabC.name, '')
            FROM TabC
            INNER JOIN TabA ON TabC.idA = TabA.id
            WHERE TabA.id = TabB.idA);
Should work, but it's not tested.
Try this:
UPDATE TableB
SET recordOwner = (SELECT TOP(1) [name]
FROM TableC
ORDER BY NEWID())
WHERE recordOwner IS NULL
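Note that, as written, this picks an owner from the whole of TableC regardless of which lead pool the row belongs to. If the owner must come from the matching TableA group, a correlated CROSS APPLY is one way to do it (a sketch; an even spread of owners is still not guaranteed):
UPDATE b
SET recordOwner = x.[name]
FROM TableB b
CROSS APPLY (SELECT TOP (1) c.[name]
             FROM TableC c
             WHERE c.fkTableA = b.fkTableA
             ORDER BY NEWID()) x
WHERE b.recordOwner IS NULL;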
I ended up looping through and updating x percent of the detail records, based on how many owners I had. The end result is something like this:
declare @percentUpdate numeric(8,2)
declare @userFullName varchar(30)

create table #tb_owners(userId varchar(30), processed bit)

insert into #tb_owners(
    userId,
    processed)
select userId = name,
       processed = 0
from tableC
where fkTableA = 1

-- use 100.0 so integer division does not truncate the percentage
select @percentUpdate = cast(100.0 / count(*) as numeric(8,2))
from #tb_owners

while exists(select 1 from #tb_owners o where o.processed = 0)
begin
    -- pick a random owner that has not been processed yet
    select top 1
           @userFullName = o.userId
    from #tb_owners o
    where o.processed = 0
    order by newid()

    -- give that owner a roughly even share of the unassigned detail rows
    update ptbpd
    set recordOwner = @userFullName
    from tableB ptbpd
    inner join (
        select top (@percentUpdate) percent
               id
        from tableB
        where recordOwner is null
        order by newid()
    ) nd on (ptbpd.id = nd.id)

    update #tb_owners
    set processed = 1
    where userId = @userFullName
end

-- there may be some left over, set to last person used
update tableB
set recordOwner = @userFullName
where recordOwner is null