I have a view that looks similar to this:
SELECT dbo.Staff.StaffId, dbo.Staff.StaffName, dbo.StaffPreferences.filter_type
FROM dbo.Staff
LEFT OUTER JOIN dbo.StaffPreferences
    ON dbo.Staff.StaffId = dbo.StaffPreferences.StaffId
I'm trying to update StaffPreferences.filter_type using:
UPDATE vw_Staff SET filter_type=1 WHERE StaffId=25
I have read this in an MSDN article:
Any modifications, including UPDATE, INSERT, and DELETE statements,
must reference columns from only one base table.
Does this mean that I can only update fields in dbo.Staff (which is all I can currently achieve)? In this context, does the definition of 'base table' not extend to any subsequently joined tables?
Your statement should work just fine, since you are only modifying column(s) from one table (StaffPreferences).
If you tried to update columns from different tables in the same UPDATE statement, you would get an error:
Msg 4405, Level 16, State 1, Line 7
View or function 'v_ViewName' is not updatable because the modification affects multiple base tables.
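For example, a hypothetical statement like the following, which modifies columns from both base tables through the view, would fail with exactly that error:

UPDATE vw_Staff SET StaffName = 'New name', filter_type = 1 WHERE StaffId = 25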
The rules for updatable join views are as follows:
General Rule
Any INSERT, UPDATE, or DELETE operation on a join view can modify only
one underlying base table at a time.
UPDATE Rule
All updatable columns of a join view must map to columns of a
key-preserved table. See "Key-Preserved Tables" for a discussion of
key-preserved tables. If the view is defined with the WITH CHECK
OPTION clause, then all join columns and all columns of repeated
tables are non-updatable.
DELETE Rule
Rows from a join view can be deleted as long as there is exactly one
key-preserved table in the join. If the view is defined with the WITH
CHECK OPTION clause and the key preserved table is repeated, then the
rows cannot be deleted from the view.
INSERT Rule
An INSERT statement must not explicitly or implicitly refer to the
columns of a non-key-preserved table. If the join view is defined
with the WITH CHECK OPTION clause, INSERT statements are not
permitted.
http://download.oracle.com/docs/cd/B10501_01/server.920/a96521/views.htm#391
I think you can see some of the problems that might occur if there's a row in Staff with StaffId 25, but no matching row in StaffPreferences. There are various right things you could do (preserve the appearance that this is a table and perform an insert into StaffPreferences; reject the update; etc.).
I think at this point the SQL Server engine will give up, and you'll have to write a trigger (an INSTEAD OF trigger on the view) that implements whatever behaviour you want; you need to consider all of the cases for the join matching or not matching.
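For illustration, here is a minimal sketch of such an INSTEAD OF trigger, assuming you want update-or-insert semantics (the trigger name is invented, and the exact column list of StaffPreferences is an assumption):

CREATE TRIGGER trg_vw_Staff_Update ON dbo.vw_Staff
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Update preference rows that already exist
    UPDATE p
    SET p.filter_type = i.filter_type
    FROM dbo.StaffPreferences AS p
    JOIN inserted AS i ON i.StaffId = p.StaffId;
    -- Insert a preference row where none exists yet
    INSERT INTO dbo.StaffPreferences (StaffId, filter_type)
    SELECT i.StaffId, i.filter_type
    FROM inserted AS i
    WHERE NOT EXISTS (SELECT 1 FROM dbo.StaffPreferences AS p
                      WHERE p.StaffId = i.StaffId);
END;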
Here is how I solved it.
In my case it was a table, not a view, but I needed to find the schema ID that referenced the table in the data construction, in a reference table, say called our_schema.
I ran the following:
select schemaid from our_schema where name = 'MY:Form'
This gave me the ID as 778 (for example).
Then I looked where this ID was showing up with a prefix of T, B, or H.
In our case we have Table, Base and History tables where the data is stored.
I then ran:
delete from T778
delete from B778
delete from H778
This allowed me to delete the data and bypass that restriction.
I was facing issues with a MERGE statement over large tables.
The source table for the merge is basically a clone of the target table after applying some DML.
E.g., in the example below, PUBLIC.customer is the eventual target and STAGING.customer is the source.
CREATE OR REPLACE TABLE STAGING.customer CLONE PUBLIC.customer;
MERGE INTO STAGING.customer TARGET USING (SELECT * FROM NEW_CUSTOMER) AS SOURCE ON TARGET.ID = SOURCE.ID
WHEN MATCHED AND SOURCE.DELETEFLAG = TRUE THEN DELETE
WHEN MATCHED AND TARGET.ROWMODIFIED < SOURCE.ROWMODIFIED THEN UPDATE SET TARGET.AGE = SOURCE.AGE, ...
WHEN NOT MATCHED THEN INSERT (AGE, DELETEFLAG, ID, ...) VALUES (SOURCE.AGE, SOURCE.DELETEFLAG, SOURCE.ID, ...);
Currently, we simply merge STAGING.customer back into PUBLIC.customer at the end.
This final merge statement is very costly for some of the large tables.
While looking for a solution to reduce the cost, I discovered Snowflake's "CHANGES" mechanism. As per the documentation:
Currently, at least one of the following must be true before change tracking metadata is recorded for a table:
Change tracking is enabled on the table (using ALTER TABLE … CHANGE_TRACKING = TRUE).
A stream is created for the table (using CREATE STREAM).
Both options add hidden columns to the table which store change tracking metadata. The columns consume a small amount of storage.
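For reference, enabling change tracking on an existing table is a single ALTER TABLE statement, per that same documentation:

ALTER TABLE PUBLIC.CUSTOMER SET CHANGE_TRACKING = TRUE;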
I assumed that the metadata added to the table is equivalent to the result set of a select statement using the "changes" clause, which doesn't seem to be the case.
INSERT INTO PUBLIC.CUSTOMER (AGE, ...)
SELECT AGE, ...
FROM STAGING.CUSTOMER
    CHANGES (INFORMATION => DEFAULT) AT (TIMESTAMP => 1675772176::timestamp)
WHERE "METADATA$ACTION" = 'INSERT';
The select statement using the "changes" clause is way slower than the merge statement I am using currently.
I checked the execution plan and found that Snowflake performs a self-join (of sorts) on the table at two different timestamps.
Should this really be the behaviour, or am I missing something here? I was hoping for better performance, on the assumption that Snowflake would scan the table once and then simply insert the new records, which should be faster than the merge statement.
Also, even if it does a self-join, why does the merge query perform better than this? The merge query is also doing a join over similar volumes.
I was also hoping to use the same mechanism for deletes/updates on the source table.
So here is a little MS SQL Server query:
update tblSaleReturnMain set sync=0
from tblSaleOrderMain s join tblSaleReturnMain r on r.ID=s.intReturnOrderId
where s.sync=0
It updates my tblSaleReturnMain table just fine. I wrote this query myself, but I don't know why it works. My question is: with all the many joined tables that I could reference after the FROM clause, and all the possible data that can be produced, how does this query know that the tblSaleReturnMain mentioned in "update tblSaleReturnMain ..." is the same one being filtered in the join? Is that always the protocol: we mention a table before the SET keyword and do not give it an alias, then go on filtering/joining its data any way we like, and whatever remains in the result set is what SET will apply to?
My question is specifically about UPDATE statements that have JOINs after the FROM keyword.
Also, this question is not about "how to use a join in an UPDATE statement", because I already did that successfully above.
Yes. As long as you only use the target table once in the FROM clause, SQL Server will assume it is the same table reference. From the docs (emphasis mine):
FROM <table_source>
Specifies that a table, view, or derived table source is used to provide the criteria for the update operation. For more information, see FROM (Transact-SQL).
If the object being updated is the same as the object in the FROM clause and there is only one reference to the object in the FROM clause, an object alias may or may not be specified. If the object being updated appears more than one time in the FROM clause, one, and only one, reference to the object must not specify a table alias. All other references to the object in the FROM clause must include an object alias.
If you reference the same table more than once and try to update it using just the table name rather than the alias, you'll get an error along the lines of:
Msg 8154 Level 16 State 1 Line 2
The table 'tblSaleReturnMain ' is ambiguous.
You can reference the same table more than once, but if doing so you must use the alias as the table_or_view_name, e.g.
UPDATE Alias
SET Col = 1
FROM dbo.T1 AS Alias
INNER JOIN dbo.T1 AS Alias2
ON Alias.ID = Alias2.ID;
Examples on DB<>Fiddle
I personally always use the alias regardless of whether the full table reference would be ambiguous.
SQL Server's UPDATE ... FROM syntax is non-standard and confusing.
Much better to write a CTE, examine the results, and then update the CTE, e.g.:
with q as
(
select r.sync
from tblSaleOrderMain s
join tblSaleReturnMain r
on r.ID=s.intReturnOrderId
where s.sync=0
)
update q set sync = 0
I'm trying to create a check constraint on a table so that ParentID is never a descendant of the current record.
For instance, I have a table Categories with the following fields: ID, Name, ParentID.
I have the following CTE:
WITH Children AS (
    SELECT ID AS AncestorID, ID, ParentID AS NextAncestorID FROM Categories
    UNION ALL
    SELECT Categories.ID, Children.ID, Categories.ParentID
    FROM Categories JOIN Children ON Categories.ID = Children.NextAncestorID
)
SELECT ID FROM Children WHERE AncestorID = 99
The results here are correct, but when I try to add it as a constraint to the table like this:
ALTER TABLE dbo.Categories ADD CONSTRAINT CK_Categories CHECK (ParentID NOT IN (
    WITH Children AS (
        SELECT ID AS AncestorID, ID, ParentID AS NextAncestorID FROM Categories
        UNION ALL
        SELECT Categories.ID, Children.ID, Categories.ParentID
        FROM Categories JOIN Children ON Categories.ID = Children.NextAncestorID
    )
    SELECT ID FROM Children WHERE AncestorID = ID
))
I get the following error:
Incorrect syntax near the keyword 'with'. If this statement is a
common table expression, an xmlnamespaces clause or a change tracking
context clause, the previous statement must be terminated with a
semicolon.
Adding a semicolon before the WITH didn't help.
What would be the correct way to do this?
Thanks!
Per the SQL Server documentation on column constraints:
CHECK
Is a constraint that enforces domain integrity by limiting the possible values that can be entered into a column or columns.
logical_expression
Is a logical expression used in a CHECK constraint and returns TRUE or FALSE. logical_expression used with CHECK constraints cannot reference another table but can reference other columns in the same table for the same row. The expression cannot reference an alias data type.
(The above was quoted from the SQL Server 2017 version of the documentation, but the general principle applies to all previous versions as well, and you didn't state what version you are working with.)
The important part here is the "cannot reference another table, but can reference other columns in the same table for the same row" (emphasis added). A CTE would count as another table.
As such, you can't have a complex query like a CTE used for a CHECK constraint.
As Saman suggested, if you want to check against existing data in other rows, and it must be in the DB layer, you could do it as a trigger.
However, triggers have their own drawbacks (e.g. issues with discoverability, and behavior that is unexpected by those who are unaware of the trigger's presence).
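A rough sketch of what such a trigger might look like (the trigger name and error message here are invented; it walks down from each modified row and rolls back if the new ParentID turns out to be one of that row's own descendants):

CREATE TRIGGER trg_Categories_NoCycle ON dbo.Categories
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @violation bit = 0;
    -- Collect the descendants of each modified row
    WITH Children AS (
        SELECT i.ID AS RootID, c.ID
        FROM inserted AS i
        JOIN dbo.Categories AS c ON c.ParentID = i.ID
        UNION ALL
        SELECT ch.RootID, c.ID
        FROM dbo.Categories AS c
        JOIN Children AS ch ON c.ParentID = ch.ID
    )
    SELECT @violation = 1
    FROM Children AS ch
    JOIN inserted AS i ON i.ID = ch.RootID
    WHERE i.ParentID = ch.ID;
    IF @violation = 1
    BEGIN
        ROLLBACK TRANSACTION;
        -- THROW requires SQL Server 2012+; use RAISERROR on older versions
        THROW 50000, 'ParentID cannot be a descendant of the record itself.', 1;
    END;
END;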
As Sami suggested in their comment, another option is a UDF, but that's not without its own issues, potentially with both performance and stability, according to the answers on this question about the approach in SQL Server 2008. It's probably still applicable to later versions as well.
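A minimal sketch of that UDF approach, assuming a hypothetical function name of dbo.fn_IsDescendant (and subject to the performance and stability caveats from the linked question):

CREATE FUNCTION dbo.fn_IsDescendant (@ID int, @ParentID int)
RETURNS bit
AS
BEGIN
    DECLARE @hit bit = 0;
    -- Collect all descendants of @ID and flag if @ParentID is among them
    WITH Children AS (
        SELECT ID FROM dbo.Categories WHERE ParentID = @ID
        UNION ALL
        SELECT c.ID
        FROM dbo.Categories AS c
        JOIN Children AS ch ON c.ParentID = ch.ID
    )
    SELECT @hit = 1 FROM Children WHERE ID = @ParentID;
    RETURN @hit;
END;
GO

ALTER TABLE dbo.Categories ADD CONSTRAINT CK_Categories_NoDescendant
CHECK (dbo.fn_IsDescendant(ID, ParentID) = 0);

Keep in mind that SQL Server only evaluates the constraint against the rows being inserted or updated, which is part of why data-accessing UDFs in CHECK constraints can behave surprisingly.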
If possible, I would say it's usually best to move that logic into the application layer. I see from your comment that you already have "client-side" validation. If there is an app server between that client and the database server (such as in a web app), I would suggest putting that additional validation there (in the app server) instead of in the database.
I think I am missing something fundamental about working with SQL statements and (Delphi's ADO)Query component and/or setting up relationships between fields in (Access 2003) databases. I get error messages whenever I want to delete, update, etc. anything more complex than SQL.Text = 'SELECT something FROM aTable'.
For example, I created a simple many-to-many relationship between tables called Outline and Reference. The junction or join table is called Note:
Outline
OutlineID (PK)
etc.
Reference
RefID (PK)
etc.
Note
NoteID (PK)
OutlineID
RefID
NoteText
I enforced referential integrity on the joins in Access, but didn't tick the checkboxes to cascade deletes or updates. Meanwhile, over in Delphi my Query.SQL.Text is
SELECT Note.NoteID, Outline.OutlineID, Ref.RefID, Note.NoteText, Ref.Citation, Outline.OutlineText
FROM (Note LEFT JOIN Outline ON Outline.OutlineID=Note.OutlineID)
LEFT JOIN Ref on Ref.RefID=Note.RefID;
Initially I left the key fields out of the SELECT statement, producing an 'insufficient key column info' error when I tried deleting a record from the resulting table. I think I understand: you have to SELECT all the fields the db will need for any operations it will be asked to perform; it can't delete, update, etc. joined fields if it doesn't know what's joined to what. (Is this right?)
So, then, how do I go about deleting a record from this query? In other words, I want to (1) display a grid showing NoteText, Citation, and OutlineText, (2) select a record from the grid, (3) do something like click the Delete button on a DBNavigator, and (4) delete the record from the Note table that has the same NoteID and NoteText as the selected record.
Both James L and Hendra provide the essence of how to do what you want. The following is a way to implement it.
procedure TForm1.ADOQuery1BeforeDelete(DataSet: TDataSet);
var
SQL : string;
begin
SQL := 'DELETE FROM [Note] WHERE NoteID='+
DataSet.FieldByName('NoteID').AsString;
ADOConnection1.Execute(SQL);
TADOQuery(DataSet).ReQuery;
Abort;
end;
This will allow TADOQuery.Delete to work properly. The Abort is necessary to prevent the TADOQuery from also trying to delete the record after you have deleted it. The primary downside is that TADOQuery.ReQuery does not preserve the cursor position, i.e. the current record will be the first record.
Update:
The following attempts to restore the cursor. I do not like the second Requery, but it appears to be necessary to restore the DataSet after attempting to restore an invalid bookmark (due to deleting the last record). This worked with my limited testing.
procedure TForm1.ADOQuery1BeforeDelete(DataSet: TDataSet);
var
SQL : string;
bm : TBookmarkStr;
begin
SQL := 'DELETE FROM [Note] WHERE NoteID='+
DataSet.FieldByName('NoteID').AsString;
bm := Dataset.BookMark;
ADOConnection1.Execute(SQL);
TADOQuery(DataSet).ReQuery;
try
Dataset.BookMark := bm;
except
TADOQuery(DataSet).Requery;
DataSet.Last;
end;
Abort;
end;
If you were using a TADOTable, then the components handle the deletes in the database when you delete them from the TADOTable dataset. However, since you are using a TADOQuery that joins multiple tables, you need to handle the database delete differently.
When you make the record you want to delete the current record in the DB grid, the TADOQuery's cursor scrolls to that row in its dataset. You can then use TADOQuery.Delete to delete the current record. If you write code for the TADOQuery.BeforeDelete event, then you can capture the ID fields from the record before it is deleted locally, and, using another TADOQuery or TADOCommand component, you can create and execute the SQL to delete the record(s) from the database.
Since the code that deletes the records from the database is in the BeforeDelete event, if an exception occurs and the database records aren't deleted, the local delete will be cancelled too, and the local record will not be deleted -- and the error will be displayed (e.g., 'foreign key violation'...).
I have two tables in Filemaker:
tableA (which includes fields idA (e.g. a123), date, price) and
tableB (which includes fields idB (e.g. b123), date, price).
How can I create a new table, tableC, with field id populated with both idA and idB (with the other fields being used for calculations on the combined data of both tables)?
The only way is to script it (for repeating uses) or do it 'manually', if this is an ad-hoc process. Details depend on the situation, so please clarify.
Update: Sorry, I actually forgot about the question. I assume the ID fields do not overlap even across tables, and that you do not need to add the same record more than once, but update it instead. In such a case, the simplest script would be something like this:
Set Variable[ $self, Get( FileName ) ]
Import Records[ $self, Table A -> Table C, sync on ID, update and add new ]
Import Records[ $self, Table B -> Table C, sync on ID, update and add new ]
The Import Records step is managed via a rather elaborate dialog, but the idea is that you import from the same file (you can just type file:<YourFileName> there), the format is FileMaker Pro, and then you set the field mapping. Make sure to choose the Update matching records and Add remaining records options and select the ID fields as the key fields to sync by.
It would be a FileMaker script. It could be run as a script trigger, but then it's not going to be seamless to the user. Your best bet is to create the tables, then just run the script as needed (manually) to build Table C. If you have FileMaker Server, you could schedule this script to be run periodically to keep Table C up-to-date.
Maybe you can use the SELECT INTO statement.
I'm unsure whether you wish to use a calculated field from tableA and tableB, or if your intention was to only calculate fields from the same table.
If tableA.IdA also exists in tableB.IdA, you could join the two tables and select into.
Otherwise, you run the statement once for each table.
Select into statement
Select tableA.IdA, tableA.field1A, tableA.field2A, tableA.field1A * tableB.field2A as CalcField
into New_Table
from tableA
join tableB on tableA.IdA = tableB.IdA
Edit: missed the part where you mentioned FileMaker.
But maybe you could script this on the db and just drop the table.