Using a single TADOQuery, I pull records from two different tables with a left outer join:
SELECT M.*, D.* FROM Courier M LEFT OUTER JOIN Courier_VT D ON M.Courier_Identifier = D.FK_Courier_Identifier
I use a TDBGrid to successfully post field updates to my MSSQL DB.
Since there is a foreign key relationship (FK_Courier_Identifier references Courier_Identifier), I get an error when I insert a record:
Cannot insert the value NULL into column 'FK_Courier_Identifier', table 'Courier_VT'; column does not allow nulls
but a record is still posted to the Courier table. I know that I need to assign Courier_Identifier to FK_Courier_Identifier before posting, but I don't know how or where to do it.
How do we insert/delete records in this scenario? Is it possible to achieve this using a single TADOQuery?
AFAIK, TADOQuery is unable to handle insert/delete/update statements when multiple tables are joined. The reason is that it cannot know which table it has to update, or how to do it.
The usual approach with other database access components is either to provide a property for each type of DML statement (the ODAC components are one example) or to add a second "update SQL" component, linked to your query, that contains the DML statements (Zeos is one example of components that use this approach).
That said, your best bet is probably to use the BeforeDelete and BeforePost event handlers to handle your scenario. Basically, you would use them to build the DML statement, execute it with a stored procedure or SQL component, and then abort the event handler. Check the accepted answer to this SO question for more information and a code sample.
EDIT: if your code can handle the updates and deletes as you say in your comment, then the problem lies only with the assignment of FK_Courier_Identifier on insert (I should have read the question more carefully...), which you can solve in the BeforePost event handler:
procedure TMyForm.MyADOQueryBeforePost(Sender: TObject);
begin
  // Assign the foreign key before the new record is posted to the database
  MyADOQuery.FieldByName('FK_Courier_Identifier').AsString := CourierId;
end;
Of course, you will need to adapt this code, since I am assuming the field is a varchar and that you know the value of the Courier ID before inserting into the database.
HTH
I have an API that I'm trying to read that gives me just the updated field. I'm trying to take that and update my tables using a stored procedure. So far, the only way I have been able to figure out how to do this is with dynamic SQL, but I would prefer not to do that if there is a way to avoid it.
If it were just a couple of columns, I'd just write a proc for each, but we are talking about 100 fields, and any of them could be updated together. One ticket might just need a timestamp updated, while the next might need a timestamp and who modified it, and the one after that might just need a note.
Everything I've read and been taught has told me that dynamic SQL is bad, and while I'll write it if I have to, I'd prefer to have a proc.
You can perhaps do something like this:
IF EXISTS (SELECT * FROM NEWTABLE EXCEPT SELECT * FROM OLDTABLE)
BEGIN
    UPDATE OLDTABLE
    SET OLDTABLE.OLDRECORDS = NEWTABLE.NEWRECORDS
    FROM OLDTABLE
    INNER JOIN NEWTABLE
        ON OLDTABLE.PRIMARYKEY = NEWTABLE.PRIMARYKEY
END
The best way to solve your problem is using MERGE:
Performs insert, update, or delete operations on a target table based on the results of a join with a source table. For example, you can synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other table.
As you can see, your update could be more complex, but more efficient as well. Using MERGE requires some proficiency, but once you start using it, you'll use it again and again with pleasure.
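For example, a minimal sketch reusing the placeholder names from the previous answer (OLDTABLE, NEWTABLE, PRIMARYKEY, OLDRECORDS, NEWRECORDS), which assumes PRIMARYKEY identifies matching rows:
-- Update rows that already exist in the target, insert the ones that don't
MERGE OLDTABLE AS target
USING NEWTABLE AS source
    ON target.PRIMARYKEY = source.PRIMARYKEY
WHEN MATCHED THEN
    UPDATE SET target.OLDRECORDS = source.NEWRECORDS
WHEN NOT MATCHED BY TARGET THEN
    INSERT (PRIMARYKEY, OLDRECORDS) VALUES (source.PRIMARYKEY, source.NEWRECORDS);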
I am not sure how your business logic determines which columns are updated at what time. If there are separate business functions that require updating different but consistent columns per function, you will probably want to have individual update statements for each function. This will ensure that each process updates only the columns that it needs to update.
On the other hand, if your API is such that you really don't know ahead of time what needs to be updated, then building a dynamic SQL query is a good idea.
Another option is to build a save proc that sets every user-configurable field. As long as the calling process has all of that data, it can call the save procedure and pass every updateable column. There is no harm in having an UPDATE MyTable SET MyCol = @MyCol with the same values on each side.
Note that even if all of the values are the same, the rowversion (or timestamp) columns will still be updated, if present.
With our software, the tables that users can edit have a widely varying range of columns. We chose to create a single save procedure for each table that has all of the update-able columns as parameters. The calling processes (our web servers) have all the required columns in memory. They pass all of the columns on every call. This performs fine for our purposes.
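As a rough sketch of that approach, assuming a hypothetical Ticket table with just a handful of the hundred-odd columns mentioned in the question (all names here are made up):
CREATE PROCEDURE dbo.Ticket_Save
    @TicketId   int,
    @Note       nvarchar(max),
    @AssignedTo nvarchar(100),
    @ModifiedAt datetime
AS
BEGIN
    SET NOCOUNT ON;

    -- Every updateable column is set on every call; unchanged values are
    -- simply rewritten with the same data, which does no harm.
    UPDATE dbo.Ticket
    SET Note       = @Note,
        AssignedTo = @AssignedTo,
        ModifiedAt = @ModifiedAt
    WHERE TicketId = @TicketId;
END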
I went over the documentation for ClickHouse and did not see an option to UPDATE or DELETE. It seems to me that it's an append-only system.
Is it possible to update existing records, or is there some workaround, such as truncating a partition that contains changed records and then re-inserting the entire data for that partition?
Using ALTER queries in ClickHouse, we can delete/update rows in a table.
For delete, the query should be constructed as
ALTER TABLE testing.Employee DELETE WHERE Emp_Name='user4';
For update, the query should be constructed as
ALTER TABLE testing.Employee UPDATE AssignedUser='sunil' WHERE AssignedUser='sunny';
UPDATE: This answer is no longer true; see https://stackoverflow.com/a/55298764/3583139
ClickHouse doesn't support real UPDATE/DELETE.
But there are a few possible workarounds:
Try to organize the data in such a way that it doesn't need to be updated.
You could write a log of update events to a table and then calculate reports from that log. So, instead of updating existing records, you append new records to the table.
Use a table engine that does data transformation in the background during merges. For example, the (rather specific) CollapsingMergeTree table engine (sketched after this list):
https://clickhouse.yandex/reference_en.html#CollapsingMergeTree
There is also the ReplacingMergeTree table engine (not documented yet; you can find an example in the tests: https://github.com/yandex/ClickHouse/blob/master/dbms/tests/queries/0_stateless/00325_replacing_merge_tree.sql).
The drawback is that you don't know when the background merge will be done, or whether it will ever be done.
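A rough sketch of the CollapsingMergeTree approach (table and column names are made up, and it uses the current CREATE TABLE syntax rather than the one in the old documentation linked above):
CREATE TABLE user_state
(
    user_id UInt64,
    balance Int64,
    sign    Int8    -- +1 for the current state, -1 to cancel a previously inserted row
) ENGINE = CollapsingMergeTree(sign)
ORDER BY user_id;

-- "Update" a row by cancelling the old state and inserting the new one;
-- rows with opposite signs collapse during background merges.
INSERT INTO user_state VALUES (42, 100, -1), (42, 150, 1);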
Also look at samdoj's answer.
You can drop and create new tables, but depending on their size, this might be very time-consuming. You could do something like this:
For deletion, something like this could work.
-- Copy everything except the row(s) to delete into a temporary table,
-- then swap it in place of the original table.
INSERT INTO tableTemp SELECT * FROM table1 WHERE rowID != #targetRowID;
DROP TABLE table1;
RENAME TABLE tableTemp TO table1;
Similarly, to update a row, you could first delete it in this manner and then insert the updated version.
Functionality to UPDATE or DELETE data has been added in recent ClickHouse releases, but it's an expensive batch operation that can't be performed too frequently.
See https://clickhouse.yandex/docs/en/query_language/alter/#mutations for more details.
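Since these mutations run asynchronously, you can check whether they have finished via the system.mutations table; for example (the table name filter follows the Employee example above):
SELECT mutation_id, command, is_done
FROM system.mutations
WHERE table = 'Employee' AND is_done = 0;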
It's an old question, but updates are now supported in ClickHouse. Note that making many small changes is not recommended for performance reasons, but it is possible.
Syntax:
ALTER TABLE [db.]table UPDATE column1 = expr1 [, ...] WHERE filter_expr
ClickHouse UPDATE documentation
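For instance (the table and column names are made up for illustration):
ALTER TABLE ticket_events UPDATE status = 'closed' WHERE ticket_id = 1001;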
We are migrating from DB2 to SQL Server, and there are a number of BEFORE insert/update triggers that we need to migrate. We can take care of the inserts fairly simply with an INSTEAD OF INSERT trigger that issues "INSERT INTO TableName SELECT * FROM inserted".
However, the update is harder, as you can't just issue a command like "UPDATE TableName SELECT * FROM inserted". Instead, the only option we have found is to declare variables for each of the incoming columns and then use those in the UPDATE TableName SET ColumnName = @col1, etc. Unfortunately, this results in quite a bit of manual work, and I would like to find a more automatable solution.
Some questions:
1) Is there a way you can issue an update using inserted from the trigger, without knowing the specific column information?
2) Is there a way to write a loop in the trigger that would automatically step through the inserted columns, and update those to the database?
3) Is there a way to get access to the original command that caused the trigger, so I can do an EXEC(@command) and take care of things that way?
Any help would be greatly appreciated!
Thanks!
Bob
You must specify the column names in an UPDATE
You could loop through the metadata of the target table (in sys.columns) and build an UPDATE statement dynamically, but dynamic SQL executes in its own scope, so it would not be able to access the inserted and deleted tables directly. Although you can work around this by copying the data into local temp tables (#inserted) first, it seems like a very awkward approach in general
There is no way to access the original UPDATE statement
But I'm not sure what you're really trying to achieve. Your question implies that the trigger does the original INSERT or UPDATE anyway without modifying any data. If that's really the case, you might want to explain what the purpose of your trigger is because there may be an alternative, easier way to do whatever it is that it's doing.
I'm also a bit confused by your statement that you have to "declare variables for each of the incoming columns, and then use those in the UPDATE TableName SET ColumnName = @col1, etc". Triggers in SQL Server always fire once per statement, so you normally do an UPDATE with a join to the inserted table to handle the case where the UPDATE affects more than one row.
You might also find the UPDATE() or COLUMNS_UPDATED() functions useful for limiting your trigger code to process only those columns that were really updated.
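A hedged sketch of that pattern, using made-up names (a table dbo.TableName with key column Id, an audited column SomeColumn, and an audit column LastModified), combining the join against inserted with an UPDATE() check:
CREATE TRIGGER trg_TableName_Update
ON dbo.TableName
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Only do the work when the column we care about was part of the UPDATE
    IF UPDATE(SomeColumn)
    BEGIN
        -- Join to the inserted pseudo-table so multi-row updates are handled
        UPDATE t
        SET t.LastModified = GETDATE()
        FROM dbo.TableName AS t
        INNER JOIN inserted AS i
            ON i.Id = t.Id;
    END
END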
I am trying to create a generic trigger in SQL Server which can copy all column data from table A and insert it into the corresponding fields in table B.
There are a few problems I am facing:
I need this copy to occur under three conditions: INSERT, DELETE, and UPDATE.
The trigger needs to fire after CUD operations. Using AFTER throws a SQL error saying ntext etc. are not supported in inserted. How do I resolve this error?
INSTEAD OF, if used, can work for INSERT but not for DELETE. Is there a way to do this for delete operations?
Is there a way I can write generic code inside the trigger that works for all sorts of tables (we can assume that all the columns in table A exist in table B)?
I am not well versed in triggers or, for that matter, DDL in SQL Server.
I would appreciate it if someone could provide some solutions.
Thanks
Ben
CREATE TRIGGER (Transact-SQL)
Use nvarchar(max) instead of ntext.
You can have an INSTEAD OF trigger for delete.
You can have one trigger that handles insert/update/delete for one table, but you cannot attach a trigger to more than one table.
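As a rough sketch of such a trigger (assuming both tables share the columns Id, Col1, and Col2, with Id as the key; all names are placeholders):
CREATE TRIGGER trg_TableA_Mirror
ON dbo.TableA
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove the old copies of deleted or updated rows from TableB
    DELETE b
    FROM dbo.TableB AS b
    INNER JOIN deleted AS d ON d.Id = b.Id;

    -- Copy the new state of inserted or updated rows into TableB
    INSERT INTO dbo.TableB (Id, Col1, Col2)
    SELECT i.Id, i.Col1, i.Col2
    FROM inserted AS i;
END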
Update: My problem doesn't seem to be with SQL Server. I executed an update statement manually inside the database against the view, and I was able to update the Bar table. I'll close this and research the OleDbDataAdapters more, since I think the problem lies with them.
This question applies to MS SQL Server 2000. How do I determine which table of the multitable view can be modified?
I have the following view:
CREATE VIEW dbo.MyView
AS
SELECT dbo.Foo.Id AS FooId, dbo.Bar.Id AS BarId,
       dbo.Foo.column1 AS FooColumn1, dbo.Foo.column2 AS FooColumn2,
       dbo.Foo.column3 AS FooColumn3,
       dbo.Bar.column1 AS BarColumn1, dbo.Bar.column2 AS BarColumn2,
       dbo.Bar.column3 AS BarColumn3
FROM dbo.Bar
     INNER JOIN dbo.Foo ON dbo.Bar.Id = dbo.Foo.ForeignId
When I update this view (using VB.NET OleDbDataAdapters), I can update columns of Foo, but not columns of Bar. My investigation tells me that in a multitable view like this, MS SQL Server only allows you to update one of the tables. So my question is: how does SQL Server determine which table can be updated?
I ran a test where I edited the fields of a particular row through the view. Afterwards, I used the OleDbDataAdapter to update the view. Only the edits to the Foo table were accepted; the edits to the Bar table were ignored (no exception was thrown).
Is there a way to predict which of the tables can be updated or a way to control which one? What if I wanted Bar to be the updateable table instead of Foo?
Update: I found this on Google, in MS SQL Server 2000 Unleashed:
http://books.google.com/books?id=G2YqBS9CQ0AC&pg=RA1-PA713&lpg=RA1-PA713&dq=ms+sql+server+"multitable+view"++updated&source=bl&ots=ZuQXIlEPbO&sig=JbgdDe5yU73aSkxh-SLDdtMYZDs&hl=en&ei=b-0SSq-aHZOitgPB38zgDQ&sa=X&oi=book_result&ct=result&resnum=1#PRA1-PA713,M1
(For some reason the URL I'm trying to paste doesn't work with this site, sorry that you have to copy&paste.)
Which says:
An update through a multitable view cannot affect more than one underlying base table.
A delete cannot be executed against multitable views.
But, I don't yet see an answer to my question.
Again, my question is:
How do I determine which table of the multitable view can be modified?
I realize I can write two update statements, one for each table. My concern is different: I need to audit code that uses views like the one above and does updates on the views, and I was hoping to find a way to determine which parts of the updates will be silently ignored.
Example:
I edit Bar.Column1 and then call the Update() method of the OleDbDataAdapter. This results in the Bar table not being modified, nor is an exception thrown. If I edit Foo.Column2 and then call Update(), the Foo table does get modified.
You can update any table in the view, but a single statement can only set fields that all belong to the same table. If you need to update fields from two tables in a view, then you must write two update statements.
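For illustration, against the view from the question (assuming it exposes the Foo and Bar columns under distinct aliases such as FooId/FooColumn1 and BarId/BarColumn1; the key values are placeholders):
-- Touches only dbo.Foo columns
UPDATE dbo.MyView
SET FooColumn1 = 'new Foo value'
WHERE FooId = 42;

-- Touches only dbo.Bar columns
UPDATE dbo.MyView
SET BarColumn1 = 'new Bar value'
WHERE BarId = 42;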
Personally I prefer not to update or delete from views at all. I use the base tables for that.
There are also rules concerning whether a view is updatable. See Books Online; search for this:
views-SQL Server, modifying data
You need to be able to uniquely identify the row in the table by returning the primary key. Try returning dbo.Bar.Id in the view, and you should be able to edit columns in table Bar.