SQL Server Nested Triggers not firing as expected

I have upgraded a SQL Server 6.5 database to SQL Server 2012 by scripting the schema from 6.5, fixing any syntax issues in that script, and then using it to create a 2012 database.
At the same time I have upgraded the front-end application from PowerBuilder 6 to 12.5.
When I perform a certain action in the application it inserts data into a given table. This table has a trigger associated with the INSERT action, and within this trigger other tables are updated, which causes additional triggers to fire on those tables as well.
Initially the PowerBuilder application reports the following error:
Row changed between retrieve and update.
No changes made to database.
Now I understand what this error message means but this is where it gets really 'interesting'!
In order to understand what is happening in the triggers I decided to insert data into a logging table from within the triggers so that I could better understand the flow of events. This had a rather unexpected side effect - the PowerBuilder application no longer reports any errors, and when I check the database all the data has been written as expected.
If I remove these lines of logging, the application once again fails with the error message previously listed.
My question is - can anyone explain why adding a few lines of logging could possibly have this side effect? It almost seems as if the act of adding logging which writes data to a logging table slows things down or somehow serializes the triggers to fire in the correct order....
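For reference, the logging I added is nothing more elaborate than an extra INSERT at the top of each trigger, along these lines (the log table and column names here are simplified placeholders, not my real schema):

INSERT INTO dbo.TriggerLog (TriggerName, TableName, LoggedAt)
VALUES ('trg_Orders_Insert', 'Orders', GETDATE());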
Thanks in advance for any insight you can offer :-)

Well, let's recap why this message comes up (I have a longer explanation at http://www.techno-kitten.com/PowerBuilder_Help/Troubleshooting/Database_Irregularities/database_irregularities.html). It's basically because the database can no longer find the data it's trying to UPDATE, based on the WHERE clause generated by the DataWindow. Triggers cause this by changing data in columns in the updated table, so the column = value comparisons in the generated WHERE clause no longer match.
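To make that concrete: depending on the DataWindow's update properties, the generated UPDATE typically compares the original values of the key and updateable columns, along these lines (table and column names are illustrative only):

UPDATE dbo.Orders
SET    Status = 'Shipped'
WHERE  OrderID = 42          -- key column
  AND  Status  = 'Open'      -- original value from the retrieve
  AND  Qty     = 5;          -- original value from the retrieve

If a trigger has already changed Status or Qty since the retrieve, zero rows match and PowerBuilder raises the "Row changed between retrieve and update" error.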
If I were troubleshooting this, I'd do the following for both versions of the trigger:
retrieve your data in the app and also in a DBMS tool, cache the data from both (the Original buffer in PB; a debugger breakpoint in PB may help), and compare the columns you expect to be in the WHERE clause (client-side manipulation of data and status flags can also cause this problem)
make your data changes and initiate the save
from a breakpoint in the SQLPreview events (this is likely multiple rows if it's trigger-caused), cache the pending UPDATE statements
while still paused in SQLPreview, use the WHERE clause from the UPDATE statements to SELECT the data with the DBMS tool (see the sketch below)
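For that last step, the idea is simply to turn the pending UPDATE's WHERE clause into a SELECT and see whether it still finds the row (the names are placeholders carried over from the earlier example):

SELECT *
FROM   dbo.Orders
WHERE  OrderID = 42
  AND  Status  = 'Open'
  AND  Qty     = 5;   -- if this returns no rows, you have found the failing UPDATE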
Somewhere through all this, you'll identify where the process is breaking down in the failure case, and figure out why it passes in the good case. I suspect you'll find a much simpler solution than you're hypothesizing.
Good luck,
Terry

Related

TADOConnection.OnExecuteComplete / OnWillExecute event not called with TADOTable

I am trying to trace SQL commands. I read this post: How can I monitor the SQL commands send over my ADO connection?
It works for SELECT but not for DELETE/INSERT/UPDATE...
Configuration: a TADOConnection (MS SQL Server), a TADOTable, a TDataSource, a TDBGrid with a TDBNavigator.
So I can trace the SELECT which occurs when the table is open, but nothing occurs when I use the DBNavigator to UPDATE, INSERT, or DELETE records.
When I use a TADOCommand to delete a record, tracing works too. It only seems to fail when I use the DBNavigator, which may be a clue, but I haven't found anything about that.
Thanks in advance
Hopefully someone will be able to point you in the direction of a pre-existing library that does your logging for you. In particular, if FireDAC is an option, you might take a look at what it says here:
http://docwiki.embarcadero.com/RADStudio/XE8/en/Database_Alerts_%28FireDAC%29
Of course, converting your app from ADO to FireDAC may not be an option for you, but depending on how great your need is, you could conceivably extract the SQL-Server-specific method of event alerting that FireDAC uses into an ADO application. I looked into this briefly a while ago and it looked like it would be fairly straightforward.
Prior to FireDAC, I implemented a server-side solution that caught inserts, updates and deletes. I had to do this about 10 years ago (for SQL Server 2000) and it was quite a performance to set up.
In outline it worked like this:
SQL Server supports what MS used to call "extended stored procedures", which are implemented in custom DLLs (MS may refer to them by a different name these days, or may even have stopped supporting them). There are Delphi libraries around that provide a wrapper to enable these to be written in Delphi. Of course, these days, if your SQL Server is 64-bit, you need to generate a 64-bit DLL.
You write extended stored procedures to log the changes any way you want, then write custom triggers in the database for inserts, updates and deletes that feed the data of the rows involved to your XSPs.
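The trigger side of that looked roughly like the sketch below; the extended stored procedure name (xp_logchange) and the table are hypothetical placeholders, since the real signature depends on the DLL you write:

-- AFTER trigger that hands the changed keys to a (hypothetical) logging XSP
CREATE TRIGGER trg_Customers_Audit
ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    DECLARE @id INT;
    DECLARE keys CURSOR LOCAL FAST_FORWARD FOR
        SELECT CustomerID FROM inserted
        UNION
        SELECT CustomerID FROM deleted;
    OPEN keys;
    FETCH NEXT FROM keys INTO @id;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC master..xp_logchange @id;   -- hypothetical XSP implemented in the custom DLL
        FETCH NEXT FROM keys INTO @id;
    END;
    CLOSE keys;
    DEALLOCATE keys;
END;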
As luck would have it, my need for this fell away just as I was completing the project, before I got to stress-testing and performance-profiling it but it did work.
Of course, not every environment will allow you to install software and trigger code on the SQL Server.
For interest, you might also take a look at https://msdn.microsoft.com/en-us/library/ms162565.aspx, which provides an SMO object for tracing SQL Server activity, though it seems to be 32-bit only at the moment.
For amusement, I might have a go at implementing an event handler for the recordset object that underlies a TAdoTable/TAdoQuery, which should be able to catch the changes you're after, but don't hold your breath ...
And, of course, if you're only interested in client-side logging, one way to do it is to write handlers for your dataset's AfterEdit, AfterInsert and AfterDelete events. Those wouldn't guarantee that the changes are ever actually applied at the server, of course, but could provide an accurate record of the user's activity, if that's sufficient for your needs.

Access and Linked Tables to SQL Server - edit data in layout view results in transaction and locked tables

I am using Access 2013 and SQL Server 2012. I have upsized the Access application to SQL Server using linked tables. I have an Access query (not on SQL Server) that provides a result from about 4 tables. I then have this displayed in a bound layout/table view where each row corresponds to a row from the result query.
In the layout view, the user can edit the data in any row. Once the user does an edit, apparently Access opens a transaction and keeps it open. As long as the user is in the editable layout view, the tables that were part of the query are locked. If another user on a different computer is using Access, they are unable to edit any of the tables (through any method, not just the same layout view). The second user will get a 30-second pause in their application and then finally gets the error...
ODBC-update on a linked table 'TableName' failed.
[Microsoft][ODBC SQL Server Driver]Query timeout expired (#0)
Once the first user exits the layout view, then all is opened up again for other users to edit.
Is there any way to control the transaction? Maybe just have it update the one row, and then release the transaction.
I may have to change the data source for the layout view to a SQL Server stored procedure or view and not allow edits in the table; instead, if the user wants to change something in a row, they would click to bring up an edit form. I'm looking for other options.
You have a few possible solutions.
One solution is to open the form in question with a where clause so that it opens ONLY the one row, not many rows.
The above is only a practical suggestion if the form in question is NOT a continuous form.
So you go:
strInvoice = InputBox("What invoice to work on")
DoCmd.OpenForm "frmInvoice", , , "[invoiceNum] = " & strInvoice
So limiting the form to the one row will fix this assuming there is an index on the invoice column. Also such a design tends to be more user friendly. I explain this important search concept here:
http://www.kallal.ca/Search/index.html
Another way to fix this issue is to FORCE fill the form with all records (this is really a bad idea from a performance point of view; even without SQL Server, launching a form WITHOUT ANY kind of where clause, as I show above, is a bad idea from a user point of view as well as a performance point of view).
The reason the table lock often occurs is that the form starts pulling data from SQL Server, but then Access says HEY WAIT, I have enough data; however, the queries that started on SQL Server have already taken out table locks and ASSUMED that all rows would be returned to the client (so the client halting the flow of records is what really causes your locks). What you can do to prevent this issue is execute a move-last to pull all records, and that will eliminate (release) the table locks.
So in the forms on-load event, you can go:
Me.Recordset.MoveLast
Me.Recordset.MoveFirst
As noted the problem here is that pulling all records into a form is a VERY bad design in the first place.
Last but not least:
Another way to eliminate the table lock is to build the query as a view on the SQL Server side and include the NOLOCK hint. You then set up a link to the view and base the form NOT on a local query but on the view. Since the view has a NOLOCK hint, you don't need the MoveLast/MoveFirst suggestion above.
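A minimal sketch of such a view (the view, table and column names here are made up; substitute the tables from your Access query):

CREATE VIEW dbo.vInvoiceLayout
AS
SELECT i.InvoiceNum, i.InvoiceDate, c.CustomerName, d.LineTotal
FROM   dbo.Invoices      AS i WITH (NOLOCK)
JOIN   dbo.Customers     AS c WITH (NOLOCK) ON c.CustomerID = i.CustomerID
JOIN   dbo.InvoiceDetail AS d WITH (NOLOCK) ON d.InvoiceNum = i.InvoiceNum;
-- NOLOCK means readers take no shared locks (at the price of possible dirty reads)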
So out of the several solutions here, opening the form to ONE record is really the recommended solution. I mean when you walk up to an instant teller machine, it does not download EVERY account and THEN ask you what account to work on. When you use a search engine such as google, then it does not download the WHOLE internet and THEN ask you what to search for. Even an old lady at the bus stop can figure this out, let alone someone writing software!
So when you design + build a form in Access, it makes VERY little sense to download all records from the table into the form and THEN let the user search. The user needs to be prompted for what to work on BEFORE the form is loaded, and if you use a where clause when opening the form, then EVEN if the form is bound to a large linked table on SQL Server, ONLY the record(s) that match the where clause will be pulled down into that form.
If you don’t use the where clause, then as noted, the locking issue will rear its ugly head.
As a temp fix, try the movelast/movefirst in the forms on-load, as that should fix the locking issue. However for longer term, I would suggest using a where clause, or consider the view idea.
Note that if you have ANY combo box based on any of the 4 tables, then AGAIN you will find locking issues, since the combo box will request the data (and SQL Server places a table lock during that request), and AGAIN Access will THEN tell SQL Server to please stop loading up the combo box since all rows are NOT yet required (but too late, as the table lock has occurred).
So if you have ANY combo box in that form based on any of the tables used, then again you will find table locks occur. In these cases, I would suggest basing the combo boxes on either pass-through queries with the NOLOCK hint, or on views, again with the NOLOCK hint.
So check the form for any combo box based on any of those tables – they will cause table locks also.

Limiting the number of updated/deleted rows in SQL Server Management Studio

It is very easy to make mistakes when it comes to UPDATE and DELETE statements in SQL Server Management Studio. You can easily delete way more than you want if you have a mistake in the WHERE condition or, even worse, delete the whole table if you mistakenly write an expression that evaluates to TRUE all the time.
Is there any way to disallow queries that affect a large number of rows from within SQL Server Management Studio? I know there is a feature like that in MySQL Workbench, but I couldn't find any in SQL Server Management Studio.
No.
It is your responsibility to ensure that:
Your data is properly backed up, so you can restore your data after making inadvertent changes.
You are not writing a new query from scratch and executing it directly on a production database without testing it first.
You execute your query in a transaction, and review the changes before committing the transaction.
You know how to properly filter your query to avoid issuing a DELETE/UPDATE statement on your entire table. If in doubt, always issue a SELECT * or SELECT COUNT(*) statement first, to see which records will be affected.
You don't rely on some silly feature in the front-end that might save you at times, but that will completely screw you over at other times.
A lot of good comments already said. Just one tiny addition: I have created a solution to prohibit occasional execution of DELETE or UPDATE without any WHERE condition at all. This is implemented as "Fatal actions guard" in my add-in named SSMSBoost.
(My comments were getting rather unwieldy)
One simple option, if you are uncertain, is to BEGIN TRAN, do the update, and if the rows-affected count is significantly different from what you expected, ROLLBACK; otherwise, do a few checks, e.g. SELECTs to ensure just the intended data was updated, and then COMMIT. The caveat here is that this will lock rows until you commit or roll back, and potentially require escalation to TABLOCK if a large number of rows are updated, so you will need to have the checking scripts planned in advance.
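A minimal sketch of that pattern (the table and predicate are placeholders):

BEGIN TRAN;

UPDATE dbo.Orders
SET    Status = 'Cancelled'
WHERE  OrderDate < '2015-01-01';

SELECT @@ROWCOUNT AS RowsAffected;   -- compare against the count you expected

-- spot-check the data while the transaction is still open
SELECT TOP (100) * FROM dbo.Orders WHERE OrderDate < '2015-01-01';

-- then run exactly one of the following:
-- COMMIT;    -- keep the change
-- ROLLBACK;  -- discard it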
That said, in any half-serious system, no one, not even senior DBAs, should really be executing direct ad-hoc DML statements on a prod DB (and arguably not on the formal UAT DB either) - this is what tested applications are meant for (or tested, verified patch scripts executed only after change control processes are considered).
In less formal dev environments, does it really matter if things get broken? In fact, if you are an advocate of Chaos Monkey, having juniors break your data might be a good thing in the long run - it will ensure that your process re scripting, migration, static data deployment, integrity checking are all in good order?
My suggestion is to disable auto-commit, so that you can review your changes before committing them, and commit before ending the session.
For more details, please follow the MSDN link:
http://msdn.microsoft.com/en-us/library/ms187807.aspx
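If I read that link correctly, the setting in question is implicit transaction mode, which you can turn on per session; a sketch (the DELETE is just an example statement):

SET IMPLICIT_TRANSACTIONS ON;   -- every DML statement now opens a transaction you must end yourself

DELETE FROM dbo.Orders WHERE OrderDate < '2010-01-01';

-- review the effect, then run either:
-- COMMIT;
-- ROLLBACK;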

Viewing database records realtime in WPF application

disclaimer: I must use a Microsoft Access database and I cannot connect my app to a server to subscribe to any service.
I am using VB.net to create a WPF application. I am populating a listview based on records from an access database which I query one time when the application loads and I fill a dataset. I then use LINQ to dataset to display data to the user depending on filters and whatnot.
However.. the access table is modified many times throughout the day which means the user will have "old data" as the day progresses if they do not reload the application. Is there a way to connect the access database to the VB.net application such that it can raise an event when a record is added, removed, or modified in the database? I am fine with any code required IN the event handler.. I just need to figure out a way to trigger a vb.net application event from the access table.
Think of what I am trying to do as viewing real-time edits to a database table, but within the application.. any help is MUCH appreciated and let me know if you require any clarification - I just need a general direction and I am happy to research more.
My solution idea:
Create an audit table for MS Access changes
Create a separate worker thread within the user's application to query the audit table for changes every 60 seconds
If changes are found, modify the affected dataset records
Raise an event on dataset record update to refresh any affected objects/properties
Couple of ways to do what you want, but you are basically right in your process.
As far as I know, there is no direct way to get events from the database drivers to let you know that something changed, so polling is the only solution.
If the MS Access database is an Access 2010 ACCDB database, and you are using the ACE drivers for it (needed if Access is not installed on the machine where the app is running), you can use the new data macro triggers to record changes to the tables in the database automatically to an audit table that would record new inserts, updates, deletes, etc. as needed.
This approach is the best since these happen at the ACE database driver level, so they will be as efficient as possible and transparent.
If you are using older versions of Access, then you will have to implement the auditing yourself. Allen Browne has a good article on that. A bit of search will bring other solutions as well.
You can also just run some queries on the tables you need to monitor.
In any case, you will need to monitor your audit or data table as you mentioned.
You can monitor for changes much more frequently than every 60 seconds; depending on the load on the database, the number of clients, etc., you could easily check every few seconds.
I would recommend though that you:
Keep a permanent connection to the database while your app is running: open a dummy table for reading, and don't close it until you shut down your app. This has no performance cost to anyone, but it will ensure that the expensive lock file creation is done only once, and not for every query you run. This can have a huge performance impact. See this article for more information on why.
Make it easy for your audit table (or for your data table) to be monitored: include a timestamp column that records when a record was created and last modified. This makes checking for changes very quick and efficient: you just need to check whether the most recent record modified date matches the last one you read (see the sketch below).
With Access 2010, it's easy to add the trigger to do that. With older versions, you'll need to do that at the level of the form.
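For instance, the polling check can be as small as this (assuming an audit table named tblAudit with a LastModified date/time column; the names are illustrative):

SELECT MAX(LastModified) AS LastChange
FROM   tblAudit;
-- if LastChange is later than the value cached from the previous poll,
-- re-query the changed rows and raise the refresh event in the app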
If you are using SQL Server:
Up to SQL Server 2005 you could use Notification Services.
Since SQL Server 2008 R2 it has been replaced by StreamInsight.
Other database management systems and alternatives (e.g. Oracle):
Handle changes in a middle tier and signal the client.
Or poll. This requires you to configure the interval so that you do not miss a change for too long.
In general
When a server has to be able to send messages to clients, it needs to keep a channel/socket open to each client, and this can become very expensive when there are a lot of clients. I would advise against server push and suggest doing intelligent polling instead. Intelligent polling means an interval that is as big as possible and appropriate caching on the server to prevent hitting the database too many times for the same data.

Records not replicated when inserted by custom replication stored procedure

I've just recently setup a custom replication for my subscriber database, as described in another post here. Basically, when the publisher pushes a new record to the subscribers, the stored procedure will also insert a replicated time into an extra column in the table, and insert a new record to a log table.
My problem occurs when trying to replicate the log table back to the main publication database. This is what I did:
In the database where the log table is located, I setup a new transactional replication, and set it to create a snapshot.
Once the publication is created, I create a new push subscription, and set it to initialize immediately.
Once the subscription is created, I checked the synchronization status and confirm that the snapshot is applied successfully.
Now here's the weird part: if I manually add a record to the log table using the SQL Server Management Studio, the record will be replicated fine. If the record is added by the custom replication stored procedure, it will not. The status will always display "No replicated transactions are available".
I have no clue why the publication is behaving this way: I really don't see how it is treating the data inserted by the custom replication stored procedure differently.
Can someone explain what I may have done wrong?
I finally got an answer to this problem a few months ago; I just never got around to updating this question. We had to log a support call with Microsoft, but we got a working solution.
To resolve the problem, when adding the subscription you need to run a script like the one below:
sp_addsubscription @publication = 'TEST', ..., @loopback_detection = 'false'
The key to the solution is the last parameter shown above: @loopback_detection = 'false'. By default, the generated subscription script will not have this parameter.
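For context, in a full call the parameter sits alongside the usual subscription arguments, something like this (the server, database and publication names are placeholders; only @loopback_detection is the relevant change):

EXEC sp_addsubscription
     @publication        = N'TEST',
     @subscriber         = N'PUBLISHERSRV',     -- the original publication server
     @destination_db     = N'MainPublisherDB',
     @subscription_type  = N'Push',
     @loopback_detection = 'false';   -- disable loopback detection so rows applied by the replication agent are also replicated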
I see this is a very old question now so you've probably resolved this, but anyway...
The problem you describe certainly doesn't seem to make sense. The replication will be invoked following any change to the source table via the replication trigger. The only thing that doesn't look right in your process description (though I may be misreading) is that you are creating a snapshot before pushing the subscription. Typically you should be setting up the replication, pushing the subscription and then creating/pushing a snapshot. Don't trust the sync status, as this isn't checking anything; it's simply saying it has no transactions to copy, and it doesn't know whether the tables are in sync.
As to why your manual insert works but not the automated one I would check and recheck your workings, as fundamentally, if the replication is working then any change to this table will be replicated, irrespective of the source.
If you have long since resolved this I'd be interested to hear the resolution.
Edit:
A late thought: when you are updating your datetime field using your custom proc that then fires triggers back into the replication database, you could be causing deadlocking problems between the replication model and your inserts. This could potentially be causing the failure to replicate back. Bit complex to figure out without running tests, but it's a possibility.
