MS Access Write Conflict - SQL Server - Me.Dirty

I'm getting the error message:
This record has been changed by another user since you started editing it. If you save the record, you will overwrite the changes the other user made...
I know this is a common question and I've spent hours researching and testing but to no avail. A few notes:
There are no bit fields anywhere in my database
All tables have a primary key, data type = identity
All tables have a create/modified timestamp trigger on updates and insert
I'm fairly certain that the problem has to do with when the form (and multiple subforms) creates the identity fields and/or when the timestamp triggers fire. Specifically, I only get this error on the "Individual Fish" table when I go back to edit an old 'fish' (as shown in the screenshot). If I just tab through the form and don't make any edits, it works fine. But if I need to edit anything on a previous 'fish' - after the identity / trigger fires - then it gives me the error.
I've gone through and added If Me.Dirty Then Me.Dirty = False End If to every form for the following events:
On Current, On Load, On Click, After Update, Before Update, Before Insert, On Dirty.
I also added DoCmd.RunCommand acCmdSaveRecord to On Deactivate. I will admit that I am not great at VBA, so there could be something silly I did here. Code attached. I've also messed around with Record Locks = Edited Record.
So far nothing seems to work. Please let me know if you think I'm missing something. Also, if you have any random comments or suggestions about my database design or anything else, I always welcome feedback.
Thanks!
UPDATE:
Albert's answers got me to the right place. The short version is to add a rowversion (aka timestamp) field to all tables. This was mentioned in several other posts, but I didn't realize the [awfully named] "timestamp" type didn't actually have anything to do with date or time. Thanks for the help!

OK, lots of things here to check. First of all, placing Me.Dirty = False in On Current, or in events like Before Update, will cause the same event to try to fire again. You really (but really, really) don't want to do this; wild, random tossing of Me.Dirty = False into those events will only make this issue much worse and often causes the very same event to trigger again.
Next up:
All tables have a create/modified timestamp trigger on updates and insert
OK, now the above is confusing. Do you have an actual server-side trigger? That is a separate issue, and it will often cause the "record has been changed by another user" message.
Also, when we speak of a timestamp column, keep in mind that such columns have ZERO to do with date/time. Over the years Microsoft has attempted to change the name used; the correct name is ROWVERSION, but the data type is still called timestamp, and it is NOT a datetime column. Do NOT in any way confuse this rowversion system column with a datetime column.
So, the assumptions are:
You have a timestamp column - this is of data type timestamp. This column is NOT touched by your code or your trigger in ANY WAY. It is NOT datetime, nor datetime2, but data type timestamp.
If you don't have an actual timestamp column here (it does not need to be on the form), then you will get constant "dirty" warnings. You also get this with any real (floating point) data type columns - especially if they are set by server-side code (Access rounds them differently).
Bottom line:
You need an actual rowversion column (of type timestamp) in that table. Behind the scenes, if Access does NOT find this column, it does a column-by-column compare, and a trigger that sets some LastUpdated column with GETDATE() on the server side will then cause nothing but problems. Introducing a timestamp column STOPS Access from doing the column-by-column compare after an update; it will ONLY look at the timestamp column. So your trigger can now update LastUpdated without creating a write conflict from Access's point of view.
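For example, adding the rowversion column is a one-line change on the server side. This is only a sketch; the table and column names here are placeholders, not your actual schema:
ALTER TABLE dbo.IndividualFish ADD RowVer rowversion;  -- maintained entirely by SQL Server; never write to it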
So, you need to be sure:
A PK column is seen by Access - all SQL Server tables need a PK.
All tables should have a rowversion column.
If you do add a timestamp (rowversion) column to the problem table, then make sure you re-link on the Access client side. In fact, after ANY table change or modification server side, you should re-link the client side.
I would remove any stray me.Dirty = False code in that form.
You can place a "save" button on the form if you wish, and simply have it go
If Me.Dirty = True Then Me.Dirty = False
Edit
With the above setup, you should be able to re-introduce your server-side trigger that sets LastUpdated. However, you do not want any code in the form that touches or uses that column. You should, however, be able to drop that LastUpdated column onto the form and see it update after you save.
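As a rough sketch of what such a server-side trigger might look like (table, key, and column names are placeholders, not your actual schema):
CREATE TRIGGER dbo.trg_IndividualFish_LastUpdated
ON dbo.IndividualFish
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Stamp the audit column; the rowversion column is maintained by
    -- SQL Server on its own and is never referenced here.
    UPDATE f
    SET    f.LastUpdated = GETDATE()
    FROM   dbo.IndividualFish AS f
    INNER JOIN inserted AS i
        ON i.FishID = f.FishID;
END;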

Bottom line: I have run across this error after an upgrade to SQL Server 2016 - do not assume "timestamp" is of data type "datetime". It is not. The data type that Access requires is "timestamp" (rowversion). Add a column of that data type to any table that is editable through Access, and that will clear the "Write Conflict" message with the grayed-out Save button.

Related

Why can't Access update table?

Access front end, SQL Server back end.
Simple update query
PARAMETERS ParamTransactionID Long, ParamVoidFlag Short;
UPDATE tblTransaction
SET tblTransaction.VoidInProgressFlag = [ParamVoidFlag]
WHERE (tblTransaction.TransactionID=[ParamTransactionID]);
Using the query from code:
Set qdef = CurrentDb.QueryDefs("qUPD-tblTransaction_VoidInProgress")
qdef.Parameters![ParamVoidFlag] = VoidFlag
qdef.Parameters![ParamTransactionID] = TransactionID
qdef.Execute dbFailOnError + dbSeeChanges
qdef.Close
gives
[Microsoft][ODBC SQL Server Driver]Query timeout expired
ODBC--update on a linked table 'tblTransaction' failed.
Editing the table directly works.
Opening the query and giving parameters works.
From the app still doesn't.
UPDATE
Deleted the view, no effect.
The old version is now getting the same failure, so it seems like it is not a code issue.
The only thing in common is the table, so it might be a minor change I made there.
I'll check and see if it is just that table or the entire database.
But seems odd that I can make the change by running the query directly, but get different results running it from code.
UPDATE 2
I thought that perhaps it was something to do with the entire database being read-only somehow, and this is just the first place it is getting hit. But no. Other forms can update their tables with no issues.
So it looks related to the specific table. But it still seems odd that I can update perfectly fine by just running the Update Query.
UPDATE 3
To make testing easier, I am running the query from the main menu form, instead of going through all the forms to get to the point where it fails.
Running against the original DB schema worked. Made the same changes again, replacing NTEXT with VARCHAR(MAX), and it still works.
Back to the original table, still works.
Go back through all the forms, fails.
So the problem seems to be related to one of the forms that is open.
I'll go back through that sequence again.
Also, this explains why it works from the query and not from the form.
Sadly, I can't get to the query to run that while the form is open.
Ok, the first question is why/how did you wind up with "Short" data type for the parameter?
It should be:
PARAMETERS ParamTransactionID Long, ParamVoidFlag Bit;
UPDATE tblTransaction
SET tblTransaction.VoidInProgressFlag = [ParamVoidFlag]
WHERE (tblTransaction.TransactionID=[ParamTransactionID]);
You also state that this query works when you run it from the UI.
So, in code, then you have this:
Make sure that ALL code modules have option Explicit at the start like this:
Option Compare Database
Option Explicit
So, your code should now be:
Dim qdef as DAO.QueryDef
Set qdef = CurrentDb.QueryDefs("qUPD-tblTransaction_VoidInProgress")
qdef.Parameters![ParamVoidFlag] = VoidFlag
qdef.Parameters![ParamTransactionID] = TransactionID
qdef.Execute dbFailOnError + dbSeeChanges
So correct the data type you have for ParamVoidFlag.
Also, check the table name in the left navigation pane - is it
dbo_tblTransaction
or
tblTransaction.
Also, OPEN UP the linked table in design view (ignore the read-only message) and look at the data types. You MUST have a PK defined - so check for a PK.
Next up, on the SQL Server side, true/false (bit) fields MUST NOT be left as null. If they are, updates will fail. So in SQL Server, make sure the bit column in question has a default of 0 and no null values.
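As a sketch, assuming the bit column is tblTransaction.VoidInProgressFlag from the query above (the constraint name is made up), the server-side fix looks something like this:
-- Clear existing NULLs, give the bit column a default, and disallow NULLs
UPDATE dbo.tblTransaction
SET    VoidInProgressFlag = 0
WHERE  VoidInProgressFlag IS NULL;

ALTER TABLE dbo.tblTransaction
    ADD CONSTRAINT DF_tblTransaction_VoidInProgressFlag
    DEFAULT 0 FOR VoidInProgressFlag;

ALTER TABLE dbo.tblTransaction
    ALTER COLUMN VoidInProgressFlag bit NOT NULL;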
Last but not least?
If the query still errors out, then you need to add a timestamp (rowversion, not date/time) column to the SQL Server table and re-link. This is required if any columns are floating point.
After you add Option Explicit to the start of the code module, also do a Debug -> Compile from the menu and make sure the code compiles.
It turned out that the query failed if a particular form was open.
That form queried the same table, but with Snapshot instead of Dynaset.
I don't know why that locks the table. It has a proper key and index. But Dynaset fixes it.

Linked SQL Server table shows all fields as #Deleted, but when converted to a local table, all the information is there

My company has a really old Access 2003 .ADP front-end connected to an on-premise SQL Server. I was trying to update the front-end to MS Access 2016, which is what we're transitioning to, but when linking the tables I get all the fields in this specific table as #Deleted. I've looked around and tried to change some of the settings, but I'm really not familiar enough with SQL Server to know what I'm doing, hence asking for help.
When converting the table to a local table, all the info is correctly displayed, so the data is clearly there. Also, skipping to the last record reveals the info on that record, and sorting/filtering reveals some of the records, but most of the table stays "#Deleted"...
Since I know you're going to ask: yes, I need to edit the records. Although the snapshot method would work for people who only need to view the info, some of us need to edit it.
I'm hoping someone can shed some light on this,
Thanks in advance, Rafael.
There are 3 common reasons for this:
You have bit fields in SQL Server, but they are null. They should be assigned a default of 0.
The table in question does NOT have a PK (primary key).
Last but not least, you need (want) to add a timestamp column. Keep in mind that this is really what we call a "rowversion" column (it is not a date/time column, even though the data type is called timestamp). Adding this column helps Access determine whether a record has been changed, and this is especially the case for any table/form in Access that allows editing of "real" number data types (single, double). If Access does not find a timestamp column, it reverts to a column-by-column comparison to determine changes, and due to how computers handle "real" numbers (with rounding), such comparisons often fail.
So, check for the above 3 issues. You likely should re-run the Linked Table Manager after making any changes.
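If you want to check for all three issues at once, a diagnostic script along these lines can help. This is only an untested sketch against the standard catalog views:
-- Tables with no primary key
SELECT t.name AS TableWithoutPK
FROM sys.tables AS t
WHERE NOT EXISTS (SELECT 1 FROM sys.key_constraints AS kc
                  WHERE kc.parent_object_id = t.object_id AND kc.type = 'PK');

-- Tables with no rowversion (timestamp) column
SELECT t.name AS TableWithoutRowversion
FROM sys.tables AS t
WHERE NOT EXISTS (SELECT 1 FROM sys.columns AS c
                  JOIN sys.types AS ty ON ty.user_type_id = c.user_type_id
                  WHERE c.object_id = t.object_id AND ty.name = 'timestamp');

-- Nullable bit columns (candidates for a default of 0)
SELECT t.name AS TableName, c.name AS NullableBitColumn
FROM sys.columns AS c
JOIN sys.tables AS t ON t.object_id = c.object_id
JOIN sys.types AS ty ON ty.user_type_id = c.user_type_id
WHERE ty.name = 'bit' AND c.is_nullable = 1;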

MS Access - Changes to linked table (view) not saved

I have an MS-Access front end to a MSSQL server back end. The existing, functional updates that the tool makes are applied through a MSSQL view which is inserted to MS-Access as a linked table. There is a primary key defined for this linked 'table' (view).
The user sees a subset of records that match previously selected criteria, and uses comboboxes (unbound) to select the value of several fields that are then applied to all matching records using DoCmd.RunSQL with Me.Filter on the "After Update" Event.
Users have requested an additional piece of functionality.
I have:
Added the new column required to the underlying table referenced in the view
Added the column to be output in the view
Refreshed the linked table in MS-Access
Added the new field to the form that will be updating it, and modified the DoCmd.RunSQL statement to enact the UPDATE
When updating the new field via the form, I get the standard message "You are about to change x rows" where x is the appropriate number. Pressing OK gives no errors, but the table is not updated.
To debug, I attempted to change the record in the linked table view directly. Again no errors were thrown and the row seems to be updated, but this is not reflected in SSMS, and after reloading the table in MS-Access the change is no longer present. I can change the values of columns other than the new one.
I also tested adding the underlying table as a linked table and I can edit the rows in MS-Access in this table.
(Update)
At @ErikvonAsmuth's suggestion below, I tried using recordsets on the bound form instead of DoCmd.RunSQL. Again, I could access the record, and the update gave no error on rst.Update, but the change is not reflected in the database for the new field. I can change a previously existing field using this method, as above.
Seems my problem is independent of the update method.
(/Update)
I would appreciate any ideas for next steps to check.
I found the issue.
There was a trigger defined on the view to handle saving to the table. I altered the trigger to add the new column and everything is working now.
I found a hint when I tried to edit the field in SSMS and it wouldn't work there either - I was getting a "Row failed to retrieve on last operation" error.
That led me to a thread referencing triggers on ExpertsExchange.
I will be going back and changing the DoCmd.RunSQL statements to use recordsets.
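For anyone hitting the same thing: the fix amounts to extending the view's INSTEAD OF UPDATE trigger so it passes the new column through to the base table. A rough sketch only, with every object and column name below being a placeholder:
ALTER TRIGGER dbo.trg_vwTransactions_Update
ON dbo.vwTransactions
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET t.ExistingColumn = i.ExistingColumn,
        t.NewColumn      = i.NewColumn   -- the column the old trigger was silently dropping
    FROM dbo.tblTransactions AS t
    INNER JOIN inserted AS i
        ON i.TransactionID = t.TransactionID;
END;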

SQL Server: How to list changed columns with change tracking?

I use SQL Server 2012 Standard edition, and I activated Change Tracking function on a table.
When I list changes on a table with the CHANGETABLE function, I have a SYS_CHANGE_COLUMNS property with binary data
0x0000000045000000460000004700000048000000
How do I know which columns have changed ?
Because the value is a bitmask built from the column IDs of all the columns that were changed, it's difficult to interpret directly. In fact, MSDN says not to interrogate SYS_CHANGE_COLUMNS directly: https://msdn.microsoft.com/en-us/library/bb934145.aspx
This binary value should not be interpreted directly.
However, when you are detecting changes for notification purposes, usually the notification consumer has a good idea of which columns they are interested in changing.
For this use-case, use the CHANGE_TRACKING_IS_COLUMN_IN_MASK function.
-- Get the column ID of my column
DECLARE @MyColumnId int
SET @MyColumnId = COLUMNPROPERTY(OBJECT_ID('MyTable'), 'MyColumn', 'ColumnId')
-- Check whether that column is flagged in this row's SYS_CHANGE_COLUMNS mask
DECLARE @MyColumnHasChanged bit
SET @MyColumnHasChanged = CHANGE_TRACKING_IS_COLUMN_IN_MASK(@MyColumnId, @change_columns_bitmask);
If CHANGE_TRACKING_IS_COLUMN_IN_MASK tells me whether a column has changed, how can I write a script that tells me which columns have changed? I have around 50 attributes for each table.
I'm afraid you'll need to loop through all of the columns you may be interested in... If this is too restrictive, you may have to use another change-notification approach, like Change Data Capture (CDC) or triggers.
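One way to avoid hand-writing fifty checks is to drive CHANGE_TRACKING_IS_COLUMN_IN_MASK from sys.columns. A sketch, assuming the mask value comes from your CHANGETABLE(CHANGES ...) query and 'dbo.MyTable' stands in for your table:
DECLARE @mask varbinary(4100) = 0x0000000045000000460000004700000048000000;

-- List every column of the table that is flagged in the SYS_CHANGE_COLUMNS mask
SELECT c.name AS ChangedColumn
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('dbo.MyTable')
  AND CHANGE_TRACKING_IS_COLUMN_IN_MASK(c.column_id, @mask) = 1;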

Why won't SQL Server CE for WebMatrix 3 allow me to manually add a row to a table in my database?

I haven't had this problem until I first tried to manually add data to a database since my upgrade to WebMatrix 3, so I don't know if this is a bug or some kind of fault prevention.
I have defined a very simple table with the primary key as an int, set to not allow nulls and marked IsIdentity so that the int value will automatically increment, as needed.
A pic of that is shown here:
Okay, seems simple enough, but when I try to manually add data to the table, it, as it should, does NOT allow me to modify the primary key value in any way (because it is automatic).
All I do is put a couple of string values into the type and location columns, and it tells me that it couldn't commit changes to the database because of the invalid value in the primary key field. It acts as though it is going to try to insert NULL as the value, but this should be overridden when it automatically adds the row; the user interface does not allow me to control or edit this value in any way.
A pic of this is shown here:
What is this? Some kind of bug? Is it a new rule that WebMatrix does not allow a developer to add values to the database manually? Do I have to write a query every time I want to add something to the database? Am I in the Twilight Zone? (Okay, sorry about the last one...)
Also, I've noticed that if I don't have IsIdentity set, I can edit the field, put a PERFECTLY VALID integer in it, and it still errors the same way, so I hit ESC to back out my changes, then hit refresh, only to find that it did, indeed, add the row anyway :/ . So this interface seems kind of buggy to begin with. In my scenario above (using IsIdentity), it does NOT add the row anyway, unfortunately.
--------------------UPDATE--------------------------
I just recently downloaded a WebMatrix update, and it appears that they have fixed this! Yay! (Until now I was just writing generic INSERT INTO statements and editing them manually from there.)
I think the SQL CE tooling with WM3 is broken. I suggest you look at other tools for editing data - I can recommend the standalone SQL Server Compact Toolbox (disclosure: I am the author).
