Concurrency error with MS-Access Linked Tables - sql-server

I am linking tables to a SQL 2008R2 DB via MS Access Linked Tables.
I get this warning whenever I try to change data in an Access linked table whose underlying SQL table contains more than one bit field:
The record has been changed by another user since you started editing
it. If you save the record, you will overwrite the changes the other
user made
I don't have any problems when there is only one bit field in the table. It's really a strange error, IMHO. Has anyone else encountered this before and found a workaround for it, by any chance?

I've seen this sort of issue when working with linked tables against SQL Server in general, though I'm not sure why you're seeing it specifically with bit fields. Try adding a 'ts' column with the datatype of timestamp (rowversion) to the table and relinking it in Access.
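A minimal sketch of that change (dbo.MyTable is a placeholder name):

-- A rowversion column gives Access a single value to use for its
-- optimistic-concurrency check instead of comparing every column,
-- which is where nullable bit fields tend to cause false conflicts.
ALTER TABLE dbo.MyTable ADD ts rowversion;

After adding the column, refresh the table in Access's Linked Table Manager so the new column is picked up.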

I know this is an old question, but maybe my answer will benefit others, since I struggled with the same and similar issues.
I had a similar error and was mostly able to get around it. One thing that may help is to run SQL Profiler against the database and watch the SQL commands Access issues while you are trying to add a new row.
A few things to check:
1) Verify that you have an ID column in the table set as the Primary key and AutoNumber
2) If this involves a master/child relationship with another table, specify the relationship and the join type between these tables in the Access Database Tools "Relationships" window.
3) If the source is a join between tables, play around with which primary key and foreign key columns are exposed in the query.
Using SQL Profiler, I could see Access trying to find the row to update based on other columns besides the primary key, e.g.
update table
set ...
where id = 5 and data1 = somevalue and data2 = othervalue
When doing this, I would sometimes get the same error since I may have edited other values in the new row and therefore the complex where clause would fail. What you want is to have the update rely totally on the primary key.
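If you add the rowversion column suggested in the other answer, the statement you catch in Profiler should collapse to the primary key plus that one column, roughly like this (values are illustrative):

update table
set ...
where id = 5 and ts = 0x00000000000007D1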

Related

Delete all records of a table which are not referenced in any other table; dozens of foreign tables; dynamic solution

I am looking for a solution to detect and delete all records of a table "UniqueKeys" which are no longer referenced by records in any other table. As my original question seemed unclear, I have rephrased it.
Challenge:
If there is a table called "UniqueKeys" consisting of an ID and a uniqueidentifier column, and there are dozens of tables which reference the ID field of "UniqueKeys", and some records in "UniqueKeys" have IDs that are not used by any of those other tables' references, then I want to be able to detect and delete those records with a SQL query, without hard-coding the joins to all of the other tables.
The solutions I have found so far involve explicitly writing joins against each of the "other" tables, which I want to avoid here.
Like this: Other SO answer
The goal: a generic solution, so that devs can add further foreign tables at any time and the solution will continue (without modification) to detect any references to table "X" (and avoid deleting the referenced records).
I know that I could simply iterate programmatically (in the programming language of my choice) through all records of table "UniqueKeys" and use exception handling to continue whenever a given record cannot be deleted because of an active constraint.
This is what I am currently doing - and it yields the desired result - but imho this is a very ugly approach.
As I am no SQL expert, tell me how to phrase the above better if that helps clarify what I am trying to achieve.
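To make the goal concrete, the kind of catalog-driven statement I am after would look roughly like this on SQL Server (a sketch only; it assumes single-column foreign keys and the table/column names above):

-- Build one NOT EXISTS test per foreign key that references
-- dbo.UniqueKeys(ID), so newly added referencing tables are picked up
-- automatically.
DECLARE @sql nvarchar(max) = N'DELETE uk FROM dbo.UniqueKeys AS uk WHERE 1 = 1';

SELECT @sql += N'
  AND NOT EXISTS (SELECT 1 FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
  + N' AS r WHERE r.' + QUOTENAME(c.name) + N' = uk.ID)'
FROM sys.foreign_key_columns AS fkc
JOIN sys.columns AS rc ON rc.object_id = fkc.referenced_object_id
                      AND rc.column_id = fkc.referenced_column_id
JOIN sys.columns AS c  ON c.object_id  = fkc.parent_object_id
                      AND c.column_id  = fkc.parent_column_id
JOIN sys.tables  AS t  ON t.object_id  = fkc.parent_object_id
JOIN sys.schemas AS s  ON s.schema_id  = t.schema_id
WHERE fkc.referenced_object_id = OBJECT_ID(N'dbo.UniqueKeys')
  AND rc.name = N'ID';

EXEC sys.sp_executesql @sql;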

Find foreign keys based on data

I am looking at a database which has almost no foreign keys defined.
Is there a tool that can perform some data analysis/heuristics and "guess" the relations based on data. I am looking for some kind of report, which can be used as a manual guide/checklist.
I had a similar problem - every table had an Object_ID column... but had secondary IDs too.
All were of a weird GUID-ish form.
I ended up writing a brute force scanner (using dynamic SQL driven by information_schema.columns).
Of course this approach relied on the values being globally unique... If you have a bunch of int identity columns and no way to connect the tables, then you are in a bit of trouble!
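For what it's worth, that scanner boiled down to something like this (a sketch; @probe is a placeholder for one sample ID you already know, and the uniqueidentifier filter is an assumption - widen it if your GUID-ish values are stored as strings):

-- Generate one probe query per candidate column.
DECLARE @probe nvarchar(36) = N'00000000-0000-0000-0000-000000000000';

SELECT 'SELECT ''' + TABLE_SCHEMA + '.' + TABLE_NAME + '.' + COLUMN_NAME
     + ''' AS hit FROM ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
     + ' WHERE ' + QUOTENAME(COLUMN_NAME) + ' = ''' + @probe + ''''
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE = 'uniqueidentifier';

-- Run the generated statements; any that return a row point at a
-- probable relationship to the table the sample value came from.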
Perhaps there is a timestamp column or a DateTime defaulting to GetDate() - you could use this to identify records in different tables that were created at approximately the same time.
A lot depends on your schema...

Finding all pseudo related data within an SQL Server database

I have a requirement to change a "broken" computed column in a table to an identity column and as part of this work update some of the field values. This column is a pseudo primary key so doesn't have any constraints defined against it. I therefore need to determine if any other tables in the database contain a pseudo foreign key back to this column.
Before writing something myself I'd like to know if there is a script/tool in existence that when given a value (not a column name) can search across the data in all of the tables within an SQL Server database and show where that value exists?
Thanks in advance.
Quick google found this page/script:
http://vyaskn.tripod.com/search_all_columns_in_all_tables.htm
I don't personally know of a pretty GUI-interfaced utility that'll do it.
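In case the link dies, the heart of that kind of script is roughly the following (a sketch only; this version hunts one integer value across all int/bigint columns, and @val is a placeholder - the type filter and comparison would change for other value types):

DECLARE @val int = 42;               -- the pseudo-key value to hunt for
DECLARE @sql nvarchar(max) = N'';

-- Build one probe per integer column in the database.
SELECT @sql += N'SELECT ''' + TABLE_SCHEMA + N'.' + TABLE_NAME + N'.' + COLUMN_NAME
  + N''' AS hit, COUNT(*) AS row_count FROM '
  + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME)
  + N' WHERE ' + QUOTENAME(COLUMN_NAME) + N' = ' + CAST(@val AS nvarchar(20))
  + N' HAVING COUNT(*) > 0;'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('int', 'bigint');

EXEC sys.sp_executesql @sql;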

SQL 2005 data migration

I have the same database running on two different machines. The DBs make extensive use of identity columns, and the tables have clashed pretty horribly. I now want to merge these two together before sorting out the underlying issue, which I may do by:
A) Using GUIDs (unwieldy but works everywhere)
B) Assigning identity ranges - kind of naff, but it means you can still access records in order, knock up basic SQL and select records easily, and it identifies which machine originated the data.
My question is: what's the best way of re-keying (i.e. changing the primary keys) on one of the databases so the data no longer clashes? We're only looking at 6 tables in total, but lots of rows (~2M in 3 of the tables).
Update - is there any real SQL code out there that does this? I know about IDENTITY_INSERT etc. I've solved this issue in a number of inelegant ways before, and I was looking for the elegant solution, preferably with a nice T-SQL SP to do the donkey work - if that doesn't exist I'll code it up and put it on the wiki.
A simplistic way is to shift all keys on one of the databases by a fixed increment, say 10,000,000, and they will line up. In order to do this, you will have to bring the applications down so the database is quiet, and drop all FK references affected by the change, recreating them when finished. You will also have to reseed all affected identity columns to an appropriate value.
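Since identity columns can't be UPDATEd in place, the shift in practice means re-inserting rows at their new key values; a sketch for one parent/child pair (the table names, column lists, and reseed value are all illustrative):

SET IDENTITY_INSERT dbo.Parent ON;
INSERT INTO dbo.Parent (id, name)            -- copy rows out at shifted keys
SELECT id + 10000000, name FROM dbo.Parent;
SET IDENTITY_INSERT dbo.Parent OFF;

UPDATE dbo.Child SET parent_id = parent_id + 10000000;  -- re-point children
DELETE FROM dbo.Parent WHERE id < 10000000;             -- drop the old copies

DBCC CHECKIDENT ('dbo.Parent', RESEED, 20000000);       -- move the seed clear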
Some of the tables will be reference data, which will be more complicated to merge if it is not in sync. You could possibly have issues with conflicting codes meaning the same thing on different instances or the same code having different meanings. This may or may not be an issue with your application but if the instances have been run without having this coordinated between them you might want to check carefully for this.
Also, data like names and addresses are very likely to be out of sync if there wasn't a canonical source for these. You may need to get these out, run a matching query and get the business to tidy up any exceptions.
I would first add another column to the table and populate it with the new primary key.
Then I'd use update statements to populate the new foreign key fields in all related tables.
Then you can drop the old primary key and old foreign key fields.
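A rough sketch of that sequence (all names are placeholders):

ALTER TABLE dbo.Parent ADD new_id int NULL;
ALTER TABLE dbo.Child  ADD new_parent_id int NULL;
GO
UPDATE dbo.Parent SET new_id = id + 10000000;  -- or any non-clashing scheme

UPDATE c
SET c.new_parent_id = p.new_id
FROM dbo.Child AS c
JOIN dbo.Parent AS p ON p.id = c.parent_id;
GO
-- Then drop the old key/FK columns and constraints, and rename the new
-- columns into place with sp_rename.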

Preventing Duplicate Inserts Into SQL With PHP

I'm going to be running thousands of queries into SQL and I need to prevent duplication of the field 'domain'. I've never had to do this before, so any help would be appreciated.
You probably want to create a UNIQUE constraint on the field "domain" - this constraint will raise an error if you try to create two rows with the same domain in the database. For an explanation, see this W3Schools tutorial:
http://www.w3schools.com/sql/sql_unique.asp
If this doesn't solve your problem, please clarify which database you have chosen to use (MySQL?).
NOTE: This constraint is completely separate from your choice of PHP as a programming language; it is a SQL database definition thing. A huge advantage of expressing this constraint in SQL is that you can trust the database to preserve it even when people import/export data, your application is buggy, or another application shares the database.
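The DDL is a single statement in most engines (mydata and domain are placeholder names):

ALTER TABLE mydata ADD CONSTRAINT uq_domain UNIQUE (domain);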
If this is an absolute database integrity requirement (It's not likely to change, nor does existing data have this problem), I would enforce it at the database with a unique constraint.
As far as detecting it before or after the attempt in order to notify the user, there are a number of techniques which could be used.
Where is the data coming from? Is this something you only want to run once, a couple of times, or often? If the domain value already exists, do you just want to skip the insert, or do something else (e.g. increment a counter)?
Depending on your answers, there are many possible solutions:
1) Pre-sort your data, eliminate duplicates, then insert (assumes relatively static data and an empty table to begin with)
2) Use an associative array in PHP as a local domain-value cache (if the table already contains data, start by reading the existing content; not thread-safe, but works if only one copy runs at a time)
3) Make domain a UNIQUE column and write wrapper code to handle the returned errors
4) Make domain a UNIQUE or PRIMARY KEY column and use an ON DUPLICATE KEY clause:
INSERT INTO mydata ( domain, count ) VALUES
( 'firstdomain', 1 ),
( 'seconddomain', 1 ),
( 'thirddomain', 1 )
ON DUPLICATE KEY
UPDATE count = count+1
5) Insert all data into the table, then remove the duplicates
Note that batching inserts (i.e. using multiple value clauses per statement) can be significantly faster.
I'm not really sure I understood your question, but perhaps you are looking for SQL's UNIQUE constraint. If a query tries to insert a pre-existing value into a field, you (PHP) will be notified about the constraint breach.
There are a bunch of ways to approach this. You could set a unique constraint (like a primary key) on that column. This will cause the insert to fail if that domain has already been inserted. You could also insert all of the duplicate domains and just delete them later on. This works well if not that many of the domains are duplicated. There are a few questions posted already on finding duplicate rows.
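One way to do that later cleanup in MySQL (a sketch; mydata and domain as above, and id is an assumed auto-increment surrogate key):

DELETE d1 FROM mydata AS d1
JOIN mydata AS d2
  ON d2.domain = d1.domain
 AND d2.id < d1.id;
-- keeps only the lowest-id row for each domain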
This can be done with SQL, rather than with PHP.
I am assuming that you are using MySQL, but the same principle works with other databases.
Make the domain column the primary key (makes sense, as it has to be unique).
Then, rather than a plain INSERT, use REPLACE.
If the primary key you are trying to put into the table already exists, REPLACE overwrites the existing row rather than creating a new one.
So differing data gets overwritten, and an identical row effectively changes nothing.
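A minimal sketch of that, reusing the placeholder names from the earlier answer:

REPLACE INTO mydata (domain, count) VALUES ('example.com', 1);
-- If a row with this primary key exists, MySQL deletes it and inserts
-- the new one; use INSERT ... ON DUPLICATE KEY UPDATE (shown above)
-- when you want to adjust the existing row instead.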
