SSMA timestamp: what's it for and how is it used? - sql-server

I recently used the SQL Server Migration Assistant to import a database into SQL Server 2005. I noticed that a number of the imported tables have been amended with a new column called SSMA_timestamp.
Can anyone tell me what this is for and how it would be used?

The added SSMA_timestamp columns are not only used during migration. They actually help avoid errors when Access updates records in tables linked to SQL Server. So if you are still using an Access front end linked to the migrated SQL Server database, it would be best not to drop the SSMA_timestamp columns.
From the MSDN article Optimizing Microsoft Office Access Applications Linked to SQL Server:
Supporting Concurrency Checks
Probably the leading cause of updatability problems in Office Access–linked tables is that Office Access is unable to verify whether data on the server matches what was last retrieved by the dynaset being updated. If Office Access cannot perform this verification, it assumes that the server row has been modified or deleted by another user and it aborts the update.
There are several types of data that Office Access is unable to check reliably for matching values. These include large object types, such as text, ntext, image, and the varchar(max), nvarchar(max), and varbinary(max) types introduced in SQL Server 2005. In addition, floating-point numeric types, such as real and float, are subject to rounding issues that can make comparisons imprecise, resulting in cancelled updates when the values haven't really changed. Office Access also has trouble updating tables containing bit columns that do not have a default value and that contain null values.
A quick and easy way to remedy these problems is to add a timestamp column to the table on SQL Server. The data in a timestamp column is completely unrelated to the date or time. Instead, it is a binary value that is guaranteed to be unique across the database and to increase automatically every time a new value is assigned to any column in the table. The ANSI standard term for this type of column is rowversion. This term is supported in SQL Server.
Office Access automatically detects when a table contains this type of column and uses it in the WHERE clause of all UPDATE and DELETE statements affecting that table. This is more efficient than verifying that all the other columns still have the same values they had when the dynaset was last refreshed.
The SQL Server Migration Assistant for Office Access automatically adds a column named SSMA_TimeStamp to any tables containing data types that could affect updatability.
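If you need to add such a column to a table yourself (for example, one that SSMA missed), it is a one-line change; a minimal sketch, assuming a hypothetical dbo.Orders table:

-- Add a rowversion column; Access detects it and uses it for concurrency checks.
ALTER TABLE dbo.Orders ADD SSMA_TimeStamp rowversion;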

I think this is generated so that the Migration Assistant can detect changes to the data during the migration.
Unless you are continuing to use Access as a front end to this specific database you have migrated to SQL Server (in which case see Simon's answer), I don't think they will be used for anything after migration is complete, so it should be safe to drop these new columns once you are sure everything is done.
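If you do decide to drop them, it is a plain column drop; a hypothetical example:

-- Remove the migration helper column once you are sure nothing still needs it.
ALTER TABLE dbo.Orders DROP COLUMN SSMA_timestamp;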

If you want to prevent SSMA from adding these columns in the first place, you can set the corresponding project preference, for example in an SSMA console script:

<!-- Set project preference.
The preference path/name/value can be found in the preferences.prefs file stored in the SSMA project directory.
The preference path is the node name path from the root to the leaf node, separated by "/". -->
<set-project-preference preference-path="prefs/ssma-for-access/a2ss/conversion"
                        preference-name="timestamp-columns-opt"
                        preference-value="never" />

From the SSMA GUI you can also go to Tools --> Default Project Settings --> Conversion --> Tables --> Add timestamp columns and set it to Never.

Related

#DELETED when viewing a SQL Server table in Access

A new issue has cropped up this morning. I have databases that reside on SQL Server and use Access for the front end. One of the databases, which has been in use for at least 10 years, suddenly stopped working today, and I have found that the issue affects 2 (possibly more, I've not checked them all) tables.
When I open the table in Access, all I get is #DELETED in all the rows and columns. I have seen this behaviour before, and it is usually something to do with the data type, but that doesn't seem to be the case in this instance.
To troubleshoot the problem I created a view that retrieves all the columns from the table; when this view is linked and opened in Access I have the same issue. I have found that if I link to the view without selecting a unique record identifier, I can see the data without any problem. I could use this as a workaround, but clearly it is not ideal.
The SQL server version is 14.0.2037.2 and I am accessing it using SQL Server Native Client 11.0.
I have found the cause and a solution. The affected tables had nvarchar fields as their primary keys. SQL Server Native Client has been deprecated for some time now and has been replaced by the Microsoft OLE DB Driver for SQL Server; still using the old client was our mistake. The reason this problem has only reared its head now is an update to Microsoft 365 Access. I found the following, which has more details:
#DELETED when linked with ODBC
I had the same issue. This situation emerged this past weekend (5/29/2022).
This is a bug created by a Microsoft 365 Office update. My update occurred 5/29/2022. The best remedy is to roll back your most recent 365 update.
In my application, the #Deleted value appeared in all cells linked to any SQL Server table having an nvarchar field included in any unique index - it didn't have to be a primary key. In my case, eliminating the unique attribute on any index that included an nvarchar column made the problem go away.
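A minimal sketch of that fix, with hypothetical table and index names:

-- Drop the unique index that includes an nvarchar column...
DROP INDEX UQ_Customers_Code ON dbo.Customers;
-- ...and recreate it without the UNIQUE attribute.
CREATE INDEX IX_Customers_Code ON dbo.Customers (Code);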

Bypass SQL Server 2008 R2 Express 10 GB limit

The 10 GB limit has been reached, and various constraints force us to work around it until a set of fixes can be put in place. An appropriate license is already in place on another server, but unfortunately the migration cannot be done in a reasonable time. To address the most pressing need, we must find a way to get around the limit imposed by SQL Server Express. Shrinking, aliasing, file splitting, and index changes have all been attempted without success. Suggestions?
Since the 10 GB limit is per database, you can use the following trick to split the data among several databases. Warning: people with strong DB beliefs, please close your eyes now :-)
Move some tables to another database, choosing a set of tables that doesn't break foreign key constraints.
For each table create a view with the same name in the original database like this:
create view TableName as
select * from TheOtherDB..TableName
In this way you use the view as if it were the table, and you don't have to change a single query; SQL Server allows INSERT, UPDATE, and DELETE on this type of view as if it were a table, but the data is stored in the other database.
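The move itself can be sketched like this (all names hypothetical; CREATE VIEW must be the only statement in its batch, hence the GO separators):

-- Copy the table into the other database, drop the original,
-- then replace it with a view of the same name.
SELECT * INTO TheOtherDB.dbo.BigTable FROM MainDB.dbo.BigTable;
DROP TABLE MainDB.dbo.BigTable;
GO
USE MainDB;
GO
CREATE VIEW dbo.BigTable AS SELECT * FROM TheOtherDB.dbo.BigTable;
GO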
Of course after you migrate to the new server you should move the data back to one database.

Collation change on MS SQL Server 2012

Dear all, I am currently researching how to handle changing the collation on a database.
Somebody made the unusual decision to create an accent-sensitive database for global use... but I am on my way to handling this!
REASON for changing the collation: the database contains data collected from different countries, and as we all know, some cultures have their own letters.
Out of respect for the customers, our organization would like to have an accent-insensitive database. That will allow users to request data from the server without any limitations when using local characters.
As far as I have found out, one option may be to drop the constraints etc., change the collation, and then bring everything back. In this case I am unsure whether this would be enough, and how it would affect the already existing data (columns).
I have also found an article about collation change on SQL Server 2005 and 2008. However, it does not cover SQL Server 2012.
I am also taking the complexity of that example into consideration.
I realize this is not an easy task, but I am hoping to get some advice on the best and safest way to handle this.
Thank you for your concern and assistance.
UPDATE: let me describe the architecture we have. The complete system contains 4 databases and more than 1,000 tables in total, so my expectation is that not all of the possible approaches will work optimally.
I too had to deal with a similar issue, for a different reason: ancient databases with an old SQL collation, installed ages ago on a SQL Server 6.5 server that had been upgraded in place through every version from SQL Server 7 to SQL Server 2005, and now had to be upgraded to SQL Server 2012.
Why all these in-place upgrades? Because the collation in use was the server collation, and it was so old that it is not available during the install process of a recent version (2000+) of SQL Server...
I decided to drop all that old rubbish, so I had to find a way that allowed me to move to a new installation with a Windows collation.
I had to rule out a data migration (creating a new database and importing the data) because of the lack of documentation and the huge number of customizations, triggers, hidden rules, and so on.
The solution I used (the order matters):
Disable automatic statistics generation.
Script the creation of all foreign keys, then drop them.
Script unique and primary key indexes, then drop them.
Script all remaining indexes, then drop them.
Script custom statistics, then drop them.
Script CHECK and DEFAULT constraints, then drop them.
Now you can run the ALTER commands needed to change the collation of the columns and the collation of the database itself (see the sketch below).
When done, repeat the above in reverse order to rebuild all the needed objects.
Be aware that if the database is as old as mine, you may run into something funny, like an existing foreign key that references fields with different data types.
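A minimal sketch of that ALTER step, with hypothetical table, column, and database names (note that ALTER DATABASE ... COLLATE requires exclusive access to the database):

-- Change the collation of each affected character column...
ALTER TABLE dbo.Customers
    ALTER COLUMN CustomerName nvarchar(100) COLLATE Latin1_General_CI_AI NOT NULL;
-- ...then change the database default collation used for new objects.
ALTER DATABASE MyDb COLLATE Latin1_General_CI_AI;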
Changing the collation of all existing columns is a real pain. I suggest a side-by-side migration rather than altering each column individually. Create a new database with the desired collation, containing only empty tables. Copy the data from the old database to the new one using INSERT...SELECT (or the ETL tool of your choice), and then create constraints, indexes, and other database objects.
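A minimal sketch of the copy step, assuming hypothetical table names and an identity key column:

-- Preserve identity values while copying into the new, correctly collated table.
SET IDENTITY_INSERT NewDb.dbo.Customers ON;

INSERT INTO NewDb.dbo.Customers (CustomerId, CustomerName)
SELECT CustomerId, CustomerName
FROM OldDb.dbo.Customers;

SET IDENTITY_INSERT NewDb.dbo.Customers OFF;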
Consider upvoting the Make it easy to change collation on a database SQL Server feature request.
There are a number of complicated solutions on the internet for in-place collation changes, but the simplest (and safest) way we have found is to script out the database, alter the script to create a new database with the collation set at the start, and then import the data into the new database.
We achieve this using MS SQL Server 2012 Management Studio in the following way:
Script out all database objects with Tasks -> Generate Scripts -> Script entire Database and all Database objects.
Alter the script with the following 2 changes and then run it to create the new database:
a) Change the DB name to MY-NEW-DB
b) Under the CREATE DATABASE statement add: ALTER DATABASE [MY-NEW-DB] COLLATE Latin1_General_CS_AS
If desired, use a tool like RG SQL Compare to compare the old and new databases to verify that all indexes, constraints, types, etc. are the same and that only the collation on the relevant columns was changed.
Run Tasks -> Import Data, ensuring 'Enable Identity Insert' is checked. All data transferred correctly to the new case-sensitive database.
Run DBCC CHECKDB if you wish to check consistency.
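As an illustration of changes (a) and (b), the top of the edited script would look something like this (the file layout details of the real generated script are omitted from this sketch):

CREATE DATABASE [MY-NEW-DB];
GO
-- Set the collation before any tables are created, so every column inherits it.
ALTER DATABASE [MY-NEW-DB] COLLATE Latin1_General_CS_AS;
GO
-- The rest of the generated script (tables, constraints, etc.) follows.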

Message: This row was successfully committed to the database. However, a problem occurred

I have a table in SQL Server 2005 whose primary key is an identity column (increment 1), and I also have a default value set for one of the other columns.
When I open the table in SQL Server Management Studio and type a new record into the table, the inserted values are not displayed, and I get the following message on save:
However, if the table has either an identity column, or one or more columns with a default value specified, the inserted value(s) will be displayed in the table after a save. And can be edited.
I frequently create test data in SSMS this way, and this issue makes it cumbersome to do some things I would like to.
Is there any way around this?
Right-click on it and choose Execute SQL; it should then not display the error. It's just SQL Server's way of doing things, since it assigns the identity column value later. You should not add records that way in the first place.
You should not add records to a database that way! It can have unfortunate side effects (especially on large tables) as you have discovered.
Records for lookup tables should be added through rerunnable scripts. Those scripts should be in source control. This makes them easy to promote from dev to QA to staging to prod.
Test records should also be created in scripts (including scripts to remove the test records) so that you can run them in other environments, as well as being able to delete and recreate them if some process you are testing goes bad. These too should be in source control (as should all database changes, which also should not be made through the GUI).
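A minimal sketch of such a rerunnable script, for a hypothetical lookup table:

-- Safe to run repeatedly: inserts the row only if it is not already present.
IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusId = 1)
    INSERT INTO dbo.OrderStatus (StatusId, StatusName)
    VALUES (1, N'Open');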

Copy Database Data from Many DBs to One. Data Replication (sort of)

This involves data replication, kind of:
We have many sites with SQL Server Express installed; each site has an 'audit' database that has one table in 1st normal form (to make life simple :)
Now I need to get this table from each site and copy its contents (say, rows with a DateTime value > 1/1/200 00:00, but this will obviously change) into a big 'super table' on the full SQL Server, whose primary key is the Site Name (which needs injecting in) plus the current primary key from the SQL Express table.
e.g. Many SQL Express DBs with the following table columns
ID, Definition Name, Definition Type, DateTime, Success, NvarChar1, NvarChar2 etc etc etc
And the big super table needs to have:
SiteName, ID, Definition Name, Definition Type, DateTime, Success, NvarChar1, NvarChar2 etc etc etc
Where SiteName and ID together form the primary key.
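A hypothetical sketch of that super table, to make the composite key concrete:

-- Representative columns only; the real table has more NvarChar columns.
CREATE TABLE dbo.SuperAudit (
    SiteName       nvarchar(50)  NOT NULL,  -- injected during the copy
    ID             int           NOT NULL,  -- the site's original key
    DefinitionName nvarchar(100) NULL,
    DefinitionType nvarchar(50)  NULL,
    EventDateTime  datetime      NULL,
    Success        bit           NULL,
    CONSTRAINT PK_SuperAudit PRIMARY KEY (SiteName, ID)
);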
Is there a Microsoft (or non-MS, I suppose) app/tool that already manages copying all this data across, or do we need to write our own?
Many thanks.
You can use SSIS (which comes with SQL Server) to populate it; SSIS can be set up with variables that change the connection string for the various databases. I have a package that loops through a whole list and runs the same process using three different files from three different vendors. You could do something similar to loop through the different site databases: put the whole list of databases you want to copy the audit data from in a table and loop through it, changing the connection string each time.
However, why on earth would you want one mega audit table per site? If every table in the database populates the audit table as changes happen, then the audit table eventually becomes a huge problem for performance. Every insert, update and delete has to hit this table and then you are proposing to add an export on top of that. This seems to me to be a guaranteed structure for locking and deadlocks and all sorts of nastiness. Do yourself a favor and limit each audit table to the table it is auditing.
Things to consider:
Linked servers and sp_msforeachdb as part of a do-it-yourself solution (a sketch follows below).
SQL Server Replication (which I believe can pull data from SQL Server Express).
SQL Server Integration Services, which can pull data from SQL Server Express instances.
Personally, I would investigate Integration Services first.
Good luck.
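A minimal do-it-yourself sketch of the linked-server route, assuming a linked server named SITE01, the hypothetical dbo.SuperAudit table above, and cleaned-up column names:

-- Pull new audit rows from one site, injecting the site name as part of the key.
DECLARE @Since datetime;
SET @Since = '20090101';  -- hypothetical cutoff

INSERT INTO dbo.SuperAudit
    (SiteName, ID, DefinitionName, DefinitionType, EventDateTime, Success)
SELECT 'SITE01', a.ID, a.DefinitionName, a.DefinitionType, a.EventDateTime, a.Success
FROM [SITE01].AuditDb.dbo.Audit AS a
WHERE a.EventDateTime > @Since;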
You could do this with SymmetricDS. SymmetricDS is open source, web-enabled, database-independent data synchronization/replication software. It uses web and database technologies to replicate tables between relational databases in near real time. The software was designed to scale to a large number of databases, work across low-bandwidth connections, and withstand periods of network outage.
As of right now, however, you would need to implement a custom IDataLoaderFilter extension point (in Java) to add the extra column. The metadata would be available though because your SiteName would be the external_id.
