ArcGIS SQL Server ArcSDE: Versioning vs Identity Column

[I am asking here instead of GIS StackExchange because this may be more of a SQL Server issue.]
I have a SQL Server ArcSDE connection into which data is batch inserted via some scripts. Currently, whenever a new row is added, an 'OBJECTID' column, set to INT with the Identity property, increments by 1. So far so good. Except I need to enable "versioning" on the table.
So I follow this: http://resources.arcgis.com/en/help/main/10.1/index.html#//003n000000v3000000
but get errors because ArcGIS complains about the Identity column, per: http://support.esri.com/cn/knowledgebase/techarticles/detail/40329 ; and when I remove the Identity attribute from the column, the column value becomes NULL, which is not good.
So, in my scenario, how can I increase the value of OBJECTID by 1 as an auto-increment? I suppose I could just insert some GUID into the 'OBJECTID' field through the script? But if I follow the GUID route, I'm not sure whether I will still be able to add rows manually via ArcGIS Desktop on an occasional basis.
Thanks!
Update 1: Okay, so I changed the OBJECTID field to a 'uniqueidentifier' one with a default GUID value, and now I am able to enable "versioning" using ArcGIS Desktop. However, ArcGIS expects OBJECTID to be an INT data type, so no go.

In light of my "Update 1" in the question above, I managed to take care of this by inserting an INT value for OBJECTID during the batch insertions, per the following: How to insert an auto_increment key into SQL Server table
So per the above link, I ended up doing:
INSERT INTO bo.Table (primary_key, field1, field2) VALUES ((SELECT ISNULL(MAX(id) + 1, 0) FROM bo.Table), value1, value2)
EXCEPT in my case the column has no IDENTITY property at all in the database, so, unlike the above link, I didn't have to toggle IDENTITY_INSERT on/off during the batch insertions. It works anyway.
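As an aside, the MAX + 1 pattern can hand the same OBJECTID to two concurrent inserts. If the batch scripts ever run in parallel, a sketch like this (the table name is a placeholder, not from the original post) serializes access with locking hints:
BEGIN TRANSACTION;
-- UPDLOCK + HOLDLOCK keep another session from reading the same MAX
-- until this transaction commits.
DECLARE @next_id INT = (
    SELECT ISNULL(MAX(OBJECTID), 0) + 1
    FROM bo.MyTable WITH (UPDLOCK, HOLDLOCK)
);
-- value1/value2 stand in for the actual column values.
INSERT INTO bo.MyTable (OBJECTID, field1, field2)
VALUES (@next_id, value1, value2);
COMMIT TRANSACTION;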

Related

Enable identity insert is not working when importing data

I am trying to import many tables from an Access DB to MS SQL Server using the Import Wizard.
Some rows in the source tables have been deleted, so the IDs are no longer sequential, like this: 2, 3, 5, 8, 9, 12, ...
But when I import the data into my destination, the IDs start from 1 and increment by 1, so they don't match the source data.
I even checked "Enable Identity Insert", but it does not help.
The only workaround I have found is to change the ID columns in the destination tables from identity to plain integer one by one, then import, and then change them back to identity, which is very time consuming.
Is there a better way to do this?
If you want to insert an ID into the identity column, you need to use:
SET IDENTITY_INSERT table_name ON
https://msdn.microsoft.com/es-us/library/ms188059.aspx
Remember to set it OFF at the end of the script.
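Putting that together, a minimal sketch with hypothetical table and column names:
SET IDENTITY_INSERT dbo.Staff ON;

-- An explicit column list is required while IDENTITY_INSERT is ON.
INSERT INTO dbo.Staff (id, name)
VALUES (2, 'Alice'), (3, 'Bob'), (5, 'Carol');

SET IDENTITY_INSERT dbo.Staff OFF;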

SQL Azure raise 40197 error (level 20, state 4, code 9002)

I have a table in a SQL Azure DB (S1, 250 GB limit) with 47,000,000 records (3.5 GB total). I tried to add a new calculated column, but after 1 hour of script execution I get: "The service has encountered an error processing your request. Please try again. Error code 9002." After several tries, I get the same result.
Script for simple table:
create table dbo.works (
    work_id int not null identity(1,1) constraint PK_WORKS primary key,
    client_id int null constraint FK_user_works_clients2 REFERENCES dbo.clients(client_id),
    login_id int not null constraint FK_user_works_logins2 REFERENCES dbo.logins(login_id),
    start_time datetime not null,
    end_time datetime not null,
    caption varchar(1000) null)
Script for alter:
alter table user_works add delta_secs as datediff(second, start_time, end_time) PERSISTED
Error message:
9002: SQL Server (local) - error growing the transaction log file.
But in Azure I cannot manage this parameter.
How can I change the structure of populated tables?
Azure SQL Database has a 2 GB transaction size limit, which you are running into. For schema changes like yours, you can create a new table with the new schema and copy the data in batches into this new table.
That said, the limit has been removed in the latest service version, V12. You might want to consider upgrading to avoid having to implement a workaround.
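A batched copy along those lines might look like the sketch below (names are borrowed from the question's scripts, the column list is trimmed, and the batch size is arbitrary):
-- New table with the computed column in place from the start.
create table dbo.works_new (
    work_id int not null primary key,
    start_time datetime not null,
    end_time datetime not null,
    delta_secs as datediff(second, start_time, end_time) persisted);

-- Copy in batches so no single transaction approaches the log limit.
declare @rows int = 1;
while @rows > 0
begin
    insert into dbo.works_new (work_id, start_time, end_time)
    select top (50000) w.work_id, w.start_time, w.end_time
    from dbo.works w
    where not exists (select 1 from dbo.works_new n where n.work_id = w.work_id)
    order by w.work_id;
    set @rows = @@ROWCOUNT;
end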
Look at sys.database_files by connecting to the user database. If the log file's current size reaches its max size, then you have hit this. At that point you either have to kill the active transactions or, if that is not possible because of the amount of data you are modifying in a single transaction, move to a higher tier.
You can also get the same by doing:
DBCC SQLPERF(LOGSPACE);
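For the sys.database_files check, a query along these lines (run while connected to the user database; this sketch is mine, not part of the original answer):
-- size and max_size are in 8 KB pages; max_size = -1 means unlimited.
select name,
       size * 8 / 1024 as current_size_mb,
       max_size * 8 / 1024 as max_size_mb
from sys.database_files
where type_desc = 'LOG';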
A couple of ideas:
1) Try creating an empty column for delta_secs, then filling in the data separately. If this still results in transaction log errors, try updating part of the data at a time with a WHERE clause.
2) Don't add a column. Instead, add a view with delta_secs as a calculated field; since it is a derived value, this is probably a better approach anyway (see the sketch after the link below).
https://msdn.microsoft.com/en-us/library/ms187956.aspx
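A minimal sketch of that view, reusing the column names from the scripts above (the view name is mine):
-- Nothing is rewritten on disk: delta_secs is computed at query time.
create view dbo.works_with_delta
as
select work_id, client_id, login_id, start_time, end_time, caption,
       datediff(second, start_time, end_time) as delta_secs
from dbo.works;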

How to merge table from access to SQL Express?

I have a table named "Staff" in Access and also have this table (same name) in SQL Server 2008.
Both tables have thousands of records. I want to merge records from the Access table into the SQL table without affecting the existing records in SQL. Normally, I just export using the ODBC driver, and that works fine if the table doesn't already exist in SQL Server. Please advise. Thanks.
A simple append query from the local Access table to the linked SQL Server table should work just fine in this case.
So, just drop the first (from) table into the query builder. Then change the query type to append, and you are prompted for the append table name.
From that point on, just drop in the columns you want (do not drop in the PK column, as it need not be used nor transferred in this case).
You can also type the SQL directly into the query builder. Either way, you will wind up with something like:
INSERT INTO dbo_custsql
( ADMINID, Amount, Notes, Status )
SELECT ADMINID, Amount, Notes, Status
FROM custsql1;
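If some of the Access rows already exist on the SQL Server side, a variant of the same append query can skip them. This sketch assumes ADMINID identifies a record; substitute whatever your real key is:
INSERT INTO dbo_custsql
( ADMINID, Amount, Notes, Status )
SELECT s.ADMINID, s.Amount, s.Notes, s.Status
FROM custsql1 AS s
WHERE NOT EXISTS
  (SELECT * FROM dbo_custsql AS t WHERE t.ADMINID = s.ADMINID);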
This may help: http://www.red-gate.com/products/sql-development/sql-compare/
Or you could write a simple program to read from each data set and do the comparison, adding, updating, and deleting, etc.

Can't set auto-increment via SQL Server Express Management Studio?

I just tried inserting a value into a database table, and that worked. Now I insert again and I get an error for an identical primary key.
I can't find any option to alter the column to be auto-increment.
I'm updating the table via LINQ-to-SQL:
User u = new User(email.Text, HttpContext.Current.Request.UserHostAddress,
                  CalculateMD5Hash(password.Text));
db.Users.InsertOnSubmit(u);
db.SubmitChanges();
I didn't fill in the user_id, and it worked fine the first time; it became zero.
Trying to add a second user, it wants to make the ID 0 again.
I could query the database and ask for the highest ID, but that's going too far when auto-increment exists.
How can I turn this on? All I can find are scripts for table creation. I'd like to keep my existing table and simply edit it.
How is your LINQ-to-SQL model defined? Check the properties of the user_id column: what are they set to?
In your LINQ-to-SQL model, be sure to have Auto Generated Value set to true and Auto-Sync set to OnInsert, and the server data type should also match your settings (INT IDENTITY).
In SQL Server Management Studio, you need to define the user_id column to be of type INT IDENTITY; in the visual table designer, set the Identity Specification property on the column.
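If you would rather script it than use the designer, note that SQL Server cannot add the IDENTITY property to an existing column; the usual workaround is to swap in a new column. A sketch with assumed table and constraint names (existing rows get renumbered, so this only fits tables where the old IDs don't matter):
-- Drop the PK first if user_id is the primary key (constraint name assumed).
ALTER TABLE dbo.Users DROP CONSTRAINT PK_Users;

-- Add an identity column, drop the old column, rename, restore the PK.
ALTER TABLE dbo.Users ADD user_id_new INT IDENTITY(1,1) NOT NULL;
ALTER TABLE dbo.Users DROP COLUMN user_id;
EXEC sp_rename 'dbo.Users.user_id_new', 'user_id', 'COLUMN';
ALTER TABLE dbo.Users ADD CONSTRAINT PK_Users PRIMARY KEY (user_id);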
It is zero because you have a plain integer for the primary key column type. To use auto-increment, set the table's identity column to the ID (selected in the table properties).
It would probably be easier to edit the database using Visual Studio if you have a version that supports it; otherwise, if you have to edit it in Management Studio, see this article:
http://blogs.msdn.com/b/sqlexpress/archive/2006/11/22/connecting-to-sql-express-user-instances-in-management-studio.aspx
Or you can increment the user_id manually and pass it to the insert function if you cannot alter the column definition.

SQL Server: Copying table contents from one database to another

I want to update a static table on my local development database with current values from our server (accessed on a different network/domain via VPN). Using the Data Import/Export Wizard would be my method of choice; however, I typically run into one of two issues:
I get primary key violation errors and the whole thing quits. This is because it's trying to insert rows that I already have.
If I set the "delete from target" option in the wizard, I get foreign key violation errors because there are rows in other tables that are referencing the values.
What I want is the correct set of options so that the Import/Export Wizard will update rows that exist and insert rows that do not (based on the primary key, or by asking me which columns to use as the key).
How can I make this work? This is on SQL Server 2005 and 2008 (I'm sure it used to work okay on the SQL Server 2000 DTS wizard, too).
I'm not sure you can do this in Management Studio. I have had some good experiences with Red Gate SQL Data Compare in synchronising databases, but you do have to pay for it.
The SQL Server Database Publishing Wizard can export a set of sql insert scripts for the table that you are interested in. Just tell it to export just data and not schema. It'll also create the necessary drop statements.
One option is to download the data to a new table, then use commands similar to the following to update the target:
update t set
    col1 = d.col1,
    col2 = d.col2
from downloaded d
inner join target t on d.pk = t.pk

insert into target (col1, col2, ...)
select d.col1, d.col2, ...
from downloaded d
where d.pk not in (select pk from target)
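On SQL Server 2008 you could also fold the update and the insert into a single MERGE (not available on 2005); a sketch with the same placeholder names:
merge target as t
using downloaded as d on t.pk = d.pk
when matched then
    update set col1 = d.col1, col2 = d.col2
when not matched then
    insert (pk, col1, col2) values (d.pk, d.col1, d.col2);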
If you disable the FK constraints during the second option and re-enable them after it finishes, it will work.
But if you are using an identity to generate PK values that are involved in the FKs, it will cause a problem, so this works only if the PK values remain the same.
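Disabling and re-enabling the constraints can be scripted; a sketch against a hypothetical referencing table (WITH CHECK makes SQL Server revalidate the rows so the constraint stays trusted):
-- Disable every FK constraint on the referencing table.
ALTER TABLE dbo.ReferencingTable NOCHECK CONSTRAINT ALL;

-- ... run the "delete from target" import here ...

-- Re-enable and revalidate.
ALTER TABLE dbo.ReferencingTable WITH CHECK CHECK CONSTRAINT ALL;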
