[Microsoft][ODBC SQL Server Driver][SQL Server]Violation of PRIMARY KEY constraint - sql-server

I have an inventory database to track some equipment which I sometimes loan out. I have a Device table, and also a DeviceHistory table. There are two forms which I use to update the Device record, and updates are also recorded in the DeviceHistory table. Both forms call the same update function. I am temporarily writing out the sql to try to locate the differences. When I use formA it results in:
insert into QADeviceHistory (DeviceID, Timestamp, StatusID, AuthorID, AssignedToID, History)
values ( 264, '11/15/2018 9:31:10 AM', 'AVAIL', 'rongray', '', '');
and everything works just fine. However, when I use formB it results in:
insert into QADeviceHistory (DeviceID, Timestamp, StatusID, AuthorID, AssignedToID, History)
values ( 264, '11/15/2018 9:31:45 AM', 'AVAIL', 'rongray', '', '');
Microsoft OLE DB Provider for ODBC Drivers error '80040e14'
[Microsoft][ODBC SQL Server Driver][SQL Server]Violation of PRIMARY
KEY constraint 'QADeviceHistory_pk'. Cannot insert duplicate key in
object 'dbo.QADeviceHistory'. /py2/DeviceRecord.asp, line 1393
The primary key for DeviceHistory is DeviceID and Timestamp, and the values I am attempting to enter really are unique. Oddly enough, the DeviceHistory record DOES get written to the table, so I really don't understand why I am getting the error when using formB but not when using formA. I am tempted to just add an On Error Resume Next and ignore it, but I would at least like to understand what's happening.
(Also, this is not new code... it's been around for a few years. The only recent change is that I had to migrate the database from a Win 2008 server to a Win 2016 server, and both servers are using MS SQL Server 2008.)
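For reference, a key matching that description would look something like this (a sketch based on the error message and the description above, not the actual DDL from the database):

alter table dbo.QADeviceHistory
add constraint QADeviceHistory_pk primary key (DeviceID, Timestamp);

If formB happened to fire the insert twice, the second attempt would violate a key defined this way, which would be consistent with the row being written and the error still appearing.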

Related

Syntax Error Creating Identity/Primary Key Column on Azure through SSMS

I came across a syntax error message when I tried creating a table in sql-azure through a local SSMS connection. The error does not occur if I run the same query against a local DB instead of my Azure connection.
Create table Portfolio_Company_Financials
(
PCF_ID int not null identity(1,1) PRIMARY KEY,
CompanyID int,
ReportingDate date,
Revenue float
)
Above throws the error:
Parse error at line: 3, column: 21: Incorrect syntax near 'identity'
It will execute when I comment out identity(1,1). It has the same issue when using only PRIMARY KEY:
...PCF_ID int not null PRIMARY KEY,...
Additionally, it looks like I cannot manually change column properties through the SSMS Object Explorer; I can only refresh/delete when right-clicking the column.
It looks like an SSMS/permissions/Azure issue. Can anyone help me here?
This error occurs when you try to create the table in Microsoft Azure SQL Data Warehouse. Azure SQL Data Warehouse does not yet support primary keys and the identity property. You can confirm your version with the following sql:
select @@version
https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-overview
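As a workaround sketch, assuming the target really is Azure SQL Data Warehouse, the table can be created without the unsupported options, with uniqueness enforced by the loading process instead:

-- same table with the PRIMARY KEY and identity(1,1) options removed,
-- since Azure SQL Data Warehouse does not support them
Create table Portfolio_Company_Financials
(
PCF_ID int not null,
CompanyID int,
ReportingDate date,
Revenue float
)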

How do I convert an Oracle TIMESTAMP data type to SQL Server DATETIME2 data type while connected via a linked server?

I have tried some examples, but so far none have worked.
I have a linked server (SQL Server 2014) to an Oracle 12c database.
The table contains a TIMESTAMP column with data like this:
22-MAR-15 04.18.24.144789000 PM
When attempting to query this table in SQL Server 2014 via the linked server, I get the following error using this code:
SELECT CAST(OracleTimeStampColumn AS DATETIME2(7)) FROM linkServerTable
Error:
Msg 7354, Level 16, State 1, Line 8
The OLE DB provider "OraOLEDB.Oracle" for linked server "MyLinkServer" supplied invalid metadata for column "MyDateColumn". The data type is not supported.
While the error is self explanatory, I am not certain how to resolve this.
I need to convert the timestamp to datetime2. Is this possible?
You can work around this problem by using OPENQUERY. For me, connecting to Oracle 12 from SQL 2008 over a linked server, this query fails:
SELECT TOP 10 TimestampField
FROM ORACLE..Schema.TableName
...with this error:
The OLE DB provider "OraOLEDB.Oracle" for linked server "ORACLE" supplied invalid metadata for column "TimestampField". The data type is not supported.
This occurs even if I do not include the offending column (which is of type TIMESTAMP(6)). Explicitly casting it to DATETIME does not help either.
However, this works:
SELECT * FROM OPENQUERY(ORACLE, 'SELECT "TimestampField" FROM SchemaName.TableName WHERE ROWNUM <= 10')
...and the data returned flows nicely into a DATETIME2() field.
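As a usage note, the OPENQUERY result can be landed straight into a DATETIME2 column; a minimal sketch, where the local staging table and its name are assumptions:

-- hypothetical staging table for the Oracle timestamps
CREATE TABLE dbo.OracleTimestamps (TimestampField DATETIME2(7));

INSERT INTO dbo.OracleTimestamps (TimestampField)
SELECT TimestampField
FROM OPENQUERY(ORACLE, 'SELECT "TimestampField" FROM SchemaName.TableName WHERE ROWNUM <= 10');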
One way to solve the problem is to create a view on the Oracle server that converts OracleTimeStampColumn into something compatible with SQL Server's datetime2 datatype. In the Oracle view you can format the timestamp in 24-hour format and return the field as a varchar. You can then convert that varchar2 column to datetime2 when selecting it in SQL Server.
In Oracle Server
Create or Replace View VW_YourTableName As
select to_char(OracleTimeStampColumn , 'DD/MM/YYYY HH24:MI:SS.FF') OracleTimeStampColumn from YourTableName
In SQL Server
SELECT CAST(OracleTimeStampColumn AS DATETIME2(7)) FROM linkServerView

"The column cannot be modified because it is an identity, rowversion or a system column" - but it isn't

I am getting this error:
The column cannot be modified because it is an identity, rowversion or
a system column. [Column name = BatchClosed]
But [BatchClosed] is a nullable bit column and identity is false.
I am using Sql Server Compact Edition and the table is used in merge replication.
There are system columns ( _sysIG, _sysCG, _sysCD, _sysP1, _sysMC, _sysMCS, _sysSR) and a rowguid for the purpose of replication in the table.
The table is not marked as download-only in the publication.
The table is filtered though, and the BatchClosed field is used as a part of that filter:
WHERE surveyorid = convert(int, HOST_NAME()) AND BatchClosed = 0
When I test it in Management Studio connected to the Sql Server CE database with this sql I get the same error
UPDATE tblBatch SET BatchClosed = 0 WHERE BatchClosed = 1 AND SurveyID = 160;
Interestingly, this sql would not actually do an update because there are no records with BatchClosed = 1. (I assume that's just something to do with the way Sql Server CE works)
NB the test sql will work in Sql Server 2008 R2 but not on the Sql Server CE version after synchronization
EDIT
If I try to update any column in that table I get the same error message - as if all columns are system columns, not just the one in the filter
EDIT 2
I checked my installation and noted that the server tools had an older installation date while the x64 version was at SP1. So I un-installed the x64 components, then downloaded and installed the server tools so that both were at the same level.
I immediately lost my web synchronization. It took me a painful day of working through various dead ends before I found out how to get that back. (Solution here: Configuring Web Synchronization for Merge Replication to Sql Server CE)
Result? Still get the same error. :-(
I can both delete and insert in the table in question, and also update like this:
-- Script Date: 05-07-2014 09:26 - ErikEJ.SqlCeScripting version 3.5.2.39
UPDATE [tblBatch]
SET [SamplePercentage] = 0
WHERE BatchId = 2;
GO
I think you cannot update any of the other columns, as they are either system-controlled (PK or rowguid) or participate in join filters in the publication. To do updates anyway, you can do a DELETE followed by an INSERT.
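A sketch of that workaround for the update that failed above, assuming tblBatch's relevant columns are the ones mentioned in the question (BatchId, SurveyID, BatchClosed, SamplePercentage; any other columns would need to be carried over too):

-- re-create the row instead of updating it in place
DELETE FROM tblBatch WHERE BatchId = 2;
INSERT INTO tblBatch (BatchId, SurveyID, BatchClosed, SamplePercentage)
VALUES (2, 160, 0, 0);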

Access ODBC linked table to SQL Server allows inserts but not updates

Based on Web readings, I built a new ODBC connection, carefully looking for subtle configuration parameters that might affect updates, but found none. Then I tested the new link.
To Re-Test my issue:
1) I created the following table on SQL Server 2005:
[TestTbl]
Column1: TestKey, Type: integer
Column2: TestName, Type: varchar(5)
Populated as follows
Key Name
=== ====
1 Apple
2 Bear
3 Cat
2) Then in Access 2007, created a link to the SQL Server table TestTbl using my latest ODBC connection.
3) Next, I successfully inserted the following new records into the SQL Server table using the link, executing my inserts from Access 2007:
Key Name
=== ====
4 Dog
5 Elephant
4) Finally I tried to execute the following simple update query:
UPDATE dbo_TestTbl SET dbo_TestTbl.TestName = "CatNip"
WHERE (((dbo_TestTbl.TestKey)=3));
I got the error message "Operation must be an updateable query"
5) Out of frustration, I inserted another record:
Key Name
=== ====
6 Nonsense
Then I posted this question asking for help.
Can anyone please explain why I can insert new records to the linked table but I cannot update existing records?
The problem is that either no primary key is defined on the table, or the primary key was not identified when you linked the table in Access. Re-add the linked table (delete it and link it again) and select a primary key field, in this case TestKey.
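If the primary key is missing on the server side, adding one before re-linking should fix it; a minimal sketch (the constraint name is an assumption):

ALTER TABLE dbo.TestTbl
ADD CONSTRAINT PK_TestTbl PRIMARY KEY (TestKey);

After that, delete the linked table in Access and link it again so Access picks up the key.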

SQL Server: Copying table contents from one database to another

I want to update a static table on my local development database with current values from our server (accessed on a different network/domain via VPN). Using the Data Import/Export wizard would be my method of choice; however, I typically run into one of two issues:
I get primary key violation errors and the whole thing quits. This is because it's trying to insert rows that I already have.
If I set the "delete from target" option in the wizard, I get foreign key violation errors because there are rows in other tables that are referencing the values.
What I want is the correct set of options that means the Import/Export wizard will update rows that exist and insert rows that do not (based on primary key or by asking me which columns to use as the key).
How can I make this work? This is on SQL Server 2005 and 2008 (I'm sure it used to work okay on the SQL Server 2000 DTS wizard, too).
I'm not sure you can do this in Management Studio. I have had some good experiences with RedGate SQL Data Compare for synchronising databases, but you do have to pay for it.
The SQL Server Database Publishing Wizard can export a set of sql insert scripts for the table that you are interested in. Just tell it to export just data and not schema. It'll also create the necessary drop statements.
One option is to download the data to a new table, then use commands similar to the following to update the target:
update t set
col1 = d.col1,
col2 = d.col2
from target t
inner join downloaded d on d.pk = t.pk;

insert into target (col1, col2, ...)
select d.col1, d.col2, ...
from downloaded d
where d.pk not in (select pk from target);
If you disable the FK constraints during the second option and re-enable them after the import finishes, it will work.
But if you are using identity columns to generate PK values that are involved in the FKs, it will cause a problem, so this only works if the PK values remain the same.
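A sketch of that approach in T-SQL, with a hypothetical referencing table standing in for whichever tables hold the FKs:

-- disable FK checking on the referencing table before the delete/import
ALTER TABLE dbo.ReferencingTable NOCHECK CONSTRAINT ALL;

-- ... run the "delete from target" option and the import here ...

-- re-enable and re-validate the constraints afterwards
ALTER TABLE dbo.ReferencingTable WITH CHECK CHECK CONSTRAINT ALL;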
