Updateable view of local table and linked server table - data has changed - sql-server

I need to combine two tables, one on my local SQL Server and one on a linked server, in an updatable view on the local server, which I link to an MS Access front end.
Accessing this view in Access works, and updates to existing rows in the local table work too. But I cannot add new rows to the local table, because the foreign key [ProjektNr] is not set automatically. Unfortunately it is not possible to add a foreign key constraint between a local and a linked server, so I need an alternative. I have already read about replicating the foreign table into a local table via triggers, but that is not what I want. The two tables have a 1:1 relation. If I want to attach [Notizen] to a known project, I need a new row in [tblProjekt] with a matching [ProjektNr], but this row is not generated by updating the view. I get:
data has changed since the results pane was last retrieved
even directly in SSMS.
My local table, where I can attach further project information to existing projects (SQL Server 2014):
CREATE TABLE [dbo].[tblProjekt](
[ID] [int] IDENTITY(1,1) NOT NULL,
[ProjektNr] [int] NOT NULL,
[Notizen] [nvarchar](max) NULL,
[TimeStamp] [timestamp] NOT NULL,
CONSTRAINT [PK_tblProjekt] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
My linked server table, which I don't want to touch (SQL Server 2008 R2):
CREATE TABLE [dbo].[Projekt](
[MandantNr] [smallint] NOT NULL,
[ProjektNr] [int] NOT NULL,
[KundenNr] [int] NULL,
[ProjektName] [nvarchar](150) NULL,
[Abgeschlossen] [bit] NULL,
CONSTRAINT [PK_Projekt] PRIMARY KEY NONCLUSTERED
(
[MandantNr] ASC,
[ProjektNr] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Projekt] WITH NOCHECK ADD CONSTRAINT [FK_Projekt_Mandanten] FOREIGN KEY([MandantNr])
REFERENCES [dbo].[Mandanten] ([MandantNr])
GO
ALTER TABLE [dbo].[Projekt] CHECK CONSTRAINT [FK_Projekt_Mandanten]
GO
My local combining view:
SELECT Projekt_1.ProjektNr, Projekt_1.KundenNr, Projekt_1.ProjektName,
       CASE WHEN tblProjekt.ProjektNr IS NULL THEN '0' ELSE '-1' END AS Übernommen,
       dbo.tblProjekt.ID, dbo.tblProjekt.ProjektNr AS GProjektNr, dbo.tblProjekt.Notizen,
       dbo.tblProjekt.TimeStamp, Projekt_1.Abgeschlossen, Projekt_1.MandantNr
FROM LINKEDSERVER.Catalog.dbo.Projekt AS Projekt_1
LEFT OUTER JOIN dbo.tblProjekt ON Projekt_1.ProjektNr = dbo.tblProjekt.ProjektNr
WHERE (Projekt_1.Abgeschlossen = 0) AND (Projekt_1.MandantNr = 1)
Problem solved:
Sadly, SQL Server does not seem to be able to handle this simple task.
I linked the two tables in Access, wrote the same query there, and it worked instantly.
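For anyone who would rather keep the logic on the SQL Server side: a view that joins a linked server table can often be made insertable with an INSTEAD OF INSERT trigger that redirects the new row into the local table only. This is a sketch against the table and view definitions above; the view name vwProjekt is an assumption, since the original view name is not given.

```sql
-- Assumption: the SELECT above is saved as a view named vwProjekt
-- (the actual view name is not given in the question).
CREATE TRIGGER trg_vwProjekt_Insert ON dbo.vwProjekt
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Only the local table is writable here: take ProjektNr from the
    -- inserted view row and create the matching 1:1 row in tblProjekt.
    INSERT INTO dbo.tblProjekt (ProjektNr, Notizen)
    SELECT i.ProjektNr, i.Notizen
    FROM inserted AS i;
END
```

With such a trigger in place, the "data has changed" error no longer applies to inserts, because SQL Server never tries to push the insert through the join itself.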

Related

How can I add values of a column but only where the artist is the same, in SQL Server?

I have two tables. The first table:
CREATE TABLE [dbo].[songs]
(
[ID_Song] [INT] IDENTITY(1,1) NOT NULL,
[SongTitle] [NVARCHAR](100) NOT NULL,
[ListenedCount] [INT] NOT NULL,
[Artist] [INT] NOT NULL,
CONSTRAINT [PK_songs]
PRIMARY KEY CLUSTERED ([ID_Song] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[songs] WITH CHECK
ADD CONSTRAINT [FK_songs_artists]
FOREIGN KEY([Artist]) REFERENCES [dbo].[artists] ([ID_Artist])
GO
And second table:
CREATE TABLE [dbo].[artists]
(
[ID_Artist] [INT] IDENTITY(1,1) NOT NULL,
[Name] [NVARCHAR](100) NOT NULL,
CONSTRAINT [PK_artists]
PRIMARY KEY CLUSTERED ([ID_Artist] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
As you can see, the column Artist in table songs references column ID_Artist of table artists.
I want to get all artists by summing up the ListenedCount of all their songs, keeping only those whose sum is greater than a given value.
I have trouble writing the query.
There are many ways to achieve this.
One is to compute the sum in a subquery and then use that sum as a filter in the outer query:
select art.[Name], gba.[ListenedSum]
from [dbo].[artists] art
join
(
select sg.[Artist], sum(sg.[ListenedCount]) as [ListenedSum]
from [dbo].[songs] sg
group by sg.[Artist]
) as gba on gba.[Artist] = art.[ID_Artist]
where gba.[ListenedSum] > 1000000
A more direct way is to use HAVING:
select art.[Name], sum(sg.[ListenedCount]) as [ListenedSum]
from [dbo].[artists] art
join [dbo].[songs] sg on sg.[Artist] = art.[ID_Artist]
group by art.[Name]
having sum(sg.[ListenedCount]) > 1000000
It's interesting to note that the engine may end up executing these two queries in different ways (not guaranteed), so they can perform differently.
There's another interesting way using a CTE, but I think it's a bit more complicated.
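For completeness, the CTE variant mentioned above could look like this; it is logically equivalent to the subquery form, just written top-down:

```sql
-- Same aggregation as the subquery version, expressed as a CTE.
WITH GroupedByArtist AS
(
    SELECT sg.[Artist], SUM(sg.[ListenedCount]) AS [ListenedSum]
    FROM [dbo].[songs] sg
    GROUP BY sg.[Artist]
)
SELECT art.[Name], gba.[ListenedSum]
FROM [dbo].[artists] art
JOIN GroupedByArtist gba ON gba.[Artist] = art.[ID_Artist]
WHERE gba.[ListenedSum] > 1000000;
```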

AspNetUserLogins table and maximum size of index keys in SQL Server

The identity model schema in VS2017/aspnetcore defines a table called AspNetUserLogins to store external logins (CREATE statement below). It defines the primary key as a composite of [LoginProvider] [nvarchar](450) and [ProviderKey] [nvarchar](450). The SQL Server limit for the maximum size of an index key is specified at 900 bytes here. A note on that page specifically says:
"If a table column is a Unicode data type such as nchar or nvarchar,
the column length displayed is the storage length of the column. This
is two times the number of characters specified in the CREATE TABLE
statement. In the previous example, City is defined as an nvarchar(30)
data type; therefore, the storage length of the column is 60."
So is this key not twice the allowed size?
SQL Server Management Studio seems to think so:
Warning! The maximum key length for a clustered index is 900 bytes.
The index 'PK_AspNetUserLogins' has maximum length of 1800 bytes. For
some combination of large values, the insert/update operation will
fail.
CREATE TABLE [dbo].[AspNetUserLogins](
[LoginProvider] [nvarchar](450) NOT NULL,
[ProviderKey] [nvarchar](450) NOT NULL,
[ProviderDisplayName] [nvarchar](max) NULL,
[UserId] [nvarchar](450) NOT NULL,
CONSTRAINT [PK_AspNetUserLogins] PRIMARY KEY CLUSTERED
(
[LoginProvider] ASC,
[ProviderKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
It looks like they know about it: issue1451.
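If the warning matters in practice, one possible workaround (assuming you control the schema and no existing values exceed 225 characters) is to shrink the two key columns so the composite key fits within the limit, which is effectively what the Azure script below does: 2 columns × 225 characters × 2 bytes = 900 bytes.

```sql
-- Sketch only: drop and recreate the PK with shorter key columns.
-- Assumption: no stored LoginProvider/ProviderKey value is longer
-- than 225 characters, or the ALTER COLUMN statements will fail.
ALTER TABLE [dbo].[AspNetUserLogins] DROP CONSTRAINT [PK_AspNetUserLogins];
ALTER TABLE [dbo].[AspNetUserLogins] ALTER COLUMN [LoginProvider] [nvarchar](225) NOT NULL;
ALTER TABLE [dbo].[AspNetUserLogins] ALTER COLUMN [ProviderKey] [nvarchar](225) NOT NULL;
ALTER TABLE [dbo].[AspNetUserLogins] ADD CONSTRAINT [PK_AspNetUserLogins]
    PRIMARY KEY CLUSTERED ([LoginProvider] ASC, [ProviderKey] ASC);
```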
It looks as though this will cause subsequent issues. I originally created my database on my desktop before deploying it to Azure, and there is a significant difference between the two databases. In SSMS, using "Script Table as > CREATE table", the table designs are:
Azure database:
CREATE TABLE [dbo].[AspNetUserLogins](
[LoginProvider] [nvarchar](225) NOT NULL,
[ProviderKey] [nvarchar](225) NOT NULL,
[ProviderDisplayName] [nvarchar](max) NULL,
[UserId] [nvarchar](450) NOT NULL,
CONSTRAINT [PK_AspNetUserLogins] PRIMARY KEY CLUSTERED
(
[LoginProvider] ASC,
[ProviderKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
Desktop database:
CREATE TABLE [dbo].[AspNetUserLogins](
[LoginProvider] [nvarchar](450) NOT NULL,
[ProviderKey] [nvarchar](450) NOT NULL,
[ProviderDisplayName] [nvarchar](max) NULL,
[UserId] [nvarchar](450) NOT NULL,
CONSTRAINT [PK_AspNetUserLogins] PRIMARY KEY CLUSTERED
(
[LoginProvider] ASC,
[ProviderKey] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
Note the [PRIMARY] filegroup references; I cannot get these into Azure. This results in the following error from an MVC .NET Core 2 website using Microsoft.AspNetCore.Identity:
MVC Net Core 2.0 error resulting from the inability to add primary clustered keys

Add data from one Azure DB table to another Azure DB table

I want to add data from one Azure DB table to another Azure DB table on the same server, with the same table format. I want to run this task every 5 minutes. How can I do it? Any ideas? I found that elastic database jobs could be used, but I want to do it a different way.
CREATE TABLE [dbo].[AuditLog](
[AuditLogId] [int] IDENTITY(1,1) NOT NULL,
[TableName] [nvarchar](max) NULL,
[UserId] [int] NULL,
[EmployeeName] [nvarchar](max) NULL,
CONSTRAINT [PK_Application.AuditLog] PRIMARY KEY CLUSTERED ([AuditLogId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY]
TEXTIMAGE_ON [PRIMARY]
GO
INSERT INTO [DBTwo].[dbo].[AuditLog] (TableName, UserId, EmployeeName, Actions)
SELECT TableName, UserId, EmployeeName, Actions
FROM [CemexTenant].[Application].[AuditLog]
DELETE FROM [DBOne].[dbo].[AuditLog]
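One detail worth noting about the snippet above: running the INSERT and the DELETE as separate, unrelated statements can lose rows that arrive in the source table between the two. A safer sketch, assuming both tables are reachable from one connection and that AuditLogId is monotonically increasing, pins down the set of rows to move before touching anything:

```sql
-- Sketch: copy, then delete only what was copied, so rows that
-- arrive mid-run are left in place for the next 5-minute cycle.
-- Database/table names follow the DELETE statement above.
BEGIN TRANSACTION;

DECLARE @MaxId int = (SELECT MAX(AuditLogId) FROM [DBOne].[dbo].[AuditLog]);

INSERT INTO [DBTwo].[dbo].[AuditLog] (TableName, UserId, EmployeeName)
SELECT TableName, UserId, EmployeeName
FROM [DBOne].[dbo].[AuditLog]
WHERE AuditLogId <= @MaxId;

DELETE FROM [DBOne].[dbo].[AuditLog]
WHERE AuditLogId <= @MaxId;

COMMIT;
```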

When provisioning an on-premises database with Sync Framework I get 'Cannot update identity column i_OBJ_Identity'

I'm trying to sync an on-premises SQL Server 2008 R2 database to an Azure SQL Database. I've divided the sync into scopes; provisioning and syncing work for most scopes, but one gives me "Cannot update identity column 'i_OBJ_Identity'".
The schema for the table containing this column is
CREATE TABLE [dbo].[OBJ_ObjectContentStaging](
[i_OBJ_PublishedID] [int] NOT NULL,
[i_OBJ_TypeID] [int] NOT NULL,
[i_OBJ_ChannelID] [int] NOT NULL,
[ntx_OBJ_Content] [ntext] NULL,
[nvc_OBJ_PreviewContent] [nvarchar](512) NULL,
[i_OBJ_Identity] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
[ts_OBJ_TimeStamp] [timestamp] NULL,
CONSTRAINT [PK_OBJ_ObjectContentStaging] PRIMARY KEY CLUSTERED
(
[i_OBJ_PublishedID] ASC,
[i_OBJ_TypeID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
So in this table the identity column is not the primary key.
What can I do to remedy this? I cannot change the schema of the source database, and selecting the columns to sync on a per-table basis would be a huge undertaking.
Please advise,
Mathias

Updating a table after adding Index

I am designing a database using SQL Server Express.
I have a table with three columns. The table looks as below.
CREATE TABLE [dbo].[dummy](
[id] [int] IDENTITY(1,1) NOT NULL,
[someLongString] [text] NOT NULL,
[someLongText_Hash] [binary](20) NOT NULL,
CONSTRAINT [PK_dummy] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
I already have some data in this table. Whenever I want to add a new row, I first compute a hash of someLongString and query the table to see if a row with this hash already exists. As the table grows, this query takes longer, so I plan to index the table by the someLongText_Hash column.
Can someone please suggest how to do this in SQL Server Management Studio? Also, after adding this index, how do I index the existing rows in the table?
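In SSMS you can do this from Object Explorer (expand the table, right-click Indexes > New Index > Non-Clustered Index), which generates T-SQL along these lines. Note that CREATE INDEX automatically covers all existing rows, so no separate step is needed for them:

```sql
-- A nonclustered index on the hash column; the existing rows are
-- indexed as part of CREATE INDEX, nothing extra is required.
CREATE NONCLUSTERED INDEX IX_dummy_someLongText_Hash
    ON [dbo].[dummy] ([someLongText_Hash]);
```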
Why can't you just set the 'someLongString' field to be unique? That way you don't need to keep a hash and an extra primary key?
You could try using a CHECKSUM.
CREATE TABLE [dbo].[dummy](
[id] [int] IDENTITY(1,1) NOT NULL,
[someLongString] [text] NOT NULL,
[someLongText_CheckSum] [int] NOT NULL,
CONSTRAINT [UC_someLongText_CheckSum] UNIQUE (someLongText_CheckSum),
CONSTRAINT [PK_dummy] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
See here for further explanation