Access Linked Table from SQL Shows #Deleted - sql-server

I created this table:
CREATE TABLE [dbo].[dbo_Country]
(
[Country] [nvarchar](100) NOT NULL,
[ISO3166Code] [smallint] NULL,
[CountryEn] [nvarchar](255) NULL,
[Abriviation] [nvarchar](255) NULL,
CONSTRAINT [dbo_Country$PrimaryKey]
PRIMARY KEY CLUSTERED ([Country] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Then I linked it to an MS Access database. When I try to open the linked table in Access to see the information, every value shows up as #Deleted.
Does anyone have a solution?

#Deleted is normally shown when rows have been deleted in the underlying database table while the table is open in Access. They may have been deleted by you in another window or by other users. #Deleted will not show initially when you open the table in Access. Press Shift+F9 to requery; the deleted rows should disappear.

Set a default of 0 for the number column (ISO3166Code),
and update all existing rows so that column equals 0.
Add a row version column (timestamp/rowversion - NOT datetime).
Re-link your table(s).
This is a long-known issue: with bit (or int) fields that are NULL, you will get that error. As noted, also add a timestamp (rowversion) column, not a datetime column.
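A minimal T-SQL sketch of those steps against the table from the question (the constraint name and the RowVer column name are my own inventions):
-- 1. Default the number column to 0 and backfill existing NULLs
ALTER TABLE [dbo].[dbo_Country]
    ADD CONSTRAINT [DF_dbo_Country_ISO3166Code] DEFAULT 0 FOR [ISO3166Code];
UPDATE [dbo].[dbo_Country] SET [ISO3166Code] = 0 WHERE [ISO3166Code] IS NULL;
-- 2. Add a row version column so Access can reliably detect changed rows
ALTER TABLE [dbo].[dbo_Country] ADD [RowVer] rowversion;
-- 3. Finally, re-link the table in Access so it picks up the new schema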

Related

Why RowVersion datatype is missing inside SSMS

We have SQL Server 2017, and we want to create a new field inside an existing database. The field's data type should be RowVersion, but in SQL Server Management Studio I cannot define a field with the RowVersion data type; only Timestamp is available, and to my knowledge timestamp is now deprecated in favor of rowversion. Any advice on this?
The data type list in the SSMS table designer does not contain rowversion.
EDIT
I wrote the following script to create a new table with a rowversion column:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[test2](
[id] [int] NOT NULL,
[rowversion] [rowversion] NOT NULL,
CONSTRAINT [PK_test2] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
But when I open the table in the SSMS GUI, the type shown for the new column is timestamp instead of rowversion, and if I generate a CREATE script for the new table I get this:
USE [test]
GO
/****** Object: Table [dbo].[test2] Script Date: 26/08/2021 19:02:56 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[test2](
[id] [int] NOT NULL,
[rowversion] [timestamp] NOT NULL,
CONSTRAINT [PK_test2] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
So it seems rowversion can be used to create the table, but it gets converted to timestamp. The issue is that Microsoft says timestamp is deprecated and that we should use rowversion instead. Totally confusing!
This was an error in the Microsoft documentation. I submitted a pull request to have it corrected, and that PR has been accepted.
The problem, and the thing confusing you here, was that the documentation claimed that "timestamp" was a synonym for "rowversion". In fact the opposite is true: "rowversion" is the synonym, and "timestamp" is the base type name.
Both names actually refer to the same thing under the covers, but different tools have differing levels of support for synonyms. The graphical designers are old and have not been updated in a very, very long time.
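You can confirm that both names resolve to the same type through the catalog views; a quick check against the dbo.test2 table created above:
SELECT c.name AS column_name, t.name AS type_name
FROM sys.columns AS c
INNER JOIN sys.types AS t ON c.user_type_id = t.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.test2');
-- Returns type_name = 'timestamp': the engine stores only the base type name,
-- which is why the designers and generated scripts always show timestamp.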

Updateable view of local table and linked server table - data has changed

I need to combine two tables, one on a local and one on a linked server, in an updatable view on my local SQL Server, which I link to an MS Access front end.
Accessing this view in Access works, and updates to existing rows in the local table work too. But I cannot add new rows to this local table, because the foreign key [ProjektNr] is not set automatically. Sadly it is not possible to add a foreign key constraint between a local and a linked server, so I need an alternative. I have already read about replicating/triggering the foreign table into a local table, but this is not what I want. The two tables have a 1:1 relation. If I want to attach [Notizen] to a known project, I need a new row in [tblProjekt] with a matching [ProjektNr], but this row is not generated by updating the view. I get:
data has changed since the results pane was last retrieved
even directly in SSMS.
My local table (SQL Server 2014), where I can attach further project information to existing projects:
CREATE TABLE [dbo].[tblProjekt](
[ID] [int] IDENTITY(1,1) NOT NULL,
[ProjektNr] [int] NOT NULL,
[Notizen] [nvarchar](max) NULL,
[TimeStamp] [timestamp] NOT NULL,
CONSTRAINT [PK_tblProjekt] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
My linked server table (SQL Server 2008 R2), which I don't want to touch:
CREATE TABLE [dbo].[Projekt](
[MandantNr] [smallint] NOT NULL,
[ProjektNr] [int] NOT NULL,
[KundenNr] [int] NULL,
[ProjektName] [nvarchar](150) NULL,
[Abgeschlossen] [bit] NULL,
CONSTRAINT [PK_Projekt] PRIMARY KEY NONCLUSTERED
(
[MandantNr] ASC,
[ProjektNr] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [dbo].[Projekt] WITH NOCHECK ADD CONSTRAINT [FK_Projekt_Mandanten] FOREIGN KEY([MandantNr])
REFERENCES [dbo].[Mandanten] ([MandantNr])
GO
ALTER TABLE [dbo].[Projekt] CHECK CONSTRAINT [FK_Projekt_Mandanten]
GO
My local combining view:
SELECT Projekt_1.ProjektNr, Projekt_1.KundenNr, Projekt_1.ProjektName,
       CASE WHEN (tblProjekt.ProjektNr IS NULL) THEN '0' ELSE '-1' END AS Übernommen,
       dbo.tblProjekt.ID, dbo.tblProjekt.ProjektNr AS GProjektNr, dbo.tblProjekt.Notizen,
       dbo.tblProjekt.TimeStamp, Projekt_1.Abgeschlossen, Projekt_1.MandantNr
FROM LINKEDSERVER.Catalog.dbo.Projekt AS Projekt_1
LEFT OUTER JOIN dbo.tblProjekt ON Projekt_1.ProjektNr = dbo.tblProjekt.ProjektNr
WHERE (Projekt_1.Abgeschlossen = 0) AND (Projekt_1.MandantNr = 1)
Problem solved:
Sadly, SQL Server does not seem to be able to handle this seemingly simple task through the view. I linked the two tables in Access, wrote the same query there, and it worked instantly.
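For reference, the usual server-side alternative, had this stayed in SQL Server, is an INSTEAD OF trigger on the view that routes the write into the local table. A minimal sketch, assuming the view above is named dbo.vwProjekt (the view and trigger names are my own):
CREATE TRIGGER [dbo].[trg_vwProjekt_Update]
ON [dbo].[vwProjekt]
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Create the missing local row for projects that have no tblProjekt entry yet
    INSERT INTO dbo.tblProjekt (ProjektNr, Notizen)
    SELECT i.ProjektNr, i.Notizen
    FROM inserted AS i
    WHERE i.ID IS NULL;
    -- Otherwise just update the existing local row
    UPDATE t
    SET t.Notizen = i.Notizen
    FROM dbo.tblProjekt AS t
    INNER JOIN inserted AS i ON i.ID = t.ID;
END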

Speed up retrieval of distinct values for dropdown via caching

Overview
In my ASP.Net MVC application, I have several pages that utilize a DataRecord search functionality that is dynamically configured by the site admin to have specific DataRecord fields available as criteria in one of a few different search input types. One of the input types available is a dropdown, which is populated with the distinct DataRecord values of that particular field that are relevant to whatever the search context is.
I'm looking to decrease the amount of time it takes to create these dropdowns, and am open to suggestions.
I'll list out things in the following manner:
SQL Structure
Sample Query
Business Rules
Miscellaneous Info (may or may not be relevant, but I didn't want to rule anything out)
SQL Structure
Listed from greatest to least scope, with only the relevant fields. Each table has a one-to-many relationship with the table that follows. Keep in mind these were all created and maintained via EF Code First with Migrations.
CREATE TABLE [dbo].[CompanyInfoes](
[Id] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [PK_dbo.CompanyInfoes] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[BusinessLines](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Company_Id] [int] NOT NULL,
CONSTRAINT [PK_dbo.BusinessLines] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
ALTER TABLE [dbo].[BusinessLines] WITH CHECK ADD CONSTRAINT [FK_dbo.BusinessLines_dbo.CompanyInfoes_Company_Id] FOREIGN KEY([Company_Id])
REFERENCES [dbo].[CompanyInfoes] ([Id])
ALTER TABLE [dbo].[BusinessLines] CHECK CONSTRAINT [FK_dbo.BusinessLines_dbo.CompanyInfoes_Company_Id]
CREATE TABLE [dbo].[DataFiles](
[Id] [int] IDENTITY(1,1) NOT NULL,
[FileStatus] [int] NOT NULL,
[FileEnvironment] [int] NOT NULL,
[BusinessLine_Id] [int] NOT NULL,
CONSTRAINT [PK_dbo.DataFiles] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
ALTER TABLE [dbo].[DataFiles] WITH CHECK ADD CONSTRAINT [FK_dbo.DataFiles_dbo.BusinessLines_BusinessLine_Id] FOREIGN KEY([BusinessLine_Id])
REFERENCES [dbo].[BusinessLines] ([Id])
ON DELETE CASCADE
ALTER TABLE [dbo].[DataFiles] CHECK CONSTRAINT [FK_dbo.DataFiles_dbo.BusinessLines_BusinessLine_Id]
CREATE TABLE [dbo].[DataRecords](
[Id] [int] IDENTITY(1,1) NOT NULL,
[File_Id] [int] NOT NULL,
[Field1] [nvarchar](max) NULL,
[Field2] [nvarchar](max) NULL,
...
[Field20] [nvarchar](max) NULL,
CONSTRAINT [PK_dbo.DataRecords] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
ALTER TABLE [dbo].[DataRecords] WITH CHECK ADD CONSTRAINT [FK_dbo.DataRecords_dbo.DataFiles_File_Id1] FOREIGN KEY([File_Id])
REFERENCES [dbo].[DataFiles] ([Id])
ON DELETE CASCADE
ALTER TABLE [dbo].[DataRecords] CHECK CONSTRAINT [FK_dbo.DataRecords_dbo.DataFiles_File_Id1]
Sample Query (as generated by EF)
SELECT [Distinct1].[Field2] AS [Field2]
FROM ( SELECT DISTINCT
[Extent1].[Field2] AS [Field2]
FROM [dbo].[DataRecords] AS [Extent1]
INNER JOIN [dbo].[DataFiles] AS [Extent2] ON [Extent1].[File_Id] = [Extent2].[Id]
WHERE ([Extent2].[BusinessLine_Id] IN (4, 5, 6, 7, 8, 11, 12, 13, 14)) AND (0 = [Extent2].[FileEnvironment]) AND (1 = [Extent2].[FileStatus])
) AS [Distinct1]
Business Rules
The values within the Dropdown should be based on the viewing User's BusinessLine access ([BusinessLine_Id] clause in query), and the current page that the search is being used in conjunction with ([FileEnvironment] and [FileStatus]).
Which of the 20 DataRecords fields should be presented as a Dropdown for searching is controlled by a site admin via an admin page, and is configured at the company level. Company A may have a Dropdown for Field1; Company B may have one for Field5, Field7, and Field18; and Company C may not have any Dropdowns whatsoever.
While the layout and format of the DataRecords is consistent from company to company, the usage, and therefore the uniqueness of values, of Field1 - Field20 is not. Company A may have 3 unique values for Field1 across 900k records (hence why it makes sense to use a Dropdown for Field1 for them), while Company B may have something unique in Field1 for every DataRecord.
Everything database-related is maintained via EF Migrations, and the site is set to auto-apply migrations on app startup (or on deploy, in the case of the Azure staging site). Anything recommended from a database perspective must be implementable programmatically through migrations, so that upgrading or instancing the site and database can happen without manual intervention by someone with DB access. Also, any database changes must not interfere with the Code First Migrations generated when models change (i.e., we cannot be blocked from renaming a column because some rogue index added outside of annotations exists).
Similarly to the previous point, the Dropdown configuration is controlled via the site, so anything that needs to be done must be able to be added and removed on demand at runtime.
Relevant data changes that occur within usage of the site, but not necessarily by the current user:
FileStatus of a DataFile changes from 0 to 1 or 2
Which BusinessLines the current user can access changes
Additional BusinessLines are added
Relevant data changes that occur outside of the site (via importer app which is also part of the solution that the site is in and therefore can be modified if necessary):
New DataFiles and DataRecords are added
Additional BusinessLines are added (not a copy/paste error, they can be added through the importer as well)
Miscellaneous Info
The site is deployed to many locations, but in each deployment the site-to-database relationship is 1:1, so in-memory caching is not out of the question.
There is only one Site Admin that controls which fields are represented as Dropdowns, and he can be educated about ramifications of making frequent changes and the caching each change may result in if necessary. He is also familiar with the data in each field at a Company level, and knows which fields are good candidates for Dropdowns.
Just to give a little data quantity context, in just over 2.5 months, the number of DataRecords for one company went from 558k to 924k. So obviously the solution should be able to work with an ever-growing amount of data.
Offloading the loading of these values to an AJAX request, so as not to hold up the page load, is a good solution in general, but not one I can use here.
Two quick items jump out here:
1) Add the Field2 column that is being returned as an INCLUDE on the index that serves this query on the DataRecords table. That keeps it from needing a bookmark lookup to find Field2 after the ON clause has done the main work of finding the IDs. (An INCLUDE only applies to a nonclustered index; the clustered index already carries every column, so in practice this means a nonclustered index on File_Id that includes Field2.)
2) Not sure why there is a double select happening. I don't think it has a big impact, but the query is just reselecting what it already selected as distinct, without even changing the name.
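A sketch of the index from point 1 (the index name is my own; note that nvarchar(max) columns such as Field2 can be INCLUDE columns but not key columns):
CREATE NONCLUSTERED INDEX [IX_DataRecords_FileId_Field2]
    ON [dbo].[DataRecords] ([File_Id])
    INCLUDE ([Field2]);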

Trigger Not Putting Data in History Table

I have the following trigger (along with others on similar tables) that sometimes fails to put data into the historic table. It should copy data into the historic table exactly as it is inserted/updated, stamped with a date.
CREATE TRIGGER [dbo].[trig_UpdateHistoricProductCustomFields]
ON [dbo].[productCustomFields]
AFTER UPDATE, INSERT
AS
BEGIN
    IF (UPDATE(data))
    BEGIN
        SET NOCOUNT ON;
        DECLARE @date bigint
        SET @date = datepart(yyyy,getdate())*10000000000+datepart(mm,getdate())*100000000+datepart(dd,getdate())*1000000+datepart(hh,getdate())*10000+datepart(mi,getdate())*100+datepart(ss,getdate())
        INSERT INTO historicProductCustomFields (productId,customFieldNumber,data,effectiveDate)
        SELECT productId,customFieldNumber,data,@date FROM inserted
    END
END
Schema:
CREATE TABLE [dbo].[productCustomFields](
[id] [int] IDENTITY(1,1) NOT NULL,
[productId] [int] NOT NULL,
[customFieldNumber] [int] NOT NULL,
[data] [varchar](50) NULL,
CONSTRAINT [PK_productCustomFields] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[historicProductCustomFields](
[id] [bigint] IDENTITY(1,1) NOT NULL,
[productId] [int] NOT NULL,
[customFieldNumber] [int] NOT NULL,
[data] [varchar](50) NULL,
[effectiveDate] [bigint] NOT NULL,
CONSTRAINT [PK_historicProductCustomFields] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
I insert and update only one record at a time on the productCustomFields table. It seems to work 99% of the time, and the failures are hard to reproduce. Can anyone shed some light on what I may be doing wrong, or on better practices for this type of trigger?
The environment is SQL Server Express 2005. I haven't rolled out the service pack for SQL Server yet for this particular client either.
I think the right way to solve this is to wrap the INSERT into the dbo.historicProductCustomFields table in a TRY...CATCH block and write any errors into a custom error-log table. From there it is easy to track this down.
I also see a PK on the historicProductCustomFields table, but if you insert and then update a given record in the productCustomFields table, won't you get primary key violations on the historicProductCustomFields table?
You should schema-qualify the table that you are inserting into.
You should also check whether there are multiple triggers of the same type (AFTER INSERT) on the table: if there are, they fire in an undefined order unless one is explicitly set first or last with sp_settriggerorder, which can make the behavior look intermittent.
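A sketch of the TRY...CATCH suggestion above, with a hypothetical dbo.triggerErrorLog table (the log table and its columns are my own; TRY...CATCH and ERROR_MESSAGE() are available from SQL Server 2005 on):
CREATE TABLE dbo.triggerErrorLog (
    id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    errorMessage nvarchar(2048) NULL,
    loggedAt datetime NOT NULL DEFAULT GETDATE()
);
GO
ALTER TRIGGER [dbo].[trig_UpdateHistoricProductCustomFields]
ON [dbo].[productCustomFields]
AFTER UPDATE, INSERT
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(data)
    BEGIN
        BEGIN TRY
            INSERT INTO dbo.historicProductCustomFields (productId, customFieldNumber, data, effectiveDate)
            SELECT productId, customFieldNumber, data,
                   datepart(yyyy,getdate())*10000000000 + datepart(mm,getdate())*100000000
                 + datepart(dd,getdate())*1000000 + datepart(hh,getdate())*10000
                 + datepart(mi,getdate())*100 + datepart(ss,getdate())
            FROM inserted;
        END TRY
        BEGIN CATCH
            -- Note: a severe error can doom the transaction, in which case even this
            -- logging insert fails; check XACT_STATE() here if that matters to you.
            INSERT INTO dbo.triggerErrorLog (errorMessage) VALUES (ERROR_MESSAGE());
        END CATCH
    END
END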
Try something along the lines of this trigger; it is just an example to adapt:
CREATE TRIGGER [dbo].[insert_Assets_Tran]
ON [dbo].[AssetMaster]
AFTER INSERT, UPDATE
AS
BEGIN
    DECLARE @isnum TINYINT;
    SELECT @isnum = COUNT(*) FROM inserted;
    IF (@isnum = 1)
        INSERT INTO AssetTransaction
        SELECT [AssetId],[Brandname],[SrNo],[Modelno],[Processor],[Ram],[Hdd],[Display],[Os],[Office],[Purchasedt]
              ,[Expirydt],[Vendor],[VendorAMC],[Typename],[LocationName],[Empid],[CreatedBy],[CreatedOn],[ModifiedBy]
              ,[ModifiedOn],[Remark],[AssetStatus],[Category],[Oylstartdt],[Oylenddt],[Configuration]
              ,[AStatus],[Tassign]
        FROM inserted;
    ELSE
        RAISERROR('some fields not supplied', 16, 1) WITH SETERROR;
END

Which approach is better for this scenario?

We have the following table:
CREATE TABLE [dbo].[CampaignCustomer](
[ID] [int] IDENTITY(1,1) NOT NULL,
[CampaignID] [int] NOT NULL,
[CustomerID] [int] NULL,
[CouponCode] [nvarchar](20) NOT NULL,
[CreatedDate] [datetime] NOT NULL,
[ModifiedDate] [datetime] NULL,
[Active] [bit] NOT NULL,
CONSTRAINT [PK_CampaignCustomer] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
and the following Unique Index:
CREATE UNIQUE NONCLUSTERED INDEX [IX_CampaignCustomer_CouponCode] ON [dbo].[CampaignCustomer]
(
[CouponCode] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 20) ON [PRIMARY]
GO
We query constantly on CouponCode and other foreign keys (not shown above, for simplicity). The CampaignCustomer table has almost 4 million records and is growing. We also run campaigns that don't require coupon codes, and so far we haven't inserted records for those. Now we need to start tracking those campaigns as well, for another purpose. So we have two options:
Change the CouponCode column to allow nulls and create a unique filtered index that excludes nulls, letting the table grow even bigger and faster.
Create a separate table for tracking all campaigns for this specific purpose.
Keep in mind that the CampaignCustomer table is used very often for redeeming coupons and inserting new ones. The bottom line is that we don't want a customer redeeming a coupon to be left waiting until they give up, or other processes to fail. So, from an efficiency perspective, which option do you think is best, and why?
I'd go for the filtered index: you're storing the same data, so keep it in the same table.
Splitting the table is refactoring you probably don't need, and it adds complexity.
Do you actually have problems at 4 million rows? That's not much, especially for such a narrow table.
I'm against a duplicate table for the sake of a single column.
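A sketch of what option 1 would look like (filtered indexes need SQL Server 2008 or later; the existing unique index has to be dropped before the column can be made nullable):
DROP INDEX [IX_CampaignCustomer_CouponCode] ON [dbo].[CampaignCustomer];
ALTER TABLE [dbo].[CampaignCustomer] ALTER COLUMN [CouponCode] nvarchar(20) NULL;
CREATE UNIQUE NONCLUSTERED INDEX [IX_CampaignCustomer_CouponCode]
    ON [dbo].[CampaignCustomer] ([CouponCode])
    WHERE [CouponCode] IS NOT NULL;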
Allowing CouponCode to be null means that someone could accidentally create a record where the value is NULL when it should be a valid coupon code.
I would create a coupon code value that marks a row as a non-coupon campaign, rather than resorting to indicator columns like "isCoupon" or "isNonCouponCampaign", and use a filtered index to ignore that "no coupon" value, as sketched below.
Which leads to my next point: I don't see a foreign key reference, but it would be key to knowing which coupons existed and which ones were actually used. Some of the columns in the existing table could be moved up to a parent coupon-code table.
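A sketch of that sentinel-value variant, keeping CouponCode NOT NULL (the N'NOCOUPON' marker is my own; DROP_EXISTING rebuilds the existing index in place with the new filter, so many rows can share the marker while real codes stay unique):
CREATE UNIQUE NONCLUSTERED INDEX [IX_CampaignCustomer_CouponCode]
    ON [dbo].[CampaignCustomer] ([CouponCode])
    WHERE [CouponCode] <> N'NOCOUPON'
    WITH (DROP_EXISTING = ON);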
