TSQL How to select employee with skills in xml column - sql-server

In a table schema like below
CREATE TABLE [dbo].[Employee](
[EmployeeId] [uniqueidentifier] NOT NULL,
[Name] [nvarchar](50) NOT NULL,
[Location] [nvarchar](50) NOT NULL,
[Skills] [xml] NOT NULL,
CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED ([EmployeeId] ASC)
)
How would I get employees having C# (case-insensitive) programming skills, assuming the XML saved in the Skills column is as below?
<Skills><Skill>C#</Skill><Skill>ASP.NET</Skill><Skill>VB.NET</Skill></Skills>
Could you also advise on other functions that would help me filter and sort when using xml data type columns?

The XQuery comparison is case sensitive, so you need to compare against both c# and C#. In SQL Server 2008 you can use the upper-case() function instead.
declare @T table
(
ID int identity,
Skills XML
)
insert into @T values
('<Skills><Skill>C#</Skill><Skill>ASP.NET</Skill><Skill>VB.NET</Skill></Skills>')
insert into @T values
('<Skills><Skill>CB.NET</Skill><Skill>ASP.NET</Skill><Skill>c#</Skill></Skills>')
insert into @T values
('<Skills><Skill>F#</Skill><Skill>ASP.NET</Skill><Skill>VB.NET</Skill></Skills>')
select ID
from @T
where Skills.exist('/Skills/Skill[contains(., "C#") or contains(., "c#")]') = 1
Result:
ID
-----------
1
2
Update:
This will also work.
select T.ID
from @T as T
cross apply T.Skills.nodes('/Skills/Skill') as X(N)
where X.N.value('.', 'nvarchar(50)') like '%C#%'
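The answer mentions upper-case(); here is a hedged sketch of that variant (requires SQL Server 2008 or later, and the exact XQuery below is an illustration rather than part of the original answer):
-- Case-insensitive match via XQuery upper-case(), using the same @T as above
select ID
from @T
where Skills.exist('/Skills/Skill[upper-case(string(.)) = "C#"]') = 1
This avoids having to enumerate every casing variant by hand.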

Related

How to insert values into two SQL Server tables updating primary key and foreign key simultaneously, using a procedure?

I have something like this:
CREATE TABLE [dbo].[table1]
(
[id1] [int] IDENTITY(1,1) NOT NULL,
[data] [varchar](255) NOT NULL,
CONSTRAINT [PK_table1] PRIMARY KEY(id1)
)
CREATE TABLE [dbo].[table2]
(
[id2] [int] IDENTITY(1,1) NOT NULL,
[id1] [int] ,
CONSTRAINT [PK_table2] PRIMARY KEY (id2),
CONSTRAINT [FK_table2] FOREIGN KEY(id1) REFERENCES Table1
)
I want to add values to both tables using a procedure. I'm not adding any key values, just data values.
If I use INSERT INTO to add data to Table 1, its primary key will be auto-incremented. I will also be inserting into Table 2 in the same procedure.
I want the auto-incremented primary key of Table 1 to automatically be used as the foreign key in Table 2 when I run that procedure.
You need to do something like this:
CREATE PROCEDURE dbo.InsertData (@data VARCHAR(255))
AS
BEGIN
-- Insert row into table1
INSERT INTO dbo.Table1 (data) VALUES (@data);
-- Capture the newly generated "Id1" value
DECLARE @NewId1 INT;
SELECT @NewId1 = SCOPE_IDENTITY();
-- Insert the captured key into table2
INSERT INTO dbo.table2 (Id1) VALUES (@NewId1);
END
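For example, calling it with a sample value:
EXEC dbo.InsertData @data = 'mydata';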
I don't know if I understand what you want to do, but I think you can do something like this:
INSERT INTO table1 (data) VALUES ('mydata')
DECLARE @LastKey INT
SET @LastKey = SCOPE_IDENTITY() -- for SQL Server; use LAST_INSERT_ID() for MySQL
INSERT INTO table2 (id1) VALUES (@LastKey)
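If you prefer to avoid SCOPE_IDENTITY() entirely, a rough sketch using the OUTPUT clause (assuming the same table1/table2 definitions as in the question) would be:
-- Capture the generated id1 into a table variable, then reuse it for the child row
DECLARE @NewIds TABLE (id1 INT);
INSERT INTO dbo.table1 (data)
OUTPUT inserted.id1 INTO @NewIds (id1)
VALUES ('mydata');
INSERT INTO dbo.table2 (id1)
SELECT id1 FROM @NewIds;
The OUTPUT approach also scales to multi-row inserts, where SCOPE_IDENTITY() only returns the last generated value.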

How to have an identity column for a temp table in SQL?

I am getting the following error for the block below:
Explicit value must be specified for identity column in table '#T' either when
IDENTITY_INSERT is set to ON or when a replication user is inserting into a
NOT FOR REPLICATION identity column.
if object_id('tempdb.dbo.#t') is not null
drop table #t
create table #t
(
[ID] [int] IDENTITY(1,1) NOT NULL,
[TotalCount] int,
[PercentageComplete] nvarchar(4000)
)
insert into #t
select totalcount, percentagecomplete from table_a
Add this to your query after the table declaration:
SET IDENTITY_INSERT #t OFF
This should fix it. The following code works on my machine:
CREATE TABLE #t
(
[ID] [INT] IDENTITY(1,1) NOT NULL,
[TotalCount] INT,
[PercentageComplete] NVARCHAR(4000)
)
SET IDENTITY_INSERT #t OFF
INSERT INTO #t (TotalCount, PercentageComplete)
SELECT
totalcount, percentagecomplete
FROM
table_a
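Another option, sketched under the assumption that #t does not already exist, is to let SELECT ... INTO create the temp table and generate the identity column with the IDENTITY() function:
-- SELECT ... INTO builds #t and numbers the rows itself
SELECT IDENTITY(int, 1, 1) AS ID,
       totalcount AS TotalCount,
       percentagecomplete AS PercentageComplete
INTO #t
FROM table_a;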

SCOPE_IDENTITY() returns 1

I use this code in SQL Server 2012:
INSERT INTO [dbo].[test] ([t])
VALUES ('tyy');
SELECT [Id]
FROM [dbo].[test]
WHERE ([Id] = SCOPE_IDENTITY())
It returns the last inserted id and works well.
But the same code in VB.NET 2017 with .NET Framework 4.7.2 is not working - for every insert it returns 1.
This is the code:
Dim id = TestTableAdapter.InsertQuery("nvnh")
Table:
CREATE TABLE [dbo].[test]
(
[Id] INT IDENTITY (1, 1) NOT NULL,
[t] NVARCHAR (50) NULL,
CONSTRAINT [PK_test] PRIMARY KEY CLUSTERED ([Id] ASC)
);
Thanks for all.
I finally found the solution.
I was already using one of the solutions above, but the query's ExecuteMode was NonQuery, so it gave me the count of inserted rows.
I changed it to Scalar.
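For the Scalar mode to hand back the new key, the command behind InsertQuery has to select it, roughly like this (a sketch; the real query text lives in the typed dataset designer, and the @t parameter name is assumed here):
-- Insert, then return the new identity value as the scalar result
INSERT INTO [dbo].[test] ([t]) VALUES (@t);
SELECT CAST(SCOPE_IDENTITY() AS INT);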

Slow Performance when ORDER BY in SQL Server

I'm working on a project (Microsoft SQL Server 2012) in which I need to store quite a lot of data.
Currently my table contains 1,441,352 records in total.
The structure of the table is as follows:
RecordIdentifier (int, not null)
GlnCode (PK, nvarchar(100), not null)
Description (nvarchar(MAX), not null)
VendorId (nvarchar(100), not null)
VendorName (nvarchar(100), not null)
ItemNumber (PK, nvarchar(100), not null)
ItemUOM (PK, nvarchar(128), not null)
My table is indexed on the following fields:
NonClustered - GlnCode, Ascending
NonClustered - ItemNumber, Ascending
NonClustered - ItemUOM, Ascending
NonClustered - VendorID, Ascending
Clustered - Unique (The above 4 columns together).
Now I'm writing an API to return the records in the table.
The API exposes methods that execute this query:
SELECT TOP (51)
[GlnCode] AS [GlnCode],
[VendorId] AS [VendorId],
[ItemNumber] AS [ItemNumber],
[ItemUOM] AS [ItemUOM],
[RecordIdentifier] AS [RecordIdentifier],
[Description] AS [Description],
[VendorName] AS [VendorName]
FROM [dbo].[T_GENERIC_ARTICLE]
If I look at the performance, this is good.
But this doesn't guarantee that the same set is always returned, so I need to apply an ORDER BY clause, meaning the executed query looks like this:
SELECT TOP (51)
[GlnCode] AS [GlnCode],
[VendorId] AS [VendorId],
[ItemNumber] AS [ItemNumber],
[ItemUOM] AS [ItemUOM],
[RecordIdentifier] AS [RecordIdentifier],
[Description] AS [Description],
[VendorName] AS [VendorName]
FROM [dbo].[T_GENERIC_ARTICLE]
ORDER BY [GlnCode] ASC, [ItemNumber] ASC, [ItemUOM] ASC, [VendorId] ASC
Now the query takes a few seconds to return, which I can't afford.
Does anyone have an idea how to solve this issue?
Your table index definitions are not optimal. You also don't have to create the additional single-column indexes, because they are covered by the composite non-clustered index. You will get better performance by structuring your indexes as follows:
Table definition:
CREATE TABLE [dbo].[T_GENERIC_ARTICLE]
(
RecordIdentifier int IDENTITY(1,1) PRIMARY KEY NOT NULL,
GlnCode nvarchar(100) NOT NULL,
Description nvarchar(MAX) NOT NULL,
VendorId nvarchar(100) NOT NULL,
VendorName nvarchar(100) NOT NULL,
ItemNumber nvarchar(100) NOT NULL,
ItemUOM nvarchar(128) NOT NULL
)
GO
CREATE UNIQUE NONCLUSTERED INDEX [UniqueNonClusteredIndex-Composite2]
ON [dbo].[T_GENERIC_ARTICLE] (GlnCode, ItemNumber, ItemUOM, VendorId ASC);
GO
Revised Query
SELECT TOP (51)
[RecordIdentifier] AS [RecordIdentifier],
[GlnCode] AS [GlnCode],
[VendorId] AS [VendorId],
[ItemNumber] AS [ItemNumber],
[ItemUOM] AS [ItemUOM],
[Description] AS [Description],
[VendorName] AS [VendorName]
FROM [dbo].[T_GENERIC_ARTICLE]
ORDER BY [GlnCode], [ItemNumber], [ItemUOM], [VendorId]
The non-clustered index is scanned in order, and a key lookup against the clustered primary key fetches the remaining columns for each returned row. This is where you want the majority of the work to be done.
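If the key lookups themselves become the bottleneck, one further option (a sketch, not part of the original answer) is to make the non-clustered index covering by including the remaining columns, so the ordered scan alone can satisfy the query:
CREATE UNIQUE NONCLUSTERED INDEX [UniqueNonClusteredIndex-Covering]
ON [dbo].[T_GENERIC_ARTICLE] (GlnCode, ItemNumber, ItemUOM, VendorId)
INCLUDE (RecordIdentifier, Description, VendorName);
GO
The trade-off is a noticeably larger index, since Description is nvarchar(MAX).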
Reference:
Indexes in SQL Server
Hope this helps.

How to improve my query performance by indexing

I just want to know how I should index this table for optimal performance. It will potentially hold around 20M rows.
CREATE TABLE [dbo].[Table1](
[ID] [bigint] NOT NULL,
[Col1] [varchar](100) NULL,
[Col2] [varchar](100) NULL,
[Description] [varchar](100) NULL
) ON [PRIMARY]
Basically, this table will be queried ONLY in this manner:
SELECT ID FROM Table1
WHERE Col1 = 'exactVal1' AND Col2 = 'exactVal2' AND [Description] = 'exactDesc'
This is what i did:
CREATE NONCLUSTERED INDEX IX_ID
ON Table1(ID)
GO
CREATE NONCLUSTERED INDEX IX_Col1
ON Table1(Col1)
GO
CREATE NONCLUSTERED INDEX IX_Col2
ON Table1(Col2)
GO
CREATE NONCLUSTERED INDEX IX_Description
ON Table1([Description])
GO
Am I right to index all these columns? I'm not really that confident yet. I'm just new to SQL, so please let me know if I'm on the right track.
Again, a lot of data will be put into this table. Unfortunately, I cannot test the performance yet since there is no data available, but I will soon be generating some dummy data to test with. It would be great if there is already another option (suggestion) available that I can compare the results with.
Thanks,
jack
I would combine these indexes into one index instead of having separate single-column indexes. For example:
CREATE INDEX ix_cols ON dbo.Table1 (Col1, Col2, Description)
If this combination of columns is unique within the table, then you should add the UNIQUE keyword to make the index unique. This is for performance reasons but also, more importantly, to enforce uniqueness. It may also be created as a primary key if that is appropriate.
Placing all of the columns into one index will give better performance, because SQL Server will not need multiple passes over separate indexes to find the rows you are seeking.
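Since the query only ever returns ID, a hedged variant of the combined index (an illustration, not from the original answer) is to include ID so the base table never has to be touched:
CREATE NONCLUSTERED INDEX ix_cols_covering
ON dbo.Table1 (Col1, Col2, [Description])
INCLUDE (ID);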
Try this -
CREATE TABLE dbo.Table1
(
ID BIGINT NOT NULL
, Col1 VARCHAR(100) NULL
, Col2 VARCHAR(100) NULL
, [Description] VARCHAR(100) NULL
)
GO
CREATE CLUSTERED INDEX IX_Table1 ON dbo.Table1
(
Col1
, Col2
, [Description]
)
Or this -
CREATE TABLE dbo.Table1
(
ID BIGINT PRIMARY KEY NOT NULL
, Col1 VARCHAR(100) NULL
, Col2 VARCHAR(100) NULL
, [Description] VARCHAR(100) NULL
)
GO
CREATE UNIQUE NONCLUSTERED INDEX IX_Table1 ON dbo.Table1
(
Col1
, Col2
, [Description]
)
