SQL Arithmetic Overflow for Identity Insert - sql-server

I am getting an arithmetic overflow error when inserting values into a lookup table whose row ID is defined as TINYINT. This is NOT a case where the number of unique records exceeds 255 values; it is a bit more unusual, and it did not occur during the first tests of this setup.
The production version of the code below actually has only 66 unique values, but it is possible that new values could be added (slowly and in very small numbers) over time... 255 available slots should be more than enough for the lifespan of this analysis process.
My initial thought was that a cached plan might be recognizing that the hierarchical source table has more than 255 values (there are in fact 1,028) and evaluating that this could exceed the destination table's capacity. I have tested this, however, and it is not the case.
-- This table represents a small (tinyint) subset of unique primary values.
CREATE TABLE #tmp_ID10T_Test (
ID10T_Test_ID tinyint identity (1,1) not null,
ID10T_String varchar(255) not null,
PRIMARY KEY CLUSTERED
(ID10T_String ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = ON, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
) ON [PRIMARY]
-- This table represents a larger (smallint) set of non-unique source values, defined by a secondary key value (Value_Set).
CREATE TABLE #tmp_ID10T_Values (
ID10T_Value_ID smallint identity (1,1) not null,
ID10T_Value_Set tinyint not null,
ID10T_String varchar(255) not null
) ON [PRIMARY]
-- Create the initial dataset - 100 unique records; The insertion tests below illustrate that the INDEX is working
-- correctly on the primary key field for repetitive values, however something is happening with the IDENTITY field...
DECLARE @ID10T tinyint
, @i tinyint -- A randomized value to determine which subset of printable ASCII characters will be used for the string.
, @String varchar(255)
SET @ID10T = 0
WHILE @ID10T < 100
BEGIN
    SET @String = ''
    WHILE LEN(@String) < (1 + ROUND((254 * RAND(CHECKSUM(NEWID()))),0))
    BEGIN
        SELECT @i = (1 + ROUND((2 * RAND()),0)) -- Randomize which printable character subset is drawn from.
        SELECT @String = @String + ISNULL(CASE WHEN @i = 1 THEN char(48 + ROUND(((57-48) * RAND(CHECKSUM(NEWID()))),0))
                                               WHEN @i = 2 THEN char(65 + ROUND(((90-65) * RAND(CHECKSUM(NEWID()))),0))
                                               WHEN @i = 3 THEN char(97 + ROUND(((122-97) * RAND(CHECKSUM(NEWID()))),0))
                                          END,'-')
    END
    INSERT INTO #tmp_ID10T_Values (ID10T_Value_Set, ID10T_String)
    SELECT 1, @String
    SET @ID10T = @ID10T + 1
END
-- Demonstrate that IGNORE_DUP_KEY = ON works for primary key index on string-field
SELECT * FROM #tmp_ID10T_Values
-- Method 1 - Simple INSERT INTO: Expect Approx. (100 row(s) affected)
INSERT INTO #tmp_ID10T_Test (ID10T_String)
SELECT DISTINCT ID10T_String
FROM #tmp_ID10T_Values
GO
-- Method 2 - LEFT OUTER JOIN WHERE NULL to prevent dupes.
-- this is the test case to determine whether the procedure cache is mixing plans
INSERT INTO #tmp_ID10T_Test (ID10T_String)
SELECT DISTINCT T1.ID10T_String
FROM #tmp_ID10T_Values AS T1
LEFT OUTER JOIN #tmp_ID10T_Test AS t2
ON T1.ID10T_String = T2.ID10T_String
WHERE T2.ID10T_Test_ID IS NULL
GO
-- Repeat Method 1: Duplicate key was ignored (0 row(s) affected).
INSERT INTO #tmp_ID10T_Test (ID10T_String)
SELECT DISTINCT ID10T_String
FROM #tmp_ID10T_Values
GO
This does not seem to be a query plan cache issue - I should see the arithmetic error on Method 1 retests if that were true.
-- Repeat Method 1: Expected: Arithmetic overflow error converting IDENTITY to data type tinyint.
INSERT INTO #tmp_ID10T_Test (ID10T_String)
SELECT DISTINCT ID10T_String
FROM #tmp_ID10T_Values
GO
I am particularly curious why the exception is thrown. I can understand that in Method 1 all 100 unique values are tested, so conceivably the query engine sees a potential of 200 records after the second insert attempt; I do NOT understand why it would see a potential of 300 records after a third repetition - the second attempt inserted 0 rows, so at most there would be a potential of 200 unique values.
Can someone explain this please?

The sequence of events with IGNORE_DUP_KEY is:
each row is prepared for insert, which includes consuming an IDENTITY value;
the row is then inserted, IGNORE_DUP_KEY is observed, and duplicate rows are silently discarded.
So every duplicate that gets skipped has already burned an IDENTITY value. The first repeat of Method 1 pushes the IDENTITY sequence up to 200, and the next repeat generates the IDENTITY values 201-300, for which anything above 255 overflows your tinyint column.
This is trivially verified by running
select ident_current('tempdb..#tmp_ID10T_Test')
liberally throughout your code. So for your code, annotated below:
-- Method 1 - Simple INSERT INTO: Expect Approx. (100 row(s) affected)
-- ident_current = 1
-- Method 2 - LEFT OUTER JOIN WHERE NULL to prevent dupes.
-- ident_current = 100
-- NOTE: The SELECT doesn't produce any rows. No rows to insert
-- Repeat Method 1: Duplicate key was ignored (0 row(s) affected).
-- ident_current = 200
-- NOTE: SELECT produced 100 rows, which consumed IDENTITY numbers. Later ignored
-- Repeat Method 1: Expected: Arithmetic overflow error converting IDENTITY to data type tinyint.
-- ident_current = 255
-- error raised when IDENTITY reaches 256
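If you want to keep the tinyint IDENTITY despite this behaviour, one option (a sketch of my own, not part of the original answer, and essentially the question's Method 2) is to filter the duplicates out before the INSERT, so they never reach the insert operator and therefore never consume IDENTITY values:
-- Only genuinely new strings reach the insert, so no IDENTITY values are burned on duplicates
INSERT INTO #tmp_ID10T_Test (ID10T_String)
SELECT DISTINCT v.ID10T_String
FROM #tmp_ID10T_Values AS v
WHERE NOT EXISTS (SELECT 1
                  FROM #tmp_ID10T_Test AS t
                  WHERE t.ID10T_String = v.ID10T_String);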

Related

MS SQL: How to add an auto-increment field that depends on another field

For example, there is a table with the columns
int type
int number
int value
When inserting a value into the table, how can I make the numbering start from 1 independently for each type?
type 1 => number 1, 2, 3...
type 2 => number 1, 2, 3...
That is, it will look like this:
type  number  value
1     1       -
1     2       -
1     3       -
2     1       -
1     4       -
2     2       -
3     1       -
6     1       -
1     5       -
2     3       -
6     2       -
Special thanks to @Larnu.
As a result, in my case, the best solution would be to create a table for each type.
As I mentioned in the comments, neither IDENTITY nor SEQUENCE supports the use of another column to denote what "identity set" they should use. You can have multiple SEQUENCEs which you could use for a single table; however, this doesn't scale. If you are specifically limited to 2 or 3 types, for example, you might choose to create 3 SEQUENCE objects, and then use a stored procedure to handle your INSERT statements. Then, when a user/application wants to INSERT data, they call the procedure, and that procedure has logic to use the right SEQUENCE based on the value of the parameter for the type column.
As mentioned, however, this doesn't scale well. If you have an indeterminate number of values for type then you can't easily handle getting the right SEQUENCE, and handling new values for type would be difficult too. In this case, you would be better off using an IDENTITY and then a VIEW. The VIEW will use ROW_NUMBER to create your identifier, while the IDENTITY gives you your always-incrementing value.
CREATE TABLE dbo.YourTable (id int IDENTITY(1,1),
[type] int NOT NULL,
number int NULL,
[value] int NOT NULL);
GO
CREATE VIEW dbo.YourTableView AS
SELECT ROW_NUMBER() OVER (PARTITION BY [type] ORDER BY id ASC) AS Identifier,
[type],
number,
[value]
FROM dbo.YourTable;
Then, instead, you query the VIEW, not the TABLE.
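For example (a small sketch using the view defined above), consumers read the per-type identifier from the view rather than from the table:
-- Identifier restarts at 1 for every [type], while dbo.YourTable.id keeps incrementing globally
SELECT Identifier, [type], number, [value]
FROM dbo.YourTableView
WHERE [type] = 1
ORDER BY Identifier;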
If you need consistency of that column (which I named Identifier) you'll also need to ensure rows can't be DELETEd from the table, most likely by adding an IsDeleted column to the table, defined as a bit (0 for not deleted, 1 for deleted); you can then filter those rows out in the VIEW:
CREATE VIEW dbo.YourTableView AS
WITH CTE AS(
SELECT id,
ROW_NUMBER() OVER (PARTITION BY [type] ORDER BY id ASC) AS Identifier,
[type],
number,
[value],
IsDeleted
FROM dbo.YourTable)
SELECT id,
Identifier,
[type],
number,
[value]
FROM CTE
WHERE IsDeleted = 0;
You could, if you wanted, even handle the DELETEs on the VIEW (the INSERT and UPDATEs would be handled implicitly, as it's an updatable VIEW):
CREATE TRIGGER trg_YourTableView_Delete ON dbo.YourTableView
INSTEAD OF DELETE AS
BEGIN
SET NOCOUNT ON;
UPDATE YT
SET IsDeleted = 1
FROM dbo.YourTable YT
JOIN deleted d ON d.id = YT.id;
END;
GO
db<>fiddle
For completeness, if you wanted to use separate SEQUENCE objects, it would look like this. Notice that this does not scale easily: I have to CREATE a SEQUENCE for every value of type. As such, for a small and known range of values this would be a solution, but if you are going to end up with more values for type, or already have a large range, it quickly becomes infeasible:
CREATE TABLE dbo.YourTable (identifier int NOT NULL,
[type] int NOT NULL,
number int NULL,
[value] int NOT NULL);
CREATE SEQUENCE dbo.YourTable_Type1
START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE dbo.YourTable_Type2
START WITH 1 INCREMENT BY 1;
CREATE SEQUENCE dbo.YourTable_Type3
START WITH 1 INCREMENT BY 1;
GO
CREATE PROC dbo.Insert_YourTable @Type int, @Number int = NULL, @Value int AS
BEGIN
    DECLARE @Identifier int;
    IF @Type = 1
        SELECT @Identifier = NEXT VALUE FOR dbo.YourTable_Type1;
    IF @Type = 2
        SELECT @Identifier = NEXT VALUE FOR dbo.YourTable_Type2;
    IF @Type = 3
        SELECT @Identifier = NEXT VALUE FOR dbo.YourTable_Type3;
    INSERT INTO dbo.YourTable (identifier,[type],number,[value])
    VALUES(@Identifier, @Type, @Number, @Value);
END;
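A usage sketch (the parameter values are made up for illustration):
EXEC dbo.Insert_YourTable @Type = 1, @Value = 10;  -- gets identifier 1 within type 1
EXEC dbo.Insert_YourTable @Type = 1, @Value = 20;  -- gets identifier 2 within type 1
EXEC dbo.Insert_YourTable @Type = 2, @Value = 30;  -- gets identifier 1 within type 2
SELECT identifier, [type], number, [value] FROM dbo.YourTable;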

'String or binary data would be truncated' without any data exceeding the length

Yesterday a report suddenly came in that someone was no longer able to retrieve some data because this error appeared: Msg 2628, Level 16, State 1, Line 57 String or binary data would be truncated in table 'tempdb.dbo.#BC6D141E', column 'string_2'. Truncated value: '!012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678'.
I was unable to create a repro without our tables. This is the closest I can get:
-- Create temporary table for results
DECLARE @results TABLE (
string_1 nvarchar(100) NOT NULL,
string_2 nvarchar(100) NOT NULL
);
CREATE TABLE #table (
T_ID BIGINT NULL,
T_STRING NVARCHAR(1000) NOT NULL
);
INSERT INTO #table VALUES
(NULL, '0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789'),
(NULL, '!0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789!');
WITH abc AS
(
SELECT
'' AS STRING_1,
t.T_STRING AS STRING_2
FROM
UT
INNER JOIN UTT ON UTT.UT_ID = UT.UT_ID
INNER JOIN MV ON MV.UTT_ID = UTT.UTT_ID
INNER JOIN OT ON OT.OT_ID = MV.OT_ID
INNER JOIN #table AS T ON T.T_ID = OT.T_ID -- this will never get hit because T_ID of #table is NULL
)
INSERT INTO @results
SELECT STRING_1, STRING_2 FROM abc
ORDER BY LEN(STRING_2) DESC
DROP TABLE #table;
As you can see, the join to #table cannot yield any results because all T_ID values are NULL; nevertheless I am getting the error mentioned above, even though the result set is empty.
That would be understandable if a string with more than 100 characters were in the result set, but that is not the case, because it is empty. If I remove the INSERT INTO @results and display the results, they do not contain any text with more than 100 characters. The ORDER BY was only used to determine the faulty text value (with the original data).
When I use SELECT STRING_1, LEFT(STRING_2, 100) FROM abc it does work, but the output does not contain the supposedly truncated text either.
Therefore: What am I missing? Is it a bug of SQL Server?
-- this will never get hit is a bad assumption. It is well known and documented that SQL Server may try to evaluate parts of your query before it's obvious that the result is impossible.
A much simpler repro (from this post and this db<>fiddle):
CREATE TABLE dbo.t1(id int NOT NULL, s varchar(5) NOT NULL);
CREATE TABLE dbo.t2(id int NOT NULL);
INSERT dbo.t1 (id, s) VALUES (1, 'l=3'), (2, 'len=5'), (3, 'l=3');
INSERT dbo.t2 (id) VALUES (1), (3), (4), (5);
GO
DECLARE @t table(dest varchar(3) NOT NULL);
INSERT @t(dest) SELECT t1.s
FROM dbo.t1
INNER JOIN dbo.t2 ON t1.id = t2.id;
Result:
Msg 2628, Level 16, State 1
String or binary data would be truncated in table 'tempdb.dbo.#AC65D70E', column 'dest'. Truncated value: 'len'.
While we should have only retrieved rows with values that fit in the destination column (id is 1 or 3, since those are the only two rows that match the join criteria), the error message indicates that the row where id is 2 was also returned, even though we know it couldn't possibly have been.
Here's the estimated plan:
This shows that SQL Server expected to convert all of the values in t1 before the filter eliminated the longer ones. And it's very difficult to predict or control when SQL Server will process your query in an order you don't expect - you can try with query hints that attempt to either force order or to stay away from hash joins but those can cause other, more severe problems later.
The best fix is to size the temp table to match the source (in other words, make it large enough to fit any value from the source). The blog post and db<>fiddle explain some other ways to work around the issue, but declaring columns to be wide enough is the simplest and least intrusive.
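Applied to the simplified repro above, the fix looks like this (a minimal sketch: the table variable's column is widened to match dbo.t1.s):
DECLARE @t table(dest varchar(5) NOT NULL);   -- sized to match the source column, so the conversion can never truncate
INSERT @t(dest)
SELECT t1.s
FROM dbo.t1
INNER JOIN dbo.t2 ON t1.id = t2.id;           -- now succeeds regardless of the plan's evaluation order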

Update varbinary adds an additional zero at the beginning if the length is odd

I am trying to save a file, but because of that additional zero the file will not open when downloaded from the database.
There are many questions around this, but I could not find an answer:
Update varbinary(MAX) field in SQLServer 2012 Lost Last 4 bits
Update varbinary(MAX) column
Can someone please help me save a file as seed data that goes in as a post-deployment script in my DB project?
Adding more info to repro the issue.
CREATE TABLE [Thumbnail](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Data] [varbinary](max) NULL,
CONSTRAINT [PK_Thumbnail] PRIMARY KEY CLUSTERED
(
[Id] ASC
) ) ON [PRIMARY]
INSERT [Thumbnail] ( Data )
SELECT * FROM OPENROWSET (BULK 'img.png', SINGLE_BLOB) AS X
If I upload an image with the above script it works fine (with odd lengths as well). But when the same data is inserted as a literal, such as
insert into thumbnail (data) values(0x7364736466736466736)
an additional zero is added and I am not able to open my file back if the length is odd.
Please help.
You can try this:
I set up a mockup to simulate your issue
DECLARE @tbl TABLE(SomeValue VARCHAR(10),TheValueAsBin VARBINARY(MAX));
INSERT INTO @tbl(SomeValue) VALUES(NULL),(''),('1'),('22'),('333'),('4444');
--With this I set a second column to the corresponding VARBINARY values
UPDATE @tbl SET TheValueAsBin = CAST(SomeValue AS VARBINARY(MAX));
--Check the intermediate result
SELECT *
,DATALENGTH(TheValueAsBin) AS DataLengthOfYourBinValue
FROM @tbl
/*
SomeValue   TheValueAsBin   DataLengthOfYourBinValue
NULL        NULL            NULL
            0x              0
1           0x31            1
22          0x3232          2
333         0x333333        3
4444        0x34343434      4
*/
--Now I update the VARBINARY value with a CASE using the % (modulo) operator to switch between odd and even numbers.
UPDATE @tbl SET TheValueAsBin = TheValueAsBin + CASE WHEN DATALENGTH(TheValueAsBin)%2 = 1 THEN 0x0 ELSE 0x END
--Check the final output
SELECT *
,DATALENGTH(TheValueAsBin) AS DataLengthOfYourBinValue
FROM @tbl
The result
SomeValue   TheValueAsBin   DataLengthOfYourBinValue
NULL        NULL            NULL
            0x              0
1           0x3100          2
22          0x3232          2
333         0x33333300      4
4444        0x34343434      4
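For the literal-INSERT case in the question, the root cause is that an odd number of hex digits cannot fill whole bytes, so SQL Server pads the constant with a leading zero nibble before storing it; a quick check (my own sketch, not from the answer above):
SELECT 0x123 AS OddDigitLiteral         -- stored as 0x0123: a leading zero nibble is added
     , DATALENGTH(0x123) AS ByteCount;  -- 2 bytes
So the safest route is to generate the literal with an even number of hex digits (two per byte of the original file), rather than patching the value after the fact.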

Check constraint does not work on bulk insert for more than 250 records

My query :
INSERT into PriceListRows (PriceListChapterId,[No])
SELECT TOP 250 100943 ,N'2'
FROM #AnyTable
This query works fine and the following exception is raised, as desired:
The INSERT statement conflicted with the CHECK constraint
"CK_PriceListRows_RowNo_Is_Not_Unqiue_In_PriceList". The conflict
occurred in database "TadkarWeb", table "dbo.PriceListRows".
but after changing SELECT TOP 250 to SELECT TOP 251 (yes! just changing 250 to 251!) the query runs successfully without any check constraint exception!
Why this odd behavior?
NOTES :
My check constraint is a function which checks some sort of uniqueness. It queries about 4 tables.
I checked on both SQL Server 2012 SP2 and SQL Server 2014 SP1
** EDIT 1 **
Check constraint function:
ALTER FUNCTION [dbo].[CheckPriceListRows_UniqueNo] (
@rowNo nvarchar(50),
@rowId int,
@priceListChapterId int,
@projectId int)
RETURNS bit
AS
BEGIN
IF EXISTS (SELECT 1
FROM RowInfsView
WHERE PriceListId = (SELECT PriceListId
FROM ChapterInfoView
WHERE Id = @priceListChapterId)
AND (@rowId IS NULL OR Id <> @rowId)
AND No = @rowNo
AND (@projectId IS NULL OR
(ProjectId IS NULL OR ProjectId = @projectId)))
RETURN 0 -- Error
--It is ok!
RETURN 1
END
** EDIT 2 **
Check constraint code (what SQL Server 2012 produces):
ALTER TABLE [dbo].[PriceListRows] WITH NOCHECK ADD CONSTRAINT [CK_PriceListRows_RowNo_Is_Not_Unqiue_In_PriceList] CHECK (([dbo].[tfn_CheckPriceListRows_UniqueNo]([No],[Id],[PriceListChapterId],[ProjectId])=(1)))
GO
ALTER TABLE [dbo].[PriceListRows] CHECK CONSTRAINT [CK_PriceListRows_RowNo_Is_Not_Unqiue_In_PriceList]
GO
** EDIT 3 **
Execution plans are here : https://www.dropbox.com/s/as2r92xr14cfq5i/execution%20plans.zip?dl=0
** EDIT 4 **
RowInfsView definition is :
SELECT dbo.PriceListRows.Id, dbo.PriceListRows.No, dbo.PriceListRows.Title, dbo.PriceListRows.UnitCode, dbo.PriceListRows.UnitPrice, dbo.PriceListRows.RowStateCode, dbo.PriceListRows.PriceListChapterId,
dbo.PriceListChapters.Title AS PriceListChapterTitle, dbo.PriceListChapters.No AS PriceListChapterNo, dbo.PriceListChapters.PriceListCategoryId, dbo.PriceListCategories.No AS PriceListCategoryNo,
dbo.PriceListCategories.Title AS PriceListCategoryTitle, dbo.PriceListCategories.PriceListClassId, dbo.PriceListClasses.No AS PriceListClassNo, dbo.PriceListClasses.Title AS PriceListClassTitle,
dbo.PriceListClasses.PriceListId, dbo.PriceLists.Title AS PriceListTitle, dbo.PriceLists.Year, dbo.PriceListRows.ProjectId, dbo.PriceListRows.IsTemplate
FROM dbo.PriceListRows INNER JOIN
dbo.PriceListChapters ON dbo.PriceListRows.PriceListChapterId = dbo.PriceListChapters.Id INNER JOIN
dbo.PriceListCategories ON dbo.PriceListChapters.PriceListCategoryId = dbo.PriceListCategories.Id INNER JOIN
dbo.PriceListClasses ON dbo.PriceListCategories.PriceListClassId = dbo.PriceListClasses.Id INNER JOIN
dbo.PriceLists ON dbo.PriceListClasses.PriceListId = dbo.PriceLists.Id
The explanation is that your execution plan is using a "wide" (index by index) update plan.
The rows are inserted into the clustered index at step 1 in the plan. And the check constraints are validated for each row at step 2.
No rows are inserted into the non clustered indexes until all rows have been inserted into the clustered index.
This is because there are two blocking operators between the clustered index insert / constraints checking and the non clustered index inserts. The eager spool (step 3) and the sort (step 4). Both of these produce no output rows until they have consumed all input rows.
The plan for the scalar UDF uses the non clustered index to try and find matching rows.
At the point the check constraint runs no rows have yet been inserted into the non clustered index so this check comes up empty.
When you insert fewer rows you get a "narrow" (row by row) update plan and avoid the problem.
My advice is to avoid this kind of validation in check constraints. It is difficult to be sure that the code will work correctly in all circumstances (such as different execution plans and isolation levels), and additionally they block parallelism in queries against the table. Try to do it declaratively: a unique constraint that needs to join onto other tables can often be achieved with an indexed view (a sketch of that follows the repro below).
A simplified repro is
CREATE FUNCTION dbo.F(@Z INT)
RETURNS BIT
AS
BEGIN
RETURN CASE WHEN EXISTS (SELECT * FROM dbo.T1 WHERE Z = @Z) THEN 0 ELSE 1 END
END
GO
CREATE TABLE dbo.T1
(
ID INT IDENTITY PRIMARY KEY,
X INT,
Y CHAR(8000) DEFAULT '',
Z INT,
CHECK (dbo.F(Z) = 1),
CONSTRAINT IX_X UNIQUE (X, ID),
CONSTRAINT IX_Z UNIQUE (Z, ID)
)
--Fails with check constraint error
INSERT INTO dbo.T1 (Z)
SELECT TOP (10) 1 FROM master..spt_values;
/*I get a wide update plan for TOP (2000) but this may not be reliable
across instances so using trace flag 8790 to get a wide plan. */
INSERT INTO dbo.T1 (Z)
SELECT TOP (10) 2 FROM master..spt_values
OPTION (QUERYTRACEON 8790);
GO
/*Confirm only the second insert succeeded (Z=2)*/
SELECT * FROM dbo.T1;
DROP TABLE dbo.T1;
DROP FUNCTION dbo.F;
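As for the indexed-view alternative mentioned above, here is a hedged sketch against a hypothetical, simplified schema (it assumes PriceListId sits directly on the chapter table, which is not the poster's real layout, and it ignores the ProjectId handling):
CREATE VIEW dbo.v_PriceListRow_UniqueNo
WITH SCHEMABINDING                           -- required for an indexed view
AS
SELECT c.PriceListId, r.[No]
FROM dbo.PriceListRows AS r
INNER JOIN dbo.PriceListChapters AS c
        ON c.Id = r.PriceListChapterId;      -- only inner joins are allowed in an indexed view
GO
-- The unique clustered index does the enforcement: two rows with the same [No] in the
-- same price list can no longer coexist, and the engine checks this transactionally,
-- with none of the plan-dependent behaviour of the UDF-based check constraint.
CREATE UNIQUE CLUSTERED INDEX UX_v_PriceListRow_UniqueNo
ON dbo.v_PriceListRow_UniqueNo (PriceListId, [No]);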
It's possible that you are encountering an incorrect optimization of a query, but without having the data in all the involved tables, we cannot reproduce the bug.
However, for this kind of check, I recommend using triggers instead of check constraints based on functions. In a trigger, you can use a SELECT statement to debug why it's not working as expected. For example:
CREATE TRIGGER trg_PriceListRows_CheckUnicity ON PriceListRows
FOR INSERT, UPDATE
AS
IF @@ROWCOUNT>0 BEGIN
/*
SELECT * FROM inserted i
INNER JOIN RowInfsView r
ON r.PriceListId = (
SELECT c.PriceListId
FROM ChapterInfoView c
WHERE c.Id = i.priceListChapterId
)
AND r.Id <> i.Id
AND r.No = i.No
AND (r.ProjectId=i.ProjectId OR r.ProjectId IS NULL AND i.ProjectId IS NULL)
*/
IF EXISTS (
SELECT * FROM inserted i
WHERE EXISTS (
SELECT * FROM RowInfsView r
WHERE r.PriceListId = (
SELECT c.PriceListId
FROM ChapterInfoView c
WHERE c.Id = i.priceListChapterId
)
AND r.Id <> i.Id
AND r.No = i.No
AND (r.ProjectId=i.ProjectId OR r.ProjectId IS NULL AND i.ProjectId IS NULL)
)
) BEGIN
RAISERROR ('Duplicate rows!',16,1)
ROLLBACK
RETURN
END
END
This way, you can see what is being checked and correct your views and/or existing data.

Speed up to offset in SQL Server 2014

I have a table with about 70,000,000 rows of phone numbers. I use OFFSET to read those numbers 50 at a time,
but it takes a long time (about 1 minute).
There is a full-text index used for searching, but it does not help with the OFFSET.
How can I speed up my query?
SELECT *
FROM tblPhoneNumber
WHERE CountryID = @CountryID
ORDER BY ID
OFFSET ((@NumberCount - 1) * @PackageSize) ROWS
FETCH NEXT @PackageSize ROWS ONLY
Throw a sequence on that table, index it and fetch ranges by sequence. You could alternatively just use the ID column.
select *
FROM tblPhoneNumber
WHERE
CountryID = @CountryID
and Sequence between @NumberCount and (@NumberCount + @PackageSize)
If you're inserting/deleting frequently, this can leave gaps, so depending on the code that utilizes these batches of numbers, this might be a problem, but in general a few gaps here and there may not be a problem for you.
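Alternatively (my sketch, not part of the original answer), if the batches are always consumed in ID order, keyset pagination on the existing ID column avoids reading and discarding the skipped rows entirely. This assumes an index on (CountryID, ID) and that the caller remembers the last ID it received (@LastID below):
SELECT TOP (@PackageSize) *
FROM tblPhoneNumber
WHERE CountryID = @CountryID
  AND ID > @LastID          -- @LastID = 0 for the first batch
ORDER BY ID;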
Try using CROSS APPLY instead of OFFSET FETCH and do it all in one go. I grab TOP 2 to show you that you can grab any number of rows.
IF OBJECT_ID('tempdb..#tblPhoneNumber') IS NOT NULL
DROP TABLE #tblPhoneNumber;
IF OBJECT_ID('tempdb..#Country') IS NOT NULL
DROP TABLE #Country;
CREATE TABLE #tblPhoneNumber (ID INT, Country VARCHAR(100), PhoneNumber INT);
CREATE TABLE #Country (Country VARCHAR(100));
INSERT INTO #Country
VALUES ('USA'),('UK');
INSERT INTO #tblPhoneNumber
VALUES (1,'USA',11111),
(2,'USA',22222),
(3,'USA',33333),
(4,'UK',44444),
(5,'UK',55555),
(6,'UK',66666);
SELECT *
FROM #Country
CROSS APPLY(
SELECT TOP (2) ID,Country,PhoneNumber --Just change to TOP(50) for your code
FROM #tblPhoneNumber
WHERE #Country.Country = #tblPhoneNumber.Country
) CA
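One caveat worth adding (my note, not part of the original answer): TOP without an ORDER BY returns an arbitrary set of rows, so for repeatable batches you would normally order inside the APPLY, for example:
SELECT *
FROM #Country
CROSS APPLY(
SELECT TOP (2) ID,Country,PhoneNumber
FROM #tblPhoneNumber
WHERE #Country.Country = #tblPhoneNumber.Country
ORDER BY ID             -- makes the chosen rows deterministic
) CA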
