I am trying to prove a table design flaw in a production db: a table must not have a clustered primary key on a column that can contain random data, in this case a code keyed in by the end user.
Though we know the solution is to make the PK non-clustered, I still need to add rows to a replica of the table for testing purposes. Therefore, I need to know which characters I can use as a prefix that sort after 'Z'.
Moreover, the column is not Unicode, and it would be a mess to prefix my fake data with a series of Zs. The table currently has hundreds of thousands of rows, and each insertion takes seconds.
Just run this and go down the list. I added the sandwiching dots for clarity, especially where non-visible characters are involved.
select number, '.' + char(number) + '.' collate SQL_Latin1_General_CP1_CI_AS thechar
from master..spt_values
where type='p' and number between 28 and 255
order by thechar
There are only 4 characters that sort after 'Z', since you say the column is not N(Var)Char.
121 .y.
89 .Y.
253 .ý.
221 .Ý.
255 .ÿ.
90 .Z.
122 .z.
208 .Ð.
240 .ð.
254 .þ.
222 .Þ.
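For instance, here is a minimal sketch of how fake rows could be generated with one of those prefixes (the table and column names are hypothetical). CHAR(222) ('Þ') sorts after 'Z' in this collation, so the test rows land at the end of the clustered index instead of splitting pages in the middle:
-- dbo.YourTable and Code are placeholders for the real table and column
INSERT INTO dbo.YourTable (Code)
SELECT CHAR(222) + RIGHT('00000' + CAST(v.number AS varchar(5)), 5)
FROM master..spt_values v
WHERE v.type = 'P' AND v.number BETWEEN 1 AND 1000;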
So I'm trying to write a query that searches through varbinary data, and I can't quite finish what I'm trying to achieve. What you should know about the column: it is varbinary(50), and the patterns it holds occur in no specific order, meaning every prefix could be anywhere, as long as each group is 3 bytes (0x000000). The first byte is the prefix; the second and third are value data that I want to check against a range. All the data is written like this.
What I've tried:
DECLARE @t TABLE (
val VARBINARY(MAX)
)
INSERT INTO @t SELECT 0x00000100000000000000000000000000000000000000000000000000
INSERT INTO @t SELECT 0x00001000000000000000000000000000000000000000000000000000
INSERT INTO @t SELECT 0x00010000000000000000000000000000000000000000000000000000
INSERT INTO @t SELECT 0x00100000000000000000000000000000000000000000000000000000
INSERT INTO @t SELECT 0x00000f00000000000000000000000000000000000000000000000000
declare @pattern varbinary(max)
declare @pattern2 varbinary(max)
set @pattern = 0x0001
set @pattern2 = @pattern + 0xFF
select @pattern, @pattern2
SELECT *
FROM @t
WHERE val < @pattern
OR val > @pattern2
This was a total bust: the patterns were accurate up to 2 symbols, and if I used 4 symbols as the pattern it worked only when the pattern was in a predefined position. I've tried combinations of this and everything below.
WHERE CONVERT(varbinary(2), val) = 0xdata
also this:
select *
from table
where CONVERT(varchar(max),val,2) like '%data%'
This works great for searching exact patterns, but not for ranges; I need some combination of both.
I'm aware I could technically add every possible outcome 1 by 1 and let it cycle through all the listed possibilities, but there has to be a smarter way.
Goals:
Locating the prefix(first binary data pair)
Defining a max value after the prefix, with everything above that threshold listed in the results. Let's say '26' is the prefix and the highest allowed number after it is '9600', giving '269600'. Basically, any data that exceeds the pattern '269600' should be detected, for example '269700',
so that a query result would match this:
select * from table where CONVERT(varchar(max),attr,2) like
'%269700%'
I need something that would detect this on its own, while I just give it a start and an end to look between. The highest number variation would be '26ffff', but limiting it to something like 'ff00' is acceptable for what I'm looking for.
My best guess is 2 defined numbers, the 1st being the allowed max range
and the 2nd a cap, so it doesn't go through every possible outcome.
But I would be happy with whatever works.
I'm aware this explanation is pretty dire, but bear with me, thanks.
Update after the last suggestion:
SELECT MIN(val), MAX(val) FROM @t where CONVERT(varchar(max),val,2) like '%26%'
This is pretty close, but it's not sufficient: I need to cycle through a lot of data, and this selects only the min or max, even with the prefix filter. I believe I need min and max defined as the start and end of a range within which the query should look.
Update 2:
I'm afraid you'll end up disappointed; it's nothing that interesting.
The data originates from a game server which stores its data like this. There are predefined prefixes, which are the stat types, and the rest of the data is the actual numeric value of the stat. The data is represented in 6-hex-character (3-byte) intervals. Here is a sample of the data stream. It's always 6-6-6-6-6 for as long as there's space to record on, since the column is capped at 50 bytes.
0x0329000414000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000
Update 3:
Yes, the groups are always in 3-byte fashion, and yes, my idea was exactly that: use the first byte to narrow down the search, then use the second 2 bytes to filter it. I just don't know how to pull that off in an effective way. I'm not sure I understood what you meant by 'predictively aligned'; assuming you meant whether the stat/prefix/header always ends up at the same binary location, the answer is no. If the 3-byte pattern is violated, the data becomes unreadable, meaning that even if you don't need the extra byte you have to count it, otherwise the data breaks. An example of working data:
0x032900'041400'
example of a broken data:
0x0329'041400'
The only issue I could think of is when the prefix and part of the value both match, for example:
0x262600
Unless the query is specifically ordered to read the data in 3-byte sequences, meaning it knows that the first byte is always a prefix and the other 2 bytes are the value.
Q: Can that be used as an alignment indicator, so that the first non-zero byte after at least 3 zero bytes indicates the start of a group?
A: Yes, but that's unlikely. I mean, although it's possible, it would be written in order, like:
0x260000'270000'
It wouldn't skip forward over an entire 3-byte group filled with no data. That type of entry would only occur if someone manually inserted it into the db; as far as I'm aware, the server doesn't make records with gaps like this:
0x260000'000000'270000'
To address your last comment: that's something I don't know how to express in a working query, except for the boneheaded version, which would be me manually adding every possible number within my desired range, with +1 bit after each number. As you can imagine, the query would look terrible. That's why I'm looking for a smarter solution that I can't figure out how to build by myself.
select * from @t
where (CONVERT(varchar(max),val,2) like '%262100%' or
CONVERT(varchar(max),val,2) like '%262200%' or
etc...)
This may be a partial answer from which you can build.
The following will split the input data up into 3-byte (6 hex character) groups. It then extracts the first byte as the key, and several representations of the remaining two bytes as values.
SELECT S.*, P.*
FROM @t T
CROSS APPLY (
    SELECT
        N.Offset,
        SUBSTRING(T.val, N.Offset + 1, 3) AS Segment
    FROM (
        VALUES
            (0), (3), (6), (9), (12), (15), (18), (21), (24), (27),
            (30), (33), (36), (39)
    ) N(Offset)
    WHERE N.Offset < DATALENGTH(T.val) - 3
) S
CROSS APPLY (
    SELECT
        CONVERT(TINYINT, SUBSTRING(S.Segment, 1, 1)) AS [Key],
        CONVERT(TINYINT, SUBSTRING(S.Segment, 2, 1)) AS [Value1],
        CONVERT(TINYINT, SUBSTRING(S.Segment, 3, 1)) AS [Value2],
        CONVERT(SMALLINT, SUBSTRING(S.Segment, 2, 2)) AS [Value12],
        CONVERT(SMALLINT, SUBSTRING(S.Segment, 3, 1) + SUBSTRING(S.Segment, 2, 1)) AS [Value21]
) P
Given the following input data
0x0329000414000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000
--^-----^-----^-----^-----^-----^-----^-----^-----^-----^-----^-----^-----^-----^-----
The following results are extracted:
+--------+----------+-----+--------+--------+---------+---------+
| Offset | Segment  | Key | Value1 | Value2 | Value12 | Value21 |
+--------+----------+-----+--------+--------+---------+---------+
| 0      | 0x032900 | 3   | 41     | 0      | 10496   | 41      |
| 3      | 0x041400 | 4   | 20     | 0      | 5120    | 20      |
| 6      | 0x0B1400 | 11  | 20     | 0      | 5120    | 20      |
| 9      | 0x0C1400 | 12  | 20     | 0      | 5120    | 20      |
| 12     | 0x0D0F00 | 13  | 15     | 0      | 3840    | 15      |
| 15     | 0x177800 | 23  | 120    | 0      | 30720   | 120     |
| 18     | 0x224600 | 34  | 70     | 0      | 17920   | 70      |
| 21     | 0x467800 | 70  | 120    | 0      | 30720   | 120     |
| 24     | 0x473C00 | 71  | 60     | 0      | 15360   | 60      |
| 27     | 0x550F00 | 85  | 15     | 0      | 3840    | 15      |
| 30     | 0x000000 | 0   | 0      | 0      | 0       | 0       |
| 33     | 0x000000 | 0   | 0      | 0      | 0       | 0       |
| 36     | 0x000000 | 0   | 0      | 0      | 0       | 0       |
+--------+----------+-----+--------+--------+---------+---------+
See this db<>fiddle.
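Building on that decomposition, here is a hedged sketch of the range check the question asks for, using the @t sample from above (the prefix 0x26 and threshold 0x9600 are the example values from the question). Comparing the two value bytes as raw varbinary keeps the big-endian semantics without any integer conversion:
SELECT DISTINCT T.val
FROM @t T
CROSS APPLY (
    SELECT SUBSTRING(T.val, N.Offset + 1, 3) AS Segment
    FROM (VALUES (0), (3), (6), (9), (12), (15), (18), (21), (24), (27),
                 (30), (33), (36), (39)) N(Offset)
    WHERE N.Offset < DATALENGTH(T.val)
) S
WHERE SUBSTRING(S.Segment, 1, 1) = 0x26   -- the stat prefix
  AND SUBSTRING(S.Segment, 2, 2) > 0x9600 -- value bytes above the threshold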
DECLARE @YourTable table
(
Id INT PRIMARY KEY,
Val VARBINARY(50)
)
INSERT @YourTable
VALUES (1, 0x0329000414000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000),
(2, 0x0329002637000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000);
SELECT Id, Triplet
FROM @YourTable T
CROSS APPLY GENERATE_SERIES(1, DATALENGTH(T.Val), 3) s
CROSS APPLY (VALUES (SUBSTRING(T.Val, s.value, 3))) V(Triplet)
WHERE Triplet BETWEEN 0x263700 AND 0x2637FF
This works only on SQL Server 2022 and later, because of GENERATE_SERIES.
DECLARE @YourTable table
(
Id INT PRIMARY KEY,
Val VARBINARY(50)
)
INSERT @YourTable
VALUES (1, 0x0329000414000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000),
(2, 0x0329002637000B14000C14000D0F00177800224600467800473C00550F00000000000000000000000000);
SELECT Id, Triplet
FROM @YourTable T
JOIN (VALUES (1),(4),(7),(10),(13),(16),(19),(22),(25),(28),(31),(34),(37),(40),(43),(46),(49)) Nums(Num) ON Num <= DATALENGTH(T.Val)
CROSS APPLY (VALUES (SUBSTRING(T.Val, Num, 3))) V(Triplet)
WHERE Triplet BETWEEN 0x263700 AND 0x2637FF
This one works on older versions, without GENERATE_SERIES.
Credit goes to @Martin Smith from Stack Exchange:
https://dba.stackexchange.com/questions/323235/varbinary-pattern-search
Is there a way to make a column have a constraint of exactly so many characters? I have a string of 152 characters and want the column to accept only values that are 152 in length, not 151, not 153. I know CHAR can handle the overflow, but what about enforcing the minimum?
Add a check constraint which asserts that the length of the incoming string is exactly 152 characters:
ALTER TABLE [dbo].[YourTable] WITH CHECK
ADD CONSTRAINT [cnstr] CHECK (LEN(LTRIM([col])) = 152);
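A quick sanity check of the constraint (the table here is hypothetical; note that LEN ignores trailing spaces, which is why pairing this with a VARCHAR(152) column is the least surprising combination):
CREATE TABLE dbo.YourTable (col VARCHAR(152));
ALTER TABLE dbo.YourTable WITH CHECK
ADD CONSTRAINT cnstr CHECK (LEN(LTRIM(col)) = 152);
INSERT INTO dbo.YourTable VALUES (REPLICATE('x', 152)); -- succeeds
INSERT INTO dbo.YourTable VALUES (REPLICATE('x', 151)); -- violates the CHECK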
I am going to encrypt several fields in an existing table. Basically, the following encryption technique is going to be used:
CREATE MASTER KEY ENCRYPTION
BY PASSWORD = 'sm_long_password#'
GO
CREATE CERTIFICATE CERT_01
WITH SUBJECT = 'CERT_01'
GO
CREATE SYMMETRIC KEY SK_01
WITH ALGORITHM = AES_256 ENCRYPTION
BY CERTIFICATE CERT_01
GO
OPEN SYMMETRIC KEY SK_01 DECRYPTION
BY CERTIFICATE CERT_01
SELECT ENCRYPTBYKEY(KEY_GUID('SK_01'), 'test')
CLOSE SYMMETRIC KEY SK_01
DROP SYMMETRIC KEY SK_01
DROP CERTIFICATE CERT_01
DROP MASTER KEY
ENCRYPTBYKEY returns varbinary with a maximum size of 8,000 bytes. Knowing the table fields that are going to be encrypted (for example: nvarchar(128), varchar(31), bigint), how can I define the length of the new varbinary columns?
You can see the full specification here
So let's calculate:
16 bytes key GUID
 4 bytes header
16 bytes IV (for AES, a 16-byte block cipher)
Plus then the size of the encrypted message:
 4 bytes magic number
 2 bytes integrity bytes length
 0 bytes integrity bytes (warning: may be wrongly placed in the table)
 2 bytes (plaintext) message length
 m bytes (plaintext) message
CBC padding bytes
The CBC padding bytes should be calculated the following way:
16 - ((m + 4 + 2 + 2) % 16)
as padding is always applied. This will result in a number of padding bytes in the range 1..16. A sneaky shortcut is to just add 16 bytes to the total, but this may mean that you're specifying up to 15 bytes that are never used.
We can shorten this to 36 + 8 + m + 16 - ((m + 8) % 16), i.e. 60 + m - ((m + 8) % 16). Or, if you use the little trick specified above and don't care about the wasted bytes: 76 + m, where m is the message input size in bytes.
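As a sketch, both the exact figure and the padded shortcut can be computed in T-SQL (here m is the plaintext length in bytes, so an NVARCHAR(128) column contributes up to 256):
DECLARE @m int = 128 * 2; -- NVARCHAR(128): up to 256 bytes of plaintext
SELECT 60 + @m - ((@m + 8) % 16) AS ExactCiphertextBytes,
       76 + @m                   AS UpperBoundBytes; -- the "just add 16" shortcut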
Notes:
beware that the first byte in the header contains the version number of the scheme; this answer does not and cannot specify how many bytes will be added or removed if a different internal message format or encryption scheme is used;
using integrity bytes is highly recommended in case you want to protect your DB fields against change (keeping the amount of money in an account confidential is less important than making sure the amount cannot be changed).
The example on the page assumes single byte encoding for text characters.
Based upon some tests in SQL Server 2008, the following formula seems to work. Note that @ClearText is VARCHAR:
52 + (16 * ( ((LEN(@ClearText) + 8) / 16) ) )
This is roughly compatible with the answer by Maarten Bodewes, except that my tests showed the DATALENGTH(myBinary) to always be of the form 52 + (z * 16), where z is an integer.
LEN(myVarCharString) DATALENGTH(encryptedString)
-------------------- -----------------------------------------
0 through 7 usually 52, but occasionally 68 or 84
8 through 23 usually 68, but occasionally 84
24 through 39 usually 84
40 through 50 100
The "myVarCharString" was a table column defined as VARCHAR(50). The table contained 150,000 records. The mention of "occasionally" is an instance of about 1 out of 10,000 records that would get bumped into a higher bucket; very strange. For LEN() of 24 and higher, there were not enough records to get the weird anomaly.
Here is some Perl code that takes a proposed length for "myVarCharString" as input from the terminal and produces an expected size for the EncryptByKey() result. Perl's int() truncates, which is equivalent to Math.floor() for non-negative values.
while($len = <>) {
print 52 + ( 16 * int( ($len+8) / 16 ) ),"\n";
}
You might want to use this formula to calculate a size, then add 16 to allow for the anomaly.
Can someone please explain the behavior below?
KAP.ADMIN(ADMIN)=> create table char1 ( a char(64000),b char(1516));
CREATE TABLE
KAP.ADMIN(ADMIN)=> create table char2 ( a char(64000),b char(1517));
ERROR: 65536 : Record size limit exceeded
KAP.ADMIN(ADMIN)=> insert into char1 select * from char1;
ERROR: 65540 : Record size limit exceeded
=> Why does this error occur during insert, if create table does not throw any error for the same table, as shown above?
KAP.ADMIN(ADMIN)=> \d char1
Table "CHAR1"
Attribute | Type | Modifier | Default Value
-----------+------------------+----------+---------------
A | CHARACTER(64000) | |
B | CHARACTER(1516) | |
Distributed on hash: "A"
./nz_ddl_table KAP char1
Creating table: "CHAR1"
CREATE TABLE CHAR1
(
A character(64000),
B character(1516)
)
DISTRIBUTE ON (A)
;
/*
Number of columns 2
(Variable) Data Size 4 - 65520
Row Overhead 28
====================== =============
Total Row Size (bytes) 32 - 65548
*/
I would like to know how the row size is calculated in the above case. I checked the Netezza database user guide, but I was not able to follow the calculation in the above example.
I think this link does a good job of explaining the overhead of Netezza / PDA data types:
For every row of every table, there is a 24-byte fixed overhead of the rowid, createxid, and deletexid. If you have any nullable columns, a null vector is required and it is N/8 bytes where N is the number of columns in the record.
The system rounds up the size of
this header to a multiple of 4 bytes.
In addition, the system adds a record header of 4 bytes if any of the following is true:
Column of type VARCHAR
Column of type CHAR where the length is greater than 16 (stored internally as VARCHAR)
Column of type NCHAR
Column of type NVARCHAR
Using UTF-8 encoding, each Unicode code point can require 1 - 4 bytes of storage. A 10-character string requires 10 bytes of storage if it is ASCII and up to 20 bytes if it is Latin, or as many as 40 bytes if it is Kanji.
The only time a record does not contain a header is if all the columns are defined as NOT NULL, there are no character data types larger than 16 bytes, and no variable character data types.
https://www.ibm.com/support/knowledgecenter/SSULQD_7.2.1/com.ibm.nz.dbu.doc/c_dbuser_data_types_calculate_row_size.html
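Applying those rules to CHAR1 reproduces the figures in the nz_ddl_table comment; this is my reading of the transcript above, so treat it as a sketch:
  24 bytes     fixed per-row overhead (rowid, createxid, deletexid)
+  1 byte      null vector (2 columns / 8, rounded up); header rounded to a multiple of 4 => 28 bytes
+  4 bytes     record header (a CHAR longer than 16 is stored internally as VARCHAR)
+  65,516 bytes data (64,000 + 1,516) with both columns fully populated
=  65,548 bytes maximum row size
That matches "Total Row Size (bytes) 32 - 65548" (the minimum of 32 is the 28-byte overhead plus the 4-byte record header with both columns NULL), and the maximum exceeds the 65,535-byte row limit, which is presumably why the insert fails even though CREATE TABLE succeeded.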
First create a temp table based on one row of data.
create temp table tmptable as
select *
from Table
limit 1
Then check the used bytes of the temp table. That should be the size per row.
select used_bytes
from _v_sys_object_storage_size a inner join
_v_table b
on a.tblid = b.objid
and b.tablename = 'tmptable'
Netezza has some limitations:
1) Maximum number of characters in a char/varchar field: 64,000
2) Maximum row size: 65,535 bytes
A record length beyond 65,535 bytes is impossible in Netezza.
Though an NZ box offers huge space, it is a really good idea to proceed with accurate space forecasting rather than random sizing. For your requirement: do all the attributes really need char(64000), or can they be compacted based on analysis of the real data? If further compacting can be done, then revisit the attribute lengths.
Also, with such requirements, never go with insert into char1 select * ... statements, because this allows the system to choose its preferred data types, which will be at the higher sizing end and might not be necessary.
I'm running a performance comparison between using 1000 INSERT statements:
INSERT INTO T_TESTS (TestId, FirstName, LastName, Age)
VALUES ('6f3f7257-a3d8-4a78-b2e1-c9b767cfe1c1', 'First 0', 'Last 0', 0)
INSERT INTO T_TESTS (TestId, FirstName, LastName, Age)
VALUES ('32023304-2e55-4768-8e52-1ba589b82c8b', 'First 1', 'Last 1', 1)
...
INSERT INTO T_TESTS (TestId, FirstName, LastName, Age)
VALUES ('f34d95a7-90b1-4558-be10-6ceacd53e4c4', 'First 999', 'Last 999', 999)
...versus using a single INSERT statement with 1000 values:
INSERT INTO T_TESTS (TestId, FirstName, LastName, Age)
VALUES
('db72b358-e9b5-4101-8d11-7d7ea3a0ae7d', 'First 0', 'Last 0', 0),
('6a4874ab-b6a3-4aa4-8ed4-a167ab21dd3d', 'First 1', 'Last 1', 1),
...
('9d7f2a58-7e57-4ed4-ba54-5e9e335fb56c', 'First 999', 'Last 999', 999)
To my big surprise, the results are the opposite of what I thought:
1000 INSERT statements: 290 msec.
1 INSERT statement with 1000 VALUES: 2800 msec.
The test is executed directly in MSSQL Management Studio, with SQL Server Profiler used for measurement (and I've got similar results running it from C# code using SqlClient, which is even more surprising considering all the DAL layers' roundtrips).
Can this be reasonable or somehow explained? How come a supposedly faster method results in 10 times (!) worse performance?
Thank you.
EDIT: Attaching execution plans for both:
Addition: SQL Server 2012 shows some improved performance in this area but doesn't seem to tackle the specific issues noted below. This should apparently be fixed in the next major version after SQL Server 2012!
Your plan shows the single inserts are using parameterised procedures (possibly auto parameterised) so parse/compile time for these should be minimal.
I thought I'd look into this a bit more, though, so I set up a loop (script) and tried adjusting the number of VALUES clauses, recording the compile time.
I then divided the compile time by the number of rows to get the average compile time per clause. The results are below.
Up until 250 VALUES clauses present the compile time / number of clauses has a slight upward trend but nothing too dramatic.
But then there is a sudden change.
That section of the data is shown below.
+------+----------------+-------------+---------------+---------------+
| Rows | CachedPlanSize | CompileTime | CompileMemory | Duration/Rows |
+------+----------------+-------------+---------------+---------------+
| 245 | 528 | 41 | 2400 | 0.167346939 |
| 246 | 528 | 40 | 2416 | 0.162601626 |
| 247 | 528 | 38 | 2416 | 0.153846154 |
| 248 | 528 | 39 | 2432 | 0.157258065 |
| 249 | 528 | 39 | 2432 | 0.156626506 |
| 250 | 528 | 40 | 2448 | 0.16 |
| 251 | 400 | 273 | 3488 | 1.087649402 |
| 252 | 400 | 274 | 3496 | 1.087301587 |
| 253 | 400 | 282 | 3520 | 1.114624506 |
| 254 | 408 | 279 | 3544 | 1.098425197 |
| 255 | 408 | 290 | 3552 | 1.137254902 |
+------+----------------+-------------+---------------+---------------+
The cached plan size, which had been growing linearly, suddenly drops, but CompileTime increases 7-fold and CompileMemory shoots up. This is the cut-off point between the plan being an auto-parametrized one (with 1,000 parameters) and a non-parametrized one. Thereafter it seems to get linearly less efficient (in terms of the number of value clauses processed in a given time).
Not sure why this should be. Presumably when it is compiling a plan for specific literal values it must perform some activity that does not scale linearly (such as sorting).
Trying a query consisting entirely of duplicate rows didn't seem to affect the size of the cached query plan, nor does it affect the order of the output of the table of constants (and as you are inserting into a heap, time spent sorting would be pointless anyway, even if it did).
Moreover if a clustered index is added to the table the plan still shows an explicit sort step so it doesn't seem to be sorting at compile time to avoid a sort at run time.
I tried to look at this in a debugger but the public symbols for my version of SQL Server 2008 don't seem to be available so instead I had to look at the equivalent UNION ALL construction in SQL Server 2005.
A typical stack trace is below
sqlservr.exe!FastDBCSToUnicode() + 0xac bytes
sqlservr.exe!nls_sqlhilo() + 0x35 bytes
sqlservr.exe!CXVariant::CmpCompareStr() + 0x2b bytes
sqlservr.exe!CXVariantPerformCompare<167,167>::Compare() + 0x18 bytes
sqlservr.exe!CXVariant::CmpCompare() + 0x11f67d bytes
sqlservr.exe!CConstraintItvl::PcnstrItvlUnion() + 0xe2 bytes
sqlservr.exe!CConstraintProp::PcnstrUnion() + 0x35e bytes
sqlservr.exe!CLogOp_BaseSetOp::PcnstrDerive() + 0x11a bytes
sqlservr.exe!CLogOpArg::PcnstrDeriveHandler() + 0x18f bytes
sqlservr.exe!CLogOpArg::DeriveGroupProperties() + 0xa9 bytes
sqlservr.exe!COpArg::DeriveNormalizedGroupProperties() + 0x40 bytes
sqlservr.exe!COptExpr::DeriveGroupProperties() + 0x18a bytes
sqlservr.exe!COptExpr::DeriveGroupProperties() + 0x146 bytes
sqlservr.exe!COptExpr::DeriveGroupProperties() + 0x146 bytes
sqlservr.exe!COptExpr::DeriveGroupProperties() + 0x146 bytes
sqlservr.exe!CQuery::PqoBuild() + 0x3cb bytes
sqlservr.exe!CStmtQuery::InitQuery() + 0x167 bytes
sqlservr.exe!CStmtDML::InitNormal() + 0xf0 bytes
sqlservr.exe!CStmtDML::Init() + 0x1b bytes
sqlservr.exe!CCompPlan::FCompileStep() + 0x176 bytes
sqlservr.exe!CSQLSource::FCompile() + 0x741 bytes
sqlservr.exe!CSQLSource::FCompWrapper() + 0x922be bytes
sqlservr.exe!CSQLSource::Transform() + 0x120431 bytes
sqlservr.exe!CSQLSource::Compile() + 0x2ff bytes
So going off the names in the stack trace it appears to spend a lot of time comparing strings.
This KB article indicates that DeriveNormalizedGroupProperties is associated with what used to be called the normalization stage of query processing
This stage is now called binding or algebrizing and it takes the expression parse tree output from the previous parse stage and outputs an algebrized expression tree (query processor tree) to go forward to optimization (trivial plan optimization in this case) [ref].
I tried one more experiment (Script) which was to re-run the original test but looking at three different cases.
First Name and Last Name Strings of length 10 characters with no duplicates.
First Name and Last Name Strings of length 50 characters with no duplicates.
First Name and Last Name Strings of length 10 characters with all duplicates.
It can clearly be seen that the longer the strings the worse things get and that conversely the more duplicates the better things get. As previously mentioned duplicates don't affect the cached plan size so I presume that there must be a process of duplicate identification when constructing the algebrized expression tree itself.
Edit
One place where this information is leveraged is shown by @Lieven here:
SELECT *
FROM (VALUES ('Lieven1', 1),
('Lieven2', 2),
('Lieven3', 3))Test (name, ID)
ORDER BY name, 1/ (ID - ID)
Because at compile time it can determine that the Name column has no duplicates it skips ordering by the secondary 1/ (ID - ID) expression at run time (the sort in the plan only has one ORDER BY column) and no divide by zero error is raised. If duplicates are added to the table then the sort operator shows two order by columns and the expected error is raised.
It is not too surprising: the execution plan for the tiny insert is computed once, and then reused 1000 times. Parsing and preparing the plan is quick, because it has only four values to deal with. A 1000-row plan, on the other hand, needs to deal with 4000 values (or 4000 parameters if you parameterized your C# tests). This could easily eat up the time savings you gain by eliminating 999 roundtrips to SQL Server, especially if your network is not overly slow.
The issue probably has to do with the time it takes to compile the query.
If you want to speed up the inserts, what you really need to do is wrap them in a transaction:
BEGIN TRAN;
INSERT INTO T_TESTS (TestId, FirstName, LastName, Age)
VALUES ('6f3f7257-a3d8-4a78-b2e1-c9b767cfe1c1', 'First 0', 'Last 0', 0);
INSERT INTO T_TESTS (TestId, FirstName, LastName, Age)
VALUES ('32023304-2e55-4768-8e52-1ba589b82c8b', 'First 1', 'Last 1', 1);
...
INSERT INTO T_TESTS (TestId, FirstName, LastName, Age)
VALUES ('f34d95a7-90b1-4558-be10-6ceacd53e4c4', 'First 999', 'Last 999', 999);
COMMIT TRAN;
From C#, you might also consider using a table valued parameter. Issuing multiple commands in a single batch, by separating them with semicolons, is another approach that will also help.
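A minimal sketch of the table-valued-parameter route (the type and procedure names here are hypothetical):
CREATE TYPE dbo.TestRow AS TABLE
(TestId uniqueidentifier, FirstName varchar(50), LastName varchar(50), Age int);
GO
CREATE PROCEDURE dbo.InsertTests @Rows dbo.TestRow READONLY
AS
INSERT INTO T_TESTS (TestId, FirstName, LastName, Age)
SELECT TestId, FirstName, LastName, Age
FROM @Rows;
From C#, the whole batch is then passed as one SqlParameter with SqlDbType.Structured, so the server receives a single round trip and compiles a single plan.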
I ran into a similar situation trying to convert a table with several 100k rows with a C++ program (MFC/ODBC).
Since this operation took a very long time, I figured bundling multiple inserts into one would help (up to 1000, due to MSSQL limitations). My guess was that a lot of single insert statements would create overhead similar to what is described here.
However, it turns out that the conversion actually took quite a bit longer:
+---------+---------------+--------------+----------------+
|         |   Method 1    |   Method 2   |    Method 3    |
|         | Single Insert | Multi Insert | Joined Inserts |
+---------+---------------+--------------+----------------+
| Rows    | 1000          | 1000         | 1000           |
| Insert  | 390 ms        | 765 ms       | 270 ms         |
| per Row | 0.390 ms      | 0.765 ms     | 0.27 ms        |
+---------+---------------+--------------+----------------+
So, 1000 single calls to CDatabase::ExecuteSql each with a single INSERT statement (method 1) are roughly twice as fast as a single call to CDatabase::ExecuteSql with a multi-line INSERT statement with 1000 value tuples (method 2).
Update: So, the next thing I tried was to bundle 1000 separate INSERT statements into a single string and have the server execute that (method 3). It turns out this is even a bit faster than method 1.
Edit: I am using Microsoft SQL Server Express Edition (64-bit) v10.0.2531.0
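For reference, "method 3" amounts to concatenating many single-row statements into one string and sending that as a single batch; a sketch with placeholder values:
-- one string, one round trip, 1000 statements (only the first two shown)
INSERT INTO T_TESTS (TestId, FirstName, LastName, Age) VALUES (NEWID(), 'First 0', 'Last 0', 0);
INSERT INTO T_TESTS (TestId, FirstName, LastName, Age) VALUES (NEWID(), 'First 1', 'Last 1', 1);
-- ...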