SQL script to cover all possible variable combinations? Help, app, something? - sql-server

I'm sure somebody will know of an app or some website to help do this:
I need to run a script through the Database Engine Tuning Advisor, but would like to cover all or most of the possible combinations of variables for my select statement/function.
For example, I have:
@RegionID, can be any value from SELECT EntityGroup.Id FROM EntityGroup (e.g. 1,2,3,4)
@LanguageId, can be any value from SELECT Language.Id (e.g. en-GB, tr-TR)
@Group1, can be 1,2,3,4,5,6,7
and so on.
and then have something generate an SQL script such as
SELECT * From xyz (1, 'en-GB', 1)
SELECT * From xyz (1, 'tr-TR', 1)
SELECT * From xyz (2, 'en-GB', 1)
SELECT * From xyz (2, 'tr-TR', 1)
over and over with each of the possible combinations of variables.
Any tips?
thanks

You can run this query:
SELECT 'SELECT * FROM xyz (' + CAST(A.Id AS VARCHAR(10)) + ', ''' +
B.Id + ''', ' + CAST(C.Id AS VARCHAR(10)) + ')' AS Script
FROM EntityGroup A
CROSS JOIN [Language] B
CROSS JOIN [Group] C
And then copy the results to get your script. (Bear in mind that if the tables contain more than just those values, the CROSS JOIN will grow rapidly: the number of generated statements is the product of the three tables' row counts.)
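If you'd rather generate the script outside the database, the same cross-product expansion is a one-liner with itertools.product. This is a minimal sketch; the value lists are stand-ins for whatever your SELECTs return:

```python
from itertools import product

# Stand-in values; in practice these would come from
# SELECT Id FROM EntityGroup, SELECT Id FROM Language, etc.
region_ids = [1, 2]
language_ids = ['en-GB', 'tr-TR']
group_ids = [1]

# One SELECT per combination, in the same order the CROSS JOIN would emit them.
script = [
    f"SELECT * From xyz ({r}, '{lang}', {g})"
    for r, lang, g in product(region_ids, language_ids, group_ids)
]
print('\n'.join(script))
```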


Microsoft SQL Server : splitting data and cross join - counter table - stuck

I am using Microsoft SQL.
First, let me explain that the "splitter" in this SQL script works when my data comes in without all the cn's, paths, and XML data, i.e. when the data comes in as
testrole|testrole2|testrole3
I am trying to get it to work while bringing the data in raw. It will still come in pipe-delimited, but with a lot more string data after the role/access name, so I can use it later for scripting in bulk.
I want the end result to be user ID - Role, and also to show duplicate user IDs for users with more than one role:
1 - TestRole
2 - TestRole
2 - TestApp
3 - TestApp
Any ideas on how to get this to work? You can see some users will have one role and others multiple.
if OBJECT_ID ('tempdb.dbo.#RolesStage2') IS NOT NULL
DROP TABLE #RolesStage2
DECLARE @CountTable TABLE (UserID BIGINT)
INSERT INTO @CountTable
VALUES (1), (2), (3), (4)
DECLARE @TempUserObjects TABLE
(
UserID BIGINT,
AssignedRoles NVARCHAR(MAX)
)
INSERT INTO @TempUserObjects (UserID, AssignedRoles)
VALUES
(1, 'cn=TestingApp1,cn=Application,cn=Base,cn=Level2,cn=Definitions,cn=Configuration,cn=Drivers,cn=UserApplication,cn=FakePath,<assignment><start_tm>123456789575</start_tm><req_tm>1234568789854</req_tm><req>cn=FakeAdmin,ou=Admin,ou=ESC,o=authorizationpathfake</req><req_desc>Import - Child Request</req_desc></assignment>|'),
(2, 'cn=TestingApp2,cn=Application,cn=Base,cn=Level2,cn=Definitions,cn=Configuration,cn=Drivers,cn=UserApplication,cn=FakePath,<assignment><start_tm>123456789575</start_tm><req_tm>1234568789854</req_tm><req>cn=FakeAdmin,ou=Admin,ou=ESC,o=authorizationpathfake</req><req_desc>Import - Child Request</req_desc></assignment>|'),
(3, 'cn=TestingApp3,cn=Application,cn=Base,cn=Level2,cn=Definitions,cn=Configuration,cn=Drivers,cn=UserApplication,cn=FakePath,<assignment><start_tm>123456789575</start_tm><req_tm>1234568789854</req_tm><req>cn=FakeAdmin,ou=Admin,ou=ESC,o=authorizationpathfake</req><req_desc>Import - Child Request</req_desc></assignment>|cn=RoleTest3,cn=Application,cn=Base,cn=Level2,cn=Definitions,cn=Configuration,cn=Drivers,cn=UserApplication,cn=FakePath,<assignment><start_tm>123456789575</start_tm><req_tm>1234568789854</req_tm><req>cn=FakeAdmin,ou=Admin,ou=ESC,o=authorizationpathfake</req><req_desc>Import - Child Request</req_desc></assignment>'),
(4, 'cn=RoleTest1,cn=Application,cn=Base,cn=Level2,cn=Definitions,cn=Configuration,cn=Drivers,cn=UserApplication,cn=FakePath,<assignment><start_tm>123456789575</start_tm><req_tm>1234568789854</req_tm><req>cn=FakeAdmin,ou=Admin,ou=ESC,o=authorizationpathfake</req><req_desc>Import - Child Request</req_desc></assignment>|cn=RoleTest5,cn=Application,cn=Base,cn=Level2,cn=Definitions,cn=Configuration,cn=Drivers,cn=UserApplication,cn=FakePath,<assignment><start_tm>123456789575</start_tm><req_tm>1234568789854</req_tm><req>cn=FakeAdmin,ou=Admin,ou=ESC,o=authorizationpathfake</req><req_desc>Import - Child Request</req_desc></assignment>')
SELECT
*
FROM
@TempUserObjects
--Splitter Statement
SELECT
tuo.UserID,
SUBSTRING('|' + tuo.AssignedRoles + '|', ct.UserID + 1, CHARINDEX('|', '|' + tuo.AssignedRoles + '|', ct.UserID + 1) - ct.UserID - 1) AS 'NRFAssignedRoles'
INTO
#RolesStage2
FROM
@CountTable ct
CROSS JOIN
@TempUserObjects tuo
WHERE
ct.UserID < LEN('|' + tuo.AssignedRoles + '|')
AND SUBSTRING('|' + tuo.AssignedRoles + '|', ct.UserID, 1) = '|'
--Shows SubstringCharindex works
SELECT
SUBSTRING(AssignedRoles, 4, CHARINDEX(',', AssignedRoles) - 4) AS 'SubstringCharindex'
FROM
@TempUserObjects
SELECT *
FROM #RolesStage2
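For comparison, here is what the tally-table splitter is ultimately trying to compute, sketched in Python: split each AssignedRoles value on the pipe, drop empty pieces, and keep only the leading cn= component (the role/app name). The sample rows are abbreviated stand-ins for the long strings above:

```python
# Abbreviated stand-ins for the pipe-delimited sample data above.
rows = [
    (1, 'cn=TestingApp1,cn=Application,<assignment>...</assignment>|'),
    (3, 'cn=TestingApp3,cn=Application,...|cn=RoleTest3,cn=Application,...'),
]

pairs = []
for user_id, assigned in rows:
    for piece in assigned.split('|'):
        if not piece:                      # trailing '|' leaves an empty piece
            continue
        first = piece.split(',', 1)[0]     # e.g. 'cn=TestingApp1'
        pairs.append((user_id, first.removeprefix('cn=')))
print(pairs)
```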

Replace special chars with HTML entities

I have the following in table TABLE
id content
-------------------------------------
1 Hellö world, I äm text
2 ènd there äré many more chars
3 that are speçial in my dat£base
I now need to export these records into HTML files, using bcp:
set @command = 'bcp "select [content] from [TABLE] where [id] = ' +
@id + '" queryout "' + @filename + '.html" -S ' + @instance +
' -c -U ' + @username + ' -P ' + @password
exec xp_cmdshell @command, no_output
To make the output look correct, I need to first replace all special characters with their respective HTML entities (pseudo)
insert into [#temp_html] ..
replace(replace([content], 'ö', '&ouml;'), 'ä', '&auml;')
But by now, I have 30 nested replaces and it's starting to look insane.
After much searching, I found this post which uses an HTML conversion table, but it is too advanced for me to understand:
The table does not list the special chars themselves as they appear in my text (ö, à, etc.) but their Unicode hex values. Do I need to add them to the table to make the conversions that I need?
I am having trouble understanding how to update my script to replace all special chars. Can someone please show me a snippet of (pseudo) code?
One way to do that with a translation table is using a recursive cte to do the replaces, and one more cte to get only the last row of each translated value.
First, create and populate sample table (Please save us this step in your future questions):
DECLARE @T AS TABLE
(
id int,
content nvarchar(100)
)
INSERT INTO @T (id, content) VALUES
(1, 'Hellö world, I äm text'),
(2, 'ènd there äré many more chars'),
(3, 'that are speçial in my dat£base')
Then, create and populate the translation table (I don't know the HTML entities for these chars, so I've just used numbers [plus it's easier to see in the results]). Also, please note that this can be done using yet another cte in the chain.
DECLARE @Translations AS TABLE
(
str nchar(1),
replacement nvarchar(10)
)
INSERT INTO @Translations (str, replacement) VALUES
('ö', '-1-'),
('ä', '-2-'),
('è', '-3-'),
('ä', '-4-'),
('é', '-5-'),
('ç', '-6-'),
('£', '-7-')
Now, the first cte will do the replaces, and the second cte just adds a row_number so that for each id, the last value of lvl will get 1:
;WITH CTETranslations AS
(
SELECT id, content, 1 As lvl
FROM @T
UNION ALL
SELECT id, CAST(REPLACE(content, str, replacement) as nvarchar(100)), lvl+1
FROM CTETranslations
JOIN @Translations
ON content LIKE '%' + str + '%'
), cteNumberedTranslation AS
(
SELECT id, content, ROW_NUMBER() OVER(PARTITION BY Id ORDER BY lvl DESC) rn
FROM CTETranslations
)
Select from the second cte where rn = 1, I've joined the original table to show the source and translation side by side:
SELECT r.id, s.content, r.content
FROM @T s
JOIN cteNumberedTranslation r
ON s.Id = r.Id
WHERE rn = 1
ORDER BY Id
Results:
id content content
1 Hellö world, I äm text Hell-1- world, I -4-m text
2 ènd there äré many more chars -3-nd there -4-r-5- many more chars
3 that are speçial in my dat£base that are spe-6-ial in my dat-7-base
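The recursive cte is effectively applying every matching replacement until none is left; SQL needs the recursion because REPLACE handles one pair per pass. A quick sanity check of those results in Python (note a dict keeps only one mapping for the duplicated 'ä' entry, the later '-4-', which is what shows up in the output above):

```python
# Translation table mirroring @Translations (duplicate 'ä' collapses to '-4-').
translations = {'ö': '-1-', 'ä': '-4-', 'è': '-3-', 'é': '-5-',
                'ç': '-6-', '£': '-7-'}

def translate(text):
    # One replace per translation pair; the recursive cte does the same,
    # one recursion level per pair that still matches.
    for src, repl in translations.items():
        text = text.replace(src, repl)
    return text

for content in ('Hellö world, I äm text',
                'ènd there äré many more chars',
                'that are speçial in my dat£base'):
    print(translate(content))
```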
Please note that if your content has more than 100 special chars, you will need to add the maxrecursion 0 hint to the final select:
SELECT r.id, s.content, r.content
FROM @T s
JOIN cteNumberedTranslation r
ON s.Id = r.Id
WHERE rn = 1
ORDER BY Id
OPTION ( MAXRECURSION 0 );
See a live demo on rextester.

Query SQL statistics without SET parameters

I'm trying to capture some statistics for logging purposes. SET options (e.g. SET STATISTICS TIME ON) are not an option here.
So I tried to query some DMVs:
select '3AAAAAAAAAAA';
--no GO-statement here
select
total_worker_time/execution_count AS [Avg CPU Time],
total_elapsed_time as [Elapsed Time],
total_rows as [Total rows],
st.text,
(select cast(text as varchar(4000)) from ::fn_get_sql((select sql_handle from sys.sysprocesses where spid = @@spid)))
from sys.dm_exec_query_stats AS qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) AS st
--where ???
order by creation_time desc
The information captured here is almost what I need - but:
The query is only listed in the result of the DMV when it ran in the last executed GO block (not in the current one). This is not what I need. I need something like @@ERROR or @@ROWCOUNT that is available within the same GO block and holds the elapsed time and CPU time. Any ideas how to query this information for the last statement?
If this can be solved: I would like to query the "last" statement execution within the session (@@SPID) without writing the statement twice.
Update on question:
This query is working "per session" and would list the requested values (although trivial queries are missing). TOP 1 would always bring back the values of the last statement (not true if fired via exec @SQL, which produces another session):
print 'hello';
select top 10 'my personal identifier: 1', * FROM sys.messages;
select top 20 'my personal identifier: 2', * FROM sys.messages;
print 'hello';
select 'hello';
select top 30 'my personal identifier: 3', * FROM sys.tables;
select top 1
total_worker_time/execution_count AS [Avg CPU Time],
total_elapsed_time as [Elapsed Time],
total_rows as [Total rows],
substring(st.text, (qs.statement_start_offset / 2) + 1, (case when qs.statement_end_offset = -1 then datalength(st.text) else qs.statement_end_offset end - qs.statement_start_offset ) / 2 + 5) as [executing statement]
from sys.dm_exec_query_stats AS qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) AS st
where st.text = (select cast(text as varchar(4000)) from ::fn_get_sql((select sql_handle from sys.sysprocesses where spid = @@spid)))
order by qs.statement_start_offset desc;
The filter (WHERE clause) seems crude and not very robust. Is there any way to improve it?
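One thing worth keeping straight when building filters like this: statement_start_offset and statement_end_offset are byte offsets into the nvarchar batch text (UTF-16, two bytes per character), and -1 means "to the end of the batch". That is why the SUBSTRING above divides by 2. The arithmetic, sketched outside SQL with illustrative offsets:

```python
def statement_from_offsets(batch_text, start_offset, end_offset):
    # DMV offsets are in bytes into UTF-16 text: 2 bytes per character.
    start = start_offset // 2
    end = len(batch_text) if end_offset == -1 else end_offset // 2
    return batch_text[start:end]

batch = "select 1; select top 10 * from sys.messages;"
# Offsets as the DMV might report them for the second statement:
print(statement_from_offsets(batch, 20, -1))
```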
I'll try to answer myself (Jeroen Mostert, thank you very much for your help!), though the question remains unanswered (see below):
The following function should give you the CPU time, execution time, I/O, and number of rows of the last statement that was executed in the current session, provided the statement is complex enough to invoke SQL plan generation. That is, after simple print commands the result set will be empty. The same applies after the execution of stored procedures if they open a new session (e.g. after exec sp_executesql the result set will be empty).
For the "average" SQL statement, the following query should produce a rowset holding the information that you would otherwise get via set statistics time on and set statistics io on.
drop function if exists dbo.ufn_lastsql_resources ;
go
CREATE FUNCTION dbo.ufn_lastsql_resources (@session_id int)
RETURNS TABLE
AS
return
select
top 1
convert(char(10), getdate(), 121) + ' ' + substring(convert(char(40), getdate(), 121), 12,12) + ',' as [Time stamp],
cast(cast((last_worker_time / execution_count / 1000. ) as numeric(9,2)) as varchar(100)) + ',' as [Avg CPU Time in ms],
cast(cast((last_elapsed_time / 1000. ) as numeric(9,2)) as varchar(100)) + ',' as [Elapsed Time in ms],
cast(last_rows as varchar(100)) + ',' as [Total rows],
cast(substring(st.text, (statement_start_offset / 2) + 1, (case when statement_end_offset = -1 then datalength(st.text) else statement_end_offset end - statement_start_offset ) / 2 + 2) as varchar(4000)) + ','
as [executing statement],
last_physical_reads + last_logical_reads as [Reads],
last_logical_writes as [Writes],
--last_grant_kb,
--last_used_grant_kb,
--last_ideal_grant_kb,
--last_reserved_threads,
--last_used_threads
@session_id as spid
from
(
select qs.*
from sys.dm_exec_query_stats as qs
inner join sys.dm_exec_requests as eq
on qs.sql_handle = eq.sql_handle
and qs.plan_handle = eq.plan_handle
and eq.session_id = @session_id
) a
cross apply sys.dm_exec_sql_text(a.sql_handle) AS st
where
substring(st.text, (statement_start_offset / 2) + 1, (case when statement_end_offset = -1 then datalength(st.text) else statement_end_offset end - statement_start_offset ) / 2 + 2) not like '%ufn_lastsql_resources%'
order by
last_execution_time desc, statement_start_offset desc
go
Most probably there are more elegant ways to do this. Maybe it is possible to write something that will work properly even with statements that use option (recompile) or run via exec (@sql). Anyway: it seems to work on SQL Server 2016 and 2012. You need VIEW SERVER STATE permission on the server. To invoke the function, try:
drop table if exists #t1
select top 10 'statement 1' a, * into #t1 from sys.messages
select 1, * from dbo.ufn_lastsql_resources(@@spid) option (recompile)
drop table if exists #t2
select top 20 'statement 2' a, * into #t2 from sys.messages
--select 2, * from dbo.ufn_lastsql_resources(@@spid)
select top 3 'statement 3' a, * from sys.messages
select 3, * from dbo.ufn_lastsql_resources(@@spid) option (recompile)
The question remains unanswered, as this approach does not work reliably. It is not guaranteed to catch the right statement out of the batch (TOP 1 within the session, ordered by last_execution_time and last in batch; this seems to be the wrong order, but as the plans are reused it is the only way I figured out that works at all).

SQL Server regular expression: extract pattern from DB column

I have a question about SQL Server: I have a database column with a pattern which is like this:
up to 10 digits
then a comma
up to 10 digits
then a semicolon
e.g.
100000161, 100000031; 100000243, 100000021;
100000161, 100000031; 100000243, 100000021;
and I want to extract, within the pattern, the first group of digits (item 1 above) followed by the semicolon (item 4)
(or, in other words, remove everything from each comma up to the next semicolon):
100000161; 100000243; 100000161; 100000243;
Can you please advise me how to accomplish this in SQL Server? I'm not very familiar with regex and therefore have no clue how to do this.
Thanks,
Alex
Try this
Declare @Sql Table (SqlCol nvarchar(max))
INSERT INTO @Sql
SELECT '100000161,100000031;100000243,100000021;100000161,100000031;100000243,100000021;'
;WITH cte
AS (SELECT Row_number()
OVER(
ORDER BY (SELECT NULL)) AS Rno,
split.a.value('.', 'VARCHAR(1000)') AS Data
FROM (SELECT Cast('<S>'
+ Replace( Replace(sqlcol, ';', ','), ',',
'</S><S>')
+ '</S>'AS XML) AS Data
FROM @Sql) AS A
CROSS apply data.nodes('/S') AS Split(a))
SELECT Stuff((SELECT '; ' + data
FROM cte
WHERE rno%2 <> 0
AND data <> ''
FOR xml path ('')), 1, 2, '') AS ExpectedData
ExpectedData
-------------
100000161; 100000243; 100000161; 100000243
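The XML split plus the rno%2 filter is doing the equivalent of: split on ';', and from each chunk keep only what comes before the ','. As a sanity check of the expected output, in Python:

```python
s = ('100000161, 100000031; 100000243, 100000021;'
     '100000161, 100000031; 100000243, 100000021;')

# For each ';'-delimited chunk, keep only the part before the comma.
firsts = [chunk.split(',')[0].strip()
          for chunk in s.split(';') if chunk.strip()]
print('; '.join(firsts))
```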
I believe this will get you what you are after, as long as that pattern truly holds. If not, it's fairly easy to ensure the data conforms to the pattern and then apply this:
Select Substring(TargetCol, 1, 10) + ';' From TargetTable
You can take advantage of SQL Server's XML support to convert the input string into an XML value and query it with XQuery and XPath expressions.
For example, the following query will replace each ; with </b><a> and each , with </a><b>, which (after the .query('a')) turns each string into <a>100000161</a><a>100000243</a><a />. After that, you can select individual <a> nodes with /a[1], /a[2]:
declare @table table (it nvarchar(200))
insert into @table values
('100000161, 100000031; 100000243, 100000021;'),
('100000161, 100000031; 100000243, 100000021;')
select
xCol.value('/a[1]','nvarchar(200)'),
xCol.value('/a[2]','nvarchar(200)')
from (
select convert(xml, '<a>'
+ replace(replace(replace(it,';','</b><a>'),',','</a><b>'),' ','')
+ '</a>')
.query('a') as xCol
from @table) as tmp
-------------------------
A1 A2
100000161 100000243
100000161 100000243
value extracts a single value from an XML field. nodes returns a table of nodes that match the XPath expression. The following query will return all "keys" :
select
a.value('.','nvarchar(200)')
from (
select convert(xml, '<a>'
+ replace(replace(replace(it,';','</b><a>'),',','</a><b>'),' ','')
+ '</a>')
.query('a') as xCol
from @table) as tmp
cross apply xCol.nodes('a') as y(a)
where a.value('.','nvarchar(200)')<>''
------------
100000161
100000243
100000161
100000243
With 200K rows of data though, I'd seriously consider transforming the data when loading it and storing it in individual, indexable columns, or adding a separate, related table. Applying string-manipulation functions to a column means that the server can't use any covering indexes to speed up queries.
If that's not possible (why?) I'd consider at least adding a separate XML-typed column that would contain the same data in XML form, to allow the creation of an XML index.

Problems with Data Script Generation

I very rarely use SQL Server, and in a professional context I keep well clear! I'm working on a pet project though, and I'm having problems with script creation.
I've got an online database that I need to extract everything out of. I use the Tasks > Generate Scripts option within SQL Server Management Studio. The following is an example of one insert statement the script creates (I have 1,000s of these inserts):
INSERT [dbo].[NewComics] ([NewComicId], [Title], [Subtitle], [ReleaseDate], [CollectionId]) VALUES (366, N'Hawk & Dove 1: ', N'First Strikes ', CAST(0x00009F6F00000000 AS DateTime), 248)
I have two issues with this:
(a) I want to strip all the whitespace from the two title elements
(b) I don't want a HEX date - I want something readable like 2006-09-01 (yyyy-mm-dd)
INSERT [dbo].[NewComics] ([NewComicId], [Title], [Subtitle], [ReleaseDate], [CollectionId]) VALUES (366, N'Hawk & Dove 1:', N'First Strikes', '2006-09-01', 248)
What would be the quickest way to change about 3,000 insert statements to this revised format?
FYI - this is the design of the table:
[NewComicId] [int] NOT NULL,
[Title] [nchar](100) NOT NULL,
[Subtitle] [nchar](100) NULL,
[ReleaseDate] [datetime] NOT NULL,
[CollectionId] [int] NOT NULL,
Thanks in advance!
Yes, Generate Scripts sadly scripts datetime columns as CAST(binary_value AS DateTime). I'll try to get an answer as to why (or, more importantly, whether there is a way to change the behavior). I suspect the reason is to avoid any issues with running the scripts on a different machine with different locale/regional settings etc. I don't know if there's a way to prevent that in the meantime, but Management Studio isn't the only way to script your data... you could look into 3rd-party products like Red Gate's SQL Data Compare.
If it's really only 3,000 rows, and you intend to run the generated script on a different server, stop using the wizard and do this (on first glance this looks horrific, but it does several of the things you'll want - outputs a script ready to copy, paste and run, with nicely formatted and readable dates, inserts batched into multi-row VALUES by 1000 with GO commands in between, and even deals with potentially NULL values in title, subtitle and collectionid):
DECLARE @newtable SYSNAME = 'dbo.NewComics';
SET NOCOUNT ON;
;WITH x AS (SELECT TOP (4000) s = '('
+ CONVERT(VARCHAR(12), NewComicId) + ','
+ COALESCE('N''' + REPLACE(RTRIM(Title), '''', '''''') + '''', 'NULL') + ','
+ COALESCE('N''' + REPLACE(RTRIM(SubTitle), '''', '''''') + '''', 'NULL')
+ ', ''' + CONVERT(CHAR(8), ReleaseDate, 112) + ' '
+ CONVERT(CHAR(8), ReleaseDate, 108) + ''','
+ COALESCE(CONVERT(VARCHAR(12), CollectionId), 'NULL') + ')',
rn = ROW_NUMBER() OVER (ORDER BY NewComicId)
FROM dbo.OldComics ORDER BY NewComicId
),
y AS
(
SELECT [/*a*/] = 1, [/*b*/] = 'SET NOCOUNT ON;
GO
INSERT ' + @newtable + ' VALUES'
UNION ALL
SELECT 2, s = CASE WHEN rn > 1 THEN ',' ELSE '' END + s
FROM x WHERE rn BETWEEN 1 AND 1000
UNION ALL
SELECT 3, 'GO' UNION ALL
SELECT 4, s = CASE WHEN rn > 1001 THEN ',' ELSE '' END + s
FROM x WHERE rn BETWEEN 1001 AND 2000
UNION ALL
SELECT 5, 'GO' UNION ALL
SELECT 6, s = CASE WHEN rn > 2001 THEN ',' ELSE '' END + s
FROM x WHERE rn BETWEEN 2001 AND 3000
UNION ALL
SELECT 7, 'GO' UNION ALL
SELECT 8, s = CASE WHEN rn > 3001 THEN ',' ELSE '' END + s
FROM x WHERE rn BETWEEN 3001 AND 4000
)
SELECT [/*b*/] FROM y ORDER BY [/*a*/];
(You might have to play with it if you have exactly 3000 or 3001 rows, or add another couple of unions if you have more than 4000, etc.)
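The shape of that generator, minus the SQL string-gluing, is easier to see outside T-SQL. This sketch batches pre-rendered value tuples into multi-row INSERT ... VALUES statements of at most 1000 rows (the limit for a single T-SQL VALUES clause), separated by GO; the names are illustrative:

```python
def script_inserts(table, value_rows, batch_size=1000):
    # value_rows are already-rendered tuples, e.g. "(366, N'Hawk & Dove 1:', ...)"
    lines = ['SET NOCOUNT ON;', 'GO']
    for i in range(0, len(value_rows), batch_size):
        lines.append(f'INSERT {table} VALUES')
        lines.append(',\n'.join(value_rows[i:i + batch_size]))
        lines.append('GO')
    return '\n'.join(lines)

print(script_inserts('dbo.NewComics', [f'({n})' for n in range(2500)]))
```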
If you are moving the data to a different table or different database on the same instance, use the script that @swasheck provided (and again, stop using the wizard).
You may have noticed a common trend here: stop using the generate scripts wizard if you don't like the binary format it outputs for dates.
So if this was me, what I'd do would be to build up the table structure in a separate database:
CREATE TABLE NewComics (
[NewComicId] [int] identity (0,1) NOT NULL,
[Title] [nvarchar](100) NOT NULL,
[Subtitle] [nvarchar](100) NULL,
[ReleaseDate] [datetime] NOT NULL,
[CollectionId] [int] NOT NULL
);
ALTER TABLE [NewComics]
ADD CONSTRAINT PK_NewComicsID PRIMARY KEY (NewComicId);
And then use SQL to clean the data like so:
INSERT INTO [NewDatabase].[dbo].[NewComics] (Title, Subtitle, ReleaseDate, CollectionID)
SELECT
LTRIM(RTRIM(Title))
, LTRIM(RTRIM(Subtitle))
, CAST(ReleaseDate as Datetime)
, CollectionID
FROM [OldDatabase].[dbo].[NewComics];
Alternatively, you can use this same SELECT query:
SELECT
NewComicID
, LTRIM(RTRIM(Title))
, LTRIM(RTRIM(Subtitle))
, CAST(ReleaseDate as Datetime)
, CollectionID
FROM [OldDatabase].[dbo].[NewComics];
as the source for an Import/Export Data task (in the same menu that you used to Generate Scripts). [OldDatabase] on this server would be the source and [NewDatabase] on this server would be the destination. Make sure you check the box to allow identity inserts.
