String column Search/Replace GUIDs - sql-server

I have a SQL Profiler trace saved to a table in SQL Server.
I want to perform SUM/AVG/COUNT analysis of CPU/Reads/Duration on the queries in the trace. But most of the profiler data records calls to stored procedures with uniqueidentifier parameter(s):
EXECUTE GetTransactionCounts @BankGUID = '{231281D7-F6C2-4EAE-98AE-E9196D8016F0}', @SessionGUID='{7F34361F-CEEA-4CEA-8CBD-2704FFE92DEF}'
SELECT SUM(Total) AS Total FROM fn_BalancingAdditionsUS('{C08961DB-0B6A-4E67-A82B-5BBBA0A84A74}')
EXEC CreateCloser '{7F34361F-CEEA-4CEA-8CBD-2704FFE92DEF}', NULL , '{08E74DBB-3BC4-49A7-AA10-95AA6BD24784}'
EXECUTE GetMachineImpressmentForSession @SessionGUID = '{446881BA-1439-4AD8-B33B-C784120EFBA2}'
SELECT SUM(Total) AS Total FROM fn_BalancingAdditionsCanadian('{446881BA-1439-4AD8-B33B-C784120EFBA2}')
SELECT SUM(Total) AS Total FROM fn_BalancingSubtractionsUS('{446881BA-1439-4AD8-B33B-C784120EFBA2}')
So when I try to aggregate the profiler trace data to find the worst-performing queries:
SELECT
Description,
COUNT(*) AS EventCount,
AVG(CPU) AS CPU, SUM(CPU) AS CpuTotal,
AVG(Reads) AS Reads, SUM(Reads) AS ReadsTotal,
AVG(Duration) AS Duration, SUM(Duration) AS DurationTotal
FROM SlowQueriesTrace
GROUP BY Description
then no aggregation occurs, because every GUID is unique. What I need is some way to replace the uniqueidentifier parameters with a generic %g marker:
EXECUTE GetTransactionCounts @BankGUID = %g, @SessionGUID=%g
SELECT SUM(Total) AS Total FROM fn_BalancingAdditionsUS(%g)
EXEC CreateCloser %g, NULL , %g
EXECUTE GetMachineImpressmentForSession @SessionGUID = %g
SELECT SUM(Total) AS Total FROM fn_BalancingAdditionsCanadian(%g)
SELECT SUM(Total) AS Total FROM fn_BalancingSubtractionsUS(%g)
Then my aggregation will work.
Aside from exporting the table to Excel and hand-editing all 10,270 events, can anyone think of a way to perform GUID search-and-replace pattern matching inside SQL Server?
Other hacks I tried:
Trim the description to the first 40 characters (i.e. CAST(Description AS varchar(40))):
EXECUTE GetTransactionCounts #BankGUID =
SELECT SUM(Total) AS Total FROM fn_Balan
EXEC CreateCloser '{7F34361F-CEEA-4CEA-8
EXECUTE GetMachineImpressmentForSession
SELECT SUM(Total) AS Total FROM fn_Balan
SELECT SUM(Total) AS Total FROM fn_Balan
Except that this merges items that shouldn't be merged, while other items that should be merged are not.
Use SoundEx:
E223
S423
E220
E223
S423
Except that, as you can see, completely different lines are given the same SoundEx. I also cannot work backwards from a SoundEx code to the query it represents.
The hack I ended up using was to create a new Category column, initially NULL. I then spent two hours with carefully selected LIKE clauses to pick out a particular query and "tag" all its rows with a category, e.g.:
UPDATE QueryTrace
SET Category = 'EXECUTE GetTransactionCounts @BankGUID ='
WHERE Description LIKE 'EXECUTE GetTransactionCounts @BankGUID =%'
and
UPDATE QueryTrace
SET Category = 'SELECT SUM(Total) AS Total FROM fn_BalancingAdditionsCanadian'
WHERE Description LIKE '%FROM fn_BalancingAdditionsCanadian%'
That doesn't mean I don't still need an answer to this question.
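If I had to do it again, I would make the same hack data-driven, so each new query shape only needs a pattern row rather than another hand-written UPDATE. A sketch (the QueryCategory table is illustrative):
CREATE TABLE QueryCategory (Category varchar(100), Pattern varchar(100))

INSERT INTO QueryCategory (Category, Pattern)
SELECT 'EXECUTE GetTransactionCounts', 'EXECUTE GetTransactionCounts%'
UNION ALL
SELECT 'fn_BalancingAdditionsCanadian', '%FROM fn_BalancingAdditionsCanadian%'

-- Rows matching more than one pattern get an arbitrary winner,
-- so keep the patterns mutually exclusive.
UPDATE t
SET Category = c.Category
FROM QueryTrace t
INNER JOIN QueryCategory c ON t.Description LIKE c.Pattern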

Have you tried using ClearTrace, which performs this kind of query parameterisation/normalisation?
Another option is to use a CLR function: Determining Poorly Performing Queries for Tuning from SQL Server Workload Trace Files
Whenever you gather workload traces to identify poorly performing queries, you need to import this data into a database table, and to "normalise" and aggregate this information to identify the worst offenders. This can be done in a variety of ways. One way is to define a regular expression, such as this SQL CLR method based on work done by Itzik Ben-Gan and modified by Adam Machanic:
[Microsoft.SqlServer.Server.SqlFunction(IsDeterministic = true)]
public static SqlString sqlsig(SqlString querystring)
{
return (SqlString)Regex.Replace(
querystring.Value,
@"([\s,(=<>!](?![^\]]+[\]]))(?:(?:(?:(?:(?# expression coming
)(?:([N])?(')(?:[^']|'')*('))(?# character
)|(?:0x[\da-fA-F]*)(?# binary
)|(?:[-+]?(?:(?:[\d]*\.[\d]*|[\d]+)(?# precise number
)(?:[eE]?[\d]*)))(?# imprecise number
)|(?:[~]?[-+]?(?:[\d]+))(?# integer
)|(?:[nN][uU][lL][lL])(?# null
))(?:[\s]?[\+\-\*\/\%\&\|\^][\s]?)?)+(?# operators
)))",
@"$1$2$3#$4");
}
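If deploying CLR isn't an option, the braced-GUID format in the question is regular enough that a brute-force pure T-SQL loop can mask it too. A sketch, assuming Description is (n)varchar(max) (cast it first if the trace column is ntext) and a case-insensitive collation:
DECLARE @hex varchar(16), @guid varchar(400)
SET @hex = '[0-9A-F]'  -- one hex digit
SET @guid = '''{' + REPLICATE(@hex, 8) + '-' + REPLICATE(@hex, 4) + '-'
          + REPLICATE(@hex, 4) + '-' + REPLICATE(@hex, 4) + '-'
          + REPLICATE(@hex, 12) + '}'''  -- '{8-4-4-4-12}' wrapped in quotes

WHILE EXISTS (SELECT 1 FROM SlowQueriesTrace
              WHERE PATINDEX('%' + @guid + '%', Description) > 0)
BEGIN
    -- Each pass masks the first remaining GUID in every matching row,
    -- so rows with several parameters just take a few passes.
    UPDATE SlowQueriesTrace
    SET Description = STUFF(Description,
                            PATINDEX('%' + @guid + '%', Description),
                            40,  -- quote + brace + 36 GUID chars + brace + quote
                            '%g')
    WHERE PATINDEX('%' + @guid + '%', Description) > 0
END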
Edit by OP: I had not heard of ClearTrace. I tried it.
Edit: Did you use the right trace template to gather the trace?

Related

Power BI Microsoft SQL: Incorrect syntax near the keyword 'EXEC'. Incorrect syntax near ')' on stored proc with no parameters (DirectQuery)

I have looked at the articles on Stack Overflow about this issue.
I have also reviewed the article for calling stored procedures with parameters at https://www.c-sharpcorner.com/article/execute-sql-server-stored-procedure-with-user-parameter-in-power-bi/.
In my case, I have a stored procedure with no parameters.
I am unclear on how I would apply a fix-up to the M script in Power Query Editor to call a stored procedure with no parameters so that the stored procedure can be recognized and used by Power BI.
Could someone provide guidance for my scenario and steps below?
Scenario
I am using a Power BI with DirectQuery.
I need an ordered list of rows from my database, so I created a stored procedure in my SQL database that simply wraps a SQL SELECT statement with an ORDER BY clause.
The stored procedure has no parameters.
Steps
In SQL Server Management Studio, I create and test my stored procedure.
CREATE PROCEDURE [dbo].[pbiGetFileInfo]
AS
BEGIN
SET NOCOUNT ON;
SELECT dbo.CurrentReport.JobId AS CurrentJobId,
dbo.jobs.id AS JobId,
dbo.JobInstruments.Id AS JobInstrumentId,
dbo.JobInstruments.InstrumentDescription,
dbo.JobInstruments.Notes,
dbo.JobInstruments.Latitude,
dbo.JobInstruments.Longitude,
dbo.JobInstruments.Depth,
dbo.jobinstrumentimport.filename,
dbo.jobinstrumentimport.mindate AS FromDate,
dbo.jobinstrumentimport.maxdate AS ToDate,
DATEDIFF(hour, dbo.jobinstrumentimport.mindate, dbo.jobinstrumentimport.maxdate) AS duration_hours
FROM dbo.CurrentReport INNER JOIN
dbo.jobs ON dbo.CurrentReport.JobId = dbo.jobs.id INNER JOIN
dbo.JobInstruments ON dbo.jobs.id = dbo.JobInstruments.JobId INNER JOIN
dbo.jobinstrumentimport ON dbo.JobInstruments.Id = dbo.jobinstrumentimport.jobinstrumentid
ORDER BY JobInstruments.Id, FromDate
END
GO
In Power BI, I click the Transform Data button to launch the Power Query Editor.
Under queries, I right-click the first empty entry in the Queries pane and highlight the New Query item and click SQL Server from the context menu.
In the SQL Server Database dialog, I enter the Server and Database.
In the SQL Server Database dialog, I click the Advanced Options link to expand the dialog and show the SQL statement (optional, requires database) field.
In the SQL statement (optional, requires database), I enter EXEC [dbo].[pbiGetFileInfo] and click the OK button.
A truncated preview of the data returned by the stored procedure is displayed.
I click OK at the bottom of the preview.
A new entry Query1 appears in the Queries pane.
I right-click the new Query1 entry and rename it to pbiGetFileInfo. The M syntax that appears for the query at this point is:
= Sql.Database("Server Name", "NWBDatabase", [Query="EXEC [dbo].[pbiGetFileInfo]"])
At this point, if I click "Apply" from the Power Query Editor Ribbon, I will get the error message:
Incorrect syntax near the keyword 'EXEC'. Incorrect syntax near ')'
I click the Advanced Editor button on the toolbar. The M script for the pbiGetFileInfo query is:
let
Source = Sql.Database("Server Name", "NWBDatabase", [Query="EXEC [dbo].[pbiGetFileInfo]"])
in
Source
At this point, I am stuck.
My questions are:
The stored procedure has no parameters. Do I need to add a SQLSource prefix to the M script? If I do need a SQLSource, what would that look like?
let
SQLSource ...
let
Source = Sql.Database("Server Name", "NWBDatabase", [Query="EXEC [dbo].[pbiGetFileInfo]"])
in
Source
in
SQLSource
One thought is to create a view in SQL that calls the stored procedure. I have tried this and found that the view returns the same warning in Power BI as you would see if you tried to create a View with an ORDER BY in SQL. Calling views from Power BI is problematic at best.
Is there any way to write a stored procedure in SQL that minimizes the workarounds required to use them from Power BI?
Updates
I cannot call a stored procedure from Power BI under DirectQuery. It returns the same Incorrect syntax near 'EXEC' error message. I need to see the DAX that is created to find the source of this error.
If I try the raw SQL Select from the stored procedure that I am trying to call, I get the following error:
Microsoft SQL: The ORDER BY clause is invalid in views.
Note: this is using a straight SQL SELECT. The word VIEW does not appear in the SQL syntax at all.
A SQL Select that calls a VIEW only works if the calling outer SELECT contains a TOP (100) PERCENT clause. For example:
My view named [pbiGetFileInfo] contains the following SELECT statement:
SELECT dbo.CurrentReport.JobId AS CurrentJobId,
dbo.jobs.id AS JobId,
dbo.JobInstruments.Id AS JobInstrumentId,
dbo.JobInstruments.InstrumentDescription,
dbo.JobInstruments.Notes,
dbo.JobInstruments.Latitude,
dbo.JobInstruments.Longitude,
dbo.jobinstrumentimport.filename,
dbo.jobinstrumentimport.mindate AS FromDate,
dbo.jobinstrumentimport.maxdate AS ToDate,
DATEDIFF(hour, dbo.jobinstrumentimport.mindate, dbo.jobinstrumentimport.maxdate) AS duration_hours
FROM dbo.CurrentReport INNER JOIN
dbo.jobs ON dbo.CurrentReport.JobId = dbo.jobs.id INNER JOIN
dbo.JobInstruments ON dbo.jobs.id = dbo.JobInstruments.JobId INNER JOIN
dbo.jobinstrumentimport ON dbo.JobInstruments.Id = dbo.jobinstrumentimport.jobinstrumentid
The view itself does not contain an ORDER BY clause.
When I try to call this from a SQL SELECT statement:
SELECT * FROM [dbo].[pbiGetFileInfo] ORDER BY Id,FromDate
I get the error:
Microsoft SQL: The ORDER BY clause is invalid in views...
It works if I revise the SELECT to:
SELECT TOP (100) PERCENT * FROM [dbo].[pbiGetFileInfo] ORDER BY Id,FromDate
But I am not sure if this will work correctly in Power BI DirectQuery.
My first thought is that Power BI seems to treat everything as a SQL VIEW, so all data sources are subject to the limitations of views. None of the advantages of sorting on the SQL Server side are actually available in Power BI under DirectQuery, and if you have to set the sort order in Power BI there may be significant performance penalties.
I am experimenting with Table-Valued Functions (but have no faith that this will work in Power BI).
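For reference, this is the shape of inline table-valued function I am trying (pbiGetFileInfoFn is an illustrative name). An inline TVF cannot contain an ORDER BY itself, but unlike an EXEC it should fold like a view, with the sort living in the outer SELECT or in Power BI:
CREATE FUNCTION dbo.pbiGetFileInfoFn()
RETURNS TABLE
AS
RETURN
(
    SELECT dbo.CurrentReport.JobId AS CurrentJobId,
           dbo.jobs.id AS JobId,
           dbo.JobInstruments.Id AS JobInstrumentId,
           dbo.JobInstruments.InstrumentDescription,
           dbo.jobinstrumentimport.filename,
           dbo.jobinstrumentimport.mindate AS FromDate,
           dbo.jobinstrumentimport.maxdate AS ToDate,
           DATEDIFF(hour, dbo.jobinstrumentimport.mindate, dbo.jobinstrumentimport.maxdate) AS duration_hours
    FROM dbo.CurrentReport INNER JOIN
         dbo.jobs ON dbo.CurrentReport.JobId = dbo.jobs.id INNER JOIN
         dbo.JobInstruments ON dbo.jobs.id = dbo.JobInstruments.JobId INNER JOIN
         dbo.jobinstrumentimport ON dbo.JobInstruments.Id = dbo.jobinstrumentimport.jobinstrumentid
)
GO
-- The sort then lives in the outer statement (or in Power BI's own sort):
SELECT * FROM dbo.pbiGetFileInfoFn() ORDER BY JobInstrumentId, FromDate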

Get a list of tables involved in a process

I'd like to run a series of queries (a couple hundred ETL statements) and get a list of which tables are selected from. Is there a way to do this in snowflake? I was wondering if I could set my connection to a certain role/warehouse and pare the information down that way or some such, but am not sure what clever ways there might be to get this information.
Thank you kindly!
To obtain the SELECT statements from your ETLs:
At the start of your ETL, set the QUERY_TAG or save the SESSION_ID:
alter session set query_tag='MY_ETL'; -- Tag queries
select current_session(); -- Or save this SESSION_ID
Then filter history by QUERY_TAG:
select * from table(information_schema.query_history());
select query_text from table(result_scan(-1))
where query_type='SELECT' and query_tag='MY_ETL'
order by start_time;
or by SESSION_ID:
select * from table(information_schema.query_history_by_session(session_id=>298348393433));
select query_text from table(result_scan(-1))
where query_type='SELECT'
order by start_time;
To get the list of tables and other objects, you could then execute EXPLAIN for each SELECT statement returned above and check the OBJECTS column. (This has caveats; for example, it's based on the logical plan, not the actual execution.)
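For example, a sketch of the EXPLAIN route (customers and orders are illustrative; the tabular EXPLAIN output exposes an "objects" column that can be read back via RESULT_SCAN):
explain using tabular select c.name from customers c join orders o on o.customer_id = c.id;
select distinct "objects"
from table(result_scan(last_query_id()))
where "objects" is not null;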
If that's too heavy, a trick is to inject metadata, like table names, into comments:
select /* metadata here */ 1;
Then extract the metadata from the QUERY_TEXT:
select * from table(information_schema.query_history());
select regexp_substr(query_text, '/\\*(.*?)\\*/', 1, 1, 'e') metadata, *
from table(result_scan(-1))
where query_type='SELECT' and query_tag='MY_ETL'
order by start_time desc;
But this will miss tables buried in views and functions.
Hope that's helpful

Verify the columns (name and amount) returned by a SQL query

I have a third-party plugin in my program that executes SQL queries (mostly SELECTs). These queries must return a default column order and count, such as:
PACKAGEID (guid), REFDATE (datetime), MODIFYDATE (datetime), PROG (int)
Sometimes a query omits one of the columns specified above. To avoid further errors in the program, I would like to run a check to be sure that each executed query returns the default columns.
I've already used the SQL syntax SET NOEXEC ON and SET NOEXEC OFF, which might also be useful in this case. I'm currently using SQL Server 2008.
Any hints?
If you're able to put the result set into a temporary table, you can easily count the table's columns by using something like:
Select *
From tempdb.Information_Schema.COLUMNS
where TABLE_NAME like '%#temptable%'
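Building on that, a sketch that materializes only the query's shape (WHERE 1 = 0 keeps it cheap) and then asserts the expected columns are all present; #probe and the inner query are illustrative:
SELECT * INTO #probe FROM (
    SELECT PACKAGEID, REFDATE, MODIFYDATE, PROG FROM SomeTable
) q WHERE 1 = 0  -- shape only, no rows

IF (SELECT COUNT(*)
    FROM tempdb.INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME LIKE '#probe%'
      AND COLUMN_NAME IN ('PACKAGEID', 'REFDATE', 'MODIFYDATE', 'PROG')) < 4
    RAISERROR('Query does not return the expected columns.', 16, 1)

DROP TABLE #probe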

Why is a T-SQL variable comparison slower than GETDATE() function-based comparison?

I have a T-SQL statement that I am running against a table with many rows. I am seeing some strange behavior. Comparing a DateTime column against a precalculated value is slower than comparing each row against a calculation based on the GETDATE() function.
The following SQL takes 8 secs:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
DECLARE @TimeZoneOffset int = -(DATEPART("HH", GETUTCDATE() - GETDATE()))
DECLARE @LowerTime DATETIME = DATEADD("HH", ABS(@TimeZoneOffset), CONVERT(VARCHAR, GETDATE(), 101) + ' 17:00:00')
SELECT TOP 200 Id, EventDate, Message
FROM Events WITH (NOLOCK)
WHERE EventDate > @LowerTime
GO
This alternate strangely returns instantly:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
SELECT TOP 200 Id, EventDate, Message
FROM Events WITH (NOLOCK)
WHERE EventDate > GETDATE()-1
GO
Why is the second query so much faster?
EDITED: I updated the SQL to accurately reflect other settings I am using
After doing a lot of reading and researching, I've discovered the issue here is parameter sniffing, or rather the lack of it for local variables. SQL Server attempts to determine how best to use indexes based on the WHERE clause, but it cannot see the value of a local variable at compile time, so it doesn't do a very good job.
See the examples below :
Slow version:
declare @dNow DateTime
Select @dNow=GetDate()
Select *
From response_master_Incident rmi
Where rmi.response_date between DateAdd(hh,-2,@dNow) AND @dNow
Fast version:
Select *
From response_master_Incident rmi
Where rmi.response_date between DateAdd(hh,-2,GetDate()) AND GetDate()
The "Fast" version runs around 10x faster than the slow version. The Response_Date field is indexed and is a DateTime type.
The solution is to tell SQL Server how best to optimise the query. Modifying the example as follows to include the OPTIMIZE FOR option resulted in it using the same execution plan as the "fast" version. The OPTIMIZE FOR hint here explicitly tells SQL Server what value to assume for the local @dNow variable (as if declaring it as DateTime wasn't enough :s)
Care should be taken when doing this however because in more complicated WHERE clauses you could end up making the query perform worse than Sql Server's own optimisations.
declare @dNow DateTime
SET @dNow=GetDate()
Select ID, response_date, call_back_phone
from response_master_Incident rmi
where rmi.response_date between DateAdd(hh,-2,@dNow) AND @dNow
-- The optimizer does not know much about the variable, so it assumes it should perform a clustered index scan (on the clustered index ID) - this is slow
-- This hint tells the optimizer that the variable is indeed a datetime in this format (why it does not know that already, who knows)
OPTION(OPTIMIZE FOR (@dNow = '99991231'));
The execution plans must be different, because SQL Server does not evaluate the value of a local variable when creating the execution plan; it falls back on average statistics across all the different dates stored in the table. The GETDATE() function, on the other hand, can be evaluated when the plan is compiled, so the plan is built using statistics for that specific date, which are of course more realistic than the averages.
If you create a stored procedure with @LowerTime as a parameter, you will get better results.
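For example, a sketch (the procedure name is made up; the query is from the question):
CREATE PROCEDURE dbo.GetEventsSince @LowerTime datetime
AS
    -- As a parameter, @LowerTime can be sniffed at compile time, so the
    -- plan is built with statistics for the actual value, not averages.
    SELECT TOP 200 Id, EventDate, Message
    FROM Events WITH (NOLOCK)
    WHERE EventDate > @LowerTime
GO
On SQL Server 2008 and later, adding OPTION (RECOMPILE) to the ad-hoc statement has a similar effect, because the variable's runtime value is embedded when the statement is recompiled.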

Hidden Features of SQL Server

What are some hidden features of SQL Server?
For example, undocumented system stored procedures, tricks to do things which are very useful but not documented enough?
Answers
Thanks to everybody for all the great answers!
Stored Procedures
sp_msforeachtable: Runs a command with '?' replaced with each table name (v6.5 and up)
sp_msforeachdb: Runs a command with '?' replaced with each database name (v7 and up)
sp_who2: Just like sp_who, but with a lot more info for troubleshooting blocks (v7 and up)
sp_helptext: If you want the code of a stored procedure, view or UDF
sp_tables: Returns a list of all tables and views of the database in scope
sp_stored_procedures: Returns a list of all stored procedures
xp_sscanf: Reads data from the string into the argument locations specified by each format argument
xp_fixeddrives: Finds the fixed drive with the largest free space
sp_help: If you want to know the table structure, indexes and constraints of a table. Also views and UDFs. Shortcut is Alt+F1
Snippets
Returning rows in random order
All database User Objects by Last Modified Date
Return Date Only
Find records whose date falls somewhere inside the current week.
Find records whose date occurred last week.
Returns the date for the beginning of the current week.
Returns the date for the beginning of last week.
See the text of a procedure that has been deployed to a server
Drop all connections to the database
Table Checksum
Row Checksum
Drop all the procedures in a database
Re-map the login Ids correctly after restore
Call Stored Procedures from an INSERT statement
Find Procedures By Keyword
Query the transaction log for a database programmatically.
Functions
HashBytes()
EncryptByKey
PIVOT command
Misc
Connection String extras
TableDiff.exe
Triggers for Logon Events (New in Service Pack 2)
Boosting performance with persisted-computed-columns (pcc).
DEFAULT_SCHEMA setting in sys.database_principals
Forced Parameterization
Vardecimal Storage Format
Figuring out the most popular queries in seconds
Scalable Shared Databases
Table/Stored Procedure Filter feature in SQL Management Studio
Trace flags
Number after a GO repeats the batch
Security using schemas
Encryption using built in encryption functions, views and base tables with triggers
In Management Studio, you can put a number after a GO end-of-batch marker to cause the batch to be repeated that number of times:
PRINT 'X'
GO 10
Will print 'X' 10 times. This can save you from tedious copy/pasting when doing repetitive stuff.
A lot of SQL Server developers still don't seem to know about the OUTPUT clause (SQL Server 2005 and newer) on the DELETE, INSERT and UPDATE statement.
It can be extremely useful to know which rows have been INSERTed, UPDATEd, or DELETEd, and the OUTPUT clause allows you to do this very easily - it gives access to the "virtual" tables called inserted and deleted (as in triggers):
DELETE FROM (table)
OUTPUT deleted.ID, deleted.Description
WHERE (condition)
If you're inserting values into a table which has an INT IDENTITY primary key field, with the OUTPUT clause, you can get the inserted new ID right away:
INSERT INTO MyTable(Field1, Field2)
OUTPUT inserted.ID
VALUES (Value1, Value2)
And if you're updating, it can be extremely useful to know what changed - in this case, inserted represents the new values (after the UPDATE), while deleted refers to the old values before the UPDATE:
UPDATE (table)
SET field1 = value1, field2 = value2
OUTPUT inserted.ID, deleted.field1, inserted.field1
WHERE (condition)
If a lot of info will be returned, the output of OUTPUT can also be redirected to a temporary table or a table variable (OUTPUT INTO #myInfoTable).
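A sketch of OUTPUT ... INTO (table and column names are illustrative):
DECLARE @myInfoTable TABLE (ID int, Description varchar(200))

DELETE FROM dbo.SomeTable
OUTPUT deleted.ID, deleted.Description INTO @myInfoTable
WHERE ExpiryDate < GETDATE()

SELECT * FROM @myInfoTable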
Extremely useful - and very little known!
Marc
sp_msforeachtable: Runs a command with '?' replaced with each table name.
e.g.
exec sp_msforeachtable "dbcc dbreindex('?')"
You can issue up to 3 commands for each table
exec sp_msforeachtable
@Command1 = 'print ''reindexing table ?''',
@Command2 = 'dbcc dbreindex(''?'')',
@Command3 = 'select count (*) [?] from ?'
Also, sp_MSforeachdb
Connection String extras:
MultipleActiveResultSets=true;
This makes ADO.Net 2.0 and above read multiple, forward-only, read-only results sets on a single database connection, which can improve performance if you're doing a lot of reading. You can turn it on even if you're doing a mix of query types.
Application Name=MyProgramName
Now when you want to see a list of active connections by querying the sysprocesses table, your program's name will appear in the program_name column instead of ".Net SqlClient Data Provider"
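A quick way to confirm the name is being picked up, from the connection itself (a sketch; sys.dm_exec_sessions is the modern alternative to sysprocesses):
SELECT program_name
FROM master.dbo.sysprocesses
WHERE spid = @@SPID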
TableDiff.exe
Table Difference tool allows you to discover and reconcile differences between a source and destination table or a view. Tablediff Utility can report differences on schema and data. The most popular feature of tablediff is the fact that it can generate a script that you can run on the destination that will reconcile differences between the tables.
Link
A less known TSQL technique for returning rows in random order:
-- Return rows in a random order
SELECT
SomeColumn
FROM
SomeTable
ORDER BY
CHECKSUM(NEWID())
In Management Studio, you can quickly get a comma-delimited list of columns for a table by:
In the Object Explorer, expand the nodes under a given table (so you will see folders for Columns, Keys, Constraints, Triggers etc.)
Point to the Columns folder and drag into a query.
This is handy when you don't want to use the heinous format returned by right-clicking on the table and choosing Script Table As..., then Insert To... This trick does work with the other folders in that it will give you a comma-delimited list of the names contained within the folder.
Row Constructors
You can insert multiple rows of data with a single insert statement.
INSERT INTO Colors (id, Color)
VALUES (1, 'Red'),
(2, 'Blue'),
(3, 'Green'),
(4, 'Yellow')
If you want to know the table structure, indexes and constraints:
sp_help 'TableName'
HashBytes() to return the MD2, MD4, MD5, SHA, or SHA1 hash of its input.
Figuring out the most popular queries
With sys.dm_exec_query_stats, you can figure out many combinations of query analyses by a single query.
Link
with the command:
select * from sys.dm_exec_query_stats
order by execution_count desc
The spatial results tab can be used to create art.
http://michaeljswart.com/wp-content/uploads/2010/02/venus.png
EXCEPT and INTERSECT
Instead of writing elaborate joins and subqueries, these two keywords are a much more elegant shorthand and readable way of expressing your query's intent when comparing two query results. New as of SQL Server 2005, they strongly complement UNION which has already existed in the TSQL language for years.
The concepts of EXCEPT, INTERSECT, and UNION are fundamental in set theory which serves as the basis and foundation of relational modeling used by all modern RDBMS. Now, Venn diagram type results can be more intuitively and quite easily generated using TSQL.
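A sketch (the two order tables are illustrative):
-- Customers who ordered this year but not last year:
SELECT CustomerId FROM ThisYearOrders
EXCEPT
SELECT CustomerId FROM LastYearOrders

-- Customers present in both sets:
SELECT CustomerId FROM ThisYearOrders
INTERSECT
SELECT CustomerId FROM LastYearOrders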
I know it's not exactly hidden, but not too many people know about the PIVOT command. I was able to change a stored procedure that used cursors and took 2 minutes to run into a speedy 6 second piece of code that was one tenth the number of lines!
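A minimal sketch of the syntax (the Sales table and its columns are made up):
SELECT [Year], [1] AS Q1, [2] AS Q2, [3] AS Q3, [4] AS Q4
FROM (SELECT [Year], [Quarter], Amount FROM dbo.Sales) AS src
PIVOT (SUM(Amount) FOR [Quarter] IN ([1], [2], [3], [4])) AS p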
Useful when restoring a database for testing purposes (or whatever); re-maps the login IDs correctly:
EXEC sp_change_users_login 'Auto_Fix', 'Mary', NULL, 'B3r12-36'
Drop all connections to the database:
Use Master
Go
Declare @dbname sysname
Set @dbname = 'name of database you want to drop connections from'
Declare @spid int
Select @spid = min(spid) from master.dbo.sysprocesses
where dbid = db_id(@dbname)
While @spid Is Not Null
Begin
Execute ('Kill ' + Cast(@spid As varchar(10)))
Select @spid = min(spid) from master.dbo.sysprocesses
where dbid = db_id(@dbname) and spid > @spid
End
End
Table Checksum
Select CheckSum_Agg(Binary_CheckSum(*)) From Table With (NOLOCK)
Row Checksum
Select CheckSum_Agg(Binary_CheckSum(*)) From Table With (NOLOCK) Where Column = Value
I'm not sure if this is a hidden feature or not, but I stumbled upon this and have found it useful on many occasions: you can concatenate the values of a column in a single SELECT statement, rather than using a cursor and looping through the rows.
Example:
DECLARE @nvcConcatenated nvarchar(max)
SET @nvcConcatenated = ''
SELECT @nvcConcatenated = @nvcConcatenated + C.CompanyName + ', '
FROM tblCompany C
WHERE C.CompanyID IN (1,2,3)
SELECT @nvcConcatenated
Results:
Acme, Microsoft, Apple,
If you want the code of a stored procedure you can:
sp_helptext 'ProcedureName'
(not sure if it is hidden feature, but I use it all the time)
A stored procedure trick is that you can call them from an INSERT statement. I found this very useful when I was working on an SQL Server database.
CREATE TABLE #toto (v1 int, v2 int, v3 char(4), status char(6))
INSERT #toto (v1, v2, v3, status) EXEC dbo.sp_fulubulu @sp_param1
SELECT * FROM #toto
DROP TABLE #toto
In SQL Server 2005/2008 to show row numbers in a SELECT query result:
SELECT ( ROW_NUMBER() OVER (ORDER BY OrderId) ) AS RowNumber,
GrandTotal, CustomerId, PurchaseDate
FROM Orders
ORDER BY is a compulsory clause. The OVER() clause tells the SQL Engine to sort data on the specified column (in this case OrderId) and assign numbers as per the sort results.
Useful for parsing stored procedure arguments: xp_sscanf
Reads data from the string into the argument locations specified by each format argument.
The following example uses xp_sscanf to extract two values from a source string, based on their positions in the format of the source string.
DECLARE @filename varchar (20), @message varchar (20)
EXEC xp_sscanf 'sync -b -fproducts10.tmp -rrandom', 'sync -b -f%s -r%s',
@filename OUTPUT, @message OUTPUT
SELECT @filename, @message
Here is the result set.
-------------------- --------------------
products10.tmp random
Return Date Only
Select Cast(Floor(Cast(Getdate() As Float)) As Datetime)
or
Select DateAdd(Day, 0, DateDiff(Day, 0, Getdate()))
dm_db_index_usage_stats
This allows you to know if data in a table has been updated recently even if you don't have a DateUpdated column on the table.
SELECT OBJECT_NAME(OBJECT_ID) AS TableName, last_user_update, *
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID( 'MyDatabase')
AND OBJECT_ID=OBJECT_ID('MyTable')
Code from: http://blog.sqlauthority.com/2009/05/09/sql-server-find-last-date-time-updated-for-any-table/
Information referenced from:
SQL Server - What is the date/time of the last inserted row of a table?
Available in SQL 2005 and later
Here are some features I find useful but a lot of people don't seem to know about:
sp_tables
Returns a list of objects that can be queried in the current environment. This means any object that can appear in a FROM clause, except synonym objects.
Link
sp_stored_procedures
Returns a list of stored procedures in the current environment.
Link
Find records whose date falls somewhere inside the current week.
where dateadd( week, datediff( week, 0, TransDate ), 0 ) =
dateadd( week, datediff( week, 0, getdate() ), 0 )
Find records whose date occurred last week.
where dateadd( week, datediff( week, 0, TransDate ), 0 ) =
dateadd( week, datediff( week, 0, getdate() ) - 1, 0 )
Returns the date for the beginning of the current week.
select dateadd( week, datediff( week, 0, getdate() ), 0 )
Returns the date for the beginning of last week.
select dateadd( week, datediff( week, 0, getdate() ) - 1, 0 )
Not so much a hidden feature but setting up key mappings in Management Studio under Tools\Options\Keyboard:
Alt+F1 is defaulted to sp_help "selected text", but I cannot live without adding Ctrl+F1 for sp_helptext "selected text".
Persisted-computed-columns
Computed columns can help you shift the runtime computation cost to the data-modification phase. The computed column is stored with the rest of the row and is transparently utilized when the expression on the computed column and the query match. You can also build indexes on the PCCs to speed up filtering and range scans on the expression.
Link
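A sketch (the table and the expression are illustrative):
ALTER TABLE dbo.Orders
    ADD AmountWithTax AS (Amount * 1.08) PERSISTED

CREATE INDEX IX_Orders_AmountWithTax ON dbo.Orders (AmountWithTax)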
There are times when there's no suitable column to sort by, or you just want the default sort order on a table and you want to enumerate each row. In order to do that you can put "(select 1)" in the "order by" clause and you'd get what you want. Neat, eh?
select row_number() over (order by (select 1)), * from dbo.Table as t
Simple encryption with EncryptByKey
