I am working on a VB.NET application, and management wants me to change the application's data source from SQL Server to XML.
I have a class called WebData.vb in the old application, and I need to find a way to replace the stored procedure calls in it and make it read XML. So I was thinking of getting the XML structure from the result set the stored procedure returns. I looked online, and for a normal SELECT statement you can do something like this:
FOR xml path ('Get_Order'),ROOT ('Get_Orders')
I am looking for something like
EXEC dbo.spMML_GET_ORDERS_FOR_EXPORT
FOR xml path ('Get_Order'),ROOT ('Get_Orders')
so that once I have the structure, I can pass that data to a DataTable and then return that DataTable to the method.
Also, if there is an alternative way of creating an XML stored procedure, please let me know. Thanks, coders.
Assuming you can't modify the stored proc (due to other dependencies or some other reason) to add the FOR XML syntax to the SELECT inside the proc, you can use INSERT/EXEC to insert the results of the stored proc into a temp table or table variable, then apply FOR XML to a query of those results.
Something like this should work:
DECLARE @Data TABLE (...) -- Define table to match results of stored proc
INSERT @Data
EXEC dbo.spMML_GET_ORDERS_FOR_EXPORT
SELECT * FROM @Data FOR XML PATH ('Get_Order'), ROOT ('Get_Orders')
There are a few methods. One is adding namespaces using WITH XMLNAMESPACES('<uri>' AS <prefix>). XMLNAMESPACES can embed appropriate XML markers into your output for use by other applications (which hopefully is a factor here), making documentation a little easier.
Depending on your application's use, you can use FOR XML {RAW, PATH, AUTO, or EXPLICIT} in your query, as well as XQuery methods, but for your needs, stick to a simpler method like FOR XML PATH or FOR XML AUTO.
FOR XML PATH is very flexible; however, you lose the straightforward identification of the column datatypes.
XMLNAMESPACE
WITH XMLNAMESPACES('dbo.MyTableName' AS SQL)
SELECT DENSE_RANK() OVER (ORDER BY Name ASC) AS 'Management_ID'
, Name AS [Name]
, Began AS [Team/#Began]
, Ended AS [Team/#Ended]
, Team AS [Team]
, [Role]
FROM dbo.SSIS_Owners
FOR XML PATH, ELEMENTS, ROOT('SQL')
XML AUTO
Because you might want to return the data to the database, I suggest using XML AUTO with XMLSCHEMA, where the SQL datatypes are kept in the XML.
SELECT DENSE_RANK() OVER (ORDER BY Name ASC) AS 'Management_ID'
, Name AS [Name]
, Began AS [Team/#Began]
, Ended AS [Team/#Ended]
, Team AS [Team]
, [Role]
FROM dbo.SSIS_Owners
FOR XML AUTO, ELEMENTS, XMLSCHEMA('SSIS_Owners')
The downside is that XMLNAMESPACES is not an option with XMLSCHEMA, but you can get around this through solutions like XML SCHEMA COLLECTIONS, or in the query itself as I showed.
You can also just use XML PATH directly without the namespace, but again, that depends on your application use as you are transforming everything to XML files.
Also note how I defined the embedded attributes. A learning point here, but think about the query in the same order that the XML would appear. That is why I defined the variable attributes first before I then stated what the text for that node was.
Lastly, I think you'll find Paparazzi has a question on this topic that covers quite a bit: TSQL FOR XML PATH Attribute On , Type
I am using SQL Server 2012. I guess what I am asking is: should I continue down the path of researching how to create a stored procedure (or a UDF, but with #temp tables probably involved, I was thinking SP) in order to have a reusable object for determining the median?
I hope this isn't too generic of a question and that it isn't hosed, but I have spent some time researching the ability to determine a median value. One possible hurdle is the need to pass in a string representation of the query that will return the data I wish to compute the median on.
Has anyone attempted this in the past?
Here is a stored proc I use to generate some quick stats.
Simply pass a Source, Measure and/or Filter.
CREATE PROCEDURE [dbo].[prc-Dynamic-Stats](@Table varchar(150), @Fld varchar(50), @Filter varchar(500))
-- Syntax: Exec [dbo].[prc-Dynamic-Stats] '[Chinrus-Series].[dbo].[DS_Treasury_Rates]','TR_Y10','Year(TR_Date)>2001'
As
Begin
Set NoCount On;
Declare @SQL varchar(max) =
'
;with cteBase as (
Select RowNr=Row_Number() over (Order By ['+@Fld+'])
,Measure = ['+@Fld+']
From '+@Table+'
Where '+case when @Filter='' then '1=1' else @Filter end+'
)
Select RecordCount = Count(*)
,DistinctCount = Count(Distinct A.Measure)
,SumTotal = Sum(A.Measure)
,Minimum = Min(A.Measure)
,Maximum = Max(A.Measure)
,Mean = Avg(A.Measure)
,Median = Max(B.Measure)
,Mode = Max(C.Measure)
,StdDev = STDEV(A.Measure)
From cteBase A
Join (Select Measure From cteBase where RowNr=(Select Cnt=count(*) from cteBase)/2) B on 1=1
Join (Select Top 1 Measure,Hits=count(*) From cteBase Group By Measure Order by 2 desc ) C on 1=1
'
Exec(@SQL)
End
Returns
RecordCount DistinctCount SumTotal Minimum Maximum Mean Median Mode StdDev
3615 391 12311.81 0.00 5.44 3.4057 3.57 4.38 1.06400795277565
You may want to take a look at a response that I had to this post. In short, if you're comfortable with C# or VB .NET, you could create a user defined CLR aggregate. We use CLR implementations for quite a few things, especially statistical methods that you may see in other platforms like SAS, R, etc.
This is easily accomplished by creating a User-Defined Aggregate (UDA) via SQLCLR. If you want to see how to do it, or even just download the UDA, check out the article I wrote about it on SQL Server Central: Getting The Most Out of SQL Server 2005 UDTs and UDAs (please note that the site requires free registration in order to read their content).
Or, it is also available in the Free version of the SQL# SQLCLR library (which I created, but again, it is free) available at http://SQLsharp.com/. It is called Agg_Median.
If using SQL Server 2008 or newer (which you are), you can write a function that accepts a table-valued parameter as input.
Create Type MedianData As Table ( DataPoint Int )
Create Function CalculateMedian ( @MedianData MedianData ReadOnly )
Returns Int
As
Begin
    -- Illustrative body (a sketch): average the middle one or two values of the ordered set
    Return ( Select Avg(DataPoint)
             From ( Select DataPoint,
                           Row_Number() Over (Order By DataPoint) As RowNr,
                           Count(*) Over () As Cnt
                    From @MedianData ) x
             Where x.RowNr In ((x.Cnt + 1) / 2, (x.Cnt + 2) / 2) )
End
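A hypothetical usage example, assuming the sketch body above: populate a variable of the table type and pass it to the function.
Declare @Data MedianData
Insert @Data (DataPoint) Values (3), (1), (4), (2)
Select dbo.CalculateMedian(@Data) -- 2 here, since Avg of Int values (2, 3) truncates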
I have the following scenario/requirements, and I am not sure of the best way to address them to perform in the fastest way possible. I'm looking for some guidance on which features to use, and examples of them, if available.
I will receive anywhere between 10k and 100k entities (in XML format) from a web service that I want to upsert (some rows might exist, others might not).
Here are some of the requirements:
The source of the XML is a web service that I'm calling from C# code (actually two different methods). For one of the methods, the return schema is flat and I can map it directly to one of my tables. For the other, it returns an XML representation that I might need to work with in C# in order to map it to flat entities for my tables. In that scenario, would it be best to do the modifications needed and then write the result out as an XML file to use as the source?
The returned XML can contain up to 150k entities that may or may not exist in my tables yet, so I'm looking to upsert them. The files, when written to disk, can weigh up to 20 megabytes. I asked if they could do JSON instead of XML, but apparently that's not a choice.
The SQL database is on a different server than my IIS server, so I'd rather avoid having SQL Server retrieve the XML from a file; I'd rather pass it from C# as a string or as a table-valued parameter.
The tables are rather simple and don't have indexes other than the PK ones.
I've never been big on XML, although it got way easier with LINQ to XML. I was initially using it to parse each record and send individual inserts, but the performance was just bad. So, based on some research I've been doing, I'm thinking I could:
Upsert from SQL Server through MERGE statements.
Pass the whole XML as a parameter and use OPENXML to use as source in the MERGE statement.
Or, somehow generate a Table Value Parameter in C# and pass that to SQL to use on the MERGE.
I read in this similar question (which didn't have access to upsert/merge) that instead of trying to upsert directly from the XML, it might be better to insert everything into a temporary table and do the merge/upsert against the temporary table.
Would this work and be considerably fast?
If anyone has had a similar scenario, can you share your thoughts/ideas about what combination of features would be best?
Thanks.
You are on the right track. I have a similar setup using XML to transfer data between an online portal and the client-server application. The rest of the setup is very similar to what you have.
The fact that your tables are not indexed is a bit of a concern if you are comparing any fields that are not PK fields, regardless of how you index the temp tables. It is important to have either one index with all of the fields used in the merge match clause, or an index for each of them; I find the former yields better performance. Beyond that, using an XML parameter, OpenXML, and temp tables is the way to go.
The following code has not been tested, so it may need a bit of debugging, but it will put you on the right track. A couple of notes: if all of the fields in the OpenXML WITH clause are attributes, then you can drop the last parameter (i.e. ", 2") and the field source specifiers (i.e. "@id" for the detail table). Although the data in your description is flat, in which case you will only need one table, I often need to import into linked records, so I have included a simple master-detail relationship example in the code below for the sake of completeness.
CREATE PROCEDURE usp_ImportFromXML (@data XML) AS
BEGIN
/*
<root>
<data>
<match_field_1>1</match_field_1>
<match_field_2>val2</match_field_2>
<data_1>val3</data_1>
<data_2>val4</data_2>
<detail_records>
<detail_data id="detailID1">
<detail_1>blah1</detail_1>
<detail_2>blah2</detail_2>
</detail_data>
<detail_data id="detailID2">
<detail_1>blah3</detail_1>
<detail_2>blah4</detail_2>
</detail_data>
</detail_records>
</data>
<data>
...
</data>
</root>
*/
DECLARE @iDoc INT
EXEC sp_xml_preparedocument @iDoc OUTPUT, @data
SELECT * INTO #temp
FROM OpenXML(@iDoc, '/root/data', 2) WITH (
match_field_1 INT,
match_field_2 VARCHAR(50),
data_1 VARCHAR(50),
data_2 VARCHAR(50)
)
SELECT * INTO #detail
FROM OpenXML(@iDoc, '/root/data/detail_data', 2) WITH (
match_field_1 INT '../../match_field_1',
match_field_2 VARCHAR(50) '../../match_field_2',
detail_id VARCHAR(50) '@id',
detail_1 VARCHAR(50),
detail_2 VARCHAR(50)
)
EXEC sp_xml_removedocument @iDoc
CREATE INDEX IX_temp ON #temp(match_field_1, match_field_2)
CREATE INDEX IX_detail ON #detail(match_field_1, match_field_2, detail_id)
MERGE data_table a
USING #temp ta
ON ta.match_field_1 = a.match_field_1 AND ta.match_field_2 = a.match_field_2
WHEN MATCHED THEN
UPDATE SET data_1 = ta.data_1, data_2 = ta.data_2
WHEN NOT MATCHED THEN
INSERT (match_field_1, match_field_2, data_1, data_2) VALUES (ta.match_field_1, ta.match_field_2, ta.data_1, ta.data_2);
MERGE detail_table a
USING (SELECT d.*, p._key FROM #detail d, data_table p WHERE d.match_field_1 = p.match_field_1 AND d.match_field_2 = p.match_field_2) ta
ON a.id = ta.detail_id AND a.parent_key = ta._key
WHEN MATCHED THEN
UPDATE SET detail_1 = ta.detail_1, detail_2 = ta.detail_2
WHEN NOT MATCHED THEN
INSERT (parent_key, id, detail_1, detail_2) VALUES (ta._key, ta.detail_id, ta.detail_1, ta.detail_2);
DROP TABLE #temp
DROP TABLE #detail
END
Use (3). Process the data ready for upsert in C#. C# is made for this kind of algorithmic work; it is both the right programming language and the faster one. T-SQL is not the right tool. You do not want to use XML with T-SQL for very high performance work because it burns CPU like crazy. Instead, use the fast TDS protocol to send a TVP or bulk data.
Then send the data to the server using either a TVP or a bulk insert (SqlBulkCopy) into a temp table. The latter technique is great for very many rows (>10k?). Bulk insert uses special TDS features; it does not use SQL batches to transfer the data. It does not get faster than this.
Then use the MERGE statement as you described. Use big batch sizes, potentially all rows in one batch.
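A minimal T-SQL sketch of the TVP-plus-MERGE approach (all object names here are hypothetical): define a table type, then have the stored procedure merge the incoming rows. From C# you would populate the TVP through a SqlParameter of SqlDbType.Structured.
CREATE TYPE dbo.OrderImport AS TABLE (ExternalId INT PRIMARY KEY, Amount DECIMAL(18,2))
GO
CREATE PROCEDURE dbo.usp_UpsertOrders (@rows dbo.OrderImport READONLY) AS
BEGIN
    -- Upsert all incoming rows in one statement; MERGE must end with a semicolon
    MERGE dbo.Orders AS t
    USING @rows AS s
    ON t.ExternalId = s.ExternalId
    WHEN MATCHED THEN
        UPDATE SET Amount = s.Amount
    WHEN NOT MATCHED THEN
        INSERT (ExternalId, Amount) VALUES (s.ExternalId, s.Amount);
END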
The best way I've found is to bulk insert into a temp table from your C# code, then issue the merge once the data is in SQL Server. I have an example here on my blog SQL Server Bulk Upsert
I use this in production to insert millions of rows daily, and have yet to find a faster way to do it. Give it a try, I think you will be impressed with the performance of the solution.
I've currently got a C# application that responds to HTTP requests. The body of the HTTP request (XML) is passed to SQL Server, at which time the database engine performs the correct instruction. One of the instructions is used to load information about Invoices using the id of the customer(InvoiceLoad):
<InvoiceLoad ControlNumber="12345678901">
<Invoice>
<CustomerID>johndoe@gmail.com</CustomerID>
</Invoice>
</InvoiceLoad>
I need to perform a SELECT operation against the invoice table (which contains the associated email address).
I've tried using:
SELECT [Date], [Status], [Location]
FROM Invoices
WHERE Email_Address = Invoice.A.value('.', 'varchar(100)')
using an xml.nodes('InvoiceLoad/Invoice/CustomerId') Invoice(A) clause.
However, as this query may run THOUSANDS of times per minute, I want to make it as fast as possible. I'm hearing that one way to do this may be to use CROSS APPLY (which I have never used). Is that the solution? If not, how exactly would I go about making this query as fast as possible? Any and all suggestions are greatly appreciated!
I don't see why you would need a call to .nodes() at all - from what I understand, each XML fragment has just a single entry - right?
So given this XML:
<InvoiceLoad ControlNumber="12345678901">
<Invoice>
<CustomerID>johndoe@gmail.com</CustomerID>
</Invoice>
</InvoiceLoad>
you can use this SQL to get the value of the <CustomerID> node:
DECLARE @xmlvar XML
SET @xmlvar = '<InvoiceLoad ControlNumber="12345678901">
<Invoice>
<CustomerID>johndoe@gmail.com</CustomerID>
</Invoice>
</InvoiceLoad>'
SELECT
@xmlvar.value('(/InvoiceLoad/Invoice/CustomerID)[1]', 'varchar(100)')
and you can join this against your customer table or whatever you need to do.
If you have the XML stored in a table, and you always need to extract that value from <CustomerID>, you could also think about creating a computed, persisted column on that table that would extract that e-mail address into a separate column, which you could then use for easy joining. This requires a little bit of work - a stored function taking the XML as input - but it's really quite a nice way to "surface" certain important snippets of data from your XML.
Step 1: create your function
CREATE FUNCTION dbo.ExtractCustomer (@Input XML)
RETURNS VARCHAR(255)
WITH SCHEMABINDING
AS BEGIN
DECLARE @Result VARCHAR(255)
SELECT
@Result = @Input.value('(/InvoiceLoad/Invoice/CustomerID)[1]', 'varchar(255)')
RETURN @Result
END
So given your XML, you get the one <CustomerID> node and extract its "inner text" and return it as a VARCHAR(255).
Step 2: add a computed, persisted column to your table
ALTER TABLE dbo.YourTableWithTheXML
ADD CustomerID AS dbo.ExtractCustomer(YourXmlColumnHere) PERSISTED
Now, your table that has the XML column has a new column - CustomerID - which will automagically contain the contents of the <CustomerID> as a VARCHAR(255). The value is persisted, i.e. as long as the XML doesn't change, it doesn't have to be re-computed. You can use that column like any other on your table, and you can even index it to speed up any joins on it!
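For instance, a quick usage sketch (table and column names assumed from the example above):
-- Filter or join directly on the surfaced column instead of shredding the XML per row
SELECT i.*
FROM dbo.YourTableWithTheXML i
WHERE i.CustomerID = 'johndoe@gmail.com'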
What are some hidden features of SQL Server?
For example, undocumented system stored procedures, tricks to do things which are very useful but not documented enough?
Answers
Thanks to everybody for all the great answers!
Stored Procedures
sp_msforeachtable: Runs a command with '?' replaced with each table name (v6.5 and up)
sp_msforeachdb: Runs a command with '?' replaced with each database name (v7 and up)
sp_who2: just like sp_who, but with a lot more info for troubleshooting blocks (v7 and up)
sp_helptext: shows the code of a stored procedure, view, or UDF
sp_tables: returns a list of all tables and views of the database in scope
sp_stored_procedures: returns a list of all stored procedures
xp_sscanf: reads data from the string into the argument locations specified by each format argument
xp_fixeddrives: finds the fixed drive with the largest free space
sp_help: shows the table structure, indexes, and constraints of a table (also works for views and UDFs); the shortcut is Alt+F1
Snippets
Returning rows in random order
All database User Objects by Last Modified Date
Return Date Only
Find records whose date falls somewhere inside the current week.
Find records whose date occurred last week.
Returns the date for the beginning of the current week.
Returns the date for the beginning of last week.
See the text of a procedure that has been deployed to a server
Drop all connections to the database
Table Checksum
Row Checksum
Drop all the procedures in a database
Re-map the login Ids correctly after restore
Call Stored Procedures from an INSERT statement
Find Procedures By Keyword
Query the transaction log for a database programmatically.
Functions
HashBytes()
EncryptByKey
PIVOT command
Misc
Connection String extras
TableDiff.exe
Triggers for Logon Events (New in Service Pack 2)
Boosting performance with persisted-computed-columns (pcc).
DEFAULT_SCHEMA setting in sys.database_principals
Forced Parameterization
Vardecimal Storage Format
Figuring out the most popular queries in seconds
Scalable Shared Databases
Table/Stored Procedure Filter feature in SQL Management Studio
Trace flags
Number after a GO repeats the batch
Security using schemas
Encryption using built in encryption functions, views and base tables with triggers
In Management Studio, you can put a number after a GO end-of-batch marker to cause the batch to be repeated that number of times:
PRINT 'X'
GO 10
Will print 'X' 10 times. This can save you from tedious copy/pasting when doing repetitive stuff.
A lot of SQL Server developers still don't seem to know about the OUTPUT clause (SQL Server 2005 and newer) on the DELETE, INSERT and UPDATE statement.
It can be extremely useful to know which rows have been INSERTed, UPDATEd, or DELETEd, and the OUTPUT clause allows you to do this very easily - it gives access to the "virtual" tables called inserted and deleted (like in triggers):
DELETE FROM (table)
OUTPUT deleted.ID, deleted.Description
WHERE (condition)
If you're inserting values into a table which has an INT IDENTITY primary key field, with the OUTPUT clause, you can get the inserted new ID right away:
INSERT INTO MyTable(Field1, Field2)
OUTPUT inserted.ID
VALUES (Value1, Value2)
And if you're updating, it can be extremely useful to know what changed - in this case, inserted represents the new values (after the UPDATE), while deleted refers to the old values before the UPDATE:
UPDATE (table)
SET field1 = value1, field2 = value2
OUTPUT inserted.ID, deleted.field1, inserted.field1
WHERE (condition)
If a lot of info will be returned, the output of OUTPUT can also be redirected to a temporary table or a table variable (OUTPUT INTO #myInfoTable).
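For example, a minimal sketch (table and column names here are assumed):
DECLARE @removed TABLE (ID INT, Description VARCHAR(50))

DELETE FROM dbo.MyTable
OUTPUT deleted.ID, deleted.Description INTO @removed
WHERE SomeDate < '20000101'

SELECT * FROM @removed -- inspect what was actually deleted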
Extremely useful - and very little known!
Marc
sp_msforeachtable: Runs a command with '?' replaced with each table name.
e.g.
exec sp_msforeachtable "dbcc dbreindex('?')"
You can issue up to 3 commands for each table
exec sp_msforeachtable
@Command1 = 'print ''reindexing table ?''',
@Command2 = 'dbcc dbreindex(''?'')',
@Command3 = 'select count (*) [?] from ?'
Also, sp_MSforeachdb
Connection String extras:
MultipleActiveResultSets=true;
This makes ADO.NET 2.0 and above read multiple forward-only, read-only result sets on a single database connection, which can improve performance if you're doing a lot of reading. You can turn it on even if you're doing a mix of query types.
Application Name=MyProgramName
Now when you want to see a list of active connections by querying the sysprocesses table, your program's name will appear in the program_name column instead of ".Net SqlClient Data Provider"
TableDiff.exe
Table Difference tool allows you to discover and reconcile differences between a source and destination table or a view. Tablediff Utility can report differences on schema and data. The most popular feature of tablediff is the fact that it can generate a script that you can run on the destination that will reconcile differences between the tables.
Link
A less known TSQL technique for returning rows in random order:
-- Return rows in a random order
SELECT
SomeColumn
FROM
SomeTable
ORDER BY
CHECKSUM(NEWID())
In Management Studio, you can quickly get a comma-delimited list of columns for a table by:
In the Object Explorer, expand the nodes under a given table (so you will see folders for Columns, Keys, Constraints, Triggers etc.)
Point to the Columns folder and drag into a query.
This is handy when you don't want to use the heinous format returned by right-clicking on the table and choosing Script Table As..., then Insert To... This trick also works with the other folders, in that it will give you a comma-delimited list of the names contained within the folder.
Row Constructors
You can insert multiple rows of data with a single insert statement.
INSERT INTO Colors (id, Color)
VALUES (1, 'Red'),
(2, 'Blue'),
(3, 'Green'),
(4, 'Yellow')
If you want to know the table structure, indexes and constraints:
sp_help 'TableName'
HashBytes() to return the MD2, MD4, MD5, SHA, or SHA1 hash of its input.
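For example:
SELECT HashBytes('SHA1', 'Hello, world!') -- returns the 20-byte SHA1 digest as VARBINARY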
Figuring out the most popular queries
With sys.dm_exec_query_stats, you can figure out many combinations of query analyses by a single query.
Link
with the command:
select * from sys.dm_exec_query_stats
order by execution_count desc
The spatial results tab can be used to create art.
(image: http://michaeljswart.com/wp-content/uploads/2010/02/venus.png)
EXCEPT and INTERSECT
Instead of writing elaborate joins and subqueries, these two keywords are a much more elegant shorthand and readable way of expressing your query's intent when comparing two query results. New as of SQL Server 2005, they strongly complement UNION which has already existed in the TSQL language for years.
The concepts of EXCEPT, INTERSECT, and UNION are fundamental in set theory which serves as the basis and foundation of relational modeling used by all modern RDBMS. Now, Venn diagram type results can be more intuitively and quite easily generated using TSQL.
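A quick illustration (table names here are assumed): EXCEPT returns rows from the first result set that are absent from the second, while INTERSECT returns only rows common to both.
-- Customers that exist in Staging but not yet in Production
SELECT CustomerID FROM Staging.Customers
EXCEPT
SELECT CustomerID FROM Production.Customers

-- Customers present in both
SELECT CustomerID FROM Staging.Customers
INTERSECT
SELECT CustomerID FROM Production.Customers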
I know it's not exactly hidden, but not too many people know about the PIVOT command. I was able to change a stored procedure that used cursors and took 2 minutes to run into a speedy 6 second piece of code that was one tenth the number of lines!
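A small sketch of the syntax (the Orders table here is hypothetical): count orders per year, spread across quarter columns.
SELECT [Year], [1] AS Q1, [2] AS Q2, [3] AS Q3, [4] AS Q4
FROM ( SELECT YEAR(OrderDate) AS [Year],
              DATEPART(quarter, OrderDate) AS Quarter,
              OrderID
       FROM dbo.Orders ) AS src
PIVOT ( COUNT(OrderID) FOR Quarter IN ([1], [2], [3], [4]) ) AS pvt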
Useful when restoring a database for testing purposes or whatever. Re-maps the login IDs correctly:
EXEC sp_change_users_login 'Auto_Fix', 'Mary', NULL, 'B3r12-36'
Drop all connections to the database:
Use Master
Go
Declare @dbname sysname
Set @dbname = 'name of database you want to drop connections from'
Declare @spid int
Select @spid = min(spid) from master.dbo.sysprocesses
where dbid = db_id(@dbname)
While @spid Is Not Null
Begin
Execute ('Kill ' + Convert(varchar(10), @spid))
Select @spid = min(spid) from master.dbo.sysprocesses
where dbid = db_id(@dbname) and spid > @spid
End
Table Checksum
Select CheckSum_Agg(Binary_CheckSum(*)) From Table With (NOLOCK)
Row Checksum
Select CheckSum_Agg(Binary_CheckSum(*)) From Table With (NOLOCK) Where Column = Value
I'm not sure if this is a hidden feature or not, but I stumbled upon this and have found it useful on many occasions. You can concatenate a set of values of a field in a single SELECT statement, rather than using a cursor and looping through the rows.
Example:
DECLARE @nvcConcatenated nvarchar(max)
SET @nvcConcatenated = ''
SELECT @nvcConcatenated = @nvcConcatenated + C.CompanyName + ', '
FROM tblCompany C
WHERE C.CompanyID IN (1,2,3)
SELECT @nvcConcatenated
Results:
Acme, Microsoft, Apple,
If you want the code of a stored procedure you can:
sp_helptext 'ProcedureName'
(not sure if it is hidden feature, but I use it all the time)
A stored procedure trick is that you can call them from an INSERT statement. I found this very useful when I was working on an SQL Server database.
CREATE TABLE #toto (v1 int, v2 int, v3 char(4), status char(6))
INSERT #toto (v1, v2, v3, status) EXEC dbo.sp_fulubulu @sp_param1
SELECT * FROM #toto
DROP TABLE #toto
In SQL Server 2005/2008 to show row numbers in a SELECT query result:
SELECT ( ROW_NUMBER() OVER (ORDER BY OrderId) ) AS RowNumber,
GrandTotal, CustomerId, PurchaseDate
FROM Orders
ORDER BY is a compulsory clause here. The OVER() clause tells the SQL engine to sort the data on the specified column (in this case OrderId) and assign numbers according to the sort results.
Useful for parsing stored procedure arguments: xp_sscanf
Reads data from the string into the argument locations specified by each format argument.
The following example uses xp_sscanf to extract two values from a source string based on their positions in the format of the source string.
DECLARE @filename varchar (20), @message varchar (20)
EXEC xp_sscanf 'sync -b -fproducts10.tmp -rrandom', 'sync -b -f%s -r%s',
@filename OUTPUT, @message OUTPUT
SELECT @filename, @message
Here is the result set.
-------------------- --------------------
products10.tmp random
Return Date Only
Select Cast(Floor(Cast(Getdate() As Float))As Datetime)
or
Select DateAdd(Day, 0, DateDiff(Day, 0, Getdate()))
dm_db_index_usage_stats
This allows you to know if data in a table has been updated recently even if you don't have a DateUpdated column on the table.
SELECT OBJECT_NAME(OBJECT_ID) AS TableName, last_user_update, *
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID( 'MyDatabase')
AND OBJECT_ID=OBJECT_ID('MyTable')
Code from: http://blog.sqlauthority.com/2009/05/09/sql-server-find-last-date-time-updated-for-any-table/
Information referenced from:
SQL Server - What is the date/time of the last inserted row of a table?
Available in SQL 2005 and later
Here are some features I find useful but a lot of people don't seem to know about:
sp_tables
Returns a list of objects that can be queried in the current environment. This means any object that can appear in a FROM clause, except synonym objects.
Link
sp_stored_procedures
Returns a list of stored procedures in the current environment.
Link
Find records whose date falls somewhere inside the current week.
where dateadd( week, datediff( week, 0, TransDate ), 0 ) =
dateadd( week, datediff( week, 0, getdate() ), 0 )
Find records whose date occurred last week.
where dateadd( week, datediff( week, 0, TransDate ), 0 ) =
dateadd( week, datediff( week, 0, getdate() ) - 1, 0 )
Returns the date for the beginning of the current week.
select dateadd( week, datediff( week, 0, getdate() ), 0 )
Returns the date for the beginning of last week.
select dateadd( week, datediff( week, 0, getdate() ) - 1, 0 )
Not so much a hidden feature but setting up key mappings in Management Studio under Tools\Options\Keyboard:
Alt+F1 defaults to sp_help "selected text", but I cannot live without adding Ctrl+F1 for sp_helptext "selected text"
Persisted-computed-columns
Computed columns can help you shift the runtime computation cost to the data modification phase. The computed column is stored with the rest of the row and is transparently used when the expression on the computed column matches the query. You can also build indexes on PCCs to speed up filtering and range scans on the expression.
Link
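A small sketch (the Documents table and Content column are assumed): persist a cheap expression over a column, then index it.
ALTER TABLE dbo.Documents
ADD ContentChecksum AS CHECKSUM(Content) PERSISTED

-- The index lets filters on the expression become seeks instead of scans
CREATE INDEX IX_Documents_ContentChecksum ON dbo.Documents(ContentChecksum)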
There are times when there's no suitable column to sort by, or you just want the default sort order on a table and you want to enumerate each row. In order to do that you can put "(select 1)" in the "order by" clause and you'd get what you want. Neat, eh?
select row_number() over (order by (select 1)), * from dbo.Table as t
Simple encryption with EncryptByKey
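A minimal sketch of the round trip (the key name and passphrase are made up; a production setup would typically protect the key with a certificate or the database master key instead):
CREATE SYMMETRIC KEY DemoKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase'

OPEN SYMMETRIC KEY DemoKey DECRYPTION BY PASSWORD = 'Str0ng!Passphrase'

DECLARE @secret VARBINARY(256)
SET @secret = EncryptByKey(Key_GUID('DemoKey'), 'sensitive text')
SELECT CONVERT(VARCHAR(100), DecryptByKey(@secret)) -- round-trips to 'sensitive text'

CLOSE SYMMETRIC KEY DemoKey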