My stored procedure in SQL Server 2008 fails when executed from a job, but works great without one - sql-server

I have this stored procedure, which looks like this:
ALTER PROCEDURE [dbo].[P_ALIMENTATION_VolumeVentes]
AS
BEGIN
SELECT EDS, NomEDS,AgenceEDS, AgenceNomEDS,SecteurEDS, SecteurNomEDS,DirectionEDS,DirectionNomEDS,
(SELECT count(*) FROM CPListeVentesNonConformes WHERE CPListeVentesNonConformes.EDS = CPRT.EDS AND TypePart='PP') AS ListeVenteNC_PP,
(SELECT count(*) FROM CEListeVentesNonConformes WHERE CEListeVentesNonConformes.EDS = CPRT.EDS AND TypePart='ET') AS ListeVenteNC_ET,
(SELECT count(*) FROM CPListeVentesNonConformes WHERE CPListeVentesNonConformes.EDS = CPRT.EDS AND TypePart='PP' OR TypePart='ET') AS ListeVenteNC_PPET,
(SELECT count(*) FROM ListeVentes WHERE IDES01 = CPRT.EDS AND TypePart='PP') AS ListeVentes
INTO VolumeVentes
FROM CPR CPRT
GROUP BY EDS, NomEDS,AgenceEDS, AgenceNomEDS,SecteurEDS, SecteurNomEDS,DirectionEDS,DirectionNomEDS,TypePart
END
When I execute it from the command line with EXEC [dbo].[P_ALIMENTATION_VolumeVentes],
it works great and my table is created.
But when I use SQL Agent to schedule a job, I get a nice surprise in the form of this error:
Executed as user: ZRES\CSAPREP10IUCRADM. The conversion of a varchar
data type to a datetime data type resulted in an out-of-range value.
[SQLSTATE 22007] (Error 242) The statement has been terminated.
[SQLSTATE 01000] (Error 3621). The step failed.
The VolumeVentes table that gets created has no fields of type datetime.
Here is the structure of the VolumeVentes table.
I don't understand exactly where the error is.
Thank you for your help.

Actually, it should never work, since you already have the VolumeVentes table.
SELECT INTO creates a new table with the columns described in the select statement:
https://msdn.microsoft.com/en-us/library/ms188029(v=sql.120).aspx
You should modify this code to use INSERT ... SELECT.
But you will probably still get the same conversion error, because (I guess) the column order in the select statement is not correct and does not match the column order in the existing table. That is why you should always explicitly define the column list in the INSERT INTO clause, so the final script will look like:
INSERT INTO VolumeVentes(EDS, NomEDS,AgenceEDS, AgenceNomEDS,SecteurEDS, ...)
SELECT EDS, NomEDS,AgenceEDS, AgenceNomEDS,SecteurEDS
FROM ...

Use an
INSERT INTO ... statement
instead of a
SELECT INTO ... FROM ... statement

Related

SQL Server linked-server supplied inconsistent metadata for a column

I've encountered a problem executing a T-SQL statement (inserting and updating) on linked Oracle servers when executing the query twice.
The first execution works like a charm, but when trying a second time, I get this error:
The OLE DB provider "OraOLEDB.Oracle" for linked server "Linked_Server" supplied inconsistent metadata. An extra column was supplied during execution that was not found at compile time.
I've already looked it up and tried several approaches:
Use of openquery instead of original statement as suggested here: MSSQL Linked Server error: The OLE DB provider "OraOLEDB.Oracle" for linked server supplied inconsistent metadata for a column
Use OPTION (RECOMPILE) at the end of original statement as well as on the openquery statement as suggested here: https://dba.stackexchange.com/questions/275605/linked-server-throws-metadata-error or here https://learn.microsoft.com/en-us/answers/questions/98208/linked-server-throws-metadata-error.html
One thing I've encountered is that with OPTION (RECOMPILE) the statement runs indefinitely, and I had to cancel it manually when executing the query the second time.
Original statement:
declare @insert_cmd varchar(3000) = dbo.substringProc('
insert into LINKED_SERVER.TABLE (E_KNZ_NR, E_KNZZTRH_PERIODE_DAT, E_KNZZTRH_IST_WT, E_KNZZTRH_ERSTERF_TS)
select
id_dwh_knz,
monitor_dat,
monitor_wert,
SYSDATETIME()
from temp_nes_kennzahl zr
left outer join (
select
E_KNZ_NR,
periode_dat=cast(E_KNZZTRH_PERIODE_DAT as date),
E_KNZZTRH_IST_WT
from LINKED_SERVER2.TABLE
) dwh_zr
on zr.id_dwh_knz = dwh_zr.E_KNZ_NR
and zr.monitor_dat = dwh_zr.periode_dat
where dwh_zr.periode_dat is null
order by monitor_dat asc, id_dwh_knz asc ') -- OPTION (RECOMPILE)
exec (@insert_cmd);
Statement with openquery:
declare @insert_cmd varchar(3000) = dbo.substringProc('
insert into openquery (LINKED_SERVER, ''SELECT E_KNZ_NR, E_KNZZTRH_PERIODE_DAT, E_KNZZTRH_IST_WT, E_KNZZTRH_ERSTERF_TS FROM LINKED_SERVER.TABLE'')
select
id_dwh_knz,
monitor_dat,
monitor_wert,
SYSDATETIME()
from temp_nes_kennzahl zr
left outer join (
select * from openquery (LINKED_SERVER2,''SELECT E_KNZ_NR,CAST(E_KNZZTRH_PERIODE_DAT as DATE) AS periode_dat,E_KNZZTRH_IST_WT FROM LINKED_SERVER2.TABLE'')
) dwh_zr
on zr.id_dwh_knz = dwh_zr.E_KNZ_NR
and zr.monitor_dat = dwh_zr.periode_dat
where dwh_zr.periode_dat is null
order by monitor_dat asc, id_dwh_knz asc') -- OPTION (RECOMPILE)
exec (@insert_cmd);
(The 'substringProc' is just a helper method to retrieve the related user/database for the production or test environment, and 'dwh' stands for the data warehouse on the linked Oracle server.)
It would be nice if someone had a solution for this, since I've been struggling with this error for quite a long time and both queries work - but only once. I've also read that there might be a problem with the execution plan stored in the cache, but I have no clue how to work around such an issue.
Thanks in advance

SSIS Pass variable to Execute SQL Update [duplicate]

I have an SSIS package in which I take values from a flat file and insert them into a table.
I have one Execute SQL Task in which I create a temp table:
CREATE TABLE [tempdb].dbo.##temptable
(
date datetime,
companyname nvarchar(50),
price decimal(10,0),
PortfolioId int,
stype nvarchar(50)
)
Insert into [tempdb].dbo.##temptable (date,companyname,price,PortfolioId,stype)
SELECT date,companyname,price,PortfolioId,stype
FROM ProgressNAV
WHERE (Date = '2011-09-30') AND (PortfolioId = 5) AND (stype in ('Index'))
ORDER BY CompanyName
Now in the above query I need to pass these 3 parameters - (Date = '2011-09-30') AND (PortfolioId = 5) AND (stype in ('Index')) - using variables. I have created variables in the package so that the query becomes dynamic.
In your Execute SQL Task, make sure SQLSourceType is set to Direct Input; then your SQL Statement is the name of the stored proc, with question marks for each parameter of the proc, like so:
Click the Parameter Mapping page in the left column and add each parameter from your stored proc, mapping it to your SSIS variable:
Now when this task runs it will pass the SSIS variables to the stored proc.
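The same idea works with a plain direct-input statement. As a sketch, the INSERT from the question could be parameterized like this (with an OLE DB connection, the three ? markers are mapped on the Parameter Mapping page to package variables using the parameter names 0, 1 and 2):
Insert into [tempdb].dbo.##temptable (date,companyname,price,PortfolioId,stype)
SELECT date,companyname,price,PortfolioId,stype
FROM ProgressNAV
WHERE (Date = ?) AND (PortfolioId = ?) AND (stype = ?)
ORDER BY CompanyName
(With a single value, stype = ? is simpler than stype IN (?).)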
The EXCEL and OLE DB connection managers use the parameter names 0 and 1.
I was using an OLE DB connection and wasted a couple of hours trying to figure out why the query was not working or taking the parameters. The above explanation helped a lot.
Thanks a lot.
Along with @PaulStock's answer: depending on your connection type, your variable names and SQLStatement/SQLStatementSource change.
https://learn.microsoft.com/en-us/sql/integration-services/control-flow/execute-sql-task
SELECT, INSERT, UPDATE, and DELETE commands frequently include WHERE clauses to specify filters that define the conditions each row in the source tables must meet to qualify for an SQL command. Parameters provide the filter values in the WHERE clauses.
You can use parameter markers to dynamically provide parameter values. The rules for which parameter markers and parameter names can be used in the SQL statement depend on the type of connection manager that the Execute SQL uses.
The following table lists examples of the SELECT command by connection manager type. The INSERT, UPDATE, and DELETE statements are similar. The examples use SELECT to return products from the Product table in AdventureWorks2012 that have a ProductID greater than and less than the values specified by two parameters.
EXCEL, ODBC, and OLEDB:
SELECT * FROM Production.Product WHERE ProductId > ? AND ProductID < ?
ADO:
SELECT * FROM Production.Product WHERE ProductId > ? AND ProductID < ?
ADO.NET:
SELECT * FROM Production.Product WHERE ProductId > @parmMinProductID AND ProductID < @parmMaxProductID
The examples would require parameters that have the following names:
The EXCEL and OLE DB connection managers use the parameter names 0 and 1. The ODBC connection type uses 1 and 2.
The ADO connection type could use any two parameter names, such as Param1 and Param2, but the parameters must be mapped by their ordinal position in the parameter list.
The ADO.NET connection type uses the parameter names @parmMinProductID and @parmMaxProductID.
A little late to the party, but this is how I did it for an insert:
DECLARE @ManagerID AS Varchar (25) = 'NA'
DECLARE @ManagerEmail AS Varchar (50) = 'NA'
DECLARE @RecordCount AS int = 0
SET @ManagerID = ?
SET @ManagerEmail = ?
SET @RecordCount = ?
INSERT INTO...

Verify the columns (name and count) returned by a SQL query

I have a third-party plugin for my program that executes SQL queries (mostly selects). These queries must return a default column order and count, such as:
PACKAGEID (guid), REFDATE (datetime), MODIFYDATE (datetime), PROG (int)
Sometimes it happens that a query omits one of the columns specified above. In order to avoid further errors in the program, I would like to run a check just to be sure that each executed query returns the default columns.
I've already used the SQL syntax SET NOEXEC ON and SET NOEXEC OFF, which might also be useful in this case. I'm currently using SQL Server 2008.
Any hints?
If you're able to put the result set into a temporary table, you can easily count the number of columns of the table by using something like:
Select *
From tempdb.Information_Schema.COLUMNS
where TABLE_NAME like '%#temptable%'
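Building on that, a rough sketch of the check itself - the temp table is assumed to already hold the plugin query's result set, and the expected column names are taken from the question:
-- e.g. the plugin query materialized as: SELECT ... INTO #temptable FROM ...
IF (SELECT COUNT(*)
    FROM tempdb.INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME LIKE '%#temptable%'
      AND COLUMN_NAME IN ('PACKAGEID', 'REFDATE', 'MODIFYDATE', 'PROG')) < 4
    RAISERROR('The query did not return the expected columns.', 16, 1)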

SQL Server: Find out what row caused the TSQL to fail (SSIS)

SQL Server 2005 Question:
I'm working on a data conversion project where I'm taking 80k+ rows and moving them from one table to another. When I run the TSQL, it bombs with various errors having to do with converting types, or whatever. Is there a way to find out what row caused the error?
=====================
UPDATE:
I'm performing an INSERT INTO TABLE1 (...) SELECT ... FROM TABLE2
Table2 is just a bunch of varchar fields where TABLE1 has the right types.
This script will be put into a sproc and executed from an SSIS package. The SSIS package first imports 5 large flat files into TABLE2.
Here is a sample error message: "The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value."
There are many date fields. In TABLE2, there are data values like '02/05/1075' for Birthdate. I want to examine each row that is causing the error, so I can report to the department responsible for the bad data so they can correct it.
This is not the way to do it with SSIS. You should have the data flow from your source, to your destination, with whatever transformations you need in the middle. You'll be able to get error details, and in fact, error rows by using the error output of the destination.
I often send the error output of a destination to another destination - a text file, or a table set up to permit everything, including data that would not have been valid in the real destination.
Actually, if you do this the standard way in SSIS, then data type mismatches should be detected at design time.
What I do is split the rowset in half with a WHERE clause:
INSERT MyTable(id, datecol) SELECT id, datecol FROM OtherTable WHERE ID BETWEEN 0 AND 40000
and then keep changing the values on the between part of the where clause. I've done this by hand many times, but it occurs to me that you could automate the splitting with a little .Net code in a loop, trapping exceptions and then narrowing it down to just the row throwing the exception, little by little.
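The answer above suggests automating the split from .NET, but the same bisection idea can also be sketched directly in T-SQL. This is only a rough sketch using the hypothetical table names from the example, and it assumes a single bad row; each probe INSERT is rolled back, so nothing is actually written:
DECLARE @lo int, @hi int, @mid int
SET @lo = 0
SET @hi = 80000
WHILE @hi - @lo > 1
BEGIN
    SET @mid = (@lo + @hi) / 2
    BEGIN TRY
        BEGIN TRAN
        INSERT MyTable(id, datecol) SELECT id, datecol FROM OtherTable WHERE ID BETWEEN @lo AND @mid
        ROLLBACK TRAN   -- this half converts cleanly, undo the test insert
        SET @lo = @mid  -- so the bad row must be in the upper half
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN
        SET @hi = @mid  -- the conversion failure is in this half
    END CATCH
END
SELECT @hi AS first_suspect_id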
I assume you do the update with the INSERT INTO ...
Instead, try to do the update with a cursor; use exception handling to catch the error and log everything you need: the row number it failed on, etc.
Not exactly a cursor, but just as effective - I had over 4 million rows to examine with multiple conversion failures. Here is what I used; it resulted in two temp tables, one with all my values and assigned row numbers, and one that simply contained a list of rows in the first temp table that failed to convert.
select row_number() over (order by TimeID) as rownum,timeID into #TestingTable from MyTableWithBadData
set nocount on
declare @row as int
declare @last as int
set @row=0
select @last = count(*) from #TestingTable
declare @timeid as decimal(24,0)
create table #fails (rownum int)
while @row<=@last
begin
Begin Try
select @timeid=cast(timeID as decimal(24,0)) from #TestingTable where rownum = @row
end try
begin catch
print cast(@row as varchar(25)) + ' : failed'
insert into #fails(rownum) values(@row)
end catch
set @row = @row+1
end
If you are looping, add prints in the loop.
If you are using set-based operations, add a restrictive WHERE condition and run it. Keep running it (each time making it more and more restrictive) until you can find the row in the data. If you can run it for blocks of N rows, then just select out those rows and look at them.
Add CASE statements to catch the problems (converting the bad value to NULL or whatever) and put a value in a new FlagColumn telling you the type of problem:
CASE WHEN ISNUMERIC(x)!=1 then NULL ELSE x END as x
,CASE WHEN ISNUMERIC(x)!=1 then 'not numeric' else NULL END AS FlagColumn
then select out the new converted data where FlagColumn IS NOT NULL
You could try using SELECT statements with the ISNUMERIC() or ISDATE() functions on the various columns of the source data.
EDIT
There are many date fields. In TABLE2, there are data values like '02/05/1075' for Birthdate. I want to examine each row that is causing the error, so I can report to the department responsible for the bad data so they can correct it.
Use this to return all bad date rows:
SELECT * FROM YourTable WHERE ISDATE(YourDateColumn)!=1
If you are working with cursors, yes, and it is trivial. If you are not working with cursors, I don't think so, because SQL operations are ACID - they are transactions per se.
John Sauders has the right idea: there are better ways to do this kind of processing using SSIS. However, learning SSIS and redoing your package to completely change the process may not be an option at this time, so I offer this advice. You appear to be having trouble with the dates being incorrect. So first run a query to identify the bad records and insert them into an exceptions table. Then do your insert with only the records that are left. Something like:
insert exceptiontable (field1, field2)
select field1, field2 from table2 where isdate(field2) = 0
insert table1 (field1, field2)
select field1, field2 from table2 where isdate(field2) = 1
Then of course you can send the contents of the exception table to the people who provide the bad data.

Hidden Features of SQL Server

What are some hidden features of SQL Server?
For example, undocumented system stored procedures, tricks to do things which are very useful but not documented enough?
Answers
Thanks to everybody for all the great answers!
Stored Procedures
sp_msforeachtable: Runs a command with '?' replaced with each table name (v6.5 and up)
sp_msforeachdb: Runs a command with '?' replaced with each database name (v7 and up)
sp_who2: just like sp_who, but with a lot more info for troubleshooting blocks (v7 and up)
sp_helptext: If you want the code of a stored procedure, view & UDF
sp_tables: return a list of all tables and views of database in scope.
sp_stored_procedures: return a list of all stored procedures
xp_sscanf: Reads data from the string into the argument locations specified by each format argument.
xp_fixeddrives: Find the fixed drive with the largest free space
sp_help: If you want to know the table structure, indexes and constraints of a table. Also views and UDFs. Shortcut is Alt+F1
Snippets
Returning rows in random order
All database User Objects by Last Modified Date
Return Date Only
Find records whose date falls somewhere inside the current week.
Find records whose date occurred last week.
Returns the date for the beginning of the current week.
Returns the date for the beginning of last week.
See the text of a procedure that has been deployed to a server
Drop all connections to the database
Table Checksum
Row Checksum
Drop all the procedures in a database
Re-map the login Ids correctly after restore
Call Stored Procedures from an INSERT statement
Find Procedures By Keyword
Query the transaction log for a database programmatically.
Functions
HashBytes()
EncryptByKey
PIVOT command
Misc
Connection String extras
TableDiff.exe
Triggers for Logon Events (New in Service Pack 2)
Boosting performance with persisted-computed-columns (pcc).
DEFAULT_SCHEMA setting in sys.database_principals
Forced Parameterization
Vardecimal Storage Format
Figuring out the most popular queries in seconds
Scalable Shared Databases
Table/Stored Procedure Filter feature in SQL Management Studio
Trace flags
Number after a GO repeats the batch
Security using schemas
Encryption using built in encryption functions, views and base tables with triggers
In Management Studio, you can put a number after a GO end-of-batch marker to cause the batch to be repeated that number of times:
PRINT 'X'
GO 10
Will print 'X' 10 times. This can save you from tedious copy/pasting when doing repetitive stuff.
A lot of SQL Server developers still don't seem to know about the OUTPUT clause (SQL Server 2005 and newer) on the DELETE, INSERT and UPDATE statements.
It can be extremely useful to know which rows have been INSERTed, UPDATEd, or DELETEd, and the OUTPUT clause allows you to do this very easily - it gives access to the "virtual" tables called inserted and deleted (as in triggers):
DELETE FROM (table)
OUTPUT deleted.ID, deleted.Description
WHERE (condition)
If you're inserting values into a table which has an INT IDENTITY primary key field, with the OUTPUT clause, you can get the inserted new ID right away:
INSERT INTO MyTable(Field1, Field2)
OUTPUT inserted.ID
VALUES (Value1, Value2)
And if you're updating, it can be extremely useful to know what changed - in this case, inserted represents the new values (after the UPDATE), while deleted refers to the old values before the UPDATE:
UPDATE (table)
SET field1 = value1, field2 = value2
OUTPUT inserted.ID, deleted.field1, inserted.field1
WHERE (condition)
If a lot of info will be returned, the output of OUTPUT can also be redirected to a temporary table or a table variable (OUTPUT INTO #myInfoTable).
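For example, a minimal sketch capturing the deleted rows into a table variable (the table and column names here are made up):
DECLARE @deletedRows TABLE (ID int, Description nvarchar(100))
DELETE FROM dbo.MyTable
OUTPUT deleted.ID, deleted.Description INTO @deletedRows (ID, Description)
WHERE IsObsolete = 1
SELECT * FROM @deletedRows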
Extremely useful - and very little known!
Marc
sp_msforeachtable: Runs a command with '?' replaced with each table name.
e.g.
exec sp_msforeachtable "dbcc dbreindex('?')"
You can issue up to 3 commands for each table
exec sp_msforeachtable
@Command1 = 'print ''reindexing table ?''',
@Command2 = 'dbcc dbreindex(''?'')',
@Command3 = 'select count (*) [?] from ?'
Also, sp_MSforeachdb
Connection String extras:
MultipleActiveResultSets=true;
This makes ADO.NET 2.0 and above read multiple, forward-only, read-only result sets on a single database connection, which can improve performance if you're doing a lot of reading. You can turn it on even if you're doing a mix of query types.
Application Name=MyProgramName
Now when you want to see a list of active connections by querying the sysprocesses table, your program's name will appear in the program_name column instead of ".Net SqlClient Data Provider"
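For example, a full connection string using both extras might look like this (the server, database and program names are placeholders):
Server=MyServer;Database=MyDatabase;Integrated Security=SSPI;MultipleActiveResultSets=True;Application Name=MyProgramName;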
TableDiff.exe
Table Difference tool allows you to discover and reconcile differences between a source and destination table or a view. Tablediff Utility can report differences on schema and data. The most popular feature of tablediff is the fact that it can generate a script that you can run on the destination that will reconcile differences between the tables.
Link
A less known TSQL technique for returning rows in random order:
-- Return rows in a random order
SELECT
SomeColumn
FROM
SomeTable
ORDER BY
CHECKSUM(NEWID())
In Management Studio, you can quickly get a comma-delimited list of columns for a table by:
In the Object Explorer, expand the nodes under a given table (so you will see folders for Columns, Keys, Constraints, Triggers etc.)
Point to the Columns folder and drag it into a query window.
This is handy when you don't want to use the heinous format returned by right-clicking on the table and choosing Script Table As..., then Insert To... The trick also works with the other folders, in that it will give you a comma-delimited list of the names contained within the folder.
Row Constructors
You can insert multiple rows of data with a single insert statement.
INSERT INTO Colors (id, Color)
VALUES (1, 'Red'),
(2, 'Blue'),
(3, 'Green'),
(4, 'Yellow')
If you want to know the table structure, indexes and constraints:
sp_help 'TableName'
HashBytes() to return the MD2, MD4, MD5, SHA, or SHA1 hash of its input.
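For example:
SELECT HASHBYTES('MD5', 'Hello world'), HASHBYTES('SHA1', 'Hello world')
returns the MD5 and SHA1 hashes of the input as varbinary values.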
Figuring out the most popular queries
With sys.dm_exec_query_stats, you can figure out many combinations of query analyses by a single query.
Link
with the command
select * from sys.dm_exec_query_stats
order by execution_count desc
The spatial results tab can be used to create art.
http://michaeljswart.com/wp-content/uploads/2010/02/venus.png
EXCEPT and INTERSECT
Instead of writing elaborate joins and subqueries, these two keywords are a much more elegant shorthand and readable way of expressing your query's intent when comparing two query results. New as of SQL Server 2005, they strongly complement UNION which has already existed in the TSQL language for years.
The concepts of EXCEPT, INTERSECT, and UNION are fundamental in set theory which serves as the basis and foundation of relational modeling used by all modern RDBMS. Now, Venn diagram type results can be more intuitively and quite easily generated using TSQL.
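For example, with two hypothetical customer tables:
-- customers present in the current table but not in the archive
SELECT CustomerID FROM dbo.CurrentCustomers
EXCEPT
SELECT CustomerID FROM dbo.ArchivedCustomers
-- customers present in both tables
SELECT CustomerID FROM dbo.CurrentCustomers
INTERSECT
SELECT CustomerID FROM dbo.ArchivedCustomers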
I know it's not exactly hidden, but not too many people know about the PIVOT command. I was able to change a stored procedure that used cursors and took 2 minutes to run into a speedy 6 second piece of code that was one tenth the number of lines!
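The procedure itself isn't shown here, but a minimal sketch of the PIVOT syntax (the table, column and year values are made up) looks like this:
SELECT ProductName, [2009], [2010], [2011]
FROM (SELECT ProductName, OrderYear, Amount FROM dbo.Sales) AS src
PIVOT (SUM(Amount) FOR OrderYear IN ([2009], [2010], [2011])) AS pvt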
Useful when restoring a database for testing purposes or whatever - re-maps the login IDs correctly:
EXEC sp_change_users_login 'Auto_Fix', 'Mary', NULL, 'B3r12-36'
Drop all connections to the database:
Use Master
Go
Declare @dbname sysname
Set @dbname = 'name of database you want to drop connections from'
Declare @spid int
Select @spid = min(spid) from master.dbo.sysprocesses
where dbid = db_id(@dbname)
While @spid Is Not Null
Begin
Execute ('Kill ' + Cast(@spid As varchar(10)))
Select @spid = min(spid) from master.dbo.sysprocesses
where dbid = db_id(@dbname) and spid > @spid
End
Table Checksum
Select CheckSum_Agg(Binary_CheckSum(*)) From Table With (NOLOCK)
Row Checksum
Select CheckSum_Agg(Binary_CheckSum(*)) From Table With (NOLOCK) Where Column = Value
I'm not sure if this is a hidden feature or not, but I stumbled upon it and have found it useful on many occasions. You can concatenate a set of field values in a single select statement, rather than using a cursor and looping through the select statement.
Example:
DECLARE @nvcConcatonated nvarchar(max)
SET @nvcConcatonated = ''
SELECT @nvcConcatonated = @nvcConcatonated + C.CompanyName + ', '
FROM tblCompany C
WHERE C.CompanyID IN (1,2,3)
SELECT @nvcConcatonated
Results:
Acme, Microsoft, Apple,
If you want the code of a stored procedure you can:
sp_helptext 'ProcedureName'
(Not sure if it is a hidden feature, but I use it all the time.)
A stored procedure trick is that you can call one from an INSERT statement. I found this very useful when I was working on a SQL Server database.
CREATE TABLE #toto (v1 int, v2 int, v3 char(4), status char(6))
INSERT #toto (v1, v2, v3, status) EXEC dbo.sp_fulubulu @sp_param1
SELECT * FROM #toto
DROP TABLE #toto
In SQL Server 2005/2008 to show row numbers in a SELECT query result:
SELECT ( ROW_NUMBER() OVER (ORDER BY OrderId) ) AS RowNumber,
GrandTotal, CustomerId, PurchaseDate
FROM Orders
ORDER BY is a compulsory clause. The OVER() clause tells the SQL Engine to sort data on the specified column (in this case OrderId) and assign numbers as per the sort results.
Useful for parsing stored procedure arguments: xp_sscanf
Reads data from the string into the argument locations specified by each format argument.
The following example uses xp_sscanf to extract two values from a source string based on their positions in the format of the source string.
DECLARE @filename varchar (20), @message varchar (20)
EXEC xp_sscanf 'sync -b -fproducts10.tmp -rrandom', 'sync -b -f%s -r%s',
@filename OUTPUT, @message OUTPUT
SELECT @filename, @message
Here is the result set.
-------------------- --------------------
products10.tmp random
Return Date Only
Select Cast(Floor(Cast(Getdate() As Float))As Datetime)
or
Select DateAdd(Day, 0, DateDiff(Day, 0, Getdate()))
dm_db_index_usage_stats
This allows you to know if data in a table has been updated recently even if you don't have a DateUpdated column on the table.
SELECT OBJECT_NAME(OBJECT_ID) AS TableName, last_user_update, *
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID( 'MyDatabase')
AND OBJECT_ID=OBJECT_ID('MyTable')
Code from: http://blog.sqlauthority.com/2009/05/09/sql-server-find-last-date-time-updated-for-any-table/
Information referenced from:
SQL Server - What is the date/time of the last inserted row of a table?
Available in SQL 2005 and later
Here are some features I find useful but a lot of people don't seem to know about:
sp_tables
Returns a list of objects that can be queried in the current environment. This means any object that can appear in a FROM clause, except synonym objects.
Link
sp_stored_procedures
Returns a list of stored procedures in the current environment.
Link
Find records whose date falls somewhere inside the current week.
where dateadd( week, datediff( week, 0, TransDate ), 0 ) =
dateadd( week, datediff( week, 0, getdate() ), 0 )
Find records whose date occurred last week.
where dateadd( week, datediff( week, 0, TransDate ), 0 ) =
dateadd( week, datediff( week, 0, getdate() ) - 1, 0 )
Returns the date for the beginning of the current week.
select dateadd( week, datediff( week, 0, getdate() ), 0 )
Returns the date for the beginning of last week.
select dateadd( week, datediff( week, 0, getdate() ) - 1, 0 )
Not so much a hidden feature but setting up key mappings in Management Studio under Tools\Options\Keyboard:
Alt+F1 defaults to sp_help "selected text", but I cannot live without adding Ctrl+F1 for sp_helptext "selected text".
Persisted-computed-columns
Computed columns can help you shift the runtime computation cost to the data-modification phase. The computed column is stored with the rest of the row and is transparently used when the expression on the computed column matches the query. You can also build indexes on PCCs to speed up filters and range scans on the expression.
Link
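A small hypothetical example - the expression is computed and stored when the row is written, and the index can then serve range scans on it:
ALTER TABLE dbo.Orders ADD TotalWithTax AS (Amount * 1.20) PERSISTED
CREATE INDEX IX_Orders_TotalWithTax ON dbo.Orders (TotalWithTax)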
There are times when there's no suitable column to sort by, or you just want the default sort order on a table and you want to enumerate each row. In order to do that you can put "(select 1)" in the "order by" clause and you'd get what you want. Neat, eh?
select row_number() over (order by (select 1)), * from dbo.Table as t
Simple encryption with EncryptByKey
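A minimal sketch of the round trip (the key and certificate names are made up, and a database master key must already exist):
CREATE CERTIFICATE DemoCert WITH SUBJECT = 'Demo certificate'
CREATE SYMMETRIC KEY DemoKey WITH ALGORITHM = AES_256 ENCRYPTION BY CERTIFICATE DemoCert
GO
OPEN SYMMETRIC KEY DemoKey DECRYPTION BY CERTIFICATE DemoCert
DECLARE @secret varbinary(8000)
SET @secret = EncryptByKey(Key_GUID('DemoKey'), N'top secret')
SELECT CONVERT(nvarchar(100), DecryptByKey(@secret)) AS decrypted
CLOSE SYMMETRIC KEY DemoKey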
