How to back up statistics in SQL Server? - sql-server

Let's just say I have a table A with some data in it in SSMS. It has sub-objects such as columns, constraints, triggers, indexes, statistics, etc.
I want to create a similar table with the same properties as table A. I know I can go to Script Table As -> Create To -> New Query Window to duplicate the table structure.
However, after doing that, I realized the statistics on my new table are empty while table A has statistics. Did I miss something?

You can script out just the statistics (including the histogram blob) with the following bit of PowerShell (which I yoinked from an old blog post of mine):
pushd; # Import-Module sqlps changes the current location; pushd/popd restore it
import-module sqlps -disablenamechecking;
popd;
$opts = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions;
$opts.OptimizerData = $true; # include the statistics histogram data in the script
$server = New-Object Microsoft.SqlServer.Management.Smo.Server ".";
$database = $server.Databases["AdventureWorks2008R2"];
foreach ($table in $database.Tables) {
    foreach ($stat in $table.Statistics) {
        $stat.Script($opts);
    }
}
The above will script out all statistics (including the histogram data) for all tables in the AdventureWorks2008R2 database. You should be able to tailor it to your needs.
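To capture just one table (table A in the question), the same options can be pointed at a single table and the output saved to a file. A minimal sketch under those assumptions (MyDatabase, dbo.A, and the output path are placeholders):
$opts = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions;
$opts.OptimizerData = $true; # emit the histogram stream as well
$server = New-Object Microsoft.SqlServer.Management.Smo.Server ".";
$table = $server.Databases["MyDatabase"].Tables["A", "dbo"];
$table.Statistics | ForEach-Object { $_.Script($opts) } | Out-File "C:\temp\A_statistics.sql";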

Related

How to bulk import many Word documents into SQL Server database table

I need to import ~50,000 Word documents (.doc and .docx) from a single directory into a SQL Server 2016 database table so that I can use full text indexing and then search the documents' contents.
Since this is a one-off task and the database won't be required for long I'm not concerned with performance or the arguments for using FILESTREAM or FileTables.
I've just created a database with a single table:
CREATE TABLE [dbo].[MyDocument]
(
    [ID] INT IDENTITY(1,1) NOT NULL,
    [DocumentName] NVARCHAR(255) NOT NULL,
    [Extension] NCHAR(10) NOT NULL,
    [DocumentContent] VARBINARY(MAX) NOT NULL,
    CONSTRAINT [PK_MyDocument] PRIMARY KEY CLUSTERED ([ID] ASC)
)
Now I'm looking for a way to get my documents into the table. There are plenty of examples online for importing a single document into a SQL Server database table using OPENROWSET, but they require me to specify a name for the file, which is obviously no use for my requirements.
I can't believe there isn't a well-documented and straightforward way to do this, but a couple of hours of searching haven't turned anything up, which is starting to make me doubt it's even possible. Surely it is?
Can anybody give me an example snippet of T-SQL for importing multiple files into the database? Or suggest how else it might be achieved?
Below is a PowerShell script to import all ".docx" files in the specified folder using a parameterized query along with a FileStream parameter value to stream file contents to the database rather than loading the entire file contents into client memory.
# import all documents in the specified directory using a file stream parameter
try {
    $timer = [System.Diagnostics.Stopwatch]::StartNew()

    $insertQuery = @"
INSERT INTO dbo.MyDocument (DocumentName, Extension, DocumentContent)
VALUES(@DocumentName, @Extension, @DocumentContent);
"@
    $connection = New-Object System.Data.SqlClient.SqlConnection("Data Source=.;Initial Catalog=YourDatabase;Integrated Security=SSPI")
    $command = New-Object System.Data.SqlClient.SqlCommand($insertQuery, $connection)
    $documentNameParameter = $command.Parameters.Add("@DocumentName", [System.Data.SqlDbType]::NVarChar, 255)
    $documentExtensionParameter = $command.Parameters.Add("@Extension", [System.Data.SqlDbType]::NVarChar, 10)
    $documentContentParameter = $command.Parameters.Add("@DocumentContent", [System.Data.SqlDbType]::VarBinary, -1)
    $connection.Open()

    $filesToImport = Get-ChildItem "E:\DocumentsToImport\*.docx"
    $importedFileCount = 0
    foreach ($fileToImport in $filesToImport) {
        # open a read stream so the file is streamed to SQL Server, not buffered in client memory
        $documentContentStream = [System.IO.File]::Open($fileToImport.FullName, [System.IO.FileMode]::Open)
        $documentNameParameter.Value = [System.IO.Path]::GetFileNameWithoutExtension($fileToImport.FullName)
        $documentExtensionParameter.Value = [System.IO.Path]::GetExtension($fileToImport.Name)
        $documentContentParameter.Value = $documentContentStream
        [void]$command.ExecuteNonQuery()
        $documentContentStream.Close()
        $importedFileCount += 1
    }
    $connection.Close()

    $timer.Stop()
    Write-Host "$importedFileCount files imported. Duration $($timer.Elapsed)."
}
catch {
    throw
}
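The question mentions both .doc and .docx; a hedged tweak to pick up both extensions would be to widen the Get-ChildItem call (same placeholder folder as in the script):
$filesToImport = Get-ChildItem "E:\DocumentsToImport\*" -Include *.doc, *.docx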

ColumnMapping.Add alternative

I have my PowerShell script:
$dataSource = "HPVIE1APARBLD21";
$database = "tempdb"
$connection = New-Object System.Data.SqlClient.SqlConnection; #setting connection
$connection.ConnectionString = "Server=$dataSource;Database=$database;Integrated Security=True;";
$connection.Open();
$command = New-Object System.Data.SqlClient.SqlCommand;
$command.Connection = $connection;
$as = New-Object System.Data.SqlClient.SqlDataAdapter;
$as.SelectCommand = $command;
$filename = 'C:\XML\apc.xml'; #file to be loaded into sql server database table
$ds = New-Object System.Data.DataSet;
$ds.ReadXml($filename); #reading from the file -- line gives output InferSchema
$dt = New-Object System.Data.DataTable;
$dt=$ds.Tables[0];
$bcp = New-Object "Data.SqlClient.SqlBulkCopy" $connection; #bulkcopy to the destination table.
$bcp.DestinationTableName = "dbo.tempor";
$bcp.ColumnMappings.Count;
$bcp.ColumnMappings.Clear();
$bcp.ColumnMappings.Add("StartTime","StartTime");
$bcp.ColumnMappings.Add("EndTime","EndTime");
$bcp.ColumnMappings.Add("Status","Status");
$bcp.ColumnMappings.Add("ProcessedTime","ProcessedTime");
$bcp.ColumnMappings.Add("ScheduleId","ScheduleId");
$bcp.ColumnMappings.Add("Model","Model");
$bcp.ColumnMappings.Add("ModelVersion","ModelVersion");
$bcp.ColumnMappings.Add("ModelVersionState","ModelVersionState");
$bcp.ColumnMappings.Add("ModifiedTime","ModifiedTime");
$bcp.ColumnMappings.Add("WriteBackLastRunTime","WriteBackLastRunTime");
$bcp.ColumnMappings.Add("ModifiedBy","ModifiedBy");
$bcp.ColumnMappings.Add("ModelType","ModelType");
$bcp.ColumnMappings.Add("IsTimeRange","IsTimeRange");
$bcp.ColumnMappings.Add("WriteBackStatus","WriteBackStatus");
$bcp.ColumnMappings.Add("RetryWriteBack","RetryWriteBack");
$bcp.ColumnMappings.Add("NumOfRetry","NumOfRetry");
$bcp.ColumnMappings.Add("FrequencyUnits","FrequencyUnits");
$bcp.ColumnMappings.Add("ScheduleType","ScheduleType");
$bcp.ColumnMappings.Add("CustomType","CustomType");
$bcp.ColumnMappings.Add("WritebackProcessedTime","WritebackProcessedTime");
$bcp.ColumnMappings.Add("DiagStatus","DiagStatus");
$bcp.ColumnMappings.Add("AccountingPeriodCase_PK_ID","AccountingPeriodCase_PK_ID");
$bcp.ColumnMappings.Count;
if ($connection.State -ne [Data.ConnectionState]::Open) {
    "Connection to DB is not open."
    Exit
}
$bcp.WriteToServer("$dt"); #writing to server -- not loading content of xml file to sql server table
$connection.Close();
Now for this script, my output is: (output screenshot omitted)
I want to insert the values of the following XML file, apc.xml:
<?xml version="1.0" standalone="yes"?>
<NewDataSet>
  <Table>
    <StartTime>2017-01-14T18:30:00+05:30</StartTime>
    <EndTime>2017-01-15T18:30:00+05:30</EndTime>
    <Status>1</Status>
    <ProcessedTime>2017-04-06T05:51:18.843+05:30</ProcessedTime>
    <ScheduleId>db6f3178-4702-456c-904b-2cd833b54efa</ScheduleId>
    <Model>Gasco_Hotfix</Model>
    <ModelVersion>G3</ModelVersion>
    <ModelVersionState>1</ModelVersionState>
    <ModifiedTime>2017-04-06T05:51:58.48+05:30</ModifiedTime>
    <ModifiedBy>l4-dc\ilbinstall</ModifiedBy>
    <ModelType>1</ModelType>
    <IsTimeRange>true</IsTimeRange>
    <RetryWriteBack>false</RetryWriteBack>
    <NumOfRetry>0</NumOfRetry>
    <FrequencyUnits>2</FrequencyUnits>
    <ScheduleType>1</ScheduleType>
    <CustomType>0</CustomType>
    <DiagStatus>2</DiagStatus>
    <AccountingPeriodCase_PK_ID>9544940d-8d1a-e711-80d3-0050569c2fb6</AccountingPeriodCase_PK_ID>
  </Table>
  <Table>
    <StartTime>2017-01-15T18:30:00+05:30</StartTime>
    <EndTime>2017-01-16T18:30:00+05:30</EndTime>
    <Status>1</Status>
    <ProcessedTime>2017-04-06T05:51:18.843+05:30</ProcessedTime>
    <ScheduleId>db6f3178-4702-456c-904b-2cd833b54efa</ScheduleId>
    <Model>Gasco_Hotfix</Model>
    <ModelVersion>G3</ModelVersion>
    <ModelVersionState>1</ModelVersionState>
    <ModifiedTime>2017-04-06T05:52:29.073+05:30</ModifiedTime>
    <ModifiedBy>l4-dc\ilbinstall</ModifiedBy>
    <ModelType>1</ModelType>
    <IsTimeRange>true</IsTimeRange>
    <RetryWriteBack>false</RetryWriteBack>
    <NumOfRetry>0</NumOfRetry>
    <FrequencyUnits>2</FrequencyUnits>
    <ScheduleType>1</ScheduleType>
    <CustomType>0</CustomType>
    <DiagStatus>2</DiagStatus>
    <AccountingPeriodCase_PK_ID>9644940d-8d1a-e711-80d3-0050569c2fb6</AccountingPeriodCase_PK_ID>
  </Table>
</NewDataSet>
into the table dbo.tempor in the tempdb database.
Could anyone help me with the error?
I would also like to know whether there is a way to add the column mappings dynamically, without having to manually type up each column name. The purpose is that in the future I may have to add more columns to the original table, and I can't keep coming back to the script to make changes manually. So if there is an alternative method, please do help me with it.
Thank you.
I think that this approach cannot get the XML's schema automatically. I'd try an approach like this:
DECLARE @xml XML =
N'<?xml version="1.0" standalone="yes"?>
<NewDataSet>
  <Table>
    <StartTime>2017-01-14T18:30:00+05:30</StartTime>
    <EndTime>2017-01-15T18:30:00+05:30</EndTime>
    <Status>1</Status>
    <ProcessedTime>2017-04-06T05:51:18.843+05:30</ProcessedTime>
    <ScheduleId>db6f3178-4702-456c-904b-2cd833b54efa</ScheduleId>
    <Model>Gasco_Hotfix</Model>
    <ModelVersion>G3</ModelVersion>
    <ModelVersionState>1</ModelVersionState>
    <ModifiedTime>2017-04-06T05:51:58.48+05:30</ModifiedTime>
    <ModifiedBy>l4-dc\ilbinstall</ModifiedBy>
    <ModelType>1</ModelType>
    <IsTimeRange>true</IsTimeRange>
    <RetryWriteBack>false</RetryWriteBack>
    <NumOfRetry>0</NumOfRetry>
    <FrequencyUnits>2</FrequencyUnits>
    <ScheduleType>1</ScheduleType>
    <CustomType>0</CustomType>
    <DiagStatus>2</DiagStatus>
    <AccountingPeriodCase_PK_ID>9544940d-8d1a-e711-80d3-0050569c2fb6</AccountingPeriodCase_PK_ID>
  </Table>
  <Table>
    <StartTime>2017-01-15T18:30:00+05:30</StartTime>
    <EndTime>2017-01-16T18:30:00+05:30</EndTime>
    <Status>1</Status>
    <ProcessedTime>2017-04-06T05:51:18.843+05:30</ProcessedTime>
    <ScheduleId>db6f3178-4702-456c-904b-2cd833b54efa</ScheduleId>
    <Model>Gasco_Hotfix</Model>
    <ModelVersion>G3</ModelVersion>
    <ModelVersionState>1</ModelVersionState>
    <ModifiedTime>2017-04-06T05:52:29.073+05:30</ModifiedTime>
    <ModifiedBy>l4-dc\ilbinstall</ModifiedBy>
    <ModelType>1</ModelType>
    <IsTimeRange>true</IsTimeRange>
    <RetryWriteBack>false</RetryWriteBack>
    <NumOfRetry>0</NumOfRetry>
    <FrequencyUnits>2</FrequencyUnits>
    <ScheduleType>1</ScheduleType>
    <CustomType>0</CustomType>
    <DiagStatus>2</DiagStatus>
    <AccountingPeriodCase_PK_ID>9644940d-8d1a-e711-80d3-0050569c2fb6</AccountingPeriodCase_PK_ID>
  </Table>
</NewDataSet>';

--The query will read all columns and insert them into the target table
INSERT INTO dbo.tempor(StartTime, EndTime, [Status], ProcessedTime /*more columns*/)
SELECT t.value(N'(StartTime/text())[1]', N'datetime') AS StartTime
      ,t.value(N'(EndTime/text())[1]', N'datetime') AS EndTime
      ,t.value(N'(Status/text())[1]', N'int') AS [Status]
      ,t.value(N'(ProcessedTime/text())[1]', N'datetime') AS ProcessedTime
      --and so on
FROM @xml.nodes(N'/NewDataSet/Table') AS A(t);
Some explanation
To test this, you can copy my script (together with the XML declaration) into a new SSMS query window and execute it. .nodes() returns all (repeated) <Table> nodes as a derived table.
Using SQL Server's native XML support allows for fast and typesafe queries. Your XML looks clean and neat, so this should be rather easy.
If this is possible for you, I'd create a stored procedure accepting the XML as input (either NVARCHAR(MAX) or XML). You can call this from PS as a one-liner, as sketched below.
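A rough sketch of that call from PowerShell (dbo.ImportApcXml is a hypothetical procedure name with a single @xml parameter; adjust to whatever you create):
$xml = [System.IO.File]::ReadAllText('C:\XML\apc.xml')
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=HPVIE1APARBLD21;Database=tempdb;Integrated Security=True;")
$cmd = New-Object System.Data.SqlClient.SqlCommand("EXEC dbo.ImportApcXml @xml = @xml;", $conn)
[void]$cmd.Parameters.AddWithValue("@xml", $xml) # passed as NVARCHAR(MAX)
$conn.Open(); [void]$cmd.ExecuteNonQuery(); $conn.Close()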
If you expect issues (wrong or missing data), it is recommended to use a tolerant staging table first, use it for validation and cleansing, and do the final import from there.
The purpose is that sometimes in the future I would have to add more columns to the original table, and then I can't come back to the script and make changes manually.
When you come back to add more columns to the table, it is easy to add them to the SP too...
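For the dynamic-mapping part of the question: if you do stay with SqlBulkCopy, the mappings can be generated from the DataTable rather than typed by hand. A minimal sketch, assuming source and destination column names match (and note that WriteToServer must receive the DataTable itself, not the string "$dt"):
$bcp.ColumnMappings.Clear();
foreach ($column in $dt.Columns) {
    [void]$bcp.ColumnMappings.Add($column.ColumnName, $column.ColumnName); # same name on both sides
}
$bcp.WriteToServer($dt); # pass the DataTable object, not "$dt"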

Why are my Azure SQL Database indexes still fragmented?

My company has committed the sin of using GUIDs as Primary Keys on our Azure SQL Database tables (it is actually worse than that: we used VARCHAR(36) instead of UNIQUEIDENTIFIER). As such, we end up with fragmented indexes. They looked like this:
CREATE TABLE OldTable (
    Id VARCHAR(36) PRIMARY KEY CLUSTERED NOT NULL DEFAULT NEWID(),
    CreateTime DATETIME2 NOT NULL,
    ...
)
I "fixed" the problem by creating new tables. This time, I used an immutable, ever-increasing DATETIME2 (e.g. CreateTime) column for CLUSTERED INDEX, and kept the VARCHAR(36) as PRIMARY KEY but this time, NONCLUSTERED. Like this:
CREATE TABLE NewTable (
    Id VARCHAR(36) PRIMARY KEY NONCLUSTERED NOT NULL DEFAULT NEWID(),
    CreateTime DATETIME2 NOT NULL INDEX IX_NewTable_CreateTime CLUSTERED,
)
Then I "copied" rows from old table to new table using INSERT INTO NewTable SELECT * FROM OldTable. Finally, I renamed tables and dropped the old one. Life seemed good.
For my surprise, couple of weeks later, I found out NewTable has many fragmented indexes, with avg fragmentation as high as 80%! Even the IX_NewTable_CreateTime reports fragmentation of 18%.
Did the INSERT INTO fragmented the index? Will REBUILD index solve the problem, for good?
Fragmentation will depend on the insert/update frequency on the indexed fields and the size of the index pages.
For maintenance purposes, you can use Azure Automation and create a recurring script that checks for fragmented indexes and optimizes them.
There's a Runbook in the Gallery just for that.
The best thing about this is that Automation is free as long as you don't go over 500 running minutes per month; time your executions well and you won't have to pay :)
I made a custom improvement to the gallery script, feel free to use it too:
<#
.SYNOPSIS
Indexes tables in a database if they have a high fragmentation
.DESCRIPTION
This runbook indexes all of the tables in a given database if the fragmentation is
above a certain percentage.
It highlights how to break up calls into smaller chunks,
in this case each table in a database, and use checkpoints.
This allows the runbook job to resume for the next chunk of work even if the
fairshare feature of Azure Automation puts the job back into the queue every 30 minutes
.PARAMETER SqlServer
Name of the SqlServer
.PARAMETER Database
Name of the database
.PARAMETER SQLCredentialName
Name of the Automation PowerShell credential setting from the Automation asset store.
This setting stores the username and password for the SQL Azure server
.PARAMETER FragPercentage
Optional parameter for specifying over what percentage fragmentation to index database
Default is 20 percent
.PARAMETER RebuildOffline
Optional parameter to rebuild indexes offline if online fails
Default is false
.PARAMETER Table
Optional parameter for specifying a specific table to index
Default is all tables
.PARAMETER SqlServerPort
Optional parameter for specifying the SQL port
Default is 1433
.EXAMPLE
Update-SQLIndexRunbook -SqlServer "server.database.windows.net" -Database "Finance" -SQLCredentialName "FinanceCredentials"
.EXAMPLE
Update-SQLIndexRunbook -SqlServer "server.database.windows.net" -Database "Finance" -SQLCredentialName "FinanceCredentials" -FragPercentage 30
.EXAMPLE
Update-SQLIndexRunbook -SqlServer "server.database.windows.net" -Database "Finance" -SQLCredentialName "FinanceCredentials" -Table "Customers" -RebuildOffline $True
.NOTES
AUTHOR: Matias Quaranta
LASTEDIT: Jan 10th, 2015
#>
workflow Update-SQLIndexRunbook
{
    param(
        [parameter(Mandatory=$True)]
        [string] $SqlServer,

        [parameter(Mandatory=$True)]
        [string] $Database,

        [parameter(Mandatory=$True)]
        [string] $SQLCredentialName,

        [parameter(Mandatory=$False)]
        [int] $FragPercentage = 20,

        [parameter(Mandatory=$False)]
        [int] $SqlServerPort = 1433,

        [parameter(Mandatory=$False)]
        [boolean] $RebuildOffline = $False,

        [parameter(Mandatory=$False)]
        [string] $Table
    )

    # Get the stored username and password from the Automation credential
    $SqlCredential = Get-AutomationPSCredential -Name $SQLCredentialName
    if ($SqlCredential -eq $null)
    {
        throw "Could not retrieve '$SQLCredentialName' credential asset. Check that you created this first in the Automation service."
    }

    $SqlUsername = $SqlCredential.UserName
    $SqlPass = $SqlCredential.GetNetworkCredential().Password

    InlineScript {
        # Define the connection to the SQL Database
        $Conn = New-Object System.Data.SqlClient.SqlConnection("Server=tcp:$using:SqlServer,$using:SqlServerPort;Database=$using:Database;User ID=$using:SqlUsername;Password=$using:SqlPass;Trusted_Connection=False;Encrypt=True;Connection Timeout=30;")

        # Open the SQL connection
        $Conn.Open()

        # SQL command to find tables and their average fragmentation
        $SQLCommandString = @"
SELECT a.object_id, b.name, (SELECT name FROM sys.tables t WHERE t.object_id = b.object_id) AS tablename, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (
    DB_ID(N'$using:Database')
    , OBJECT_ID(0)
    , NULL
    , NULL
    , NULL) AS a
JOIN sys.indexes AS b
    ON a.object_id = b.object_id AND a.index_id = b.index_id;
"@
        # Return the tables with their corresponding average fragmentation
        $Cmd = New-Object System.Data.SqlClient.SqlCommand($SQLCommandString, $Conn)
        $Cmd.CommandTimeout = 120

        # Execute the SQL command
        $FragmentedTable = New-Object System.Data.DataSet
        $Da = New-Object System.Data.SqlClient.SqlDataAdapter($Cmd)
        [void]$Da.Fill($FragmentedTable)

        # Rebuild the indexes that exceed the fragmentation threshold
        ForEach ($FragTable in $FragmentedTable.Tables[0])
        {
            If ($FragTable.avg_fragmentation_in_percent -ge $Using:FragPercentage)
            {
                Write-Verbose ("Index found : " + $FragTable.name + " on table: " + $FragTable.tablename)
                $SQLCommandString = "EXEC('ALTER INDEX " + $FragTable.name + " ON " + $FragTable.tablename + " REBUILD')"
                $Cmd2 = New-Object System.Data.SqlClient.SqlCommand($SQLCommandString, $Conn)

                # Keep the timeout below 30 minutes since the job gets requeued if it runs longer;
                # 1500 seconds = 25 minutes to be safe.
                $Cmd2.CommandTimeout = 1500

                Try
                {
                    $Ds = New-Object System.Data.DataSet
                    $Da = New-Object System.Data.SqlClient.SqlDataAdapter($Cmd2)
                    [void]$Da.Fill($Ds)
                }
                Catch
                {
                    Write-Verbose ($FragTable.name + " on table " + $FragTable.tablename + " could NOT be indexed.")
                }
            }
        }

        $Conn.Close()
    }

    Write-Verbose "Finished Indexing"
}

Pass a PowerShell variable into a SQL value during out-datatable (invoke-sqlcmd2)

I want to insert a PowerShell variable value with a Select as I build a datatable from a SQL query.
I borrowed the function invoke-sqlcmd2 from the TechNet Gallery and dot-sourced it in.
$NewSequenceID = invoke-sqlcmd2 -ServerInstance "MyServer" -Database "MyDB" -Query "INSERT INTO [Sequence] (TimeStarted) SELECT GETDATE(); SELECT max(SequenceID) as SequenceID FROM [Sequence]" | foreach { $_.SequenceID }
This generates a new sequence ID and stamps the time we started the batch; it results in a single number which will identify this run (verified with write $NewSequenceID).
I want to keep later results from queries together with this SequenceID for analysis.
Then I have this:
$PollTime = Get-Date -format "yyyy-MM-dd HH:mm:ss"
Then I want to do this: (Edit: This statement is not working - error message at the bottom)
$AuditUserOutput = invoke-sqlcmd2 -ServerInstance "MyServer2" -Database "MyDB2" -Query "SELECT $NewSequenceID, $PollTime, [USERID], [PID], [UDATE] FROM [MyTable]" -As 'Datatable'
And do some things with the table, then write it after with write-datatable.
If I select NULL for the first two values and grab the other three from the existing table, it works fine. I want to add the $NewSequenceID and $PollTime from the previous statements.
I've read a dozen pages about using ` (backtick), $, {}, and on and on, but I haven't gotten it right. Can someone help with the correct syntax for inserting these variable values into the selection?
PS Error is: Exception calling "Fill" with "1" argument(s): "Invalid pseudocolumn "$NewSequenceID"."
You're interpolating the variables correctly in PowerShell. If I'm understanding this correctly, the problem is with your SQL query. I'm going to make an inference here, but I think this is probably what you want:
$AuditUserOutput = invoke-sqlcmd2 -ServerInstance "MyServer2" -Database "MyDB2" -Query "SELECT [NewSequenceID], [PollTime], [USERID], [PID], [UDATE] FROM [MyTable] WHERE NewSequenceID = '$NewSequenceID' AND PollTime = '$PollTime'" -As 'Datatable'
If not, please clarify by responding to the questions above.
I was able to work around this by first creating a variable to store the query text, which allowed for the natural substitution I needed:
$AuditUserQuery = "SELECT '$NewSequenceID', '$PollTime', [USERID], [PID], [UDATE] FROM [AUDITUSER]"
Then calling that variable as the $query when building the datatable.
This avoided the parameterization problem experienced before.
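One further tweak worth considering (my assumption, since downstream tools generally want named columns): alias the two constant values so the resulting DataTable has usable column names:
$AuditUserQuery = "SELECT '$NewSequenceID' AS SequenceID, '$PollTime' AS PollTime, [USERID], [PID], [UDATE] FROM [AUDITUSER]"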

T-SQL query to show table definition?

What is a query that will show me the full definition, including indexes and keys, for a SQL Server table? I want a pure query - I know SQL Studio can give this to me, but I am often on "wild" computers that have only the most bare-bones apps, and I have no rights to install Studio. SQLCMD, however, is always an option.
UPDATE: I have tried sp_help, but it just yields one record which shows Name, Owner, Type and Created_Datetime. Is there something else I am missing with sp_help?
Here is what I call:
sp_help airports
Note that I really do want the DDL that defines the table.
There is no easy way to return the DDL. However, you can get most of the details from the Information Schema Views and the System Views.
SELECT ORDINAL_POSITION, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Customers'

SELECT CONSTRAINT_NAME
FROM INFORMATION_SCHEMA.CONSTRAINT_TABLE_USAGE
WHERE TABLE_NAME = 'Customers'

SELECT name, type_desc, is_unique, is_primary_key
FROM sys.indexes
WHERE [object_id] = OBJECT_ID('dbo.Customers')
Have you tried sp_help?
sp_help 'TableName'
Visit http://www.stormrage.com/SQLStuff/sp_GetDDL_Latest.txt.
You will find the code of the sp_GetDDL procedure for SQL Server.
The purpose of the procedure is to script any table, temp table or object.
USAGE:
exec sp_GetDDL GMACT
or
exec sp_GetDDL 'bob.example'
or
exec sp_GetDDL '[schemaname].[tablename]'
or
exec sp_GetDDL #temp
I tested it on SQL Server 2012, and it does an excellent job.
I'm not the author of the procedure. Any improvement you make to it, send to Lowell Izaguirre (scripts@stormrage.com).
The easiest and quickest way I can think of would be to use sp_help
sp_help 'TableName'
Use this little Windows command-line app that gets the CREATE TABLE script (with constraints) for any table. I've written it in C#. Just compile it and carry it on a memory stick. Perhaps someone can port it to PowerShell.
using System;
using System.Linq;
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

namespace ViewSource
{
    public class ViewSource
    {
        public static void Main(string[] args)
        {
            if (args.Length != 6)
            {
                Console.Error.WriteLine("Syntax: ViewSource.exe <server>" +
                    " <user> <password> <database> <schema> <table>");
                return; // don't index into args when too few were supplied
            }

            Script(args[0], args[1], args[2], args[3], args[4], args[5]);
        }

        private static void Script(string server, string user,
            string password, string database, string schema, string table)
        {
            new Server(new ServerConnection(server, user, password))
                .Databases[database]
                .Tables[table, schema]
                .Script(new ScriptingOptions { SchemaQualify = true,
                    DriAll = true })
                .Cast<string>()
                .Select(s => s + "\n" + "GO")
                .ToList()
                .ForEach(Console.WriteLine);
        }
    }
}
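A hypothetical invocation (all six arguments are placeholders, matching the syntax string the program prints) would look like this:
ViewSource.exe myserver myuser MyP@ssw0rd AdventureWorks2008R2 dbo Customers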
Since SQL Server 2012 you can run the following statement:
EXEC sp_describe_first_result_set @tsql = N'SELECT * FROM <yourtable>'
If you enter a complex select statement (joins, subselects, etc.), it will give you the definition of the result set. This is very handy if you need to create a new table (or temp table) and you don't want to check every single field definition manually.
https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-describe-first-result-set-transact-sql
sp_help 'YourTableName'
This will return columns, datatypes, and indexes defined on the table:
--List all tables in DB
select * from sysobjects where xtype = 'U'
--Table Definition
sp_help TableName
This will return triggers defined on the table:
--Triggers in SQL Table
select * from sys.triggers where parent_id = object_id(N'SQLTableName')
I know it's an old question, but it's exactly what I was looking for. Because I want to batch-script some tables, I rewrote the C# code from Anthony Faull for PowerShell.
This one uses Integrated Security:
Import-Module sqlps
$serverInstance = "<server>"
$database = "<database>"
$table = "<table>"
$schema = "<schema>"
$options = New-Object -TypeName Microsoft.SqlServer.Management.Smo.ScriptingOptions
$options.DriAll = $true
$options.SchemaQualify = $true
$connection = New-Object -TypeName Microsoft.SqlServer.Management.Common.ServerConnection `
-ArgumentList $serverInstance
$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server `
-ArgumentList $connection
$server.Databases.Item($database).Tables.Item($table, $schema).Script($options) `
| ForEach-Object -Process { $_ + "`nGO"}
And here with username and password:
Import-Module sqlps
$serverInstance = "<server>"
$user = "<user>"
$password = "<pasword>"
$database = "<database>"
$table = "<table>"
$schema = "<schema>"
$options = New-Object -TypeName Microsoft.SqlServer.Management.Smo.ScriptingOptions
$options.DriAll = $true
$options.SchemaQualify = $true
$connection = New-Object -TypeName Microsoft.SqlServer.Management.Common.ServerConnection `
-ArgumentList $serverInstance
$connection.LoginSecure = $false
$connection.Login = $user
$connection.Password = $password
$server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server `
-ArgumentList $connection
$server.Databases.Item($database).Tables.Item($table, $schema).Script($options) `
| ForEach-Object -Process { $_ + "`nGO"}
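To batch-script several tables, a minimal extension of the above (reusing $server, $database, $schema, and $options; the table list and output folder are placeholders):
$tables = "Customers", "Orders", "OrderDetails"
foreach ($t in $tables) {
    $server.Databases.Item($database).Tables.Item($t, $schema).Script($options) |
        ForEach-Object -Process { $_ + "`nGO" } |
        Out-File "C:\temp\$t.sql"
}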
Simply type the table name, select it, and press ALT + F1.
Say your table name is Customer: open a new query window, type the table name, select it, and press ALT + F1. (ALT + F1 is the default SSMS shortcut for sp_help.)
It will show the complete definition of the table.
A variation of @Anthony Faull's answer for those using LINQPad:
new Server(new ServerConnection(this.Connection.DataSource))
.Databases[this.Connection.Database]
.Tables["<table>", "dbo"]
?.Script(new ScriptingOptions {
SchemaQualify = true,
DriAll = true,
})
You'll need to reference 2 assemblies:
Microsoft.SqlServer.ConnectionInfo.dll
Microsoft.SqlServer.Smo.dll
And add namespace references as mentioned in Anthony's snippet.
SELECT ORDINAL_POSITION, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'EMPLOYEES'
Try the sp_help stored procedure:
sp_help '<TableName>'
Another way is to execute the sp_columns procedure:
EXEC sys.sp_columns @TABLE_NAME = 'YourTableName'
As an addition to Barry's answer: sp_help can also be used by itself to iterate over all of the objects in a particular database. You also have sp_helptext for your arsenal, which scripts out programmatic elements, like stored procedures.
The general table definition is shown using the stored procedure sp_help, as said before:
sp_help 'table_name_in_current_db_context'
When using tables from multiple contexts, it is useful to prefix the command above with the desired database instead of switching with use db_xyz:
DB_Products..sp_help 'MyTable' -- for definition of MyTable in DB_Products
And it works with temp tables also:
tempdb..sp_help '#TempTable' -- for definition of #TempTable in current context.
There is an easy way to get DDL scripts for any database object.
1. Open SQL Server Management Studio.
2. Connect to the source database server.
3. Expand the databases tree.
4. Right-click on the database with the tables to export.
5. In the sub-menu, expand "Tasks".
6. In the sub-menu, choose "Generate Scripts..."
7. Use the wizard to choose the objects to export.
8. For the scripting options, choose 1 script per object.
This will export a file for every table, view, stored proc, user, role, or schema you chose. Take that to the destination computer.
