I run Docker for Windows 18.03 on Windows 10. I use the microsoft/mssql-server-windows-express image (a Windows Server container) from a docker-compose file in my own VS 2017 project. What I'm trying to accomplish is to initialize the database from a script. I tried using the "command" option in the docker-compose.yml, but without much success...
Here's the docker-compose file:
myscustomservice:
  image: myscustomservice
  build:
    context: .\myscustomservice
    dockerfile: Dockerfile
db:
  image: microsoft/mssql-server-windows-express
  volumes:
    - ".\\data:C:\\data"
  #command: --init-file C:\\data\\CreateLocalDB.sql
  #command: "sqlcmd -U sa -P sUper45!pas5word -i C:\\data\\CreateLocalDB.sql"
  restart: always
  ports:
    - "1533:1433"
  environment:
    - "sa_password=sUper45!pas5word"
    - "ACCEPT_EULA=Y"
volumes:
  db-data:
Note that I have tried the two command lines that are commented out. The first one fails saying it can't find the file, and the second one replaces the image's normal command line, so the container doesn't start (or doesn't stay up).
On my local drive, I have a C:\myscustomservice\data folder with the file CreateLocalDB.sql in it. It is mounted into the container at C:\data (I can see it when I run PowerShell inside the container).
The SQL file looks like this:
USE MASTER
CREATE DATABASE [customDB_test]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'customDB_test', FILENAME = N'C:\data\customDB_test.mdf' , SIZE = 1594752KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'customDB_test_log', FILENAME = N'C:\data\customDB_test.ldf' , SIZE = 3584KB , MAXSIZE = 2048GB , FILEGROWTH = 10240KB )
GO
Does anyone have an idea how I could do this? All the examples on the net are for Linux containers, and this image is a Windows Server container.
OK, just so you know, I finally had to create another service that depends on "db" to call a PowerShell script that checks for database existence. If it doesn't exist, I call a SQL script to create it.
Here's the Dockerfile:
FROM microsoft/wcf:4.7.1
ARG source
# Creates a directory for custom application
RUN mkdir C:\MyCustomService
COPY . c:\\MyCustomService
# Remove existing default web site
RUN powershell -NoProfile -Command \
Import-module WebAdministration; \
Remove-WebSite -Name "'Default Web Site'"
# Configure the new site in IIS. Binds it to port 80 otherwise it won't work because it needs a default app listening on this port
RUN powershell -NoProfile -Command \
Import-module IISAdministration; \
New-IISSite -Name "MyCustomService" -PhysicalPath C:\MyCustomService -BindingInformation "*:80:";
# Add net.tcp support on the new site and change it to a web application.
RUN Import-Module WebAdministration; Set-ItemProperty "IIS:\\Sites\\MyCustomService" -name bindings -value (@{protocol='net.tcp';bindingInformation='808:*'},@{protocol='http';bindingInformation='*:80:'});
RUN windows\system32\inetsrv\appcmd.exe set app 'MyCustomService/' /enabledProtocols:"http,net.tcp"
# These instructions tell the container to listen on ports 80 and 808.
EXPOSE 80
EXPOSE 808
Here's the new docker-compose file:
myscustomservice:
  image: myscustomservice
  build:
    context: .\myscustomservice
    dockerfile: Dockerfile
  ports:
    - "83:80"
    - "1010:808"
  depends_on:
    - db
    - db-init
db:
  image: microsoft/mssql-server-windows-express
  volumes:
    - ".\\data:C:\\data"
  ports:
    - "1533:1433"
  environment:
    - "sa_password=sUper45!pas5word"
    - "ACCEPT_EULA=Y"
    - 'attach_dbs=[{"dbName":"customDB_test","dbFiles":["C:\\data\\customDB_test.mdf","C:\\data\\customDB_test.ldf"]}]'
volumes:
  db-data:
db-init:
  image: microsoft/mssql-server-windows-express
  volumes:
    - ".\\data:C:\\data"
  command: powershell -executionpolicy bypass "C:\\data\\initialize_db.ps1 -insertTestData"
  environment:
    - "sa_password=sUper45!pas5word"
    - "ACCEPT_EULA=Y"
  depends_on:
    - db
Note the "attach_dbs" environment variable in the db service. This way, it tries to bind to existing files, so the script run by db_init service will find the database and will not recreate it.
The PowerShell script "initialize_db.ps1":
param([switch]$insertTestData)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoExtended") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SmoEnum") | Out-Null
$server = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "db\SQLEXPRESS"
$database = "customDB_test"
$dbs = $server.Databases
$exists = $false
#This sets the connection to mixed-mode authentication
$server.ConnectionContext.LoginSecure=$false;
#This sets the login name
$server.ConnectionContext.set_Login("sa");
#This sets the password
$server.ConnectionContext.set_Password("sUper45!pas5word")
try
{
foreach ($db in $dbs)
{
Write-Host $db.Name
if($db.Name -eq $database)
{
Write-Host "Database already exist"
$exists = $true
}
}
}
catch
{
Write-Error "Failed to connect to $server"
}
if(-not $exists)
{
Write-Host "Database doesn't exist"
$StopWatch = [System.Diagnostics.Stopwatch]::StartNew()
sqlcmd -S 'db' -U sa -P 'sUper45!pas5word' -i 'C:\\data\\CreateLocalDB_schema.sql'
Write-Host "Database created"
$StopWatch.Elapsed
if($insertTestData)
{
Write-Host "Begining data insertion..."
sqlcmd -S 'db' -U sa -P 'sUper45!pas5word' -i 'C:\\data\\CreateLocalDB_data.sql'
Write-Host "Data inserted"
$StopWatch.Elapsed
}
$StopWatch.Stop()
}
sqlcmd -S 'db' -U sa -P 'sUper45!pas5word' -i 'C:\\data\\CreateLocalDB_user.sql'
This script runs at least one and up to three SQL scripts:
CreateLocalDB_user.sql - recreates the custom login and the db user so that you can connect with this user (a hypothetical sketch of this step follows below)
CreateLocalDB_schema.sql - creates the database itself if it doesn't exist
CreateLocalDB_data.sql - adds startup data if you specified the "insertTestData" switch in the call (in your docker-compose file, under the db-init service)
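The scripts themselves aren't shown here; purely as an illustration, an idempotent user-creation step could be inlined like this (the login name and password are placeholders, not from the original):
# Hypothetical sketch of what CreateLocalDB_user.sql might do, inlined via -Q.
$sql = @"
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'appUser')
    CREATE LOGIN [appUser] WITH PASSWORD = N'appUserPassw0rd!';
USE [customDB_test];
IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'appUser')
    CREATE USER [appUser] FOR LOGIN [appUser];
"@
sqlcmd -S 'db' -U sa -P 'sUper45!pas5word' -Q $sql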
The Dockerfile of the service exposes port 80 for http and port 808 for net.tcp, and my web.config file for "myscustomservice" exposes the WCF service like this:
<host>
<baseAddresses>
<add baseAddress="net.tcp://localhost:1010/myscustomservice/Customer.svc"/>
</baseAddresses>
</host>
The end game is to create a database when building a docker container, and persist the data so that if the container is removed, I can start the container again and have my database with my persisted data.
I'm using microsoft/mssql-server-windows-developer with Windows containers with docker-compose.
The relevant part of my docker-compose file is (other services removed):
version: "3.9" services:
db:
build:
context: .
dockerfile: Database/Dockerfile
volumes:
- C:\temp:C:\temp
ports:
- 1433:1433
Basically, the db Dockerfile runs a powershell script (very similar to https://github.com/microsoft/mssql-docker/blob/master/windows/mssql-server-windows-developer/start.ps1). My powershell script starts MSSQLSERVER, then runs SQL files to create a database and run the create-table, create-procs, etc. scripts.
All of this works. docker-compose build then docker-compose up will create and run my database on localhost and everything is great. But if I manipulate the data at all and remove the container, then call docker-compose up again, my data is gone.
Everything I've read about persisting data includes using attach_db. I would like to do some sort of if exists, attach_db else create database.
The question (finally)... Why don't I have an mdf file after I create the database? Am I supposed to? I've messed with different ways to add volumes but my volume is always empty. It doesn't appear I'm creating an mdf file to add to my volume.
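To sketch what I mean by "if exists, attach_db else create" (the file names here are made up, not my real ones):
# Hypothetical attach-or-create logic for the startup script.
$mdf = 'C:\temp\db1.mdf'
$ldf = 'C:\temp\db1_log.ldf'
if ((Test-Path $mdf) -and (Test-Path $ldf)) {
    Invoke-Sqlcmd -Query "CREATE DATABASE [db1] ON (FILENAME = N'$mdf'), (FILENAME = N'$ldf') FOR ATTACH;"
}
else {
    Invoke-Sqlcmd -InputFile '.\db1\creation.sql'
}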
EDIT - Adding the Dockerfile and the ps script the Dockerfile calls
Dockerfile:
FROM microsoft/mssql-server-windows-developer
ENV sa_password="nannynannybooboo" \
ACCEPT_EULA="Y" \
db1="db1" \
db2="db2"
EXPOSE 1433
RUN mkdir -p ./db1
RUN mkdir -p ./db2
COPY /Database/startsql.ps1 .
COPY /Database/db1/ ./db1
COPY /Database/db2/ ./db2
HEALTHCHECK CMD [ "sqlcmd", "-Q", "select 2" ]
RUN .\startsql -sa_password $env:sa_password -ACCEPT_EULA $env:ACCEPT_EULA -db_name $env:db2 -Verbose
RUN .\startsql -sa_password $env:sa_password -ACCEPT_EULA $env:ACCEPT_EULA -db_name $env:db1 -Verbose
startsql.ps1
# based off https://github.com/microsoft/mssql-docker/blob/master/windows/mssql-server-windows-developer/start.ps1
param(
[Parameter(Mandatory=$false)]
[string]$sa_password,
[Parameter(Mandatory=$false)]
[string]$ACCEPT_EULA,
[Parameter(Mandatory=$true)]
[string]$db_name
)
if($ACCEPT_EULA -ne "Y" -And $ACCEPT_EULA -ne "y")
{
Write-Verbose "ERROR: You must accept the End User License Agreement before this container can start."
Write-Verbose "Set the environment variable ACCEPT_EULA to 'Y' if you accept the agreement."
exit 1
}
# start the service
Write-Verbose "Starting SQL Server"
start-service MSSQLSERVER
if($sa_password -eq "_") {
if (Test-Path $env:sa_password_path) {
$sa_password = Get-Content -Raw $env:sa_password_path
}
else {
Write-Verbose "WARN: Using default SA password, secret file not found at: $secretPath"
}
}
Write-Verbose $sa_password
if($sa_password -ne "_")
{
Write-Verbose "Changing SA login credentials"
$sqlcmd = "ALTER LOGIN sa with password=" +"'" + $sa_password + "'" + ";ALTER LOGIN sa ENABLE;"
& sqlcmd -Q $sqlcmd
}
Write-Verbose "Started SQL Server"
Write-Verbose "Starting set up scripts..."
Write-Verbose $db_name
$exists = $true
$exists = @($sqlServer.Databases | % { $_.Name }) -contains $db_name
$creation = ".\"+$db_name+"\creation.sql"
$creation_rpt = ".\"+$db_name+"\creation.rpt"
$userdefined = ".\"+$db_name+"\userdefined.sql"
$userdefined_rpt = ".\"+$db_name+"\userdefined.rpt"
$presetup = ".\"+$db_name+"\pre.setup.sql"
$presetup_rpt = ".\"+$db_name+"\presetup.rpt"
$tables = ".\"+$db_name+"\tables.sql"
$tables_rpt = ".\"+$db_name+"\tables.rpt"
$procs = ".\"+$db_name+"\procs.sql"
$procs_rpt = ".\"+$db_name+"\procs.rpt"
$triggers = ".\"+$db_name+"\triggers.sql"
$triggers_rpt = ".\"+$db_name+"\triggers.rpt"
Write-Verbose $creation
Write-Verbose $exists
if ($exists -ne $true){
Write-Verbose "Starting creation script..."
Invoke-Sqlcmd -InputFile $creation | Out-File -FilePath $creation_rpt
Write-Verbose "Starting user defined script..."
Invoke-Sqlcmd -InputFile $userdefined | Out-File -FilePath $userdefined_rpt
Write-Verbose "Starting pre.setup script..."
Invoke-Sqlcmd -InputFile $presetup | Out-File -FilePath $presetup_rpt
Write-Verbose "Starting tables script..."
Invoke-Sqlcmd -InputFile $tables | Out-File -FilePath $tables_rpt
Write-Verbose "Starting triggers script..."
Invoke-Sqlcmd -InputFile $triggers | Out-File -FilePath $triggers_rpt
Write-Verbose "Starting procs script..."
Invoke-Sqlcmd -InputFile $procs | Out-File -FilePath $procs_rpt
}
Get-EventLog -LogName Application -Source "MSSQL*" -After (Get-Date).AddSeconds(-2) | Select-Object TimeGenerated, EntryType, Message
I can't share the sql files startsql calls, but 99% of the sql is SSMS-generated scripts from an existing DB that I am replicating. The 1% that isn't generated by SSMS is a command to link the two databases being created.
Volumes
You're spot on: volumes can (and should!) be used to persist your data.
Microsoft themselves have docs on how to persist data from containerised SQL servers, including the required commands:
https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-docker-container-configure?view=sql-server-ver15&pivots=cs1-bash#persist
However, this is for Linux, not Windows, so the paths will be different (very likely the defaults from a non-containerised install).
To find that location, you could use a query like the one described below, or hop into the container while it is running (using docker exec) and navigate around:
https://www.netwrix.com/how_to_view_sql_server_database_file_location.html
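For example, this query lists every database file and its physical path; run from the host via docker exec, it would look something like this (the container name is a placeholder, and sqlcmd here relies on Windows auth inside the container):
docker exec my_db_container sqlcmd -Q "SELECT name, physical_name FROM sys.master_files"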
When using volumes with docker-compose, the spec can be found here and is really simple to follow:
https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose
(Edit) Proof of Concept
I played around with the Windows container and managed to get the volumes working fine.
I ditched your Dockerfile, and just used the base container image, see below.
version: "3.9"
services:
  db:
    image: microsoft/mssql-server-windows-developer
    volumes:
      - .\db:C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA
    ports:
      - 1433:1433
    environment:
      SA_PASSWORD: "Abc12345678"
      ACCEPT_EULA: "Y"
This works for me because I specified the MDF file location upon database creation:
/*
https://learn.microsoft.com/en-us/sql/relational-databases/databases/create-a-database?view=sql-server-ver15
*/
USE master ;
GO
CREATE DATABASE Sales
ON
( NAME = Sales_dat,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\saledat.mdf',
SIZE = 10,
MAXSIZE = 50,
FILEGROWTH = 5 )
LOG ON
( NAME = Sales_log,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\salelog.ldf',
SIZE = 5MB,
MAXSIZE = 25MB,
FILEGROWTH = 5MB ) ;
GO
EXEC sp_databases ;
You can see that the filepath in the container correlates to the volume path in the docker-compose file. When I stopped the container, I could successfully see the mdf file in the .\db folder of my project. If you manage to locate the filepath by running your query, you can simply add it to the volume spec in the same fashion as above. When restarting the container, everything loaded fine, and the SP returned a valid list of all DBs.
Windows Containers
I knew they were regarded as a bad idea, but me oh my, did I not expect the base image to be 15GB.
This is ridiculously large and, depending on your use case, will present issues with the development and deployment process, simply in terms of the time required to download the image.
If you can use Linux containers for your purposes, I highly recommend it, as they are production ready, small, lightweight, and better supported. They can even be run as the developer edition, and the Microsoft docs clearly state how to persist data from these containers (a sample docker run follows the size comparison below).
Linux: https://hub.docker.com/_/microsoft-mssql-server
Windows: https://hub.docker.com/r/microsoft/mssql-server-windows-developer/
Ex:
# Using Windows containers in Powershell
PS> docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
microsoft/mssql-server-windows-developer latest 19873f41b375 3 years ago 15.1GB
# Using Linux containers in WSL
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
mcr.microsoft.com/mssql/server 2019-latest 56beb1db7406 10 days ago 1.54GB
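For reference, persisting data with the Linux image comes down to mounting a volume at /var/opt/mssql, the documented default data location (the password below is a throwaway example):
docker run -d -p 1433:1433 -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=Abc12345678" -v mssql-data:/var/opt/mssql mcr.microsoft.com/mssql/server:2019-latest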
I need to send a list of commands to several devices. For each IP, open an SSH connection with the given credentials from the User and Password textboxes, run all of the commands, and return the output to the Output textbox.
Normally I'd use
plink.exe -ssh admin@172.16.17.18 -pw PassW0rd "command"
Unfortunately, the remote host does not let me do that:
Sent password
Access granted
Opening session as main channel
Opened main channel
Server refused to start a shell/command
FATAL ERROR: Server refused to start a shell/command
However, if I connect without handing over a command:
Sent password
Access granted
Opening session as main channel
Opened main channel
Allocated pty (ospeed 38400bps, ispeed 38400bps)
Started a shell/command
Welcome to XXX
System_Name>#
Now, I can type my commands and have them executed. I tried Posh-SSH, which lets me connect, but any command times out.
I broke down the lines from the IP and Command boxes into string arrays and made for loops. Then I tried several approaches with Start-Job and System.Diagnostics.Process* without success.
Now I'm a bit clueless and would appreciate any help:
for ($a=0; $a -lt $IPArray.Length; $a++){
# Open an interactive Session with plink like
# plink.exe -ssh ' + $User + '@' + $IPArray[$a] + ' -pw ' + $Passwd
for ($b=0; $b -lt $CommandArray.Length; $b++){
# Send $CommandArray[$b] to plink-Session
# Wait for command to finish
# Read output and send it to the textbox
}
}
Edit: Thanks to Martin Prikryl's answer I'm a step further:
for ($a=0; $a -lt $IPArray.Length; $a++){
$User = $UserTextBox.text
$IP = $IPArray[$a]
# $Passwd = $PwdTextBox.Text
$Outputtext= $Outputtext + "~~~~~ " + $IP + " ~~~~~" + "`r`n"
$isSession = New-SSHSession -ComputerName $IP -Credential $User
$isStream = $isSession.Session.CreateShellStream("PS-SSH", 0, 0, 0, 0, 1000)
for ($b=0; $b -lt $CommandArray.Length; $b++){
$Command = $CommandArray[$b]
$isStream.WriteLine("$Command")
Start-Sleep -Seconds 1
}
$isReturn = $isStream.Read()
$Outputtext= $Outputtext + $isReturn + "`r`n"
$outputBox.Text = $Outputtext
}
returns:
~~~~~ 172.16.17.18 ~~~~~
Welcome to XXX
System_18>#echo 1
1
System_18>#echo 2
2
System_18>#ping 8.8.8.8
PING 8.8.8.8 56 data bytes
~~~~~ 172.16.17.19 ~~~~~
Welcome to XXX
System_19>#echo 1
1
System_19>#echo 2
2
System_19>#ping 8.8.8.8
PING 8.8.8.8 56 data bytes
~~~~~ 172.16.17.20 ~~~~~
Welcome to XXX
System_20>#echo 1
1
System_20>#echo 2
2
System_20>#ping 8.8.8.8
PING 8.8.8.8 56 data bytes
Now I need to achieve two things:
Get the credentials from the corresponding input fields (Currently, I need to type in the password once for each IP)
Done:
$User = $UserTextBox.text
$Passwd = ConvertTo-SecureString $PwdTextBox.Text -AsPlainText -Force
$SSHCred = New-Object System.Management.Automation.PSCredential -ArgumentList ($User, $Passwd)
# ...
$isSession = New-SSHSession -ComputerName $IP -Credential $SSHCred
Make it wait for a command to finish, before sending the next one. (Currently, it just waits 1 second)
However, I'm happy that the remote hosts now talk to me, at least.
Thank you.
Should I open new questions if I need further help with the script, or continue to log the progress here?
You are obviously connecting to some "device", not a full server. Embedded SSH implementations usually do not implement the SSH "exec" channel. You have to use the "shell" channel (which is otherwise not recommended for command automation).
With Plink, you can achieve that by writing the "command" to Plink's input, instead of using the -m switch.
See also Executing command using Plink does not work, but does in PuTTY.
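A minimal sketch of that approach, reusing the credentials from the question (the exact command list is up to you):
# Each piped line goes to the remote shell's input, as if typed interactively.
'echo 1', 'echo 2', 'exit' | & plink.exe -ssh admin@172.16.17.18 -pw PassW0rd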
Though you should not run an external application to implement SSH. Use a native .NET SSH implementation, like SSH.NET. Pseudocode to execute a command with SSH.NET over the "shell" channel:
var client = new SshClient(...);
client.Connect();
var shell = client.CreateShellStream(...);
shell.WriteLine("command");
var output = shell.Read();
The "shell" channel is a black box with an input and an output. There is no reliable way to use it to execute a command, read its output, and execute other commands. You have no API to tell when a command execution has ended. That is why I wrote above that the "shell" channel is not recommended for command automation. All you can do is to parse the shell output and look for your device's command prompt: System_20>.
I want to change the collation of a SQL Server instance programmatically using a powershell script. Here are the manual steps:
Stop the SQL Server instance
Go to directory location: "C:\Program Files\Microsoft SQL Server\MSSQL14.SQL2017\MSSQL\Binn"
Execute following command: sqlservr -c -m -T4022 -T3659 -s"SQL2017" -q"SQL_Latin1_General_CP1_CI_AS"
After the execution of the above command, following message displayed: "The default collation was successfully changed."
Then I need to press Ctrl+C to stop further execution. How can I do this programmatically?
When we execute the command to change the SQL Server collation, it logs the execution details in the Event Viewer application logs. Using a loop, we can continuously check the application logs for SqlServr.exe, and when it generates the log message "The default collation was successfully changed", we can kill the process.
#Take the time stamp before execution of Collation Change Command
$StartDateTime=(Get-Date).AddMinutes(-1)
# Execute the Collation Change Process
Write-Host "Executing SQL Server Collation Change Command"
$CollationChangeProcess=Start-Process -FilePath $SQLRootDirectory -ArgumentList "-c -m -T 4022 -T 3659 -s $JustServerInstanceName -q $NewCollationName" -NoNewWindow -PassThru
Do
{
$log=Get-WinEvent -FilterHashtable @{logname='application'; providername=$SQLServiceName; starttime=$StartDateTime} | Where-Object -Property Message -Match 'The default collation was successfully changed.'
IF($log.count -gt 0 -and $log.TimeCreated -gt $StartDateTime )
{
Stop-Process -ID $CollationChangeProcess.ID
write-host 'Collation Change Process Completed Successfully.'
break
}
$DateTimeNow=(Get-Date)
$Duration=$DateTimeNow-$StartDateTime
write-host $Duration.totalminutes
Start-Sleep -Seconds 2
IF ($Duration.totalminutes -gt 2)
{
write-host 'Collation Change Process Failed.'
break
}
}while (1 -eq 1)
Thanks @Murali Dhar Darshan. I've made some changes to your solution to make it easier to use. (I don't have a high enough reputation to add this as a comment to your answer.)
# Params
$NewCollationName="Danish_Norwegian_CI_AS"
$SQLRootDirectory="C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Binn\sqlservr.exe"
$SQLServiceName="MSSQLSERVER"
# Stop running SQL instance
net stop $SQLServiceName
#Take the time stamp before execution of Collation Change Command
$StartDateTime=(Get-Date).AddMinutes(-1)
# Execute the Collation Change Process
Write-Host "Executing SQL Server Collation Change Command"
$CollationChangeProcess=Start-Process -FilePath $SQLRootDirectory -ArgumentList "-c -m -T 4022 -T 3659 -q $NewCollationName" -NoNewWindow -passthru
Do
{
$log=Get-WinEvent -FilterHashtable @{logname='application'; providername=$SQLServiceName; starttime=$StartDateTime} | Where-Object -Property Message -Match 'The default collation was successfully changed.'
IF($log.count -gt 0 -and $log.TimeCreated -gt $StartDateTime )
{
Stop-Process -ID $CollationChangeProcess.ID
write-host 'Collation Change Process Completed Successfully.'
# Start SQL instance again
net start $SQLServiceName
break
}
$DateTimeNow=(Get-Date)
$Duration=$DateTimeNow-$StartDateTime
write-host $Duration.totalminutes
Start-Sleep -Seconds 2
IF ($Duration.totalminutes -gt 2)
{
write-host 'Collation Change Process Failed.'
Stop-Process -ID $CollationChangeProcess.ID
break
}
}while (1 -eq 1)
On SQL Server 2016, I have set up a job that executes a powershell script that resides on a remote app server. When I execute the powershell script from the app server using the PowerShell ISE, the script works without issue. I then set up the job and entered this command in Step 1:
powershell.exe -ExecutionPolicy Bypass -file "\\serverapp1\c$\coverageverifier_scripts\SFTP_CoverageVerifier.ps1"
When I look at the VIEW HISTORY, I see the error below, but I cannot figure out why the script now cannot load the file or assembly.
Any help/direction would be appreciated. Here is the error:
The job script encountered the following errors. These errors did not stop the script: A job step received an error at line 1 in a PowerShell script. The corresponding line is 'powershell.exe -ExecutionPolicy Bypass -File "\empqaapp1\c$\coverageverifier_scripts\SFTP_CoverageVerifier.ps1"'. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Add-Type : Could not load file or assembly '
Here is my powershell script as well:
# Load WinSCP .NET assembly
#Add-Type -Path "WinSCPnet.dll"
Add-Type -Path (Join-Path $PSScriptRoot "WinSCPnet.dll")
# Declare variables
$date = Get-Date
$dateStr = $date.ToString("yyyyMMdd")
# Define $filePath
$filePath = "C:\coverageverifier_scripts\TEST_cvgver.20190121.0101"
# Write-Output $filePath
# Set up session options for VERISK TEST/ACCEPTANCE SFTP SERVER
$sessionOptions = New-Object WinSCP.SessionOptions -Property @{
Protocol = [WinSCP.Protocol]::Sftp
HostName = "secureftpa.iso.com"
UserName = "account"
Password = "pw"
SshHostKeyFingerprint = "ssh-rsa 1111 xxx/xxxxxxxxx+3wuWNIkMY5GGgRObJisCPM9c9l7yw="
}
$session = New-Object WinSCP.Session
$session.ExecutablePath = "C:\WinSCP\WinSCP.exe"
try
{
# Connect
$session.Open($sessionOptions)
# Transfer files
$session.PutFiles($filePath,"/").Check()
}
finally
{
$session.Dispose()
}
Your Add-Type call does not include the path to the WinSCPnet.dll.
If you have the WinSCPnet.dll in the same folder as your PowerShell script, use this syntax:
Add-Type -Path (Join-Path $PSScriptRoot "WinSCPnet.dll")
Or use a full path.
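For instance (this folder is only an example):
Add-Type -Path "C:\WinSCP\WinSCPnet.dll"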
See also Loading assembly section in WinSCP .NET assembly article on Using WinSCP .NET Assembly from PowerShell.
Every week, I have to run a script that truncates a bunch of tables. Then I use the export data task to move the data to another server (same database name).
The servers aren't linked, I can't save the export job, and my permissions/settings are limited by the DBA (I am an admin on the databases). I have windows authentication on both servers only. The servers are different versions (2005/2008).
My question is: is there a way to automate this, given my limited ability to modify the servers? Perhaps using Powershell?
Selecting all these tables and stuff in the export wizard week after week is a pain.
If you have access to the SQL Server management console apps, try something like this from a different system.
C:\> bcp prod.dbo.[Table] out ExportImportFile.inp -b 10000 -S %SQLSERVER% -U %USERNAME% -P %PASSWORD% -c > C:\Temp\ExportImport.log
C:\> sqlcmd -S %SQLSERVER% -U %USERNAME% -P %PASSWORD% -Q "Use Prod;TRUNCATE TABLE [Table];" >> C:\Temp\ExportImport.log
C:\> bcp prod.dbo.[Table] in ExportImportFile.inp -b 10000 -S %SQLSERVER% -U %USERNAME% -P %PASSWORD% -c >> C:\Temp\ExportImport.log
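If picking all those tables in the wizard is the weekly pain, the same three commands can be driven from a PowerShell loop. A sketch with placeholder table names; since the question mentions Windows authentication only, it uses bcp's -T (trusted connection) instead of -U/-P:
# Placeholder list - the tables you truncate and reload each week.
$tables = 'dbo.Table1', 'dbo.Table2'
foreach ($t in $tables) {
    $file = "C:\Temp\$($t -replace '\.', '_').dat"
    & bcp "prod.$t" out $file -b 10000 -S $env:SQLSERVER -T -c    # export the data
    & sqlcmd -S $env:SQLSERVER -Q "USE Prod; TRUNCATE TABLE $t;" # clear the table
    & bcp "prod.$t" in $file -b 10000 -S $env:SQLSERVER -T -c    # reload the data
}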
You can use DBATOOLS:
$splat = @{
SqlInstance = '{source instance}'
Database = 'tempdb'
Destination = '{dest instance}'
DestinationDatabase = 'tempdb'
Table = 'table1' # you can provide a list of tables
AutoCreateTable = $true
Truncate = $true
}
Copy-DbaDbTableData @splat
If you don't have dbatools: https://dbatools.io/getting-started/
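If not, installing it from the PowerShell Gallery is usually all it takes:
Install-Module dbatools -Scope CurrentUser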