I want to do a very simple thing with DSC (Desired State Configuration):
Stop a Windows service, deploy files, and finally start the service again. So I had the following:
Service ServicesStop
{
    Name = "TheTestService"
    State = "Stopped"
}
File CopyDeploymentBits
{
    Ensure = "Present"
    Type = "Directory"
    Recurse = $true
    SourcePath = $applicationPath
    DestinationPath = $deployOutputPath
}
Service ServicesStart
{
    Name = "TheTestService"
    StartupType = "Automatic"
    State = "Running"
}
But unfortunately this does not work, because it is not allowed to have the same service name (Name = "TheTestService") in a configuration twice (why? In this case it would make total sense). As a workaround I tried something like this:
Configuration MyTestConfig {
    Node $env:COMPUTERNAME {
        Service ServicesStop
        {
            Name = "TheTestService"
            State = "Stopped"
        }
        File CopyDeploymentBits
        {
            Ensure = "Present"
            Type = "Directory"
            Recurse = $true
            SourcePath = $applicationPath
            DestinationPath = $deployOutputPath
        }
    }
}
Configuration MyTestConfig2 {
    Node $env:COMPUTERNAME {
        Service ServicesStart
        {
            Name = "TheTestService"
            StartupType = "Automatic"
            State = "Running"
        }
    }
}
MyTestConfig
MyTestConfig2
Looks insane - but it works!
Unfortunately, I am not using plain DSC; I am using it with Microsoft Release Management, and there it seems that 'MyTestConfig2' is no longer executed (or something else goes wrong that is not mentioned in the logs).
How can I realize this simple scenario with DSC within the context of Release Management? Or is there an even better way to do something like this?
The simplest way is to create a service resource that takes both Name and State as key. Feel free to extend this simple service resource (I will try to get to it when I find time): https://github.com/nanalakshmanan/nServiceManager
After Daniel Mann's post I came up with this minimal solution:
[DscResource()]
class InstallStopCopyStartServiceResource {
    [DscProperty(Key)]
    [string] $ServiceName

    [DscProperty(Mandatory)]
    [string] $SourcePath

    [DscProperty(Mandatory)]
    [string] $DestinationPath

    [void] Set()
    {
        $needsInstallation = $false
        $testService = Get-Service | Where-Object { $_.Name -eq $this.ServiceName }
        if ($null -eq $testService)
        {
            $needsInstallation = $true
        }
        elseif ($testService.Status -eq "Running")
        {
            Stop-Service $this.ServiceName
        }
        # Due to stupid Copy-Item behavior we first delete all old files
        # (https://social.technet.microsoft.com/Forums/office/en-US/20b9d259-90d9-4e51-a125-c0f3dafb498c/copyitem-not-overwriting-exising-files-but-creating-additional-subfolder?forum=winserverpowershell)
        Remove-Item -Path $this.DestinationPath -Recurse -Force -ErrorAction SilentlyContinue
        # Copy files
        Copy-Item -Path $this.SourcePath -Destination $this.DestinationPath -Recurse -Force
        if ($needsInstallation)
        {
            # Install the service (the call operator avoids the quoting pitfalls of Invoke-Expression)
            & 'C:\Windows\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe' "$($this.DestinationPath)\service.exe"
        }
        # Start the service
        Start-Service $this.ServiceName
        # Configure the service
        Set-Service $this.ServiceName -StartupType "Automatic"
    }

    [bool] Test()
    {
        # Always perform all steps
        return $false
    }

    [InstallStopCopyStartServiceResource] Get()
    {
        return $this
    }
}
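For reference, a minimal sketch of how this resource might be consumed in a configuration. The module name InstallStopCopyStartService is hypothetical; class-based resources must live in a module that is present on the target node, and the paths are placeholders:
Configuration DeployTestService {
    # Hypothetical module name containing the class above
    Import-DscResource -ModuleName InstallStopCopyStartService

    Node $env:COMPUTERNAME {
        InstallStopCopyStartServiceResource TheTestService {
            ServiceName     = 'TheTestService'
            SourcePath      = 'C:\Deploy\TheTestService'
            DestinationPath = 'C:\Services\TheTestService'
        }
    }
}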
At least for me the following works best:
# Configure the Service 1st
Service Servicewuauserv {
    Name = 'wuauserv'
    BuiltInAccount = 'LocalSystem'
    State = 'Running'
}
And then:
# Ensure it is running
ServiceSet wuauserv {
    Name = 'wuauserv'
    BuiltInAccount = 'LocalSystem'
    State = 'Running'
}
Yep, it makes things more complex, but the split seems to work best with some services.
I have a script from here; this is the job:
function CaptureWeight {
    Start-Job -Name WeightLog -ScriptBlock {
        filter timestamp {
            $sw.WriteLine("$(Get-Date -Format MM/dd/yyyy_HH:mm:ss) $_")
        }
        try {
            $sw = [System.IO.StreamWriter]::new("$using:LogDir\$FileName$(Get-Date -f MM-dd-yyyy).txt")
            & "$using:PlinkDir\plink.exe" -telnet $using:SerialIP -P $using:SerialPort | TimeStamp
        }
        finally {
            $sw.ForEach('Flush')
            $sw.ForEach('Dispose')
        }
    }
}
I'd like to get this to run against a list of IP addresses, while also having a name associated with each IP to set the file name for each log. I was thinking something like $Name = Myfilename and $name.IP = 1.1.1.1 and using those in place of $FileName and $SerialIP, but I have yet to get anything close to working, or to find an example close enough to what I'm trying for.
Thanks
Here is one way you could do it with a hash table, as Theo mentioned in his helpful comment. Be aware that jobs don't have a ThrottleLimit parameter, as opposed to Start-ThreadJob or ForEach-Object -Parallel. Since jobs run in a separate process, as you have already noted, rather than in instances / runspaces, there is no built-in way to control how many jobs can run at the same time. If you wish to have control over this, you would need to code it yourself (a rough sketch follows the example below).
# define IPs as Key and FileName as Value
$lookup = @{
    '1.2.3.4'      = 'FileNameForThisIP'
    '192.168.1.15' = 'AnotherFileNameForTHatIP'
}
# path to the directory containing plink.exe
$plink = 'path\to\plinkdirectory'
# path to log directory
$LogDir = 'path\to\logDirectory'
# serial port
$serialport = 123

$jobs = foreach($i in $lookup.GetEnumerator()) {
    Start-Job -Name WeightLog -ScriptBlock {
        filter timestamp {
            $sw.WriteLine("$(Get-Date -Format MM/dd/yyyy_HH:mm:ss) $_")
        }
        try {
            $path = Join-Path $using:LogDir -ChildPath ('{0}{1}.txt' -f $using:i.Value, (Get-Date -f MM-dd-yyyy))
            $sw = [System.IO.StreamWriter]::new($path)
            $sw.AutoFlush = $true
            & "$using:plink\plink.exe" -telnet $using:i.Key -P $using:serialPort | TimeStamp
        }
        finally {
            $sw.ForEach('Dispose')
        }
    }
}
$jobs | Receive-Job -AutoRemoveJob -Wait
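As a rough illustration of the "code it yourself" remark above, here is a minimal throttling sketch; the limit of 5 is arbitrary, and the script block is elided because it is identical to the one in the example:
$throttleLimit = 5  # arbitrary value for this sketch
$jobs = foreach($i in $lookup.GetEnumerator()) {
    # block until fewer than $throttleLimit jobs are still running
    while (@(Get-Job -State Running).Count -ge $throttleLimit) {
        Start-Sleep -Milliseconds 200
    }
    Start-Job -Name WeightLog -ScriptBlock {
        # ... same script block as in the example above ...
    }
}
$jobs | Receive-Job -AutoRemoveJob -Wait
Note that Get-Job counts every job in the session, so this sketch assumes no unrelated jobs are running.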
The other alternative to the hash table could be to use a CSV, either from a file with Import-Csv or hardcoded with ConvertFrom-Csv.
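For example, a hardcoded equivalent of the lookup above might look like this (the column names IP and FileName are illustrative):
$lookup = @'
IP,FileName
1.2.3.4,FileNameForThisIP
192.168.1.15,AnotherFileNameForTHatIP
'@ | ConvertFrom-Csv

foreach($i in $lookup) {
    # $i.IP and $i.FileName take the place of $i.Key and $i.Value
}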
Adding here another alternative to my previous answer, using a RunspacePool instance, which has a built-in way of handling concurrency and enqueuing.
using namespace System.Management.Automation.Runspaces

try {
    # define the number of threads that can run at the same time
    $threads = 10
    # define IPs as Key and FileName as Value
    $lookup = @{
        '1.2.3.4'      = 'FileNameForThisIP'
        '192.168.1.15' = 'AnotherFileNameForTHatIP'
    }
    # path to the directory containing plink.exe
    $plink = 'path\to\plinkdirectory\'
    # path to log directory
    $LogDir = 'path\to\logDirectory'
    # serial port
    $port = 123

    $iss = [initialsessionstate]::CreateDefault2()
    $rspool = [runspacefactory]::CreateRunspacePool(1, $threads, $iss, $Host)
    $rspool.ApartmentState = 'STA'
    $rspool.ThreadOptions = 'ReuseThread'

    # session variables that will be initialized with the runspaces
    $rspool.InitialSessionState.Variables.Add([SessionStateVariableEntry[]]@(
        [SessionStateVariableEntry]::new('plink', $plink, '')
        [SessionStateVariableEntry]::new('serialport', $port, '')
        [SessionStateVariableEntry]::new('logDir', $LogDir, '')
    ))
    $rspool.Open()

    $rs = foreach($i in $lookup.GetEnumerator()) {
        $ps = [powershell]::Create().AddScript({
            param($pair)

            filter timestamp {
                $sw.WriteLine("$(Get-Date -Format MM/dd/yyyy_HH:mm:ss) $_")
            }
            try {
                $path = Join-Path $LogDir -ChildPath ('{0}{1}.txt' -f $pair.Value, (Get-Date -f MM-dd-yyyy))
                $sw = [System.IO.StreamWriter]::new($path)
                $sw.AutoFlush = $true
                & "$plink\plink.exe" -telnet $pair.Key -P $serialport | TimeStamp
            }
            finally {
                $sw.ForEach('Dispose')
            }
        }).AddParameter('pair', $i)
        $ps.RunspacePool = $rspool

        @{
            Instance    = $ps
            AsyncResult = $ps.BeginInvoke()
        }
    }

    foreach($r in $rs) {
        try {
            $r.Instance.EndInvoke($r.AsyncResult)
            $r.Instance.Dispose()
        }
        catch {
            Write-Error $_
        }
    }
}
finally {
    $rspool.ForEach('Dispose')
}
I have this code which checks for the existence of a project in SSISDB. On the first run I make sure the project is there, and it returns the correct value. But then I delete the project and run the code again, and it returns 1 again. When I restart the session it starts returning the correct answer again. What is the problem, and how can I solve it?
import-module sqlserver;
$TargetInstanceName = "localhost\default"
$TargetFolderName = "FolderForTesting";
$ProjectName = "ProjectTesting";
$catalog = Get-Item SQLSERVER:\SSIS\$TargetInstanceName\Catalogs\SSISDB\
$folder = $catalog.Folders["$TargetFolderName"];
$project = $folder.Projects["$ProjectName"];
if($null -eq $project){
    Return 0
} else {
    Return 1
}
Combining my own and Theo's helpful comments into a possible solution:
import-module sqlserver;
$TargetInstanceName = "localhost\default"
$TargetFolderName = "FolderForTesting";
$ProjectName = "ProjectTesting";
try {
    $catalog = Get-Item SQLSERVER:\SSIS\$TargetInstanceName\Catalogs\SSISDB\
    $folder = $catalog.Folders[ $TargetFolderName ]
    $project = $folder.Projects[ $ProjectName ]
    if($null -eq $project){
        Return 0
    } else {
        $project.Refresh() # Causes an exception if project actually doesn't exist
        Return 1
    }
}
catch {
    return 0
}
This is based on Refreshing the SQL Server PowerShell Provider, PS + SQLPS refreshing the SQL Server object, and your own testing. I couldn't find any official information regarding the topic.
I have put together a nice PowerShell script to script out the objects (tables, functions, sprocs, etc.) from a database, limiting it to the ones in a list.
But I am stuck trying to find a way to script the database itself. Each time I do that, it seems to try to script out the whole database (it is way too large for that to go well).
Assuming I have a $db variable that is a reference to my database, how can I use SMO to script out that database, creating it with the same Properties and DatabaseScopedConfigurations, but none of the actual objects in it?
Update:
For reference, here is my current script. It takes a server and database name and scripts out all the objects listed in a file called DbObjectList.txt (assuming they are in the database). But it does not actually create the database. The database I am running this on is a legacy one, and it has a bunch of odd options set. I would like to preserve those.
$serverName = "MyServerName"
$databaseName = "MyDbName"
$date_ = (date -f yyyyMMdd)
$path = ".\" + "$date_"

# Load the SQL Server Management Objects (SMO) and send output to null so we don't show the dll details.
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') > $null

# Set up the scripting options
$scriptOptions = new-object ('Microsoft.SqlServer.Management.Smo.ScriptingOptions')
$scriptOptions.ExtendedProperties = $true
$scriptOptions.AnsiPadding = $true
$scriptOptions.ClusteredIndexes = $true
# Dri = Declarative Referential Integrity
$scriptOptions.DriAll = $true
$scriptOptions.Triggers = $true
$scriptOptions.NoCollation = $false
$scriptOptions.SchemaQualify = $true
$scriptOptions.ScriptSchema = $true
$scriptOptions.EnforceScriptingOptions = $true
$scriptOptions.SchemaQualifyForeignKeysReferences = $true
$scriptOptions.NonClusteredIndexes = $true
$scriptOptions.Statistics = $true
$scriptOptions.Permissions = $true
$scriptOptions.OptimizerData = $true

# Get a reference to the database we are going to be scripting from
$serverInstance = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $serverName
$db = $serverInstance.Databases | Where-Object {$_.Name -eq $databaseName}
$dbname = "$db".replace("[","").replace("]","")
$dbpath = "$path" + "\" + "$dbname" + "\"
if ( !(Test-Path $dbpath))
{
    $null = new-item -type directory -name "$dbname" -path "$path"
}
# Load the list of db objects we want to script.
$listPath = ".\DbObjectList.txt"
if ((Test-Path $listPath))
{
    $dbListItems = Get-Content -Path $listPath
}
else
{
    throw "Could not find DbObjectList.txt file (it should have a list of what to script)."
}

# Set up the output file, removing any existing one
$outFile = "$dbpath" + "FullScript.sql"
if ((Test-Path $outFile)){ Remove-Item $outFile }

$typeDelimiter = "=========="
foreach ($dbListItem in $dbListItems)
{
    # Let the caller know which one we are working on.
    echo $dbListItem
    if ($dbListItem.StartsWith($typeDelimiter))
    {
        # Pull the type out of the header
        $startIndex = $typeDelimiter.Length;
        $stopIndex = $dbListItem.LastIndexOf($typeDelimiter)
        $type = $dbListItem.Substring($startIndex, $stopIndex - $startIndex).Trim()
        continue;
    }
    if ($null -eq $type)
    {
        throw "Types not included in DbObjectList.txt. Add types before groups of objects, surrounded by " + $typeDelimiter
    }
    foreach ($dbObjectToScript in $db.$type)
    {
        $objName = "$dbObjectToScript".replace("[","").replace("]","")
        $compareDbListItem = "$dbListItem".replace("[","").replace("]","")
        if ($compareDbListItem -eq $objName)
        {
            "-- " + $dbListItem | out-File -Append $outFile
            $dbObjectToScript.Script($scriptOptions) + "GO" | out-File -Append $outFile
        }
    }
}
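One possibility worth trying (a sketch, not verified against a legacy database like yours): the SMO Database object is itself scriptable, so calling Script() on $db should emit the CREATE DATABASE statement plus ALTER DATABASE statements for its options, without any of the contained objects. Whether DatabaseScopedConfigurations come along may depend on your SMO version.
# Sketch: script only the database itself, reusing $db, $scriptOptions
# and $outFile from the script above.
"-- Database: $dbname" | out-File -Append $outFile
$db.Script($scriptOptions) + "GO" | out-File -Append $outFile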
I'm trying to set up replication in RavenDB using PowerShell DSC, but I get this error in the TestScript script block when I try to compile the configuration:
PSDesiredStateConfiguration\Node : Error formatting a string: Input string was not in a correct format.
Here is my scriptblock:
TestScript = {
    $result = Invoke-WebRequest -Method GET "http://localhost:8080/Databases/Test/Docs/Raven/Replication/Destinations" -UseBasicParsing
    $ravenSlaves = "{0}".Split(",")
    foreach($ravenSlave in $ravenSlaves)
    {
        if($result -notmatch $ravenSlave)
        {
            return $false
        }
    }
    return $true
} -f ($Node.RavenSlaves)
And RavenSlaves is defined as a string in my ConfigurationData for the nodes, like this:
@{
    NodeName = "localhost"
    WebApplication = "test"
    Role = "Master server"
    RavenSlaves = "server1,server2"
}
The problem seems to be connected to the foreach iterating over the $ravenSlaves variable, because if I remove the foreach (and the if inside it), the configuration compiles and the MOF file is created.
Kiran led me to the right solution with his comment about using the $using: scope modifier in the configuration.
I edited the RavenSlaves property on the node to be an array like this:
@{
    NodeName = "localhost"
    WebApplication = "test"
    Role = "Master server"
    RavenSlaves = @("server1,server2")
}
And then I changed the TestScript block to be like this:
TestScript = {
    $result = Invoke-WebRequest -Method GET "http://localhost:8080/Databases/Test/Docs/Raven/Replication/Destinations" -UseBasicParsing
    $ravenSlaves = $Node.RavenSlaves
    foreach($ravenSlave in $using:ravenSlaves)
    {
        if($result -notmatch $ravenSlave)
        {
            return $false
        }
    }
    return $true
}
The script compiled and ran on the server, and the replication document in RavenDB was correct.
I'm trying to optimize my PowerShell script a little.
I have a lot of log (text) files whose content I need to search for a specific text entry.
If the entry is found, I need the script to trigger an insert into an SQL database.
This is what I have for now:
$tidnu = (Get-Date -f dd.MM.yyyy)
$Text = "ERROR MESSAGE STACK"
$PathArray = @()
$NodeName = "SomeName"
$Logfil = "SomeLogFile"

Get-ChildItem $Path -Filter "*ORA11*.log" |
    Where-Object { $_.Attributes -ne "Directory"} |
    ForEach-Object {
        If (Get-Content $_.FullName | Select-String -Pattern $Text)
        {
            $PathArray += $_.FullName
            $cmd.commandtext = "INSERT INTO ErrorTabel (Datotid, Nodename, Logfil, ErrorFound) VALUES('{0}','{1}','{2}','{3}')" -f $tidnu, $NodeName, $Logfil, "Yes"
            $cmd.ExecuteNonQuery()
        }
        else
        {
            $cmd.commandtext = "INSERT INTO ErrorTabel (Datotid, Nodename, ErrorFound) VALUES('{0}','{1}','{2}')" -f $tidnu, $NodeName, "No"
            $cmd.ExecuteNonQuery()
        }
    }
This is working okay, but when I need to move to another log file name, I have simply duplicated the code with different inputs.
What I would like to do is use an array and a foreach loop, so I could specify all the log files in one array, like:
$LogArray = @('Log1.log', 'log2.log', 'log3.log')
And specify all the node names like:
$NodeArray = @('Node1', 'Node2', 'Node3')
And then make a foreach loop that goes through the log files one by one and inserts into the database, with the matching node name, every time the loop runs.
Can someone help me make this happen? I have the idea of how it should be done, but I can't figure out how to write the code. All help would be much appreciated.
EDIT:
OK, this is what I have now, but I'm not sure it's put together correctly. It's giving me some strange results.
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = "Data Source=PCDK03918;Initial Catalog=Rman;Integrated Security=SSPI;"
$conn.open()
$cmd = New-Object System.Data.SqlClient.SqlCommand
$cmd.connection = $conn
$tidnu = (Get-Date -f dd.MM.yyyy)
$Path = "C:\RMAN"
$Text = "ERROR MESSAGE STACK"
$nodes = @{
    'NodeName1' = 'Node1log1.log', 'Node1log2.log', 'Node1log3.log'
    'NodeName2' = 'Node2log1.log', 'Node2log2.log'
}
foreach ($NodeName in $nodes.Keys) {
    foreach ($Logfil in $nodes[$NodeName]) {
        Get-ChildItem $Path -Filter "*.log" |
            ForEach-Object {
                If (Get-Content $_.FullName | Select-String -Pattern $Text)
                {
                    $cmd.commandtext = "INSERT INTO Error (Datotid, Nodename, Logfil, Error) VALUES('{0}','{1}','{2}','{3}')" -f $tidnu, $NodeName, $Logfil, "Yes"
                    $cmd.ExecuteNonQuery()
                }
                else
                {
                    $cmd.commandtext = "INSERT INTO Error (Datotid, Nodename, Logfil, Error) VALUES('{0}','{1}','{2}','{3}')" -f $tidnu, $NodeName, $Logfil, "No"
                    $cmd.ExecuteNonQuery()
                }
            }
    }
}
$conn.close()
I have created the log files mentioned in $nodes in the folder, and put the "ERROR MESSAGE STACK" into Node1log1.log and Node1log2.log. The rest of the log files have no "ERROR MESSAGE STACK" inside.
But the result in the database is strange. It says Error = Yes for log files with no "ERROR MESSAGE STACK" inside, and it says Error = No for the same log files some rows down. Plus it's inserting duplicate rows, and all in all it's not doing as intended.
Could it be because my
Get-ChildItem $Path -Filter "*.log" |
is wrong in using *.log?
Or am I simply going about this completely wrong?
EDIT once more:
Not sure what I was thinking yesterday, but I believe I have solved it now.
Get-ChildItem $Path -Filter "*.log" |
will of course not work.
Get-ChildItem $Path -Filter $logfil |
makes much more sense, and now my database output looks much more correct.
@Ansgar Wiechers - thank you for pointing me in the right direction. I learned a lot from this.
Consider using a hashtable for this:
$logs = @{
    'Log1.log' = 'Node1'
    'Log2.log' = 'Node2'
    'Log3.log' = 'Node3'
}
That way you can iterate over the logs like this:
foreach ($Logfil in $logs.Keys) {
    $NodeName = $logs[$Logfil]
    ...
}
If you have more than one log file per node name, it would be more efficient to reverse the mapping and store the log file names in an array:
$nodes = @{
    'Node1' = 'Log1.log', 'Log2.log', 'Log3.log'
    'Node2' = 'Log4.log', 'Log5.log'
}
Then you can process the logfiles with a nested loop like this:
foreach ($NodeName in $nodes.Keys) {
    foreach ($Logfil in $nodes[$NodeName]) {
        ...
    }
}
You should be able to fit your pipeline into either loop without further modifications.
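For instance, here is a sketch of the pipeline from the question placed inside the nested loop, assuming $cmd, $Path, $Text, and $tidnu are set up as in the question, and folding the duplicated INSERT into a single statement:
foreach ($NodeName in $nodes.Keys) {
    foreach ($Logfil in $nodes[$NodeName]) {
        Get-ChildItem $Path -Filter $Logfil | ForEach-Object {
            # "Yes" if the pattern is found in this log file, otherwise "No"
            $found = if (Get-Content $_.FullName | Select-String -Pattern $Text) { 'Yes' } else { 'No' }
            $cmd.CommandText = "INSERT INTO ErrorTabel (Datotid, Nodename, Logfil, ErrorFound) VALUES('{0}','{1}','{2}','{3}')" -f $tidnu, $NodeName, $Logfil, $found
            $null = $cmd.ExecuteNonQuery()
        }
    }
}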
Edit: As an optimization you could do something like this to avoid needlessly fetching logs with each iteration of the outer loop:
$logs = Get-ChildItem $Path -Filter '*.log'

foreach ($NodeName in $nodes.Keys) {
    $logs | ? { $nodes[$NodeName] -contains $_.Name } | % {
        ...
    }
}