I have a PowerShell script that pulls in 1.4+ million rows of data and saves them to a HUGE CSV file that then gets imported into an SQL server. I thought there might be a way to have PowerShell insert the data into the SQL server directly, but I am not sure how.
One of my concerns is that I don't want to buffer up the AD results in memory and then write them. I'd rather write them in batches of 1000 or so, so memory consumption stays down. Get 1000 records, save to the SQL server, and repeat...
I see articles about how to get PowerShell to write to an SQL server, but they all seem to write either ALL the data at once or one record at a time, both of which seem inefficient to me.
This is the PowerShell script I have to query AD.
# the attributes we want to load
$ATTRIBUTES_TO_GET = "name,distinguishedName"
# split into an array
$attributes = $ATTRIBUTES_TO_GET.split(",")
# create a select string to be used when we want to dump the information
$selectAttributes = $attributes | ForEach-Object {@{n="AD $_";e=$ExecutionContext.InvokeCommand.NewScriptBlock("`$_.$($_.toLower())")}}
# get a directory searcher to search the GC
[System.DirectoryServices.DirectoryEntry] $objRoot = New-Object System.DirectoryServices.DirectoryEntry("GC://dc=company,dc=com")
[System.DirectoryServices.DirectorySearcher] $objSearcher = New-Object System.DirectoryServices.DirectorySearcher($objRoot)
# set properties
$objSearcher.SearchScope = "Subtree"
$objSearcher.ReferralChasing = "All"
# need to set page size otherwise AD won't return everything
$objSearcher.PageSize = 1000
# load the data we want
$objSearcher.PropertiesToLoad.AddRange($attributes)
# set the filter
$objSearcher.Filter = "(&(objectClass=group)(|(name=a*)(name=b*)))"
# get the data and export to csv
$objSearcher.FindAll() | select -expandproperty properties | select $selectAttributes | export-csv -notypeinformation -force "out.csv"
I use Out-DataTable to convert my object array into a DataTable object type, then use Write-DataTable to bulk insert that into a database (Write-DataTable uses SqlBulkCopy to do this).
Caveats/gotchas for this (SqlBulkCopy can be a nuisance to troubleshoot):
Make sure your properties are the correct type (string for varchar/nvarchar, int for any integer values; a dateTime can be a string as long as the format is correct and SQL can parse it)
Make sure your properties are in order and line up with the table you're inserting into, including any fields that auto-fill (incrementing ID key, RunDt, etc.); alternatively, map columns explicitly by name, as in the sketch below.
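To illustrate the second caveat: SqlBulkCopy lets you map source columns to destination columns by name so the order no longer has to line up. This is only a minimal sketch, not part of Write-DataTable itself; the connection string, table, and column names here are assumptions:
$connectionString = "Data Source=sql-server-1;Initial Catalog=org;Integrated Security=True"
$bulkCopy = New-Object System.Data.SqlClient.SqlBulkCopy($connectionString)
$bulkCopy.DestinationTableName = "dbo.employees"
# Map DataTable columns to destination columns by name; destination columns
# that are not mapped (identity key, RunDt, ...) are left for SQL to fill.
$null = $bulkCopy.ColumnMappings.Add("AD name", "Name")
$null = $bulkCopy.ColumnMappings.Add("AD distinguishedname", "DistinguishedName")
$bulkCopy.WriteToServer($dataTable)   # $dataTable produced by Out-DataTable
$bulkCopy.Close()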
Out-DataTable: https://gallery.technet.microsoft.com/scriptcenter/4208a159-a52e-4b99-83d4-8048468d29dd
Write-DataTable: https://gallery.technet.microsoft.com/scriptcenter/2fdeaf8d-b164-411c-9483-99413d6053ae
Usage
If I were to continue from your example and skip the CSV, this is how I would do it... replace the last two lines with the code below (assuming that your object properties line up with the table perfectly, your SQL server name is sql-server-1, the database name is org, and the table name is employees):
try {
Write-DataTable -ServerInstance sql-server-1 -Database org -TableName employees -Data $($objSearcher.FindAll() | Select-Object -expandproperty properties | Select-Object $selectAttributes | Out-DataTable -ErrorAction Stop) -ErrorAction Stop
}
catch {
$_
}
Looking at your code, it looks like you come from .NET or some language based on .NET. Have you heard of the cmdlets Get-ADUser / Get-ADGroup? This would simplify things tremendously for you.
As far as the SQL connection goes, PowerShell doesn't have any native support for it. Microsoft has made cmdlets for it, though! You just have to have SQL Server installed in order to get them... which is kind of a bummer, since SQL Server is heavy and not everyone wants to install it. It is still possible using .NET; it's just not very quick or pretty. I won't be giving advice on the cmdlets here; you can Google that. As far as .NET goes, I would start by reading some of the documentation on the System.Data.SqlClient namespace as well as some historical questions on the subject.
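To give you a starting point, here is a minimal sketch of a parameterized insert using System.Data.SqlClient from PowerShell; the server, database, table, and column names are all placeholders:
# Bare-bones .NET insert from PowerShell (sketch only; names are hypothetical).
$conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=myserver;Initial Catalog=mydb;Integrated Security=True")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "INSERT INTO dbo.ADGroups (Name, DistinguishedName) VALUES (@n, @dn)"
$null = $cmd.Parameters.AddWithValue("@n", "SomeGroup")
$null = $cmd.Parameters.AddWithValue("@dn", "CN=SomeGroup,DC=company,DC=com")
$null = $cmd.ExecuteNonQuery()
$conn.Close()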
Finally, as you said, it would be a good idea to try to avoid overloading your RAM. The big thing here is trying to keep your entire script down to one single AD query; this way you avoid the troubling scenario of data changing between one query and the next. I think the best way of doing this would be to save your results straight to a file. Once you have that, you could use SqlBulkCopy to insert into the table straight from your file. The downside to this is that it doesn't allow for multiple AD properties. At least I don't think SqlBulkCopy will allow for this?
Get-ADUser "SomeParamsHere" | Out-File ADOutput.txt
If you have to have multiple AD properties and still want to keep the RAM usage to a minimum... well, I toyed around with a script that would work, but it makes a few calls that read the entire file, which defeats the whole purpose. Your best option might be to save each property to a separate file and then do your whole write-to-DB thing. Example:
New-Item Name.txt
New-Item DistinguishedName.txt
Get-ADUser "SomeParamsHere" -Properties "Name,DistinguishedName" | Foreach {
Add-Content -Path "Name.txt" -Value "$_.Name"
Add-Content -PassThru "DistinguishedName.txt" -Value "$_.DistinguishedName"
}
Store the results of the last line of your code in a variable instead of exporting them to CSV.
Then create groups of the size you want.
Using Out-DataTable and Write-DataTable, write to SQL - links are in nferrell's answer.
$res = $objSearcher.FindAll() | select -expandproperty properties | select $selectAttributes
$counter = [pscustomobject] @{ Value = 0 }
#create groups with 1000 entries each
$groups = $res | Group-Object -Property { [math]::Floor($counter.Value++ / 1000) }
foreach ($group in $groups){
#convert to data table
$dt = $group.group | Out-DataTable
$dt | Write-DataTable -Database DB -ServerInstance SERVER -TableName TABLE
}
You're making this unnecessarily complicated.
If I read your code correctly, you want all groups starting with 'a' or 'b'.
# the attributes we want to export
$attributes = 'name', 'distinguishedName'
Import-Module ActiveDirectory
Get-ADGroup -Filter {(name -like "a*") -or (name -like "b*")} -SearchBase 'dc=company,dc=com' |
select $attributes | Export-Csv -NoTypeInformation -Force "out.csv"
Instead of using Export-Csv at the end, just pipe the output to the command which creates the SQL rows. By piping objects (instead of assigning them to a variable) you give PowerShell the ability to handle them efficiently (it will start processing objects as they come in, not buffer everything).
Unfortunately I can't help you with the SQL part.
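To make the streaming point concrete, here is a minimal sketch of what such a pipeline consumer could look like: a function whose begin/process/end blocks receive objects one at a time and flush them in batches of 1000, so memory never holds more than one batch. The Write-Batch name is made up and the flush body is just a placeholder; the actual insert would be whatever SQL mechanism you choose (e.g. the Out-DataTable / Write-DataTable pair from the other answer):
function Write-Batch {
    [CmdletBinding()]
    param(
        [Parameter(ValueFromPipeline)] $InputObject,
        [int] $BatchSize = 1000
    )
    begin   { $buffer = New-Object System.Collections.Generic.List[object] }
    process {
        $buffer.Add($InputObject)
        if ($buffer.Count -ge $BatchSize) {
            # Flush one batch to SQL here, e.g. $buffer | Out-DataTable | ...
            $buffer.Clear()
        }
    }
    end {
        if ($buffer.Count -gt 0) {
            # Flush whatever is left over.
            $buffer.Clear()
        }
    }
}
Get-ADGroup -Filter {(name -like "a*") -or (name -like "b*")} -SearchBase 'dc=company,dc=com' |
    select $attributes | Write-Batch -BatchSize 1000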
First off, a newbie question. Sorry in advance ;)
I'm trying to create a two-step script in PowerShell to change all SQL Server logins on different servers.
My goal is to be able to run it in a few seconds and not have to access Management Studio for these operations. This has to be done on about 1000 computers running different versions of SQL Express.
1st step: Change the SA password.
I've tested this one and it's OK. I'm using a txt file with the server name list and two strings for the password and username. Works perfectly.
2nd step: Disable all SQL logins (including Windows auth ones) that are not in a text file with the list of logins to keep.
Here's where I'm stuck.
I can use a list of users and disable them, but I would like to do the opposite: I want to disable login names that are not on the list but are enabled on the SQL server.
I need it to work this way because I have no way of knowing the hostnames and usernames without accessing our clients' computers/servers, and I need it to be done really quickly, without manual procedures.
This is what I've done to be able to disable the ones on the list. It works fine but... I need the opposite.
Thank you very much and sorry if it's a stupid question.
Regards,
Luís
#DISABLE LOGINS ON THE LIST
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null
$Error.Clear()
cls
$servers = Get-Content C:\example\servers.txt
$logins = Get-Content C:\example\users.txt
foreach($server in $servers)
{
$srv = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $server
foreach($login in $logins)
{
if ($srv.Logins.Contains($login))
{
$srv.Logins[$login].disable();
}
}
}
To disable the logins that are not on the list, invert the test:
$srv.Logins |
Where-Object Name -notin $logins |
ForEach-Object Disable
Note that the lookup isn't efficient, because a linear search in array $logins is performed for each login, but that probably won't matter in practice.
The Where-Object and ForEach-Object calls use simplified syntax.
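If the list were large enough for that to matter, a case-insensitive HashSet would make each membership test O(1). A sketch, assuming PowerShell 5+ for the ::new() syntax:
# Build the set once; Contains() is then a constant-time, case-insensitive test.
$keep = [System.Collections.Generic.HashSet[string]]::new(
    [string[]] $logins,
    [System.StringComparer]::OrdinalIgnoreCase)
$srv.Logins |
    Where-Object { -not $keep.Contains($_.Name) } |
    ForEach-Object Disable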
I am working on a script that should display the current database a program is connected to, and also give an option to replace it with a new database (either by manually entering the database name or, preferably, by listing all databases on the local server\instance and giving users a "number selection" to pick the database they want to use).
The connection string is saved in a text file called server.exe.config; below is an example of what the file contains (it's not the only data in the config file, obviously):
<add key="persistency.connection" value="data source=MyDatabaseServer;initial catalog=MyDatabase1;Integrated Security=True" />
I can use Get-Content to see the entire file and also use Where-Object {$_ -like "*initial catalog=*"} to see the only line which has the database configuration.
But I think this will make it difficult for users to interpret which database is being used, so if possible I need a command that displays just the database name from the config file instead of the entire line, and saves that name so it can be replaced later when the user selects a new database to write into the config file.
Possible?
While plain-text processing, as shown in your own answer, works in simple cases, for robustness it is usually preferable to use XML parsing, such as via the Select-Xml cmdlet:
$file = './server.exe.config'
# Extract the XML element of interest, using an XPath query
$xpathQuery = '/configuration/appSettings/add[@key = "persistency.connection"]'
$el = (Select-Xml $xpathQuery $file).Node
# Replace the database name in the element's value...
$newDatabase = 'NewDatabase'
$el.value = $el.value -replace '(?<=\binitial catalog=)[^;]+', $newDatabase
# ... and save the modified document back to the file.
# IMPORTANT: Be sure to use a *full* path to the output file,
# because PowerShell's current directory (location)
# usually differs from .NET's; Convert-Path ensures that.
$el.OwnerDocument.Save((Convert-Path $file))
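If you only want to display the current database name (what you'd show your users before prompting for a replacement), the same element can be read before performing the replacement above; a small sketch using the $el from the code above:
if ($el.value -match '\binitial catalog=([^;]+)') {
    $currentDatabase = $Matches[1]
    Write-Host "Current database: $currentDatabase"
}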
Note that this technique works in principle for any XML file, not just .exe.config files.
I think this is simpler and works for me
$ConnString = Get-Content -Path $filepath\server.exe.Config | Where-Object {$_ -like "*initial catalog=*"}
$ConnString -split '\s*;\s*' | ForEach-Object -Process {
if ($_ -imatch '^initial catalog=(?<dbname>.+)$') {
$ConnStringdb = $Matches.dbname
}
}
I am then using the following simply to replace the text above back into the config file after a database name is entered manually. I would love to get it to work automatically, maybe by building an array or something (see the sketch after the code below):
Write-Host "Please Enter the database name you want to use for Production(Don't Leave Blank): " -NoNewline; $NewDatabaseConfig = Read-Host;
(Get-Content -Path $filepath\server.exe.Config -Raw) -Replace $ConnStringdb,$NewDatabaseConfig | Set-Content -Path $IPSServerProd\UBERbaseOfficeServer.exe.Config
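As for the "number selection" part of the question, here is a hedged sketch that queries the local instance for its databases and lets the user pick one by number. The instance name .\SQLEXPRESS and the decision to skip the four system databases are assumptions:
$conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=.\SQLEXPRESS;Integrated Security=True")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT name FROM sys.databases WHERE database_id > 4 ORDER BY name"
$reader = $cmd.ExecuteReader()
$databases = while ($reader.Read()) { $reader.GetString(0) }
$conn.Close()
# Present a numbered menu and capture the choice.
for ($i = 0; $i -lt $databases.Count; $i++) {
    Write-Host ("{0}: {1}" -f ($i + 1), $databases[$i])
}
$choice = [int](Read-Host "Select a database by number") - 1
$NewDatabaseConfig = $databases[$choice]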
I've got a problem searching for an INDEX in an array built from the query sessions command on a terminal server.
This is the problematic script:
# Array of logged users in terminal servers
$a=Get-RDUserSession -CollectionName "BLABLA" -ConnectionBroker BLABLA.BLA.BL
# Array of all users with two columns from active directory
$b=Get-ADUser -filter * -properties TelephoneNumber,SamAccountName
Now imagine logging in to the terminal server using the account name TEST instead of test.
If I do:
$c = $b[$b.SamAccountName.indexof("test")].TelephoneNumber
then I don't get the telephone number.
I think that's because of the case sensitivity, isn't it? If I type TEST in the search command, I get the correct number.
Is there any simple way to solve this problem and make the search of the index case-insensitive?
I've read about using the method [StringComparison]"CurrentCultureIgnoreCase", but it doesn't seem to work with arrays.
Thanks.
Since $b is of type Object[], you would probably want to use Where-Object.
$b | Where-Object -FilterScript {$_.Samaccountname -like '*Smith*'} | Select-Object -ExpandProperty 'telephoneNumber'
That being said, an array in Powershell can be indexed case-insensitively if it is converted to a [Collections.Generic.List[Object]] type.
$b = [Collections.Generic.List[Object]]$b
$b.FindIndex( {$args[0].sAMAccountName -eq 'test'} )
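Worth noting: PowerShell's -eq is case-insensitive by default, so the predicate above already matches regardless of how the account name is cased, and FindIndex returns -1 when nothing matches. A brief usage sketch:
$index = $b.FindIndex( {$args[0].sAMAccountName -eq 'TEST'} )
if ($index -ge 0) {
    $c = $b[$index].TelephoneNumber
}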
Note that pulling every single user object in AD and filtering using Where-Object or index matching can be very slow. You can instead call Get-ADUser as needed, or pull all AD users using a filter that returns only the users present in $a.
If you insist on having all AD users in one spot with one pull, consider looping over the list once to make a hash lookup so you can easily index into it by account name.
#Create account lookup hash
$accountlookup = @{}
foreach ($element in $accounts) {
$accountlookup[$element.SamAccountName] = $element
}
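One nice side effect, and worth noting for the original TEST vs. test problem: PowerShell hashtable keys are case-insensitive by default, so either casing finds the entry:
# Both of these return the same record's number.
$accountlookup['test'].TelephoneNumber
$accountlookup['TEST'].TelephoneNumber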
Hope that helps!
With a PowerShell script I'm importing a CSV file into an array, and everything works fine.
$csv = Import-Csv "C:\test.csv" -delimiter ";"
But I'm not able to easily find a value in a field named PeopleID directly.
The best method I've found is to loop through every line of the array and check whether the item I'm looking for exists, like:
foreach($item in $csv)
{
if ($csv | where {$item.PeopleID -eq 100263} | select *) {
#Good I found it!!!
}
$List.add($item.PeopleID) > $null
}
Instead, I decided to import only my PeopleID column directly into an array to make it faster:
$csvPeople = Import-Csv "C:\test.csv" -delimiter ";" | select PeopleID
As you can see above, I also create an array $List that adds every PeopleID in my loop.
So I have 2 arrays that are identical.
The problem is when I use the -contains operator:
if($csvPeople -contains 100263)
the result is false
if($List -contains 100263)
the result is true
What can I do to have my $csvPeople array work with -contains?
Importing a CSV column is faster than looping through the result and adding each item to a new array, but only that second array works.
Am I missing something when I import my CSV column that would give me a "working" array?
Thanks
I think you are looking for -like, not -contains. -contains is a bit finicky about how it is used. If you replace your if syntax with this:
if($csvPeople -like "*100263*")
You should be good to go. Note the wildcards on either side; I'm putting these here for you since I don't know exactly what your data looks like. You might be able to either remove them or change them.
Obligatory -like vs -contains article if you are interested: http://windowsitpro.com/blog/powershell-contains
Also, @AnsgarWiechers' comment above will work. I believe you will still need to wrap your number in quotes, though. I don't like to do this as it requires an exact match. If you are working with the CSV in Excel or elsewhere and you have whitespace or oddball line-ending characters, then you might not get a hit with -contains.
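For completeness, here is a sketch of that comment's approach: expanding the property during import gives you a plain array of values (strings, since Import-Csv reads everything as text), which -contains can match directly:
# -ExpandProperty yields an array of raw values instead of objects.
$csvPeople = Import-Csv "C:\test.csv" -Delimiter ";" | Select-Object -ExpandProperty PeopleID
if ($csvPeople -contains '100263') {
    # Good, found it
}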
I just noticed this in your script above where you are doubling your efforts:
foreach($item in $csv) # <-- Here $item is a record in the CSV
{
if ($csv | where {$item.PeopleID -eq 100263} | select *) { # <-- Here you are re-parsing the CSV even though you already have the data in $item
}
$List.add($item.PeopleID) > $null
}
Since I don't have the full picture of what you are trying to do I'm providing a solution that allows you to match and take action per record.
foreach($item in $csv)
{
if ($item.PeopleID -eq 100263) {
# Do something with $item here now that you matched it
}
}
To address your 191 users, I would take a different approach. You should be able to load a raw list of PeopleIDs into a variable and then use the -in feature of Where-Object to do a list-to-list comparison.
First, load your target PeopleIDs into a list. Assuming it is just a flat list, not a CSV, you could do your comparison like this:
$PeopleID_List = Get-Content YourList.txt
$csv | Where-Object { $_.PeopleID -in $PeopleID_List } | ForEach-Object {
    # Do something with the matched record here. Reference it with $_
}
This basically says: for each record in the CSV, check whether that record's PeopleID is in $PeopleID_List. If it is, take action on that record.
PS noob here (as will be obvious shortly) but trying hard to get better. In my Exchange 2010 environment I import and export huge numbers of .pst files. Many will randomly fail to queue up, and once they're not in the queue it's very tedious to sort through the source files to determine which ones need to be run again, so I'm trying to write a script to do it.
First I run a dir on the list of .pst files and fill a variable with the associated aliases of the accounts:
$vInputlist = dir $vPath -Filter *.pst |%{ get-mailbox -Identity $_.basename| select alias}
Then I fill a variable with the aliases of all the files/accounts that successfully queued:
$vBatch = foreach ($a in (Get-MailboxImportRequest -BatchName $vBatchname)) {get-mailbox $a.mailbox | select alias}
Then I compare the two arrays to see which files I need to queue up again:
foreach($should in $vInputlist){if ($vBatch -notcontains $should){Write-Host $should ""}}
It seems simple enough, yet the values in the arrays never match, or never fail to match, as the case may be. I've tried both -contains and -notcontains. I have put in a few sanity checks along the way, like exporting the variables to the screen and/or to CSV files, and the data looks fine.
For instance, when $vInputlist is first filled I send it to the screen and it looks like this:
Alias
MapiEnableTester1.psiloveyou.com
MapiEnableTester2.psiloveyou.com
MapiEnableTester3.psiloveyou.com
MapiEnableTester4.psiloveyou.com
Yet that last line of code I displayed above (...Write-Host $should "") will output this:
#{Alias=MapiEnableTester1.psiloveyou.com}
#{Alias=MapiEnableTester2.psiloveyou.com}
#{Alias=MapiEnableTester3.psiloveyou.com}
#{Alias=MapiEnableTester4.psiloveyou.com}
(those all display as a column, not sure why they won't show that way here)
I've tried declaring the arrays like this: $vInputlist = @()
I've tried, instead of searching for the alias, just cleaning .pst off of $_.basename using .replace.
I've searched on comparing arrays till I'm blue in the fingers, and I don't think my comparison is wrong. I believe that somehow, no matter how I fill these variables, I am corrupting or changing the data so that seemingly matching data simply doesn't match.
Any help would be greatly appreciated. TIA
Using -contains to compare objects isn't easy, because the objects are never identical even though they have the same property with the same value. When you use select alias you get an array of PSCustomObjects with the property Alias.
Try using the -expand parameter in select, like
select -expand alias
Using -expand will extract the value of the alias property, and your lists will be two arrays of strings instead, which can be compared using -contains and -notcontains.
UPDATE: I've added a sample to show you what happens with your code.
#I'm creating objects that are EQUAL to the ones you have in your code
#This will simulate the objects that get through the "$vbatch -notcontains $should" test
PS > $arr = @()
PS > $arr += New-Object psobject -Property @{ Alias="MapiEnableTester1.psiloveyou.com" }
PS > $arr += New-Object psobject -Property @{ Alias="MapiEnableTester2.psiloveyou.com" }
PS > $arr += New-Object psobject -Property @{ Alias="MapiEnableTester3.psiloveyou.com" }
PS > $arr | ForEach-Object { Write-Host $_ }
#{Alias=MapiEnableTester1.psiloveyou.com}
#{Alias=MapiEnableTester2.psiloveyou.com}
#{Alias=MapiEnableTester3.psiloveyou.com}
#Now this is what you will get if you use "... | select -expand alias" instead of "... | select alias"
PS > $arrWithExpand = $arr | select -expand alias
PS > $arrWithExpand | ForEach-Object { Write-Host $_ }
MapiEnableTester1.psiloveyou.com
MapiEnableTester2.psiloveyou.com
MapiEnableTester3.psiloveyou.com
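And to close the loop, the expanded array now behaves the way your comparison expects:
PS > $arrWithExpand -contains 'MapiEnableTester2.psiloveyou.com'
True
PS > $arrWithExpand -notcontains 'MapiEnableTester4.psiloveyou.com'
True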