When I manually apply a SQL file, like a re-indexing job, everything runs perfectly. I want to automate applying SQL files in PowerShell, but I'm getting all kinds of incorrect syntax errors when using Get-Content. Is there a better way to get a SQL file's contents and then apply them on a remote server, one that doesn't re-format the code to the point where I get incorrect syntax errors?
Generic error I get (any syntax might throw an error; PowerShell seems to be mis-applying the syntax when getting the content of the file):
Incorrect syntax near [anything]
Note: all GOs have been removed, so this isn't related to those; it may throw an error on a BEGIN, a GOTO, etc. The underlying reason is that the layout of the SQL file isn't being retained.
Update
The SQL files can be anything from adding permissions to creating an index to building a table to creating a stored procedure. I have about 100 SQL files. If I manually execute them, they all work, so this isn't related to bad SQL syntax; it's related to how PowerShell is reading the SQL and running it as a command.
Answer
The below appears to work:
Get-Content $file -Raw
but I'm getting all kinds of incorrect syntax errors when using Get-Content
Get-Content will return an array of strings; what you need is a single string. If you have at least PowerShell 3.0, the -Raw switch is what you want.
Get-Content $file -Raw
On earlier versions you can use
Get-Content $file | Out-String
For exceptionally large files you can use a .NET StreamReader, but I don't think that will be required here.
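A minimal sketch of that approach, in case you ever need it (assumes $file holds a full path to the script):
# Read the whole file as a single string via a .NET StreamReader
$reader = New-Object System.IO.StreamReader $file
try     { $sql = $reader.ReadToEnd() }
finally { $reader.Dispose() }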
How are you "applying a SQL file"? If you're passing the contents of the file into Invoke-SQLCmd, you'll need to use -Raw, as @Matt said.
However, you don't even really need to do that. Invoke-SQLCmd lets you pass a filename directly into it. Use the -InputFile parameter.
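For example, a sketch of looping over a folder of scripts and letting Invoke-SQLCmd read each file itself; the server, database, and folder names here are placeholders, and the cmdlet requires the SqlServer (or older SQLPS) module:
# Apply every .sql file in the folder against the remote server
Get-ChildItem 'C:\SqlScripts\*.sql' | ForEach-Object {
    Invoke-Sqlcmd -ServerInstance 'RemoteServer' -Database 'MyDatabase' -InputFile $_.FullName
}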
Related
I am trying to connect to an SFTP server with the following commands to move all .csv files from one location to another, and I'm getting the "Too many parameters for command 'open'." error.
option echo off
option batch on
option confirm off
open sftp://XXX#XXX.com/ —hostkey="ssh-rsa 2048 XX:XX:XX:XX:XX:XX:XX" —rawsettings ProxyMethod=3 ProxyHost=proxy.uk.XXX.com
cd /XX/XX/XX/IN/LOAD
lcd \\XX.local\EMEA\XX\XX\Import_Location
put *.csv -nopreservetime=on -nopermissions=on
exit
I added the —hostkey parameter because of the "The server's host key was not found in the cache" error. The batch file was working fine before that, but I want to correct the host key error.
I checked all the dashes and the quotes; the only thing I'm confused about is whether the hostkey parameter is correct. The information online on WinSCP and some forums says you have to use only the SHA-256 fingerprint of the host key, which is a different format to the MD5 detail XX:XX:XX:XX.... Can you help me work out which one it is?
—hostkey="ssh-rsa 2048 XX:XX:XX:XX:XX:XX:XX"
OR
—hostkey="ssh-rsa 2056 AbC50IDzyx.....="
This is a similar query to mine, but I cannot see what difference makes theirs work and mine not. Thank you.
The symbol you have at the beginning of —hostkey and —rawsettings is not a simple hyphen-minus (-), but an em-dash (—).
Please use a hyphen-minus (-), which is the dash you find on standard English (and other) keyboards.
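For example, the open line from your script with only the dashes corrected (all the placeholder values left unchanged):
open sftp://XXX#XXX.com/ -hostkey="ssh-rsa 2048 XX:XX:XX:XX:XX:XX:XX" -rawsettings ProxyMethod=3 ProxyHost=proxy.uk.XXX.com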
Or even easier, have WinSCP GUI generate a script template for you.
So you actually have the very same problem as in the WinSCP forum post you referred to.
Other questions with the same error message, but different problem:
WinSCP command line - Too many parameters for command 'open' when using -rawtransfersettings switch
Getting "Too many parameters for command", when calling WinSCP command-line from VBA
FTP "Too many parameters for command 'synchronize'" with WinSCP
Obtaining the correct hostkey fingerprint:
https://winscp.net/eng/docs/faq_hostkey
Is there any way to omit the byte-order mark when redirecting the output stream to a file? For example, if I want to take the contents of an XML file and replace a string with a new value, I need to create a new encoding and write the new output to a file, like the following, which is rather ham-handed:
$newContent = ( Get-Content .\settings.xml ) -replace 'expression', 'newvalue'
$UTF8NoBom = New-Object System.Text.UTF8Encoding( $false )
[System.IO.File]::WriteAllText( '.\settings.xml', $newContent, $UTF8NoBom )
I have also tried using Out-File, but specifying UTF8 as the encoding still produces a BOM:
( Get-Content .\settings.xml ) -replace 'expression', 'newvalue' | Out-File -Encoding 'UTF8' .\settings.xml
What I want to be able to do is simply redirect to a file without a BOM:
( Get-Content .\settings.xml ) -replace 'expression', 'newvalue' > settings.xml
The problem is that the BOM added to the output file routinely causes issues when other applications read these files (most notably, many applications that read XML blow up if the file begins with a BOM, and Chef Client also doesn't like a BOM in a JSON attributes file). Short of writing a function like Write-FileWithoutBom that accepts pipeline input and an output path, is there any way I can simply "turn off" writing a BOM when redirecting output to a file?
The solution doesn't necessarily have to use the redirection operator. If there is a built-in cmdlet I can use to output to a file without a BOM, that would be acceptable as well.
In Windows PowerShell as of v5.1 (the latest and last version), there is NO (direct) built-in way to get UTF-8 encoding without a BOM.
In v5.1+ you can change the default encoding for > / >> as follows, but if you choose utf8, you still get a BOM:
$PSDefaultParameterValues['Out-File:Encoding'] = 'utf8'
To avoid a BOM, either direct use of .NET APIs or a workaround via New-Item is required - see this answer.
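A rough sketch of the New-Item workaround mentioned above, using the same settings.xml example; New-Item -Value reportedly writes BOM-less UTF-8 in Windows PowerShell:
$newContent = (Get-Content .\settings.xml) -replace 'expression', 'newvalue' | Out-String
# Recreate the file without a BOM (note: Out-String adds a trailing newline)
$null = New-Item -Force -ItemType File -Path .\settings.xml -Value $newContent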
Unfortunately, it is unlikely that Windows PowerShell will ever support creation of BOM-less UTF-8 files.[1]
PowerShell Core (v6+), by contrast, uses BOM-less UTF-8 by default (both for Out-File / > and Set-Content) and offers you a choice of BOM or no-BOM via -Encoding specifiers utf8 and utf8BOM.
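For instance, in PowerShell Core (the file name is just a placeholder):
'hello' | Out-File out.txt                     # BOM-less UTF-8 by default
'hello' | Out-File -Encoding utf8BOM out.txt   # explicitly request a BOM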
[1] From a Microsoft blog post, emphasis added: "Windows PowerShell 5.1, much like .NET Framework 4.x, will continue to be a built-in, supported component of Windows 10 and Windows Server 2016. However, it will likely not receive major feature updates or lower-priority bug fixes." and, in a comment, "The goal with PowerShell Core 6.0 and all the compatibility shims is to supplant the need for Windows PowerShell 6.0 while converging the ecosystem on PowerShell Core. So no, we currently don’t have any plans to do a Windows PowerShell 6.0."
I have multiple files in a folder that have underscores between characters, and I want to change them to dashes.
e.g. F01B_B1_DD.DXF
replace with F01B-B1-DD.DXF
Thank you,
Jeff
PowerShell:
Get-ChildItem *_*.dxf | ForEach-Object {
Rename-Item $_.Name ($_.Name -replace '_','-')
}
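If you want to preview the changes first, the same loop can be run with -WhatIf before doing it for real:
Get-ChildItem *_*.dxf | ForEach-Object {
    Rename-Item $_.Name ($_.Name -replace '_','-') -WhatIf   # shows what would be renamed
}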
It depends on what command-line tools you have installed. By default, it's not that easy in a batch file, as you don't have access to great tools like Linux/Unix sed. I haven't used PowerShell, so I don't know if anything is available there, although it is possible, even if not intuitive.
You're much better off going with an application. Several tools exist to do this sort of thing. I would recommend one of these:
I have personally used and can vouch for this app:
http://www.bulkrenameutility.co.uk/Screenshots.php
I haven't used this one:
https://www.advancedrenamer.com/
edit: removed code sample as it didn't do as advertised :)
Microsoft Remote Desktop saved sessions contain settings you can see when you open them with a text editor (to test this yourself, open Remote Desktop Connection, click Options, and then click Save As; open the resulting .rdp file in a text editor).
However, using the standard Select-String command here (which works with exactly the same syntax on other file formats):
$MyOObject."Prompt" = (Select-String -Path $Path -Pattern "promptcredentialonce: (.*)").Matches.Groups[1].Value
... produces the following error:
Cannot index a null array
Is there a different command to use to parse this kind of file, or any non standard text file, in PowerShell 2.0?
Your pattern is incorrect. The syntax of the options in .rdp files is
name:type:value
In your case:
promptcredentialonce:i:0
However, you're trying to match something with a space after the option name (which doesn't exist):
promptcredentialonce: (.*)
Without a match, the .Matches property is empty and .Groups[1] attempts an indexed access on a null value.
If you want the value including the type, remove the space:
promptcredentialonce:(.*)
If you want just the value, change the pattern to something like this:
promptcredentialonce:\w+:(.*)
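Put together, a sketch with the corrected pattern and a guard against the no-match case (same $Path and $MyOObject as in the question):
$match = Select-String -Path $Path -Pattern 'promptcredentialonce:\w+:(.*)'
if ($match) {
    # First match, second capture group holds the value, e.g. '0'
    $MyOObject.Prompt = $match.Matches[0].Groups[1].Value
}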
My current scenario is like this:
I need to log in to SQL*Plus from a shell script to call a stored procedure.
After that I need to create a CSV file by SPOOLING data from a table.
Then I need to check whether the CSV file has been created in a particular directory, and depending on the result, an update query needs to be run.
I know that this can be checked within SQL*Plus with the help of the UTL_FILE package, but unfortunately, due to client policies, access to this package is restricted in the current system.
Another way is to exit from SQL*Plus, perform the file check in UNIX, and then log in to SQL*Plus again to perform the remaining actions. But I believe this would result in slower execution, and performance is an important factor in this implementation, as the tables contain huge volumes of data (millions of rows).
So is there any other way to check this from SQL*Plus without exiting from the current session?
System Info:
OS - Red Hat Enterprise Linux
Database - Oracle 11g
If the file is on the same machine that you're running SQL*Plus on, you could potentially use the host command.
If the file you're checking is the same one you're spooling to, it must exist anyway, or you would have got an SP error of some kind; but if you do want to check the same file for some reason, and assuming you have a substitution variable with the file name:
define csv_file=/path/to/spool.csv
-- call procedure
spool &csv_file
-- do query
spool off
host ls &csv_file
update your_table
set foo=bar
where &_rc = 0;
If the file exists when the host command is run, the _rc substitution variable will be set to zero. If the file doesn't exist or isn't readable for any reason it will be something else - e.g. 2 if the file just doesn't exist. Adding the check &_rc = 0 to your update will mean no rows are updated if there was an error. (You can of course still have whatever other conditions you need for the update).
You could suppress the display of the file name by adding 1>/dev/null to the host command string; you could also suppress any error messages by adding 2>/dev/null, though you might want to see those.
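For example, a quieter version of the check could look like this (same &csv_file substitution variable as above):
host ls &csv_file 1>/dev/null 2>/dev/null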
The documentation warns against using &_rc as it isn't portable; but it works on RHEL so as long as you don't need your script to be portable to other operating systems this may be good enough for you. What you can't do, though, is do anything with the contents of the file, or interpret anything about it. All you have available is the return code from the command you run. If you need anything more sophisticated you could call a script that generates specific return codes, but that's getting a bit messy.