I'm trying to use Microsoft's Log Parser to read multiple sets of IIS log files. My query works fine; however, to get it to work I have to list the directory that the files sit directly under.
I want to be able to do a recursive search under a high-level directory. I have found how to do this through the DLLs, but I can't find how to do it from the command prompt.
There has to be a simple solution to this, and I'm just missing it.
Add the -recurse:-1 option to the command line. Check the available command-line options for your input format with:
C:\>logparser -h -i:IIS
Example output:
Input format: IIS (Microsoft IIS Log Format)
Parses Microsoft IIS log files
FROM syntax:
<filename> | <SiteID> [, <filename> | <SiteID> ... ]
<SiteID> = '<' SiteID '>'
SiteID can be a SiteID number, a fully qualified ADSI Path (e.g.
"//GABRIEGI1/W3SVC/1"), or a Site name (e.g. "My External Site"), eventually
containing wildcards
Parameters:
-locale <locale name> : 3-letter ID of the log file locale
[default value=DEF]
-returnExtraFields ON|OFF : Return additional fields in
Parameters field [default value=OFF]
-iCodepage <codepage ID> : Input codepage (-2=guess from
filename and/or LogInUTF8 property)
[default value=guess from filename
and/or LogInUTF8 property]
-recurse <level> : Max subdirectory recursion level
(0=no recurse, -1=all levels)
[default value=0]
-minDateMod <date> : Minimum file last modified date
[default value=not specified]
-iCheckpoint <checkpoint file> : Save checkpoint information to this
file [default value=no checkpoint]
Fields:
LogFilename (S) LogRow (I) UserIP (S) UserName (S)
Date (T) Time (T) ServiceInstance (S) HostName (S)
ServerIP (S) TimeTaken (I) BytesSent (I) BytesReceived (I)
StatusCode (I) Win32StatusCode (I) RequestType (S) Target (S)
Parameters (S)
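For example, a hypothetical invocation that reads every *.log file under C:\Logs and all of its subdirectories (the path and the query are placeholders, using fields from the list above):
logparser -i:IIS -recurse:-1 "SELECT Date, Time, UserIP, Target, StatusCode FROM C:\Logs\*.log WHERE StatusCode >= 400"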
I couldn't run -recurse when the input format was set to W3C (-i:W3C).
For this I simply added the following in PowerShell when specifying the file/folder path, e.g.:
$httpLogPath = Get-ChildItem Y:\Data\folder* -Include *.log -Recurse
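A hypothetical follow-up showing how the resulting file list could be handed to Log Parser (the $fileList variable is my own; logparser.exe is assumed to be on the PATH, and Log Parser's FROM clause accepts a comma-separated list of file names):
$fileList = ($httpLogPath | ForEach-Object { $_.FullName }) -join ','
& logparser -i:W3C "SELECT COUNT(*) AS Requests FROM $fileList"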
I am having a bad time with the standalone version of 7-Zip. I want to compress a folder with a password. I tried: 7za a my_folder -pmy_password. It compresses all the files in the directory where 7za.exe is located into an archive named my_folder.
BTW: I am using a scripting language called AutoIt.
As per Command Line Syntax (7zip.chm) > Contents > Command Line Version > Syntax:
7za <command> [<switch>...] <base_archive_name> [<arguments>...]
<arguments> ::= <switch> | <wildcard> | <filename> | <list_file>
<switch> ::= <switch_symbol><switch_characters>[<option>]
<switch_symbol> ::= '/' | '-'
<list_file> ::= #{filename}
a is the command.
-pmy_password is a switch and should therefore come after the command, not at the end. Switches can also be appended after the base archive name, although that makes the command harder to read and parse.
my_folder should be an argument, but it is interpreted as the base archive file name because you have not specified any archive file name.
So try:
7za.exe a -r -pmy_password MyArchive.zip my_folder
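If you also want the file names inside the archive to be hidden, header encryption is available via the -mhe switch, but only for the 7z format (not zip); a variant of the same command:
7za.exe a -r -pmy_password -mhe=on MyArchive.7z my_folder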
I am trying to create a JDBC feeder to load data from SQL Server into elasticsearch. I am using the guide here: https://github.com/jprante/elasticsearch-river-jdbc (search for the heading 'How to run a standalone JDBC feeder').
I have successfully downloaded and installed elasticsearch and have it up and running. I have downloaded the JDBC driver for SQL server and moved it into the ./plugins/jdbc folder.
I am up to the part that involves creating a bash script. Before today, I have never even looked at a bash script and I'm having trouble getting it to work since I don't yet know half the syntax.
The elasticsearch directory is c:\elasticsearch-1.4.0
and here is my bash script:
#!/bin/sh
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# ES_HOME required to detect elasticsearch jars
export ES_HOME= C:\elasticsearch-1.4.0
echo '
{
"elasticsearch" : {
"cluster" : "elasticsearch",
"host" : "localhost",
"port" : 9200
},
"type" : "jdbc",
"jdbc" : {
"url" : "jdbc:sqlserver://localhost;databaseName=MyDatabase",
"user" : "MyUser",
"password" : "MyPassword",
"sql" : "select * From MyTable",
"treat_binary_as_string" : true,
"index" : "MyFirstESIndex"
}
}
' | java \
-cp "${DIR}/*" \
org.xbib.elasticsearch.plugin.jdbc.feeder.Runner \
org.xbib.elasticsearch.plugin.jdbc.feeder.JDBCFeeder
What do I need to update in this script? Is it something in this line of the script:
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
The reason I am doing this is that I'm looking for the best method to insert potentially tens of millions of records into elasticsearch from SQL Server in one go, i.e. a bulk insert.
Our first iteration of this involved getting each row of data in a table, converting it to a JSON document, and inserting it into ES. This took about 10 hours to get all the data in.
Thanks in advance for any advice.
Instead of this:
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
Do this, if it's important for you to be in that working directory:
DIR="$( dirname "${BASH_SOURCE[0]}" )"
cd "$DIR"
Also, if it's not a typo, remove the space after ES_HOME=, and, when in doubt, use quotes:
export ES_HOME="C:\elasticsearch-1.4.0"
Additionally, with the Java -cp argument, if you want to include all the jar files (which I'm assuming), don't use quotes so the globbing works:
# Since you are already in the directory, you don't need DIR
JARS=$(echo ./*jar)
...
# On the cp line, substitute spaces with : to build the classpath
-cp "${JARS// /:}" \
I hope this helps. If you can give more details about how the script is failing, I could help more. Are you getting particular error messages?
I am looking for a script to rename files and directories that have special characters in them.
My files:
?rip?ev <- Directory
- Juhendid ?rip?evaks.doc <- Document
- ?rip?ev 2 <- Subdirectory
-- t?ts?.xml <- Subdirectory file
They need to be like this:
ripev <- Directory
- Juhendid ripevaks.doc <- Document
- ripev 2 <- Subdirectory
-- tts.xml <- Subdirectory file
I need to rename the files and the folders so that the file extension stays the same, for example .doc and .xml won't be lost. Last time I tried it with rename, every extension was lost and the files were moved to the parent directory, in this case the ?rip?ev directory, leaving the subdirectories empty. Everything is located under the parent directory /home/samba/.
So in this case I just need to remove the question marks from the file and directory names, without moving anything anywhere else or losing any other character or the file extension. I have been looking around Google for an answer but haven't found one. I know it can be done with find and rename, but I haven't been able to overcome the complexity of the script. Can anyone help me, please?
You can just do something like this
find -name '*\?*' -exec bash -c 'echo mv -iv "$0" "${0//\?/}"' {} \;
Note the echo before the mv so you can see what it would do before actually changing anything; remove the echo once the output looks right. The command above:
searches for ? in the name (? is the single-character equivalent of *, so it needs to be escaped)
executes a bash command, passing {} as the first argument (since there is no script name, it is $0 instead of $1)
${0//\?/} performs parameter expansion in bash, replacing all occurrences of ? with nothing.
Note also that file types do not depend on the name in Linux, and this will not change any file extension unless it contains ?.
This will also rename symlinks if they contain ? (it is not clear from the question whether that is desired).
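One caveat: if a directory name itself contains ?, it gets renamed before find descends into it, which breaks the traversal. A depth-first variant of the same command (still with the echo guard) avoids that:
find . -depth -name '*\?*' -exec bash -c 'echo mv -iv "$0" "${0//\?/}"' {} \;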
I usually do this kind of thing in Perl:
#!/usr/bin/perl
sub procdir {
    chdir $_[0] or die "Cannot chdir to $_[0]: $!";
    for (<*>) {
        my $oldname = $_;
        # strip all '?' characters; rename only if the name changed
        rename($oldname, $_) if s/\?//g;
        # recurse into (possibly renamed) subdirectories
        procdir($_) if -d;
    }
    chdir "..";
}
procdir("top_directory");
I have a SQL script designed to be executed by sqlcmd, and a Command script that executes sqlcmd with the correct parameters.
I want to convert the Command script to a PowerShell script that uses Invoke-Sqlcmd instead of sqlcmd.
The SQL script, the Command script, and the new PowerShell script all live in the directory C:\Users\iain.CORP\SqlcmdQuestion.
SQL Script
The SQL script is called ExampleQuery.sql. It selects a string literal. The value of the string literal is set by sqlcmd at runtime to the value of the ComputerName sqlcmd scripting variable. The code looks like this:
SELECT '$(ComputerName)';
Command Script
The command script is called ExecQuery.cmd. It calls sqlcmd to execute ExampleQuery.sql and sets the value of the scripting variable ComputerName to the value of the environment variable COMPUTERNAME. The code looks like this:
sqlcmd -i ExampleQuery.sql -v ComputerName = %COMPUTERNAME%
When I open a command prompt, the default working directory is C:\Users\iain.CORP. I change to the directory containing the files, and run the Command script:
cd C:\Users\iain.CORP\SqlcmdQuestion
ExecQuery.cmd
I see this output:
---------
SKYPC0083
(1 rows affected)
The script successfully selects a string literal set by sqlcmd.
PowerShell Script
The PowerShell script is called ExecQuery.ps1. It is supposed to do the same as the command script, using Invoke-Sqlcmd instead of sqlcmd. The code looks like this:
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
Invoke-Sqlcmd -InputFile 'ExampleQuery.sql' -Variable "ComputerName = $Env:COMPUTERNAME"
When I open a PowerShell prompt, the default working directory is Z:\. I change to the directory containing the files, and run the PowerShell script:
cd C:\Users\iain.CORP\SqlcmdQuestion
.\ExecQuery.ps1
I see this output:
Invoke-Sqlcmd : Could not find file 'Z:\ExampleQuery.sql'.
At C:\Users\iain.CORP\SqlcmdQuestion\ExecQuery.ps1:4 char:14
+ Invoke-Sqlcmd <<<< -InputFile 'ExampleQuery.sql' -Variable "ComputerName = $Env:COMPUTERNAME"
+ CategoryInfo : InvalidResult: (:) [Invoke-Sqlcmd], FileNotFoundException
+ FullyQualifiedErrorId : ExecutionFailed,Microsoft.SqlServer.Management.PowerShell.GetScriptCommand
The PowerShell script raises an error because Invoke-Sqlcmd can't find the input file in the Z:\ directory, which happens to be the default working directory.
The Command script found the script in the current working directory.
How do I make Invoke-Sqlcmd use the current working directory instead of the default working directory?
For this answer, assume that the directory C:\Users\iain.CORP\SqlcmdQuestion exists and that executing dir at that location produces the following output, as implied by the question:
Directory: C:\Users\iain.Corp\SqlcmdQuestion
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 26/09/2012 15:30 27 ExampleQuery.sql
-a--- 26/09/2012 15:30 61 ExecQuery.cmd
-a--- 26/09/2012 15:34 172 ExecQuery.ps1
PowerShell ignores the working directory by design
My question has a false premise:
How do I make Invoke-Sqlcmd use the current working directory instead of the default working directory?
The cmdlet does use the current working directory. The problem is that I didn't change the working directory at all in my PowerShell session.
In PowerShell, cd is an alias for the Set-Location cmdlet. You can prove this using the Get-Alias cmdlet:
Get-Alias cd
Output:
CommandType Name Definition
----------- ---- ----------
Alias cd Set-Location
Alex Angelopoulos explains:
[A]lthough PowerShell's location is analogous to the working directory, the location is not the same thing as the working directory. In fact, PowerShell doesn't touch the working directory.
Set-Location does not set the working directory. It sets the working location, which is a similar but distinct concept in PowerShell.
You can prove this by inspecting the working directory using the .NET property Environment.CurrentDirectory after setting the working location using cd as in the question:
cd C:\Users\iain.CORP\SqlcmdQuestion
[Environment]::CurrentDirectory
Output:
Z:\
I would guess this design decision was made to be consistent. The working directory would be undefined when, for example, the working location were set to a registry hive.
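For example, a quick sketch you can run at the prompt; after changing location into a registry drive, the working directory stays wherever it was:
cd HKLM:\SOFTWARE
Get-Location                      # HKLM:\SOFTWARE
[Environment]::CurrentDirectory   # still Z:\ (or wherever it was before)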
Invoke-Sqlcmd violates this design principle
Invoke-Sqlcmd violates PowerShell's general design principle to use the working location rather than the working directory. Most cmdlets use the working location to resolve relative paths, but Invoke-Sqlcmd is an exception.
Using the ILSpy disassembler and a little intuition to inspect the containing assembly Microsoft.SqlServer.Management.PSSnapins, I believe I have found the reason for the error.
I believe that the cmdlet's parameter -InputFile is implemented by the method IncludeFileName. ILSpy's disassembly of the method looks like this:
// Microsoft.SqlServer.Management.PowerShell.ExecutionProcessor
public ParserAction IncludeFileName(string fileName, ref IBatchSource pIBatchSource)
{
    if (!File.Exists(fileName))
    {
        ExecutionProcessor.sqlCmdCmdLet.TerminateCmdLet(new FileNotFoundException(PowerShellStrings.CannotFindPath(fileName), fileName), "ExecutionFailureException", ErrorCategory.ParserError);
        return ParserAction.Abort;
    }
    BatchSourceFile batchSourceFile = new BatchSourceFile(fileName);
    pIBatchSource = batchSourceFile;
    return ParserAction.Continue;
}
Invoke-Sqlcmd uses the .NET method File.Exists to check whether the specified input file exists. The method's documentation remarks that relative paths are resolved using the working directory:
The path parameter is permitted to specify relative or absolute path
information. Relative path information is interpreted as relative to
the current working directory. To obtain the current working
directory, see GetCurrentDirectory.
This suggests that File.Exists would return false in this case, which would cause the error message seen in the question. You can prove this by executing the method directly from the prompt:
cd C:\Users\iain.CORP\SqlcmdQuestion
[IO.File]::Exists('ExampleQuery.sql')
Output:
False
The method returns false, so the cmdlet terminates with a 'file not found' error.
You can work around the unusual behavior
There are two workarounds for Invoke-Sqlcmd using the working directory instead of the working location to resolve relative paths:
Always use an absolute path as the value of the -InputFile parameter. CandiedCode's answer shows how to do this.
Set the working directory and use a relative path.
I solved the problem without side-effects by modifying ExecQuery.ps1 like this:
Add-PSSnapin SqlServerCmdletSnapin100
Add-PSSnapin SqlServerProviderSnapin100
$RestoreValue = [Environment]::CurrentDirectory
[Environment]::CurrentDirectory = Get-Location
Invoke-Sqlcmd -InputFile 'ExampleQuery.sql' -Variable "ComputerName = $Env:COMPUTERNAME"
[Environment]::CurrentDirectory = $RestoreValue
I see this output:
Column1
-------
SKYPC0083
Success!
The new script sets the working directory to match the working location before executing Invoke-Sqlcmd. To avoid unintended side-effects of changing the working directory, the script restores the original working directory value before completing.
Setting the current directory is described in this Channel 9 thread. The example there uses the Directory.SetCurrentDirectory method, but I find it simpler to set the property directly.
You could fully qualify the InputFile location:
Invoke-Sqlcmd -InputFile 'C:\Users\iain.CORP\SqlcmdQuestion\ExampleQuery.sql' -Variable "ComputerName = $Env:COMPUTERNAME"
And use a variable to drive the script location:
$FileLocation = 'C:\Users\iain.CORP\SqlcmdQuestion\'
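A sketch of how the variable could then be used (same file name as in the question):
Invoke-Sqlcmd -InputFile (Join-Path $FileLocation 'ExampleQuery.sql') -Variable "ComputerName = $Env:COMPUTERNAME"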
I have a BAT file that runs a script on Oracle:
sqlplus myuser/mypassword@mydatabase @C:\runthisfile.sql
I want to distribute this to other users (that don't necessarily know how to modify a BAT file).
I want the DOS prompt to ask the user to enter their username and password (obviously I don't want to give them my connection details). I have tried all kinds of combinations, but all that happens is that I end up with SQL>......
Am stumped!
You can use the SET command with the /P argument in order to prompt the user for text during a batch file run, for example:
SET /P variable=Please enter text
This will then fill variable with whatever they type before hitting return.
@ECHO OFF
SET /P uname=Username:
SET /P pass=Password:
This is a simple program which will prompt the first for a username, then a password.
You should then be able to pass this as an argument to sqlplus:
sqlplus %uname%/%pass%@mydatabase @C:\runthisfile.sql
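Putting it together, a minimal sketch of the complete batch file (database name and script path taken from the question):
@ECHO OFF
SET /P uname=Username:
SET /P pass=Password:
sqlplus %uname%/%pass%@mydatabase @C:\runthisfile.sql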
Regarding SQL*Plus stopping and doing nothing:
Sometimes SQL*Plus finishes with ..., meaning that it is waiting for something more.
Try adding "/" (without quotes) at the end of your SQL file to execute it.
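For example, a hypothetical runthisfile.sql whose last statement is a PL/SQL block needs that trailing / on its own line so SQL*Plus executes the block:
BEGIN
  DBMS_OUTPUT.PUT_LINE('Done');
END;
/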
I hope it will help...
This is very simple code for opening SQL*Plus without entering the username and password manually.
sqlplus -L UserName/Password
For example: sqlplus -L Rak4ak@sun64/rk4
For understanding:
sqlplus [ [<option>] [{logon | /nolog}] [<start>] ]
<option> is: [-C <version>] [-L] [-M "<options>"] [-NOLOGINTIME] [-R <level>]
[-S]
-C <version> Sets the compatibility of affected commands to the
version specified by <version>. The version has
the form "x.y[.z]". For example, -C 10.2.0
-L Attempts to log on just once, instead of
reprompting on error.
-M "<options>" Sets automatic HTML markup of output. The options
have the form:
HTML [ON|OFF] [HEAD text] [BODY text] [TABLE text]
[ENTMAP {ON|OFF}] [SPOOL {ON|OFF}] [PRE[FORMAT] {ON|OFF}]
-NOLOGINTIME Don't display Last Successful Login Time.
-R <level> Sets restricted mode to disable SQL*Plus commands
that interact with the file system. The level can
be 1, 2 or 3. The most restrictive is -R 3 which
disables all user commands interacting with the
file system.
-S Sets silent mode which suppresses the display of
the SQL*Plus banner, prompts, and echoing of
commands.
<logon> is: {<username>[/<password>][@<connect_identifier>] | / }
[AS {SYSDBA | SYSOPER | SYSASM | SYSBACKUP | SYSDG | SYSKM}] [EDITION=value]
Specifies the database account username, password and connect
identifier for the database connection. Without a connect
identifier, SQL*Plus connects to the default database.
The AS SYSDBA, AS SYSOPER, AS SYSASM, AS SYSBACKUP, AS SYSDG,
and AS SYSKM options are database administration privileges.