Log files are available inside 1500 folders:
D:\temp\output\BG Output_Last_0001\log.txt
D:\temp\output\BG Output_Last_0002\log.txt
D:\temp\output\BG Output_Last_0003\log.txt
D:\temp\output\BG Output_Last_0004\log.txt
D:\temp\output\BG Output_Last_0005\log.txt
...
I can parse a single log file using the command below:
C:\Program Files (x86)\Log Parser 2.2>LogParser -i:TSV "select MIN(Datetime) AS StartTime from 'D:\temp\output\BG Output_Last_0001\log.txt'" -o:datagrid
What I tried:
I tried the command below, using the * wildcard in Log Parser:
C:\Program Files (x86)\Log Parser 2.2>LogParser -i:TSV "select MIN(Datetime) AS StartTime from 'D:\temp\output\*\log.txt'" -o:datagrid
However, the wildcard path D:\temp\output\*\log.txt is not resolved.
Can someone help with how to parse the logs from the multiple folders?
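One possible workaround (a sketch added here, not from the original question): loop over the folders in a batch file and run the query once per folder. The folder pattern below matches the question's layout; -o:CSV prints the result to the console, since -o:datagrid would open one window per folder.

@ECHO OFF
REM Hedged sketch: list the matching folders, then query each folder's log.txt.
REM Assumes LogParser.exe is on the PATH or the script runs from its folder.
FOR /F "delims=" %%F IN ('dir /b /ad "D:\temp\output\BG Output_Last_*"') DO (
    LogParser -i:TSV "select MIN(Datetime) AS StartTime from 'D:\temp\output\%%F\log.txt'" -o:CSV
)

(Use %F instead of %%F when typing this directly at the prompt rather than in a .bat file.)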
Hi, I have a DB2 database at
/db2/ins/data/ins/dbtest
but its origin is
/db2/oldins/data/oldins/dbtest1
I copied the files to the folders as needed.
My relocate.cfg looks like:
DB_NAME=dbtest1,dbtest
DB_PATH=/db2/oldins/data/dbtest1/metalog/,/db2/ins/data/ins/dbtest/metalog
INSTANCE=oldins,ins
STORAGE_PATH=/db2/oldins/data/dbtest1/data/,/db2/ins/data/ins/dbtest/data/
LOG_DIR=/db2/oldins/data/dbtest1/metalog/oldins/NODE0000/SQL00001/LOGSTREAM0000/,/db2/ins/data/ins/dbtest/metalog/NODE0000/SQL00001/
LOGARCHMETH1=DISK:/db2/backup/ins/dbtest/archivlogfiles/
I get this error:
DBT1006N The "/db2/oldins/data/dbtest1/data/dbtest1_TS.dbf/SQLTAG.NAM" file or device could not be opened.
The system is DB2 v. 10.5 LUW.
The file does exist and the privileges are correct.
How do I add this to the relocate.cfg file or what do I need to do?
Thank you for any help.
Here is a simple test case showing how to use db2relocatedb:
[Db2] Simple test case shell script for db2relocatedb command
https://www.ibm.com/support/pages/node/1099185
It has a topic about:
- db2relocatedb for changing a container path
It shows that we need to move the paths with the 'mv' command before running the db2relocatedb command, as below:
# mv storage path manually and run db2relocatedb with relocate.cfg file
mv /home/db2inst1/db/stor1 /home/db2inst1/db/new1
mv /home/db2inst1/db/stor2 /home/db2inst1/db/new2
db2relocatedb -f relocate.cfg
I recommend reviewing it.
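One more suggestion specific to this question (a hedged guess, not from the linked page): the DBT1006N message names the old container file, which hints that relocate.cfg may also need a CONT_PATH entry mapping each old container path to its new location, for example:

CONT_PATH=/db2/oldins/data/dbtest1/data/dbtest1_TS.dbf,/db2/ins/data/ins/dbtest/data/dbtest1_TS.dbf

The new path here is inferred from the question's layout and is an assumption; adjust it to wherever the container file actually lives now.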
Hope this helps.
I'm trying to run a batch file to execute many commands in one psql shell.
I'm using Postgres version 11.4
This is my code:
#ECHO OFF
"C:\Program Files\PostgreSQL\11\bin\psql.exe" "dbname=databasename
host=hostname user=username password=#bcd1234 port=5432 sslmode=require"
DELETE from my_table1;
DELETE from my_table2;
DELETE from my_table3;
PAUSE
I expect the script to delete all the data from the 3 tables, but it only runs the first line, which logs into Postgres.
You can execute multiple commands by executing them from a file.
Create a file and write all your commands in it.
Use the -f option to pass the file as the source of commands.
See the -f option in the psql documentation: https://www.postgresql.org/docs/9.1/app-psql.html
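A sketch of the batch file rewritten this way (the connection string is copied from the question; commands.sql is a hypothetical file you create next to the batch file):

@ECHO OFF
REM commands.sql contains:
REM   DELETE FROM my_table1;
REM   DELETE FROM my_table2;
REM   DELETE FROM my_table3;
"C:\Program Files\PostgreSQL\11\bin\psql.exe" "dbname=databasename host=hostname user=username password=#bcd1234 port=5432 sslmode=require" -f commands.sql
PAUSE

For just a handful of statements, psql's -c option also works, e.g. -c "DELETE from my_table1;".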
I'm having some trouble using the bulk:upsert command to update Account objects via a CSV file. Hopefully someone can help me with this. Below is what I'm doing:
My csv file name is account.csv and it contains the following data:
Id,Name
0012F00000QjhC7QAJ,LimTest 1
0012F00000QjhkSQAR,LimTest 2
Below is the command that I'm running:
sfdx force:data:bulk:upsert -s Account -f account.csv -i Id -u dev
The above command gets submitted successfully, but the job failed.
The batch status is shown below (screenshot omitted).
When I view the request, it looks like this (screenshot omitted).
It works after I manually created an empty file and copied and pasted the data into this new file. The original file, account.csv, was created using this command:
sfdx force:data:soql:query -q "select Id, Name from Account" -r csv -u dev > account.csv
I guess the above command must have created the file in an encoding that bulk:upsert does not know how to handle.
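If the export was run from Windows PowerShell (an assumption; the question does not say which shell was used), the > redirection writes UTF-16LE by default, which would explain why bulk:upsert could not read the file. A sketch of re-encoding the export to UTF-8 before the upsert:

# Hypothetical fix: re-encode the exported CSV as UTF-8 (file names are examples).
Get-Content account.csv | Set-Content -Encoding utf8 account-utf8.csv
sfdx force:data:bulk:upsert -s Account -f account-utf8.csv -i Id -u dev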
I have created a BCP utility and I have wrapped it in a bat file. I have then created a daily task using Task Scheduler in Windows Server 2012.
The function of the BCP utility is to rename a file called 'myfile.csv' (located in C:\) by adding a date stamp to it, and to update the file with the result of a SQL query.
The code currently stands as follows:
cd:\Program Files\ Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn
set vardate=%DATE:~4,10%
set varDateWithoutSlashes=%vardate:/=-%
ren C:\myfile.csv myfile_%varDateWithoutSlashes%.csv
bcp "SELECT TOP 100 ReservationStayID,NameTitle,FirstName,LastName,ArrivalDate,DepartureDate FROM MyDatabase.dbo.GuestNameInfo" queryout C:\myfile.csv -t, -c -S [ipaddress] -U sa -P 1234
My problem is that when the task runs, it renames the file correctly with the date stamp, but it seems the SELECT query does not run, as the file is empty (except for the headers, which have been pre-loaded, by the way).
What is wrong with my code?
I should also add the following:
Are the double quotes in the SELECT statement above correct, or should they be single quotes?
Should the ipaddress in my code above be in square brackets, or should I remove them?
I have left the "Location" field 'as is' in the Task Scheduler (screenshot omitted). Should it be filled in? If yes, with what?
Thanks for helping out!
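For what it's worth, a sketch of a corrected version (the server address and credentials below are placeholders): the double quotes around the query are correct for cmd.exe, bcp's -S switch takes the server name or IP without square brackets, and the original cd line is missing a space after cd and contains a stray space inside the path.

@ECHO OFF
REM Quote the path and use /d so the change works whatever drive and
REM directory Task Scheduler starts the script in.
cd /d "C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn"

set vardate=%DATE:~4,10%
set varDateWithoutSlashes=%vardate:/=-%

ren C:\myfile.csv myfile_%varDateWithoutSlashes%.csv

REM -S takes the address directly; 192.0.2.10 is a placeholder.
bcp "SELECT TOP 100 ReservationStayID,NameTitle,FirstName,LastName,ArrivalDate,DepartureDate FROM MyDatabase.dbo.GuestNameInfo" queryout C:\myfile.csv -t, -c -S 192.0.2.10 -U sa -P 1234

Alternatively, Task Scheduler's optional "Start in" field can be set to the Binn folder above, which makes the cd line unnecessary.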
I'm trying to use Microsoft's Log Parser to read multiple sets of IIS log files. My query works fine; however, to get it to work properly, I need to list the directory that the files sit directly under.
I want to be able to do a recursive search under a high-level directory. I have found how to do this through the DLLs, but I can't find how with the command prompt.
There has to be a simple solution to this, and I'm just missing it.
Add the -recurse:-1 option to the command line. Check the available command-line options for your input format with:
C:\>logparser -h -i:IIS
Example output:
Input format: IIS (Microsoft IIS Log Format)
Parses Microsoft IIS log files
FROM syntax:
<filename> | <SiteID> [, <filename> | <SiteID> ... ]
<SiteID> = '<' SiteID '>'
SiteID can be a SiteID number, a fully qualified ADSI Path (e.g.
"//GABRIEGI1/W3SVC/1"), or a Site name (e.g. "My External Site"), eventually
containing wildcards
Parameters:
-locale <locale name> : 3-letter ID of the log file locale
[default value=DEF]
-returnExtraFields ON|OFF : Return additional fields in
Parameters field [default value=OFF]
-iCodepage <codepage ID> : Input codepage (-2=guess from
filename and/or LogInUTF8 property)
[default value=guess from filename
and/or LogInUTF8 property]
-recurse <level> : Max subdirectory recursion level
(0=no recurse, -1=all levels)
[default value=0]
-minDateMod <date> : Minimum file last modified date
[default value=not specified]
-iCheckpoint <checkpoint file> : Save checkpoint information to this
file [default value=no checkpoint]
Fields:
LogFilename (S) LogRow (I) UserIP (S) UserName (S)
Date (T) Time (T) ServiceInstance (S) HostName (S)
ServerIP (S) TimeTaken (I) BytesSent (I) BytesReceived (I)
StatusCode (I) Win32StatusCode (I) RequestType (S) Target (S)
Parameters (S)
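For instance (a sketch; the path below is hypothetical), this searches every subdirectory under a top-level log folder:
C:\>LogParser -i:IIS -recurse:-1 "select count(*) from 'D:\logs\*.log'" -o:CSV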
I couldn't get -recurse to run if the input format was set to W3C (-i:W3C).
For this I simply used the following in PowerShell when specifying the file/folder path, e.g.:
$httpLogPath = Get-ChildItem Y:\Data\folder* -Include *.log -Recurse
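The collected paths can then be passed to Log Parser as a comma-separated list in the FROM clause, which the help output above shows accepting multiple filenames. A sketch, assuming the same Y:\Data layout and LogParser.exe on the PATH:

# Hedged sketch: build a quoted, comma-separated file list for the FROM clause.
$files = (Get-ChildItem Y:\Data\folder* -Include *.log -Recurse |
    ForEach-Object { "'$($_.FullName)'" }) -join ','
& LogParser -i:W3C "select count(*) from $files" -o:CSV

With thousands of files this can overrun the command-line length limit; in that case, run Log Parser once per folder instead.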