Batch file to monitor a process's RAM, CPU%, network data, threads - batch-file

I need to generate a report (periodic, say every 1 minute) that, while running, logs the following to a txt file (or other):
For a given process...
Timestamp : RAM : CPU% : Network data sent/received for last second : Total network data sent/received : threads
I believe in Process Explorer the network data sent/received for last second is called the Delta.
Could you recommend how I might capture this using either a plain batch file, or another tool if required, such as PowerShell or PsList? Or at least point me in the direction of a tool that will report all these things for a given process? Ideally it should also be able to report them for a process running on a remote machine. Many thanks, knowledge gurus!

logman create counter cpu_mem_trh -c "\Processor(_Total)\% Processor Time" "\Memory\Pool Paged Bytes" "\Process(*)\Thread Count" -f csv -o C:\PerfLogs\perflog.csv
logman update cpu_mem_trh -si 60 -v mmddhhmm
logman start cpu_mem_trh
To stop the performance counter, use:
logman stop cpu_mem_trh
The full list of available performance counters and the logman command-line reference can be found in Microsoft's documentation.
For a remote machine, try prefixing each counter path with \\machinename, or use the -s option. The sample interval is set with the -si option on the update verb, and the path of the output log is set with the -o option.
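If a scripting approach is acceptable as an alternative to logman, here is a minimal sketch in Python using the third-party psutil package (not part of the original answer; psutil is assumed to be installed, and the PID and file name are placeholders). One caveat: psutil does not expose per-process network counters, so the sketch logs system-wide send/receive deltas alongside the per-process RAM, CPU% and thread count.
# Minimal sketch (assumes the third-party psutil package: pip install psutil).
# Every INTERVAL seconds it appends one line in the format described above.
# psutil has no per-process network I/O, so the network figures are system-wide.
# (A real script would also handle the process exiting mid-interval.)
import time
from datetime import datetime

import psutil

PID = 1234                # hypothetical PID of the process to watch
INTERVAL = 60             # seconds between samples
LOGFILE = "proc_report.txt"

proc = psutil.Process(PID)
proc.cpu_percent(None)    # prime the CPU counter; the first call returns 0.0
prev = psutil.net_io_counters()

with open(LOGFILE, "a") as log:
    while proc.is_running():
        time.sleep(INTERVAL)
        cur = psutil.net_io_counters()
        sent_delta = cur.bytes_sent - prev.bytes_sent
        recv_delta = cur.bytes_recv - prev.bytes_recv
        prev = cur
        log.write("{} : {} B RAM : {:.1f}% CPU : {}/{} B net delta : "
                  "{}/{} B net total : {} threads\n".format(
                      datetime.now().isoformat(timespec="seconds"),
                      proc.memory_info().rss,
                      proc.cpu_percent(None),
                      sent_delta, recv_delta,
                      cur.bytes_sent, cur.bytes_recv,
                      proc.num_threads()))
        log.flush()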

Related

Check if a file exists in UNIX from SQLplus without UTL_FILE

My current scenario is like this:
I need to login to sqlplus from a shell script to call a stored procedure.
After that I need to create a CSV file by SPOOLING data from a table.
Then I need to check whether the CSV file has been created in a particular directory and depending on the result an update query needs to be run.
I know that this can be checked within SQL*Plus with the help of the UTL_FILE package, but unfortunately, due to client policies, access to this package is restricted in the current system.
Another way is to exit from SQL*Plus, perform the file check in UNIX, and then log in to SQL*Plus again to perform the remaining actions. But I believe this would result in slower execution, and performance is an important factor in this implementation, as the tables contain huge volumes of data (in millions of rows).
So is there any other way to check this from sqlplus without exiting from the current session?
System Info:
OS - Red Hat Enterprise Linux
Database - Oracle 11g
If the file is on the same machine that you're running SQL*Plus on, you could potentially use the host command.
If the file you're checking is the same one you're spooling to, it must exist anyway, or you would have got an SP error of some kind; but if you do want to check the same file for some reason, and assuming you have a substitution variable with the file name:
define csv_file=/path/to/spool.csv
-- call procedure
spool &csv_file
-- do query
spool off
host ls &csv_file
update your_table
set foo=bar
where &_rc = 0;
If the file exists when the host command is run, the _rc substitution variable will be set to zero. If the file doesn't exist or isn't readable for any reason it will be something else - e.g. 2 if the file just doesn't exist. Adding the check &_rc = 0 to your update will mean no rows are updated if there was an error. (You can of course still have whatever other conditions you need for the update).
You could suppress the display of the file name by adding 1>/dev/null to the host command string, and also suppress any error messages by adding 2>/dev/null, though you might want to see those.
The documentation warns against using &_rc as it isn't portable; but it works on RHEL so as long as you don't need your script to be portable to other operating systems this may be good enough for you. What you can't do, though, is do anything with the contents of the file, or interpret anything about it. All you have available is the return code from the command you run. If you need anything more sophisticated you could call a script that generates specific return codes, but that's getting a bit messy.

Arelle: locating a ratio-extraction command that I cannot find in the docs (~2 pages)

The basic command when working with Arelle's command-line operation is:
python arelleCmdLine.py arguments
provided we cd into the folder where Arelle is installed.
I have devoted huge resources to this, but I cannot find whether there is a command in the documentation (about ~2 pages) that can output ratios (e.g. Current Ratio) or metrics (e.g. Revenue), instead of having to download all the data in columns and filter it. I must admit that I cannot understand some commands in the documentation.
What I am doing to download data is:
python arelleCmdLine.py -f http://www.sec.gov/Archives/edgar/data/1009672/000119312514065056/crr-20131231.xml -v --facts D:\Save_in_File.html --factListCols "Label Name contextRef unitRef Dec Prec Lang Value EntityScheme EntityIdentifier Period Dimensions"
-f specifies the entry point and is followed by the web location of the data to pull
-v validates the pulled data
--facts saves the data in an HTML file in a designated directory
--factListCols lists the columns I choose to have (I request all the available columns in the command above)
There are virtually no tutorials on this.
Arelle only runs on Python 3 and is straightforward to download and install.
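As far as I can tell there is no command-line option that outputs ratios directly, so one workaround is to compute them from the exported fact list. The sketch below is only an illustration and rests on two assumptions not confirmed above: that the facts were saved to a .csv file via --facts (so the Name and Value columns from --factListCols are present), and that the filing reports the standard us-gaap AssetsCurrent and LiabilitiesCurrent concepts.
# Rough sketch, not an official Arelle feature: compute the Current Ratio from
# a fact list exported by arelleCmdLine.py (assumed saved as CSV, e.g.
# --facts D:\facts.csv, with the Name and Value columns requested above).
import csv

FACTS_CSV = r"D:\facts.csv"      # hypothetical path to the exported fact list

def first_value(rows, concept):
    """Return the first numeric Value whose Name matches the given concept."""
    for row in rows:
        name = row.get("Name", "")
        if name == concept or name.split(":")[-1] == concept:
            try:
                return float(row["Value"].replace(",", ""))
            except (KeyError, ValueError):
                continue
    raise KeyError("concept not found: " + concept)

with open(FACTS_CSV, newline="", encoding="utf-8") as fh:
    rows = list(csv.DictReader(fh))

assets = first_value(rows, "AssetsCurrent")            # us-gaap:AssetsCurrent
liabilities = first_value(rows, "LiabilitiesCurrent")  # us-gaap:LiabilitiesCurrent
print("Current Ratio: {:.2f}".format(assets / liabilities))
A filing usually reports these concepts for more than one period, so a real script would also filter on the Period column; this sketch just takes the first numeric occurrence.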

if-then batch to map drives

I am trying to get a couple of scripts to work with each other, but I am not entirely familiar with the if-then commands. I am using wizapp and I have my info ready to go, but I don't know how to map a specific location based on the output of wizapp. For instance:
if %siteid%=="0"
How do I map that to a drive? I have 10 different drives that have to be mapped using that info, and I am lost; siteid will obviously be different in each if-then statement.
This is relatively easy to do. I will provide manual instructions as it is extremely useful to learn and will improve your coding skills.
C:\windows\system32> net view
Server Name Remark
---------------------------------------------------------------------------------
\\PC1
\\PC2
\\PC3
\\PC4
\\PC5
\\PC6
\\PC7
\\PC8
\\PC9
\\SERVER
The command completed successfully.
C:\windows\system32> net view \\PC1
Shared resources at \\PC1
Share name Type Used as Comment
-------------------------------------------------------------------------------
SharedDocs Disk
The command completed successfully.
C:\windows\system32> net use ( Drive letter A-Z ) \\PC1\SharedDocs
The command completed successfully.
Now open up My Computer and you'll see that PC1 is mapped as a drive on your computer.
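As for the if-then part of the question: in a batch file this is usually just one line per site, e.g. if "%siteid%"=="0" net use S: \\SERVER\Share0 (note the quotes on both sides of the comparison), repeated for each of the 10 site ids with its own drive letter and share. If that chain gets unwieldy, the same lookup can be sketched outside batch; the following Python sketch is purely illustrative, and the drive letters and UNC paths in it are made up.
# Illustrative sketch only: pick a drive mapping from the siteid value that
# wizapp produced (read here from an argument or environment variable). The
# drive letters and share paths are hypothetical placeholders.
import os
import subprocess
import sys

SITE_SHARES = {
    "0": ("S:", r"\\SERVER\Share0"),
    "1": ("T:", r"\\PC1\SharedDocs"),
    # ... one entry per site, up to the 10 drives mentioned in the question
}

site_id = sys.argv[1] if len(sys.argv) > 1 else os.environ.get("siteid", "")
try:
    drive, share = SITE_SHARES[site_id]
except KeyError:
    sys.exit("No share configured for siteid=%r" % site_id)

# net use <drive> <share> performs the actual mapping, as in the listing above.
subprocess.run(["net", "use", drive, share], check=True)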

Replay a file-based data stream

I have a live stream of data based on files in different formats. Data comes over the network and is written to files in certain subdirectories in a directory hierarchy. From there it is picked up and processed further. I would like to replay e.g. one day of this data stream for testing and simulation purposes. I could duplicate the data stream for one day to a second machine and "record" it this way, by just letting the files pile up without processing or moving them.
I need something simple like a Perl script which takes a base directory, looks at all contained files in subdirectories and their creation time and then copies the files at the same time of the day to a different base directory.
Simple example: I have files a/file.1 2012-03-28 15:00, b/file.2 2012-03-28 09:00, c/file.3 2012-03-28 12:00. If I run the script/program on 2012-03-29 at 08:00 it should sleep until 09:00, copy b/file.2 to ../target_dir/b/file.2, then sleep until 12:00, copy c/file.3 to ../target_dir/c/file.3, then sleep until 15:00 and copy a/file.1 to ../target_dir/a/file.1.
Does a tool like this already exist? It seems I’m missing the right search keywords to find it.
The environment is Linux, command line preferred. For one day it would be thousands of files with a few GB in total. The timing does not have to be ultra-precise. Second resolution would be good, minute resolution would be sufficient.
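I'm not aware of an off-the-shelf tool that does exactly this, but the logic is small enough to sketch. Below is a rough Python version of the script described above (the source and target directories are placeholders, and it uses modification time rather than creation time): it walks the recorded tree, orders the files by their time of day, sleeps until each time comes around on the current day, and copies each file to the same relative path under the target directory.
# Rough sketch of a "replay by time of day" copier; the paths are placeholders.
# Uses file modification times (st_mtime) at second resolution; files whose
# time of day has already passed when the script starts are copied immediately.
import os
import shutil
import time
from datetime import datetime, timedelta

SRC = "/data/recorded"      # base directory holding the recorded day
DST = "/data/replayed"      # target base directory

def seconds_into_day(ts):
    d = datetime.fromtimestamp(ts)
    midnight = d.replace(hour=0, minute=0, second=0, microsecond=0)
    return (d - midnight).total_seconds()

# Collect (time-of-day, relative path) pairs and sort them chronologically.
entries = []
for root, _dirs, files in os.walk(SRC):
    for name in files:
        full = os.path.join(root, name)
        rel = os.path.relpath(full, SRC)
        entries.append((seconds_into_day(os.stat(full).st_mtime), rel))
entries.sort()

today_midnight = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
for offset, rel in entries:
    due = today_midnight + timedelta(seconds=offset)
    wait = (due - datetime.now()).total_seconds()
    if wait > 0:
        time.sleep(wait)
    target = os.path.join(DST, rel)
    os.makedirs(os.path.dirname(target), exist_ok=True)
    shutil.copy2(os.path.join(SRC, rel), target)
    print(datetime.now().isoformat(timespec="seconds"), "copied", rel)
A stricter version could skip files whose time has already passed, or start the replay from a given offset into the day, but for thousands of files with minute-level accuracy this simple loop should be sufficient.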

DB2 load partitioned data in parallel

I have a 10-node DB2 9.5 database, with raw data on each machine, i.e.:
node1:/scratch/data/dataset.1
node2:/scratch/data/dataset.2
...
node10:/scratch/data/dataset.10
There is no shared NFS mount - none of my machines could handle all of the datasets combined.
Each line of a dataset file is a long string of text, column-delimited. The first column is the key. I don't know the hash function that DB2 will use, so the dataset is not pre-partitioned.
Short of renaming all of my files, is there any way to get DB2 to load them all in parallel?
I'm trying to do something like
load from '/scratch/data/dataset' of del modified by coldel| fastparse messages /dev/null replace into TESTDB.data_table part_file_location '/scratch/data';
but I have no idea how to suggest to db2 that it should look for dataset.1 on the first node, etc.
If the individual data files on each partition didn't originate from the same database partition, then you're stuck, and will have to run the load 10 times -- once from each different database partition. You could do this with db2_all to perform the load in a single command:
db2_all "db2 connect to db; db2 load from /scratch/data/dataset.\$DB2NODE of del ..."
Don't try to run the db2_all command in parallel. ;-)
One other thought for the future: Do you have enough room on a single server if you compress all of the files first? You can load from a named pipe:
mkfifo f
cat dataset.*.gz | gzip -dc > f &
db2 "load from f of del ...."
