Less fio output: update the progress on the same line - benchmarking

After reading the fio manual and researching for quite some time, I still do not know the answer. Please help if you know the trick:
Is there an option I can choose in fio so that it doesn't print a new line of progress every second?
[root@localhost fio]# fio test.fio
raw=random-read: (g=0): rw=randread, bs=8K-8K/8K-8K/8K-8K, ioengine=libaio, iodepth=16
fio-2.2.9
Starting 1 process
Jobs: 1 (f=1): [r(1)] [5.0% done] [2023MB/0KB/0KB /s] [259K/0/0 iops] [eta 0
Jobs: 1 (f=1): [r(1)] [6.7% done] [2016MB/0KB/0KB /s] [258K/0/0 iops] [eta 0
Jobs: 1 (f=1): [r(1)] [8.3% done] [2018MB/0KB/0KB /s] [258K/0/0 iops] [eta 0
....

I know this is a bit old, but the solution is simple: make your terminal wider and fio will update the progress on a single line instead of printing a new one every second.
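If you'd rather not depend on terminal width, fio also has command-line switches for taming the ETA output. A sketch (option names taken from the fio manual; verify against your fio version, since options vary between releases):

```
# Suppress the per-second ETA/progress lines entirely:
fio --eta=never test.fio

# Or only emit a fresh ETA line every 10 seconds:
fio --eta-newline=10s test.fio
```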


Noob user, ran into an error on prototyping attempt

I'm making my first attempt at creating a Service and API prototype. I've been following this guide https://medium.com/madhash/prototype-your-app-with-firebase-api-7a51325de6f2
I've reached the step of "Run > Run Function > Initialize" but I'm getting an error relating to the code.gs I've pasted in from https://gist.github.com/dottedsquirrel/b248e5098fe2a9c860ed60f7db7c7e1d
The error I'm getting is "We're sorry, a server error occurred. Please wait a bit and try again. (line 49, file "Code")"
When I look at that line of code it states: var ss = SpreadsheetApp.openById(sheetID);
Can anyone give me some pointers on how to resolve this error? Could there be a problem with the data set I created? The data sheet I'm pointing at looks like this:
ID | ItemFunctionalID | ItemFunctionalID | Type | CustodianGroupID | ProcessScopeID | CountryID | StartDt | EndDt | Status
1 | 28948190843 | Release | 100199 | 200002 | 01/01/2021 | Active
2 | 28948190843 | Release | 100011 | 01/01/2021 | Active
3 | 28948191024 | Release | 100199 | 200002 | 01/01/2021 | Active
4 | 28948191024 | Release | 100011 | 01/01/2021 | Active
5 | 28948191031 | Release | 100199 | 200002 | 01/01/2021 | Active
(Some cells are empty in the sheet, so the rows above carry fewer values than the header.)
Thanks!
I tried to run the code by:
Run> Run Function > Initialize
And it didn't work. Instead, try going to the function dropdown, selecting Initialize, and then pressing the triangle on the left to run the script.

Performance Tuning Mystery

So I have an SSIS package with a performance problem. I have done 4 runs so far.
Run 1 - Run the whole package. It takes 58 seconds. Performance problem replicated.
Run 2 - Run the whole package with logging enabled. 66 seconds.
01-18-Package 66 10:32:26 10:33:32
Task1 2 10:32:26 10:32:28
Task2 1 10:32:28 10:32:29
Task3 2 10:32:29 10:32:31
Task4 1 10:32:31 10:32:32
Task5 1 10:32:31 10:32:32
Data Flow 59 10:32:32 10:33:31
Task 7 1 10:33:31 10:33:32
The bottleneck appears to be the Data Flow.
Run 3 - Execute the Data Flow on its own using right-click and Execute Task. It takes 8 seconds. What? Running the package with only the Data Flow task, using the play button, gives me 9.6 seconds.
Run 4 - Strip out everything from the package apart from the Data Flow and run with logging. 52 seconds.
Is the problem the Data Flow, or is this a memory issue? What should be my logical next step in this investigation? Logging is not the issue, and the Data Flow on its own is not the problem. There is a lookup in the Data Flow that may use some memory, if that is an issue.
[Find FaultID [33]] Information: The Find FaultID processed 540345
rows in the cache. The processing time was 1.623 seconds. The cache
used 19452420 bytes of memory.

How Do You Monitor The CPU Utilization of a Process Using PowerShell?

I'm trying to write a PowerShell script to monitor % CPU utilization of a SQL Server process. I'd like to record snapshots of this number every day so we can monitor it over time and watch for trends.
My research online said this WMI query should give me what I want:
Get-WmiObject -Query "SELECT PercentProcessorTime FROM win32_PerfFormattedData_PerfProc_Process WHERE Name='SqlServr'"
When I run the WMI query I usually get a value somewhere between 30-50%
However, when I watch the process in Resource Monitor it usually averages at less than 1% CPU usage
I know the WMI query is simply returning a snapshot of CPU usage rather than an average over a long period of time, so I know the two aren't directly comparable. Even so, I think the snapshot should usually be less than 1% since the Resource Monitor average is less than 1%.
Does anybody have any ideas on why there is such a large discrepancy? And how I can get an accurate measurement of the CPU usage for the process?
Here is everything I've learned about WMI and performance counters over the last couple of days.
WMI stands for Windows Management Instrumentation. WMI is a collection of classes registered with the WMI system and the Windows COM subsystem. These classes are known as providers and have any number of public properties that return dynamic data when queried.
Windows comes pre-installed with a large number of WMI providers that give you information about the Windows environment. For this question we are concerned with the Win32_PerfRawData* providers and the two wrappers that build off of it.
If you query any Win32_PerfRawData* provider directly you'll notice the numbers it returns are scary looking. That's because these providers give the raw data you can use to calculate whatever you want.
To make it easier to work with the Win32_PerfRawData* providers Microsoft has provided two wrappers that return nicer answers when queried, PerfMon and Win32_PerfFormattedData* providers.
Ok, so how do we get a process's % CPU utilization? We have three options:
Get a nicely formatted number from the Win32_PerfFormattedData_PerfProc_Process provider
Get a nicely formatted number from PerfMon
Calculate the % CPU utilization for ourselves using Win32_PerfRawData_PerfProc_Process
We will see that there is a bug with option 1 so that it doesn't work in all cases even though this is the answer usually given on the internet.
If you want to get this value from Win32_PerfFormattedData_PerfProc_Process you can use the query mentioned in the question. This will give you the sum of the PercentProcessorTime value for all of this process's threads. The problem is that this sum can be >100 if there is more than 1 core but this property maxes out at 100. So, as long as the sum of all this process's threads is less than 100 you can get your answer by dividing the process's PercentProcessorTime property by the core count of the machine.
If you want to get this value from PerfMon in PowerShell you can use Get-Counter "\Process(SqlServr)\% Processor Time". This will return a number between 0 - (CoreCount * 100).
If you want to calculate this value for yourself the PercentProcessorTime property on the Win32_PerfRawData_PerfProc_Process provider returns the CPU time this process has used. So, you'll need to take two snapshots we'll call them s1 and s2. We'll then do (s2.PercentProcessorTime - s1.PercentProcessorTime) / (s2.TimeStamp_Sys100NS - s1.TimeStamp_Sys100NS).
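The raw-counter arithmetic above fits in a few lines. The snapshot values below are made up for illustration (on Windows you would pull them from Win32_PerfRawData_PerfProc_Process); both counters are in 100 ns units, so the delta-over-delta ratio is already a fraction of one CPU:

```python
# Sketch of the raw-counter calculation described above.
# PercentProcessorTime (raw) is accumulated CPU time in 100ns units;
# TimeStamp_Sys100NS is wall-clock time in 100ns units, so
# delta/delta is the fraction of one CPU the process used.

def percent_processor_time(s1, s2, core_count=1):
    cpu_delta = s2["PercentProcessorTime"] - s1["PercentProcessorTime"]
    time_delta = s2["TimeStamp_Sys100NS"] - s1["TimeStamp_Sys100NS"]
    return 100.0 * cpu_delta / time_delta / core_count

# Hypothetical snapshots taken 1 second (10,000,000 x 100ns) apart:
s1 = {"PercentProcessorTime": 0,         "TimeStamp_Sys100NS": 0}
s2 = {"PercentProcessorTime": 4_000_000, "TimeStamp_Sys100NS": 10_000_000}

print(percent_processor_time(s1, s2))                # 40.0 (sum across cores)
print(percent_processor_time(s1, s2, core_count=8))  # 5.0 (machine-wide %)
```

Pass core_count to get the normalized machine-wide figure that Resource Monitor shows, as discussed above.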
And that is the final word. Hope it helps you.
Your hypothesis is almost correct. A single thread (and a process will always have at least one thread) can have at most 100% for PercentProcessorTime but:
A process can have multiple threads.
A system can have multiple (logical) CPU cores.
Hence here (Intel i7 CPU with hyperthreading on) I have 8 logical cores, and the top 20 threads (filtering out totals) shows (with a little tidying up to make it readable):
PS > gwmi Win32_PerfFormattedData_PerfProc_Thread |
?{$_.Name -notmatch '_Total'} |
sort PercentProcessorTime -desc |
select -first 20 |
ft -auto Name,IDProcess,IDThread,PercentProcessorTime
Name IDProcess IDThread PercentProcessorTime
---- --------- -------- --------------------
Idle/6 0 0 100
Idle/3 0 0 100
Idle/5 0 0 100
Idle/1 0 0 100
Idle/7 0 0 96
Idle/4 0 0 96
Idle/0 0 0 86
Idle/2 0 0 68
WmiPrvSE/7#1 7420 6548 43
dwm/4 2260 6776 7
mstsc/2#1 3444 2416 3
powershell/7#2 6352 6552 0
conhost/0#2 6360 6368 0
powershell/5#2 6352 6416 0
powershell/6#2 6352 6420 0
iexplore/7#1 4560 3300 0
Foxit Reader/1 736 5304 0
Foxit Reader/2 736 6252 0
conhost/1#2 6360 1508 0
Foxit Reader/0 736 6164 0
all of which should add up to something like 800 for the last column.
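As a sanity check on that claim, summing the PercentProcessorTime column from the listing above (values copied by hand) lands just shy of 8 cores x 100%:

```python
# PercentProcessorTime values from the thread listing above:
# Idle/0..7, then WmiPrvSE, dwm, mstsc, and nine threads at 0.
percent_processor_time = [100, 100, 100, 100, 96, 96, 86, 68, 43, 7, 3] + [0] * 9
total = sum(percent_processor_time)
print(total)  # 799, i.e. roughly 800 = 8 logical cores * 100%
```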
But note this is all rounded to integers. Compare with the CPU column of Process Explorer (which doesn't round when View | Show Fractional CPU is selected) over a few processes. Note that, much like Win32_PerfFormattedData_PerfProc_Process, the percentage value is normalised for the core count (and the screenshot here showed only part of the display):
A lot of processes are using a few hundreds of thousands of cycles, but not enough to round up to a single percent.
Have you tried Get-Counter?
PS PS:\> Get-Counter "\Processus(iexplor*)\% temps processeur"
Timestamp CounterSamples
--------- --------------
17/07/2012 22:39:25 \\jpbhpp2\processus(iexplore#8)\% temps processeur :
1,5568026751287
\\jpbhpp2\processus(iexplore#7)\% temps processeur :
4,6704080253861
\\jpbhpp2\processus(iexplore#6)\% temps processeur :
0
\\jpbhpp2\processus(iexplore#5)\% temps processeur :
4,6704080253861
\\jpbhpp2\processus(iexplore#4)\% temps processeur :
0
\\jpbhpp2\processus(iexplore#3)\% temps processeur :
0
\\jpbhpp2\processus(iexplore#2)\% temps processeur :
0
\\jpbhpp2\processus(iexplore#1)\% temps processeur :
1,5568026751287
\\jpbhpp2\processus(iexplore)\% temps processeur :
0
Be careful: the counter names depend on your locale. Test with:
PS PS:\> Get-Counter -ListSet * | where {$_.CounterSetName -contains "processus"}

Apache2: server-status reported value for "requests/sec" is wrong. What am I doing wrong?

I am running Apache2 on Linux (Ubuntu 9.10).
I am trying to monitor the load on my server using mod_status.
There are 2 things that puzzle me (see cut-and-paste below):
The CPU load is reported as a ridiculously small number,
whereas, "uptime" reports a number between 0.05 and 0.15 at the same time.
The "requests/sec" is also ridiculously low (0.06)
when I know there are at least 10 requests coming in per second right now.
(You can see there are close to a quarter million "accesses" - this sounds right.)
I am wondering whether this is a bug (if so, is there a fix/workaround),
or maybe a configuration error (but I can't imagine how).
Any insights would be appreciated.
-- David Jones
- - - - -
Current Time: Friday, 07-Jan-2011 13:48:09 PST
Restart Time: Thursday, 25-Nov-2010 14:50:59 PST
Parent Server Generation: 0
Server uptime: 42 days 22 hours 57 minutes 10 seconds
Total accesses: 238015 - Total Traffic: 91.5 MB
CPU Usage: u2.15 s1.54 cu0 cs0 - 9.94e-5% CPU load
.0641 requests/sec - 25 B/second - 402 B/request
11 requests currently being processed, 2 idle workers
- - - - -
After I restarted my Apache server, I realized what is going on. The "requests/sec" value is calculated over the lifetime of the server. So if your Apache server has been running for 3 months, this tells you nothing at all about the current load on your server. Instead, it reports the total number of requests divided by the total number of seconds since the server started.
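The status output above bears this out; a quick check with the numbers copied from the report:

```python
# "requests/sec" on mod_status is total accesses / total uptime.
# Server uptime from the report: 42 days 22 hours 57 minutes 10 seconds.
uptime_seconds = 42 * 86400 + 22 * 3600 + 57 * 60 + 10  # 3,711,430 s
total_accesses = 238015
print(round(total_accesses / uptime_seconds, 4))  # 0.0641, matching the report
```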
It would be nice if there was a way to see the current load on your server. Any ideas?
Anyway, ... answered my own question.
-- David Jones
The Apache status value "Total Accesses" is the total access count since the server started; the delta of that count over an interval, divided by the seconds elapsed, is what we actually mean by "requests per second".
Here is one way to get it:
1) Apache monitor script for zabbix
https://github.com/lorf/zapache/blob/master/zapache
2) Install & config zabbix agentd
UserParameter=apache.status[*],/bin/bash /path/apache_status.sh $1 $2
3) Zabbix - Create apache template - Create Monitor item
Key: apache.status[{$APACHE_STATUS_URL}, TotalAccesses]
Type: Numeric(float)
Update interval: 20
Store value: Delta (speed per second) --this is the key option
Zabbix will calculate the increment of the Apache request counter and store the delta value, which is the "requests per second" figure.
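What the "Delta (speed per second)" store does can be sketched as follows; the sample values here are hypothetical:

```python
# Sample "Total Accesses" twice and divide the increase by the elapsed
# seconds -- a rate over the polling window, not the server lifetime.
def requests_per_second(prev_accesses, prev_time, cur_accesses, cur_time):
    return (cur_accesses - prev_accesses) / (cur_time - prev_time)

# e.g. 200 new accesses over a 20-second polling interval:
print(requests_per_second(238015, 0, 238215, 20))  # 10.0 requests/sec
```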

Performance-related: How does SQL Server process concurrent queries from multiple connections from a single .NET desktop application?

Single-threaded version description:
Program gathers a list of questions.
For each question, get model answers, and run each one through a scoring module.
Scoring module makes a number of (read-only) database queries.
Serial processing, single database connection.
I decided to multi-thread the above described program by splitting the question list into chunks and creating a thread for each one.
Each thread opens its own database connection and works on its own list of questions (about 95 questions on each of 6 threads). The application waits for all threads to finish, then aggregates the results for display.
To my surprise, the multi-threaded version ran in approximately the same time, taking about 16 seconds instead of 17.
Questions:
Why am I not seeing the kind of gain in performance I would expect from executing queries concurrently on separate threads with separate connections? Machine has 8 processors.
Will SQL Server process queries concurrently when they are coming from a single application, or might it (or .net itself) be serializing them?
Might there be something misconfigured, that would make it go faster, or might I just be pushing SQL Server to its computational limits?
Current configuration:
Microsoft SQL Server Developer Edition 9.0.1406 RTM
OS: Windows Server 2003 Standard
Processors: 8
RAM: 4GB
This is just a shot in the dark, but I bet you are not seeing the performance gain because they serialize themselves in the database due to locking of shared resources (records). Now for the small print.
I assume your C# code is actually correct and you actually do start separate threads and issue each query in parallel. No offense, but I've seen many make that claim when the code was actually serial on the client, for various reasons. You should validate this by monitoring the server (via Profiler, or using sys.dm_exec_requests and sys.dm_exec_sessions).
Also, I assume that your queries are of similar weight, i.e., you do not have one thread that lasts 15 seconds and 5 that last 100 ms.
The symptoms you describe, in the absence of more details, would suggest that you have a write operation at the beginning of each thread that takes an X lock on some resource. The first thread starts and locks the resource; the other 5 wait. The 1st thread finishes and releases the resource, then the next one grabs it; the other 4 wait. So the last thread has to wait for the execution of all the other 5. This would be extremely easy to troubleshoot by looking at sys.dm_exec_requests and monitoring what blocks the requests.
BTW you should consider using Asynchronous Processing=true and rely on the async methods like BeginExecuteReader to launch your commands in execution in parallel w/o the overhead of client side threads.
You can simply check Task Manager when the process is running. If it's showing 100% CPU usage, then it's CPU bound. Otherwise it's IO bound.
With hyperthreading, 50% CPU usage can be roughly equivalent to 100% usage of the physical cores!
Wow, I didn't realize how old the thread was. I guess it's always good to leave the response for others looking.
How large is your database?
How fast are your HDDs / Raid / Other storage
Perhaps your DB is I/O bound?
My first inclination is that you're trying to solve an IO problem with threads, which almost never works. IO is IO, and more threads doesn't increase the pipe. You'd be better off downloading all questions and their answers in one batch and processing the batch locally with multiple threads.
Having said that, you're probably experiencing some db locking that is causing slowness. Since you're talking about read-only queries, try using the with (nolock) hint on your queries to see if that helps.
Regarding SQL server processing, it is my understanding that SQL Server will try to process as many connections concurrently as possible (one statement at a time per connection), up to the max connections allowed by configuration. The kind if issue you're seeing is almost never a thread issue and almost always a locking or IO problem.
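The fetch-once-then-thread approach suggested above can be sketched as follows (in Python for brevity; score() is a stand-in for the scoring module from the question, and the data shapes are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def score(question, model_answers):
    # Placeholder for the real scoring logic, which in the original
    # design issued its own read-only database queries per question.
    return len(model_answers)

def score_all(batch, workers=6):
    """batch: list of (question, model_answers) pairs fetched from the
    database in one query, so the worker threads never touch the database."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda qa: score(*qa), batch))

print(score_all([("q1", ["a", "b"]), ("q2", ["c"])]))  # [2, 1]
```

The point of the design is that all IO happens in one batched round trip, and the threads only do local CPU work, so they can actually run in parallel.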
Is it possible that the threads share a connection? Did you verify that multiple SPIDs are created when this runs (sp_who)?
I ran a join query across sys.dm_os_workers, sys.dm_os_tasks, and sys.dm_exec_requests on task_address, and here are the results (some uninteresting/zero-valued fields excluded, others prefixed with ex or os to resolve ambiguities):
-COL_NAME- -Thread_1- -Thread_2- -Thread_3- -Thread_4-
task_state SUSPENDED SUSPENDED SUSPENDED SUSPENDED
context_switches_count 2 2 2 2
worker_address 0x3F87A0E8 0x5993E0E8 0x496C00E8 0x366FA0E8
is_in_polling_io_completion_routine 0 0 0 0
pending_io_count 0 0 0 0
pending_io_byte_count 0 0 0 0
pending_io_byte_average 0 0 0 0
wait_started_ms_ticks 1926478171 1926478187 1926478171 1926478187
wait_resumed_ms_ticks 1926478171 1926478187 1926478171 1926478187
task_bound_ms_ticks 1926478171 1926478171 1926478156 1926478171
worker_created_ms_ticks 1926137937 1923739218 1921736640 1926137890
locale 1033 1033 1033 1033
affinity 1 4 8 32
state SUSPENDED SUSPENDED SUSPENDED SUSPENDED
start_quantum 3074730327955210 3074730349757920 3074730321989030 3074730355017750
end_quantum 3074730334339210 3074730356141920 3074730328373030 3074730361401750
quantum_used 6725 11177 11336 6284
max_quantum 4 15 5 20
boost_count 999 999 999 999
tasks_processed_count 765 1939 1424 314
os.task_address 0x006E8A78 0x00AF12E8 0x00B84C58 0x00D2CB68
memory_object_address 0x3F87A040 0x5993E040 0x496C0040 0x366FA040
thread_address 0x7FF08E38 0x7FF8CE38 0x7FF0FE38 0x7FF92E38
signal_worker_address 0x4D7DC0E8 0x571360E8 0x2F8560E8 0x4A9B40E8
scheduler_address 0x006EC040 0x00AF4040 0x00B88040 0x00E40040
os.request_id 0 0 0 0
start_time 2009-05-26 19:39 39:43.2 39:43.2 39:43.2
ex.status suspended suspended suspended suspended
command SELECT SELECT SELECT SELECT
sql_handle 0x020000009355F1004BDC90A51664F9174D245A966E276C61 0x020000009355F1004D8095D234D39F77117E1BBBF8108B26 0x020000009355F100FC902C84A97133874FBE4CA6614C80E5 0x020000009355F100FC902C84A97133874FBE4CA6614C80E5
statement_start_offset 94 94 94 94
statement_end_offset -1 -1 -1 -1
plan_handle 0x060007009355F100B821C414000000000000000000000000 0x060007009355F100B8811331000000000000000000000000 0x060007009355F100B801B259000000000000000000000000 0x060007009355F100B801B259000000000000000000000000
database_id 7 7 7 7
user_id 1 1 1 1
connection_id BABF5455-409B-4F4C-9BA5-B53B35B11062 A2BBCACF-D227-466A-AB08-6EBB56F34FF2 D330EDFE-D49B-4148-B7C5-8D26FE276D30 649F0EC5-CB97-4B37-8D4E-85761847B403
blocking_session_id 0 0 0 0
wait_type CXPACKET CXPACKET CXPACKET CXPACKET
wait_time 46 31 46 31
ex.last_wait_type CXPACKET CXPACKET CXPACKET CXPACKET
wait_resource
open_transaction_count 0 0 0 0
open_resultset_count 1 1 1 1
transaction_id 3052202 3052211 3052196 3052216
context_info 0x 0x 0x 0x
percent_complete 0 0 0 0
estimated_completion_time 0 0 0 0
cpu_time 0 0 0 0
total_elapsed_time 54 41 65 39
reads 0 0 0 0
writes 0 0 0 0
logical_reads 78745 123090 78672 111966
text_size 2147483647 2147483647 2147483647 2147483647
arithabort 0 0 0 0
transaction_isolation_level 2 2 2 2
lock_timeout -1 -1 -1 -1
deadlock_priority 0 0 0 0
row_count 6 0 1 1
prev_error 0 0 0 0
nest_level 2 2 2 2
granted_query_memory 512 512 512 512
The query plan predictor for all queries shows a couple nodes, 0% for select, and 100% for a clustered index seek.
Edit: The fields and values I left out where (same for all 4 threads, except for context_switch_count): exec_context_id(0), host_address(0x00000000), status(0), is_preemptive(0), is_fiber(0), is_sick(0), is_in_cc_exception(0), is_fatal_exception(0), is_inside_catch(0), context_switch_count(3-89078), exception_num(0), exception_Severity(0), exception_address(0x00000000), return_code(0), fiber_address(NULL), language(us_english), date_format(mdy), date_first(7), quoted_identifier(1), ansi_defaults(0), ansi_warnings(1), ansi_padding(1), ansi_nulls(1), concat_null_yields_null(1), executing_managed_code(0)
