How to read resource utilization parameters such as CPU, RAM, BW and disk I/O values of containers, VMs, and Hosts in CloudSim Plus?

How can I read and save the resource consumption of containers in CloudSim Plus? Specifically:
Containers: CPU utilization, memory utilization, BW utilization
VMs: CPU utilization, memory utilization, BW utilization, power consumption
Hosts: CPU utilization, memory utilization, BW utilization, power consumption
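One way to do this (a minimal sketch, not an official CloudSim Plus recipe): register an onClockTick listener and append each entity's utilization to a CSV file every time the simulation clock advances. Method names follow the CloudSim Plus 6.x API (Vm.getCpuPercentUtilization(), Resource.getPercentUtilization(), Simulation.addOnClockTickListener()); verify them against the version you use. Note that the core CloudSim Plus distribution models Hosts, VMs and Cloudlets, while container entities come from separate extensions, so the sketch covers VMs and Hosts only. The class name and "results.csv" are illustrative.

import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.hosts.Host;
import org.cloudbus.cloudsim.vms.Vm;
import org.cloudsimplus.listeners.EventInfo;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

public class UtilizationSampler {
    private final PrintWriter csv;
    private final List<Vm> vmList;
    private final List<Host> hostList;

    public UtilizationSampler(CloudSim simulation, List<Vm> vmList, List<Host> hostList) throws IOException {
        this.vmList = vmList;
        this.hostList = hostList;
        this.csv = new PrintWriter(new FileWriter("results.csv"));
        csv.println("time,entity,id,cpuPercent,ramPercent,bwPercent");
        // Sample on every simulation clock tick.
        simulation.addOnClockTickListener(this::sample);
    }

    private void sample(EventInfo info) {
        final double time = info.getTime();
        for (Vm vm : vmList) {
            csv.printf("%.2f,vm,%d,%.4f,%.4f,%.4f%n", time, vm.getId(),
                vm.getCpuPercentUtilization(),        // fraction of the VM's MIPS in use
                vm.getRam().getPercentUtilization(),  // fraction of the VM's RAM allocated
                vm.getBw().getPercentUtilization());  // fraction of the VM's bandwidth allocated
        }
        for (Host host : hostList) {
            // Power draw could be estimated with host.getPowerModel().getPower(cpuFraction)
            // if a PowerModelHost was assigned (an assumption; check your version's API).
            csv.printf("%.2f,host,%d,%.4f,%.4f,%.4f%n", time, host.getId(),
                host.getCpuPercentUtilization(),
                host.getRam().getPercentUtilization(),
                host.getBw().getPercentUtilization());
        }
        csv.flush();
    }
}

Construct the sampler after creating the simulation, VMs and Hosts, but before calling simulation.start(), so the listener is in place for the whole run.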

Related

mssql/server:2019-latest on k8s shows high "other" system cpu utilization

I am running "mcr.microsoft.com/mssql/server:2019-latest" on AWS (c5.2xlarge, Amazon Linux 2).
The EC2 node monitoring tab shows < 20% CPU utilization on the node.
The only thing running is my mssql/server:2019-latest container, along with the AWS and k8s system pods.
The performance dashboard in SQL Server shows:
[screenshots of the dashboard taken on c5.xlarge and c5.2xlarge]
As I go from c5.xlarge to c5.2xlarge, interaction through SSMS (SQL Server Management Studio) is slightly faster; however, the CPU utilization shown in the SQL Server performance dashboard is unchanged. Why is this happening?

SSD read/write measurement using iperf3

Is it possible to measure the read/write speed of an SSD drive by connecting it to a PC and using iperf3?

Find out which query is increasing DTU in SQL Azure

I am using SQL Azure (SQL Server) for my app. The DB tier my app is in was working perfectly until recently, and the average DTU usage was under control. But of late, during peak hours, the DTU consistently spikes to 100%. Upgrading to the next tier is an option, but first I want to figure out which query is responsible for this. Is it possible to find out in Azure which query made the DTU jump to 100%?
Simply put, DTU is a blend of CPU, IO, and memory that you get based on your service tier. It would be really great if there were a column showing how much DTU a query used compared to the total DTU.
I would deal with this issue in two steps.
Step 1:
Find out which metric is consistently above 90%.
-- sys.dm_db_resource_stats records resource usage every 15 seconds and retains about
-- an hour of history; for up to 14 days of history (at 5-minute granularity), use
-- sys.resource_stats in the master database.
SELECT
AVG(avg_cpu_percent) AS 'Average CPU Utilization In Percent',
MAX(avg_cpu_percent) AS 'Maximum CPU Utilization In Percent',
AVG(avg_data_io_percent) AS 'Average Data IO In Percent',
MAX(avg_data_io_percent) AS 'Maximum Data IO In Percent',
AVG(avg_log_write_percent) AS 'Average Log Write Utilization In Percent',
MAX(avg_log_write_percent) AS 'Maximum Log Write Utilization In Percent',
AVG(avg_memory_usage_percent) AS 'Average Memory Usage In Percent',
MAX(avg_memory_usage_percent) AS 'Maximum Memory Usage In Percent'
FROM sys.dm_db_resource_stats;
Step 2:
If any of the above metrics is consistently above 90%, tune the queries driving it.
For example, if CPU is above 90%, start tracking the queries with the highest CPU usage and tune them; one way is to query the sys.dm_exec_query_stats DMV, as sketched below.
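One way to script that tracking step: a hedged sketch in Java via JDBC that pulls the top CPU consumers from the standard sys.dm_exec_query_stats and sys.dm_exec_sql_text DMVs (both available in Azure SQL Database; total_worker_time is reported in microseconds). The connection-string values are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TopCpuQueries {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; fill in your server, database and credentials.
        String url = "jdbc:sqlserver://<server>.database.windows.net;databaseName=<db>;user=<user>;password=<password>";
        String sql =
            "SELECT TOP 10 " +
            "  qs.total_worker_time / qs.execution_count AS avg_cpu_micros, " +
            "  qs.execution_count, " +
            "  SUBSTRING(st.text, 1, 200) AS query_text " +
            "FROM sys.dm_exec_query_stats qs " +
            "CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st " +
            "ORDER BY qs.total_worker_time DESC";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // Print average CPU time, execution count and the first 200 chars of the query.
                System.out.printf("%d us avg | %d execs | %s%n",
                    rs.getLong("avg_cpu_micros"),
                    rs.getLong("execution_count"),
                    rs.getString("query_text"));
            }
        }
    }
}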
Update (2017):
SQL Azure introduced Query Performance Insight, which shows the DTU used by each query. You will have to enable Query Store for this to work. With it, you can see the exact DTU usage for each query.

most impactful Postgres settings to tweak when host has lots of free RAM

My employer runs Postgres on a decently "large" VM. It is currently configured with 24 cores and 128 GB physical RAM.
Our monitoring solution indicates that the Postgres processes never consume more than about 11 GB of RAM even during periods of heaviest load. Presumably all the remaining free RAM is used by the OS to cache the filesystem.
My question: What configuration settings, if tweaked, are most likely to provide performance gains given a workload that's a mixture of transactional and analytical?
In other words, given that there's an embarrassingly large amount of free RAM, where am I likely to get the most "bang for my buck" settings-wise?
EDITED TO ADD:
Here are the current values for some settings frequently mentioned in tuning guides. Note: I didn't set these values; I'm just reading what's in the conf file:
shared_buffers = 32GB
work_mem = 144MB
effective_cache_size = 120GB
"sort_mem" and "max_fsm_pages" weren't set anywhere in the file.
The Postgres version is 9.3.5.
The setting that controls Postgres memory usage is shared_buffers. The recommended setting is 25% of RAM with a maximum of 8GB.
Since 11GB is close to 8GB, it seems your system is tuned well. You could use effective_cache_size to tell Postgres you have a server with a large amount of memory for OS disk caching.
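To check what a running server is actually configured with (rather than trusting the conf file, which may contain overridden or not-yet-applied values), the live settings can be read from the pg_settings catalog view. A small sketch via JDBC, assuming the standard PostgreSQL driver; the URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PgMemorySettings {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        String url = "jdbc:postgresql://localhost:5432/mydb";
        String sql = "SELECT name, setting, unit FROM pg_settings " +
                     "WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size')";
        try (Connection con = DriverManager.getConnection(url, "postgres", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                // 'setting' is expressed in units of 'unit' (e.g. 8kB pages for shared_buffers).
                System.out.printf("%s = %s (unit: %s)%n",
                    rs.getString("name"), rs.getString("setting"), rs.getString("unit"));
            }
        }
    }
}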
Two good places for starting Postgres performance tuning:
Turn on SQL query logging and run EXPLAIN ANALYZE on slow or frequent queries
Use pg_activity (a "top" for Postgres) to see what keeps your server busy

Microsoft SQL Server memory utilization

I'm confused about the memory utilization of SQL Server. Before migration my server had 2 GB of RAM and memory usage was about 1.87 GB. Then I migrated to a server with 16 GB of physical memory. Now the usage is no longer around 1.87 GB; it is 5 GB or higher, even though the SQL Server data hasn't changed.
Here is the memory usage of my current server: [screenshot]. Do you have any idea about this?
It uses more memory because it can. The memory is used for caching: indexes, query plans, and data.
You can limit the memory usage via the Server Memory Server Configuration Options (the max server memory setting), as sketched below.
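For instance, a hedged sketch of capping the instance at 8 GB from Java via JDBC. sp_configure and 'max server memory (MB)' are the standard T-SQL mechanism; the connection string and the 8192 MB cap are illustrative values.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CapSqlServerMemory {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=master;user=sa;password=<password>";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement()) {
            // 'max server memory (MB)' is an advanced option, so expose it first.
            stmt.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;");
            // Cap SQL Server's memory at 8 GB (illustrative value).
            stmt.execute("EXEC sp_configure 'max server memory (MB)', 8192; RECONFIGURE;");
        }
    }
}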
