How to read resource utilization parameters such as CPU, RAM, bandwidth and disk I/O values of containers, VMs, and hosts in CloudSim?

I have a question about simulation with CloudSim. I want to simulate 4 hosts, 20 VMs, and 40 containers.
I want to gather the resource utilization of each container, VM, and host.
The resource consumption metrics to be considered are:
in a container: CPU, memory, and network
in a virtual machine: CPU, memory, network, and, if possible, power consumption
in a host: CPU, memory, network, and power consumption.
How can I code this?
Thank you in advance for the time you spend answering my question.
Thanks,

Related

What does 100% utilisation mean in SageMaker Studio?

(This is related to Sage Maker Studio CPU Usage but focuses on interpreting meaning rather than modifying behaviour)
SageMaker Studio shows Kernel and Instance usage for CPU and Memory:
The kernel is just the selected Jupyter kernel and so would appear as a single process on a local machine, while the instance is the EC2 instance that they're running on.
The only documentation from Amazon appears to be in Use the SageMaker Studio Notebook Toolbar which says that it "Displays the CPU usage and memory usage. Double-click to toggle between the current kernel and the current instance" (this is outdated and relates to the old position of the information).
In the context of SageMaker Studio, does 100% CPU mean 100% of one CPU or 100% of all CPUs? (top shows multi-core as >100% but consolidated measures like Windows Task Manager's default representation show all cores as 100%)
And does 25% instance utilisation then mean that my instance is over-specced? (Intuitively it should, because I'm not hitting 100% even when training a model, but I've tried smaller instances and they still never max Instance CPU usage, only Kernel CPU usage.)
I've tried using joblib to make some parallel "wheel spinning" tasks to check usage, but that just resulted in Kernel being quiet and Instance having all of the usage!

Sage Maker Studio CPU Usage

I'm working in SageMaker Studio, and I have a single instance running one computationally intensive task:
It appears that the kernel running my task is maxed out, but the actual instance is only using a small amount of its resources. Is there some sort of throttling occurring? Can I configure this so that more of the instance is utilized?
Your ml.c5.xlarge instance comes with 4 vCPU. However, Python only uses a single CPU by default. (Source: Can I apply multithreading for computationally intensive task in python?)
As a result, the overall CPU utilization of your ml.c5.xlarge instance is low. To utilize all the vCPUs, you can try multiprocessing.
The examples below are performed using a 2 vCPU + 4 GiB instance.
In the first picture, multiprocessing is not set up. The instance CPU utilization peaks at around 50%.
single processing:
In the second picture, I created 50 processes to be run simultaneously. The instance CPU utilization rises to 100% immediately.
multiprocessing:
It might be that something is off with the stats you're seeing, or they are showing different time spans, or the kernel has a certain resource assignment out of the total instance.
I suggest opening a terminal and running top to see what's actually going on and which UI stat it matches (note that you're opening the instance's terminal, not the Jupyter UI instance terminal).

why does libvlcsharp winform mosaic with 16 channels use a lot of CPU

Setup as follows: A winform app, visual studio 2019, create 16 videoview/mediaplayer instances, each streaming a 960 X 540 30fps camera stream from a multicasting camera.
CPU i7 2.67GHz, GPU NV GTX 1650.
The GPU is loading up to 44% decode and about the same for 3d. The application uses an amazing 75 to 90% of the CPU. It jumps around a lot from one test run to another. The GPU is very stable.
Here's some other information that is interesting. If I run a single copy of this application with one video stream the CPU use is about 5/10% of CPU. If I run 16 instances of the application each instance uses about 4/10 to 8/10% of the CPU. Once I have 16 videos streaming the GPU is same as above (44%) the CPU is nominal.
The increase of CPU usage within one instance while adding cameras is not linear it takes a big jump after 9.
From the diagnostic image below you can see the usage is isolated almost entirely in the Native code. Other diagrams show about 2/3 in the kernel and 1/3 in system IO. The CPU is spread across all the cores pretty evenly.
code on gist
I have tried a lot of variations on this but no matter what I try the CPU usage is pretty constant once I get up to 16 channels. I have tried running each instance within its own thread. That made no difference. I really would like to understand this and find a way to reduce CPU usage. I have an application that uses this tech and a customer that requires even more channels than 16.
It may be a bug, which would need to be reported on trac.videolan.org with a minimal C/C++ reproduction sample for the VLC developers.
Do note that comparing 16 VLC app instances (16 processes) playing and 1 LibVLC-based app instance playing 16 streams (1 process) is not exactly a fair comparison.
The perf usage should still be linear and not exponential, though, so maybe there is a bug.

hardware specification recommendation for Solr

I am looking for a hardware specification for a Solr search engine. Our requirement is to build a search system which indexes about 5 to 9 million documents. The peak load is around 50 queries per second. I checked the Dell website and think that a rack server may be a good fit, so I put together a sample configuration. What do you think of my choice? Do you have any experience with hardware specifications for Solr systems?
PowerEdge R815
Chassis: R815 Chassis for Up to Six 2.5 Inch Hard Drives
Processor: 2x AMD Opteron 6276, 2.3GHz, 16C, Turbo CORE, 16M L2/16M L3, 1600MHz Max Mem
Additional Processor: No 3rd/4th Processors
Operating System: No Operating System
OS Media Kits: None
OS and SW Client Access Licenses: None
Memory: 64GB Memory (8x8GB), 1333MHz, Dual Ranked LV RDIMMs for 2 Processors
Hard Drive Configuration: No RAID for PERC H200 Controllers (Non-Mixed Drives)
Internal Controller: PERC H200 Integrated RAID Controller
Hard Drives: 1TB 7.2K RPM SATA 2.5in Hot-plug Hard Drive
Data Protection Offers: None
Embedded Management: iDRAC6 Express
System Documentation: Electronic System Documentation and OpenManage DVD Kit
Network Adapter: Intel® Gigabit ET NIC, Dual Port, Copper, PCIe-4
Network Adapter: Intel® Gigabit ET NIC, Dual Port, Copper, PCIe-4
Host Bus Adapter/Converged Network Adapter: None
Power Supply: 1100 Watt Redundant Power Supply
Power Cords: NEMA 5-15P to C13 Wall Plug, 125 Volt, 15 AMP, 10 Feet (3m), Power Cord
BIOS Setting: Performance BIOS Setting
Rails: No Rack Rail or Cable Management Arm
Bezel: PowerEdge R815 Bezel
Internal Optical Drive: DVD ROM, SATA, Internal
I agree with Marko (not myself, other Marko:).
You should use e.g. jMeter to test capabilities (the most important metric of course being: how response time changes with number of parallel users) of your configuration and then make educated decision based on those results.
Be prepared to play with JVM memory settings in order to see how it affects overall performance.
I'd also test various application servers to see how that decision affects response time.
PS If you choose to use jMeter you should definitely make use of jMeter Plugins, which will allow you (Composite graph) to show number of parallel users and response time with server's processor, memory and network loads on the same graph.
This is a hugely open ended question, with far too many details unknown - the straw-man hardware spec is really not very useful (TL;DR)
There is only one sensible way to go about tackling this problem and that is empirically.
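The empirical approach described above can also be scripted without jMeter. A minimal sketch with the Python standard library that measures how mean response time changes with the number of parallel users; the Solr URL and query in the usage comment are placeholder assumptions, not from the original question:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def timed_request(url):
    # One simulated user: fetch the URL and return the latency in seconds.
    start = time.time()
    with urlopen(url) as resp:
        resp.read()
    return time.time() - start

def mean_latency(url, users):
    # Fire `users` concurrent requests and return the mean latency.
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(timed_request, [url] * users))
    return sum(latencies) / len(latencies)

# Example usage (placeholder URL -- point at your own Solr core):
# for users in (1, 5, 10, 25, 50):
#     print(users, mean_latency("http://localhost:8983/solr/core/select?q=*:*", users))
```

Plotting mean latency against the user count gives the same "response time vs. parallel users" curve the jMeter Composite graph would show, which is the metric to base the hardware decision on.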

A-sync Speed over Gigabit LAN Ethernet File Transfer

I have a strange problem with transferring files between 2 computers connected to each other through an ethernet cable. Both PCs have on-board gigabit ethernet ports. Aside from the different hardware, the software (especially the network settings) is configured almost the same, with Windows 7 x64 etc. Tests have been run with and without antivirus programs, with no difference. Duplex settings are on auto-negotiation. Jumbo frames (~9 KB) are enabled (I usually transfer really large files). The hard drives are not the problem, since local transfer speed within each computer is around 100 MB/s.
Now if I am on PC1, and accessing shared files on PC2: Transferring files from PC1 to PC2 is very fast, usually in the range of 60 MB/s (see results below from LAN SpeedTest). But the opposite (transferring from PC2 to PC1) is really slow, about 10 MB/s.
Speed Test 1
If I am on PC2, and accessing PC1: Transferring files from PC2 to PC1 is slow (see speed test below - it's actually a little slower than when I'm transferring files and reading the speed report from Windows), while the opposite is fast (also about 60 MB/s like in the first case)
So what causes this?
TIA
It might have to do with the hard drive write speed of PC1. The read speed is fast enough to send data to PC2, but the write speed is not fast enough to receive the data from PC2.
This may be a silly question but when you say the computers are connected via an Ethernet cable do you mean directly connected or via a switch? If directly connected, are you using a crossover Ethernet cable or straight-through cable? If using a straight-through cable, I would say that one or both of the cards are failing to auto-sense if they have that capability. Also auto-duplex and auto-speed settings have been known to fail in the past, try setting them manually as well.
