Can anyone explain the real difference between these two methods?
vm.getTotalUtilizationOfCpu(CloudSim.clock());
and
cloudlet.getUtilizationOfCpu(CloudSim.clock());
Thanks in advance
Here's the difference.
1) vm.getTotalUtilizationOfCpu(CloudSim.clock());
getTotalUtilizationOfCpu is a method of the Vm class, so you call it on a Vm instance.
If you look at the declaration of this method in the source code:
/**
 * Gets the current requested mips.
 *
 * @return the current mips
 */
public abstract double getTotalUtilizationOfCpu(double time);
It returns the CPU utilization in the form of MIPS.
2) cloudlet.getUtilizationOfCpu(CloudSim.clock());
getUtilizationOfCpu is a method of the Cloudlet class, so you call it on a Cloudlet instance.
If you look at the implementation of this method in the source code:
/**
 * Gets the utilization percentage of cpu.
 *
 * @param time the time
 * @return the utilization of cpu
 */
public double getUtilizationOfCpu(final double time) {
    return getUtilizationModelCpu().getUtilization(time);
}
It returns the CPU utilization as a percentage (a value between 0 and 1), as defined by the cloudlet's UtilizationModel.
Related
Respected researchers, I want to calculate the power consumed by the virtual machines of a physical server in a cloud datacenter.
Please help me; I will be very thankful.
/**
 * The cost of each byte of bandwidth (bw) consumed.
 */
protected double costPerBw;

/**
 * The total bandwidth (bw) cost for transferring the cloudlet by the
 * network, according to the {@link #cloudletFileSize}.
 */
protected double accumulatedBwCost;

// Utilization

/**
 * The utilization model that defines how the cloudlet will use the VM's
 * CPU.
 */
This segment is taken from Cloudlet.java, around line 212. It may be helpful.
Or, as you set each VM's properties, you can calculate the power consumption yourself (a rough estimate is sketched below the VM description).
//VM description
int vmid = 0;
int mips = 250;
long size = 10000; //image size (MB)
int ram = 2048; //vm memory (MB)
long bw = 1000;
int pesNumber = 1; //number of cpus
String vmm = "Xen"; //VMM name
This segment is taken from CloudSimExample3.java, around line 64.
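For a rough estimate (this is a generic linear power model, not a CloudSim API call), you can combine a VM's CPU utilization with the host's idle and maximum power draw:
power = idle_power + (max_power - idle_power) * cpu_utilization
For example, with an assumed 70 W idle and 250 W maximum rating, a host at 40% CPU utilization draws roughly 70 + (250 - 70) * 0.4 = 142 W.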
CloudSim Plus has a built-in feature to compute a VM's power consumption.
The method below shows how to use this feature. You can get the complete example here.
private void printVmsCpuUtilizationAndPowerConsumption() {
    for (Vm vm : vmList) {
        System.out.println("Vm " + vm.getId() + " at Host " + vm.getHost().getId() + " CPU Usage and Power Consumption");
        double vmPower; //watt-sec
        double utilizationHistoryTimeInterval, prevTime = 0;
        final UtilizationHistory history = vm.getUtilizationHistory();
        for (final double time : history.getHistory().keySet()) {
            utilizationHistoryTimeInterval = time - prevTime;
            vmPower = history.vmPowerConsumption(time);
            final double wattsPerInterval = vmPower * utilizationHistoryTimeInterval;
            System.out.printf(
                "\tTime %8.1f | Host CPU Usage: %6.1f%% | Power Consumption: %8.0f Watt-Sec * %6.0f Secs = %10.2f Watt-Sec\n",
                time, history.vmCpuUsageFromHostCapacity(time) * 100, vmPower, utilizationHistoryTimeInterval, wattsPerInterval);
            prevTime = time;
        }
        System.out.println();
    }
}
I am doing some computer vision work using CUDA. The following code takes about 20 seconds to complete.
__global__ void nlmcuda_kernel(float* fpOMul, /*other input args*/){
    float fpODenoised[75];

    /*Do awesome stuff to compute fpODenoised*/

    //inside nested loops: (This is the statement that is the bottleneck in the code.)
    fpOMul[ii * iwl * iwxh + iindex * iwxh + il] = fpODenoised[ii * iwl + iindex];
}
If I replace that statement with
fpOMul[ii * iwl * iwxh + iindex * iwxh + il] = 2.0f;
the code takes only a couple of seconds to complete.
Why is that statement so slow, and how can I make it run faster?
When you make that change, the compiler can see that all of your fpODenoised computation is no longer needed and optimizes it out. The statement you modified is not the direct cause of the performance difference. You can verify this by looking at the PTX or SASS code in each case (for example, compile with nvcc -ptx, or disassemble the binary with cuobjdump -sass).
For FPS calculation, I use some code I found on the web and it's working well. However, I don't really understand it. Here's the function I use:
void computeFPS()
{
    numberOfFramesSinceLastComputation++;
    currentTime = glutGet(GLUT_ELAPSED_TIME);
    if (currentTime - timeSinceLastFPSComputation > 1000)
    {
        char fps[256];
        sprintf(fps, "FPS: %.2f", numberOfFramesSinceLastComputation * 1000.0 / (currentTime - timeSinceLastFPSComputation));
        glutSetWindowTitle(fps);
        timeSinceLastFPSComputation = currentTime;
        numberOfFramesSinceLastComputation = 0;
    }
}
My question is: how is the value that is calculated in the sprintf call stored in the fps array, since I never explicitly assign it?
This is not a question about OpenGL, but about the C standard library. Reading the reference documentation for s(n)printf helps:
man s(n)printf: http://linux.die.net/man/3/sprintf
In short, snprintf takes a pointer to a user-supplied buffer and a format string, and fills the buffer according to the format string and the values given in the additional parameters.
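If it helps, here is a minimal, self-contained sketch (the buffer size and FPS value are made up) showing that the call writes its result directly into the array you pass in:
#include <stdio.h>

int main(void)
{
    char fps[256];
    double value = 59.94;               /* made-up FPS value */

    /* snprintf writes the formatted characters into fps; no explicit
       assignment is needed. The size argument prevents overflow. */
    snprintf(fps, sizeof(fps), "FPS: %.2f", value);

    puts(fps);                          /* prints: FPS: 59.94 */
    return 0;
}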
Here's my suggestion: if you have to ask about things like that, don't tackle OpenGL yet. You need to be fluent in the use of pointers and buffers when it comes to supplying buffer object data and shader sources. If you plan on using C for this, get a book on C and thoroughly learn that first. Unlike C++, you can actually learn C to a good degree over the course of a few months.
This function is supposed to be called at every redraw of your main loop (i.e. for every frame). What it does is increment a frame counter and get the time at which the current frame is being displayed. Once per second (1000 ms) it reads that counter, displays the resulting frame rate as the window title, and resets the counter to 0.
/**
 * This function has to be called at every frame redraw.
 * It will update the window title about once per second with the FPS value.
 */
void computeFPS()
{
    // increase the number of frames drawn since the last computation
    numberOfFramesSinceLastComputation++;

    // get the current time in order to check whether one second has passed
    currentTime = glutGet(GLUT_ELAPSED_TIME);

    // the code in this if block is executed about once per second (1000 ms)
    if (currentTime - timeSinceLastFPSComputation > 1000)
    {
        // build a string containing the frames-per-second value in fps
        char fps[256];
        sprintf(fps, "FPS: %.2f", numberOfFramesSinceLastComputation * 1000.0 / (currentTime - timeSinceLastFPSComputation));

        // use fps to set the window title
        glutSetWindowTitle(fps);

        // save the current time in order to know when the next second has elapsed
        timeSinceLastFPSComputation = currentTime;

        // reset the frame counter
        numberOfFramesSinceLastComputation = 0;
    }
}
I am using an Inspecta-5 frame grabber with 1 GB of memory, together with a high-speed camera ("EoSens Extended Mode, 640x480 @ 1869 fps, 10x8 taps"). I am new to coding for grabbers and to controlling the camera. The Inspecta-5 grabber gives me different options, like changing the requested number of frames from the camera to the grabber and from the grabber to main memory. I can also use the camera link to send signals to the camera and set different exposure times.
But I'm not really sure what I should use to obtain a rate of 1000 frames per second, and how I can test it.
According to the software manual, if I set the following options in the camera profile:
ReqFrame=1000
GReqFrame=1000
it means: transfer 1000 frames from the camera to the grabber, and transfer 1000 frames from the grabber to main memory, respectively.
But does that mean I get 1000 fps?
What would be the option for setting the fps to 1000? And how can I test that I really grabbed 1000 frames in one second?
Here is a link to the grabber software manual: mikrotron.de/index.php?de_downloadfiles — you can find the software manual under the "Inspecta Level1 API for WinNT/2000/XP" section. The file name is "i5-level1-sw_manual_e.pdf", just in case anybody needs it.
THANK YOU
At 1,000 fps you don't have much time to capture a frame, let alone save one: each frame has a window of only 1 ms. Use the following example and plug in your estimated FPS and your capture and save latencies. At 1,000 fps you can have a total of about 0.8 ms of latency per frame (why not 0.99999 ms? I don't know; something to do with an unattainable theoretical maximum, or with my old PC).
public static void main(String args[]) throws Exception {
    int fps = 1000;
    float simulationCaptureNowMS = .40f;
    float simulationSaveNowNowMS = .40f;
    final long simulationCaptureNowNS = (long) (simulationCaptureNowMS * 1000000.0f);
    final long simulationSaveNowNowNS = (long) (simulationSaveNowNowMS * 1000000.0f);
    final long windowNS = (1000 * 1000000) / fps;
    final long movieDurationSEC = 2;
    long dropDeadTimeMS = System.currentTimeMillis() + (1000 * movieDurationSEC);
    while (System.currentTimeMillis() < dropDeadTimeMS) {
        long startNS = System.nanoTime();
        actionSimulator(simulationCaptureNowNS);
        actionSimulator(simulationSaveNowNowNS);
        long endNS = System.nanoTime();
        long sleepNS = windowNS - (endNS - startNS);
        if (sleepNS < 0) {
            System.out.println("Data loss. Try again.");
            System.exit(0);
        }
        actionSimulator(sleepNS);
    }
    System.out.println("No data loss at " + fps + "fps with interframe latency of "
            + (simulationCaptureNowMS + simulationSaveNowNowMS) + "ms");
}

private static void actionSimulator(long ns) throws Exception {
    long d = System.nanoTime() + ns;
    while (System.nanoTime() < d) Thread.yield();
}
How do I calculate network utilization, for both transmit and receive, using either C or a shell script?
My system is an embedded Linux. My current method is to record the bytes received (b1), wait 1 second, then record them again (b2). Then, knowing the link speed, I calculate the percentage of the receive bandwidth used:
receive utilization = (((b2 - b1)*8)/link_speed)*100
Is there a better method?
Check out open-source programs that do something similar.
My search turned up a little tool called vnstat.
It tries to query the /proc file system, if available, and uses getifaddrs for systems that do not have it. It then fetches the correct AF_LINK interface, gets the corresponding if_data struct, and reads out the transmitted and received bytes, like this:
ifinfo.rx = ifd->ifi_ibytes;
ifinfo.tx = ifd->ifi_obytes;
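A rough sketch of that approach (assuming a BSD-style system where getifaddrs() exposes AF_LINK entries carrying a struct if_data; on Linux you would read /proc/net/dev or sysfs instead):
#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <ifaddrs.h>
#include <net/if.h>

/* Returns 0 on success and fills rx/tx with the interface byte counters. */
static int link_bytes(const char *ifname, uint64_t *rx, uint64_t *tx)
{
    struct ifaddrs *ifap, *ifa;

    if (getifaddrs(&ifap) == -1)
        return -1;

    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr != NULL && ifa->ifa_addr->sa_family == AF_LINK &&
            strcmp(ifa->ifa_name, ifname) == 0 && ifa->ifa_data != NULL) {
            const struct if_data *ifd = ifa->ifa_data;
            *rx = ifd->ifi_ibytes;  /* bytes received    */
            *tx = ifd->ifi_obytes;  /* bytes transmitted */
            freeifaddrs(ifap);
            return 0;
        }
    }
    freeifaddrs(ifap);
    return -1;
}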
Also remember that sleep() might sleep longer than exactly 1 second, so you should probably use a high-resolution (wall clock) timer in your equation. Or you could delve into the interface functions and structures to see if you find anything appropriate for your task.
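For example, a minimal sketch using the POSIX clock_gettime() with CLOCK_MONOTONIC to measure how long the interval really was (availability depends on your libc; older glibc needs -lrt):
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec t1, t2;

    clock_gettime(CLOCK_MONOTONIC, &t1);
    sleep(1);                                   /* may oversleep */
    clock_gettime(CLOCK_MONOTONIC, &t2);

    /* real elapsed time in seconds; use this instead of a hard-coded 1.0 */
    double elapsed = (t2.tv_sec - t1.tv_sec) + (t2.tv_nsec - t1.tv_nsec) / 1e9;
    printf("actual interval: %.6f s\n", elapsed);
    return 0;
}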
Thanks to 'csl' for pointing me in the direction of vnstat. Using the vnstat example, here is how I calculate network utilization:
/* needs <stdint.h>, <math.h> (rintf) and <unistd.h> (sleep) */
#define FP32 4294967295ULL
#define FP64 18446744073709551615ULL
/* handle wrap-around of 32-bit and 64-bit counters */
#define COUNTERCALC(a, b) ((b) > (a) ? (b) - (a) : ((a) > FP32 ? FP64 - (a) + (b) : FP32 - (a) + (b)))

int sample_time = 2;    /* seconds */
int link_speed = 100;   /* Mbit/s */
uint64_t rx, rx1, rx2;
float rate, percent;

/*
 * Either read:
 *   '/proc/net/dev'
 * or
 *   '/sys/class/net/%s/statistics/rx_bytes'
 * for the bytes-received counter.
 */
rx1 = read_bytes_received("eth0");
sleep(sample_time); /* wait */
rx2 = read_bytes_received("eth0");

/* convert the byte count to MB first, then to Mbit/s */
rx = rintf(COUNTERCALC(rx1, rx2) / (float)1048576);
rate = (rx * 8) / (float)sample_time;
percent = (rate / (float)link_speed) * 100;
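In case it helps, here is a rough sketch of one possible implementation of the read_bytes_received() helper used above (its body is not shown in the original), reading the sysfs counter mentioned in the comment:
#include <stdio.h>
#include <inttypes.h>

/* Read the rx_bytes counter for one interface from sysfs.
   Returns 0 if the file cannot be read; parse /proc/net/dev instead
   if your kernel does not expose these files. */
static uint64_t read_bytes_received(const char *iface)
{
    char path[128];
    uint64_t bytes = 0;
    FILE *f;

    snprintf(path, sizeof(path), "/sys/class/net/%s/statistics/rx_bytes", iface);
    f = fopen(path, "r");
    if (f != NULL) {
        if (fscanf(f, "%" SCNu64, &bytes) != 1)
            bytes = 0;
        fclose(f);
    }
    return bytes;
}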