I'm new to GCP and GAE. I'm trying to deploy an Elixir/Phoenix container on GAE.
The requested amount of instances has exceeded GCE's default quota. Please see https://cloud.google.com/compute/quotas/ for more information on GCE resources
That error occurs when I deploy the container.
So I checked the project's quotas with the gcloud command, but none of them are exceeded:
quotas:
- limit: 1000.0
metric: SNAPSHOTS
usage: 0.0
- limit: 5.0
metric: NETWORKS
usage: 1.0
- limit: 100.0
metric: FIREWALLS
usage: 6.0
- limit: 100.0
metric: IMAGES
usage: 0.0
- limit: 8.0
metric: STATIC_ADDRESSES
usage: 0.0
- limit: 200.0
metric: ROUTES
usage: 1.0
- limit: 15.0
metric: FORWARDING_RULES
usage: 0.0
- limit: 50.0
metric: TARGET_POOLS
usage: 0.0
- limit: 50.0
metric: HEALTH_CHECKS
usage: 2.0
- limit: 4.0
metric: IN_USE_ADDRESSES
usage: 0.0
- limit: 50.0
metric: TARGET_INSTANCES
usage: 0.0
- limit: 10.0
metric: TARGET_HTTP_PROXIES
usage: 0.0
- limit: 10.0
metric: URL_MAPS
usage: 0.0
- limit: 5.0
metric: BACKEND_SERVICES
usage: 1.0
- limit: 100.0
metric: INSTANCE_TEMPLATES
usage: 1.0
- limit: 5.0
metric: TARGET_VPN_GATEWAYS
usage: 0.0
- limit: 10.0
metric: VPN_TUNNELS
usage: 0.0
- limit: 3.0
metric: BACKEND_BUCKETS
usage: 0.0
- limit: 10.0
metric: ROUTERS
usage: 0.0
- limit: 10.0
metric: TARGET_SSL_PROXIES
usage: 0.0
- limit: 10.0
metric: TARGET_HTTPS_PROXIES
usage: 0.0
- limit: 10.0
metric: SSL_CERTIFICATES
usage: 0.0
- limit: 100.0
metric: SUBNETWORKS
usage: 24.0
- limit: 10.0
metric: TARGET_TCP_PROXIES
usage: 0.0
- limit: 12.0
metric: CPUS_ALL_REGIONS
usage: 2.0
- limit: 0.0
metric: SECURITY_POLICIES
usage: 0.0
- limit: 0.0
metric: SECURITY_POLICY_RULES
usage: 0.0
- limit: 20.0
metric: PACKET_MIRRORINGS
usage: 0.0
- limit: 100.0
metric: NETWORK_ENDPOINT_GROUPS
usage: 0.0
- limit: 6.0
metric: INTERCONNECTS
usage: 0.0
- limit: 5000.0
metric: GLOBAL_INTERNAL_ADDRESSES
usage: 0.0
- limit: 5.0
metric: VPN_GATEWAYS
usage: 0.0
- limit: 100.0
metric: MACHINE_IMAGES
usage: 0.0
- limit: 0.0
metric: SECURITY_POLICY_CEVAL_RULES
usage: 0.0
- limit: 0.0
metric: GPUS_ALL_REGIONS
usage: 0.0
- limit: 5.0
metric: EXTERNAL_VPN_GATEWAYS
usage: 0.0
- limit: 1.0
metric: PUBLIC_ADVERTISED_PREFIXES
usage: 0.0
- limit: 10.0
metric: PUBLIC_DELEGATED_PREFIXES
usage: 0.0
- limit: 128.0
metric: STATIC_BYOIP_ADDRESSES
usage: 0.0
I have no clue how to resolve this problem. My coworker and I wondered whether App Engine's default quota is so small that even a single project hits the limit (that's not really the case, though).
Even a small hint would help!
GAE uses Google Compute Engine's quotas.
It looks like you've exceeded your GCE quota limit.
You can check GCE's VM instance quota limits on the "Quotas" page:
Quota Page for VM instance
I think you have configured more than 8 instances.
If you want to raise that limit, send a request to Google via "EDIT QUOTAS":
Send request
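If you are deploying to the App Engine flexible environment (which runs on GCE VMs), one way to stay under the regional quota is to cap scaling in app.yaml. This is only a sketch with illustrative values; the exact cap depends on your project's quotas, and keep in mind that a deployment temporarily runs the new version alongside the old one, so usage can roughly double while deploying:

```yaml
# app.yaml (flexible environment) - example scaling cap, values are illustrative
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 3  # keep old + new versions during deploy under the quota
```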
My code currently produces output like the following (using example numbers):
0.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 0.0
0.0 0.0 2.0
0.0 0.0 0.0
0.0 0.0 2.0
0.0 0.0 0.0
1.0 0.0 0.0
0.0 0.0 0.0
0.0 3.0 0.0
0.0 0.0 0.0
0.0 0.0 1.0
3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
I was hoping for a solution for how to plot this data, either as a 3D splot or as a GIF that cycles through each matrix (the actual output contains a few hundred matrices). I'm able to alter the output format if necessary. So far I've tried
do for [i=1:7] {
    plot "data.txt" matrix with image
}
as well as attempting other solutions I've found on the site, but none of them seem to be trying to do the same thing as me.
If anyone with gnuplot experience could help, that would be a huge help (I'm using a Mac, if that makes a difference).
Welcome to StackOverflow! I assume your matrices are each separated by two empty lines.
If that is the case, you can address the matrices via index (check help index).
You can find out how many blocks you have with stats (check help stats). Loop through these blocks and set the output to term gif animate (check help gif). To plot your file instead of the datablock $Data, simply replace $Data with your filename.
Script:
### plot matrices as animation
reset session
$Data <<EOD
0.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 0.0
0.0 0.0 2.0
0.0 0.0 0.0
0.0 0.0 2.0
0.0 0.0 0.0
1.0 0.0 0.0
0.0 0.0 0.0
0.0 3.0 0.0
0.0 0.0 0.0
0.0 0.0 1.0
3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
3.0 0.0 0.0
0.0 0.0 0.0
1.0 0.0 2.0
0.0 0.0 0.0
EOD
stats $Data u 0 nooutput # get the number of blocks
N = STATS_blocks
set term gif size 600,400 animate delay 30
set output "SO72250259.gif"
set size ratio -1
set cbrange [0:3]
set xrange [-0.5:2.5]
set yrange [-0.5:3.5]
do for [i=0:N-1] {
    plot $Data index i matrix w image
}
set output
### end of script
Result:
I'd greatly appreciate some help with this. I'm using a Jupyter notebook.
I have a dataframe for which I want to calculate the interrater reliability. I want to compare the rows pairwise by the value of the ID column (each ID occurs exactly twice, once per coder). The ID values represent different articles, so I don't want to compare them all together, but rather take the average of the interrater reliability of each pair (and potentially also per column).
N  ID       A    B
0 8818313 Yes Yes 1.0 1.0 1.0 1.0 1.0 1.0
1 8818313 Yes No 0.0 1.0 0.0 0.0 1.0 1.0
2 8820105 No Yes 0.0 1.0 1.0 1.0 1.0 1.0
3 8820106 No No 0.0 0.0 0.0 1.0 0.0 0.0
I've been able to find some instructions for Cohen's kappa, but not for how to compute it pairwise by value in the ID column.
Does anyone know how to go about this?
Here is how I would approach it:
from io import StringIO

import pandas as pd
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv(StringIO("""
N,ID,A,B,Nums
0, 8818313, Yes, Yes,1.0 1.0 1.0 1.0 1.0 1.0
1, 8818313, Yes, No,0.0 1.0 0.0 0.0 1.0 1.0
2, 8820105, No, Yes,0.0 1.0 1.0 1.0 1.0 1.0
3, 8820105, No, No,0.0 0.0 0.0 1.0 0.0 0.0 """))

def kappa(df):
    nums1 = [float(num) for num in df.Nums.iloc[0].split(' ') if num]
    nums2 = [float(num) for num in df.Nums.iloc[1].split(' ') if num]
    return cohen_kappa_score(nums1, nums2)

df.groupby('ID').apply(kappa)
This will generate:
ID
8818313 0.000000
8820105 0.076923
dtype: float64
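If you also want a single overall figure, you can average the per-ID scores. A small self-contained sketch along the same lines, using the same sample data as above:

```python
from io import StringIO

import pandas as pd
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv(StringIO("""
N,ID,A,B,Nums
0, 8818313, Yes, Yes,1.0 1.0 1.0 1.0 1.0 1.0
1, 8818313, Yes, No,0.0 1.0 0.0 0.0 1.0 1.0
2, 8820105, No, Yes,0.0 1.0 1.0 1.0 1.0 1.0
3, 8820105, No, No,0.0 0.0 0.0 1.0 0.0 0.0"""))

def kappa(group):
    # each ID has exactly two rows, one per coder
    a = [float(x) for x in group.Nums.iloc[0].split()]
    b = [float(x) for x in group.Nums.iloc[1].split()]
    return cohen_kappa_score(a, b)

per_id = df.groupby('ID').apply(kappa)
print(per_id.mean())   # average interrater reliability across articles
```

With the sample data this prints about 0.0385, the mean of the two per-ID kappas shown above.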
I'm working with CloudSim Plus (a simulation tool) for a project, and I need to calculate the power consumption of each virtual machine in order to implement a VM selection algorithm using a maximum power reduction policy.
The code below is a small portion of a larger program I wrote in PowerExample.java, which is available in the CloudSim Plus examples folder. I have created four virtual machines, two hosts and eight cloudlets.
Map<Double, Double> percent = v.getUtilizationHistory().getHistory();
System.out.println("Vm Id " + v.getId());
System.out.println("----------------------------------------");
for (Map.Entry<Double, Double> entry : percent.entrySet()) {
    System.out.println(entry.getKey() + " " + entry.getValue());
}
Output of the above code:
Vm Id 0
----------------------------------------
10.0 1.0
20.0 1.0
30.0 1.0
40.0 1.0
50.0 1.0
60.0 0.5
70.0 0.5
80.0 0.5
90.0 0.5
99.0 0.5
100.0 0.5
100.21 0.0
Vm Id 1
----------------------------------------
10.0 1.0
20.0 1.0
30.0 1.0
40.0 1.0
50.0 1.0
60.0 0.5
70.0 0.5
80.0 0.5
90.0 0.5
99.0 0.5
100.0 0.5
100.21 0.0
Vm Id 2
----------------------------------------
10.0 1.0
20.0 1.0
30.0 1.0
40.0 1.0
50.0 1.0
60.0 0.5
70.0 0.5
80.0 0.5
90.0 0.5
99.0 0.5
100.0 0.5
100.21 0.0
Vm Id 3
----------------------------------------
10.0 1.0
20.0 1.0
30.0 1.0
40.0 1.0
50.0 1.0
60.0 0.5
70.0 0.5
80.0 0.5
90.0 0.5
99.0 0.5
100.0 0.5
100.21 0.0
Based on the PowerExample you mentioned, you can add the following method to your simulation to print VM utilization history (make sure you update your CloudSim Plus to the latest version):
private void printVmsCpuUtilizationAndPowerConsumption() {
    for (Vm vm : vmList) {
        System.out.println("Vm " + vm.getId() + " at Host " + vm.getHost().getId() + " CPU Usage and Power Consumption");
        double vmPower; // watts
        double utilizationHistoryTimeInterval, prevTime = 0;
        final UtilizationHistory history = vm.getUtilizationHistory();
        for (final double time : history.getHistory().keySet()) {
            utilizationHistoryTimeInterval = time - prevTime;
            vmPower = history.vmPowerConsumption(time);
            final double wattsPerInterval = vmPower * utilizationHistoryTimeInterval;
            System.out.printf(
                "\tTime %8.1f | Host CPU Usage: %6.1f%% | Power Consumption: %8.0f Watts * %6.0f Secs = %10.2f Watt-Sec\n",
                time, history.vmCpuUsageFromHostCapacity(time) * 100, vmPower, utilizationHistoryTimeInterval, wattsPerInterval);
            prevTime = time;
        }
        System.out.println();
    }
}
After updating your fork, you can get the complete PowerExample here.
Unfortunately, there is no built-in feature to store RAM and BW utilization, so you have to implement that inside your simulation, as demonstrated in VmsRamAndBwUsageExample.java.
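The accounting in the loop above is just power multiplied by the interval length, summed over the utilization history. Here is a minimal, CloudSim-independent sketch of that computation, assuming a hypothetical linear power model (the idle/max wattages are made up for illustration and are not CloudSim Plus values):

```python
def vm_energy_watt_sec(history, idle_w=35.0, max_w=50.0):
    """history: (time, cpu_utilization in [0, 1]) pairs sorted by time."""
    total, prev_time = 0.0, 0.0
    for time, util in history:
        power = idle_w + (max_w - idle_w) * util   # watts under a linear model
        total += power * (time - prev_time)        # watt-seconds for this interval
        prev_time = time
    return total

# 0-10 s at 100% CPU (50 W), then 10-20 s at 50% CPU (42.5 W)
print(vm_energy_watt_sec([(10.0, 1.0), (20.0, 0.5)]))   # -> 925.0
```

Summing these per-interval watt-seconds per VM gives the quantity a maximum power reduction policy would compare across VMs.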
I have just started reading about neural networks and I have a basic question about "initializing" a Hopfield network: I am unable to understand that notion of initialization. Do we input some random numbers? Or do we input a well-defined pattern that makes the neurons settle down the first time, assuming all neurons start at state zero, with the other stable states being either 1 or -1 after the input?
Consider the neural network below, which I have taken from HeatonResearch.
I'd be glad if someone cleared this up for me.
When initialising neural networks, including recurrent Hopfield networks, it is common to initialise with random weights: in general this gives good learning times over multiple trials, and over an ensemble of runs it avoids local minima. It is usually not a good idea to start from the same weights on every run, as you will likely encounter the same local minima. With some configurations, learning can be sped up by analysing the role of each node in the functional mapping, but that is often a later step, after getting something working.
The purpose of a Hopfield network is to recall the data it has been shown, serving as content-addressable memory. It begins as a clean slate, with all weights set to zero. Training the network on a vector adjusts the weights to respond to it.
The output of a node in a Hopfield network depends on the state of every other node and the weight of the node's connection to it. States correspond to the input, with input 0 mapping to -1 and input 1 mapping to 1. So, if the network in your example had input 1010, N1 would have state 1, N2 -1, N3 1, and N4 -1.
Training the network means adding the outer product of the state vector with itself to the weight matrix, then setting the diagonal to zero. So, to train on 1010, we would add [1 -1 1 -1]·[1 -1 1 -1]ᵀ to the weight matrix.
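That training rule (adding the pattern's self product [1 -1 1 -1]·[1 -1 1 -1]ᵀ and zeroing the diagonal) can be sketched in a few lines of NumPy; this is a minimal illustration of the idea, not any particular library's API:

```python
import numpy as np

# State vector for input bits 1010 (0 -> -1, 1 -> 1)
p = np.array([1, -1, 1, -1])

# Hebbian training: add the outer product, then zero the diagonal
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0)

# Recall: start from a corrupted state and update all nodes at once
noisy = np.array([1, 1, 1, -1])        # second bit flipped
recalled = np.sign(W @ noisy)
print(recalled)                         # -> [ 1. -1.  1. -1.]
```

Even from the corrupted state, one update step settles back to the stored pattern, which is exactly the content-addressable recall described above.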
You can check out this repository --> Hopfield Network
There you have an example of testing a pattern after training the network offline. This is the test:
@Test
public void HopfieldTest() {
    double[] p1 = new double[]{1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0};
    double[] p2 = new double[]{1.0, 1.0, 1.0, -1.0, 1.0, -1.0, -1.0, 1.0, -1.0};
    double[] p3 = new double[]{1.0, 1.0, -1.0, -1.0, 1.0, -1.0, -1.0, 1.0, -1.0};
    ArrayList<double[]> patterns = new ArrayList<>();
    patterns.add(p1);
    patterns.add(p2);
    Hopfield h = new Hopfield(9, new StepFunction());
    h.train(patterns); // train and load the weight matrix
    double[] result = h.test(p3); // test a pattern
    System.out.println("\nConnections of Network: " + h.connections() + "\n"); // show neural connections
    System.out.println("Good recuperation capacity of samples: " + Hopfield.goodRecuperation(h.getWeights().length) + "\n");
    System.out.println("Perfect recuperation capacity of samples: " + Hopfield.perfectRacuperation(h.getWeights().length) + "\n");
    System.out.println("Energy: " + h.energy(result));
    System.out.println("Weight Matrix");
    Matrix.showMatrix(h.getWeights());
    System.out.println("\nPattern result of test");
    Matrix.showVector(result);
    h.showAuxVector();
}
And after running the test you can see:
Running HopfieldTest
Connections of Network: 72
Good recuperation capacity of samples: 1
Perfect recuperation capacity of samples: 1
Energy: -32.0
Weight Matrix
0.0 0.0 2.0 -2.0 2.0 -2.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 -2.0 2.0 -2.0
2.0 0.0 0.0 -2.0 2.0 -2.0 0.0 0.0 0.0
-2.0 0.0 -2.0 0.0 -2.0 2.0 0.0 0.0 0.0
2.0 0.0 2.0 -2.0 0.0 -2.0 0.0 0.0 0.0
-2.0 0.0 -2.0 2.0 -2.0 0.0 0.0 0.0 0.0
0.0 -2.0 0.0 0.0 0.0 0.0 0.0 -2.0 2.0
0.0 2.0 0.0 0.0 0.0 0.0 -2.0 0.0 -2.0
0.0 -2.0 0.0 0.0 0.0 0.0 2.0 -2.0 0.0
Pattern result of test
1.0 1.0 1.0 -1.0 1.0 -1.0 -1.0 1.0 -1.0
-------------------------
The auxiliar vector is empty
I hope you find it useful. Regards
Do you know of any applications besides pattern recognition for which implementing a Hopfield neural network model would be worthwhile?
Recurrent neural networks (of which Hopfield nets are a special type) are used for several tasks in sequence learning:
Sequence Prediction (Map a history of stock values to the expected value in the next timestep)
Sequence classification (Map each complete audio snippet to a speaker)
Sequence labelling (Map an audio snippet to the sentence spoken)
Non-markovian reinforcement learning (e.g. tasks that require deep memory as the T-Maze benchmark)
I am not sure what you mean by "pattern recognition" exactly, since it is basically a whole field into which every task that neural networks can be used for fits.
You can use Hopfield networks for optimization problems as well.