Can anybody tell me why this method, which is called very often, is leaking memory?
If I take a look at the iOS Allocations / VM tools, there are no leaks, but if I look at a report_memory function that I found here on Stack Overflow, I can see that the resident size is growing by 1 MB every 2 seconds. If I don't call this method, the resident size grows by only 1 MB every 40 seconds. At some point I receive a "Did receive memory warning" log, but I can't figure out why this is happening. Resident size, dirty size, allocations ... everything looks fine.
path2 is a class variable.
- (void)drawPath:(float)winkel path:(UIBezierPath *)mpath toPoint:(CGPoint)pt {
    path2 = [UIBezierPath bezierPathWithCGPath:mpath.CGPath];
    box = CGPathGetPathBoundingBox(path2.CGPath);

    CGAffineTransform translate = CGAffineTransformMakeTranslation(-1 * (box.origin.x + (box.size.width / 2)), -1 * (box.origin.y + (box.size.height / 2)));
    [path2 applyTransform:translate];

    CGAffineTransform rotate = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(winkel));
    [path2 applyTransform:rotate];

    translate = CGAffineTransformMakeTranslation((box.origin.x + (box.size.width / 2)), (box.origin.y + (box.size.height / 2)));
    [path2 applyTransform:translate];

    translate = CGAffineTransformMakeTranslation(pt.x - (box.size.width / 2), pt.y - (box.size.height / 2));
    [path2 applyTransform:translate];

    [path2 fill];
}
I think the problem is CGAffineTransformMakeTranslation/applyTransform, but I can't figure out why this method is leaking.
Every Core Foundation object returned by a function with Create or Copy in its name must be explicitly released using CFRelease.
I was trying to do out-of-sample prediction using the gof function from the btergm package. When calculating the AUC value of a precision-recall curve on the test set, I get a value of 1.012909, which seems theoretically impossible.
How can I interpret this result, or am I doing something wrong? Thank you! Here is my code:
network <- readRDS(url("https://www.dropbox.com/s/zxhqxa8h9awkzpd/network.rds?dl=1"))
model.3 <- btergm(network[1:9] ~ edges + gwodegree(1, fixed = TRUE) + transitiveties + ctriple +
                    gwidegree(1, fixed = TRUE) + mutual + gwesp(1.5, fixed = TRUE) + ttriple +
                    memory(type = "stability") + delrecip, R = 1000)
gof.3 <- gof(model.3, nsim = 1000, target = network[[10]],
             formula = network[9:10] ~ edges + gwodegree(1, fixed = TRUE) + transitiveties + ctriple +
               gwidegree(1, fixed = TRUE) + mutual + gwesp(1.5, fixed = TRUE) + ttriple +
               memory(type = "stability") + delrecip,
             coef = coef(model.3),
             statistics = rocpr)
gof.3[[1]]$auc.pr
I am running a logistic generalized linear mixed model and would like to plot my effects together with confidence intervals.
I use the lme4 package to fit my model:
mod <- glmer(cbind(positive, negative) ~ F1 * F2 * F3 + V1 + F1 * I(V1^2) + V2 + F1 * I(V2^2) + V3 + I(V3^2) + V4 + I(V4^2) + F4 + (1|OLRE) + (1|ind), family = binomial, data = try, na.action = na.omit, control = glmerControl(optimizer = "optimx", calc.derivs = FALSE, optCtrl = list(method = "nlminb", starttests = FALSE, kkt = FALSE)))
OLRE means that I use an observation-level random effect in order to overcome overdispersion.
In case you are wondering about convergence warnings: I went through the lme4 troubleshooting protocol, and the fit should be fine.
In order to get effect plots with confidence intervals, I tried ggpredict (from ggeffects) as well as plot_model from sjPlot, e.g.:
plot_model(mod, type = "pred", terms = c("F1", "F2", "F3"), ci.lvl = 0.95)
I also tried to go through this protocol:
https://rpubs.com/hughes/84484
intdata <- expand.grid(
  positive = c(0, 1),
  negative = c(0, 1),
  F1 = as.factor(c(0, 1)),
  F2 = as.factor(c(1, 2, 3)),
  F3 = as.factor(c(1, 2, 3, 4)),
  V1 = as.numeric(median(try$V1)),
  F4 = as.factor(c(30, 31)),
  ind = as.factor(c(68)),
  OLRE = as.factor(c(2450)),
  V2 = as.numeric(median(try$V2)),
  V3 = as.numeric(median(try$V3)),
  V4 = as.numeric(median(try$V4))
)
#conditional variances
cV <- ranef(mod, condVar = TRUE)
ranvar <- attr(cV[[1]], "postVar")
sqrt(diag(ranvar[,,1]))
mm <- model.matrix(terms(mod), data = intdata,
                   contrasts.arg = lapply(intdata[, c(3:5, 7:9)], contrasts, contrasts = FALSE))
predFun <- function(.) mm %*% fixef(.)
bb <- bootMer(mod, FUN = predFun, nsim = 3)
This fails with some warnings about the contrasts and the error:
Error in mm %*% fixef(.) : non-conformable arguments
Nothing has worked so far, so I would really appreciate some help.
Here is the link to the data: https://drive.google.com/file/d/1qZaJBbM1ggxwPnZ9bsTCXL7_BnXlvfkW/view?usp=sharing
I feel I must be missing something obvious in struggling to get a positive control for logistic regression going in TensorFlow Probability.
I've modified the logistic regression example here and created positive-control features and labels. I struggle to achieve accuracy over 60%, yet this is an easy problem for a 'vanilla' Keras model (100% accuracy). What am I missing? I tried different layers, activations, etc. With this way of setting up the model, is posterior updating actually being performed? Do I need to specify an interceptor object? Many thanks.
### Added positive control
nSamples = 80
features1 = np.float32(np.hstack((np.reshape(np.ones(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features2 = np.float32(np.hstack((np.reshape(np.zeros(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features = np.vstack((features1, features2))
labels = np.concatenate((np.zeros(40), np.ones(40)))
featuresInt, labelsInt = build_input_pipeline(features, labels, 10)
###
#w_true, b_true, features, labels = toy_logistic_data(FLAGS.num_examples, 2)
#featuresInt, labelsInt = build_input_pipeline(features, labels, FLAGS.batch_size)
with tf.name_scope("logistic_regression", values=[featuresInt]):
layer = tfp.layers.DenseFlipout(
units=1,
activation=None,
kernel_posterior_fn=tfp.layers.default_mean_field_normal_fn(),
bias_posterior_fn=tfp.layers.default_mean_field_normal_fn())
logits = layer(featuresInt)
labels_distribution = tfd.Bernoulli(logits=logits)
neg_log_likelihood = -tf.reduce_mean(labels_distribution.log_prob(labelsInt))
kl = sum(layer.losses)
elbo_loss = neg_log_likelihood + kl
predictions = tf.cast(logits > 0, dtype=tf.int32)
accuracy, accuracy_update_op = tf.metrics.accuracy(
labels=labelsInt, predictions=predictions)
with tf.name_scope("train"):
optimizer = tf.train.AdamOptimizer(learning_rate=FLAGS.learning_rate)
train_op = optimizer.minimize(elbo_loss)
init_op = tf.group(tf.global_variables_initializer(),
tf.local_variables_initializer())
with tf.Session() as sess:
    sess.run(init_op)

    # Fit the model to data.
    for step in range(FLAGS.max_steps):
        _ = sess.run([train_op, accuracy_update_op])
        if step % 100 == 0:
            loss_value, accuracy_value = sess.run([elbo_loss, accuracy])
            print("Step: {:>3d} Loss: {:.3f} Accuracy: {:.3f}".format(
                step, loss_value, accuracy_value))
### Check with basic Keras
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1)])
optimizer = tf.train.AdamOptimizer(5e-2)
kerasModel.compile(optimizer=optimizer, loss='binary_crossentropy',
                   metrics=['accuracy'])
kerasModel.fit(features, labels, epochs=50)  # 100% accuracy
Compared to the GitHub example, you forgot to divide by the number of examples when defining the KL divergence:
kl = sum(layer.losses) / FLAGS.num_examples
When I make this change in your code, I quickly get to an accuracy of 99.9% on your toy data.
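For context, a minimal sketch of how the scaled KL term slots into your loss, reusing the names from your code (with nSamples = 80 standing in for FLAGS.num_examples on this toy data):
# -ELBO = average negative log-likelihood + KL(posterior || prior) / N.
# Dividing the KL term by the number of training examples keeps it on the
# same per-example scale as the likelihood term, so it no longer dominates
# the loss.
neg_log_likelihood = -tf.reduce_mean(labels_distribution.log_prob(labelsInt))
kl = sum(layer.losses) / float(nSamples)  # nSamples = 80 here
elbo_loss = neg_log_likelihood + kl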
Additionally, the output layer of your Keras model should actually use a sigmoid activation for this problem (binary classification):
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid')])
It's a toy problem, but you will notice that the model gets to 100% accuracy faster with a sigmoid activation.
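Alternatively (an equivalent fix, assuming a TensorFlow version new enough that tf.keras.losses.BinaryCrossentropy accepts from_logits), you can keep the linear output layer and tell the loss to treat the model output as logits:
# Hypothetical alternative to adding a sigmoid activation: keep Dense(1)
# producing raw logits and let the loss apply the sigmoid internally.
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1)])
kerasModel.compile(optimizer=tf.train.AdamOptimizer(5e-2),
                   loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
kerasModel.fit(features, labels, epochs=50)
# Class predictions are then obtained by thresholding the logits at 0.
Either way, the key point is that 'binary_crossentropy' with its default settings expects probabilities in [0, 1], not raw logits.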
This is my source code, and I want to reduce the error. When I run this code there is a large difference between the trained output and the target. I have tried different approaches but nothing worked, so please help me reduce it.
a=[31 9333 2000;31 9500 1500;31 9700 2300;31 9700 2320;31 9120 2230;31 9830 2420;31 9300 2900;31 9400 2500]'
g=[35000;23000;3443;2343;1244;9483;4638;4739]'
h=[31 9333 2000]'
inputs = a;
targets = g;
% Create a Fitting Network
hiddenLayerSize = 1;
net = fitnet(hiddenLayerSize);
% Choose Input and Output Pre/Post-Processing Functions
% For a list of all processing functions type: help nnprocess
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% For help on training function 'trainlm' type: help trainlm
% For a list of all training functions type: help nntrain
net.trainFcn = 'trainlm'; % Levenberg-Marquardt
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse'; % Mean squared error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
                'plotregression','plotconfusion','plotfit','plotroc'};
% Train the Network
[net,tr] = train(net,inputs,targets);
plottrainstate(tr)
% Test the Network
outputs = net(inputs)
errors = gsubtract(targets,outputs)
fprintf('errors = %4.3f\t',errors);
performance = perform(net,targets,outputs);
% Recalculate Training, Validation and Test Performance
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs);
valPerformance = perform(net,valTargets,outputs);
testPerformance = perform(net,testTargets,outputs);
% View the Network
view(net);
sc=sim(net,h)
I think you need to be more specific.
What is the performance like on your training set and on your test set?
Have you tried doing any regularization?
I am very new to programming in Vizard, but I am a pretty strong JavaScript programmer. I have an art gallery and I want a man to walk from picture to picture. He needs to wait for a few seconds at each picture.
So I have a number of walking sequences and I'm trying to use the 'ontimer' function to call the next walk sequence and also add a few seconds of delay.
It works perfectly the first time it is called, in dostuff(), but doesn't work at all in dostuff2(). I assume I am using 'ontimer' incorrectly; could anyone explain where I am going wrong?
Any help or advice would be hugely appreciated!
walkOne = vizact.walkto(4, -0.5, 4)
turnOne = vizact.turn(60)
walking_sequence = vizact.sequence([walkOne, turnOne])

walkTwo = vizact.walkto(5.350, -0.5, -2)
turnTwo = vizact.turn(60)
walking_sequenceTwo = vizact.sequence([walkTwo, turnTwo])
def dostuff():
    male.addAction(walking_sequence)
    vizact.ontimer(10, dostuff2)

def dostuff2():
    male.addAction(walking_sequenceTwo)
    print(vizact.ontimer)
    vizact.ontimer(20, dostuff)
Cracked it! I got rid of ontimer completely and used vizact.waittime inside the sequences instead; it seems to work OK.
walkOne = vizact.walkto(4, -0.5, 4)
turnOne = vizact.turn(60)
walking_sequence = vizact.sequence(walkOne, turnOne, vizact.waittime(10))
walkTwo = vizact.walkto(5.350, -0.5, -2)
turnTwo = vizact.turn(60)
walking_sequenceTwo = vizact.sequence(walkTwo, turnTwo, vizact.waittime(10))
def dostuff():
    male.addAction(walking_sequence)
    dostuff2()

def dostuff2():
    male.addAction(walking_sequenceTwo)
    dostuff3()
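dostuff3() is referenced above but not shown; purely as an illustration, here is a sketch of how the chain could continue, with hypothetical coordinates for a third picture:
# Hypothetical third stop: the coordinates below are placeholders, not taken
# from the original scene. The pattern mirrors dostuff()/dostuff2(): queue a
# walk-turn-wait sequence for the next picture.
walkThree = vizact.walkto(2.0, -0.5, 1.0)
turnThree = vizact.turn(60)
walking_sequenceThree = vizact.sequence(walkThree, turnThree, vizact.waittime(10))

def dostuff3():
    male.addAction(walking_sequenceThree)
    # further stops could be chained here in the same way
As far as I can tell, addAction only queues each sequence, so the vizact.waittime(10) inside each sequence is what produces the pause at each picture.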