Lasso via GridSearchCV: ConvergenceWarning: Objective did not converge

I am trying to find the optimal parameter of a Lasso regression:
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, RepeatedKFold

alpha_tune = {'alpha': np.linspace(start=0.000005, stop=0.02, num=200)}
model_tuner = Lasso(fit_intercept=True)
cross_validation = RepeatedKFold(n_splits=5, n_repeats=3, random_state=1)
model = GridSearchCV(estimator=model_tuner, param_grid=alpha_tune, cv=cross_validation,
                     scoring='neg_mean_squared_error', n_jobs=-1).fit(features_train_std, labels_train)
print(model.best_params_['alpha'])
My variables are demeaned and standardized, but I get the following warning:
ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations, check the scale of the features or consider increasing regularisation. Duality gap: 1.279e+02, tolerance: 6.395e-01
I know this warning has been reported several times, but none of the previous posts explain how to resolve it. In my case, the warning is triggered by the fact that the lower bound 0.000005 is very small, but this is a reasonable value, as indicated by solving the tuning problem via information criteria:
from sklearn.linear_model import LassoLarsIC

lasso_aic = LassoLarsIC(criterion='aic', fit_intercept=True, eps=1e-16, normalize=False)
lasso_aic.fit(X_train_std, y_train)
print('Lambda: {:.8f}'.format(lasso_aic.alpha_))

lasso_bic = LassoLarsIC(criterion='bic', fit_intercept=True, eps=1e-16, normalize=False)
lasso_bic.fit(X_train_std, y_train)
print('Lambda: {:.8f}'.format(lasso_bic.alpha_))
AIC and BIC give values of around 0.000008. How can this warning be resolved?

Increasing Lasso's max_iter parameter (the default is 1000) will do the job:
alpha_tune = {'alpha': np.linspace(start=0.000005, stop=0.02, num=200)}
model_tuner = Lasso(fit_intercept=True, max_iter=5000)
cross_validation = RepeatedKFold(n_splits=5, n_repeats=3, random_state=1)
model = GridSearchCV(estimator=model_tuner, param_grid=alpha_tune, cv=cross_validation,
                     scoring='neg_mean_squared_error', n_jobs=-1).fit(features_train_std, labels_train)
print(model.best_params_['alpha'])
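If the warning still appears for the smallest alpha values, another knob is the solver tolerance. A minimal sketch, assuming a slightly looser tol is acceptable (scikit-learn's default is 1e-4; the value below is illustrative, not tuned):

# Loosen the convergence tolerance in addition to raising max_iter.
# tol=1e-3 is an illustrative value; it trades solver precision for convergence.
model_tuner = Lasso(fit_intercept=True, max_iter=5000, tol=1e-3)

The grid search then proceeds exactly as above; the looser tolerance mainly affects the smallest alphas, where the coordinate-descent solver converges slowest.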

Related

lapply calling .csv for changes in a parameter

Good afternoon
I am currently trying to pull some data from pushshift but I am maxing out at 100 posts. Below is the code for pulling one day that works great.
testdata1 <- getPushshiftData(postType = "submission", size = 1000, before = "1546300800",
                              after = "1546200800", subreddit = "mysubreddit", nest_level = 1)
I have a list of Unix timestamps for the beginning and end of each day for a month. What I would like is the syntax to substitute each day's values for "after" and "before", and to append each day's results to the end of the pulled data. Even if it placed the data in a bunch of separate smaller datasets, I could work with it.
Here is my (feeble) attempt. "links" is the data frame with the UTCs
mydata <- lapply(1:30, function(x)
  getPushshiftData(postType = "submission", size = 1000,
                   after = links$utcstart[,x], before = links$utcendstart[,x],
                   subreddit = "mysubreddit", nest_level = 1))
Here is the error message I get: Error in links$utcstart[, x] : incorrect number of dimensions
I've also tried without the "function (x)" argument and get the following message:
Error in ifelse(is.null(after), "", sprintf("&after=%s", after)) :
object 'x' not found
Can anyone help with this?

CellVariable * DiffusionTerm in FiPy

I am trying to solve the following coupled PDEs in FiPy. I tried the following:
eq1 = (DiffusionTerm(coeff=1, var=f) - f * DiffusionTerm(coeff=1, var=phi)
       + f - f**3 == 0)
eq2 = (2 * DiffusionTerm(coeff=f, var=phi) + f * DiffusionTerm(coeff=1, var=phi)
       == 0)
eq = eq1 & eq2
eq.solve()
but it does not like f*DiffusionTerm(coeff=1, var=phi), and I get the error TermMultiplyError: Must multiply terms by int or float. Is there a way to implement a cell variable times a diffusion term?
Neither of the following will work in FiPy,
from fipy import CellVariable, DiffusionTerm, Grid1D, ImplicitSourceTerm

mesh = Grid1D(nx=10)
var = CellVariable(mesh=mesh)
# eqn = var * DiffusionTerm(coeff=1)
eqn = ImplicitSourceTerm(coeff=DiffusionTerm(coeff=1))
eqn.solve(var)
They both simply don't make sense with respect to the discretization in the finite volume method. Regardless, you can use the identity

f ∇²f = ∇·(f ∇f) − |∇f|²

to rewrite the term of interest. Basically, instead of using
var * DiffusionTerm(coeff=1)
you can use
DiffusionTerm(coeff=var) - var.grad.mag**2
This gives a regular diffusion term plus an extra explicit source term.
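For concreteness, here is a minimal sketch of the rewritten equation; the mesh size, the nonzero initial value, and the sweep count are illustrative, and since the diffusion coefficient now depends on the solution variable, the equation is nonlinear and you sweep rather than solve once:

from fipy import CellVariable, DiffusionTerm, Grid1D

mesh = Grid1D(nx=10)
var = CellVariable(mesh=mesh, value=1.0)

# var * laplacian(var) rewritten via the identity above:
# an implicit div(var grad var) term minus an explicit |grad var|**2 source.
eqn = DiffusionTerm(coeff=var) - var.grad.mag**2 == 0

for sweep in range(5):
    eqn.sweep(var=var)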

The gof function from package btergm gives an AUC value greater than 1 for a precision-recall curve

I was trying to do out-of-sample prediction using the gof function from package btergm. When calculating the AUC value of a precision-recall curve on the testing set, I get a result of 1.012909, which seems to be theoretically impossible.
How can I interpret this result, or am I doing something wrong? Thank you! Here is my code:
network <- readRDS(url("https://www.dropbox.com/s/zxhqxa8h9awkzpd/network.rds?dl=1"))
model.3 <- btergm(network[1:9] ~ edges + gwodegree(1, fixed=TRUE) + transitiveties + ctriple +
                    gwidegree(1, fixed=TRUE) + mutual + gwesp(1.5, fixed=TRUE) + ttriple +
                    memory(type="stability") + delrecip, R=1000)
gof.3 <- gof(model.3, nsim = 1000, target = network[[10]],
             formula = network[9:10] ~ edges + gwodegree(1, fixed=TRUE) +
               transitiveties + ctriple + gwidegree(1, fixed=TRUE) + mutual +
               gwesp(1.5, fixed=TRUE) + ttriple + memory(type="stability") + delrecip,
             coef = coef(model.3), statistics = rocpr)
gof.3[[1]]$auc.pr

Tensorflow Probability Logistic Regression Example

I feel I must be missing something obvious while struggling to get a positive control for logistic regression going in TensorFlow Probability.
I've modified the logistic regression example here and created positive-control features and labels. I struggle to achieve accuracy over 60%, yet this is an easy problem for a 'vanilla' Keras model (100% accuracy). What am I missing? I tried different layers, activations, etc. With this method of setting up the model, is posterior updating actually being performed? Do I need to specify an interceptor object? Many thanks.
### Added positive control
nSamples = 80
features1 = np.float32(np.hstack((np.reshape(np.ones(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features2 = np.float32(np.hstack((np.reshape(np.zeros(40), (40, 1)),
                                  np.reshape(np.random.randn(nSamples), (40, 2)))))
features = np.vstack((features1, features2))
labels = np.concatenate((np.zeros(40), np.ones(40)))
featuresInt, labelsInt = build_input_pipeline(features, labels, 10)
###
#w_true, b_true, features, labels = toy_logistic_data(FLAGS.num_examples, 2)
#featuresInt, labelsInt = build_input_pipeline(features, labels, FLAGS.batch_size)

with tf.name_scope("logistic_regression", values=[featuresInt]):
    layer = tfp.layers.DenseFlipout(
        units=1,
        activation=None,
        kernel_posterior_fn=tfp.layers.default_mean_field_normal_fn(),
        bias_posterior_fn=tfp.layers.default_mean_field_normal_fn())
    logits = layer(featuresInt)
    labels_distribution = tfd.Bernoulli(logits=logits)

neg_log_likelihood = -tf.reduce_mean(labels_distribution.log_prob(labelsInt))
kl = sum(layer.losses)
elbo_loss = neg_log_likelihood + kl

predictions = tf.cast(logits > 0, dtype=tf.int32)
accuracy, accuracy_update_op = tf.metrics.accuracy(
    labels=labelsInt, predictions=predictions)

with tf.name_scope("train"):
    optimizer = tf.train.AdamOptimizer(learning_rate=FLAGS.learning_rate)
    train_op = optimizer.minimize(elbo_loss)

init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())

with tf.Session() as sess:
    sess.run(init_op)
    # Fit the model to data.
    for step in range(FLAGS.max_steps):
        _ = sess.run([train_op, accuracy_update_op])
        if step % 100 == 0:
            loss_value, accuracy_value = sess.run([elbo_loss, accuracy])
            print("Step: {:>3d} Loss: {:.3f} Accuracy: {:.3f}".format(
                step, loss_value, accuracy_value))

### Check with basic Keras
kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1)])
optimizer = tf.train.AdamOptimizer(5e-2)
kerasModel.compile(optimizer=optimizer, loss='binary_crossentropy',
                   metrics=['accuracy'])
kerasModel.fit(features, labels, epochs=50)  # 100% accuracy
Compared to the GitHub example, you forgot to divide by the number of examples when defining the KL divergence:

kl = sum(layer.losses) / FLAGS.num_examples

The negative log-likelihood is a per-example mean, so the KL term has to be scaled down to the same per-example units; otherwise it dominates the ELBO. When I make this change to your code, I quickly get to an accuracy of 99.9% on your toy data.
Additionally, the output layer of your Keras model actually expects a sigmoid activation for this problem (binary classification):

kerasModel = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid')])

It's a toy problem, but you will notice that the model gets to 100% accuracy faster with a sigmoid activation.
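Putting the fix in context, a sketch of the corrected loss assembly against the snippet above, assuming FLAGS.num_examples equals the 80 rows generated earlier:

# The NLL is averaged per example, so scale the KL term per example as well;
# this keeps the two parts of the ELBO on commensurate scales.
neg_log_likelihood = -tf.reduce_mean(labels_distribution.log_prob(labelsInt))
kl = sum(layer.losses) / FLAGS.num_examples
elbo_loss = neg_log_likelihood + kl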

How to improve ANN results by reducing error through hidden layer size, through MSE, or by using a while loop?

This is my source code, and I want to reduce the errors. When running this code, there is a large difference between the trained output and the target. I have tried different approaches, but they didn't work, so please help me reduce it.
a=[31 9333 2000;31 9500 1500;31 9700 2300;31 9700 2320;31 9120 2230;31 9830 2420;31 9300 2900;31 9400 2500]'
g=[35000;23000;3443;2343;1244;9483;4638;4739]'
h=[31 9333 2000]'
inputs =(a);
targets =[g];
% Create a Fitting Network
hiddenLayerSize = 1;
net = fitnet(hiddenLayerSize);
% Choose Input and Output Pre/Post-Processing Functions
% For a list of all processing functions type: help nnprocess
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand'; % Divide data randomly
net.divideMode = 'sample'; % Divide up every sample
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% For help on training function 'trainlm' type: help trainlm
% For a list of all training functions type: help nntrain
net.trainFcn = 'trainlm'; % Levenberg-Marquardt
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse'; % Mean squared error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
    'plotregression','plotconfusion','plotfit','plotroc'};
% Train the Network
[net,tr] = train(net,inputs,targets);
plottrainstate(tr)
% Test the Network
outputs = net(inputs)
errors = gsubtract(targets,outputs)
fprintf('errors = %4.3f\t',errors);
performance = perform(net,targets,outputs);
% Recalculate Training, Validation and Test Performance
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs);
valPerformance = perform(net,valTargets,outputs);
testPerformance = perform(net,testTargets,outputs);
% View the Network
view(net);
sc=sim(net,h)
I think you need to be more specific.
What is the performance like on your training set and on your test set?
Have you tried doing any regularization?
