I would like to generate odds ratios or coefficients for the various features in my dataset, along with their 95% confidence intervals, using a logistic regression model.
Since sklearn's logistic regression does not provide 95% CIs for odds ratios or coefficients, I started to play with statsmodels.
However, I am not seeing any standard errors for the coefficients in my output. I am using a very large dataset that contains 17 dummy-coded categorical features and 1 outcome variable, with modest correlation seen for only a couple of features (Pearson's r < 0.45).
My code follows below:
import statsmodels.api as sm

X_atr = sm.add_constant(X_atr)           # add constant for intercept
logit_model = sm.Logit(y_atr, X_atr)     # create model instance
result = logit_model.fit(method="bfgs")  # fit model
print(result.summary())                  # print results
Here is a sample of my output. I am getting the coefficients, but without their standard errors or 95% CI values. Can somebody suggest how to fix this issue?
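For reference, when the fit converges, statsmodels exposes the standard errors and confidence intervals directly on the results object; all-nan entries in those columns usually mean the optimizer did not converge (worth checking the convergence message, raising maxiter, or trying the default Newton solver instead of BFGS). A minimal sketch of pulling odds ratios with their 95% CIs out of the fit, assuming result is the fitted Logit result from above:

import numpy as np

print(result.bse)             # standard errors of the coefficients
ci = result.conf_int()        # 95% confidence intervals for the coefficients
print(np.exp(result.params))  # odds ratios
print(np.exp(ci))             # 95% CIs for the odds ratios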
I'm trying to make a sentiment classifier on the IMDB dataset. I am fairly new to NLP and data science, and I keep getting this error while trying to fit my model.
ValueError: Found input variables with inconsistent numbers of samples: [24745, 40000]
I've looked at many other threads, and they all say to reshape your data, which I did; the size of my X variable is (24745, 100) and my y variable is (40000, 1).
I am currently trying to use any of these models:
MultinomialNB
BernoulliNB
GaussianNB
I've also tried to create a TensorFlow Sequential model with a bidirectional LSTM, but that produced terrible accuracy: it sat around 50% for a long time, i.e. random guessing.
Any help is appreciated.
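For context, that ValueError usually means X and y were built at different stages of preprocessing, e.g. features computed on a filtered or deduplicated subset (24,745 rows) while the labels still cover all 40,000 reviews; reshaping cannot fix a mismatch in the number of samples. A minimal sketch of keeping them aligned, using hypothetical names (df, text, label) for a dataframe holding one row per review:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Vectorize and split from the SAME dataframe so X and y stay aligned.
X = CountVectorizer(max_features=100).fit_transform(df["text"])
y = df["label"].values
assert X.shape[0] == y.shape[0]  # sample counts must match before fitting

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = MultinomialNB().fit(X_train, y_train)
print(clf.score(X_test, y_test))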
I am performing a resource selection function analysis using use and availability locations for a set of animals. For this type of analysis, an infinitely weighted logistic regression is suggested (Fithian and Hastie 2013), which is done by setting the weights of used locations to 1 and those of available locations to some large number (e.g. 10,000). I know that implementing this approach using the glm function in R would be relatively simple:
model1 <- glm(used ~ covariates , family=binomial, weights=weights)
I am attempting to implement this as part of a larger hierarchical Bayesian model, and thus need to figure out how to incorporate weights in JAGS. In my searching online, I have not been able to find a clear example of how to use weights specifically in a logistic regression. For a Poisson model, I have seen suggestions to just multiply lambda by the weights, as described here. I was uncertain whether this logic would hold for weights in a logistic regression. Below is an excerpt of the JAGS code for the logistic regression in my model.
alpha_NSel ~ dbeta(1, 1)
intercept_NSel <- logit(alpha_NSel)
beta_SC_NSel ~ dnorm(0, tau_NSel)
tau_NSel <- 1 / pow(sigma_NSel, 2)
sigma_NSel ~ dunif(0, 50)
for (n in 1:N_NSel) {
    logit(piN[n]) <- intercept_NSel + beta_SC_NSel * cov_NSel[n]
    yN[n] ~ dbern(piN[n])
}
To implement weights, would I simply change the Bernoulli trial to the line below? In that case, I assume I would need to rescale the weights so that they lie between 0 and 1, i.e. weights of 1/10,000 for used locations and 1 for available locations?
yN[n] ~ dbern(piN[n]*weights[n])
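For context, the usual weighted-likelihood convention (the one glm(weights=...) follows) multiplies each observation's log-likelihood contribution by its weight, i.e. raises its likelihood to the weight power; multiplying piN[n] by a weight changes the success probability, and hence the model itself, rather than weighting the observation. A minimal Python sketch of that convention, with a toy check that a weight of 2 is equivalent to duplicating the observation:

import numpy as np

def weighted_bernoulli_loglik(y, p, w):
    # each observation's log-likelihood contribution is multiplied by its weight
    return np.sum(w * (y * np.log(p) + (1 - y) * np.log(1 - p)))

y, p = np.array([1, 0]), np.array([0.7, 0.3])
assert np.isclose(
    weighted_bernoulli_loglik(y, p, np.array([2.0, 1.0])),
    weighted_bernoulli_loglik(np.array([1, 1, 0]),
                              np.array([0.7, 0.7, 0.3]),
                              np.ones(3)),
)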
I am working with the MNIST dataset; x_test has dimensions (10000, 784) and y_test has dimensions (10000, 10). I need to iterate through each sample of these two numpy arrays at the same time, as I need to pass them individually to model.evaluate().
I tried nditer, but it throws an error saying the operands could not be broadcast together, since they have different shapes.
score = []
for x_sample, y_sample in np.nditer([x_test, y_test]):
    a = x_sample.reshape(784, 1)
    a = np.transpose(a)
    b = y_sample.reshape(10, 1)
    b = np.transpose(b)
    s = model.evaluate(a, b, verbose=0)
    score.append(s)
Assuming that what you are actually trying to do here is to get the individual loss per sample in your test set, here is a way to do it. (In your approach, even if you get past the iteration part, you will have issues with model.evaluate, which was not designed for single sample pairs.)
To make the example reproducible, here I also assume we have first run the Keras MNIST CNN example for only 2 epochs; so, the shape of our data is:
x_test.shape
# (10000, 28, 28, 1)
y_test.shape
# (10000, 10)
Given that, here is a way to get the individual loss per sample:
from keras import backend as K
y_pred = model.predict(x_test)
y_test = y_test.astype('float32') # necessary, as y_pred.dtype is 'float32'
y_test_tensor = K.constant(y_test)
y_pred_tensor = K.constant(y_pred)
g = K.categorical_crossentropy(target=y_test_tensor, output=y_pred_tensor)
ce = K.eval(g) # 'ce' for cross-entropy
ce
# array([1.1563368e-05, 2.0206178e-05, 5.4946734e-04, ..., 1.7662416e-04,
# 2.4232995e-03, 1.8954457e-05], dtype=float32)
ce.shape
# (10000,)
i.e. ce now contains what the score list in your question was supposed to contain.
For confirmation, let's calculate the loss for all test samples using model.evaluate:
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
# Test loss: 0.050856544668227435
and again manually, averaging the values of the ce we have just calculated:
import numpy as np
log_loss = np.sum(ce)/ce.shape[0]
log_loss
# 0.05085654296875
which, although not exactly equal to score[0] (due to the different numeric precision involved in the two ways of calculation), is practically equal indeed:
log_loss == score[0]
# False
np.isclose(log_loss, score[0])
# True
Now, adapting this to your own case, where the shape of x_test is (10000, 784), is arguably straightforward...
You are mixing training features with testing labels. The training set has 60,000 samples and the test set has 10,000 samples (that is, your x_test should have dimensions (10000, 784)). Make sure you have downloaded all the correct data, and don't mix up training data with testing data.
http://yann.lecun.com/exdb/mnist/
I have a training set with about 300,000 examples, about 50-60 features, and a multiclass outcome with about 7 classes. I have a logistic regression function that finds the parameters by gradient descent. My gradient descent algorithm works with the parameters in matrix form, as that is faster than updating them separately in loops.
Ex:
P <- P - LearningRate * ( t(X) %*% ( h(X) - Y ) )
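For concreteness, here is a minimal Python sketch of that same vectorized update (names are illustrative: X is the (m, n) feature matrix, Y the (m, k) one-hot labels, P the (n, k) parameter matrix):

import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def gd_step(P, X, Y, lr):
    H = sigmoid(X @ P)    # h(X): predicted probabilities
    grad = X.T @ (H - Y)  # t(X) %*% (h(X) - Y)
    return P - lr * grad  # one gradient-descent update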
For small training data it is quite fast and gives correct values with the maximum iterations set to around 1,000,000, but with this much training data it is extremely slow: around 500 iterations take 18 minutes, and even then the cost is still high and the model does not predict the class correctly.
I know I should probably implement feature selection or feature scaling, and I can't use the packages provided. The language used is R. How do I go about implementing feature selection or scaling without using any library packages?
According to this link, you can use either Z-score normalization or the min-max scaling method. Min-max scaling maps the data to the [0, 1] range, while Z-score normalization standardizes it to zero mean and unit variance. Z-score normalization is calculated as
z = (x - mean(x)) / sd(x)
and the min-max scaling method is calculated as
x' = (x - min(x)) / (max(x) - min(x))
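Since the question rules out library packages, both transforms are easy to hand-roll; a minimal sketch (shown in Python for illustration, though the same column-wise arithmetic is a couple of lines of base R):

import numpy as np

def z_score(X):
    # standardize each column to zero mean and unit variance
    return (X - X.mean(axis=0)) / X.std(axis=0)

def min_max(X):
    # map each column to the [0, 1] range
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn)

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
print(min_max(X))  # columns now span [0, 1]
print(z_score(X))  # columns now have mean 0 and sd 1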
I have started using Vowpal Wabbit for logistic regression, however I am unable to reproduce the results it gives. Perhaps there is some undocumented "magic" it does, but has anyone been able to replicate / verify / check the calculations for logistic regression?
For example, with the simple data below, we aim to model the way age predicts label. It is obvious there is a strong relationship: as age increases, the probability of observing 1 increases.
As a simple unit test, I used the 12 rows of data below:
age label
20 0
25 0
30 0
35 0
40 0
50 0
60 1
65 0
70 1
75 1
77 1
80 1
Now, performing a logistic regression on this dataset using R, SPSS, or even by hand produces a model that looks like L = 0.2294*age - 14.08. So if I substitute the age and apply the logit transform prob = 1/(1 + EXP(-L)), I obtain predicted probabilities ranging from 0.0001 for the first row to 0.9864 for the last row, as reasonably expected.
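As a quick sanity check of that arithmetic, here is a small sketch that plugs those coefficients (the ones stated above, not re-estimated) into the logit transform:

import numpy as np

ages = np.array([20, 25, 30, 35, 40, 50, 60, 65, 70, 75, 77, 80])
L = 0.2294 * ages - 14.08        # linear predictor from the model above
prob = 1.0 / (1.0 + np.exp(-L))  # logit transform
print(prob[0], prob[-1])         # ~0.0001 for age 20, ~0.9864 for age 80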
If I plug in the same data in Vowpal Wabbit,
-1 'P1 |f age:20
-1 'P2 |f age:25
-1 'P3 |f age:30
-1 'P4 |f age:35
-1 'P5 |f age:40
-1 'P6 |f age:50
1 'P7 |f age:60
-1 'P8 |f age:65
1 'P9 |f age:70
1 'P10 |f age:75
1 'P11 |f age:77
1 'P12 |f age:80
And then perform a logistic regression using
vw -d data.txt -f demo_model.vw --loss_function logistic --invert_hash aaa
(command line consistent with How to perform logistic regression using vowpal wabbit on very imbalanced dataset), I obtain a model L = -0.00094*age - 0.03857, which is very different.
The predicted values obtained using -r or -p further confirm this. The resulting probabilities end up nearly all the same, for example 0.4857 for age=20 and 0.4716 for age=80, which is extremely off.
I have noticed this inconsistency with larger datasets too. In what sense is Vowpal Wabbit carrying out the logistic regression differently, and how are the results to be interpreted?
This is a common misunderstanding of vowpal wabbit.
One cannot compare batch learning with online learning.
vowpal wabbit is not a batch learner. It is an online learner. Online learners learn by looking at examples one at a time and slightly adjusting the weights of the model as they go.
There are advantages and disadvantages to online learning. The downside is that convergence to the final model is slow/gradual. The learner doesn't do a "perfect" job at extracting information from each example, because the process is iterative. Convergence on a final result is deliberately restrained/slow. This can make online learners appear weak on tiny data-sets like the above.
There are several upsides though:
Online learners don't need to load the full data into memory (they work by examining one example at a time and adjusting the model based on the real-time observed per-example loss) so they can scale easily to billions of examples. A 2011 paper by 4 Yahoo! researchers describes how vowpal wabbit was used to learn from a tera (10^12) feature data-set in 1 hour on 1k nodes. Users regularly use vw to learn from billions of examples data-sets on their desktops and laptops.
Online learning is adaptive and can track changes in conditions over time, so it can learn from non-stationary data, like learning against an adaptive adversary.
Learning introspection: one can observe loss convergence rates while training and identify specific issues, and even gain significant insights from specific data-set examples or features.
Online learners can learn in an incremental fashion so users can intermix labeled and unlabeled examples to keep learning while predicting at the same time.
The estimated error, even during training, is always "out-of-sample", which is a good estimate of the test error. There's no need to split the data into train and test subsets or perform N-way cross-validation. The next (yet unseen) example is always used as a hold-out. This is a tremendous advantage over batch methods from the operational aspect. It greatly simplifies the typical machine-learning process. In addition, as long as you don't run multiple passes over the data, it serves as a great over-fitting avoidance mechanism.
Online learners are very sensitive to example order. The worst possible order for an online learner is when classes are clustered together (all, or almost all, -1s appear first, followed by all the 1s), as in the example above. So the first thing to do to get better results from an online learner like vowpal wabbit is to uniformly shuffle the 1s and -1s (or simply order by time, as the examples typically appear in real life).
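A minimal sketch of that shuffling step (shuf data.txt from a shell does the same thing):

import random

with open("data.txt") as f:
    lines = f.readlines()
random.shuffle(lines)  # uniformly shuffle the example order before training
with open("data_shuffled.txt", "w") as f:
    f.writelines(lines)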
OK now what?
Q: Is there any way to produce a reasonable model in the sense that it gives reasonable predictions on small data when using an online learner?
A: Yes, there is!
You can emulate what a batch learner does more closely, by taking two simple steps:
Uniformly shuffle 1 and -1 examples.
Run multiple passes over the data to give the learner a chance to converge.
Caveat: if you run multiple passes until error goes to 0, there's a danger of over-fitting. The online learner has perfectly learned your examples, but it may not generalize well to unseen data.
The second issue here is that the predictions vw gives are not logistic-function transformed (this is unfortunate). They are akin to standard deviations from the middle point (truncated at [-50, 50]). You need to pipe the predictions through utl/logistic (in the source tree) to get signed probabilities. Note that these signed probabilities are in the range [-1, +1] rather than [0, 1]. You may use logistic -0 instead of logistic to map them to the [0, 1] range.
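For reference, here is a small sketch of the two standard mappings described above (the assumption being that utl/logistic applies the signed variant and logistic -0 the plain one):

import numpy as np

def logistic01(raw):
    # map a raw vw score to a probability in [0, 1]
    return 1.0 / (1.0 + np.exp(-raw))

def logistic_signed(raw):
    # rescale to a signed probability in [-1, +1]
    return 2.0 * logistic01(raw) - 1.0

print(logistic01(0.0), logistic_signed(0.0))  # 0.5 0.0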
So given the above, here's a recipe that should give you more expected results:
# Train:
vw train.vw -c --passes 1000 -f model.vw --loss_function logistic --holdout_off
# Predict on train set (just as a sanity check) using the just generated model:
vw -t -i model.vw train.vw -p /dev/stdout | logistic | sort -tP -n -k 2
This gives a more expected result on your data-set:
-0.95674145247658 P1
-0.930208359811439 P2
-0.888329575506748 P3
-0.823617739247262 P4
-0.726830630992614 P5
-0.405323815830325 P6
0.0618902961794472 P7
0.298575998150221 P8
0.503468453150847 P9
0.663996516371277 P10
0.715480084449868 P11
0.780212725426778 P12
You could make the results more/less polarized (closer to 1 on the older ages and closer to -1 on the younger) by increasing/decreasing the number of passes. You may also be interested in the following options for training:
--max_prediction <arg> sets the max prediction to <arg>
--min_prediction <arg> sets the min prediction to <arg>
-l <arg> set learning rate to <arg>
For example, by increasing the learning rate from the default 0.5 to a large number (e.g. 10), you can force vw to converge much faster when training on small data-sets, thus requiring fewer passes to get there.
Update
As of mid-2014, vw no longer requires the external logistic utility to map predictions back to the [0, 1] range. A new --link logistic option maps predictions to the [0, 1] range via the logistic function. Similarly, --link glf1 maps predictions to the [-1, 1] range via a generalized logistic function.