How to update statsmodels to version 0.13.0.dev0 (to use the OrderedModel module)?

What was I trying to do?
I was trying to analyze data using ordinal logistic regression. For that, I tried to import OrderedModel from statsmodels.miscmodels.ordinal_model, as suggested by this doc.
Then, what is the problem?
After executing the above-mentioned import statement, I got the following error:
No module named 'statsmodels.miscmodels.ordinal_model'
How did I try to solve the problem?
First of all, I checked the statsmodels version I am using. I found that I am on the latest version (0.12.1) available in Anaconda. From this doc, I gather that I need version 0.13.0.dev0 to get the OrderedModel module, as in v0.12.1 there is no folder/file for OrderedModel. However, I could not find any way to update statsmodels to the 0.13.0.dev0 version.
Then, my question
How can I update statsmodels to 0.13.0.dev0 version so that I can use OrderedModel module?
Note: I know that in Python there are some other ways to do ordinal logit regression. However, I want to use statsmodels because of its nice summary of the analysis.
Thanks in advance!

You can install a recent build from the nightly wheel repository hosted on Anaconda.org.
Run pip install -i https://pypi.anaconda.org/scipy-wheels-nightly/simple statsmodels.
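Once a build that includes the module is installed, the import should resolve. A minimal usage sketch (the data file and column names here are hypothetical):
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey.csv")  # hypothetical data set
# endog should be an ordered categorical (or numeric codes);
# distr="logit" gives an ordinal (proportional-odds) logistic regression
mod = OrderedModel(df["rating"], df[["age", "income"]], distr="logit")
res = mod.fit(method="bfgs")
print(res.summary())  # the statsmodels summary the question asks for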

It looks like you will need to compile from the GitHub source. See this prior related question:
How to update to the developer version of statsmodels using Conda?
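For example, pip can build and install the development version directly from the GitHub repository (a C compiler is required, since statsmodels contains Cython extensions):
pip install git+https://github.com/statsmodels/statsmodels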

Related

Angular Material get latest version | md-chips : read only, removable

For my application, I need to use the md-chips directive from Angular Material.
The chips need to be read-only (no input) but still removable. This feature is not implemented in the latest released version: see https://material.angularjs.org/1.1.0-rc.5/demo/chips
But it is implemented in the master version: see https://material.angularjs.org/HEAD/demo/chips
Is there an easy or proper way to get this version in my app?
I know this question might be simple, and I apologize for that; I just never had this problem before, so I wanted to make sure I am doing the right thing!
Thank you in advance.
As per the installation documents:
bower install 'angular-material#master'
I did this and then searched for the string "removable" in angular-material.js; it is found in the code for MdChipsCtrl.

What is the maintained version of google-app-engine mapreduce for python?

It seems that the latest documentation is this one:
https://github.com/GoogleCloudPlatform/appengine-mapreduce/wiki/1-MapReduce
so I assumed that the code in this repo is the latest; is that so?
It seems that in the SDK one can also find a mapreduce lib under google.appengine.ext.mapreduce.
The issue is that I can't find a working example using either one of them, nor good installation instructions (just putting the mapreduce dir from the above repo in place doesn't seem to work). Running the tests fails as well.
So before digging in and working through the issues to get it running, I wonder if I'm missing something obvious?
Any help would be great.
I got a response from the Google team that this library is indeed maintained.
The code example was indeed missing, but it has now been added back, and some of the documentation has been updated.
The best tutorial I have found is this one:
http://sookocheff.com/series/mapreduce-api/
It is now linked from the official documentation.
One thing that wasn't clear to me in the documentation is that, in addition to the mapreduce lib itself (which you need to add to your app code), you also need to add these dependencies:
simplejson
cloudstorage
graphy
appengine-pipelines
The code comes with a build.sh that does this for the sample code, but you need to do it yourself if you are adding the mapreduce lib to your own project.
To use the lib, you just need to add this folder to your project and then use these imports:
from mapreduce import base_handler
from mapreduce import mapreduce_pipeline
This is a good starting point.
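To make that starting point concrete, here is a minimal sketch loosely based on the word-count style examples for this lib; the "main" module path, input reader, output writer, and shard count are illustrative and depend on your app:
from mapreduce import base_handler
from mapreduce import mapreduce_pipeline

def word_count_map(data):
    # BlobstoreZipInputReader yields (zipinfo, read_function) tuples
    entry, text_fn = data
    for word in text_fn().split():
        yield (word.lower(), "1")

def word_count_reduce(key, values):
    # the shuffle phase groups all values emitted for one key
    yield "%s: %d\n" % (key, len(values))

class WordCountPipeline(base_handler.PipelineBase):
    def run(self, blobkey):
        yield mapreduce_pipeline.MapreducePipeline(
            "word_count",
            "main.word_count_map",
            "main.word_count_reduce",
            "mapreduce.input_readers.BlobstoreZipInputReader",
            "mapreduce.output_writers.BlobstoreOutputWriter",
            mapper_params={"blob_key": blobkey},
            reducer_params={"mime_type": "text/plain"},
            shards=4)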

Run pgrouting 1.x and 2.x on same machine

We want to run pgrouting 2.x on our test server. Additionally, we want existing applications to keep running on pgrouting 1.x.
Does anyone know, if it's possible installing and running them in parallel?
Currently, we run on Postgres 9.1.9 and PostGIS 2.0.1.
No, I do not think you can do this on the same PostgreSQL version, because both versions use a shared library file, librouting.so, and this file is not compatible between the old and new versions of pgrouting. If you install PostgreSQL 9.1 and 9.2, for example, then you can install pgrouting 1.x on 9.1 and pgrouting 2.x on 9.2 without a problem. In hindsight, maybe I should have done a better job of changing the file names to avoid this, but I didn't, so it's not going to work.
Also, I do not think pgrouting 1.x will work with PostGIS 2.0.1, because a lot of the functions that pgrouting 1.x uses were removed. It might be possible to solve this problem by loading the PostGIS legacy.sql file.
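For example, once the database is created, you could load it with psql (the location of legacy.sql depends on where your PostGIS installation puts its contrib scripts):
psql -d yourdb -f legacy.sql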

Chef Package Versioning

If a Chef recipe (or any of its cookbook dependencies) uses the package resource without specifying a version, then the latest version of the package is installed. If you want to control and test exactly what you are installing, then you must always supply the package version. What can you do when the cookbooks you depend on do not take the same precautions?
See, for example, the default recipe in the ark cookbook. If this recipe is used on a production server, it could install packages that have not been tested. This is just one example (with over 5m downloads), so I am wondering how people are getting around this problem.
What can you do when the cookbooks that you depend on do not take the same precautions?
I don't think there is a simple answer. This is basically "the Chef way" ...
(Actually, I would suggest that hard-wiring package versions could do more harm than good. One of the good things about using a (good) distribution's package repo is that they regularly release updates with patches for security issues and bugs. But if you wired fixed package versions into your recipes or roles/nodes or something, you would prevent any such patches from propagating to your system.)
However, if it is critical to you that package versions are stable then maybe ...
Clone and hack the cookbooks in question to use specific versions. (Actually, you probably need to do this anyway, to avoid being bitten by unstable cookbooks!)
Use a distro (such as RHEL or its "clones") that values long-term stability, and only pushes out package updates that are "really important".
Create your own private mirror of the distro's package repos with only the "good" versions of the critical packages in it.
Modify Chef so that the package resources pick/install specified versions by default. (I don't imagine this would be easy. But if you did come up with a good solution at this level, it would be a pretty useful addition to Chef! IMO.)
UPDATE
Actually, there is a way to do this for (at least) Debian-based systems; see the apt cookbook and in particular the references to "pinning".
Or with yum, you could "lock" particular versions using "yum versionlock ..." as described here: https://www.zulius.com/how-to/yum-install-specific-package-version/
UPDATE - 2
Another possible trick would be to "inject" a version attribute into the "unsafe" package resources. Something like this:
# first, include_recipe a recipe that specifies 'package "foo"' without
# a version attribute
# then ...
r = resources("package[foo]")
r.version "1.2.3"  # set the version property on the looked-up package resource
With a little ingenuity, one could create a "package version lock" recipe that pulled the versions from a data bag and dealt with missing-resource exceptions and version attributes that were actually provided. But I don't know if this is "A Good Idea" (tm).
Chef's package resource uses the package manager of the node's operating system (apt, yum, etc.). These tools always install the most recent version available through the configured repositories, which is why Chef's package resource installs that version as well.
What the ark cookbook does is download the source code and then compile it; obviously, you can specify the version to install (through the passed URL).
So it depends on your actual need. If you want to install the version that is available through the distro's or your own package repo, then it's totally fine (and that's what most cookbooks do). If you want to compile everything from source (where you usually have the option to specify the version), the coverage of Chef cookbooks supporting this is lower.
Personally, I'd suggest that you set up your own apt/yum/whatever repo for the software for which you have specific version requirements.
In short: I'm not managing this.
In a more complete answer:
Every distro release goes through a validation phase before new packages are released; I'm confident in that process, and it helps me keep in sync with security fixes.
As far as I know, all package managers take care not to upgrade a package in a breaking way if it is a dependency of a package installed manually; again, you have to trust the package maintainers on this.
I.e., the package resource without a version won't update make or gcc if either is a dependency of a package you installed with a fixed version.
For example, under Ubuntu, if you set the nagios package to manual, apt will never try to update the libc package across a breaking change, since that could break the installation of other packages by leaving their dependencies unsatisfied.
If you're absolutely concerned about it, you have some choices:
Rewrite each package resource to use a fixed version of the package
Fork any cookbook to fix those issues (you can write a Foodcritic rule to help you detect package resources without specific versions)
Maintain your own repos, stable and testing; move packages into the stable repo once tested, and use the testing repo in your staging/QA environment
Option 3 is the most conservative, as you choose what goes into the stable repo and it won't change magically. The drawback is that you'll have to manage security fixes yourself.
Hope this helps.
We had the same issue in our cookbook. So we decided to use data bags.
Data bags can be easily changed, for example:
knife data bag from file my_data_bag host1
OR
knife data bag edit my_data_bag host1
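For reference, the data bag item file loaded above could look like this (the "1.2.3" version is just an example):
{
  "id": "host1",
  "version": "1.2.3"
}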
Your recipe will be able to read the specified version from the data bag using code like this:
my_bag = data_bag_item('my_data_bag', 'host1')
Chef::Log.info("You have changed the version to: #{my_bag['version']}")
package 'java' do
  version my_bag['version']
  action :install
end
So, finally, you don't need to modify the cookbook or recipe; all you need to do is put the version in the data bag.

Is Raven.Client.Authorization 1.0.960 package compatible with Raven.Client 1.0.972?

The current version of Raven.Client.Authorization is one version behind Raven.Client. The new Raven.Client allows you to use the latest Json.Net package, and therefore RestSharp, etc.
I hope to save some time / avoid a deep valley of frustration here. Can 1.0.972 support the 1.0.960 Authorization?
I'm not sure it works out of the box. Try using an assembly redirect. If that does not work, you can get the source of build 972 from this branch, then compile Raven.Client.Authorization yourself after updating the Json.Net reference there.
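A binding redirect goes in the application's config file; a sketch along these lines, assuming the conflicting reference is Json.Net (the version numbers are illustrative and must match what your packages actually reference):
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
        <!-- illustrative version range; adjust to the Json.Net build you ship -->
        <bindingRedirect oldVersion="0.0.0.0-4.5.0.0" newVersion="4.5.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>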
P.S. If your app is in development and will be for the next 2 months, I strongly recommend that you try out v1.2, which has already started the stabilization process.
