I am trying to find end-to-end sample code for the IBM Watson AlchemyLanguage Python SDK and the IBM Watson Retrieve and Rank Python SDK. I do have sample code from each SDK, but it is very skeletal (just class/function definitions). I am hoping to find samples where the API is actually called: files uploaded and methods invoked, e.g. entities, sentiment, text, etc.
Each SDK has an examples folder with examples of how to call the different methods.
AlchemyLanguage examples
Retrieve and Rank examples
There is also a resources folder with some example files you can use when calling the APIs.
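For orientation, here is a minimal sketch of the kind of end-to-end call those examples demonstrate, assuming the watson_developer_cloud package; the method names and the api_key value below are my assumptions, so verify them against the examples folder.

    # Minimal sketch of calling AlchemyLanguage end to end; the api_key value
    # and text are placeholders, and the method names should be checked
    # against the SDK's examples folder.
    import json
    from watson_developer_cloud import AlchemyLanguageV1

    alchemy_language = AlchemyLanguageV1(api_key='YOUR_API_KEY')

    text = 'IBM Watson won Jeopardy! in 2011.'
    # Each call returns the parsed JSON response as a Python dict.
    print(json.dumps(alchemy_language.entities(text=text), indent=2))
    print(json.dumps(alchemy_language.sentiment(text=text), indent=2))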
Using Python 3.4 on Google App Engine Flex.
Google documentation on using pull queues with Python says to "from google.appengine.api import taskqueue", but does not explain how to make taskqueue available to the Python runtime.
They do link to "Easily Access Google APIs from Python", which explains how to install the API client via "pip install google-api-python-client".
This does not install the taskqueue lib.
From the previous doc, there is a link to "Installation", where it says:
Because the Python client libraries are not installed in the App Engine Python runtime environment, they must be vendored into your application just like third-party libraries.
This links to another page, "Using third-party libraries", which states that you need to either install a lib into /lib or use requirements.txt. Neither of these makes taskqueue available.
Searching for taskqueue.py in Google's GitHub shows only an example module with the same name.
There is a documentation page on the module, but no information on how to install it.
There is a Python 2.7 example that Google points to here, but it doesn't work: there is no setup of taskqueue, no requirements.txt, no instructions.
There is a Stack Overflow question on this topic here, and the accepted answer says to install the SDK. That takes you here, which takes you here, which takes you here, which takes you here, which provides the gcloud SDK download for deploying and managing gcloud. This does not include the Python lib for taskqueue.
There is another similar Stack Overflow question here, which says:
... this is now starting to feel like an infinite loop. Yes, it's been made crystal clear you need to import the taskqueue. But how do you make it available?
I've asked Google Support the question and they haven't been able to answer for 4 days.
I've opened two issues, one here and another here. No answers yet.
Do not want to use Python < 3.4.
Do not want to use HTTP REST API.
Just want a simple pull queue.
Many of the docs you mentioned are standard environment docs and do not apply to the flexible environment.
From the Task Queue section in Migrating Services from the Standard Environment to the Flexible Environment:
The Task Queue service has limited availability outside of the standard environment. If you want to use the service outside of the standard environment, you can sign up for the Cloud Tasks alpha.
Outside of the standard environment, you can't add tasks to push queues, but a service running in the flexible environment can be the target of a push task. You can specify this using the target parameter when adding a task to queue or by specifying the default target for the queue in queue.yaml.
In many cases where you might use pull queues, such as queuing up tasks or messages that will be pulled and processed by separate workers, Cloud Pub/Sub can be a good alternative as it offers similar functionality and delivery guarantees.
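For example, a pull-queue-style workflow on Pub/Sub might look like the sketch below, assuming the google-cloud-pubsub client library and placeholder project, topic and subscription names; exact call signatures differ between library versions, so treat it as illustrative rather than copy-paste ready.

    # Rough sketch of Pub/Sub as a pull-queue replacement; project, topic and
    # subscription names are placeholders, and call signatures vary across
    # google-cloud-pubsub versions.
    from google.cloud import pubsub_v1

    project_id = 'your-project-id'
    topic_name = 'work-items'
    subscription_name = 'work-items-pull'

    # "Enqueue" a task by publishing a message.
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic_name)
    publisher.publish(topic_path, data=b'process-order-1234')

    # A worker "leases" tasks by pulling messages and acknowledging them.
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(project_id, subscription_name)
    response = subscriber.pull(subscription_path, max_messages=10)
    for received in response.received_messages:
        print(received.message.data)
        subscriber.acknowledge(subscription_path, [received.ack_id])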
I have a question on using TensorFlow on Google Cloud Platform.
I heard that Google Cloud's TensorFlow doesn't support Keras (keras.io). However, I can now see that TensorFlow has its own API to access Keras (https://www.tensorflow.org/api_docs/python/tf/contrib/keras).
Given this, can I use the above-mentioned API inside Google Cloud, since it ships along with the TensorFlow package? Any ideas?
I am able to access this API from the TensorFlow installed on an Anaconda machine.
Option #1: try the --package-path option.
As per the docs...
--package-path=PACKAGE_PATH
"Path to a Python package to build. This should point to a directory containing the Python source for the job"
Try giving a relative path to keras from your main script.
More details here:
https://cloud.google.com/sdk/gcloud/reference/beta/ml-engine/local/train
Option #2: if you have a setup.py file
Inside your setup.py file, pass the argument install_requires=['keras'] to the setup() call.
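A minimal setup.py for a training package might look like this; the package name and version are placeholders.

    # Minimal sketch of a trainer setup.py; name and version are placeholders.
    # install_requires gets Keras installed on the training workers.
    from setuptools import find_packages, setup

    setup(
        name='trainer',
        version='0.1',
        packages=find_packages(),
        install_requires=['keras'],
    )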
Google Cloud Machine Learning Engine does support Keras (keras.io), but you have to list it as a dependency when starting a training job. For some specific instructions, see this SO post, or a longer exposition on this blog page. If you'd like to serve your model on Google Cloud Machine Learning or using TensorFlow Serving, then see this SO post about exporting your model.
That said, you can also use tf.contrib.keras, as long as you use the --runtime-version=1.2 flag. Just keep in mind that packages in contrib are experimental and may introduce breaking API changes between versions.
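For illustration, with --runtime-version=1.2 a model can be built through the contrib namespace instead of the standalone keras package; the sketch below uses placeholder layer sizes and random data.

    # Minimal sketch using the Keras API bundled with TensorFlow 1.2
    # (tf.contrib.keras), so no separate keras dependency is needed.
    import numpy as np
    import tensorflow as tf

    model = tf.contrib.keras.models.Sequential([
        tf.contrib.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
        tf.contrib.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    model.fit(np.random.rand(32, 4), np.random.randint(2, size=(32, 1)), epochs=1)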
Have a look at this example on GitHub, which I saw was recently added:
Keras Cloud ML Example
I'm using the Watson API to do some concept annotation.
I'd like to then run word2vec on the returned concepts so I can measure the distances/similarity between concepts. For that I need to work against the same model. Where can I download the model file Watson is using here?
To be more precise, I'm using the default one, which is wikipedia/en-20120601.
You can't download the models. That part of the service is not exposed.
I'm looking for an efficient way to generate API documentation in a readable format from the files generated by Cloud Endpoints (Java). The generated files are either:
- my_api.api
- my_api*.discovery
Something that looks like this:
- https://github.com/kevinrenskers/raml2html#example-output
Swagger, API Blueprint and RAML are all nice options, but they don't seem to adapt well to Endpoints-generated API descriptor files.
What methods are you using?
Unfortunately, we (Apiary) do not actually offer any code generation tool for API Blueprint at the moment.
If you are looking for a way to generate a description of your API from the code, then API Blueprint probably isn't the best choice, as we believe it should represent the contract between everybody involved in the API design lifecycle. This is also the reason why we have built the testing tool Dredd: https://github.com/apiaryio/dredd
With Dredd you can test that your API implementation matches your blueprint. It wouldn't make much sense if the blueprint were generated from the implementation.
Hope that clarifies things.
Amazon provides a batch of documents describing the format of the feeds we can send via MWS; however, we also need to know what to expect in their responses, what status codes may be reported, what the structure of the XML is when errors are reported, etc.
Where can I get the information?
The MWS XML schemata are documented within the Selling on Amazon Guide to XML linked from the Developer Guides section in the Amazon Marketplace Web Service (Amazon MWS) Documentation.
I'm omitting a direct link to the PDF, as this might change once in a while. For the same reason the XSD files you are looking for are not publicly linked by Amazon as well, rather you'll find the links to the most current schema documents within the respective sections of the Selling on Amazon Guide to XML.
You might also be interested in the Amazon MWS Developer Guide, the Feeds API Reference and the guide for the Amazon MWS Scratchpad, which are all available there as well.
Good luck!
I know this is a rather old question but I just wanted to look at the actual XML schema files myself today.
There is an XML Documentation PDF hosted on images-na.ssl-images-amazon.com which I assume will stay there for a while. This PDF contains links to the core schema files amzn-envelope.xsd, amzn-header.xsd, and amzn-base.xsd and some other API schemas like Product.xsd which all appear to be relative to https://images-na.ssl-images-amazon.com/images/G/01/rainier/help/xsd/release_1_9/.
The PDF explicitly states that
The XSD samples shown on the Help pages may not reflect the latest XSDs. We recommend using the provided XSD links to obtain the latest [ve]rsions.
However, the official MWS Feeds API documentation also links to some XSDs but these are relative to https://images-na.ssl-images-amazon.com/images/G/01/rainier/help/xsd/release_4_1/ now, e.g. Price.xsd. Schema references also seem to be relative to this path. For example, Price.xsd includes amzn-base.xsd via <xsd:include schemaLocation="amzn-base.xsd"/> and sure enough there it is.
Unfortunately, I have no idea whether release_4_1 is the latest release of the schemas but the link from the MWS API documentation is a good indicator to me.
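If you just want to validate documents against those schemas locally, one approach is sketched below, assuming lxml, with the referenced XSDs downloaded into one directory so the relative <xsd:include> references resolve; the file names are placeholders.

    # Sketch: validate an XML document against one of the downloaded schemas
    # with lxml. Assumes Price.xsd and the schemas it includes (e.g.
    # amzn-base.xsd) sit in the current directory; file names are placeholders.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse('Price.xsd'))
    document = etree.parse('my_price_document.xml')

    if schema.validate(document):
        print('Document is valid')
    else:
        for error in schema.error_log:
            print(error.message)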
Another way to get the XSDs, which I think is the most "official" way, is to go to your Seller Central and navigate to Help > XML & data exchange > Reference > XSDs.
There you can download all the XSDs available to your account.
Hope it helps!
It seems that these XSD files are outdated.
I just checked the official Seller Central help page for the XSD files: https://sellercentral-europe.amazon.com/gp/help/G1611
For the OrderReport, release_4_1 is still referenced.
Some time ago, Amazon added a new field to OrderReport for EU markets: IsSoldByAB.
I have been using the XSD files for many years for automatic code generation, and it fails from time to time because of new fields like this one. This field is not described in either of these:
release_1_9 ($Revision: #7 $, $Date: 2006/05/23 $)
release_4_1 ($Revision: #10 $, $Date: 2007/09/06 $)
XSD files, and I am not able to find a version that includes this field.
For some years I have been extending the XSD files on my own to generate my code. IsSoldByAB is just a boolean field like IsPrime or IsBusinessOrder, so this was an easy task, but it's not "official"...
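As a side note, if you only need to read the new field rather than regenerate code from the schema, a tolerant lookup keeps working whether or not IsSoldByAB is present; this is a sketch with a placeholder file name, and it assumes the report carries no XML namespace.

    # Sketch: read IsSoldByAB from an order report without relying on the XSD.
    # File name is a placeholder; adjust the tag lookup if a namespace is used.
    import xml.etree.ElementTree as ET

    root = ET.parse('order_report.xml').getroot()

    for element in root.iter('IsSoldByAB'):
        # Elements may simply be absent for marketplaces that do not send
        # the field; when present, the text is 'true' or 'false'.
        print(element.text)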