Cannot generate a new Machine Learning Entity Model in Watson Knowledge Studio

Starting a few days ago I've been having issues deploying any new custom entity model in Watson Knowledge Studio.
I have updated the model ID in the passed object.
The deployment status (under the Machine Learning Version page) is stuck on "starting" and my curl call is returning "model temporarily unavailable".
$ curl -X POST -H "Content-Type: application/json" -u "username":"password" -d @C:/Users/Pedro/Desktop/parameters.json "https://gateway.watsonplatform.net/natural-language-understanding/api/v1/analyze?version=2017-02-27"
"language": "en",
"error": "model temporarily not available",
"code": 500
}
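For context, a parameters.json of the kind passed above follows the standard NLU analyze request shape; a minimal sketch (the text and model ID here are placeholders) would be:
{
  "text": "Some text mentioning my custom entities.",
  "features": {
    "entities": {
      "model": "your-wks-model-id"
    }
  }
}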
To fix it, I have tried deleting all other Machine Learning model versions (in case of space limitations), re-annotating, and re-training the model, all to no avail.
Any help is greatly appreciated!

It seems that there was a temporary issue, which has been fixed. Please try again; it should be working fine now.

Related

Create a Team with owner (Application Permissions)

The documentation (example 2) on this page states that, to specify an owner when creating a team, an 'owners@odata.bind' property should be added to the POST body.
The example shows a body of:
POST https://graph.microsoft.com/v1.0/teams
Content-Type: application/json
{
"template#odata.bind": "https://graph.microsoft.com/v1.0/teamsTemplates('standard')",
"displayName": "My Sample Team",
"description": "My Sample Team’s Description",
"owners#odata.bind": [
"https://graph.microsoft.com/v1.0/users('userId')"
]
}
Trying that out in Graph Explorer (with a valid userId) results in a BadRequest error with the message "Invalid bind property name owners in request".
Is this a bug? If not, what is the correct way to specify the owner when creating a team?
NOTE: I know there are other methods of creating a team (create a group then convert it, etc.), but this question is specifically about POSTing to the /teams endpoint.
If I use the v1.0 version I encounter the same error as you; just change it to the beta version.
Although the call works correctly on the beta version of the endpoint, it does not yet work on the v1.0 endpoint.
I raised the issue with Microsoft; they confirmed it and filed a bug for it. I also reported on the Microsoft docs that the example provided does not work.
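For illustration, the successful beta call is the same request with only the version segment of the request URL changed (userId is still a placeholder):
POST https://graph.microsoft.com/beta/teams
Content-Type: application/json
{
"template@odata.bind": "https://graph.microsoft.com/v1.0/teamsTemplates('standard')",
"displayName": "My Sample Team",
"description": "My Sample Team’s Description",
"owners@odata.bind": [
"https://graph.microsoft.com/v1.0/users('userId')"
]
}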

How to identify which VM instances are using v0.1 and v1beta1 endpoints for App Engine?

I got an email saying "Legacy GAE and GCF Metadata Server endpoints will be turned down on April 30, 2020."
I need to update my metadata server endpoints to v1, but how do I find out which versions my metadata server endpoints are currently on?
I have checked the Google Cloud documentation on migrating to the v1 metadata server. It gives two commands, but I didn't understand what they meant or where they had to be run.
Going back to the documentation, I tried these two commands:
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/legacy-endpoint-access/0.1
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/legacy-endpoint-access/v1beta1
but ended up with an error saying
curl: (6) Could not resolve host: metadata.google.internal
When I use my local host instead, I get this output:
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.4.6 (Ubuntu)</center>
</body>
</html>
I don't know how to proceed further.
Please help me.
Thank you in advance!
Searching around, as per the documentation Storing and retrieving instance metadata, the legacy metadata versions are deprecated and it is recommended to move to v1.
I would recommend going through the documentation Migrating to v1 metadata server endpoint, which provides more information on how to migrate to the v1 metadata endpoint.
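Note that metadata.google.internal only resolves from inside a Google Cloud VM, which is why the curl above failed on your local machine. A minimal sketch, with a hypothetical instance name and zone, would be to SSH in first and run the checks there:
gcloud compute ssh my-instance --zone us-central1-a
# then, on the instance itself:
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/legacy-endpoint-access/0.1
curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/legacy-endpoint-access/v1beta1
Each of those paths should return a counter of requests made to that legacy endpoint, so a non-zero value means something on that instance is still using it.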
Let me know if the information helped you!
After a thorough reading of the documentation, I have understood that my metadata server endpoints will be updated to v1 automatically by gcloud.
The only thing we are supposed to do is find the processes, applications, or images that are using the deprecated metadata server endpoints and update their gcloud-related dependencies to the latest version.
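For example, if the consumer of the old endpoints turns out to be an outdated Cloud SDK (assuming a standalone SDK install rather than a package-manager one), it can be brought up to date with:
gcloud components update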
That's it! It is successfully updated to v1 metadata server.

Solr: change managed-schema to classic schema and add a new field on Windows when using DIH

I am trying to learn Solr and build a search engine for text search.
My initial step is to load a table of content from SQL into Solr. I imported the data using the Data Import Handler, but Solr loaded only the id field. Later I realised that the managed schema does not work with DIH, so I am currently switching from the managed schema to the classic schema.
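For reference, that switch typically means renaming managed-schema to schema.xml in the core's conf directory and pointing solrconfig.xml at the classic factory, roughly:
<!-- in solrconfig.xml: use the hand-edited schema.xml instead of the managed schema -->
<schemaFactory class="ClassicIndexSchemaFactory"/>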
One of the steps the Solr learning material asks me to do is to add a new field through the Schema API, and it gives the command below for UNIX. I am not sure of the equivalent Windows command, since this POST command cannot be used as-is on Windows.
curl -X POST -H 'Content-type:application/json' --data-binary '{
"add-field":{
"name":"sell-by",
"type":"tdate",
"stored":true
}
}' http://localhost:8983/solr/gettingstarted/schema
Below is the command I used, which failed:
curl -X java -jar example\exampledocs\post.jar -H 'Content-type:application/json' --data-binary '{
"add-field":{
"name":"FIN",
"type":"int",
"stored":true
}
}' http://localhost:8983/solr/#/firstcore/schema
Your advice or help would be much appreciated. I am stuck here for long time. I could not find how to add fields in windows. Any advice would be very much appreciated.
There are some problems with your request parameters.
First of all, the type int is not available by default; if you have defined it yourself then it is fine.
You have not specified the request's HTTP method, so it is sent as GET while this call requires POST. I think you removed it after trying the request with POST and ending up with Method Not Supported.
That Method Not Supported error is not because of the POST method; it is because your URL was wrong. It should be http://localhost:8983/solr/firstcore/schema (not the /#/ admin URL).
These are the problems I found from the data you provided, and here is my example of adding a field.
And yes, I am using Postman as a REST client.
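If you would rather stay on the command line, the UNIX curl from the question can be adapted to Windows cmd.exe by swapping the single quotes for double quotes and escaping the inner ones. A sketch, assuming a numeric field type such as pint is actually defined in your schema:
curl -X POST -H "Content-type:application/json" --data-binary "{\"add-field\":{\"name\":\"FIN\",\"type\":\"pint\",\"stored\":true}}" http://localhost:8983/solr/firstcore/schema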
After a successful operation, you will see that the schema file of your collection has been updated in the Files menu of the Solr web UI.
To check that:
Go to the web UI.
Select your collection.
Click on Files.
Go to your schema file.
Find your added field.

SharePoint 2010 listdata.svc REST API throwing unexpected 500 Error

We wrote an application that consumes the SharePoint 2010 REST API. The application works fine in our Dev and Test environments. When we try our production site, we get a 500 status code and the following in the response body:
{
"error": {
"code": "",
"message": {
"lang": "en-US",
"value": "An error occurred while processing this request."
}
}
}
We have checked for code and list definition mismatches across all environments. We have checked the SharePoint and Windows application logs. We are checking to see if maybe some bad data is causing the problem.
Really scratching our head over here on this one.
Any ideas would be much appreciated.
Background
SharePoint 2010 Server
Using SharePoint 2010 REST service listdata.svc
Using AngularJS $http service to call the REST API
Only one of the 6 lists returns the 500 error
Can reproduce the error using Postman (see the sketch below)
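For reference, the failing request boils down to a plain GET against the list's OData feed; a sketch with placeholder server and list names:
curl -H "Accept: application/json" "http://yourserver/_vti_bin/ListData.svc/YourList"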
Update
We have confirmed that it is not a data issue.
I ran into a similar problem yesterday. I rolled back any changes I had been working on, and it turned out to be related to my workflows. I backed up the workflows with the Export to Visio method and deleted all of them, and the REST service started working again for this list; other lists were still working fine. One of my workflows had an impersonation step that I suspect had something to do with locking up the service.
I was about to backup/restore the list but got lucky with this fix.

How to test Mirror API Subscriptions

The restriction to an https callbackUrl and the nature of the subscriptions as a whole make it seem like this is something that can only be done with a publicly accessible URL.
So far I have come across two potential solutions to make local development / debugging easier.
The first is the Subscription Proxy service offered by google. This workaround essentially lets you remove the SSL restriction and proxy subscription callbacks to a custom URL.
The second and most helpful way I have found to do development locally is to capture a subscription callback request (say from a server that is publicly accessible) into a log and then use curl to reproduce that request on your local/dev machine using something like:
curl -H "Content-type: application/json" -X POST \
-d '{"json for":"the notification"}' http://localhost:8080/notify
Since the requests can sometimes be large, or you might want to test multiple callback types, I also found it useful to put the JSON of the subscription request into various files (ex: timeline-respond.json) and then run
curl -H "Content-Type: application/json" \
--data @timeline-respond.json http://localhost:8080/notify
I'm curious as to what other people are doing to test their application subscriptions locally.
The command line curl technique that you mention is the best I've found to date.
I've experimented with other solutions, such as an App Engine subscription target paired with a local script that polls that App Engine service for new notifications to relay to localhost, but so far I haven't found one that's worth the added complexity.
Alternatively, there are many localhost proxies available. My favorite is ngrok.com.
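With current ngrok versions, for instance, exposing the local server from the curl examples above is a single command, which prints a public https URL forwarding to the local port:
ngrok http 8080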
You might want to give localtunnel a try.
