IBM Watson Visual Recognition: Received invalid status 403 in getAllCollections response for guid (...) at endpoint (...) - ibm-watson

I am using IBM Watson Visual Recognition for a custom model. I have uploaded my dataset as .zip files, which worked fine. However, I cannot train the model. When I go to my Watson services, it says:
Error fetching custom collections: Error in Watson Visual Recognition service: Recieved invalid status 403 in getAllCollections response for guid crn:v1:bluemix:public:watson-vision-combined:us-south:a/649b0335a5a44f6d80d1fd6909e466f9:8a71daa3-b0be-42ac-bb72-1473de835c19:: at endpoint https://gateway.watsonplatform.net/visual-recognition/api/
When I try to train the model, it says:
"Error in Watson Visual Recognition service: Request Entity Too Large"
To the best of my knowledge, I have checked Google and Stack Overflow for solutions, but didn't find any. I am using the Lite plan. I only have one project and one Visual Recognition instance. Please note that it worked for a different Visual Recognition model before, but later I could not use or access that model. So I deleted the older, trained model and tried to create a new one, which produced the above-mentioned error.
Does anyone know a solution?

Thanks for your interest in Visual Recognition.
HTTP 403 is the standard status code an HTTP server returns to tell a client that access to the requested (valid) URL is forbidden for some reason. Here it indicates a problem with your account access.
The "Request Entity Too Large" message is a bit misleading; it sometimes appears on POST requests, such as training, when the underlying error should really be a 403.
As a lite plan user, you may have used up your free credits for the month, for example.
You should double-check that you are providing the correct credentials, and check the usage dashboard of your IBM Cloud account, which is described here: https://cloud.ibm.com/docs/billing-usage?topic=billing-usage-viewingusage
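As a quick sanity check on the credentials, here is a minimal sketch (not an official snippet) that lists the custom collections directly against the endpoint from the error message. It assumes the IAM API key from the service credentials and a v4 version date; both values below are placeholders.

import requests

# Sketch: verify the API key against the instance named in the error message.
# YOUR_APIKEY and the version date are placeholders; adjust to your instance.
endpoint = "https://gateway.watsonplatform.net/visual-recognition/api"
api_key = "YOUR_APIKEY"  # from the service credentials, not the CRN/GUID

resp = requests.get(
    f"{endpoint}/v4/collections",
    params={"version": "2019-02-11"},  # any supported version date
    auth=("apikey", api_key),          # Watson accepts IAM API keys as basic auth
)
print(resp.status_code)  # 200 means the credentials work; 403 points at account access
print(resp.text)

A 403 here, with credentials you know are correct, usually points at a plan or quota restriction on the account rather than at the training data.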
If this does not resolve your problem, you can open a support request here https://www.ibm.com/cloud/support

Related

Form Recognizer - 404 resource not found when calling Analyze Form API

I used the labeling tool to train my model and successfully generated the model ID. When I tried "Get List Custom Models", I successfully received the list of models I trained, but when I tried to call the "Analyze Form" API, I got a 404 "resource not found" error.
I also tried with the Logic App, passing the model ID and the link as per the specification:
https://something.cognitiveservices.azure.com/formrecognizer/v2.0-preview/custom/models/modelID/analyze
but again got the 404 error. Any idea what might be wrong? Thanks.
The Form Recognizer Logic App connector currently uses Form Recognizer v1.0 (preview), so models trained with the v2.0 API or the labeling tool are not available via the Logic App Form Recognizer connector. When calling the API, please call the v2.0 API using the same resource ID and key you used in the labeling tool project.
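For illustration, here is a rough sketch of calling the analyze operation directly over REST, using the endpoint pattern from the question; the resource host, key, model ID and file name are placeholders.

import requests

# Sketch: call Analyze Form with the same resource and key used in the labeling tool project.
# The host, key, model ID and local file below are placeholders.
endpoint = "https://something.cognitiveservices.azure.com"
model_id = "YOUR_MODEL_ID"
key = "YOUR_FORM_RECOGNIZER_KEY"

with open("sample-form.pdf", "rb") as f:
    resp = requests.post(
        f"{endpoint}/formrecognizer/v2.0-preview/custom/models/{model_id}/analyze",
        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/pdf"},
        data=f,
    )

# Analyze is asynchronous: a 202 response includes an Operation-Location URL to poll for results.
print(resp.status_code, resp.headers.get("Operation-Location"))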

Access denied due to invalid subscription key (Face API)

I am having trouble using Microsoft Face API. Below is my sample request:
curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=age,gender" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: 1xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxd" --data-ascii "{\"url\":\"http://www.mrbeantvseries.co.uk/bean3.jpg\"}"
I used the subscription ID from my Cognitive Services account and got the response below:
{
  "error": {
    "code": "Unspecified",
    "message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."
  }
}
Not sure if I've missed out anything there. Can someone help me on this? Very much appreciated.
I ran into the same problem. I read the API documentation and it states the following.
You must use the same region in your REST API call as you used to obtain your subscription keys.
First, find the location of your subscription: go to Cognitive Services -> Properties, and under the Location label you will find your subscription region.
Second, find the correct endpoint to make the call to.
For example, if I want to call the Face API and my location is East US, I use either Key 1 or Key 2 with the following endpoint:
East US - https://eastus.api.cognitive.microsoft.com/face/v1.0/detect
You should then be able to access the API.
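For comparison, here is a rough Python equivalent of the curl request in the question, pointed at a region-matched endpoint; the region and key are placeholders and must come from the same Face resource.

import requests

# Sketch: the detect call from the question, with the endpoint region matching the resource.
# The endpoint region and YOUR_FACE_API_KEY (Key 1 or Key 2) are placeholders.
endpoint = "https://eastus.api.cognitive.microsoft.com"
key = "YOUR_FACE_API_KEY"

resp = requests.post(
    f"{endpoint}/face/v1.0/detect",
    params={
        "returnFaceId": "true",
        "returnFaceLandmarks": "false",
        "returnFaceAttributes": "age,gender",
    },
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "http://www.mrbeantvseries.co.uk/bean3.jpg"},
)
print(resp.status_code, resp.json())  # a 401 here usually means a key/region mismatch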
It appears that you've entered your Azure subscription ID instead?
In the Azure portal, you can find the API key under 'Keys'. It will be a 32-digit hexadecimal number with no hyphens.
I had faced the same issue; it seems there is some problem with newly generated keys. To fix this, you can also pass your endpoint when you create the object for IFaceServiceClient. You can see the code below.
private readonly IFaceServiceClient faceServiceClient = new FaceServiceClient("your key", "Your endpoint");
CesarB is correct. You must create a Cognitive Services resource in Azure first and then get the subscription key from it.
The region is not always 'westus'; it depends on the region you selected when you created the resource. You can also check it in the endpoint shown on the resource's Overview page.
I ran into a similar problem. I figure it might be helpful to some people, so I am posting it here. (By the way, Azure support pointed me to this post.)
I was trying to run through the sample file for ImageSearch of Azure. I was referring to these pages:
https://learn.microsoft.com/en-us/azure/cognitive-services/bing-image-search/quickstarts/csharp
https://learn.microsoft.com/en-us/azure/cognitive-services/bing-image-search/quickstarts/client-libraries?tabs=visualstudio&pivots=programming-language-csharp
https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/BingSearchv7/BingImageSearch/quickstart/bing-image-search-quickstart-csharp.cs
I was receiving a mixture of 404 Not Found and 401 Unauthorized errors when sending requests to the Bing Search resource using Microsoft.Azure.CognitiveServices.Search.ImageSearch. I figured it must be something wrong with either my credentials or my endpoints.
After struggling with it for hours, reading through posts and talking to an Azure support member, I finally found the problems:
The base URI endpoint I was assigned on the Azure Keys & Endpoints page was incomplete (https://api.bing.microsoft.com/).
The base URI endpoint on the sample tutorial pages was outdated because of the 2020.10.30 transition from Cognitive Services to Bing Search Services (https://api.cognitive.microsoft.com/bing/v7.0/images/search).
As of 2021.09.22, the correct global base Uri Endpoint for Bing Image Search is:
https://api.bing.microsoft.com/v7.0/images/search
Hope this is helpful to someone and saves mankind some time.
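To make the endpoint change concrete, here is a small sketch against the post-transition endpoint; the key is a placeholder taken from the Bing Search resource's keys page.

import requests

# Sketch: query the current global Bing Image Search endpoint directly.
# YOUR_BING_SEARCH_KEY is a placeholder for a key from the Bing Search resource.
key = "YOUR_BING_SEARCH_KEY"

resp = requests.get(
    "https://api.bing.microsoft.com/v7.0/images/search",
    headers={"Ocp-Apim-Subscription-Key": key},
    params={"q": "puppies", "count": 5},
)
resp.raise_for_status()  # a 404 or 401 here points at an endpoint or key problem
for image in resp.json().get("value", []):
    print(image["contentUrl"])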
Endpoint: https://westeurope.api.cognitive.microsoft.com/face/v1.0
The endpoint and the subscription key must be consistent, i.e. from the same resource. Look at the resource's Overview page in the Azure portal for this info!

Getting http 500 backend error when posting to Gmail API

I am using the Gmail API to put messages into a Google Apps email account. I use
the OAuth 2.0 authentication protocol with a service account. This is more or
less working fine. One of our customers has asked us to put messages
directly into a Google Vault. I don't see a Vault API, but I did find this
information related to the "insert" method (which is what we use to add
messages to a normal account):
parameter "deleted" (boolean): Mark the email as permanently deleted
(not TRASH) and only visible in Google Apps Vault to a Vault administrator.
Only used for Google Apps for Work accounts.
When I do this, some messages are accepted, but frequently I get http error
500 in response to the POST. The error text says "Backend Error". I thought
the pattern was that the first time the message was posted, it would work,
but the second time would generate the error. Therefore I was thinking it
was a duplicate check issue. However I now see some examples of messages
that fail immediately. The POST url looks like this:
https://www.googleapis.com/upload/gmail/v1/users/user@domain.com/messages?uploadType=multipart&internalDateSource=dateHeader&deleted=true&access_token=ABC...
As I mentioned, the same message to the same url (without deleted=true) will
always work. Any ideas what is causing the error?
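For reference, here is a rough Python sketch of the same insert call, assuming an OAuth 2.0 access token has already been obtained for the service account; the mailbox, token and .eml file are placeholders, and the token is sent in an Authorization header instead of the query string.

import requests

# Sketch: messages.insert with deleted=true, uploading a raw RFC 822 message.
# The mailbox address, token and local .eml file below are placeholders.
user = "user@domain.com"
access_token = "ABC..."  # OAuth 2.0 token from the service account flow

with open("message.eml", "rb") as f:
    raw_message = f.read()

resp = requests.post(
    f"https://www.googleapis.com/upload/gmail/v1/users/{user}/messages",
    params={
        "uploadType": "media",
        "internalDateSource": "dateHeader",
        "deleted": "true",  # message is only visible to a Vault administrator
    },
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "message/rfc822",
    },
    data=raw_message,
)
print(resp.status_code, resp.text[:200])  # a 500 "Backend Error" here can point at Vault retention rules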
Was just fighting this issue myself. Apparently the error has something to do with whether the message is compatible with the Google Vault retention policies:
If I turn on a default policy of "Retain everything", I've been able to get the messages to import correctly. HTH!
I'm using the import API method, and the backendError seems to be related to filters/policies. For example, we asked Google to reject messages with .xls attachments and macros, and we get the error on mail with that kind of attachment.

BizTalk Server and SalesForce - INVALID_SESSION_ID: Invalid Session ID found in SessionHeader: Illegal Session

I'm working on an integration scenario between SalesForce and BizTalk Server 2010. I have read the following blogs:
http://seroter.wordpress.com/2009/10/11/orchestrating-the-cloud-part-ii-creating-and-consuming-a-salesforce-com-service-from-biztalk-server/
http://soa-thoughts.blogspot.com.au/2010/08/biztalk-salesforce-and-msmq-part-i.html
http://soa-thoughts.blogspot.com.au/2010/08/biztalk-salesforce-and-msmq-part-ii.html
I set the sessionId in a message assignment shape as described in the posts:
SfdcMessage(WCF.Headers) = "<headers><SessionHeader><sessionId>00DK0000005Du2o!AREAQLnrXpVFRAAgwT_Z7iaK0do1IltgHqDLyDfLhbkUGqvFMvzNURdgRtKdPc47cO9sZpOPJ0x8q496vQJsXKGrXt4BcdLW</sessionId></SessionHeader></headers>";
However when my send port calls the SalesForce custom web service I receive the following error
A message sent to adapter "WCF-BasicHttp" on send port "WcfSendPort_SP" with URI https://abc.xyz is suspended.
Error details: System.ServiceModel.FaultException: sf:INVALID_SESSION_IDINVALID_SESSION_ID: Invalid Session ID found in SessionHeader: Illegal Session
at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.RequestCallback(IAsyncResult result)
I did some more research and came across these posts:
http://boards.developerforce.com/t5/General-Development/INVALID-SESSION-ID-Invalid-Session-ID-found-in-SessionHeader/td-p/74031
http://boards.developerforce.com/t5/Perl-PHP-Python-Ruby-Development/INVALID-SESSION-ID-Invalid-Session-ID-found-in-SessionHeader/td-p/66846
http://boards.developerforce.com/t5/General-Development/INVALID-SESSION-ID-Invalid-Session-ID-found-in-SessionHeader/td-p/200705
Has anyone encountered this issue?
Any help is appreciated.
Cheers,
A couple of things in regard to this:
The blog posts I'm referring to in my question are too old, so superfell is right that the namespace needs to be added to the SessionHeader. This is also mentioned here (http://boards.developerforce.com/t5/General-Development/INVALID-SESSION-ID-Invalid-Session-ID-found-in-SessionHeader/td-p/200705): "Your SessionHeader and sessionId elements in the soap header are not in any namespace; they need to be in the XML namespace defined by the WSDL. The newer API endpoints are stricter about this."
A friend pointed me to the book "Microsoft BizTalk 2010: Line of Business Systems Integration" where the author writes: “Do not forget to put a namespace on the SessionHeader node as the Salesforce.com API is strict about this and will return an invalid token message if the namespace is missing.” In the book the correct format of the SOAP header is stated as:
SFDC_QueryRequest(WCF.Headers) = "<headers><SessionHeader xmlns='urn:enterprise.soap.sforce.com'><sessionId>" + Chapter10_SFDC.TokenManager.TokenManager.SessionId + "</sessionId></SessionHeader></headers>";
Basically, I was missing the namespace xmlns='urn:enterprise.soap.sforce.com'.
Also, when configuring your send port, make sure to import the custom binding (*_Custom.BindingInfo.xml) and NOT the *.BindingInfo.xml, or else you will still have sessionId issues.
Cheers.

WSDL on SQL Server gives HTTP status 505 Version Not Supported

I am a DBA, not a developer, so forgive me if this is a silly question. But we are having issues with a SQL Server 2005 web service endpoint. On the local network I am able to add the reference in Visual Studio 2010 without any issues. It uses digest as the authentication scheme.
However, when anyone tries to add the web reference on another network, such as a developer in New Zealand (we are in Dayton, OH USA) he receives this error:
There was an error downloading
'http://server.domain.net:1280/release-single-address?wsdl'. The
request failed with HTTP status 505: HTTP Version not supported.
Metadata contains a reference that cannot be resolved:
'http://server.domain.net:1280/release-single-address?wsdl'. The
remote server returned an unexpected response: (505) HTTP Version not
supported. The remote server returned an error: (505) Http Version Not
Supported. If the service is defined in the current solution, try
building the solution and adding the service reference again.
Again, this works in Visual Studio (Right Click -> Add Reference -> Advanced -> Add Web Reference) when done on the same local subnet as the server.
When done on any other network the service does not import. We have tried it without any proxy. There is a cross-domain trust involved, but that does not seem to be the issue, as the error occurs using accounts from either domain. When I download the raw XML to my hard drive, I can use it to create the web reference. I firmly believe this is some sort of transport-layer issue, such as a proxy, but captures taken with the proxy server settings disabled are not conclusive.
Today, years after I posted this question, we finally found the answer to this question. It was not a Squid proxy server as we had come to believe. We continued experiencing issues like this with various web services/sites. The last straw was when we finally needed to deploy an SVN server that was used by multinational software engineering teams. Every single member of the different Ops teams we spoke to swore to us there was nothing between the sites that could break our services.
By a stroke of luck the company's Chief Information Security Officer was visiting our site, and a colleague happened to run into him and asked about the issues we were having and what might be the cause. He said immediately that there were Riverbed appliances doing caching and layer 7 inspection on all WAN traffic. We finally managed to catch these devices in the act of attempting to "normalize" HTML and XML, and we were able to perform a capture of data coming from a machine in New Zealand. We performed a diff on HTML pages and on XML coming from a web service to compare how they looked on the local network vs. across the WAN. In the pages/XML served across the WAN, closing tags had been inserted that were not needed or that actually made the XML malformed. Some tags were even commented out entirely if the appliance didn't know what to do with them. And the smoking gun? A custom header...
X-RBT-Optimized-By: cch-riverbed-1 (RiOS 6.5.6a) SC
"Optimized" You keep using that word, but I do not think that it means what you think that it means.
I'm not a pro at SOAP with Visual Studio, but could it be that the SOAP version is incompatible with SQL Server 2005? If I recall correctly, there are two versions of SOAP: 1.1 and 1.2.
Also check that the HTTP GET request line is correctly formed. For example:
GET http:// mydomain.com HTTP/1.1
Note the SPACE between 'http://' and 'mydomain.com'. The server cannot parse a request line in this format, and the result is a 505.
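To see how the server reacts to a well-formed request line, here is a small sketch that sends a hand-built HTTP/1.1 GET to the endpoint from the question and prints the status line; the host and port come from the question, and digest authentication is not attempted, so a 401 is expected while a 505 would indicate the request line or HTTP version itself was rejected.

import socket

# Sketch: send a minimal, well-formed HTTP/1.1 request and inspect the status line.
# Host, port and path come from the question; digest auth is deliberately skipped.
host, port = "server.domain.net", 1280
request = (
    "GET /release-single-address?wsdl HTTP/1.1\r\n"  # no stray spaces in the request line
    f"Host: {host}:{port}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((host, port), timeout=10) as sock:
    sock.sendall(request.encode("ascii"))
    status_line = sock.recv(1024).split(b"\r\n", 1)[0]
    print(status_line.decode("ascii", "replace"))  # e.g. "HTTP/1.1 401 ..." vs "HTTP/1.1 505 ..."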
I am not sure, but I think you should check your firewall or your IIS configuration.
