Watson Discovery service: error when training the data - ibm-watson

I have a data collection in the Watson Discovery service.
I trained this data collection using a Postman request.
After sending the request, I checked the dashboard.
It shows "Rated with an incompatible scale", and at the top of the dashboard there is the notification "This collection was previously trained using an incompatible scale. To fix this, either delete that training using the API and restart here; or update each rating below".
Can you explain why this happens?

The tooling uses 10 for relevant and 0 for irrelevant, and does not have an option for "somewhat relevant".
I suspect that you used 0, 1, 2 or a similar scale for your training.
Through Postman you can check the status of the collection to see whether it has trained correctly: https://www.ibm.com/watson/developercloud/discovery/api/v1/#list-collection-details
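Re-uploading the training examples on the 0/10 scale should clear the mismatch. Here is a minimal sketch of such a training-data payload in Python; the query text and document ID are placeholders, and the exact endpoint to POST it to is in the Discovery API reference linked above:

```python
import json

def build_training_example(query_text, doc_id, relevant):
    """Build a Discovery training-data payload on the 0/10 scale the
    tooling expects (10 = relevant, 0 = irrelevant)."""
    return {
        "natural_language_query": query_text,
        "examples": [
            {
                "document_id": doc_id,
                "relevance": 10 if relevant else 0,
            }
        ],
    }

# Placeholder values -- substitute your own query and document ID.
payload = build_training_example("best pizza in town", "doc-123", relevant=True)
print(json.dumps(payload, indent=2))
```

Ratings on another scale (e.g. 0/1/2) appear to be what triggers the "incompatible scale" warning in the tooling.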

Related

Azure Form Recognizer - Copy model from QA to PROD

Our team has built more than 1,000 models in the development environment and tested the output. We moved the models from QA to production using the StartModelCopyTo method of the Form Recognizer client SDK. During each copy, the code is written so that once PercentageCompleted is 100%, it moves on to the next model. All 1,000+ models were copied to the production service. The problem is that when we use the GetCustomModels method to list all models, every model comes back as null, but if I query by model ID, it returns all details. Has anyone faced this issue? The business team considers this an issue and is not ready to sign off. We are facing other issues with the Form Recognizer service too.
This could be related to the SDK/REST API version you are using for the get operation. Can you validate that you are using the API version and SDK corresponding to the v2.1 or v3 API, based on which version the model was trained with?
Direct message if you are still having trouble.
Microsoft support came back and said that the copy-model method has a threshold of 1 call per minute. We added a delay of one minute between copies and it works.
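Given that threshold, the fix can be sketched as follows (in Python for illustration; the question's code uses the C# SDK, and `copy_fn` here stands in for a wrapper around `StartModelCopyTo` plus its completion polling):

```python
import time

def copy_models_throttled(model_ids, copy_fn, delay_seconds=60):
    """Copy models one at a time, pausing between calls to stay under the
    service's ~1-copy-per-minute threshold reported by Microsoft support."""
    copied = []
    for i, model_id in enumerate(model_ids):
        copy_fn(model_id)  # assumed to block until PercentageCompleted == 100
        copied.append(model_id)
        if i < len(model_ids) - 1:
            time.sleep(delay_seconds)
    return copied
```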

Get all-time-impressions for given contentName in Matomo Reporting API

I'd like to request the core metrics for a given contentName via Matomo's Reporting API. The contentName in this example is anwalt:4247, and I send this request:
https://statistics/?method=Contents.getContentNames&segment=contentName==anwalt:4247&label=anwalt:4247&date=2019-01-01,today&period=range&format=JSON&module=API&idSite=1&format=JSON&token_auth=93exx3
gives
[{"label":"anwalt:4247","nb_visits":27,"nb_impressions":37,"nb_interactions":12,"sum_daily_nb_uniq_visitors":27,"interaction_rate":"32,43\u00a0%","segment":"contentName==anwalt%3A4247","idsubdatatable":1}]
or this
https://statistics/?method=Contents.getContentNames&label=anwalt:4247&date=2019-01-01,today&period=range&format=JSON&module=API&idSite=1&format=JSON&token_auth=93exx3
gives
[{"label":"anwalt:4247","nb_visits":21,"nb_impressions":28,"nb_interactions":8,"sum_daily_nb_uniq_visitors":21,"interaction_rate":"28,57\u00a0%","segment":"contentName==anwalt%3A4247","idsubdatatable":282}]
But both numbers are wrong (they differ from the Matomo UI).
Isn't there a simple request for this common task?
What you tried with &date=2019-01-01,today&period=range should work fine; what is the problem in the output data?
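For reference, the request can be assembled programmatically so each parameter appears exactly once (the original URL passes format=JSON twice, which is harmless but redundant). A sketch in Python, with the host and token as placeholders:

```python
from urllib.parse import urlencode

def matomo_content_url(base_url, token, content_name,
                       date_range="2019-01-01,today"):
    """Build a Contents.getContentNames request URL.
    base_url and token are placeholders for your Matomo host and token_auth."""
    params = {
        "module": "API",
        "method": "Contents.getContentNames",
        "idSite": 1,
        "period": "range",
        "date": date_range,
        "segment": f"contentName=={content_name}",
        "format": "JSON",
        "token_auth": token,
    }
    return f"{base_url}/?{urlencode(params)}"

print(matomo_content_url("https://statistics", "93exx3", "anwalt:4247"))
```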

IBM Watson Visual Recognition: Received invalid status 403 in getAllCollections response for guid (...) at endpoint (...)

I am using IBM Watson Visual Recognition for a custom model. I have uploaded my dataset as .zip files, which is fine so far. However, I cannot train the model. When I go on my Watson services, it says:
Error fetching custom collections: Error in Watson Visual Recognition service: Recieved invalid status 403 in getAllCollections response for guid crn:v1:bluemix:public:watson-vision-combined:us-south:a/649b0335a5a44f6d80d1fd6909e466f9:8a71daa3-b0be-42ac-bb72-1473de835c19:: at endpoint https://gateway.watsonplatform.net/visual-recognition/api/
When I try to train the model, it says:
"Error in Watson Visual Recognition service: Request Entity Too Large"
I have searched Google and Stack Overflow for solutions, but didn't find any. I am using the Lite plan. I only have one project and one Visual Recognition instance. Please note that this worked for a different Visual Recognition model before, but later I could not use or access that model. So I deleted the old trained model and tried to create a new one, which produces the errors above.
Does anyone know a solution?
Thanks for your interest in Visual Recognition.
HTTP 403 is a standard HTTP status code communicated to clients by an HTTP server to indicate that access to the requested (valid) URL by the client is Forbidden for some reason. It indicates some problem with your account access.
The "Request Entity Too Large" error is a bit misleading; it sometimes appears when the underlying error should be a 403 on POST requests, such as training.
As a lite plan user, you may have used up your free credits for the month, for example.
You should double check that you are providing the correct credentials, and check the usage dashboard of your IBM Cloud account, which is described here: https://cloud.ibm.com/docs/billing-usage?topic=billing-usage-viewingusage
If this does not resolve your problem, you can open a support request here https://www.ibm.com/cloud/support
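The mapping described above can be summarized in a small helper (plain Python, purely illustrative, not part of any Watson SDK):

```python
def diagnose_status(code):
    """Map HTTP status codes seen from the service to the likely cause,
    following the explanation above (illustrative helper only)."""
    if code == 403:
        return "forbidden: check your credentials and Lite-plan quota"
    if code == 413:
        return "request entity too large: on POSTs such as training, this can mask a 403"
    if 200 <= code < 300:
        return "ok"
    return f"unexpected status {code}"

print(diagnose_status(403))
```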

Google Data Studio - group requests

Summary
We are using our own custom Google Data Studio connector.
We've run into an issue with reports based on it: there are as many API requests as there are widgets on a page, even when the filters are the same.
Question
Is there any way to send one request for all widgets on a page (if the filters are the same)?
An approach with grouped widgets was already tried; it didn't help.
There were 6 widgets plus a date-range selector, and 6 API requests were received.
Additional info
Expected result: when several widgets share the same filter, only 1 request per page is sent to the API.
Actual result: there is 1 request for each widget on the page.
Thanks!
This is the expected behavior. Data Studio can't guarantee that batching requests to an endpoint will return the same data as separate requests, due to aggregations and non-tabular schemas.
Many endpoints can't support all of their schema fields being requested at the same time.

Options for Filtering Data in real time - Will a rule engine based approach work?

I'm looking for options/alternatives to achieve the following.
I want to connect to several data sources (e.g., Google Places, Flickr, Twitter, ...) using their APIs. Once I get some data back, I want to apply my "user-defined dynamic filters" (defined at runtime) to the fetched data.
Example filters
Show me only restaurants that have a rating higher than 4 AND more than 100 ratings.
Show all tweets that are within X miles of location A and Y miles of location B.
Is it possible to use a rule engine (esp. Drools) to do such filtering? Does it make sense?
My proposed architecture is mobile devices connecting to my own server, with the server dispatching requests to the external world and doing all the heavy work (mainly filtering the data based on user preferences).
Any suggestions/pointers/alternatives would be appreciated.
Thanks.
Yes, Drools Fusion allows you to easily deal with this kind of scenario. Here is a very simple example application that plays around with Twitter messages using the twitter4j API:
https://github.com/droolsjbpm/droolsjbpm-contributed-experiments/tree/master/twittercbr
Please note that there are an online and an offline version in that example. To run the online version you need to get access tokens on the Twitter home page and configure them in the configuration file:
https://github.com/droolsjbpm/droolsjbpm-contributed-experiments/blob/master/twittercbr/src/main/resources/twitter4j.properties
Check the twitter4j documentation for details.
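Independent of the rule engine chosen, the runtime-defined filters from the question can be sketched in plain Python (not Drools; field names such as "rating" and "rating_count" are hypothetical):

```python
def make_filter(field, op, value):
    """Build a predicate from a (field, operator, value) triple supplied at runtime."""
    ops = {
        ">": lambda a, b: a > b,
        ">=": lambda a, b: a >= b,
        "<": lambda a, b: a < b,
        "==": lambda a, b: a == b,
    }
    return lambda item: ops[op](item[field], value)

def apply_filters(items, filters):
    """Keep only the items that satisfy every filter (logical AND)."""
    return [item for item in items if all(f(item) for f in filters)]

restaurants = [
    {"name": "A", "rating": 4.5, "rating_count": 250},
    {"name": "B", "rating": 3.9, "rating_count": 500},
]
filters = [make_filter("rating", ">", 4), make_filter("rating_count", ">", 100)]
print(apply_filters(restaurants, filters))  # only restaurant "A" passes both filters
```

A rule engine adds value beyond this kind of predicate composition mainly when rules interact, need temporal reasoning over event streams (Drools Fusion's focus), or must be authored by non-developers.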
