How to tie a backend to Watson's Conversation service? - ibm-watson

I am using the Conversation service in my application. On the backend I want to use the corpus I have set up, so that users can ask deep technical questions; the corpus has been populated with technical videos and articles spanning 20+ years.
Can you please point me to examples where the Conversation service has been integrated with backend Watson services?

There is an example of integrating Retrieve and Rank at
http://conversation-enhanced.mybluemix.net/
The code to show this integration is housed at https://github.com/watson-developer-cloud/conversation-enhanced

I did find the code where the query is constructed from the user's input and the backend corpus is searched; the class on GitHub is here: https://github.com/watson-developer-cloud/conversation-enhanced/blob/master/src/main/java/com/ibm/watson/apis/conversation_enhanced/rest/ProxyResource.java
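The shape of that proxy logic can be sketched in a few lines. This is not the actual `ProxyResource.java` code, just an illustration of the pattern: take the user's Conversation utterance, build a search query against the backend corpus, and return the hits. All function and parameter names here are invented for the sketch.

```python
# Hypothetical sketch of the Conversation -> Retrieve and Rank proxy pattern.

def build_rnr_query(user_input: str, num_results: int = 5) -> dict:
    """Turn the user's utterance into a search-style query payload
    (parameter names are illustrative, not the real API's)."""
    return {
        "q": user_input.strip(),
        "fl": "id,title,body",   # fields to return from the corpus
        "rows": num_results,
    }

def proxy_to_corpus(user_input, search_fn):
    """Forward the utterance to the backend corpus search and return hits.
    `search_fn` stands in for the real HTTP call to the search service."""
    query = build_rnr_query(user_input)
    return search_fn(query)
```

The real class does the same thing over HTTP with the Retrieve and Rank service credentials; the point is only that the Conversation service handles the dialog while a separate backend call handles the corpus search.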

Related

How Azure Search service works with Azure Bot & Language Cognitive Service

I am looking for documentation, blogs or article where I can learn how Azure bot works with Azure Search Service and Language Cognitive Service.
Any reference to document/architecture flow will be highly appreciated.
I have searched the MS documentation, but so far I couldn't find anything beyond each service's individual usage and benefits.
I am looking for detailed information about how information (chat communication) flows among the Bot app service, the Search service, the Language Cognitive service and ultimately the knowledge base (Language Custom Question Answering).
I am curious to learn how the search service is used for document indexing and for searching QnA pairs, and how it interacts with the Bot service, the Language service and the KB.
Thank you.
LUIS (Language Understanding) is a good starting point for combining the Azure Search service, the Azure Bot and the Language Cognitive service. There is official documentation covering the architecture, and LUIS supports combining the Language service and a bot for IoT applications, chat bots and e-commerce chat bots.
https://www.luis.ai/
Check the above link to learn the architecture; the resources section of that site links to further official documentation.
Extended answer:
To answer the question asked in the comment: yes, the Bot service makes use of the Language service, and the app service makes use of the Search service. When a question is asked to the bot, it calls the app service hosted with the Azure cognitive services; once the service is connected, language understanding is the main part the bot relies on to interpret the question. The architecture shown in the question is the way these services work together.
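The flow the question asks about can be made concrete with a toy sketch: a message arrives at the bot, the Language service classifies it, and if it is a knowledge-base question the Search service looks up an answer among indexed QnA pairs. Every function below is a stub; none of these names come from the Azure SDKs.

```python
# Illustrative orchestration flow only -- all service calls are stubbed.

def detect_intent(message: str) -> str:
    """Stand-in for the Language service: classify the utterance."""
    return "qna" if "?" in message else "chitchat"

def search_kb(message: str, index: dict) -> str:
    """Stand-in for Azure Cognitive Search over indexed QnA pairs."""
    for question, answer in index.items():
        if question.lower() in message.lower():
            return answer
    return "Sorry, I couldn't find an answer."

def bot_turn(message: str, index: dict) -> str:
    """The bot app service orchestrates: intent first, then search."""
    if detect_intent(message) == "qna":
        return search_kb(message, index)
    return "Hello!"
```

In the real architecture the index is built by Azure Cognitive Search over your documents, and the intent step is a call to the Language service; the bot app service is the piece that wires the two together per turn.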

Multi tenancy in Spring boot and React

I am trying to develop a REST API in Spring Boot with React as the frontend. React sends GET or POST requests to modify the MySQL DB in the backend via the REST API. In my application, a user can have multiple companies, and each company's data is isolated from the others. I have come across multi-tenancy in Spring Boot. How can I implement this for the REST API? How can I configure my React application for multi-tenancy? Is Spring's Reactive Core useful here? Any resources where I can find these answers, or any better way to implement this use case, would help. Google results have confused me a lot.
You could read a lot about this on Stack Overflow if you search for something more specific than the broad topic.
Here is one way you can achieve your requirement.
React App authenticates your end-users
There will be an API call from UI to get the list of accessible tenants
The list of tenants will be shown in the UI like in a dropdown
The end user will choose a tenant
Once chosen, and until it is changed, you pass the selected tenant in all API request headers, so that you identify the tenant context of the user requesting the data.
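The last step above, resolving the tenant from a request header and checking it against the user's accessible tenants, can be sketched like this. The header name and the error messages are made up for the example; the same check works identically in a Spring interceptor or filter.

```python
# Sketch of tenant resolution on the API side (header name is invented).

TENANT_HEADER = "X-Tenant-ID"

def resolve_tenant(headers: dict, accessible_tenants: set) -> str:
    """Return the tenant this request runs under, or raise if the user
    is not allowed to act for that tenant."""
    tenant = headers.get(TENANT_HEADER)
    if tenant is None:
        raise ValueError("missing tenant header")
    if tenant not in accessible_tenants:
        raise PermissionError("user has no access to tenant " + tenant)
    return tenant
```

Whatever data-isolation strategy you pick (separate schema, discriminator column, separate database), this resolved tenant is what you feed into it on every request.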
Regarding data isolation, you have a lot of options explained on Stack Overflow; people take multiple approaches depending on the required level of multi-tenancy.
The above steps can be achieved in any language (Java in your case).
Whether your APIs should be reactive depends on the business needs. You should weigh the differences between the asynchronous and reactive implementations; both have their place, so identify the requirement and choose the approach that suits it.
If you need help choosing the right approach for a given scenario, share the scenario, how you approached it and what issue or doubt you have, and the community will be happy to help.

Possible to upload documents to Watson discovery via Watson Assistant?

I have been trying to find an answer to this question, but can't seem to have any luck.
Is it possible to upload a document to Watson Discovery via Watson Assistant?
If so, could anyone point me in the right direction?
There is nothing out of the box that does what you ask. Discovery now comes with a number of mechanisms to populate collections, i.e. Box, web and SharePoint crawlers along with manual upload. None of these are integrated with Watson Assistant.
That's not to say what you are looking for cannot be done, but you would need to build the mechanism yourself. For example, you could create a response payload (a JSON packet) within Watson Assistant that triggers some client code which performs an upload. That upload could then push the document directly into a Discovery collection via Discovery's API methods.
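The client-side glue for that pattern might look like the sketch below. The `upload_document` action name and the payload shape are invented for the example; Watson Assistant does not define them, and `upload_fn` stands in for the real call to Discovery's add-document API.

```python
import json

# Hypothetical client glue: inspect the assistant's response for a custom
# action and trigger an upload. Action name and payload shape are invented.

def handle_assistant_response(response_json: str, upload_fn):
    """If the assistant's response carries our custom upload action,
    call the client's upload function (which would hit Discovery's
    add-document endpoint); otherwise just return the text reply."""
    response = json.loads(response_json)
    action = response.get("action")
    if action and action.get("name") == "upload_document":
        return upload_fn(action["file_path"])
    return response.get("text", "")
```

The key point is that Assistant only *signals* the upload; the document itself goes to Discovery from your client or orchestration layer, never through Assistant.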

IBM Watson Dialog Concept

I am creating a Watson Dialog service. My question is: do we always need the formatted XML file for the input, or is there a way to feed data into Watson from other sources, such as several web URLs? When we have a huge amount of data, how will XML be able to deal with it, and creating XML files will also become difficult as the data grows. Kindly help me understand the basic concept.
It is possible to configure an existing XML Dialog by setting up profile variables.
The Set Profile Variables API call on the Dialog Service is key to priming up a dialog: https://watson-api-explorer.mybluemix.net/apis/dialog-v1#/
Sadly, this is also the most poorly documented call in their docs.
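Given that thin documentation, here is roughly what such a call might look like. This is written from memory and should be verified against the API explorer linked above; the endpoint path and field names may differ in the live API, so treat the whole request shape as an assumption.

```python
# Sketch of a "set profile variables" request for the Dialog service.
# Path and field names are from memory -- verify in the API explorer.

def profile_request(dialog_id: str, client_id: int, variables: dict) -> dict:
    """Build the request that primes a dialog with profile variables."""
    return {
        "method": "PUT",
        "path": f"/v1/dialogs/{dialog_id}/profile",
        "body": {
            "client_id": client_id,
            "name_values": [
                {"name": name, "value": value}
                for name, value in variables.items()
            ],
        },
    }
```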
Have a look at the What's in Theaters example from IBM. This is by far the most comprehensive application that shows how their Dialog Service can interact with other Watson enabled services like Natural Language Classifier and also third party apps.
Source Code: https://github.com/watson-developer-cloud/movieapp-dialog
Documentation: http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/ega_docs/dialog_ega.shtml
Based on this you have to decide if the Dialog service is suitable for your purpose or can play a small role in your larger application when combined with other services.
Conversational Agent Application Starter Kit: https://github.com/watson-developer-cloud/conversational-agent-application-starter-kit

Text search in Google App Engine

Our application uses Google App Engine and the Datastore on the server side.
We store a Blogs model with name and description in the Datastore.
We want to provide a search feature over the description, but could not find the correct way to do it.
Can anyone guide us to the correct method to search blogs by a specific description?
You need to use the Search API (which is currently experimental).
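The Search API only runs inside the App Engine runtime, so as a self-contained illustration of what it does conceptually, here is a toy version: build an index over the blog descriptions, then match query words against it. The real API handles tokenization, ranking and field queries for you; this sketch only shows the idea.

```python
# Toy illustration of what a search index over blog descriptions does.
# The real solution is App Engine's Search API inside the runtime.

def build_index(blogs: list) -> dict:
    """Map each lowercase word in a description to the blog names using it."""
    index = {}
    for blog in blogs:
        for word in blog["description"].lower().split():
            index.setdefault(word, set()).add(blog["name"])
    return index

def search(index: dict, word: str) -> set:
    """Return the names of blogs whose description contains the word."""
    return index.get(word.lower(), set())
```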
