Watson Assistant V2 API: change session timeout - ibm-watson

With the Watson Assistant V2 API you first have to create a session handle (create_session(assistant_id)), which returns the session ID to use in each individual call to message(assistant_id, session_id, request). The session maintains the conversation state and is therefore the equivalent of the context parameter of the V1 API.
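For reference, the basic flow with the ibm-watson Python SDK looks roughly like the following sketch (the API key, service URL, and assistant ID are placeholders):
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV2(
    version='2020-04-01',
    authenticator=IAMAuthenticator('YOUR_API_KEY'),
)
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

# Create the session handle first; it returns the session ID...
session = assistant.create_session(assistant_id='YOUR_ASSISTANT_ID').get_result()
session_id = session['session_id']

# ...which is then passed to every individual message() call.
response = assistant.message(
    assistant_id='YOUR_ASSISTANT_ID',
    session_id=session_id,
    input={'message_type': 'text', 'text': 'Hello'},
).get_result()
print(response['output'])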
Unfortunately there seems to be a 5-minute session timeout by default. The response includes the following header attribute:
{...,"x-watson-session-timeout": [
"x-watson-session-timeout",
"session_timeout=300"
],...}
Any attempt to change this parameter by using the set_default_headers() method of the assistant object, or by adding the optional header parameter to the create_session() call, seems to have no effect. As I have not found any documentation on how to update this parameter correctly, I tried several alternatives:
1) self.assistant.set_default_headers({'x-watson-session-timeout':"['x-watson-session-timeout','session_timeout=3600']"})
2) self.assistant.set_default_headers({'x-watson-session-timeout':"'x-watson-session-timeout','session_timeout=3600'"})
3) self.assistant.set_default_headers({'x-watson-session-timeout':"session_timeout=3600"})
4) self.assistant.set_default_headers({'x-watson-session-timeout':"3600"})
5) self.assistant.set_default_headers({'session_timeout':"3600"})
None of them is effective; the value of the parameter in the response header is still 300.
Am I using incorrect dict pairs to update the parameter? Is there another way to maintain the conversation state for longer than 5 minutes using the V2 API? Or is it not possible to change it at all?

The value of the session timeout is not under the control of the caller; it is in fact determined by the Watson Assistant plan you are using. For the Lite (free) and Standard plans the timeout is indeed 5 minutes; for the other plans it is longer.
See Retaining information across dialog turns
The current session lasts for as long as a user interacts with the assistant, and then up to 60 minutes of inactivity for Plus or Premium plans (5 minutes for Lite or Standard plans).

You can ask Watson Assistant for another session and resend your message, keeping your context...
Or just increase the timeout limit in the assistant settings on IBM Cloud, given the right plan.
function createSession(end) {
  assistant.createSession({
    assistantId: watsonID
  }).then(res => {
    sessionId = res.result.session_id;
    if (end) {
      console.log("\x1b[32m%s\x1b[0m", "new session " + sessionId);
    } else {
      console.log("session id: " + sessionId);
      console.log("http://" + host + ":" + port);
    }
  });
}
createSession();

function callWatsonClient(payload, res) {
  assistant.message(payload, function(err, data) {
    if (data == null) {
      // The session has expired: open a new one and ask the client to
      // resend the message (this does not keep the context).
      createSession(true);
      data = { result: { context: "", output: { generic: [{ text: "session expired, resend the message" }] } } };
      res.send(data);
    } else {
      // normal job
      console.log("\x1b[33m%s\x1b[0m", JSON.stringify(data.result.output));
    }
  });
}

Related

Journey builder's custom activity: Fetch data extension data in bulk

I am new to Salesforce Marketing Cloud and journey builder.
https://developer.salesforce.com/docs/marketing/marketing-cloud/guide/creating-activities.html
We are building a Journey Builder custom activity that uses a data extension as its source; when the journey is invoked, it fetches a row and sends this data to our company's internal endpoint. The team got that part working. We are using postmonger.js.
I have a couple of questions:
Is there a way to retrieve the data from the data extension in bulk so that we can call our company's internal bulk endpoint? Calling the endpoint for each record in the data extension would not be efficient enough for our use case and won't work.
When the journey is invoked and an entry in the data extension is retrieved and that data is sent to our internal endpoint, is there a mechanism to mark this entry as already sent, so that the next time the journey runs it won't process an entry that has already been sent?
Here is a snippet of our customActivity.js, which populates one record (I changed some variable names). Is there a way to populate multiple records such that when "execute" is called, it passes a list of payloads as input to our internal endpoint?
function save() {
  try {
    var TemplateNameValue = $('#TemplateName').val();
    var TemplateIDValue = $('#TemplateID').val();
    let auth = "{{Contact.Attribute.Authorization.Value}}";
    payload['arguments'].execute.inArguments = [{
      "vendorTemplateId": TemplateIDValue,
      "field1": "{{Contact.Attribute.DD.field1}}",
      "eventType": TemplateNameValue,
      "field2": "{{Contact.Attribute.DD.field2}}",
      "field3": "{{Contact.Attribute.DD.field3}}",
      "field4": "{{Contact.Attribute.DD.field4}}",
      "field5": "{{Contact.Attribute.DD.field5}}",
      "field6": "{{Contact.Attribute.DD.field6}}",
      "field7": "{{Contact.Attribute.DD.field7}}",
      "messageMetadata": {}
    }];
    payload['arguments'].execute.headers = `{"Authorization":"${auth}"}`;
    payload['configurationArguments'].stop.headers = `{"Authorization":"default"}`;
    payload['configurationArguments'].validate.headers = `{"Authorization":"default"}`;
    payload['configurationArguments'].publish.headers = `{"Authorization":"default"}`;
    payload['configurationArguments'].save.headers = `{"Authorization":"default"}`;
    payload['metaData'].isConfigured = true;
    console.log(payload);
    connection.trigger('updateActivity', payload);
  } catch (err) {
    document.getElementById("error").style.display = "block";
    document.getElementById("error").innerHTML = err; // innerHTML, not innerHtml
  }
  console.log("Template Name: " + JSON.stringify(TemplateNameValue));
  console.log("Template ID: " + JSON.stringify(TemplateIDValue));
}
Any advice or ideas are highly appreciated!
Thank you.
Grace
Firstly, I implore you not to proceed with the design pattern of fetching data from Marketing Cloud for each subscriber that gets sent through the custom activity; for argument's sake I'll list two big issues.
You have no way of restricting the configuration of data extension columns or column names in SFMC (Salesforce Marketing Cloud). If a malicious user, or plain human error, were to delete a column or change a column name, your service would stop receiving that value.
Secondly, Marketing Cloud has two sets of API limitations: yearly and per minute. Depending on your licensing, you could run into the yearly limit.
The problem with the per-minute limitation (2,500 calls for REST and 2,000 for SOAP) is that each usage of the custom activity in Journey Builder multiplies the number of invocations per minute. Hitting this limit would cause issues for incremental data flows into SFMC.
I'd also suggest not retrieving any data from Marketing Cloud when a customer gets sent through a custom activity. Users should instead pick, in their segmentation, which corresponding rows/data should be sent to the custom activity.
The eventDefinitionKey can be picked up from Postmonger via requestedTriggerEventDefinition, in the eventDefinitionModel handler. The eventDefinitionKey can then be used to programmatically populate SFMC's POST call with data from the Journey Data model, thus allowing marketers to select which data is sent with the subscriber.
The following code shows how it would work in your customActivity.js:
connection.on(
  'requestedTriggerEventDefinition',
  function (eventDefinitionModel) {
    var eventKey = eventDefinitionModel['eventDefinitionKey'];
    save(eventKey);
  }
);

function save(eventKey) {
  // subscriberKey fetched directly from the Contact model
  // columnName is populated from the Journey Data model
  var params = {
    subscriberKey: '{{Contact.key}}',
    columnName: '{{Event.' + eventKey + '.columnName}}',
  };
  payload['arguments'].execute.inArguments = [params];
}

Can I send an alert when a message is published to a pubsub topic?

We are using Pub/Sub and a Cloud Function to process a stream of incoming data. I am setting up a dead-letter topic to handle cases where a message cannot be processed, as described at Cloud Pub/Sub > Guides > Handling message failures.
I've configured a subscription on the dead-letter topic to retain messages for 7 days; we're doing this using Terraform:
resource "google_pubsub_subscription" "dead_letter_monitoring" {
project = var.project_id
name = "var.dead_letter_sub_name
topic = google_pubsub_topic.dead_letter.name
expiration_policy { ttl = "" }
message_retention_duration = "604800s" # 7 days
retain_acked_messages = true
ack_deadline_seconds = 600
}
We've tested our cloud function robustly, and consequently our expectation is that messages will appear on this dead-letter topic very, very rarely, perhaps never. Nevertheless we're putting it in place just to make sure that we catch any anomalies.
Given how rarely we expect messages to appear on the dead-letter topic, we need to set up an alert that sends an email when such a message does appear. Is it possible to do this? I've taken a look through the alerts one can create at https://console.cloud.google.com/monitoring/alerting/policies/create, but I didn't see anything that could accomplish this.
I know that I could write a cloud function to consume a message from the subscription and act upon it accordingly, but I'd rather not have to do that; a monitoring alert feels like a much more elegant way of achieving this.
Is this possible?
Yes, you can use Cloud Monitoring for that. Create a new alerting policy and configure it as follows:
Select the Pub/Sub Topic resource type and the Published message metric. Observe the value every minute and count it (the aligner, under the advanced options). Then, in the condition, raise the alert when the most recent value is above 0.
To watch only your topic, you can add a filter on topic_id with your topic name.
Then configure your alert to send an email. It should work!
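If you prefer to create the policy programmatically rather than through the console, a rough sketch with the google-cloud-monitoring Python client follows. The project, topic ID, and notification channel name are placeholders, and the metric type is my assumption for the "Published message" metric:
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

client = monitoring_v3.AlertPolicyServiceClient()

# Alert as soon as anything at all lands on the dead-letter topic.
policy = monitoring_v3.AlertPolicy(
    display_name="Message on dead-letter topic",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="Published messages > 0",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                # Assumed metric type for "Published message", filtered to the topic.
                filter=(
                    'metric.type = "pubsub.googleapis.com/topic/send_message_operation_count" '
                    'AND resource.type = "pubsub_topic" '
                    'AND resource.label.topic_id = "my-dead-letter-topic"'
                ),
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=0,
                duration=duration_pb2.Duration(seconds=0),
                aggregations=[
                    monitoring_v3.Aggregation(
                        # Count the published messages per one-minute window.
                        alignment_period=duration_pb2.Duration(seconds=60),
                        per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_COUNT,
                    )
                ],
            ),
        )
    ],
    # Hypothetical pre-created email notification channel.
    notification_channels=["projects/my-project/notificationChannels/1234567890"],
)

client.create_alert_policy(name="projects/my-project", alert_policy=policy)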

Adal js Library - this.adalService.acquireToken method giving "Token renewal operation failed due to timeout" on first time login

Though there are some links related to this question, I didn't find any relevant answer, so I'm hoping someone will answer this time.
Here is the scenario: in my Angular application I am using adal-angular4, which is a wrapper over Adal.js.
Issue: the this.adalService.acquireToken method fails only during the first login. I get a timeout error, but if I refresh the page after login, this.adalService.acquireToken works properly. The interesting parts are the following:
The issue only occurs in the deployed environment, not on localhost.
The error "Token renewal operation failed due to timeout" only appears sometimes, when the network is slow, or at seemingly random times.
Here is my request interceptor service
intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> | Observable<HttpSentEvent | HttpHeaderResponse
    | HttpProgressEvent | HttpResponse<any> | HttpUserEvent<any>> {
  if (req && req.params instanceof CustomAuthParams && req.params.AuthNotRequired) {
    return this.handleAuthentication(req, next, null);
  } else {
    if (!this.adalService.userInfo.authenticated) {
      console.log(req, 'Cannot send request to registered endpoint if the user is not authenticated.');
    }
    var cachedToken = this.adalService.getCachedToken(environment.authSettings.clientId);
    console.log('cachedToken', cachedToken);
    if (cachedToken) {
      return this.adalService.acquireToken(resourceURL).timeout(this.API_TIMEOUT).pipe(
        mergeMap((token: string) => {
          return this.handleAuthentication(req, next, token);
        })
      ).catch(err => { console.log('acquire token error', err); return throwError(err); });
    } else {
      this.adalService.login();
    }
  }
}
Well, after struggling for one to two days I found the root cause, so I'm posting this answer in the hope that it will help others.
The adal-angular4 library uses version 1.0.15 of adal-angular, an old version in which the default loadFrameTimeout is 6 seconds and there is no configuration option to increase it. Please see the link below:
Adal configurations
Now, during the first login, several steps happen.
After authentication, Azure AD redirects the application back to the configured URI, appending the ID and access tokens to the reply URL.
The library then stores all these tokens in local storage or session storage, depending on the configuration.
Your application then loads and starts making calls to the web API. Here is where the interesting thing was happening: for each request I call the acquireToken method against the web API application, so if the network is slow the acquireToken calls time out, since 6 seconds is sometimes not enough. For some of the API calls it is still able to get the token.
On the first call the acquireToken method takes time, but for subsequent requests it takes the token from the cache if one is available, which is why the timeout error appeared only the first time and not afterwards.
So, since in this library there is currently no way to increase loadFrameTimeout, I used the
Angular 5 wrapper, which uses version 1.0.17 of adal-angular and allows increasing loadFrameTimeout. That solved my issue.

Getting a user's mailbox current history Id

I'd like to start syncing a user's mailbox going forward, so I need the most recent historyId of the user's mailbox. There doesn't seem to be a way to get this with one API call.
The gmail.users.history.list endpoint contains a historyId which seems to be what I need, from the docs:
historyId (unsigned long): The ID of the mailbox's current history record.
However to get a valid response from this endpoint you must provide a startHistoryId as a parameter.
The only alternative I see is to make a request to list the user's messages, get the most recent history ID from that, then make a request to gmail.users.history.list providing that historyId to get the most recent one.
Other ideas?
Did you check out https://developers.google.com/gmail/api/guides/sync ?
Depending on what your use-case is, to avoid races between your current state and when you start to forward sync, you'll need to provide an appropriate historyId. If there were a "get current history ID" then anything between your previous state and when you got those results would be lost. If you don't have any particular existing state (e.g. only want to get updates and don't care about anything before that) then you can use any historyId returned (e.g. on a message or thread) as you mention.
A small example for C# users (mentioned in the comments by @EricDeFriez).
The NuGet package Google.Apis.Gmail.v1 must be installed. See also the quickstart for .NET developers.
var service = new GmailService(new BaseClientService.Initializer()
{
    HttpClientInitializer = credential,
    ApplicationName = ApplicationName,
});

var req = service.Users.GetProfile("me");
req.Fields = "historyId";
var res = req.Execute();
Console.WriteLine("HistoryId: " + res.HistoryId);
This answer is related to the Java Gmail API Client Library using a service account.
I found that gmail.users.getprofile() will not work as-is, because the object it returns is of type Gmail.Users.GetProfile, which does not expose a way to get a historyId.
com.google.api.services.gmail.model.Profile actually has a getHistoryId() function, but calling service.users().getProfile() will return a Gmail.Users.GetProfile object instead.
To get around this, I use the history.list() function which will always return the latest historyId as part of its response.
Gmail service = createGmailService(userId); //Authenticate
BigInteger startHistoryId = BigInteger.valueOf(historyId);
ListHistoryResponse response = service.users().history().list("me")
.setStartHistoryId(startHistoryId).setMaxResults(Long.valueOf(1)).execute();
I set the maximum number of results to 1 to limit the unnecessary data returned, and I receive a payload that looks like this:
{"history":[{"id":"XXX","messages":[{"id":"XXX","threadId":"XXX"}]}],"historyId":"123456","nextPageToken":"XXX"}
The historyId (123456) will be the current historyId of the user. You can grab that historyId using response.getHistoryId()
You can also see that the latest historyId is given in the response if you use the API tester for Users.history: list
https://developers.google.com/gmail/api/v1/reference/users/history/list

Why does WebApp2 auth.get_user_by_session() change the token?

I am using WebApp2 with auth for user sessions. My client will occasionally make nearly simultaneous requests to the server. The first one will make a request with session data that looks like this:
{
    'cache_ts': 1408106895,
    'token': u'GXpsaVQh5ZWtqxJMUBpGTr',
    'user_id': 5690665774088192L,
    'remember': 1,
    'token_ts': 1408034938
}
Then after a call to auth.get_user_by_session(), the session comes back like this:
{
    'cache_ts': 1408124980,
    'token': u'0IVduczdGR5PkrMqNhBvzW',
    'user_id': 5690665774088192L,
    'remember': 1,
    'token_ts': 1408124980
}
As you can see, the token has been changed and the timestamps updated.
Nearly simultaneously, another request is made that contains the same initial session data:
{
    'cache_ts': 1408106895,
    'token': u'GXpsaVQh5ZWtqxJMUBpGTr',
    'user_id': 5690665774088192L,
    'remember': 1,
    'token_ts': 1408034938
}
However, that token is now invalid, so the session data is set to None. This wipes the user's session and causes lots of problems. Is there some setting I should be using to extend the life of the UserToken? Is there a more appropriate method than get_user_by_session()? I would imagine that nearly simultaneous requests with the same session data shouldn't cause enormous issues. The ideal situation would be for auth to simply ignore invalid or expired tokens and raise an error.
Update 1
I hoped it was something simple, like passing False to get_user_by_session(). That, of course, killed the session immediately.
Update 2
I've found that I only really need the user_id field, and that comes for free with the cookie data. Implementing that reduces the frequency of the issue. However, the problem isn't actually fixed, and I'd love some input from anyone familiar with this library.
This is due to the token_new_age parameter, which defaults to 1 day, so every 24 hours the token will change.
This is a security measure: if someone hijacks the session, it will only work for 24 hours.
The 'token_max_age' parameter will also delete the token when that time is consumed.
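For reference, these knobs live in the webapp2_extras.auth section of the application config. A rough sketch (the values shown are illustrative, not recommendations):
import webapp2

config = {
    'webapp2_extras.auth': {
        'token_new_age': 86400 * 7,   # renew the token weekly instead of daily
        'token_max_age': 86400 * 30,  # delete the token entirely after 30 days
        'token_cache_age': 3600,      # how long the token may be served from cache
    },
    'webapp2_extras.sessions': {
        'secret_key': 'REPLACE_WITH_A_SECRET',
    },
}

app = webapp2.WSGIApplication(routes=[], config=config)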
