Multiple Lines of text message for Push Notification using FCM with Ionic 1 - angularjs

I am trying to send a push notification with a multi-line text message. I have tried many changes, such as editing FCMService.java with the setBigStyle text format, Html.fromHtml, and others, but I am not able to get the message to display on multiple lines.
I have gone through multiple sites and solutions, but none of them work. Instead of marking this as a duplicate or downvoting, please help me; I will upvote your answer.

Sorry guys, I have found the solution myself.
There is no need to modify any file such as FCMService.java, or to add any custom message handling, to show multiple lines in a notification when using Push Notification V5.
Simply remove the "style" field from the object you are sending as the notification.
The notification will then automatically be shown on multiple lines.
If you need more information, ask your query in the comments and I will help you out.

Below is my notification payload. Note that there is no "style" field anywhere:
{
  "registration_ids": ["XXXXXX"],
  "priority": "high",
  "content-available": "0",
  "notification": {
    "title": "New Message",
    "body": "You have an Instruction received. Please contact your admin to process further and more queries",
    "badge": "2",
    "sound": "default"
  },
  "data": {
    "message": "New Message",
    "body": "You have an Instruction received RM",
    "sound": "default",
    "actionId": "123456",
    "badgeCount": "1",
    "instructionType": "History",
    "targetFunction": "actions",
    "notificationType": "pvm"
  }
}
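For reference, here is a minimal Python sketch of posting such a payload to the legacy FCM HTTP endpoint. The server key and registration IDs are placeholders; the point is that the payload builder never adds a "style" field:

```python
import json
import urllib.request

FCM_ENDPOINT = "https://fcm.googleapis.com/fcm/send"
SERVER_KEY = "YOUR_SERVER_KEY"  # placeholder

def build_payload(registration_ids, title, body):
    # No "style" field anywhere -- that is what allows the
    # notification body to wrap across multiple lines.
    return {
        "registration_ids": registration_ids,
        "priority": "high",
        "content-available": "0",
        "notification": {
            "title": title,
            "body": body,
            "badge": "2",
            "sound": "default",
        },
    }

def send(payload):
    """POST the payload to FCM with the server key as authorization."""
    req = urllib.request.Request(
        FCM_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "key=" + SERVER_KEY,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```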


How to tell if Alexa emits nothing?

If I tell Alexa to emit text in certain languages (Chinese, Russian, etc.) or emojis, it will say nothing to the user. Does the Alexa API have a way to indicate a string will be converted to nothing/silence before or after emit? Alternately, is there a way to test the string outside of Alexa?
You can check the Device Log in the Test section of the developer console for the string or SSML of Alexa's response.
In my case it is the Directive.DeviceSpeechSynthesizer.Speak log:
{
  "header": {
    "namespace": "SpeechSynthesizer",
    "name": "Speak",
    "messageId": "0a290293-fe8d-40a5-835e-25f2b2e605eb",
    "dialogRequestId": "aa432cda-079a-4e46-a831-55d9f212bb6c"
  },
  "payload": {
    "caption": "ok",
    "url": "some url",
    "format": "AUDIO_MPEG",
    "token": "some token",
    "ssml": "<speak><prosody volume=\"x-loud\">ok</prosody><metadata><promptMetadata><promptId>ExecuteAction.CommandExecuted</promptId><namespace>HomeAutomation</namespace><locale>en_US</locale><overrideId>default</overrideId><variant>a836f358-a86c-4e3f-94e9-fe2f3bb24c7d</variant><condition/><weight>1</weight><stageVersion>Adm-20170215_180306-27</stageVersion></promptMetadata></metadata></speak>"
  }
}
You will notice that the ssml field contains the converted text.
There is also a discussion about supporting other languages in SSML responses.
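For a rough check outside the console, one option is to strip the SSML tags and see whether any speakable text remains. A minimal sketch, assuming the simple tag-only SSML shown above (this catches the empty-string case, but not characters the voice engine silently drops):

```python
import re

def speakable_text(ssml: str) -> str:
    """Strip SSML/XML tags and collapse whitespace to see what is left to speak."""
    no_tags = re.sub(r"<[^>]+>", "", ssml)
    return re.sub(r"\s+", " ", no_tags).strip()

# An empty result suggests the response will be silent.
```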

Passing a selected button value from FB Messenger to Dialogflow

I am failing to understand the simple passing of parameters to and from a webhook. I am trying to build a simple bot using Dialogflow and FB Messenger. I have a requirement to show the end user two buttons to pick a cake type. I am able to show the options using the custom response below:
{
  "facebook": {
    "attachment": {
      "type": "template",
      "payload": {
        "template_type": "button",
        "text": "What kind of cake would you like?",
        "buttons": [
          {
            "type": "postback",
            "payload": "witheggs",
            "title": "Contain Eggs"
          },
          {
            "type": "postback",
            "payload": "noeggs",
            "title": "Eggless"
          }
        ]
      }
    }
  }
}
Once the user taps one of the two buttons, how do I store the value in a Dialogflow variable and then ask the next set of questions?
I guess you're missing a few steps here. Before I explain what to do, be sure you know what a postback is: when a postback button is tapped, its payload text is sent as a user query to dialogflow.com.
Step 1: I created an intent with a custom payload as follows:
Step 2: Then I created a new intent whose user-says expression is noeggs, which is the payload of the postback button from the previous image.
Step 3: Save and test it in FB Messenger.
So basically, what happens here is: when you click the Eggless button, the postback noeggs is sent as a user query to dialogflow.com, where the intent whose user-says matches noeggs sends the response back.
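To make the flow concrete, here is a sketch of the matching logic as it would look in a fulfillment webhook, using plain dict handling. The request shape follows the Dialogflow v1-era format (`result.resolvedQuery`), and the follow-up questions are hypothetical:

```python
# Hypothetical follow-up prompts keyed by the postback payload
# that Messenger sends back to Dialogflow as the user query.
NEXT_QUESTION = {
    "witheggs": "Great, a cake with eggs. What size would you like?",
    "noeggs": "An eggless cake it is. What flavour would you like?",
}

def handle_postback(request_body: dict) -> dict:
    """Map the postback payload (arriving as the user query) to the next prompt."""
    query = request_body.get("result", {}).get("resolvedQuery", "")
    reply = NEXT_QUESTION.get(query, "Sorry, I didn't catch that.")
    return {"speech": reply, "displayText": reply}
```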

Can't get all busy times of meeting rooms using the Office365 Calendar API

Trying to fetch calendar events in order to allow a user to pick a time for a meeting where the attendees and the meeting room are available.
We're using Outlook Calendar REST API v2 - findMeetingTimes:
https://msdn.microsoft.com/en-us/office/office365/api/calendar-rest-operations#FindMeetingTimes
The request returns almost all of the events.
For some reason, events that were created by the user executing the request are not included in the response. This means the meeting room or attendee appears as FREE even though they have an event in their calendar.
Here's a sample request with only the meeting room as an attendee. We see the same problematic behavior when requesting events for both meeting rooms and users.
https://outlook.office.com/api/v2.0/users('user@companyname.onmicrosoft.com')/findmeetingtimes
{
  "Attendees": [{
    "Type": "Required",
    "EmailAddress": {
      "Name": "Palo Alto meeting room",
      "Address": "paloalto@companyname.onmicrosoft.com"
    }
  }],
  "TimeConstraint": {
    "Timeslots": [{
      "Start": {
        "DateTime": "2017-02-11T22:00:00",
        "TimeZone": "GMT Standard Time"
      },
      "End": {
        "DateTime": "2017-04-29T20:59:59",
        "TimeZone": "GMT Standard Time"
      }
    }]
  },
  "LocationConstraint": {
    "IsRequired": "false",
    "SuggestLocation": "false",
    "Locations": [{
      "ResolveAvailability": "false",
      "DisplayName": "Palo Alto meeting room",
      "LocationEmailAddress": "paloalto@companyname.onmicrosoft.com"
    }]
  },
  "MinimumAttendeePercentage": "0",
  "MaxCandidates": "1000",
  "ReturnSuggestionReasons": "true"
}
Any help will be much appreciated.
Ok, so to clarify, and this is the key point I missed at first: the problem that you're having is that the appointment booked by the authenticated user using the conference room as a location does NOT cause an entry to show up in the FindMeetingTimes response. (At first I thought you were saying it was showing as Free!)
This is correct behavior. FindMeetingTimes is not meant to return an exhaustive list of free/busy results. Rather, it's to find a potential meeting time! The list is based on the availability of the organizer (the authenticated user) and the specified attendees. Because both the organizer AND the room are busy (because the organizer has an appointment already booked in the room), the time slot isn't even presented. When you make the request as another user, they are the organizer, and since they are free at that time, the slot is presented as a possible time.
So I may misunderstand what you're trying to do, but this should work for you. As long as you're only presenting the times returned as possibilities, there isn't a potential for conflict.
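For completeness, a request like the one above can be issued with a short Python sketch. The bearer token, user, and room address are placeholders, and the body mirrors the payload from the question (trimmed to the attendee and time constraint):

```python
import json
import urllib.request

ACCESS_TOKEN = "YOUR_OAUTH_TOKEN"  # placeholder

def build_body(room_name, room_address, start, end):
    """Build a findMeetingTimes body with one required room attendee."""
    return {
        "Attendees": [{
            "Type": "Required",
            "EmailAddress": {"Name": room_name, "Address": room_address},
        }],
        "TimeConstraint": {
            "Timeslots": [{
                "Start": {"DateTime": start, "TimeZone": "GMT Standard Time"},
                "End": {"DateTime": end, "TimeZone": "GMT Standard Time"},
            }]
        },
        "MaxCandidates": "1000",
    }

def find_meeting_times(user, body):
    """POST to the v2.0 findmeetingtimes endpoint for the given user."""
    url = "https://outlook.office.com/api/v2.0/users('%s')/findmeetingtimes" % user
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + ACCESS_TOKEN,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Remember that the suggestions returned depend on the organizer's own availability as well, per the explanation above.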

Is it possible to process JSON and access parameters using Service Bus?

I have seen that it is possible to add a JSON schema when you use the "HTTP Request" trigger, by adding the schema in the "Request Body JSON Schema" box.
I have also looked at adding a schema in the "Integration Account", but that section of the documentation says it is "to confirm that XML documents you receive are valid", which is not what I am looking for.
I am using an Azure Service Bus queue.
In this case I have PeekLock as a trigger. The idea is that the input in the Service Bus will be of a certain format, all in JSON. I don't "care" or need to know what happens before the Service Bus; all I know is that each message will have the same format. My logic app is supposed to receive the message from the Service Bus, mail it to whoever it is supposed to go to, and attach anything that needs to be added from blob storage. I want to be able to access certain "tags" or "parameters", since Service Bus only has a few tags of its own.
I used jsonschema.net to generate the schema. Here is an example of how a message will look:
{
  "items": [
    {
      "Key": "XXXXXX-XXXX-XXXX-XXXX-XXXXXXX",
      "type": "Email",
      "data": {
        "subject": "Who is the father?",
        "bodyBlobUID": "00000000-0000-0000-0000-000000000000",
        "to": [
          "darth.vader@hotmail.com"
        ],
        "cc": [
          "luke.skywalker@nomail.com"
        ],
        "bcc": [
          "leia.skywalker@nomail.com"
        ],
        "encoding": "System.Text.UTF8Encoding",
        "isBodyHtml": false,
        "organisationUID": "00000000-0000-0000-0000-000000000000",
        "BlobUIDs": [
          "luke.skywalker@nomail.com"
        ]
      }
    }
  ]
}
So my question has two parts:
1: Is it possible to add JSON schemas without using the HTTP Request trigger, for use with Service Bus?
2: If #1 is possible, or if it can be done in another way, how do I access the tags or parameters of the JSON format? At the moment I am trying to do transformations using schemas and maps with the Integration Account, but that seems unnecessary.
UPDATE: Parse JSON is now available in Logic Apps.
We will be releasing an action called Parse JSON next week, in which you can specify the Service Bus output as the payload, define the schema of the payload, and then use custom friendly tokens in subsequent steps.
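In code terms, what Parse JSON does for you is straightforward: once the Service Bus message body is parsed as JSON, the "tags" are just dictionary lookups. A minimal Python illustration using the message format from the question:

```python
import json

def extract_email_fields(message_body: str) -> dict:
    """Pull out the fields needed to build the mail from a queue message."""
    doc = json.loads(message_body)
    data = doc["items"][0]["data"]
    return {
        "subject": data["subject"],
        "to": data["to"],
        "cc": data["cc"],
        "bcc": data["bcc"],
        "isBodyHtml": data["isBodyHtml"],
        "bodyBlobUID": data["bodyBlobUID"],
    }
```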

Multiple answers for a node in IBM Watson Conversation service

How can I specify multiple answers for a specific dialog node? For example, when a user asks "What is your name", my VA should reply "My name is Alexander", or "You can call me Alexander", or "Friends call me Alex".
Maybe the Conversation service must return a code, and the application checks the code and chooses a random answer from an answer database.
For the dialog node that gives the response, select advanced mode and change it to this:
{
  "output": {
    "text": {
      "values": [
        "My name is Alexander.",
        "You can call me Alexander",
        "Friends call me Alex"
      ],
      "selection_policy": "random",
      "append": false
    }
  }
}
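The `"selection_policy": "random"` setting makes the service pick one entry from `values` on each turn. If you did implement the application-side fallback the question mentions instead, it would amount to something like:

```python
import random

ANSWERS = [
    "My name is Alexander.",
    "You can call me Alexander",
    "Friends call me Alex",
]

def pick_answer(answers=ANSWERS):
    """Mirror selection_policy: random by choosing one variant per turn."""
    return random.choice(answers)
```

Letting the service handle the variation is simpler, since the node definition keeps the answers and the policy in one place.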
