How to extract a JSON object in a Google Apps Script web app?

Can someone point out why this function would not work?
function doPost(e) {
  var contents = JSON.parse(e.postData.contents);
  var data = JSON.stringify(contents, null, 4);
  var x = data["inboundSMSMessageList"];
  var y = x.inboundSMSMessage[0].senderAddress;
  GmailApp.sendEmail("sample.email@gmail.com", "test5", y);
}
It takes the event object, e, parses its contents with JSON.parse(), and then stringifies them with JSON.stringify(). This is sample stringified data:
var data = {
    "inboundSMSMessageList": {
        "inboundSMSMessage": [
            {
                "dateTime": "Sun Jan 03 2021 01:25:03 GMT+0000 (UTC)",
                "destinationAddress": "tel:21585789",
                "messageId": "5ff11cef73cf74588ab2a735",
                "message": "Yes",
                "resourceURL": null,
                "senderAddress": "tel:+63917xxxxx"
            }
        ],
        "numberOfMessagesInThisBatch": 1,
        "resourceURL": null,
        "totalNumberOfPendingMessages": 0
    }
}
The script seems to fail on the second-to-last line (var y); but when I run it on the sample data, I'm able to access the key/value pair I'm targeting, which is the sender address (it sends "tel:+63917xxxxx" to my email). Does anybody have an idea why it fails when it's run as a web app?

In your script, var contents = JSON.parse(e.postData.contents); already gives you the parsed object. I think the reason for your error is that this object is then converted back to a string by var data = JSON.stringify(contents, null, 4), so the subsequent property accesses return undefined. So how about the following modification?
From:
var contents = JSON.parse(e.postData.contents);
var data = JSON.stringify(contents,null,4);
To:
var data = JSON.parse(e.postData.contents);
In this modification, y is tel:+63917xxxxx.
Note:
When you modify the script of a Web App, please redeploy the Web App as a new version. This ensures the latest script is reflected in the Web App. Please be careful about this.
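The difference can be seen in plain JavaScript; this is a minimal sketch using the sample data above:

```javascript
// Demonstrates why the original doPost fails: JSON.stringify turns the
// parsed object back into a string, and property access on a string
// returns undefined.
var parsed = JSON.parse(
  '{"inboundSMSMessageList":{"inboundSMSMessage":[{"senderAddress":"tel:+63917xxxxx"}]}}'
);
var stringified = JSON.stringify(parsed, null, 4); // a string, not an object

var x = stringified["inboundSMSMessageList"]; // undefined: strings have no such property
var y = parsed.inboundSMSMessageList.inboundSMSMessage[0].senderAddress;

console.log(x); // undefined
console.log(y); // tel:+63917xxxxx
```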


Google.Cloud.AppEngine.V1 client libraries and traffic splitting in .NET

I am trying to use the client libraries provided by Google to move traffic from one version of an app in AppEngine to another. However, the documentation for doing this only covers the REST API, not the client libraries.
Here is some example code:
var servicesClient = Google.Cloud.AppEngine.V1.ServicesClient.Create();
var updateServiceRequest = new UpdateServiceRequest();
updateServiceRequest.Name = "apps/myProject/services/myService";
var updateMask = new Google.Protobuf.WellKnownTypes.FieldMask();
updateServiceRequest.UpdateMask = updateMask;
// See below for what should go here...
var updateResponse = servicesClient.UpdateService(updateServiceRequest);
My question is what format do I use for the update mask?
According to the documentation I should put in:
split {"split": { "allocations": { "newVersion": 1 } } }
But when I try: updateMask.Paths.Add(@"split { ""split"": { ""allocations"": { ""myNewVersion"": 1 } } }");
... I get the exception:
"This operation is only supported on the following field(s): [labels, migration_config, network_settings, split, tag_to_target_map], but got field(s): [split { "split": { "allocations": { "myNewVersion": 1 } } }] from the update request.
Any ideas where I should put the details of the split in the field mask object? The property Paths just seems to be a collection of strings.
The examples for these libraries in Google's documentation are pretty poor :-(
I raised a support ticket with Google, and although the solution they suggested didn't work exactly as given (it tried to assign a string to UpdateMask, which needs a FieldMask object), I was able to use it to find the correct solution.
The code should be:
// appService is a Service object previously retrieved via the ListServices method
var updateServiceRequest = new UpdateServiceRequest();
updateServiceRequest.Name = appService.Name;
// The field mask names only the field being updated ("split");
// the new allocation values go on the Service object itself.
updateServiceRequest.UpdateMask = new Google.Protobuf.WellKnownTypes.FieldMask();
updateServiceRequest.UpdateMask.Paths.Add("split");
appService.Split.Allocations.Clear();
appService.Split.Allocations["newServiceVersion"] = 1;
updateServiceRequest.Service = appService;
var updateResponse = servicesClient.UpdateService(updateServiceRequest);

How can I combine JSON rows in a logic app grouped by another property

I have a logic app that is taking failed runs from an app writing to application insights, and I want to group all the errors by the operation name into a single message. Can someone explain how to do this?
my starting data looks like:
[{ "messageError": "Notification sent to AppName but not received for request: 20200213215520_hUu22w9RZlyc, user email@email.com Status: NotFound",
   "transactionKey": "20200213215520_hUu22w9RZlyc" },
 { "messageError": "App to App Import Request: 20200213215520_hUu22w9RZlyc from user email@email.com was unable to insert to following line(s) into App with error(s) :\r\n Line 123: Unable to unlock this record.",
   "transactionKey": "20200213215520_hUu22w9RZlyc" }]
What I am trying to get out of that would be a single row that concatenates both messageError values into one statement on a common transaction key. Something like this:
[{ "messageErrors": ["Notification sent to AppName but not received for request: 20200213215520_hUu22w9RZlyc, user email@email.com Status: NotFound",
                     "App to App Import Request: 20200213215520_hUu22w9RZlyc from user email@email.com was unable to insert to following line(s) into App with error(s) :\r\n Line 123: Unable to unlock this record."],
   "transactionKey": "20200213215520_hUu22w9RZlyc" }]
There might be as many as 20 rows in the dataset, and the concatenation needs to be smart enough to group only if there are multiple rows with the same transactionKey. Has anyone done this, and have a suggestion on how to group them?
For this requirement, I first thought we could use a liquid template to do the "group by" operation on your JSON data. But according to my tests, Azure Logic Apps doesn't seem to support "group by" in its liquid templates. So there are two solutions for us to choose from:
A. One solution is to do these operations in the logic app with a "For each" loop, "If" conditions, Compose actions and many other actions; we would also have to initialize many variables. I tried this solution first, but I gave up after creating so many actions in the logic app. It's too complicated.
B. The other solution is to call an Azure Function from the logic app and do the operations on the JSON data in the function code. It's not easy either, but I think it's better than the first solution, and I got it working. Please refer to the steps below:
1. Create an Azure Function app with an HTTP trigger in it.
2. In your HTTP trigger, please refer to my code below:
#r "Newtonsoft.Json"
using System.Net;
using Newtonsoft.Json.Linq;

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");
    string body = await req.Content.ReadAsStringAsync();
    JArray array = JArray.Parse(body);

    // Group the rows by transactionKey, collecting each messageError
    // into a "messageErrors" array per key.
    JObject grouped = new JObject();
    foreach (var obj in array)
    {
        JObject jsonObj = (JObject)obj;
        string transactionKey = (string)jsonObj["transactionKey"];
        string messageError = (string)jsonObj["messageError"];

        JToken existing;
        if (grouped.TryGetValue(transactionKey, out existing))
        {
            // Existing group: append this error to its array.
            ((JArray)existing["messageErrors"]).Add(messageError);
        }
        else
        {
            // New group: create an entry keyed by transactionKey.
            JObject newObj = new JObject();
            newObj.Add("transactionKey", transactionKey);
            newObj.Add("messageErrors", new JArray { messageError });
            grouped.Add(transactionKey, newObj);
        }
    }

    // Return just the grouped values as a JSON array.
    JArray resultArray = new JArray();
    foreach (var x in grouped)
    {
        resultArray.Add(x.Value);
    }
    return resultArray;
}
3. Test and save the function, then go to your logic app. In the logic app, I initialized a variable named "data" with JSON data like yours to simulate your scenario.
4. Then add a Function action in your logic app and choose the HTTP trigger function you just created.
5. After running the logic app, we get the result shown below:
[
    {
        "transactionKey": "20200213215520_hUu22w9RZlyc",
        "messageErrors": [
            "xxxxxxxxx",
            "yyyyyyyy"
        ]
    },
    {
        "transactionKey": "keykey",
        "messageErrors": [
            "testtest11",
            "testtest22",
            "testtest33"
        ]
    }
]
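For reference, the same group-by logic can be sketched in plain JavaScript; this is a hypothetical standalone version of what the Azure Function code does:

```javascript
// Group an array of { messageError, transactionKey } rows by transactionKey,
// collecting the errors into a messageErrors array per key.
function groupByTransactionKey(rows) {
  var grouped = {};
  rows.forEach(function (row) {
    if (!grouped[row.transactionKey]) {
      // New group: create an entry keyed by transactionKey.
      grouped[row.transactionKey] = {
        transactionKey: row.transactionKey,
        messageErrors: []
      };
    }
    grouped[row.transactionKey].messageErrors.push(row.messageError);
  });
  // Return just the grouped values as an array.
  return Object.keys(grouped).map(function (k) { return grouped[k]; });
}

var result = groupByTransactionKey([
  { messageError: "error A", transactionKey: "key1" },
  { messageError: "error B", transactionKey: "key1" },
  { messageError: "error C", transactionKey: "key2" }
]);
console.log(JSON.stringify(result, null, 2));
```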

AngularJS GET receives empty reply in Chrome but not in Fiddler

I'm implementing file download using AngularJS and WCF. My back-end is a .NET project hosted in IIS. The file is serialized as an array of bytes and then on the client side I utilize the File API to save the content.
To simplify the problem, back-end is like:
[WebInvoke(Method = "GET", UriTemplate = "FileService?path={path}")]
[OperationContract]
public byte[] DownloadFileBaseOnPath(string path)
{
using (var memoryStream = new MemoryStream())
{
var fileStream = File.OpenRead(path);
fileStream.CopyTo(memoryStream);
fileStream.Close();
WebOperationContext.Current.OutgoingResponse.Headers["Content-Disposition"] = "attachment; filename=\"Whatever\"";
WebOperationContext.Current.OutgoingResponse.ContentType = "application/octet-stream"; // treat all files as binary file
return memoryStream.ToArray();
}
}
And on the client side, it just sends a GET request to get those bytes, converts them into a blob and saves the file.
function sendGetReq(url, config) {
return $http.get(url, config).then(function(response) {
return response.data;
});
}
Then save the file:
function SaveFile(url) {
var downloadRequest = sendGetReq(url);
downloadRequest.then(function(data){
var aLink = document.createElement('a');
var byteArray = new Uint8Array(data);
var blob = new Blob([byteArray], { type: 'application/octet-stream'});
var downloadUrl = URL.createObjectURL(blob);
aLink.setAttribute('href', downloadUrl);
aLink.setAttribute('download', fileNameDoesNotMatter);
if (document.createEvent) {
var event = document.createEvent('MouseEvents');
event.initEvent('click', false, false);
aLink.dispatchEvent(event);
}
else {
aLink.click();
}
setTimeout(function () {
URL.revokeObjectURL(downloadUrl);
}, 1000); // cleanup
});
}
This approach works fine with small files; I could successfully download files up to 64MB. But when I try to download a file larger than 64MB, response.data is empty in Chrome. I also used Fiddler to capture the traffic; according to Fiddler, the back end successfully serialized the byte array and returned it. Please refer to the screenshot below.
In this example, I was trying to download a 70MB file:
And the response.data is empty:
Any idea why this is empty for files over 70MB? Though the response itself is more than 200MB, I do have enough memory for that.
Regarding the WCF back end, I know I should use streaming mode when it comes to large files. But the typical use of my application is downloading files smaller than 10MB, so I hope to figure this out first.
Thanks
Answering my own question.
Honestly, I don't know what went wrong; the issue persisted as long as I transferred the data as a byte array. I eventually gave up on that approach and returned a Stream instead. Then on the client side, I added the following configuration
{ responseType: 'blob' }
and saved the result as a blob.
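A minimal sketch of that client call (fetchAsBlob is a hypothetical helper name; $http, AngularJS's HTTP service, is passed in as a parameter here so the helper stays testable):

```javascript
// Sketch: request the file as a Blob directly via responseType, instead of
// deserializing a large byte array on the client.
function fetchAsBlob($http, url) {
  return $http.get(url, { responseType: 'blob' }).then(function (response) {
    return response.data; // already a Blob; no Uint8Array conversion needed
  });
}
```

The returned Blob can then be passed straight to URL.createObjectURL for the download link, as in the original SaveFile function.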

Sending an ID from AS3 to ASP &amp; getting the SQL result parameter at the SAME time?

I can send a parameter from AS3 to ASP, and I can get a value from the database. But unfortunately I can't combine both of them. Is it possible to send an ID parameter from AS3 to ASP, run a SQL query against the database there, and then return the query result back to AS3? Users log in with their ID number, and they can see their own data in the AS3 application. My sample code is given below.
I can send values with this code:
var getParams:URLRequest = new URLRequest("http://www***********/data.asp");
getParams.method = URLRequestMethod.POST;
var paras:URLVariables = new URLVariables();
paras.parameter1 = "" + userID;
getParams.data = paras;
var loadPars:URLLoader = new URLLoader(getParams);
loadPars.addEventListener(Event.COMPLETE, loadCompleted);
loadPars.dataFormat = URLLoaderDataFormat.VARIABLES;
loadPars.load(getParams);

function loadCompleted(event:Event):void
{
    trace("sent")
}
I can get values from the db with this code:
var urlLoader:URLLoader = new URLLoader();
urlLoader.load(new URLRequest("http://www***********/data.asp"));
urlLoader.dataFormat = URLLoaderDataFormat.VARIABLES;
urlLoader.addEventListener(Event.COMPLETE, onXMLLoad);

function onXMLLoad(event:Event):void
{
    var loader:URLLoader = URLLoader(event.target);
    var scrptVars:URLVariables = new URLVariables(loader.data + "");
    returnParameter = scrptVars.LINK0;
    high.HighScore.text = returnParameter + "";
}
What is the logic of combining them?
Sorry for my English level :)
To combine the second one into the first, you just need to read the URLLoader's data property (which is the response from the server) in the loadCompleted method, the same as you're doing in the onXMLLoad method:
function loadCompleted(event:Event):void
{
    trace("sent and received", loadPars.data);
    high.HighScore.text = loadPars.data.LINK0;
}
The COMPLETE event for a URLLoader fires once the request has received a response; if your server adds data to that response, it can be found in the data property of the URLLoader.
So, to summarize, sending and receiving can be done in one operation with one URLLoader. The data you send to the server goes in the URLRequest object passed to the URLLoader, and the data that comes back from that request is found in the data property of the URLLoader object (but only after the COMPLETE event fires).

How to get notification from google drive sheet on edit?

I want to send a notification to a third-party application when someone makes changes to a document stored in Google Drive.
Can someone please help me with how to bind a script to a document so that, when someone makes changes, the script runs and sends a notification to the third-party application?
I have tried the following code, but it is not working.
function onEdit(event){
  var sheet = event.source.getActiveSheet();
  var editedRow = sheet.getActiveRange().getRowIndex();
  var editedColumn = sheet.getActiveRange().getColumnIndex();
  var values = sheet.getSheetValues(editedRow, editedColumn, 1, 6);
  Logger.log(values);
  getSession();
}
function getSession(){
  var payload = {
    "username" : "username",
    "password" : "password"
  };
  var options = {
    "method" : "post",
    "payload" : payload,
    "followRedirects" : false
  };
  var login = UrlFetchApp.fetch("https://abcd.service-now.com/nav_to.do?uri=login.do", options);
  Logger.log(login);
  var sessionDetails = login.getAllHeaders()['Set-Cookie'];
  Logger.log(sessionDetails);
  sendHttpPost(sessionDetails);
}
function sendHttpPost(data) {
  var payload = { "category" : "network", "short_description" : "Test" };
  var headers = { "Cookie" : data };
  var url = 'https://abcd.service-now.com/api/now/table/incident';
  var options = { 'method': 'post', 'headers': headers, 'payload': payload, 'json': true };
  var response = UrlFetchApp.fetch(url, options);
  Logger.log(response.getContentText());
}
To send a notification to a third-party application when someone makes changes to a document stored in Google Drive:
Based on this Google Drive Help Forum, this feature hasn't been added yet. However, you may set notifications in a spreadsheet to find out when modifications are made. To set notifications in a spreadsheet:
Open the spreadsheet where you want to set notifications.
Click Tools > Notification rules.
In the window that appears, select when and how often you want to
receive notifications.
Click Save.
And, to bind a script to a document:
You may find the complete guide in Scripts Bound to Google Sheets, Docs, or Forms documentation. As mentioned,
To create a bound script, open a Google Sheets, Docs, or Forms file, then select Tools > Script editor. To reopen the script in the future, do the same thing. Because bound scripts do not appear in Google Drive, that menu is the only way to find or open the script.
